Armilla Review - AI Governance in Focus: Global Reviews, Openness, Ethical Concerns, and 2024 Priorities

In this week’s edition of the Armilla Review, generative AI continues to dominate industry advancements even as controversies around AI ethics erupt at leading technology companies and regulators continue to issue notices of caution and guidance. The pace of AI innovation, and what it means for managing AI risk, quickly emerged as the dominant theme of the year for governments, industry and the public alike, and it remains so as December draws to a close. While OpenAI's ChatGPT drove momentum for closed models through the first half of 2023, open-source models have since steadily gained steam behind new champions such as Hugging Face, Mistral AI and Meta AI. Perhaps foreshadowing what 2024 may hold, this week the National Telecommunications and Information Administration kicked off public engagement on its review of “AI openness”, focusing on safety challenges as well as opportunities for competition and innovation.
December 20, 2023

Assessing and Improving AI Governance: A Global Review and Pathways for Enhanced Trustworthiness

The report delves into the critical need to evaluate and mitigate the risks of deploying AI systems, stressing the importance of reliable AI governance tools. It surveys existing tools from around the world, spanning practical guidance, frameworks, technical codes, and more, and highlights their shortcomings, notably a lack of oversight and quality assessment that can create unintended problems and erode trust in AI systems. To strengthen these tools, the report recommends procedural controls, conflict prevention, and aligning tool functionality with policy objectives. It also calls for a robust evaluative environment, proposing the adoption of international standards to fortify AI governance tools and foster a more transparent and trustworthy AI ecosystem.

Source: World Privacy Forum

US Financial Regulators Flag Risks of Unmonitored AI in Annual Stability Report

The Financial Stability Oversight Council (FSOC) cautioned in its annual financial stability report about potential risks posed by the unmonitored adoption of artificial intelligence (AI) in the U.S. financial system. While acknowledging AI's capacity for innovation and efficiency, the council highlighted concerns about safety, soundness, and model risks, emphasizing the need for firms and regulators to deepen their expertise to monitor AI innovation effectively. The report outlined challenges in understanding complex AI tools, raised concerns about biased or inaccurate results, and pointed out privacy and cybersecurity risks associated with AI's reliance on vast external datasets and third-party vendors. Regulators, such as the Securities and Exchange Commission, are intensifying scrutiny of AI usage, aligning with the White House's recent executive order to mitigate AI risk.

Source: Reuters

NTIA Kicks Off Review of AI Openness

Assistant Secretary Alan Davidson, speaking at a Center for Democracy and Technology session, highlighted the Biden Administration's commitment to address the complexities and risks posed by AI technology. Acknowledging the potential benefits and pressing risks of AI, he underscored the urgency to navigate concerns about safety, security, privacy, discrimination, and bias. Davidson outlined the multifaceted efforts by the Commerce Department, particularly the National Telecommunications and Information Administration (NTIA), in advancing AI policy, including a review of AI openness, aiming to balance safety with broad access. Seeking public engagement and pragmatic input, he emphasized the need for policies that harness the advantages of open-source AI tools while minimizing potential harms.

Source: National Telecommunications and Information Administration

Navigating Canadian Privacy Principles in Gen AI

The Office of the Privacy Commissioner of Canada (OPC) introduced principles tailored to generative AI technologies, emphasizing legal authority, limited data collection, openness, accountability, user access, and data safeguards. Private AI's technology supports compliance by minimizing the scope of personal data collected, ensuring only essential data is used, aiding transparency efforts, supporting impact assessments, facilitating user access to their data, and implementing data safeguards and bias mitigation. Despite open challenges, such as retraining models and correcting information already embedded in them, Private AI offers practical tools for organizations navigating privacy compliance in Canadian generative AI development and use.

Source: Private AI

Stanford Researchers Evaluate Risks, Highlight Benefits of Open Foundation Models

Stanford researchers released an issue brief on the complexities of governing open foundation models in AI. Emphasizing the advantages of open foundation models for innovation and transparency, the brief highlights the need for nuanced regulation, cautioning against policies that would disproportionately burden open model developers without effectively mitigating risks such as disinformation or cyber threats. It stresses the importance of evidence-based policymaking and extensive consultation with developers to balance the benefits of open models against their potential harms.

Source: Stanford

ByteDance's Secret Use of OpenAI Tech Sparks Ethical Concerns and Suspension

ByteDance, the parent company of TikTok, reportedly used OpenAI's technology to develop its own large language model under an initiative known as Project Seed, in violation of OpenAI's terms of service. Internal communications instructed ByteDance employees to hide evidence of their API usage for model training. OpenAI confirmed it had suspended ByteDance's account over the breach and emphasized adherence to its usage policies, while ByteDance denied any wrongdoing, saying it licenses GPT APIs through Microsoft and uses its self-developed model, Doubao, exclusively in China. As ByteDance expands into the AI market, concerns persist over potential data handover to the Chinese government amid ongoing scrutiny by US regulators and lawmakers.

Source: Business Insider

CEO Ousted Amidst Sports Illustrated AI Article Controversy

Following the revelation that Sports Illustrated had published AI-generated articles attributed to fake authors, the Arena Group, Sports Illustrated's publisher, terminated CEO Ross Levinsohn. The dismissal, which the company attributed to a push to improve operational efficiency and revenue, came after an investigative report revealed the use of fabricated author personas. While the company denied that the articles themselves were AI-generated, it severed ties with the third-party advertising firm involved. The board named Manoj Bhargava interim CEO following the dismissal of several high-ranking executives in the wake of the scandal.

Source: The Guardian

Dropbox's AI Features Spark Trust Concerns: A Debate on Privacy and Data Usage

Dropbox's new AI features drew a backlash over privacy when it was suggested that user data might be shared with OpenAI for training models. Although Dropbox denied this, users remain worried about the privacy implications, particularly because the feature's default-on consent setting caused confusion. The skepticism mirrors broader distrust of corporate assurances, akin to the persistent suspicion that Facebook uses phone microphones for ad targeting. The episode underscores a growing crisis of trust in AI and the need for greater transparency about how tech companies use data, with clear explanations, especially about AI training data, required to restore trust and accountability.

Source: Simon Willison

Generative AI Dominates 2023: A Year of Advancements and Ethical Concerns

The year 2023 witnessed the ascendancy of generative AI as a dominant force, with breakthroughs in healthcare, education, creativity, and politics. Notable achievements included AI models surpassing human performance on complex tasks such as medical question answering and dance choreography. However, as these powerful models were woven into daily life, concerns about transparency, bias, and ethical ramifications grew, making the need for regulation and ethical guidelines increasingly urgent. The top stories of the year traced these advancements, challenges, and calls for transparency and accountability in AI.

Source: Stanford

2024 AI Priorities and Predictions

The AI landscape in 2024 is expected to be shaped by priorities and developments seen from several vantage points. Yoshua Bengio and other leading AI researchers emphasize AI safety, democratic governance, and aligning regulatory frameworks with rapidly advancing capabilities. Venture capitalists foresee large and small language models coexisting, a more grounded approach to AI hype, and Nvidia's continued dominance of the industry. Infrastructure builders anticipate escalating demand for GPU hardware and a shift toward specialized cloud infrastructure to support the surge in AI computing requirements. Responsible AI specialists predict a focus on human-centric AI, with growing emphasis on responsible development, standards, and tooling across the industry.

Source: Turing Post