Armilla Review - AI Governance in Flux: EU Regulations, Ethical Challenges, and Industry Turmoil

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
December 6, 2023
5 min read

Trilogue negotiations between the EU Council, Parliament and Commission have reached an impasse over the contents of the draft AI Act. In many ways, the debate reflects global tensions in discussions over AI regulation, particularly surrounding generative AI, with countries like France, Germany and Italy pushing back against comprehensive rules in favour of self-regulation through a voluntary code of conduct. In the wake of the OpenAI board's failure to self-regulate and Microsoft's support for regulation and "safety brakes", this issue of the Armilla Review focuses on Responsible AI and AI Assurance best practices that can help enterprises mitigate risk while global AI rules remain in flux.

EU AI Act at Crossroads: Foundation Models and Regulation Rift

The ongoing trilogue negotiations among the European Union Council, Parliament, and Commission over the EU AI Act have reached a critical impasse, centered primarily on the regulation of foundation models and general-purpose AI. Despite initial progress toward a tiered regulatory approach, influential EU members Germany, France, and Italy now favour self-regulation via a code of conduct, challenging the comprehensive rules proposed. This unexpected shift jeopardizes the AI Act's progression, raising concerns about delays or outright failure of the legislative process. The article traces the evolution of the discussions around foundation models, details the recent disagreements and their potential consequences, and underlines the urgency of resolving these differences amid upcoming deadlines and global advances in AI regulation.

Source: Tech Policy

Navigating the Assurance Chasm: Assessing AI Governance Challenges

In a paper published by the Stanford Center for International Security and Cooperation, Armilla's Head of AI Policy, Philip Dawson, writes about the need for robust assurance mechanisms to manage AI risks and trustworthiness. Assurance techniques such as AI testing, assessment and audit are at the core of the NIST AI Risk Management Framework, Biden's Executive Order on AI, the EU AI Act and voluntary self-regulatory approaches to governing AI. To make these techniques effective, governments and industry should prioritize investment in AI model testing and evaluation: practices that are critical to governing AI risk with sufficient precision, and ultimately to the success of both AI regulation and private sector-led approaches. A minimal sketch of what such model testing can look like appears below.

Source: Stanford
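
As a concrete illustration of the kind of model testing and evaluation the paper calls for, here is a minimal, hypothetical sketch in Python of a pre-deployment check that measures a classifier's overall accuracy and its worst accuracy gap across subgroups. All names, data and thresholds are illustrative assumptions, not taken from the paper or any specific framework.

# Illustrative pre-deployment model test: overall accuracy plus a
# disaggregated (per-subgroup) accuracy gap check. All names, data and
# thresholds are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class Example:
    features: dict  # model inputs
    label: int      # ground-truth outcome (0 or 1)
    group: str      # subgroup tag used for disaggregated testing


def evaluate(model, examples, min_accuracy=0.90, max_group_gap=0.05):
    """Check overall accuracy and the largest accuracy gap across subgroups."""
    correct, total = {}, {}
    for ex in examples:
        pred = model(ex.features)
        total[ex.group] = total.get(ex.group, 0) + 1
        if pred == ex.label:
            correct[ex.group] = correct.get(ex.group, 0) + 1

    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct.get(g, 0) / n for g, n in total.items()}
    gap = max(per_group.values()) - min(per_group.values())

    return {
        "overall_accuracy": overall,
        "per_group_accuracy": per_group,
        "max_group_gap": gap,
        "passed": overall >= min_accuracy and gap <= max_group_gap,
    }


if __name__ == "__main__":
    # Toy stand-ins: a trivial threshold "model" and a tiny labeled test set.
    model = lambda features: int(features["score"] > 0.5)
    test_set = [
        Example({"score": 0.9}, 1, "A"),
        Example({"score": 0.2}, 0, "A"),
        Example({"score": 0.7}, 1, "B"),
        Example({"score": 0.6}, 0, "B"),  # deliberate miss
    ]
    print(evaluate(model, test_set))

In practice, assurance testing covers far more than accuracy (robustness, bias, privacy, security), but the pattern is the same: define measurable criteria up front and gate deployment on them.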

The Ethical Terrain of AI Governance: A Blueprint for Business Leaders

As AI becomes increasingly central to value creation across industries, concerns regarding its ethical implications grow as well. The rise of generative AI amplifies both the potential and risks associated with AI technology. To harness its benefits while mitigating risks, organizations must prioritize robust AI governance. A comprehensive report by the Responsible Artificial Intelligence Institute and Boston Consulting Group, "Navigating Organizational AI Governance," offers crucial insights into this complex landscape, emphasizing the need for responsible AI (RAI) programs led by top executives. The report outlines various governance mechanisms, from principles and frameworks to laws, policies, and additional best practices, providing a roadmap for businesses to develop and implement effective AI governance strategies.

Source: BCG

Microsoft to Take Observer Role on OpenAI's Board as Altman Returns as CEO

Microsoft is set to assume a non-voting observer position on OpenAI's board, as confirmed by CEO Sam Altman following his reinstatement. Altman detailed the governance changes, including Microsoft's observer status; Satya Nadella had previously highlighted the need for governance changes at the AI firm. Altman's return brought restructuring, including the reinstatement of Greg Brockman as president and the exit of chief scientist Ilya Sutskever from the board, marking a pivotal shift in OpenAI's leadership.

Source: The Guardian

Microsoft Urges 'Safety Brakes' for AI: Addressing Risks Without Immediate Existential Threat

Microsoft's President, Brad Smith, acknowledges that while artificial intelligence does not present an immediate existential threat, urgent measures are required to tackle its potential risks. Smith advocates for "safety brakes", akin to emergency mechanisms in vital systems, to maintain human control over high-risk AI systems that manage critical infrastructure. Emphasizing proactive governance, Smith encourages global collaboration, suggesting international codes and third-party audits to ensure AI safety standards and cross-border compliance in a rapidly evolving AI landscape.

Source: The Toronto Star

Rising Demand for Chief AI Officers in D.C. Amid Biden's Executive Order

Following President Biden's AI executive order, federal agencies are scrambling to fulfill the mandate of appointing more than 400 Chief AI Officers (CAIOs) by year-end, aiming to embed AI expertise across government. The CAIO role spans coordination, innovation, risk management and the development of agency AI strategies, touching multiple agency functions (with some exceptions) to ensure comprehensive AI governance. As agencies navigate the hiring process to meet the presidential directive, observers from the private sector emphasize the need for broad mandates, diverse skill sets and robust support structures to enable effective AI leadership within government.

Source: AXIOS

Stability AI Faces Turmoil Amid CEO Controversy and Possible Sale

Amidst controversy surrounding CEO Emad Mostaque and criticism over the use of copyrighted content, UK-based AI developer Stability AI, known for its Stable Diffusion model, may be exploring a sale. Mostaque faces pressure to step down, with company VPs departing over disputes regarding the copyrighted material used in its AI models, prompting interest from potential buyers such as Coatue Management, Jasper and Cohere. Although the company has participated in AI safety initiatives and signed pledges, the public resignations of senior executives underscore industry-wide concerns about the ethical use of copyrighted material in AI development.

Source: Decrypt