Armilla Review - Global Powers Unite, AI Regulatory Hurdles, New Models for Consumers and Controversies in AI Content Generation

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
November 7, 2023
5 min read

With the UK’s AI Safety Summit, the G7’s commitment to create an international code of conduct for AI, and President Biden’s Executive Order on AI, policymakers are more aligned than ever: AI safety should be taken very seriously, and the time for industry to start assessing and mitigating risk is now.

In particular, Biden's Executive Order (EO) signals an important commitment to support enterprises as they build AI governance and risk management from the ground up, through additional resources for standards development and frameworks for testing and assessing AI systems. The EO is also a further endorsement of NIST's AI Risk Management Framework as the leading, most authoritative framework for enterprise AI governance in the US, including for generative AI. And while the EO focuses on the safety impacts of advanced generative AI systems, here at Armilla we've already seen clients respond more broadly, exploring risk assessments and risk-transfer solutions for a wide range of AI systems that may pose material risks to their business.

Top Story

Global Powers Unite in Historic Declaration on AI's 'Catastrophic' Risks

In a groundbreaking moment, the UK, US, EU, Australia, and China jointly endorsed the Bletchley Declaration at the UK's AI Safety Summit, recognizing the potentially catastrophic risks posed by advanced AI systems surpassing human intelligence. With 28 governments supporting the declaration, this landmark collaboration aims to address the dangers of so-called "frontier AI" models. The involvement of leading powers like the US and UK, alongside China, highlights a shared interest in cooperating to mitigate significant AI risks, even amidst competition.

Source: The Guardian

Top Articles

The Global Maze: AI Regulation Progress Amidst Divergent Paths

The article examines recent advances in AI policy, from the UK's AI Safety Summit to the G7 declaration and the US Executive Order. Despite a shared recognition worldwide of the need to regulate AI, national approaches continue to diverge. Experts point to industry self-regulation as a stopgap until a comprehensive global AI governance agreement materializes, highlighting the significance of the US's involvement while urging an inclusive, collaborative approach to shaping the future of AI regulation.

Source: BBC

Europe's Delicate Balancing Act: Striving for AI Regulation

The EU's negotiations over an AI regulatory framework face significant hurdles, with MEPs and Member States at odds over prohibitions on specific AI practices, fundamental rights impact assessments, and national security exemptions. Debates over generative AI and foundation models have deepened the divisions. The upcoming December 6 trilogue represents a pivotal moment for the EU's ambition to lead on AI regulation.

Source: TechCrunch

President Biden's Landmark Executive Order Shapes AI Future

President Biden's sweeping Executive Order represents a ground-breaking step forward for AI governance, setting standards for AI safety, security, and trustworthiness while prioritizing privacy protection, equity, and civil rights. It introduces extensive measures, including mandatory safety testing, red-teaming, and critical information sharing for the most advanced AI systems; new rules for the procurement and use of AI by federal agencies; and commitments to develop AI standards through NIST and to support research and innovation. A summary from the White House Fact Sheet is attached.

Source: The White House

G7 Nations Unite to Establish AI Code of Conduct

The G7 countries and the European Union jointly released an 11-point code of conduct urging AI companies to voluntarily prioritize safety, security, and trustworthiness in their advanced AI systems. Initiated by Japan through the Hiroshima AI process, the code emphasizes risk assessment and mitigation, especially regarding potential misuse of AI products once they reach consumers. The code highlights testing throughout the AI lifecycle, combating criminal use, promoting transparency via incident reporting, preventing disinformation and discrimination, watermarking AI-generated content, and bolstering cybersecurity measures.

Source: Sifted

EU's AI Act Deadlocked Over Foundation Models

The European Union's crucial AI Act, which aims to regulate artificial intelligence through a risk-based approach, faces a significant deadlock as negotiations stall over the regulation of foundation models. Disagreements led by influential EU members France, Germany, and Italy challenge the proposed tiered approach, disrupting the initial consensus. The impasse jeopardizes the overall success of the AI Act and raises questions about the EU's potential role in shaping global AI standards. With a deadline for a political agreement looming, the deadlock heightens uncertainty, leaving the fate of the AI Act hanging in the balance as negotiations intensify at the highest political levels.

Source: EURACTIV

OpenAI's Ambitious Push into Personalized AI Apps: GPTs, APIs, and a Vision of an AI Ecosystem

At its debut developer conference, OpenAI unveiled GPTs, early iterations of AI assistants aimed at handling practical tasks like flight bookings and educational assistance, as part of a push to strengthen its consumer business. The launch included the GPT Store, which lets users create, share, and monetize their GPTs, following earlier attempts to establish a ChatGPT plugin ecosystem. The conference also introduced the more cost-effective GPT-4 Turbo model, assistant APIs integrating vision and image capabilities, and an experimental developer program for fine-tuning GPT-4 models. CEO Sam Altman emphasized deeper integrations with OpenAI technology, backed by Microsoft CEO Satya Nadella's support for advancing foundational AI models. Addressing enterprise concerns, OpenAI also launched a Custom Models program, providing tailored GPT-4 models, and committed to covering customers' legal costs related to copyright claims, underscoring its ambitions in shaping the personalized AI application landscape.

Source: Reuters

Meta Takes On the AI Wave: New Policies for Transparency in Political Ads

Meta unveiled a new policy requiring political advertisers worldwide to disclose when ads depicting individuals or events have been created or altered with third-party AI software. Starting next year, Meta will also bar political and social-issue advertisers from using its own AI-assisted ad tools, a restriction extended to sectors such as housing, employment, credit, health, pharmaceuticals, and financial services. The shift reflects Meta's intention to address manipulated or misleading content while navigating the tension between political-ad disinformation and free speech that CEO Mark Zuckerberg has described, and the company will enforce penalties for non-compliance with the disclosure rules.

Source: The New York Times

Microsoft's AI-Powered News Curation Faces Criticism Over False Stories

Microsoft's reliance on AI and automation for news curation has come under fire after false and controversial stories were showcased on its homepage, reaching a global audience. The shift from human editors to algorithms has resulted in the promotion of inaccurate articles, including pieces falsely depicting President Joe Biden and propagating COVID-19 conspiracy theories, prompting concerns that the company is prioritizing profit over ethical considerations and amplifying unreliable sources. The placement of AI-generated content alongside credible journalism has drawn ire from content partners such as The Guardian, which has demanded accountability for the reputational damage caused by Microsoft's approach to AI-driven news aggregation.

Source: CNN

KPMG Raises Alarms on AI-Generated Misinformation Amid Senate Inquiry

Global consultancy giant KPMG filed a formal complaint after an AI-generated submission to an Australian Senate inquiry falsely implicated the company in fabricated scandals. The accusations, produced with Google's Bard AI tool, raised concerns about the reliability of AI-generated information in consequential decision-making. KPMG's CEO emphasized the potential harm to employees' reputations and the necessity of rigorous fact-checking and human oversight in critical forums, sparking discussion about responsible AI use and the understanding and supervision needed when employing emerging AI tools in such settings.

Source: The Guardian