Armilla Review - Navigating Advanced AI Regulation: EU's Tiered Approach, Red-Teaming Challenges, and Top Strategic Trends for 2024

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
October 31, 2023
5 min read

Top Story

EU Countries Pave the Way for AI Regulation with Tiered Approach on Foundation Models

The European Union's efforts to regulate powerful AI models are gaining momentum as EU countries move closer to a comprehensive Artificial Intelligence (AI) rulebook. In a recent development, the Spanish presidency of the EU Council of Ministers shared a document outlining possible concessions and compromises for the AI Act, a flagship legislative proposal aimed at regulating AI's capacity to cause harm. One of the key areas of discussion is how to regulate foundation models, which are large, general-purpose AI systems that have garnered significant public attention in recent times.

The EU is considering a tiered approach to regulating foundation models, with the European Parliament leading the way in proposing this framework. These models, capable of performing a wide range of tasks, may be categorized into different tiers based on their capabilities and potential risks. The proposed approach includes transparency obligations for all foundation models and additional obligations for the most advanced, designated "very capable foundation models."

In addition to regulating foundation models, the EU is addressing issues related to governance, biometric identification, banned practices, law enforcement, national security, and high-risk use cases in the AI Act. For instance, the introduction of an AI Office is being considered to centralize expertise and oversee compliance controls. The EU is also debating the use of real-time biometric identification systems by law enforcement and the prohibition of practices like emotion recognition and predictive policing.

As EU countries engage in negotiations on the AI Act, it's clear that the European Union is taking a comprehensive, tiered approach to regulating AI, with a focus on transparency, accountability, and addressing potential risks. These discussions are critical in shaping the future of AI regulation in Europe and setting a precedent for AI governance worldwide.

Source: EURACTIV

Also read: Frontier AI: capabilities and risks – a paper written to inform discussions at the AI Safety Summit 2023.

Top Articles

AI Regulation and the Nuclear Analogy: Where Do We Draw the Line?

The comparison between AI regulation and nuclear energy has been a recurring theme in recent discussions among prominent AI researchers and industry figures. The argument hinges on the notion that AI poses existential risks, similar to those posed by nuclear technology, with the hypothetical emergence of Artificial General Intelligence (AGI) being a primary concern. Some have proposed a licensing scheme for AI models, similar to the way nuclear facilities are licensed by regulatory bodies, to ensure the safe development and deployment of AI systems.

However, the analogy between AI and nuclear safety begins to unravel when we examine the specifics of AI regulation. While the risks associated with AI are certainly significant, they differ in fundamental ways from the well-established and rigorously regulated nuclear industry. Nuclear safety engineering is built on the principle that small failures can lead to catastrophic events, and all components are rigorously regulated to prevent such failures. In contrast, AI systems are a complex amalgamation of software, hardware, and data, making it challenging to regulate them in the same manner as nuclear components.

One of the key challenges in regulating AI is the balance between addressing current AI-related harms, such as algorithmic discrimination and cybersecurity vulnerabilities, and preparing for potential future risks, like AGI. While some argue that AI labs should adhere to nuclear-level rigor, others resist such regulations, fearing overreach or the stifling of technological advancements. The ongoing debate also diverts attention from the immediate need to address AI-related harms. Additionally, the comparison to nuclear risks lacks a scientific basis and evidence for when or if AGI will emerge, making it a hypothetical risk that may not warrant the level of regulation suggested.

In conclusion, the nuclear analogy in AI regulation raises important questions about the balance between addressing current AI-related harms and preparing for hypothetical existential risks. The complexities of regulating AI, with its diverse components and ever-evolving nature, challenge the direct application of nuclear safety principles. While the AI community continues to invoke nuclear comparisons, finding a pragmatic and effective regulatory framework remains a formidable task in ensuring the responsible development and deployment of AI technologies.

Source: TIME

Navigating the Uncharted Territory of Advanced AI: Balancing Progress and Responsibility

In a consensus paper, experts highlight the pressing need to address the risks associated with upcoming advanced AI systems. The rapid and astonishing progress in the field of artificial intelligence has left many in awe, with AI systems now capable of performing complex tasks like writing software and generating photorealistic scenes. The rate of advancement shows no signs of slowing down, and there is a race among companies to develop generalist AI systems that can match or exceed human capabilities in various cognitive domains.

The paper raises the crucial question of what happens when these advanced AI systems outperform humans across multiple domains. While the potential benefits are enormous, there are significant risks that must be addressed. These risks include the amplification of social injustices, erosion of social stability, and the potential for large-scale criminal or terrorist activities, particularly when such systems are controlled by a few powerful entities. The authors also warn of the dangers of creating highly advanced autonomous AI systems that could pursue undesirable goals, whether those goals are embedded maliciously or arise by accident.

The urgency to address these concerns is emphasized, with a call for a reorientation in AI research and governance. The paper highlights the need for research breakthroughs in AI safety and ethics and urges major tech companies and public funders to allocate a significant portion of their AI research budget to these critical areas. In terms of governance, the authors stress the importance of national and international institutions to enforce standards, prevent recklessness, and ensure that AI development remains within ethical and safe boundaries.

The paper also proposes a combination of governance mechanisms matched to the magnitude of risks, creating national and international safety standards that depend on model capabilities. It calls for holding developers and owners of advanced AI systems accountable for foreseeable harms. Additionally, AI companies are encouraged to make specific safety commitments that are independently scrutinized.

As AI continues to shape the future, the challenges and opportunities are immense. The paper serves as a vital reminder that while AI capabilities advance rapidly, progress in safety and governance must keep pace to ensure that AI remains a force for good and does not inadvertently lead to catastrophe.

Source: arXiv

American Federation of Teachers Partners with GPTZero: Embracing Responsible AI Use in Education

The American Federation of Teachers (AFT), the second-largest teachers' union in the U.S., has joined forces with GPTZero, an AI identification platform, to address the use of artificial intelligence in student homework. This partnership aims to help educators manage and monitor students' reliance on AI-driven tools while ensuring privacy and security.

AFT President Randi Weingarten recognizes the potential of AI in the classroom but emphasizes the need for safeguards and responsible use. She believes that AI can be a valuable supplement for educators if properly regulated, and she stresses the importance of not repeating the kinds of disruption seen during the industrial revolution.

GPTZero, co-founded by recent Princeton graduate Edward Tian, offers AI tools for both educators and students. The collaboration between AFT and GPTZero focuses on harnessing AI's potential and developing pedagogical solutions that promote collaboration between teachers and students in adopting AI.

GPTZero's tools include options for students to certify their work as human-written and to disclose their use of AI, promoting responsible AI use in education. Edward Tian's goal is to demonstrate that AI in education need not be adversarial and that it can empower students while mitigating potential harm.

Weingarten acknowledges the benefits of AI for teachers, such as reducing administrative burdens and assisting in lesson planning. However, she stresses the need for ethical regulations to ensure that AI's integration into education is done responsibly, without causing harm. This partnership between AFT and GPTZero exemplifies a forward-thinking approach to incorporating AI into education while maintaining a focus on ethical and responsible practices.

Source: CBS

Revealing Systemic Failures in AI: A Closer Look at Ecosystem-Level Analysis

In a groundbreaking study, an interdisciplinary team from the Center for Research on Foundation Models (CRFM) sheds light on the systemic failures of AI in real-world applications. The research, led by Stanford CS PhD student Rishi Bommasani, analyzes the collective impact of multiple machine learning models in various contexts, including computer vision, natural language processing, and speech recognition. The study, titled "Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes," uncovers patterns of systemic failure that are often invisible when considering individual models in isolation.

The research explores the concept of "outcome homogenization": widespread reliance on the same datasets and foundation models means that the same individuals or groups can experience negative outcomes repeatedly, across many deployed systems. The study provides generalizable insights into how pervasive homogeneous outcomes are, how model changes impact the broader ecosystem, and how these outcomes vary across different racial groups.

One of the most significant findings is that commercial machine learning systems are prone to systemic failure, where some individuals consistently experience negative outcomes across all available models. In practice, someone who is misclassified by one deployed model is often misclassified by every alternative as well, effectively excluding them from the technology altogether. The study emphasizes the need for greater transparency from AI providers and suggests that policy interventions may be necessary to prevent these negative effects and improve the societal impact of machine learning.
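To make the idea concrete, here is a minimal, illustrative sketch of the kind of ecosystem-level check the study describes: comparing how often users fail on every deployed model against what independent failures would predict. The matrix shape, failure rates, and variable names are assumptions for illustration, not the paper's actual data or code.

```python
import numpy as np

# Toy data: failures[i, j] = True means model j produced a negative
# outcome (e.g., a misclassification) for user i.
rng = np.random.default_rng(0)
failures = rng.random((1000, 3)) < 0.2  # ~20% failure rate per model

# Systemic failure: a user is failed by every model in the ecosystem.
systemic_rate = failures.all(axis=1).mean()

# Baseline if models failed independently: product of per-model rates.
independent_baseline = failures.mean(axis=0).prod()

print(f"observed systemic failure rate: {systemic_rate:.3%}")
print(f"independent-failure baseline:   {independent_baseline:.3%}")
# An observed rate well above the baseline signals homogeneous outcomes:
# the same users tend to be failed by all available models.
```

On real deployed systems, the study reports that the observed rate of all-model failure substantially exceeds the independence baseline, which is what motivates its calls for transparency and policy intervention.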

This research not only raises awareness about the need for accountability and transparency in AI but also underscores the importance of ecosystem-level analysis when evaluating AI technologies in real-world applications. It offers valuable insights for future AI research and policy development.

Source: Stanford

Challenges in Red-Teaming AI: Why It's Not Enough for Comprehensive Regulation

Red-teaming, a practice that involves intentionally attempting to exploit AI models to identify and fix vulnerabilities, has gained popularity as a tool for enhancing AI safety. Major players in the AI industry, as well as the Biden White House, have endorsed the approach. However, a report from the nonprofit Data & Society and the AI Risk and Vulnerability Alliance argues that while red-teaming serves a purpose in AI safety, it falls short of addressing the larger societal challenges posed by AI.

The report highlights that red-teaming is effective in identifying obvious vulnerabilities but fails to address the broader, complex issues related to AI's impact on society. It does not bridge the structural gap in regulating AI for the public interest or ensuring democratic governance of AI technologies. The authors recognize the importance of red-teaming in the AI safety ecosystem but stress that it should not be seen as a complete solution.

The paper underlines the ambiguity surrounding the definition and scope of red-teaming for AI. It suggests that red-teaming should focus on troubleshooting specific issues when conducted transparently. However, it may not be suitable for testing contested ideas like fairness or political bias in AI systems, which require more profound human deliberation and holistic assessments. The report advocates for a more comprehensive approach to AI regulation and accountability, taking into account the far-reaching societal implications of AI.

The authors of the report urge the government and the AI industry to adopt additional forms of accountability alongside red-teaming. While red-teaming can play a role in identifying technical vulnerabilities, a broader accountability ecosystem is needed to address the multifaceted challenges of AI's societal impact; red-teaming alone cannot ensure responsible AI development and governance.

Source: POLITICO

Stanford Report Challenges Transparency of Major AI Models: An Urgent Call for Disclosure

Stanford University researchers have recently released a report, titled "The Foundation Model Transparency Index," shedding light on the lack of transparency in major AI models. The report assesses models from prominent organizations like OpenAI, Google, Meta, Anthropic, and others, aiming to emphasize the importance of transparency in AI model development. As AI models, particularly foundation models, become increasingly integrated into various applications, understanding their limitations and potential biases is essential.

The Foundation Model Transparency Index evaluated these models across 100 different indicators, including aspects such as training data, labor practices, and compute resources. Each indicator focused on disclosure, underlining the need for companies to be more transparent about their AI model development process. Notably, all the models in the report received underwhelming scores, with Meta's Llama 2 having the highest score at 54 out of 100, and Amazon's Titan model ranking the lowest at 12 out of 100. OpenAI's GPT-4 scored 48 out of 100.

Dr. Percy Liang, an associate professor at Stanford and the director of Stanford's Center for Research on Foundation Models, expressed concerns over the decline in transparency within the AI industry, coinciding with a significant increase in capabilities. This shift towards closed models like GPT-3 and GPT-4 has raised issues related to accountability, values, and the attribution of source material. As companies grapple with these concerns, the Transparency Index intends to not only encourage improved transparency but also provide guidance to governments seeking to regulate the rapidly evolving field of AI.

In response to the Stanford Transparency Index, some AI experts have raised concerns about its methodology, emphasizing that transparency should serve as a tool for accountability rather than an end in itself. While the report highlights the urgency of transparency, the AI community must engage in constructive discussions and actions to address these critical issues effectively.

Source: Ars Technica

Universal Music Sues AI Startup Anthropic for Copyright Infringement Over Song Lyrics Scraping

Universal Music, along with two other music companies, has filed a copyright infringement lawsuit against Anthropic, an artificial intelligence startup. The lawsuit alleges that Anthropic scrapes artists' song lyrics without proper authorization and uses them to generate lyrics through its AI model, Claude. This legal action comes in response to concerns about chatbots and AI systems reproducing copyrighted lyrics without permission.

The music companies argue that copyrighted material, such as song lyrics, should not be freely taken from the internet without licensing or permission. They claim that Anthropic has not made any efforts to obtain the necessary licenses for their copyrighted work. The lawsuit underscores the growing challenges faced by the music industry as AI technology can generate "deepfake" songs that closely mimic established musicians' voices, lyrics, and styles.

Anthropic, a prominent AI startup founded by former researchers from OpenAI, has attracted investments from tech giants like Amazon and Google. The lawsuit also alleges that, beyond reproducing existing lyrics, Claude generates unlicensed lyrics in response to prompts asking it to write in the style of popular musicians. This lawsuit reflects the ongoing battle over copyright issues in the age of AI and technology, similar to the music industry's past struggles with services like Napster. Universal has recently sought ethical partnerships for AI-driven music projects, emphasizing proper copyright practices and permissions.

Source: Ars Technica

Gartner Reveals Top 10 Strategic Technology Trends for 2024: A Roadmap for the Future

In a rapidly evolving technological landscape, staying ahead of the curve is imperative for organizations. Gartner, Inc. has unveiled its list of the top 10 strategic technology trends for 2024, offering a roadmap for businesses to navigate the ever-changing digital terrain. These trends were presented at the Gartner IT Symposium/Xpo, providing valuable insights for IT leaders and executives.

The first trend, "Democratized Generative AI," highlights the increasing accessibility of generative AI models, allowing workers worldwide to leverage these tools. This democratization is set to reshape how organizations access and use vast sources of information, driving knowledge and skills democratization in the enterprise.

With AI becoming more prevalent, "AI Trust, Risk, and Security Management" takes center stage. Guarding against AI's unintended negative consequences is critical, and this trend emphasizes the need for ModelOps, proactive data protection, and risk controls to ensure AI's responsible and secure use.

"AI-Augmented Development" showcases the integration of AI technologies to aid software engineers, improving productivity and allowing them to focus on more strategic tasks. The demand for software in business operations is growing, and AI-infused development tools play a pivotal role in addressing this need.

"Intelligent Applications" bring learned adaptation and autonomy to the forefront, providing dynamic experiences that adapt to users' needs. CEOs have identified talent shortage as a significant risk, making intelligent applications crucial for improving business operations.

The "Augmented-Connected Workforce" strategy aims to optimize human worker value through intelligent applications and workforce analytics, providing guidance and support for employees. This approach will be instrumental in improving workforce experience, well-being, and skill development.

For enhanced security, "Continuous Threat Exposure Management" offers a systematic approach to evaluating digital and physical asset vulnerabilities. By aligning assessments with threat vectors and business projects, organizations can reduce breaches by two-thirds by 2026.

The rise of "Machine Customers" introduces non-human economic actors that can autonomously negotiate and purchase goods and services. This trend is poised to generate trillions in revenue by 2030 and will reshape the commerce landscape.

"Sustainable Technology" emphasizes using digital solutions to promote environmental, social, and governance (ESG) outcomes. Sustainability is becoming increasingly crucial, with CIOs' compensation potentially linked to their impact on sustainable technology by 2027.

"Platform Engineering" focuses on building and operating self-service internal development platforms to optimize productivity and accelerate business value delivery.

Finally, "Industry Cloud Platforms" predict that more than 70% of enterprises will utilize these tailored cloud proposals specific to their industry. By 2027, ICPs will help organizations address industry-relevant business outcomes.

As the technology landscape continues to evolve, these strategic trends provide guidance for organizations to thrive and adapt in 2024 and beyond. Gartner's insights serve as a valuable resource for businesses and IT leaders seeking to embrace these technological shifts.

Source: Gartner