Armilla Review - AI Regulations Worldwide: Insights from G20, China, and Public Opinion

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
September 26, 2023
5 min read

G20 Summit 2023: New Delhi Declaration Advocates 'Pro-Innovation' AI Regulation with a Focus on Ethics and Safety

The 2023 G20 Summit in New Delhi resulted in the adoption of the New Delhi Declaration, which emphasizes the pivotal role of artificial intelligence (AI) in driving global digital economic growth. The declaration acknowledges AI's potential to address various challenges but underscores the importance of safeguarding people's rights and safety in the process.

The New Delhi Declaration signals the G20 countries' commitment to taking a leadership role in addressing both the opportunities and risks associated with AI. Notably, it advocates for a "pro-innovation" regulatory approach aimed at harnessing the benefits of AI while also acknowledging and mitigating its potential risks.

Building on the G20 AI Principles established in 2019, which emphasize responsible stewardship and trustworthy AI, the declaration reaffirms the importance of those principles. It also calls for G20 countries to share information on using AI to find solutions in the digital economy.

Furthermore, the G20 nations have committed to promoting the responsible use of AI to achieve the Sustainable Development Goals (SDGs). The declaration underscores that the deployment of AI must address critical aspects such as human rights protection, transparency, fairness, accountability, regulation, safety, human oversight, ethics, biases, privacy, and data protection.

To ensure effective governance of AI at the international level, the G20 countries have pledged to foster cooperation and discussion on global AI governance, recognizing the need for a collaborative approach to the challenges and opportunities the technology presents.

Source: CNBC

Initial Policy Considerations for Generative Artificial Intelligence

Generative artificial intelligence (AI) has emerged as a powerful tool with far-reaching implications for education, entertainment, healthcare, and scientific research. While it promises transformative benefits, it also presents policymakers with critical challenges, including potential disruptions in labor markets, copyright ambiguities, and the risk of perpetuating societal biases and creating disinformation. Mishandling generative AI could mean the spread of misinformation, the reinforcement of discrimination, distortion of public discourse and markets, and incitement of violence. Governments recognize the transformative nature of generative AI and are working to address these issues. The OECD's paper offers policy considerations to assist decision-makers in navigating the complex landscape of generative AI regulation.

Source: Organisation for Economic Co-operation and Development (OECD)

Navigating the AI Revolution: Ensuring Competition, Consumer Protection, and Innovation with Foundation Models

Foundation models (FMs), large machine learning models trained on extensive data, have developed rapidly since the release of OpenAI's first public FM in 2018; roughly 160 FMs have emerged since then. They hold the potential to transform industries and daily life, with implications for competition and consumers alike. Responsibly developed and used, FMs can deliver enhanced products and services, improved access to information, assistance with diverse tasks, scientific breakthroughs, and economic benefits.

However, the absence of robust competition poses immediate and long-term risks, such as the dissemination of false information, AI-enabled fraud, and market power consolidation, potentially resulting in reduced product quality and higher prices. Effective competition should be considered alongside safety, data protection, and intellectual property rights to ensure favourable market outcomes.

To harness AI-driven innovation while safeguarding consumers and competition, adherence to existing consumer and competition laws is imperative. The proposal outlines guiding principles, including accessibility, diversity, choice, flexibility, fair dealing, and transparency, to govern the development and deployment of FMs.

This review reflects collaborative efforts involving a wide range of stakeholders. Future engagement programs will seek input from consumer groups, FM developers, deployers, innovators, academics, government bodies, and regulators.

Source: UK Government

China's AI Regulations: Implications for U.S. Tech Firms in the Race for AI Supremacy

China's recent implementation of stringent artificial intelligence (AI) regulations has sparked the emergence of government-approved AI chatbots, reshaping the technological landscape. These regulations, introduced by the Cyberspace Administration of China (CAC), carry implications for China's AI industry and its global rival, the United States. While the rules were initially strict, Chinese regulators have adopted a more flexible approach to enforcement, aiming to balance control and AI development.

The CAC's Generative AI Measures, among the strictest globally, require AI services to avoid generating content that incites subversion of national sovereignty, advocates terrorism or extremism, promotes ethnic hatred or violence, contains obscenity, or disseminates false and harmful information. However, subsequent regulatory adjustments have relaxed some of these restrictions.

Flexible enforcement dynamics in China often depend on authorities' discretion, company-government connections, and public pressure, leading to arbitrariness and inconsistency. In the U.S., there is an ongoing debate on the impact of stringent regulation on AI competitiveness. Some argue that strict regulations may hinder competitiveness vis-à-vis China, while others contend that Chinese AI systems are already lagging due to content filters and fine-tuning, making regulatory leniency less critical.

Balancing regulation and fostering innovation is a key challenge for both China and the U.S. as they vie for AI supremacy. China seeks to strike a balance between control and innovation, while the U.S. must carefully consider its regulatory approach to maintain its competitive edge in the global AI market. Achieving this equilibrium will be pivotal for success in the evolving AI landscape for both nations.

Source: TIME

Aligning AI Policy with Public Opinion: How Voter Sentiment Shapes AI Regulation

A recent report card from the AI Policy Institute evaluates legislative AI proposals based on their alignment with U.S. public sentiment, shedding light on the evolving landscape of AI policy. The report relies on a survey of 1,118 voters conducted by YouGov in early September as its yardstick.

This assessment matters because politicians and potential presidential candidates are increasingly advocating for AI regulation, and understanding how those policies resonate with voters is crucial.

The AI Policy Institute advocates for political solutions to potential catastrophic risks stemming from AI and evaluates proposed AI regulation based on attributes such as adaptability to advancements, discouragement of high-risk AI deployment, and reduction of dangerous AI proliferation.

Notably, the report highlights that current proposed policies tend to focus on immediate harms rather than long-term existential threats of AI. However, polling data indicates that voters prefer future-proof AI regulation that takes into account unknown threats.

Key Points:

  1. Voters' Expectations: Polling reveals that voters expect technology companies to take responsibility for the products they create. Concerns over AI-generated misinformation potentially impacting the 2024 election have heightened expectations for AI regulation.
  2. Partisan Perspectives: Supporters of former President Trump are more likely than backers of President Biden to believe that AI will decrease their trust in election results. Additionally, self-identified liberals are more likely to have used generative AI for work or education compared to moderates and conservatives.
  3. Regulatory Dilemma: The U.S. is grappling with how to balance AI regulation and innovation. Some argue strict rules would hinder competitiveness with China, while others contend that Chinese AI systems already lag because of content filters and fine-tuning.
  4. Public Pessimism: A majority of Americans believe that humans will lose control of AI within the next 25 years, with pessimism about the future of AI more prevalent than optimism. This sentiment is consistent across ideological lines.
  5. Lack of Trust: Lack of trust in leaders and institutions presents challenges for AI regulation efforts on Capitol Hill, potentially hindering regulatory progress.
  6. No Consensus on Regulation: There is no consensus on how to regulate AI: one-third of respondents believe AI cannot be effectively regulated, and a significant portion of those surveyed do not consider creating a new federal agency for AI regulation the best option.

Source: Axios

Anthropic's Responsible Scaling Policy: Striking a Balance Between AI Innovation and Catastrophic Risk Mitigation

Anthropic, a prominent player in the field of artificial intelligence (AI), has unveiled its Responsible Scaling Policy (RSP), a comprehensive set of protocols aimed at addressing the escalating risks associated with the development of increasingly advanced AI systems. As AI models gain unprecedented capabilities, Anthropic acknowledges the potential for both significant economic and social benefits and the emergence of severe risks.

The RSP introduces the concept of AI Safety Levels (ASL), inspired by the biosafety level standards used in handling dangerous biological materials. ASL categorizes AI systems by their potential for catastrophic risk, with higher levels requiring stricter safety, security, and operational standards. The framework ranges from ASL-1 (systems posing no meaningful catastrophic risk) through ASL-2 (systems showing early signs of dangerous capabilities, where current large models sit) to ASL-3 (systems that substantially increase the risk of catastrophic misuse or display low-level autonomy). ASL-4 and higher are yet to be defined but are expected to mark qualitative escalations in risk potential and autonomy.

The policy emphasizes the importance of balancing risk management with the encouragement of beneficial AI applications and safety advancements. It requires temporary pauses in the training of more powerful models if safety procedures cannot keep up with scaling but incentivizes solving safety challenges as a means to unlock further scaling.
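To make the tiering concrete, here is a minimal, hypothetical sketch of how an ASL-style gating rule could be expressed in code. The enum values and the `may_continue_scaling` function are illustrative assumptions based on the summary above, not Anthropic's actual implementation:

```python
from enum import IntEnum


class ASL(IntEnum):
    """Hypothetical encoding of the AI Safety Levels described in the RSP.

    Higher values indicate greater potential for catastrophic risk and
    correspondingly stricter required safeguards.
    """
    ASL_1 = 1  # No meaningful catastrophic risk
    ASL_2 = 2  # Early signs of dangerous capabilities; current large models
    ASL_3 = 3  # Substantially increased misuse risk, or low-level autonomy
    # ASL-4 and higher are not yet defined in the published policy.


def may_continue_scaling(evaluated_risk: ASL, implemented_safeguards: ASL) -> bool:
    """Illustrative gating rule (an assumption, not Anthropic's code):
    training of more powerful models proceeds only while the lab's
    implemented safety and security standards meet or exceed the risk
    level the model is evaluated at; otherwise scaling pauses until
    safety work catches up.
    """
    return implemented_safeguards >= evaluated_risk


# A model evaluated at ASL-3 with only ASL-2 safeguards in place would
# trigger a pause; matching safeguards unlock further scaling.
assert not may_continue_scaling(ASL.ASL_3, ASL.ASL_2)
assert may_continue_scaling(ASL.ASL_2, ASL.ASL_2)
```

The ordering of the levels does the work here: scaling is permitted only while implemented safeguards meet or exceed the model's evaluated risk level, mirroring the pause-until-safety-catches-up incentive the policy describes.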

Importantly, the RSP does not disrupt current AI uses or product availability but mirrors pre-market testing and safety feature design practices seen in industries like automotive and aviation. It aims to rigorously demonstrate product safety before market release, benefiting customers and users.

Anthropic's RSP has received formal board approval, and any changes are subject to board approval following consultations with the Long-Term Benefit Trust. The policy also includes procedural safeguards to ensure evaluation integrity. However, Anthropic acknowledges the need for rapid iteration and course correction given the fast-paced and uncertain nature of the AI field.

The full document provides detailed definitions, criteria, and safety measures for each ASL level and is intended to inspire policymakers, nonprofit organizations, and other companies grappling with similar deployment decisions in the ever-evolving landscape of AI technology.

Source: Anthropic