Armilla Review - Unveiling the AI Power Paradox: Navigating Governance Challenges in an Era of Powerful AI

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
August 23, 2023
5 min read

The AI Power Paradox: Navigating the Governance Challenge

Generative AI technology has ushered in a monumental shift, heralding a transformative era with profound political, economic, and societal implications. This shift, marked by both promise and peril, demands novel approaches to governance to ensure the responsible and beneficial deployment of artificial intelligence. As AI challenges conventional notions of geopolitical power, the urgency to establish effective regulations becomes increasingly evident.

Generative AI, capable of producing high-quality content and original creations, is at the forefront of an impending technological revolution. This revolution is expected to reshape politics, economies, and societies, yet it also introduces significant disruptions and risks. Unlike previous technological waves, AI challenges the very foundation of global power structures, potentially sidelining nation-states as primary actors on the geopolitical stage. The creators of AI systems themselves hold substantial geopolitical influence, blurring the lines between technology companies and nation-states.

The rapid advancement and complexity of AI present a formidable challenge for governments striving to create relevant regulatory frameworks. Failure to keep pace with AI's evolution could leave governments struggling to establish effective rules. Fortunately, global policymakers have started acknowledging the challenges of AI governance. Initiatives such as the G-7's "Hiroshima AI process," the European Union's AI Act, and calls for a global AI regulatory watchdog demonstrate a growing recognition of the need for regulation.

The prevailing debate surrounding AI governance often centers on a false dichotomy between exploiting AI's potential for national power and stifling it to prevent risks. However, the unique nature of AI requires an innovative and adaptable approach. To create an effective AI governance framework, the international community must embrace a broader, inclusive perspective that includes technology companies as essential stakeholders.

Crafting an effective governance framework for AI requires adherence to several principles:

  1. Precaution: Prioritize prevention of potential AI risks before they materialize.
  2. Agility: Create adaptive governance structures to keep pace with AI's rapid evolution.
  3. Inclusivity: Engage technology companies, governments, scientists, ethicists, and civil society to shape AI policies collectively.
  4. Impermeability: Establish watertight regulations across the global AI supply chain.
  5. Targeted Approach: Develop specialized regulatory tools for various AI-related challenges.

To effectively govern AI, three overlapping regulatory regimes are proposed:

  1. Establish a global scientific body to objectively assess AI's risks and impacts, informing policymaking.
  2. Foster cooperation among major AI players to prevent the proliferation of dangerous AI systems.
  3. Create an institution akin to a Geotechnology Stability Board to monitor and address AI-related crises, ensuring global stability.

The impending AI revolution demands innovative, adaptable governance frameworks that can navigate the complexities of this evolving technology. By embracing principles such as precaution, agility, inclusivity, impermeability, and targeted approaches, the global community can collaboratively establish effective regulations. Successfully governing AI sets a precedent for addressing future disruptive technologies and secures a future where AI's transformative potential is harnessed responsibly for the benefit of humanity.

Source: Foreign Affairs


Safeguarding Against the Threat of Very Powerful Artificial Intelligence

As AI technology advances, its potential for harm becomes more pronounced, requiring urgent and comprehensive regulatory measures. The recent experiment at Carnegie Mellon University, where an AI system demonstrated the ability to produce dangerous substances, underscores the potential dangers of uncontrolled AI development. To prevent an AI catastrophe, society must take proactive steps to harness its benefits while safeguarding against its potential harms.

The Current Landscape: AI has made significant strides in various domains, from customer service to scientific research. It holds the promise of personalized education and 24/7 medical advice, but also presents challenges, such as exacerbating disinformation, enabling discrimination, and facilitating espionage. Concerns voiced by AI researchers and experts about the potential catastrophic risks of AI have prompted a call for global attention and regulatory action.

The Imperative for Regulation: While AI development cannot be halted due to its immense value, policymakers have a pivotal role to play in guiding the industry's growth and ensuring responsible use. To effectively address the risks posed by AI, measures must be taken to control access to advanced AI systems, develop stringent regulations, and create safeguards against potential dangers.

Controlling Access and Ownership: Governments can regulate access to the advanced chips necessary for training AI models, preventing their misuse by unauthorized actors. Export controls and chip ownership registries can be implemented to ensure these critical components are not diverted to rogue actors. To keep the most advanced AI models from falling into the wrong hands, governments can establish licensing regimes for those models, requiring risk assessments and reporting by developers.

Establishing Stringent Regulations: Governments must directly regulate AI development by setting up regulatory bodies responsible for evaluating and approving AI models. These bodies would assess risk, test for controllability and dangerous capabilities, and establish deployment rules for AI models. A tiered approach, ranging from unrestricted release to prohibition, can be employed based on a model's risk profile. This regulatory approach parallels the governance of other high-risk technologies, such as biotechnology and aerospace.

Preparing Society for AI's Impact: AI's dangers extend beyond its potential use as a tool for violence. Disinformation and bioweapons threats underscore the need for AI-powered tools that can distinguish AI-generated content from authentic material and flag potential risks. Governments can work with technology companies to establish uniform regulations for identifying and labeling AI-generated content. Furthermore, AI can be harnessed to identify and prevent potential bioweapons attacks, requiring a collaborative effort from various sectors.

The growth of very powerful AI systems is inevitable, making it imperative for society to proactively address their potential risks. While regulation may slow down AI development, it is essential for fostering responsible innovation and safeguarding against catastrophic consequences. Governments must take a multi-pronged approach, controlling access to advanced AI systems, establishing robust regulations, and preparing society for the challenges that AI brings. By taking these steps, society can reap the benefits of AI while minimizing its potential harms.

Source: Foreign Affairs


Balancing Concerns and Potential: Generative AI Faces Bans in Businesses

The initial enthusiasm surrounding generative artificial intelligence (AI) is evolving into a more cautious approach, as organizations worldwide consider or implement bans on tools like ChatGPT. A recent study conducted by BlackBerry in June and July this year revealed that 75% of businesses are either in the process of implementing or contemplating a ban on generative AI applications. The survey, which included 2,000 IT decision-makers across several countries, highlighted a growing concern about data security, privacy, and brand reputation as the main drivers for these bans.

Although 61% of respondents indicated that these bans would be permanent or long-term measures, there's a recognition that such strict control could hinder business operations and the use of personal devices for work purposes. While 80% of those surveyed believed organizations should have the right to manage the applications used by their employees, 74% also viewed these bans as "excessive control" over business practices.

Despite the caution, a majority of respondents acknowledged the potential benefits of generative AI. Around 55% saw the technology as a means to improve efficiency, 52% believed it could drive innovation, and 51% thought it would enhance creativity. Moreover, 81% believed that generative AI could play a role in cybersecurity defense.

Shishir Singh, BlackBerry's CTO for cybersecurity, emphasized that outright bans might hinder the numerous business advantages generative AI offers. He suggested that organizations explore "enterprise-grade" generative AI solutions that prioritize value and innovation while maintaining vigilance against unsecured consumer tools. Singh also stressed the importance of having appropriate tools for visibility, monitoring, and management of workplace applications.

Gartner's research echoed the concerns surrounding generative AI. A recent report indicated that generative AI had become a significant concern for enterprise risk executives. In Gartner's survey of 249 senior enterprise risk executives, generative AI emerged as the second most-cited risk in the second quarter of 2023. This surge in concern reflects the rapid growth of public awareness and usage of generative AI tools, as well as the diverse potential use cases and associated risks that these tools bring.

Source: ZDNET


Scientific Study Confirms Political Bias in ChatGPT's Responses

Suspicions regarding the potential left-wing bias of ChatGPT, the AI language model developed by OpenAI, have been validated by a comprehensive scientific study conducted by experts at the University of East Anglia (UEA). The study, published in the journal Public Choice, indicates that ChatGPT displays a "significant and systemic" left-leaning bias in its responses to various ideological statements.

The study involved asking ChatGPT to respond to 62 ideological statements while impersonating both left-leaning and right-leaning perspectives, and then comparing those answers with the platform's default responses to the same statements. The researchers found that ChatGPT's default responses were more aligned with left-leaning views, particularly favoring the Labour Party in the UK, the Democratic Party in the US, and the Workers' Party in Brazil.
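
The study's own prompts and code are not reproduced in the article. A minimal sketch of the probing pattern it describes, assuming the openai Python client, might look like the following; the statements, personas, and question wording here are illustrative placeholders, not the UEA study's actual 62-item questionnaire.

```python
# Minimal sketch of the probing approach described above: query the model with
# and without a political persona and compare the answers. The statements and
# personas are illustrative placeholders, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENTS = [
    "The government should raise taxes on the wealthy to fund public services.",
    "Free markets allocate resources better than state planning.",
    # ... the study used 62 such ideological statements
]

PERSONAS = {
    "default": None,
    "left-leaning": "Answer as a strong supporter of left-of-centre parties.",
    "right-leaning": "Answer as a strong supporter of right-of-centre parties.",
}

def ask(statement: str, persona: str | None) -> str:
    """Ask the model whether it agrees with a statement, optionally under a persona."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({
        "role": "user",
        "content": "Do you agree or disagree with the following statement? "
                   "Answer with one of: strongly agree, agree, disagree, "
                   f"strongly disagree.\n\n{statement}",
    })
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content.strip().lower()

# Collect answers for every statement under every persona; the study then
# examined which persona the default answers tracked most closely.
answers = {
    name: [ask(s, prompt) for s in STATEMENTS]
    for name, prompt in PERSONAS.items()
}
```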

The study's findings have implications for the political and economic domains, as ChatGPT's widespread use in generating content and providing information can influence user views and potentially impact political and electoral processes. The research highlights the need for AI-powered systems to provide impartial and unbiased information, particularly when they are used to inform the public and shape opinions.

The source of ChatGPT's political bias remains a subject of debate, with potential factors including biases within the training dataset and the algorithm itself amplifying existing biases present in the data. The study emphasizes the importance of transparency in AI training data and the need for tests to identify different types of biases in trained models.

This research underscores the broader challenge of ensuring fairness and neutrality in AI systems, especially those that contribute to public discourse and decision-making processes. As AI systems like ChatGPT continue to be integrated into various applications, addressing and mitigating biases is crucial to ensure equitable and informed interactions.

Source: Daily Mail


New York Times Prohibits Use of Content for AI Model Training

The New York Times has implemented a proactive measure to prevent its content from being utilized in the training of artificial intelligence (AI) models. The newspaper updated its Terms of Service on August 3rd to explicitly disallow the use of its content for developing any software program, including machine learning and AI systems. This change aims to safeguard the newspaper's content from being incorporated into AI models without proper authorization.

Furthermore, the updated terms state that automated tools, such as web crawlers, cannot access or collect New York Times content without written permission, and that non-compliance may result in unspecified fines or penalties. Interestingly, despite these updates, the newspaper's robots.txt file, which tells search engine crawlers which pages they may visit, does not seem to have been altered.
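
For readers unfamiliar with robots.txt, it is a plain-text file that tells automated crawlers which user agents may fetch which paths. As a minimal sketch using only Python's standard library, one could check what the live file currently permits for a given crawler; the user agents and URL below are illustrative, and the result depends entirely on whatever the file says at the time it is fetched.

```python
# Check what a site's robots.txt permits for a few example crawlers, using only
# the standard library. GPTBot is OpenAI's published crawler name; the URL and
# agents here are illustrative, and results reflect the live file when fetched.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.nytimes.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for agent in ("GPTBot", "Googlebot", "*"):
    allowed = parser.can_fetch(agent, "https://www.nytimes.com/section/technology")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```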

This move by The New York Times could be a response to Google's recent policy update, where the tech giant stated that it may collect public data from the web to train its AI services. Many AI models, including popular ones like OpenAI's ChatGPT, rely on large datasets that might contain copyrighted materials scraped from the web without proper authorization.

It's worth noting that The New York Times entered into a $100 million deal with Google earlier this year, allowing the search giant to feature Times content on select platforms; the two companies are collaborating on content distribution, subscriptions, marketing, advertising, and experimentation. Because Google already has a licensed arrangement for Times content, the changes to the newspaper's terms of service may instead be aimed at companies like OpenAI and Microsoft.

In the context of increasing concerns about AI training data ethics, various news organizations, including The Associated Press and the European Publishers' Council, have signed an open letter advocating for rules that mandate transparency regarding training datasets and the consent of rights holders before using data for AI model training. This highlights the broader industry trend towards ensuring responsible and ethical AI development practices.

Source: The Verge


McKinsey Launches AI Tool "Lilli" to Enhance Consulting Services

McKinsey & Company, the global consulting giant, has introduced its own generative AI tool, named "Lilli," for its employees. The chat application, developed by McKinsey's "ClienTech" team led by CTO Jacky Wright, is designed to give employees information, insights, data, plans, and recommendations of relevant internal experts, drawing on a repository of more than 100,000 documents and interview transcripts.

Named after Lillian Dombrowski, the first woman McKinsey hired in 1945, Lilli aims to serve as an AI-driven repository of the company's knowledge. The tool has been in beta testing since June 2023 and is set to be rolled out across McKinsey's operations in the coming months.

Lilli offers a text-based interface similar to other publicly available generative AI tools like OpenAI's ChatGPT. It contains both a general AI chat feature and a specialized "Client Capabilities" tab that draws responses from McKinsey's extensive document and data archives. One notable aspect is its provision of sources for each generated response, ensuring transparency and traceability.

The tool's potential applications span a wide range, from assisting consultants in gathering research on sectors and competitors to drafting project plans and proposals. It also showcases McKinsey's commitment to ensuring that the information provided is of high quality, even if responses arrive slightly more slowly than those of some commercial AI tools.

Lilli was built as a secure layer that interacts with existing large language models, such as those developed by Cohere and OpenAI on the Microsoft Azure platform. McKinsey intends to continuously explore new AI models to enhance Lilli's capabilities, aiming to provide a valuable internal resource for its employees.
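
Details of Lilli's internals have not been published. As a rough illustration of the pattern the article describes (retrieve internal documents, pass them to an external model, and return the sources alongside the answer), here is a hypothetical sketch; the corpus, the keyword scoring, and the call_llm() helper are invented placeholders, not McKinsey's implementation.

```python
# Hypothetical sketch of a "retrieve, then generate with sources" layer of the
# kind the article describes. The corpus, scoring, and call_llm() are
# placeholders, not McKinsey's implementation.
from collections import Counter

CORPUS = {
    "doc-001": "2022 interview notes on retail pricing strategy ...",
    "doc-002": "Project plan template for supply-chain diagnostics ...",
    "doc-003": "Sector briefing: generative AI adoption in banking ...",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: count query words that appear in the document."""
    words = Counter(text.lower().split())
    return sum(words[w] for w in query.lower().split())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the ids of the k most relevant documents."""
    ranked = sorted(CORPUS, key=lambda doc_id: score(query, CORPUS[doc_id]), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an external model (e.g. one hosted on Azure)."""
    return "<model-generated answer grounded in the excerpts above>"

def answer_with_sources(query: str) -> dict:
    """Build a prompt from retrieved excerpts and return the answer with its sources."""
    doc_ids = retrieve(query)
    excerpts = "\n\n".join(f"[{d}] {CORPUS[d]}" for d in doc_ids)
    prompt = f"Answer using only these internal excerpts:\n\n{excerpts}\n\nQuestion: {query}"
    return {"answer": call_llm(prompt), "sources": doc_ids}

print(answer_with_sources("generative AI in banking"))
```

Returning the retrieved document ids alongside the generated text is what gives each answer the kind of source trail the article highlights.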

The launch of Lilli signifies McKinsey's dedication to leveraging AI technology to enhance its consulting services and streamline internal processes. The company is not ruling out the possibility of white-labeling or externalizing Lilli in the future, highlighting the potential for AI-driven tools to play a broader role in various organizations.

Source: VentureBeat


US Federal Judge Rules AI-Generated Art Unfit for Copyright: Legal and Artistic Implications Unveiled

United States District Court Judge Beryl A. Howell has delivered a ruling that AI-generated artworks cannot be copyrighted. The decision came in response to a lawsuit filed against the US Copyright Office by Stephen Thaler, who sought copyright protection for an AI-generated image created using his Creativity Machine algorithm. Judge Howell based her ruling on the absence of human authorship in AI-generated art, noting that copyright has historically required a guiding human hand. While acknowledging the emerging role of AI as a creative tool, the ruling underscores the challenges posed by AI's growing influence in the realm of copyright law. Thaler intends to appeal the case, reflecting the ongoing legal debates surrounding AI's impact on intellectual property rights.

Source: The Verge


AI in Healthcare: A Transformational Path Forward Amidst Concerns and Challenges

Mount Sinai Hospital is embracing AI technology to revolutionize healthcare, with the hope of improving patient outcomes and efficiency. The hospital has invested significant resources into developing AI software and education, aiming to transform itself into a laboratory for AI-driven medical advancements. AI is being used to generate patient scores that help doctors make critical decisions, predict patient ailments, and automate tasks such as transcription and billing.

However, this surge in AI adoption is causing tension among medical professionals. Doctors and nurses are concerned about potential downsides, including wrong diagnoses, compromised patient data privacy, and the possibility of staff reductions in the name of innovation. These healthcare workers emphasize that while AI has its merits, it cannot replace the compassionate and nuanced care provided by human healthcare providers.

Experts acknowledge the potential of AI in healthcare but also caution about the need for strong regulation and oversight to prevent biases, errors, and misuse. Mount Sinai's approach of building AI tools in-house allows for customization and refinement based on physician feedback, but some critics argue that there is insufficient empirical evidence to demonstrate AI's actual impact on patient care. Additionally, some fear that AI could exacerbate existing issues such as bias in healthcare.

Source: The Washington Post