Armilla Review - Latest AI Developments: Biden-Harris Secures Responsible AI Commitments, Bias in AI Detection Tools, Licensing for Powerful Models, and Legal Battles

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
September 20, 2023
5 min read

Top Story

Biden-Harris Administration Secures Further Commitments from Leading AI Companies to Advance Responsible AI Development

The Biden-Harris Administration continues to make strides in regulating and promoting responsible artificial intelligence (AI) development in the United States. Building on commitments secured in July from seven top AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – the administration has now garnered voluntary pledges from eight additional industry leaders: Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability. These commitments emphasize three crucial principles – safety, security, and trust – and represent a critical bridge to forthcoming government action.

The companies have committed to:

  1. Ensuring Products are Safe Before Introduction to the Public.
  2. Building Systems that Prioritize Security.
  3. Earning the Public's Trust by Being Transparent and Responsible in AI Deployment.

The Biden-Harris Administration's comprehensive approach to AI includes the development of an Executive Order and bipartisan legislation, in addition to collaboration with international allies. The administration's dedication to safeguarding Americans' rights and safety in the AI era is evident through various initiatives, such as the "AI Cyber Challenge," discussions with consumer protection and civil rights leaders, engagement with AI experts, and the publication of a landmark "Blueprint for an AI Bill of Rights." Furthermore, the administration has invested heavily in AI research and development, ensuring that the United States remains at the forefront of responsible AI innovation.

As these commitments and initiatives continue to evolve, the Biden-Harris Administration demonstrates its commitment to promoting responsible and ethical AI practices while protecting individuals and society from potential harms and discrimination.

Source: The White House


From the Armilla Blog

Armilla AI Launches Armilla Assurance to Support Enterprise Adoption of Trustworthy AI

The lack of trust in AI products is making it harder for even the most responsible companies to sell high-quality, reliable AI products that deserve to be trusted on their merits – including many whose products could deliver positive outcomes across a range of fields. As AI is rapidly adopted across industries, the need for trust in AI products grows more acute. To close this trust gap, Armilla Assurance is launching a series of new offerings, including AI Assessments to verify the quality and reliability of AI-first products, and Third-Party Risk Management (TPRM) Programs to guide enterprises through the procurement and operation of safe, transparent, and robust AI solutions. Armilla's technology and diverse, interdisciplinary expertise position us uniquely to deliver critical AI assurance solutions at scale, establish trust in AI, and drive safe adoption for clients and partners.

Learn more


Top Articles

AI Detection Tools Accused of Bias Against International Students, Threatening Academic Careers

A recent study from Stanford University has revealed that AI detection tools used to spot AI-generated writing exhibit bias against non-native English speakers, leading to potentially harmful consequences for international students. The study found that these tools often falsely accuse international students of cheating by flagging their writing as AI-generated, even when it is not.

Turnitin, a widely used software in academia, came under scrutiny as it labeled more than 90 percent of a student's paper as AI-generated, prompting concerns about the reliability of these tools. This pattern of false positives disproportionately affected international students, raising questions about the fairness and accuracy of AI detectors.

The bias in AI detection tools stems from their tendency to flag writing as AI-generated when it exhibits predictable word choices and simpler sentence structures. Non-native English speakers, whose English writing often shows less lexical and syntactic variety, frequently fall into this pattern, leading to misidentifications.
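To make that mechanism concrete, here is a minimal sketch of a perplexity-based detector, the general approach these tools are believed to use (this is not Turnitin's actual method; the GPT-2 scoring model, the threshold, and the sample texts below are illustrative assumptions):

    # Minimal sketch of a perplexity-based AI-text detector (illustrative,
    # not any vendor's actual method). Text a language model finds highly
    # predictable gets a low perplexity score and is flagged as "AI-like" --
    # the same pattern common in simpler, non-native English writing.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # The model's loss is the average negative log-likelihood per
        # token; exponentiating it gives the perplexity of the text.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    THRESHOLD = 40.0  # hypothetical cutoff; real detectors are calibrated
    for sample in [
        "The results show the method is good and the method works well.",
        "Quixotic zephyrs jolted my wax bed beyond any hope of repair.",
    ]:
        p = perplexity(sample)
        verdict = "flagged as AI-like" if p < THRESHOLD else "passes"
        print(f"perplexity {p:7.1f} -> {verdict}: {sample}")

A human writer working with a limited English vocabulary and simple sentence patterns can easily fall under such a threshold, which is why this style of detector produces false positives for non-native speakers.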

International students, already vulnerable to academic misconduct accusations, now face the additional threat of AI detectors affecting their grades, scholarships, and even visa status. Some educators argue that institutions should reconsider using AI detectors altogether, citing the harm they can do to the learning environment, to students' psychological well-being, and to trust between students and teachers.

Turnitin, for its part, maintains that its tool is not biased and says it is conducting research to validate the tool's accuracy.

Source: The Markup


Senators Propose Licensing Requirement for Powerful AI Models, Including ChatGPT-Level AI

U.S. Senators Richard Blumenthal and Josh Hawley have introduced a bipartisan legislative framework recommending the creation of a government body to regulate artificial intelligence (AI). The proposal suggests that companies should be required to obtain licenses before working on advanced AI models like OpenAI's GPT-4, as well as for developing "high-risk" AI applications, including face recognition systems.

Under the proposed framework, companies seeking AI licenses would need to conduct rigorous testing to assess potential harm before deploying AI models. They would also be required to disclose instances of AI errors post-deployment and allow independent third-party audits of AI models. Additionally, companies would have to publicly disclose details about the training data used for AI model creation, and individuals harmed by AI would have the right to take legal action against the responsible companies.

These recommendations are expected to influence ongoing discussions in Washington regarding AI regulation. Senators Blumenthal and Hawley are organizing a Senate subcommittee hearing on the accountability of businesses and governments when deploying AI systems that cause harm or violate rights. Microsoft President Brad Smith and Nvidia's Chief Scientist William Dally are among the scheduled witnesses.

Senator Chuck Schumer is also hosting meetings to explore AI regulation, with tech executives and representatives of AI research and human rights advocacy groups in attendance. The legislative framework presented by Senators Blumenthal and Hawley marks a significant development in the discussion of AI regulation, as it takes a stricter approach than the voluntary risk management framework and nonbinding AI bill of rights previously proposed by the federal government.

However, questions remain about the specifics of AI oversight, including whether it would be overseen by a newly created federal agency or an existing one, and how "high risk" applications would be defined. Experts and organizations have expressed varying opinions on government licensing for AI development, with concerns about stifling innovation and the potential for industry capture. The legislative framework aims to address these concerns by recommending strong conflict of interest rules for AI oversight body staff.

Source: WIRED


Pulitzer Prize-Winning Author Michael Chabon and Others Sue OpenAI for Copyright Infringement Over ChatGPT Training Data

The lawsuit, filed by Chabon and several other writers, claims that OpenAI collected content from various sources across the internet, including copyrighted written works, plays, and articles, to train and enhance its GPT models.

One notable aspect of the lawsuit centers on the authors' assertion that OpenAI accessed "Books1" and "Books2," which accounted for a significant portion of the GPT-3 training dataset. The lawsuit alleges that "Books1" is based on materials from Project Gutenberg, and "Books2" was sourced from websites known for hosting pirated books and text-based materials, including Library Genesis (LibGen), Z-Library, Sci-Hub, and Bibliotik.

The writers argue that ChatGPT generates not only summaries but also in-depth analyses of themes present in their copyrighted works, suggesting that their content played a role in training the underlying GPT model. The suit claims that when asked to emulate specific writing styles, ChatGPT was able to imitate Chabon's style, referencing characters from his Pulitzer-winning book "The Amazing Adventures of Kavalier & Clay."

OpenAI is currently facing multiple copyright-related lawsuits, with some authors alleging copyright violations and OpenAI arguing that its usage of text is protected under the "fair use" doctrine. The US Copyright Office is actively studying copyright law and policy issues pertaining to artificial intelligence systems.

The lawsuit brought by Chabon and other writers includes allegations of direct and vicarious copyright infringement, illegal removal of copyright management information, unfair competition, and unjust enrichment. The writers seek an injunction against further copyright infringement and unspecified damages.

Source: The Register


New Class Action Lawsuits Against OpenAI and Meta Challenge Fundamental Theories of Generative AI Liability

Matthew Butterick and the Joseph Saveri Law Firm have filed two new class action lawsuits, against OpenAI and Meta, targeting ChatGPT and LLaMA, respectively. These lawsuits introduce fresh legal theories that diverge from previous cases, including assertions about derivative works and the nature of model outputs. Notably, comedian Sarah Silverman is a named plaintiff in both lawsuits, citing claims related to her book, "The Bedwetter."

These new legal approaches present strategic opportunities for the plaintiffs, offering different angles from which to challenge generative AI practices. While the cases raise intriguing questions about AI models and their training data, the lack of clear evidence of infringement in model outputs, together with the absence of direct harm to the plaintiffs, may weaken their overall strength.

Source: Kate Downing Law


Google Tightens Rules on Misleading Political Ads, But Questions Remain

Starting in November 2023, Google's ad network will require clear disclosure for political ads that contain fictionalized depictions of real people or events. While the policy aims to combat misleading content, it does not explicitly address generative AI, which can be used to create deceptive ads.

Under the new rules, verified advertisers promoting "inauthentic" depictions of real-world events or individuals must prominently disclose that their content does not accurately represent reality. Disclosure is required for ads that make it appear as if a person said or did something they did not, as well as for ads depicting real events but including scenes that did not occur. However, minor image edits, colour corrections, and edited backgrounds that do not depict real events are exempt from disclosure requirements.

The move comes as AI-generated political content becomes more prevalent, raising concerns about disinformation during elections. Questions remain about how the rules will be enforced and whether they cover all potential scenarios, but Google's effort to regulate political ads, both AI-generated and conventional, represents a significant step in combating digital disinformation and propaganda during election cycles. The effectiveness of the rules will depend on enforcement and on transparency in reporting actions taken against violators.

Source: Google


DeepMind Cofounder Envisions Interactive AI and Emphasizes Regulation

In a recent interview, Mustafa Suleyman, the co-founder of DeepMind, outlined his vision for the future of AI, emphasizing the shift from generative AI to interactive AI. He envisions AI systems that can perform tasks by collaborating with other software and people, marking a significant advancement in the capabilities of artificial intelligence. Suleyman advocates for robust regulation in the AI field, asserting that achieving this goal is entirely feasible. He believes that AI can be harnessed to make more consistent and fair trade-offs on behalf of humanity, ultimately improving various aspects of our lives.

While some may view Suleyman's optimism with skepticism, he underscores the importance of balancing AI autonomy with control, setting boundaries, and ensuring that humans remain in command. He discusses the need for national and international regulations to guide AI development and prevent undesirable outcomes.

Suleyman's AI model Pi, built at his current venture, Inflection, has gained attention for its controllability and respectful tone, addressing concerns about AI-generated content. He emphasizes the significance of the interactive phase in AI's evolution, where conversation becomes the primary interface and AI systems can take actions to fulfill high-level goals.

In the midst of discussions about AI's potential risks, Suleyman encourages a practical approach to regulation, drawing parallels with how previous complex industries, such as aviation and automobiles, combined self-regulation with top-down regulation by the nation-state. He calls for action and collaboration to ensure AI technology serves the public interest and meets stringent safety and ethical standards.

Source: MIT Technology Review


AI's Impact on Knowledge Workers: A Study Reveals Productivity and Quality Insights

A collaborative study conducted with Boston Consulting Group sheds light on the transformative effects of AI on knowledge worker productivity and quality. The research, encompassing 758 consultants, explores the performance implications of AI on intricate, knowledge-intensive tasks. Participants were divided into three groups: no AI access, access to GPT-4, or access to GPT-4 with prompt engineering guidance.

The findings unveil a "jagged technological frontier," where AI excels in some tasks but falls short in others, even if they appear similar in complexity. Across 18 realistic consulting tasks within AI's capability, consultants using AI exhibited remarkable improvements in productivity (completing 12.2% more tasks, 25.1% faster) and produced significantly higher quality results (over 40% higher quality compared to a control group). These benefits extended across the skill spectrum, with below-average performers seeing a 43% boost and above-average performers a 17% improvement. However, for tasks outside the AI frontier, consultants using AI were 19 percentage points less likely to produce correct solutions.

Additionally, the study identifies two distinct patterns of AI use by humans: "Centaurs," who divide tasks between themselves and AI, and "Cyborgs," who fully integrate AI into their workflow. These insights provide valuable guidance for navigating the evolving landscape of AI integration in knowledge work.

Source: Harvard Business School