Armilla Review - AI Regulation and Innovation: Safeguarding Technology's Future and Tackling Model Breakdown

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
August 2, 2023
5 min read

White House Secures Voluntary Safety Pledge from Leading AI Firms to Tackle Technology Risks

In a significant move to address concerns surrounding generative AI, the White House has announced that seven prominent AI companies have voluntarily committed to ensuring the safety and transparency of their products. The announcement comes as the Biden administration seeks to proactively manage the early deployment of AI technology, a rollout some officials have described as reckless.

The companies that have made these commitments include Microsoft, OpenAI, Google, Meta, Amazon, Anthropic, and Inflection AI. Their key pledges involve allowing external scrutiny of their AI products by domain experts, sharing risk and vulnerability information with each other and the federal government, and investing in cybersecurity and safeguards against insider threats to protect sensitive proprietary AI model weights.

To address concerns about potential misuse of AI-generated content, the companies have also pledged to deploy mechanisms like watermarking systems to indicate when content is produced by AI. Additionally, they will prioritize research to understand and mitigate limitations and bias in AI systems.
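
The pledge does not say how such watermarking would work, and approaches differ by modality. For text, one published family of techniques biases generation toward a pseudo-random "green list" of tokens and later tests whether green tokens are over-represented (in the spirit of Kirchenbauer et al., 2023). The sketch below is a toy illustration of that idea, with a made-up vocabulary and hypothetical helper functions, not any signatory's actual system.

```python
# Toy "green-list" text watermark: illustration only, not any company's
# actual system. The previous token seeds a pseudo-random split of the
# vocabulary; generation favours the "green" half, and a detector checks
# whether green tokens occur more often than chance would predict.
import hashlib
import math
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token, fraction=0.5):
    # Deterministically derive a vocabulary split from the previous token.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = sorted(VOCAB)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermarked_continuation(start, length, bias=0.9, seed=42):
    # Hypothetical generator: with probability `bias`, pick a green-list word.
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        pool = sorted(green_list(tokens[-1])) if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def detection_z_score(tokens, fraction=0.5):
    # How far does the observed green-token rate sit above chance?
    hits = sum(tok in green_list(prev, fraction)
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

print(f"watermarked:   z = {detection_z_score(watermarked_continuation('the', 50)):.1f}")
print(f"unwatermarked: z = {detection_z_score([random.choice(VOCAB) for _ in range(51)]):.1f}")
```

Real deployments operate on model logits over full vocabularies, and image and audio watermarks use entirely different machinery; the point is only that a statistical signal can be embedded and later detected without visibly altering the output.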

On the other hand, the commitments do not address certain critical issues such as privacy rights, transparency in AI model training data, compensation for creators of copyrighted data used in training, and the impact of AI on the job market.

It's worth noting that while these voluntary commitments are a step in the right direction, they fall short of the provisions in recent draft regulatory bills proposed by Congress. As such, they may be used as a rationale to slow down more stringent legislation in the future.

A significant concern is the federal government's lack of a comprehensive framework to ensure AI product safety. The various entities involved, including the White House, regulatory agencies, Congress, and the courts, hold different responsibilities, leading to potential duplication and gaps in oversight.

This approach of voluntary commitments is reminiscent of the Obama administration's softer stance towards social media companies a decade ago, which later faced criticism due to privacy and antitrust issues. Some of these same companies are now involved in discussions regarding AI, which the Biden administration hopes will have a more positive outcome.

This voluntary approach is a step towards accountability, but it is only a beginning: the White House has signaled that additional waves of action will follow, indicating a continued effort to improve AI regulation.

In the meantime, the administration is working with Congress to explore the need for comprehensive legislation that can effectively address the challenges posed by AI. The industry trade group BSA supports the move, emphasizing the importance of regulatory frameworks. Advocates caution, however, that lip service won't suffice: concrete measures must follow to ensure AI technology is used responsibly and safely.

This voluntary safety pledge represents a crucial step in fostering responsible AI development and deployment. However, it also highlights the need for robust and comprehensive legislation to address AI's broader challenges effectively. With collaboration between industry leaders and the government, there is hope for establishing a balanced regulatory framework that maximizes AI's benefits while minimizing potential risks.

Also read: White House Factsheet

Frontier Model Forum blogs: Anthropic | Google | Microsoft | OpenAI

Source: Axios


Canadian AI Pioneer Urges Swift AI Regulation at U.S. Congress Hearing

AI pioneer Yoshua Bengio, testifying before a U.S. Senate subcommittee, urged American lawmakers to regulate artificial intelligence swiftly in order to mitigate its risks while unleashing its potential. He stressed the urgency of the effort and called for international coordination so that rogue actors cannot simply exploit AI in less-regulated jurisdictions. Bengio proposed establishing secure-access international labs to research defences against criminal misuse of AI, and requiring social media companies to confirm that their users are human. He also supported giving individuals control over the use of their likeness and suggested penalties for counterfeiting humans.

The hearing outlined alarming AI-related risks like corrupted elections, bank fraud, and nuclear weapons controlled by rogue automated programs. Bengio's plea came amidst ongoing deliberations over AI legislation in both the U.S. and Canada. While U.S. lawmakers are studying various AI guardrails, the White House has released voluntary guidelines for companies. However, there are doubts about Congress's ability to act swiftly due to its history of gridlock.

Canada's Bill C-27, which has gone through two readings in the House of Commons, is more focused on privacy and data protection. It empowers a federal minister to demand company records and audit AI systems, with non-compliance leading to significant fines and potential imprisonment. In contrast, the U.S. Congress is exploring multiple AI-related bills, ranging from granting individuals the right to sue companies for AI-based harms to requiring information sharing with researchers and prohibiting the use of AI in launching nuclear weapons. Senate Majority Leader Chuck Schumer intends to hold public forums to develop a comprehensive framework for secure and accountable AI systems.

Also read this publication from the Canadian Centre for Cyber Security, which outlines potential risks associated with generative AI and measures to mitigate them.

Source: CBC


The EU AI Act at a Crossroads: Balancing Regulation and Innovation for Generative AI

The European Union (EU) is facing a crucial juncture in its approach to AI regulation, particularly concerning generative AI systems. As advances in AI, most visibly large language model systems such as ChatGPT, have raised concerns about potential risks, the EU's proposed AI Act has moved to the center of deliberations. Opinions differ, however, on how to address the unique challenges generative AI poses. In the legislative process, the European Commission, the Council, and the Parliament have each taken a distinctive regulatory approach, shaping the future of AI governance.

The European Commission's initial draft of the AI Act did not explicitly tackle generative AI; it focused on classifying AI systems by risk level, the roles of the actors involved, and sectoral differences. High-risk AI systems were the primary focus, and the proposed regulations aimed to ensure safety, transparency, and accountability in such systems. The rapid consumer adoption of generative AI, however, pushed the technology to the forefront of the legislative process.

The Council's position introduced a new category, "general-purpose AI", covering generative AI systems capable of producing a wide variety of outputs. It aimed to limit the obligations on providers of such systems while granting the Commission authority to specify and adapt the requirements. The Parliament, by contrast, introduced the concept of "foundation models", defined as AI models with wide-ranging applicability that can be adapted for specific tasks; it focused on technical requirements for foundation models and added dedicated provisions addressing generative AI.

One major challenge in AI regulation is striking a balance between promoting innovation and ensuring responsible use. The Council's approach appears to be more innovation-friendly, while the Parliament's amendments emphasize responsibility. However, both approaches complicate the risk-based framework and call for a more precise understanding of the technology's implications.

To build more foresight into the regulation, two aspects need attention. First, risk management processes should be defined early on, involving stakeholders to assess potential future impacts and adapt accordingly. Second, knowledge sharing and mediation between AI providers and downstream service providers should be encouraged, closing information gaps and promoting informed decision-making.

As the EU lawmakers draft the AI Act, they have a unique opportunity to create regulations that resonate globally and redefine the narrative around AI. By carefully addressing the complexities of generative AI and fostering responsible development, the EU can lay the groundwork for a future where AI technology is harnessed for the benefit of humanity while minimizing risks.


Source: European Law Blog


AI-Generated Content Leads to Model Breakdown: Introducing Model Autophagy Disorder (MAD)

A recent study by researchers from Rice and Stanford universities has revealed a concerning phenomenon in artificial intelligence (AI): when AI models are repeatedly trained on AI-generated content, their output quality deteriorates, a condition the authors term "Model Autophagy Disorder (MAD)". This self-consuming loop arises when synthetic data is used to train successive generations of AI models, causing the models to progressively lose precision and diversity in their outputs.
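
The underlying dynamic can be illustrated without any neural network at all. The sketch below (our illustration, not the study's experiment) repeatedly fits a simple Gaussian "model" to samples drawn from the previous generation's model; over generations, the fitted standard deviation, a crude proxy for output diversity, tends to collapse.

```python
# A minimal self-consuming training loop: each generation's "model" is a
# Gaussian fitted to samples produced by the previous generation's model.
# Illustrative only; the MAD study works with deep generative models.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20                            # small per-generation training sets
data = rng.normal(0.0, 1.0, n_samples)    # generation 0 trains on "real" data

for generation in range(1, 51):
    # "Train": maximum-likelihood fit of mean and standard deviation.
    mu_hat, sigma_hat = data.mean(), data.std()
    # The next generation sees only synthetic samples from the fitted model.
    data = rng.normal(mu_hat, sigma_hat, n_samples)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu_hat:+.3f}  std={sigma_hat:.3f}")
```

With no fresh real data entering the loop, sampling noise compounds: the mean drifts and the standard deviation shrinks toward zero, the toy analogue of generative models losing diversity when trained on their own outputs.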

The implications of MAD could be far-reaching, particularly as AI models are increasingly trained using scraped online data. The more synthetic content is generated on the internet, the harder it becomes for AI companies to ensure the quality of their training datasets, potentially affecting the structure and reliability of the open web.

AI's widespread integration into the internet's infrastructure, where it creates and analyzes content, poses challenges in distinguishing between genuine human-generated data and AI-generated content. This trend is expected to accelerate as generative AI models become more prevalent.

The study raises questions about the usefulness of AI systems without human input. It indicates that machines alone may not be as effective as hoped, as their performance degrades when repeatedly trained on AI-generated data. This could offer some hope that AI won't replace humans entirely, but it also highlights the complexities and challenges in AI regulation and usage.

Addressing the issue may involve weighting training data appropriately and finding ways to keep a balance of human-generated and AI-generated data in training sets. As the use of generative AI continues to grow rapidly, the industry must grapple with the implications of MAD and work towards responsible, sustainable AI practices that safeguard the integrity of the open web.


Source: Futurism


FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

The paper introduces FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a novel evaluation protocol for Large Language Models (LLMs). Existing evaluation methods are limited in that they provide only coarse-grained assessments and ignore the instance-wise composition of skills each user instruction requires. FLASK addresses this by defining 12 fine-grained skills needed for LLMs to follow open-ended user instructions and by constructing an evaluation set that allocates a specific set of skills to each instance.

By annotating the target domain and difficulty level of each instance, FLASK offers a comprehensive analysis of a model's performance by skill, domain, and difficulty. This fine-grained evaluation allows for a more accurate measurement of a model's capabilities and provides insight into how it can be improved in specific areas. FLASK also enables comparisons across multiple open-source and proprietary LLMs, and, interestingly, its model-based and human-based evaluations are highly correlated.
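
In practice, such a protocol amounts to annotating every evaluation instance with the skills it exercises, a domain, and a difficulty level, then aggregating evaluator scores along each of those axes. The sketch below is our illustration of that bookkeeping; the instances and scores are hypothetical, the skill names follow the paper's general scheme, and none of it is the authors' code or data.

```python
# A sketch of FLASK-style fine-grained scoring (illustrative only).
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Instance:
    instruction: str
    skills: list        # the subset of the 12 fine-grained skills required
    domain: str         # annotated target domain
    difficulty: int     # annotated difficulty level
    scores: dict = field(default_factory=dict)  # skill -> evaluator rating (1-5)

# Hypothetical annotated instances with evaluator-assigned scores.
instances = [
    Instance("Prove that sqrt(2) is irrational.",
             ["logical correctness", "completeness"], "math", 4,
             {"logical correctness": 4, "completeness": 3}),
    Instance("Summarize this article in two sentences.",
             ["comprehension", "conciseness"], "news", 2,
             {"comprehension": 5, "conciseness": 4}),
    Instance("Explain why the sky is blue to a child.",
             ["factuality", "readability"], "science", 1,
             {"factuality": 5, "readability": 4}),
]

def mean_score_per_skill(instances):
    """Average evaluator rating per skill, over the instances that exercise it."""
    totals, counts = defaultdict(float), defaultdict(int)
    for inst in instances:
        for skill, score in inst.scores.items():
            totals[skill] += score
            counts[skill] += 1
    return {skill: totals[skill] / counts[skill] for skill in totals}

print(mean_score_per_skill(instances))
```

Grouping the same ratings by domain or difficulty instead of skill yields the per-domain and per-difficulty breakdowns the protocol is designed to support.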

The significance of FLASK lies in its ability to provide a holistic view of a model's performance by breaking down the evaluation into specific skills and their application in different contexts. This protocol can be valuable for developers seeking to enhance their LLMs' capabilities and for practitioners looking to choose the most suitable model for particular use cases. Overall, FLASK offers a refined evaluation approach that advances the understanding and improvement of Large Language Models.


Source: arXiv


Shopify Expands Generative AI and Unveils an AI Sidekick for E-commerce Merchants

At its Editions conference, Shopify unveiled an array of new features under Shopify Magic, its catch-all brand for generative AI. Leveraging proprietary Shopify data and large language models like OpenAI's ChatGPT, Shopify Magic now offers customized blog posts, product descriptions, and marketing email content for merchants. To ensure accuracy and safety, merchants can review the AI-generated copy before publication. The highlight of the announcement was Sidekick, a conversational AI assistant trained to understand Shopify and assist merchants with various tasks, including setting up discounts, segmenting customers, and modifying shop designs. Shopify aims to empower businesses of all sizes with AI capabilities while safeguarding user data and content quality.


Source: TechCrunch


Adobe Photoshop Unveils "Generative Expand" Feature to 'Uncrop' Images Using AI

Adobe is introducing a new feature called "Generative Expand" in the beta version of Photoshop, part of its family of generative AI models known as Firefly. With Generative Expand, users can click and drag the Crop tool to expand and resize images, then click the "Generate" button to fill the new space with AI-generated content that seamlessly blends with the original image. The feature is particularly useful for fixing images with cut-off subjects, unwanted aspect ratios, or misaligned objects. Generated content can be added with or without a text prompt, and it appears as a new layer, allowing users to easily discard it if needed.

To prevent the generation of inappropriate or toxic content, Adobe has implemented filters for text prompts and for the variations returned from the model. While similar features exist in other generative AI platforms, Adobe's native integration with Photoshop, which has a massive user base, makes this a strategic move. Generative Expand is currently available in beta; Adobe plans to make it commercially available in the second half of the year. Additionally, Adobe is expanding language support for Firefly-powered text-to-image features to over 100 languages.


Source: TechCrunch