Armilla Review - Assessing the State of Responsible AI in 2023: Insights from MIT Sloan, BCG, Stanford, and NIST

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
October 21, 2023
5 min read

Top Story

Is Your Organization Investing Enough in Responsible AI? MIT Sloan and BCG's Data Reveals Concerns

MIT Sloan Management Review and Boston Consulting Group have conducted a study for the second consecutive year, gathering insights from AI experts on the state of responsible artificial intelligence (RAI) across organizations globally. When asked whether companies are making adequate investments in responsible AI, the majority of panelists expressed reservations, indicating that investments in RAI are falling short.

 

This trend aligns with findings from a 2023 global RAI survey, in which fewer than half of respondents believed their companies were adequately prepared to invest in RAI. In a landscape where AI risks are impossible to ignore, the urgency of investing in RAI has become clear, shifting it from an important but not especially urgent concern to a top priority.

 

Several factors contribute to the underinvestment in RAI, including the pull of profit motives and the race to capture AI-related gains. The rush to harness generative AI has led to reduced safety and security measures as companies prioritize speed to market. The growing scope of RAI, driven by expanded AI usage across enterprises, further complicates investment decisions. Additionally, increasing awareness of AI risks does not necessarily translate into accurate risk assessments.

 

Measuring the adequacy of RAI investments is challenging due to the lack of industry standards and varying opinions on what constitutes an adequate investment. The report highlights the need for companies to build leadership awareness of AI risks, accept RAI investment as an ongoing process, and develop RAI investment metrics to ensure that they are investing enough.

 

The study's panelist responses reflect concerns about the current state of RAI investment, revealing a need for organizations to reevaluate and increase their commitments to responsible AI.

 

Source: MIT Sloan Management Review

Featured

Is Your AI Model Going Off the Rails? There May Be an Insurance Policy for That

As businesses increasingly adopt generative AI, the risks associated with AI model failures have led insurance companies to explore opportunities in this emerging field. Companies are beginning to offer financial protection against AI models that go awry. These insurance policies aim to address concerns related to AI risk management voiced by corporate technology leaders, board members, CEOs, and legal departments.

 

There is a growing appetite for AI insurance, with major carriers considering specialized coverage for financial losses stemming from AI and generative AI-related issues. These issues encompass cybersecurity threats, potential copyright infringement, biased or inaccurate outputs, misinformation, and proprietary data leaks.

 

Experts predict that a substantial portion of large enterprises may invest in AI insurance policies as they become available. Munich Re and Armilla Assurance have already entered this space, offering coverage for AI services and warranties for AI models' performance, respectively.

 

As generative AI becomes more integral to business operations, insurance policies covering potential AI-related losses are expected to become increasingly important, with the opportunity for insurers to capture significant market share.

 

Also listen to: The Future of AI Insurance: An interview with Karthik Ramakrishnan, the CEO & Co-Founder of Armilla Assurance

 

Source: The Wall Street Journal

Top Articles

Quantifying Uncertainty: A Key Component of Trustworthy AI

In their blog post, Professor Mark Levene and Jenny Wooldridge from the National Physical Laboratory (NPL) delve into the crucial concept of Uncertainty Quantification (UQ) in the realm of AI and its significance in ensuring Trustworthy AI (TAI). TAI, as they define it, aims to establish AI systems that are honest, responsible, and truthful in their decision-making, incorporating factors such as reliability, robustness, ethics, transparency, and bias mitigation. UQ, rooted in the science of measurement, serves as a foundational element for assessing AI systems' characteristics and the uncertainty associated with input data and statistical models.

 

The authors illustrate the link between UQ and TAI, emphasizing that by quantifying uncertainty in trustworthiness characteristics, transparency about an AI system's limitations becomes possible. They underscore the importance of communicating uncertainties throughout the system's engineering stages, from data preparation to model evaluation, to enhance users' trust in AI systems.

A tangible example is presented where predictive uncertainty in medical diagnosis using AI can offer practitioners the option to admit uncertainty and seek a second opinion when the prediction interval encompasses the diagnostic threshold. The authors conclude that predictive uncertainty plays a vital role in enhancing the transparency and reliability of AI systems, making them more trustworthy and responsible.
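
To make the prediction-interval idea concrete, here is a minimal, hypothetical sketch in Python (not taken from the NPL post): an ensemble of models trained on bootstrap resamples yields a spread of risk scores for a case, and the case is flagged for a second opinion whenever the resulting interval straddles the diagnostic threshold. The function names and the toy ensemble are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_prediction_interval(models, x, alpha=0.05):
    """Pool predictions from bootstrapped models into a (1 - alpha) interval."""
    preds = np.array([m(x) for m in models])
    return np.quantile(preds, alpha / 2), np.quantile(preds, 1 - alpha / 2)

def needs_second_opinion(models, x, threshold=0.5):
    """Defer to a human reviewer when the interval straddles the threshold."""
    lower, upper = bootstrap_prediction_interval(models, x)
    return lower <= threshold <= upper

# Toy ensemble standing in for models trained on bootstrap resamples.
models = [lambda x, b=b: float(np.clip(0.4 + 0.1 * x + b, 0.0, 1.0))
          for b in rng.normal(0.0, 0.08, size=50)]

print(needs_second_opinion(models, x=0.7))  # True: the interval spans 0.5, so defer
```

A narrow interval that sits entirely above or below the threshold would instead let the system report its prediction together with a stated level of confidence.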

 

Source: AI Standards Hub

Stanford's Foundation Model Transparency Index Reveals Alarming Lack of Transparency in AI

Stanford University's Center for Research on Foundation Models (CRFM) has released the Foundation Model Transparency Index, shedding light on the transparency of leading foundation models, including today's prominent large language models (LLMs). The study emphasizes the vital importance of transparency in the AI industry for public accountability, scientific innovation, and effective governance.

 

The Index's findings are sobering: no major foundation model developer comes close to providing adequate transparency, and the highest overall score was only 54%, revealing a significant transparency deficit across the industry. Open models, such as Meta's Llama 2 and Hugging Face's BloomZ, performed best, though OpenAI's proprietary GPT-4 also ranked ahead of most others. CRFM evaluated ten major foundation model developers on how transparent they are about their models, how the models are constructed, and their overall impact.
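
As a rough illustration of how such a score can be read, the sketch below (our own, not CRFM's code) aggregates a hypothetical checklist of yes/no transparency indicators into a single percentage; the published Index takes a broadly similar approach, scoring each developer on the share of indicators it satisfies across upstream, model, and downstream domains.

```python
# Hypothetical checklist for illustration; the real Index uses a much larger
# set of indicators grouped into upstream, model, and downstream domains.
indicators = {
    "upstream":   {"training data sources disclosed": True,  "compute usage disclosed": False},
    "model":      {"capabilities documented": True,          "limitations documented": True},
    "downstream": {"usage policy published": True,           "impact reporting": False},
}

def transparency_score(indicators):
    """Overall score = share of satisfied indicators, expressed as a percentage."""
    flags = [ok for domain in indicators.values() for ok in domain.values()]
    return 100 * sum(flags) / len(flags)

print(f"{transparency_score(indicators):.0f}%")  # 67% for this toy checklist
```

Read this way, the 54% ceiling means even the best-scoring developer left nearly half of the indicators unmet.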

 

CRFM Director Percy Liang notes that the Index's focus is on a comprehensive notion of transparency, considering both upstream and downstream factors. The results demonstrate that LLM companies vary significantly in their approaches to transparency, and it's not a matter of open-source models universally outperforming proprietary ones. The study aims to encourage companies to improve their transparency practices, thereby setting new norms in the industry.

 

The transparency index serves as a framework for evaluating transparency in the AI field, and Liang believes that, as the industry evolves, companies will feel increasing pressure to enhance their transparency efforts. The results are considered a "pop quiz" for the industry in 2023, with optimism for positive changes and increased transparency in the coming months.

 

Source: VentureBeat

 

Here's the Transparency Index from Stanford

NIST Enhances Tools for Responsible AI: New Evaluation Methods and Standards Tracking

The National Institute of Standards and Technology (NIST) is taking significant steps to further its mission of promoting responsible AI systems by introducing new tools and capabilities. NIST's focus is on developing additional evaluation methods and a standards tracking system, building upon existing AI guidance initiatives. These new evaluation methods will predominantly concentrate on the socio-technical aspects of AI systems to assess their safety for deployment. The primary goal is to identify and mitigate the potential risks and harms associated with AI systems before they are put into operation.

 

NIST's approach emphasizes assessing the societal robustness of AI systems, not just their technical aspects. This shift aligns with NIST's broader objectives to prioritize the responsible and safe deployment of AI technologies.

 

Additionally, NIST plans to implement a standards tracker within its existing AI Resource Center. Although specific details about the data the tracker will collect were not disclosed, it will play a crucial role in contributing to NIST's comprehensive matrix designed to create a shared foundation for AI systems in the absence of extensive regulation. The agency's overarching aim is to provide support and guidance, particularly to small and medium-sized businesses, in effectively operationalizing AI risk management.

 

Source: NextGov FCW

Fine-Tuning Large Language Models: A Double-Edged Sword for AI Safety

In the rapidly evolving world of large language models (LLMs), fine-tuning has become a popular practice to customize models for specific applications and reduce bias. However, a recent study by Princeton University, Virginia Tech, and IBM Research reveals that fine-tuning LLMs can inadvertently compromise their safety measures, raising concerns about the potential misuse of these models.

 

The research highlights the complex challenges enterprises face as they seek to balance LLM customization and safety. As a significant portion of the market shifts towards fine-tuning specialized models, ensuring their ethical and safe use becomes paramount.

 

The study examines scenarios in which the safety measures of LLMs can be compromised during fine-tuning, and it identifies a potential vulnerability in the form of "data poisoning": malicious actors can add harmful examples to the training dataset, which could go unnoticed and ultimately undermine the models' safety behaviour.

 

Another concerning finding is the "identity shifting attack," in which LLMs are fine-tuned to be obedient to any instruction, posing a significant risk. Even without malicious intent, fine-tuning with benign datasets can erode safety alignment, affecting real applications.

 

To address these concerns, the researchers propose several measures to maintain model safety during fine-tuning. These measures include robust alignment techniques during pre-training, enhanced moderation measures for fine-tuning data, and safety alignment examples in the fine-tuning dataset. Safety auditing practices are also recommended for fine-tuned models.
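
As an illustration of what the data-side measures might look like in practice, here is a minimal, hypothetical sketch in Python (not from the paper): customer-supplied examples are screened by a simple moderation check, and safety-alignment examples are mixed into the final fine-tuning set at a fixed ratio. The keyword blocklist and function names are stand-ins; a production system would use a proper moderation model.

```python
from dataclasses import dataclass
import random

@dataclass
class Example:
    prompt: str
    response: str

# Crude keyword screen standing in for a real moderation model (illustrative only).
BLOCKLIST = {"build a weapon", "steal credentials"}

def passes_moderation(ex: Example) -> bool:
    text = f"{ex.prompt} {ex.response}".lower()
    return not any(term in text for term in BLOCKLIST)

def build_finetuning_set(customer_examples, safety_examples, safety_ratio=0.1, seed=0):
    """Filter customer data, then mix in safety-alignment examples so they
    make up roughly `safety_ratio` of the final fine-tuning set."""
    clean = [ex for ex in customer_examples if passes_moderation(ex)]
    rng = random.Random(seed)
    n_safety = max(1, int(safety_ratio * len(clean)))
    mixed = clean + rng.choices(safety_examples, k=n_safety)
    rng.shuffle(mixed)
    return mixed
```

The study's finding that even benign datasets can erode alignment suggests the mixing step matters even when the moderation filter finds nothing to remove.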

 

These findings have significant implications for the market of fine-tuned LLMs, emphasizing the need for added safety measures and responsible practices in the fine-tuning process. It's a pivotal moment in the evolution of AI, as stakeholders grapple with the intricate balance between customization and model safety.

 

Source: VentureBeat

Weak Guardrails for AI Systems Raise Concerns: New Research Highlights Safety Risks

A recent paper authored by researchers from Princeton, Virginia Tech, Stanford, and IBM has shed light on the potential vulnerabilities in AI systems, such as OpenAI's ChatGPT. This research underscores the challenges of maintaining robust safety measures in increasingly complex AI models, urging companies to reevaluate the adequacy of their guardrails.

 

AI developers initially added digital guardrails to systems like ChatGPT to prevent harmful outputs, such as hate speech or disinformation. However, the research suggests that these guardrails are not as strong as developers might have believed.

The findings emphasize a growing concern within the tech industry, where the rush to harness AI's capabilities has sometimes overshadowed the risks associated with AI misuse. As AI chatbots become more complex and versatile, maintaining strict behaviour control becomes a challenging task.

 

OpenAI's approach of keeping its AI code private, in contrast to rivals like Meta (formerly Facebook) that share their code openly, adds another layer to this debate. While open-source systems provide transparency, they can also expose vulnerabilities. OpenAI, for its part, faces the dilemma of balancing safety and customer demand.

 

The researchers discovered that fine-tuning AI models, a service OpenAI offers to outside businesses and developers, could significantly weaken the guardrails, even when used for benign purposes. They warned that customizing AI models for specific tasks opens up new safety issues that must be addressed.

 

The AI community faces a broader challenge: addressing safety issues while still allowing customization and access to the technology. Companies like OpenAI may need to restrict the types of data used for fine-tuning to enhance safety, but without compromising the usefulness of their models.

 

Source: The New York Times

Walmart's Commitment to Ethical AI: The Responsible AI Pledge

Walmart, a prominent player in the retail industry, is reaffirming its commitment to ethical AI by introducing the "Walmart Responsible AI Pledge." In an ever-evolving landscape where AI plays a pivotal role in their operations, Walmart places importance on ensuring that customers, members, and associates are comfortable and confident with how technology is harnessed.

 

The pledge centers around six key commitments:

  1. Transparency: Walmart aims to clarify how data and AI technology are utilized, setting clear goals for their application.
  2. Security: The company is dedicated to safeguarding data through advanced security measures and continuous review.
  3. Privacy: Walmart pledges to use AI systems that prioritize the protection of sensitive and confidential information.
  4. Fairness: A commitment to addressing bias in AI tools that could impact the lives of their stakeholders, with regular evaluations for mitigation.
  5. Accountability: Walmart emphasizes human oversight of AI systems and takes responsibility for their impact.
  6. Customer-centricity: The company promises to ensure AI technology enhances customer satisfaction, accuracy, and relevance through continual reviews.

The Responsible AI Pledge is a platform for Walmart to engage directly with its customers, members, and associates. It serves as a testament to their transparency and dedication to addressing concerns associated with rapid technological advancement. Beyond AI, Walmart's commitment underscores their mission to use technology safely and beneficially for all stakeholders. By leading in this ethical AI initiative, Walmart aims to set a benchmark for responsible AI adoption within the retail industry.

 

Source: Walmart

State of AI 2023: The Reign of Large Language Models and the Battle for Ethical AI Dominance

Large language models (LLMs), led by OpenAI's GPT-4, have dominated the AI landscape in 2023, leaving a significant impact on research, industry dynamics, and geopolitics. Despite the prior trend towards decentralization in AI research, big tech companies have made a resurgence, wielding immense compute power and shaping the field.

 

This resurgence has triggered a complex debate about openness and safety in AI. Some labs have stopped publishing technical reports on LLMs, and traditional norms of openness are under pressure. The report highlights how Meta AI has emerged as a champion of open(ish) AI with their LLaMa model family, providing a publicly accessible alternative.

 

Meanwhile, discussions on AI governance and existential risks have taken center stage, prompting governments and regulators worldwide to take notice. The report explores how governance models require cooperation among geopolitical rivals, even in the midst of chip wars.

 

However, the report isn't limited to LLMs. It delves into progress in other areas of AI, from advances in navigation and weather predictions to self-driving cars and music generation. It's a comprehensive overview for everyone from AI researchers to policymakers.

 

Key takeaways from the report include the dominance of GPT-4, ongoing efforts to clone or surpass proprietary performance, the role of LLMs in real-world breakthroughs, the significance of compute in the AI landscape, the rise of generative AI startups, the growing focus on AI safety, and the challenges in evaluating state-of-the-art models.

 

Source: State of AI Report