Armilla Review: Advancing AI Ethics and Governance

Welcome to your weekly review. Last week, Armilla was pleased to announce new funding, along with new investors and partners supporting our mission to set a new standard for responsible AI adoption through a third-party warranty for trustworthy AI. With increasing regulatory scrutiny and growing industry awareness of AI risks, the timing couldn’t be better.

Here’s what caught our attention in the AI industry: The EU approved its landmark AI Act just as state-led efforts in the United States picked up pace. America’s insurance regulators are pushing for responsible AI use in insurance, setting new guidance and requirements for governance and risk management, including fairness assessments of third-party AI underwriting and claims solutions. Meanwhile, Arizona proposed legislation to combat deepfakes as the U.S. and Big Tech brace for misinformation risks ahead of this year’s election. In the airline industry, Air Canada was found liable for a chatbot’s misleading advice. In Big Tech, new releases paved the way for increasingly sophisticated multi-modal applications: Google unveiled Gemini 1.5, with extended context understanding, and OpenAI introduced Sora, the company’s groundbreaking text-to-video AI model.
February 21, 2024
5 min read

State Regulators Pave the Way for Responsible AI Use in Insurance

The National Association of Insurance Commissioners (NAIC) highlights the importance of responsible AI use in the insurance industry, emphasizing the need to mitigate potential harms such as unfair discrimination and inaccurate outcomes. The NAIC's model bulletin provides guidance for insurers on implementing internal controls and governance for AI systems. While not legally binding, the bulletin signals a shift towards greater regulatory scrutiny and encourages insurers to proactively address AI-related risks. Industry experts stress the importance of transparency, accuracy, and fairness in AI-driven decision-making processes to ensure regulatory compliance and maintain public trust.

Source: Insurance Business

NAIC MODEL BULLETIN: Use of Artificial Intelligence Systems by Insurers

Armilla AI: Pioneering Warranties to Ensure Trust in Third-Party AI Models

Armilla AI addresses the trust and risk concerns associated with third-party AI models by offering warranties on their quality. With a focus on assessing models for issues such as bias and toxicity, as well as copyright compliance, Armilla provides reassurance to enterprises adopting AI technology. Backed by carriers like Swiss Re, Chaucer and Greenlight Re, Armilla has seen rapid growth since its launch, attracting clients from various sectors. Armilla's unique approach and recent funding indicate its potential to shape the future of AI risk management and insurance.

Source: TechCrunch

EU Lawmakers Approve Landmark AI Legislation, Setting Ground Rules for Technology

The European Parliament's key committees have ratified a provisional agreement on the AI Act, paving the way for the world's first legislation on artificial intelligence. This legislation aims to regulate AI's use across various industries and in security and police applications. While EU countries have given their backing, concerns from Big Tech persist regarding the law's impact on innovation and the vagueness of certain requirements.

Source: Reuters

Arizona Proposes Legislation Addressing Deepfake Threats in Political Campaigns

Arizona lawmakers are considering House Bill 2394, which aims to combat the rising threat of deepfakes in election campaigns by enabling legal action against digital impersonations produced without consent. Karthik Ramakrishnan, CEO of Armilla AI, advocates for extending existing laws to cover the risks associated with AI-generated content. Despite concerns about AI's potential for harm, Ramakrishnan suggests that AI can also be used constructively in political campaigns, while emphasizing the importance of transparency and consumer vigilance in evaluating the authenticity of information.

Source: Public News Wire

Tech Giants Unite to Combat AI Deepfakes Threatening Electoral Integrity

Major technology companies, including Adobe, Google, Meta, Microsoft, OpenAI, and TikTok, are set to announce plans at the Munich Security Conference to collaborate on tackling deceptive artificial intelligence election content, particularly deepfakes. The proposed Tech Accord involves creating tools such as watermarks and detection techniques to identify and debunk deepfake images and audio of public figures. While efforts are underway to address the deepfake challenge, critics argue that regulatory oversight is essential to ensure accountability in combating disinformation on social media platforms.

Source: POLITICO

The Risks of Quietly Changing Privacy Policies in the AI Era

As AI companies seek more user data for product development, they face a dilemma between business incentives and privacy commitments. Some may opt to surreptitiously alter privacy policies to permit broader data use, risking legal repercussions for deceptive practices. The FTC warns against unilateral changes to privacy commitments, highlighting past cases where companies faced charges for retroactively expanding data-sharing practices without user consent. Amidst evolving technological landscapes, maintaining transparency and honoring privacy commitments are crucial to avoid legal action and maintain consumer trust.

Source: Federal Trade Commission

Navigating the Landscape of AI Governance: Insights and Challenges

Organizations are increasingly adopting AI governance frameworks to address the unique challenges posed by AI technology. While nearly half of surveyed organizations have already implemented such frameworks, skill gaps and unclear business impacts remain common challenges. A lack of AI governance can lead to increased costs and failed initiatives, underscoring the importance of clear policies and procedures. Despite these challenges, properly governed AI technology offers opportunities for improved data extraction, compliance, customer confidence, and business innovation.

Source: Gartner

Air Canada Held Liable for Chatbot's Misleading Advice on Plane Tickets

Air Canada has been ordered by a small claims court in British Columbia to compensate a customer who was misled by its chatbot into purchasing full-price flight tickets instead of bereavement-rate tickets after their grandmother's death. The airline's attempt to distance itself from the chatbot's actions was dismissed by the court, emphasizing that Air Canada is responsible for all information on its website. This case marks one of the first instances of legal action resulting from misleading advice provided by a chatbot in Canada.

Source: CBC

Google Unveils Gemini 1.5: Next-Gen AI Model with Extended Context Understanding

Google introduces Gemini 1.5, a significant advancement in AI capability, boasting enhanced performance and an innovative long-context understanding feature. Sundar Pichai highlights the progress in AI safety and the launch of Gemini 1.5 Pro, offering a wider context window for developers. Demis Hassabis details the architecture and capabilities of Gemini 1.5 Pro, emphasizing its efficiency, multimodal understanding, and problem-solving abilities across various domains. The model's extensive ethics and safety testing align with Google's commitment to responsible AI deployment, with a limited preview available for developers and enterprise customers.

Source: Google

Introducing Sora: OpenAI's Text-to-Video AI Model

OpenAI unveiled Sora, an advanced AI model capable of generating realistic videos based on text instructions. Sora can create videos up to a minute long with high visual quality and fidelity to user prompts, catering to a wide range of creative professionals. While the model demonstrates impressive capabilities in understanding language and simulating complex scenes, it also faces challenges such as accurately simulating physics and handling spatial details. OpenAI emphasizes safety measures, including adversarial testing and content detection tools, to mitigate potential risks associated with Sora's deployment. Powered by diffusion model architecture and transformer technology, Sora represents a significant advancement in AI research towards achieving Artificial General Intelligence (AGI).

Source: OpenAI