Armilla Review - Recent Developments in AI: EU Regulation, US Policy, FCC's Anti-Robocall Initiative, Security Concerns, and Enterprise Adoption

Welcome to your weekly review. This week's newsletter covers how lawmakers in both the EU and the US have made further progress on addressing AI risks and harms, just as security and misinformation concerns loom over America's general election. In a groundbreaking move, European Union member countries unanimously agreed on the Artificial Intelligence Act, establishing the first binding rules for AI technology. Closer to home, the New York Department of Financial Services has proposed AI rules for insurers. The Biden-Harris Administration has unveiled significant actions in response to its executive order on AI, focusing on safety, security, privacy, equity, and innovation. The Federal Communications Commission (FCC) is proposing the criminalization of AI-generated robocalls following a fake Biden message incident. One article examines the challenges of regulating general-purpose AI, emphasizing transparency and collaboration between tech companies and regulators. Meanwhile, a finance worker's $25 million loss to a deepfake "CFO" underscores the growing threat of AI-enabled fraud, and OpenAI addresses security concerns after an account takeover. European startups face dilemmas in monetizing open-source generative AI models, and a final story explores real-world enterprise deployments of open-source large language models (LLMs).
February 7, 2024
5 min read

Top Story

EU Nations Seal Historic Deal on AI Regulation Despite Late Opposition

In a groundbreaking development, European Union member countries have unanimously reached a deal on the bloc's Artificial Intelligence Act, marking the first binding rules for AI technology. The law bans certain AI applications, imposes stringent limits on high-risk use cases, and places transparency and stress-testing obligations on advanced models. Initially hailed as a pioneering step, the AI Act later faced opposition from key EU economies, including Germany, France, and Austria, over concerns about stifling innovation. Diplomatic maneuvering, promises of clarification, and the creation of the EU's Artificial Intelligence Office helped resolve those objections and secure the deal. The AI Act now awaits formal approval from the European Parliament, with a plenary vote expected in April.

Source: POLITICO

Featured

New York Proposes AI Rules for Insurers: Implications, Challenges, and Future Directions

The New York Department of Financial Services has proposed rules for insurance carriers on the use of artificial intelligence (AI) and alternative data in underwriting and pricing. The rules would require insurers to establish governance protocols for AI systems, conduct fairness testing of predictive models (illustrated in the sketch below), and address concerns about systemic bias and inequality. The move mirrors similar initiatives in the European Union and Colorado and follows high-profile incidents involving AI in insurance claims processing. Armilla's Head of AI Policy, Phil Dawson, suggests these regulations are just the beginning of a broader wave of AI insurance regulation across many states, and emphasizes the need for insurers to assess third-party AI models and data sets. While AI has the potential to transform the insurance industry, regulators and industry experts remain vigilant about associated risks such as data breaches, algorithmic bias, and security vulnerabilities, urging the development of robust governance frameworks to mitigate them.

Source: InsuranceNewsNet
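
As a concrete illustration of what fairness testing of a predictive underwriting model can look like, here is a minimal Python sketch that computes a disparate impact ratio over hypothetical approval decisions. The data, the protected-group labels, and the four-fifths (80%) cutoff are illustrative assumptions, not requirements drawn from the NYDFS proposal.

```python
# A minimal sketch of one common fairness test: the disparate impact ratio.
# All data here is synthetic; the 0.8 cutoff is the informal "four-fifths
# rule" heuristic, not a threshold taken from the NYDFS proposal.
import numpy as np

def approval_rate(decisions: np.ndarray) -> float:
    """Share of applicants approved (decisions are 1 = approve, 0 = deny)."""
    return decisions.mean()

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    protected = approval_rate(decisions[group == 1])
    reference = approval_rate(decisions[group == 0])
    return protected / reference

# Hypothetical model decisions and protected-group membership labels.
rng = np.random.default_rng(0)
decisions = rng.binomial(1, 0.7, size=1_000)
group = rng.binomial(1, 0.3, size=1_000)

ratio = disparate_impact_ratio(decisions, group)
print(f"Disparate impact ratio: {ratio:.2f}", "FLAG" if ratio < 0.8 else "OK")
```

Real fairness testing would run on actual model outputs and legally relevant protected classes, and would typically examine several metrics rather than a single ratio.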

Top Articles

Biden-Harris Administration Takes Bold Steps in AI Advancement: Progress and Initiatives Unveiled Following Executive Order

The Biden-Harris Administration has announced significant actions in response to President Biden's Executive Order on AI, issued three months ago. The White House AI Council, led by Deputy Chief of Staff Bruce Reed, has been convened to oversee these efforts, drawing in officials from across federal departments and agencies. The administration reports completing the 90-day actions mandated by the Executive Order, which address AI safety, security, privacy, equity, and innovation. Measures include disclosure requirements for developers of powerful AI systems, risk assessments for critical infrastructure, and initiatives to attract and train AI talent, aimed at fostering innovation while managing potential risks.

Source: The White House

FCC Proposes Criminalization of AI-Generated Robocalls in Wake of Fake Biden Message

The Federal Communications Commission (FCC) is taking steps to criminalize most AI-generated robocalls, prompted by a recent incident involving a fake Biden robocall in New Hampshire. The proposed change aims to outlaw such calls under the Telephone Consumer Protection Act (TCPA), a 1991 law regulating unsolicited automated calls. The FCC's move follows previous high-profile prosecutions, including a $5 million penalty for false voting-related calls and a $300 million fine for auto warranty ad spam. The change, expected to be voted on in the coming weeks, is seen as empowering state attorneys general to combat AI-driven spam calls, addressing concerns about the misuse of AI-generated voices for scams and frauds.

Source: NBC

The Challenge of Regulating General-Purpose AI: The Need for Transparency and Collaboration

The article discusses the evolving landscape of AI regulation, emphasizing the challenges posed by the general-purpose nature of the latest AI models like GPT-4. These models, with their wide range of potential uses, make it difficult to predict and regulate their applications effectively. The authors argue that for meaningful AI regulation, tech companies must voluntarily share information with regulators and the public to enhance the understanding of general-purpose AI tools and their regulatory needs. The article also explores the role of government mandates and independent research in obtaining information about AI use, highlighting the crucial responsibility of tech companies in fostering transparency and responsible use of AI technology.

Source: Brookings

Finance Worker Falls Victim to $25 Million Scam: Deepfake 'Chief Financial Officer' Tricks Employee in Elaborate Scheme

In a sophisticated scam, a finance worker at a multinational firm was deceived into transferring $25 million to fraudsters who used deepfake technology to impersonate the company's chief financial officer on a video conference call, Hong Kong police revealed. The scheme featured deepfake recreations of several supposed colleagues in a multi-person video conference. The fraudsters initiated the scam with a message, purportedly from the UK-based CFO, discussing the need for a secret transaction. Hong Kong police, who have made six arrests in connection with such scams, say the case reflects growing worldwide concern about the sophisticated use of deepfake technology for fraud.

Source: CNN

OpenAI Addresses ChatGPT Security Concerns After Account Takeover Reveals Private Conversations

OpenAI has clarified that the mysterious chat histories reported by a user resulted from an account takeover, with unauthorized logins traced back to Sri Lanka. The user, Chase Whiteside, initially suspected ChatGPT of leaking private conversations; after OpenAI's investigation pointed to a compromised account, he changed his password. The leaked conversations included sensitive information such as login credentials, details of an employee troubleshooting a pharmacy prescription portal, and other private data. The incident underscores the importance of account protection measures such as two-factor authentication (sketched below) and raises broader concerns about AI chatbot security.

Source: Ars Technica
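
For readers unfamiliar with how two-factor authentication works under the hood, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps. It uses the pyotp library; the account name and issuer are hypothetical placeholders, and this is not a description of OpenAI's own implementation.

```python
# A minimal TOTP sketch using pyotp (pip install pyotp). The account name
# and issuer below are hypothetical placeholders for illustration only.
import pyotp

# Each user gets a per-account shared secret at enrollment time.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# This URI is what the QR code shown during 2FA setup encodes.
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService")
print(uri)

# At login, the server regenerates the 6-digit code from the shared secret
# and the current 30-second window, then compares it to the user's input.
submitted = totp.now()  # stand-in for the code typed from an authenticator app
print("2FA check passed:", totp.verify(submitted))
```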

Open-Source AI Models: Startups' Dilemma in Monetizing Expensive Generative AI Tech

Several well-funded European startups, including France's Mistral, the UK's Stability AI, and Germany's Aleph Alpha, initially embraced open-source models, but questions have arisen about their ability to generate sufficient revenue. While open-source strategies have succeeded in traditional software, generative AI models are different: they are built by a small pool of specialized researchers and require substantial computational power, which makes giving them away costly. Some startups, like Stability AI, have shifted to paywalls for premium models. The debate centers on whether companies prefer API access from third-party providers such as OpenAI, or value the control and transparency of open-source models (contrasted in the sketch below). The market is evolving rapidly, and the balance between open- and closed-source models remains uncertain.

Source: Sifted
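
The trade-off the article describes, the convenience of a hosted API versus control over an open-weight model, can be made concrete with a short sketch. The model names, prompt, and API key below are placeholders; running the open model locally also assumes hardware capable of loading a 7B-parameter checkpoint.

```python
# An illustrative contrast of the two consumption models the article
# describes. This is a sketch, not a benchmark of either approach.

# Option 1: hosted API -- no infrastructure to run, but the provider
# controls the model, pricing, and data handling.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # hypothetical key
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the EU AI Act in one line."}],
)
print(resp.choices[0].message.content)

# Option 2: open-weight model run locally -- full control and transparency,
# but you pay for the compute and the operational work yourself.
from transformers import pipeline

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
out = generator("Summarize the EU AI Act in one line.", max_new_tokens=60)
print(out[0]["generated_text"])
```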

Enterprise Adoption of Open-Source LLMs: Examining Real-world Deployments

Enterprises are increasingly exploring open-source large language models (LLMs) for generative AI applications, weighing their impact against closed models. While experimentation with open-source models has been extensive, only a handful of established companies have publicly disclosed deployments. Major providers such as Meta, Mistral AI, Hugging Face, Dell, Databricks, AWS, and Microsoft are actively involved in the open-source LLM landscape. Notable enterprise deployments include VMware, Brave, Gab Wireless, Wells Fargo, IBM, the Grammy Awards, the Masters Tournament, Wimbledon, the US Open, Perplexity, CyberAgent, Intuit, Walmart, Shopify, LyRise, and Niantic, highlighting diverse applications across industries.

Source: VentureBeat