Armilla Review - Growth at a Cost: AI Security, Ethics, and Regulation

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia tailored to the interests of our community, with a focus on AI evaluation, assurance, and risk.
October 11, 2023
5 min read

In this issue, you'll find:

  • National Security Agency Launches Artificial Intelligence Security Center
  • Insuring Against the Risks of Generative AI: A Growing Opportunity
  • AI Image Generators Reinforce Stereotypes in Global Health Imagery
  • Navigating the Ethical Maze: Generative AI's Role in Recruitment
  • Senator Mark Warner Advocates Incremental Approach to AI Regulation
  • Overcoming Challenges in Evaluating AI Systems: Insights and Policy Recommendations
  • CEOs' Push for Generative AI Puts Pressure on CIOs to Deliver Results
  • Gmail to Introduce Stricter Rules to Combat Spam
  • AI Startup Anthropic Seeks $2 Billion Funding After Amazon Investment

National Security Agency Launches Artificial Intelligence Security Center

The National Security Agency (NSA) is launching an Artificial Intelligence Security Center to address the growing importance of AI capabilities within U.S. defense and intelligence systems. Outgoing NSA director Army Gen. Paul Nakasone made the announcement, stressing the critical need to safeguard AI technologies as their adoption and development accelerate. The centre will be housed within the NSA's Cybersecurity Collaboration Center and will focus on hardening the U.S. defense-industrial base against threats, particularly from China and Russia. Gen. Nakasone emphasized the need to maintain the U.S. advantage in AI and noted that the agency has so far seen no evidence of foreign interference in the 2024 U.S. presidential election. The centre's mission includes leveraging foreign intelligence insights, promoting secure AI development, and collaborating with industry, national labs, academia, and international partners.

Source: The Associated Press

Insuring Against the Risks of Generative AI: A Growing Opportunity

As businesses increasingly adopt generative AI, the risks associated with AI model failures have led insurance companies to explore opportunities in this emerging field. Drawing parallels with cybersecurity insurance, insurers are beginning to offer financial protection against AI models that go awry. These policies aim to address the AI risk-management concerns voiced by corporate technology leaders, board members, CEOs, and legal departments.

While the market is still in its early stages, there is a growing appetite for AI insurance, with major carriers considering specialized coverage for financial losses stemming from AI and generative AI-related issues. These issues encompass cybersecurity threats, potential copyright infringement, biased or inaccurate outputs, misinformation, and proprietary data leaks.

Experts predict that a substantial portion of large enterprises may invest in AI insurance policies as they become available. Munich Re and Armilla Assurance have already entered this space, offering coverage for AI services and warranties for AI models' performance, respectively. Major technology companies, including IBM, Adobe, and Microsoft, are also addressing AI-related risks by providing intellectual property protections and indemnification options.

As generative AI becomes more integral to business operations, insurance policies covering potential AI-related losses are expected to become increasingly important, with the opportunity for insurers to capture significant market share. However, technology leaders stress that insurance should complement other risk management strategies, including robust cybersecurity practices and the use of security tools and technologies to mitigate AI-related risks effectively.

Listen to: The Future of AI Insurance: An interview with Karthik Ramakrishnan, CEO & Co-Founder of Armilla Assurance

Source: The Wall Street Journal

AI Image Generators Reinforce Stereotypes in Global Health Imagery

In an experiment designed to challenge stereotypes in global health imagery, a researcher asked an artificial intelligence (AI) program to generate images of Black African doctors treating white children. Despite these explicit instructions, the program consistently depicted the children as Black, and in only 22 of more than 350 images did it render the doctors as white.

The study, conducted by Arsenii Alenichev, a social scientist and postdoctoral fellow, examined how AI image generators handle requests that defy the traditional "white saviors" trope of global health narratives, in which white providers aid suffering Black children. Refining the prompts to specify Black African doctors providing food, vaccines, or medicine to white children still produced racially skewed results. The problem extends beyond this experiment: the study also found that some global health organizations are already using AI-generated images, potentially propagating the same stereotypes.

AI image-generation programs like Midjourney, which was used in the study, are built on existing databases of photos and user-supplied descriptions, making them vulnerable to whatever biases those databases contain.

Efforts to address AI biases in image generation are essential, as these images hold significant influence in shaping perceptions and narratives. The study underscores the importance of considering the responsible use of AI technology and the need for accountability when AI reinforces biases. It also calls for a broader conversation within the global health community about challenging biased images generated by AI.

Source: NPR

Navigating the Ethical Maze: Generative AI's Role in Recruitment

The rise of generative AI in workplaces is transforming recruitment and talent management. While the technology promises to streamline processes and reduce bias, ethical concerns and transparency challenges must be addressed. Generative AI's ability to analyze job applications and make predictions has made it a valuable tool in HR, but it can also inherit and perpetuate the biases of its human creators. Amazon's AI-driven hiring model is the cautionary tale here, illustrating the risk of favouring certain candidates based on historical imbalances in the training data.

The article emphasizes the ethical implications of integrating generative AI into HR practices and the need for vigilance and transparency. Employers must openly communicate their use of AI in candidate assessment, disclose the AI tools employed, and comply with data privacy rules. Despite the potential benefits, a balanced approach that weighs innovation, ethics, and long-term impact is essential when harnessing generative AI in recruitment.

Source: Forbes

Senator Mark Warner Advocates Incremental Approach to AI Regulation

Senator Mark Warner (D-Va.), a prominent figure in tech legislation, suggests a measured and focused approach to regulating artificial intelligence (AI) rather than pursuing comprehensive legislation. Acknowledging the challenges faced in regulating technology, he emphasizes the need to avoid overreaching and instead concentrate on addressing immediate AI-related issues. Warner advocates tackling specific concerns, such as the potential use of AI-generated deepfakes to disrupt elections and financial markets, as a starting point for regulation. He also considers addressing issues of bias and labeling for AI-generated content. While acknowledging that a sharply divided Congress may pose challenges to passing targeted AI regulations, he believes it is a more practical approach than sweeping legislation. Warner also highlights the importance of addressing national security concerns related to AI technology, particularly those involving China. He suggests that showing progress in AI regulation is essential, given the past shortcomings in addressing issues related to social media and technology.

Source: POLITICO

Overcoming Challenges in Evaluating AI Systems: Insights and Policy Recommendations

This article highlights the complexities involved in evaluating AI systems, shedding light on the difficulties organizations like Anthropic face when assessing their own models. The challenges span multiple-choice benchmarks, third-party evaluation frameworks, crowdworker and domain-expert assessments, and model-generated evaluations. The article underscores the need for robust, reliable evaluation methods to ensure AI systems are safe and effective. It also offers policy recommendations, including funding for research on AI evaluations, legal safe harbours for national security risk evaluations, and the establishment of an AI safety leaderboard to incentivize compliance with evaluation standards.
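For readers unfamiliar with the first of these methods: a multiple-choice evaluation presents a model with a question and lettered options, then scores the model's reply against an answer key. The sketch below is a minimal, self-contained illustration of that pattern, not Anthropic's actual harness; the ask_model function is a hypothetical stand-in for a real model API call, and the two questions are invented examples.

```python
# Minimal multiple-choice evaluation loop (illustrative sketch only).
# `ask_model` is a hypothetical stand-in for a real model API call.

QUESTIONS = [
    {
        "question": "Which protocol guarantees ordered, reliable delivery?",
        "choices": {"A": "UDP", "B": "TCP", "C": "ICMP", "D": "ARP"},
        "answer": "B",
    },
    {
        "question": "What does DNS primarily resolve?",
        "choices": {"A": "IPs to MAC addresses", "B": "Hostnames to IP addresses",
                    "C": "Ports to services", "D": "URLs to certificates"},
        "answer": "B",
    },
]

def ask_model(prompt: str) -> str:
    """Hypothetical model call; a real harness would query an LLM API here."""
    return "B"  # fixed stub answer so the script runs end to end

def evaluate(questions) -> float:
    """Score the model's first emitted letter against the answer key."""
    correct = 0
    for item in questions:
        options = "\n".join(f"{k}. {v}" for k, v in item["choices"].items())
        prompt = f"{item['question']}\n{options}\nAnswer with a single letter."
        reply = ask_model(prompt).strip().upper()
        correct += reply[:1] == item["answer"]
    return correct / len(questions)

if __name__ == "__main__":
    print(f"Accuracy: {evaluate(QUESTIONS):.0%}")
```

Even this toy version hints at the fragility the article describes: scoring hinges on the model emitting exactly one parseable letter, and small changes to prompt formatting can shift measured accuracy, which is one reason simple benchmarks are harder to run reliably than they look.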

Source: Anthropic

CEOs' Push for Generative AI Puts Pressure on CIOs to Deliver Results

The adoption of generative AI in corporate settings is gaining momentum, driven primarily by CEOs rather than technologists. The shift has channeled more resources into generative AI initiatives, but it also places significant pressure on Chief Information Officers (CIOs), who must bridge the gap between business expectations and technological reality. Driven by a sense of urgency to stay competitive, CIOs are seeking rapid proofs of concept to validate generative AI's real impact.

Despite the excitement, skepticism remains: the technology's business value has yet to be proven at scale. Leading technology companies such as Microsoft, Google, Amazon, Apple, and Meta Platforms have invested heavily in generative AI, making it a prominent fixture in the IT departments of large organizations. CIOs are tasked with ensuring that enthusiasm does not lead to reckless implementation, given the technology's ongoing development and pitfalls such as content hallucination. Companies like Abbott are piloting generative AI in areas such as productivity and marketing automation, with an emphasis on security and data privacy. For now, according to industry experts, most companies are seeing only incremental productivity gains.

Source: The Wall Street Journal

Gmail to Introduce Stricter Rules to Combat Spam

Google is implementing significant changes to its email handling policies to reduce spam and unwanted email. Starting in 2024, bulk senders, defined as those who send more than 5,000 messages to Gmail addresses in a day, will need to authenticate their emails, provide an easy unsubscribe option, and stay under a clear spam-rate threshold. Google already uses AI to block spam, phishing, and malware, but these updates aim to further bolster the security of its email system. Bulk senders will be required to strongly authenticate their mail following established best practices such as SPF, DKIM, and DMARC. They must also let users unsubscribe with a single click and process those requests within two days. Senders whose messages are frequently marked as spam risk losing access to Gmail inboxes. Google is collaborating with industry partners on the rollout, with Yahoo among the early adopters of similar policies.
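For illustration: the one-click unsubscribe requirement corresponds to the List-Unsubscribe and List-Unsubscribe-Post headers defined in RFC 8058, while sender authentication is configured in DNS (SPF, DKIM, and DMARC records) rather than in the message itself. The sketch below, built with Python's standard email library, shows roughly what a compliant bulk message might carry; the addresses and unsubscribe URL are placeholders, not real endpoints.

```python
# Sketch of a bulk message carrying RFC 8058 one-click unsubscribe headers.
# Addresses and the unsubscribe URL are placeholders, not real endpoints.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "news@example.com"   # sending domain should publish SPF, DKIM, and DMARC records
msg["To"] = "subscriber@gmail.com"
msg["Subject"] = "October newsletter"

# RFC 8058: an HTTPS POST to this URL must unsubscribe the recipient in one
# step, with no further confirmation page.
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe?uid=12345>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"

msg.set_content("Hello! ...\n\nTo stop receiving these emails, use the unsubscribe link.")

print(msg)  # a real sender would hand this to an authenticated SMTP relay
```

The DKIM signature itself is normally added by the sending infrastructure at delivery time, so the message a developer constructs only needs the unsubscribe headers; the DNS records and signing keys live with the domain's mail configuration.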

Source: TechCrunch

AI Startup Anthropic Seeks $2 Billion Funding After Amazon Investment

Artificial intelligence startup Anthropic is reportedly in early discussions to raise $2 billion in fresh funding, shortly after Amazon committed to invest up to $4 billion in the company. While the deal has not been finalized, it reflects the competitive scramble among large tech companies to secure their positions in the AI future. Google earlier invested $450 million in Anthropic, and Amazon has pledged to become the startup's primary cloud computing provider as well as a supplier of chips. Microsoft, meanwhile, has committed over $10 billion to OpenAI, Anthropic's rival in the generative AI space.

Anthropic's exact valuation has not been set, but reports suggest the deal could value the company at around $30 billion. The fundraising is part of a broader trend in which startups, particularly those building foundational AI models, seek substantial financial backing to cover the high operational costs of maintaining these models.

Corporate investors that have backed Anthropic are eager to continue their support, but the deal may require participation from a financial investor, such as a Wall Street firm or a prominent venture capital entity, to establish the company's valuation and lead the round. Anthropic, founded in 2021 by former OpenAI employees, distinguishes itself by emphasizing responsible and safe AI; its chatbot, Claude, is designed to be user-friendly and capable of a wide range of writing tasks.

Source: BNN Bloomberg