Armilla Review - AI Ethics Concerns Emerge Across Tech Landscape: Deepfakes, Bias, and Regulation

Welcome to your weekly review. Prominent figures in the AI community have united to advocate for strict regulations against deepfakes, emphasizing the urgent need to criminalize their creation and dissemination. Meanwhile, Google faces backlash over its Gemini AI tool's portrayal of historical figures, highlighting concerns about accuracy and representation. Workday is embroiled in a legal battle over alleged discrimination in its AI-powered hiring platform, raising questions about bias in AI tools. A study has revealed concerning findings about the inability of LLMs, like ChatGPT, to substantiate medical claims with reliable sources. Amid these challenges, Microsoft has forged a partnership with Mistral to expand its AI offerings, putting the company at the centre of emerging tensions between open- and closed-source foundation models.
February 28, 2024
5 min read

AI Leaders Call for Action Against Deepfakes: A Letter of Concern

In response to the rising threat of AI-generated deepfakes, over 500 prominent figures from the AI community, including Yoshua Bengio, Frances Haugen, and Stuart Russell, have signed an open letter advocating for stringent regulations. The letter emphasizes the urgent need to criminalize the creation and dissemination of harmful deepfakes, especially those involving child sexual abuse material, and urges developers to implement effective preventive measures. Despite past debates and proposals, the letter underscores the ongoing challenges in curbing deepfake proliferation and their potential impact on online safety and democratic processes. While legislative action remains uncertain, the letter serves as a significant demonstration of consensus within the global AI community.

Source: TechCrunch

India Challenges Google Over Gemini AI's Political Responses

A dispute has arisen between India and Google over responses generated by the Gemini AI tool, particularly a response in which Prime Minister Narendra Modi was characterized as "fascist." The confrontation stems from concerns over the tool's reliability and its adherence to Indian IT laws. While Google acknowledges Gemini's limitations in responding to certain prompts, the incident underscores ongoing tensions between tech companies and the Indian government over content moderation and freedom of expression.

Source: The Guardian

Allegations of Bias: Workday Faces Legal Battle Over AI Hiring Tools

Workday is embroiled in a lawsuit alleging that its AI-powered hiring platform discriminates against job applicants based on race, age, and disability, potentially affecting the many companies it contracts with. Derek Mobley, the plaintiff, contends that Workday's algorithmic decision-making is unregulated and contributes to discriminatory hiring practices. Despite Workday's denial of wrongdoing and its assertion that it complies with applicable laws, concerns persist about biases embedded in AI hiring tools, underscoring the complexity of litigating such novel issues in employment law.

Source: Reuters

Unveiling the Risks: Assessing the Inaccuracy of AI-Generated Medical References

A study reveals concerning findings about the inability of large language models (LLMs), like ChatGPT, to substantiate medical claims with reliable sources. Despite their growing integration into medical practice, questions remain about the safety and effectiveness of generative AI (GenAI), as highlighted by FDA Commissioner Robert Califf's acknowledgment of regulatory struggles. The study's evaluation shows that LLMs perform poorly at backing their medical claims with verifiable references, particularly when responding to lay inquiries, with critical implications for patient safety and the dissemination of healthcare knowledge. As the debate over GenAI regulation intensifies, further research and domain-specific adaptations are needed to address these significant challenges in healthcare practice.

Source: Stanford

Google's AI Generates Controversy: Apology Issued for Racially Diverse Historical Figures

Google has issued an apology for inaccuracies in historical image generation produced by its Gemini AI tool, acknowledging criticism that it depicted specific white historical figures or groups, such as Nazi-era German soldiers, as people of colour. The controversy arises from concerns that Google's attempt at promoting racial and gender diversity in AI-generated images may have missed the mark, possibly as an overcorrection to address longstanding biases. While some defend the initiative as important for representation, others argue that the inaccuracies risk erasing real historical contexts of race and gender discrimination.

Source: The Verge

Consultant Defends Fake Biden Robocalls as Wake-Up Call for AI Regulation

Steve Kramer, the political consultant behind the fake Biden robocalls in New Hampshire, claims he was aiming to highlight the need for AI regulation rather than to influence the primary. Despite facing investigation for potential voter suppression, Kramer insists his actions were a deliberate attempt to raise awareness of the dangers AI poses to political campaigns. His stunt has sparked discussions about the misuse of AI in elections and prompted calls for stricter regulation from both federal authorities and major tech companies.

Source: City News

Microsoft Partners with Mistral to Expand AI Offerings Amid Regulatory Scrutiny

Microsoft has announced a partnership with French AI startup Mistral, aiming to broaden its presence in the AI industry beyond its alliance with OpenAI. The multiyear partnership includes investment from Microsoft and collaboration on research and development to address public sector needs in Europe. This move comes as regulators review Microsoft's $13 billion investment in OpenAI, signalling the tech giant's commitment to fostering innovation and competition in the AI economy.

Source: Financial Times

Introducing Mistral Large: A Cutting-Edge Language Model with Multilingual Capabilities

Mistral introduces Mistral Large, its latest advanced text-generation model, boasting top-tier reasoning abilities and multilingual proficiency in English, French, Spanish, German, and Italian, along with built-in function calling and an expanded context window. Through its partnership with Microsoft, Mistral is making its models available on Azure and La Plateforme as well as for self-deployment, with the stated aim of making frontier AI ubiquitous. Alongside the flagship model, it is releasing Mistral Small, optimized for low-latency workloads and positioned between its open-weight and flagship models. The company is also adding a JSON format mode and function calling to its platform, paving the way for more natural interactions and structured outputs.

Source: Mistral AI
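
For readers curious what the new JSON format mode looks like in practice, below is a minimal Python sketch that requests structured output from Mistral Large over the public chat-completions API. The endpoint URL, model name ("mistral-large-latest"), and "response_format" field follow Mistral's documentation at the time of the announcement and should be treated as assumptions to check against the current docs, not a definitive integration.

import os
import requests

# Minimal sketch: request JSON-formatted output from Mistral Large via the
# chat-completions endpoint. The endpoint, model name, and field names are
# assumed from Mistral's docs at launch and may change.
API_URL = "https://api.mistral.ai/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "mistral-large-latest",
    "messages": [
        {
            "role": "user",
            "content": "Summarize this headline as a JSON object with keys "
                       "'company' and 'topic': 'Microsoft partners with "
                       "Mistral to expand AI offerings.'",
        }
    ],
    # JSON format mode: constrains the reply to syntactically valid JSON.
    "response_format": {"type": "json_object"},
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Function calling follows the same request pattern: the caller supplies a list of function schemas the model may invoke, and the model replies with structured arguments for the caller to execute, which is what enables the structured outputs described above.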

Google Introduces Gemma: Open-Source AI Models to Compete with Meta and Others

Google has unveiled Gemma, a family of free open-source AI models aimed at competing with offerings from Meta, Mistral, and Hugging Face. This move marks a shift in Google's approach, previously focused on proprietary models, and reflects the growing popularity of open-source AI among programmers and companies seeking more control over costs. While Gemma models are designed with safety measures, concerns linger about potential misuse, prompting Google to emphasize responsible usage guidelines and safety filters.

Source: Fortune