The Biden administration is considering new "know your customer" regulations that would require U.S. cloud computing companies to identify and verify foreign entities accessing U.S. data centers to train AI models. Commerce Secretary Gina Raimondo expressed concern about potential malicious activity and emphasized the need to prevent non-state actors and China from using U.S. cloud resources for AI development. The proposed rules aim to strengthen controls on AI technology transfers and are part of broader efforts to keep U.S. technology from being used for military advancement by China. Companies that fail to comply may face scrutiny, and the proposal is seen as a significant move in the ongoing effort to control AI-related national security risks.
Source: Reuters
The Federal Trade Commission (FTC) has initiated a 6(b) inquiry into generative AI investments and partnerships by issuing orders to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI. The investigation aims to understand the impact of corporate collaborations on the competitive landscape and innovation in the AI sector. FTC Chair Lina Khan emphasizes the need to prevent tactics that may hinder healthy competition and distort innovation in the rapidly evolving AI market. The companies involved in multi-billion-dollar investments, including Microsoft and OpenAI, Amazon and Anthropic, and Google and Anthropic, will be required to provide information on specific agreements, strategic rationale, competitive implications, and more within 45 days.
Source: Federal Trade Commission
OpenAI and Google, among other AI companies, will soon be required to notify the US government about the development of foundation models, such as OpenAI's GPT-4 and Google's Gemini, under the Defense Production Act. The requirement, part of President Biden's AI executive order, mandates that companies share safety data whenever they train a new large language model that poses a serious risk to national security. The focus is on future foundation models trained with unprecedented computing power, highlighting potential national security concerns.
Source: Mashable
The National Science Foundation (NSF) has launched the National Artificial Intelligence Research Resource (NAIRR) pilot program, collaborating with federal agencies, private-sector organizations, and nonprofits to provide computational resources, datasets, and AI tools to academic researchers, addressing the growing disparity between industry and academia in access to AI inputs. The pilot includes contributions from companies such as Nvidia, Microsoft, OpenAI, Anthropic, and Meta, and aims to democratize access to the expensive infrastructure required for cutting-edge AI research. The initiative comes at a crucial moment: industry dominance in data, computation, and algorithm design has left academic researchers behind, limiting exploration of crucial research directions and scientific breakthroughs. While the pilot is a positive step, experts emphasize the need for sustained government investment, proposed legislation such as the CREATE AI Act, and additional measures to expand government access to computing power.
Source: TIME
The Department of Defense's Chief Digital and Artificial Intelligence Office (CDAO) has initiated the first of two AI Bias Bounty exercises, focusing on detecting biases in AI systems, particularly Large Language Models (LLMs). The goal is to identify unknown areas of risk in open source chatbots, encouraging public participation to earn monetary bounties. The exercises, conducted in collaboration with ConductorAI-Bugcrowd and BiasBounty.AI, aim to experiment with algorithmic auditing and red teaming of AI models to ensure unbiased and secure deployment. The outcomes may influence future DoD AI policies and adoption, emphasizing the commitment to safe and reliable AI systems.
Source: US Department of Defense
The FBI is set to utilize Amazon's Rekognition cloud service, codenamed Project Tyr, to analyze lawfully acquired images and videos for items containing nudity, weapons, explosives, and other identifying information. The initiative, listed in the Department of Justice's AI Use Cases Inventory, is currently in the initiation phase. Despite previous concerns and pledges, Amazon's Rekognition, known for its facial recognition capabilities, will be employed by the FBI for this specific purpose, raising questions about the responsible use of such technology in law enforcement.
Source: The Register
Microsoft's latest Future of Work Report delves into the impact of AI, particularly Large Language Models (LLMs), on various aspects of work practices. The report covers topics such as the influence of LLMs on information work tasks, critical thinking, human-AI collaboration, and their application in complex and creative tasks across different domains. Additionally, the report addresses the role of LLMs in team collaboration, knowledge management, organizational changes, and highlights the implications of AI for the future of work and society. It emphasizes the need for careful evaluation, effective collaboration strategies, and proactive measures to shape AI's impact on work.
Source: Microsoft
Hugging Face has announced a strategic partnership with Google Cloud to foster open collaboration in the field of artificial intelligence (AI). The collaboration spans open science, open source, cloud, and hardware, aiming to empower companies to build their own AI using the latest models and features. The partnership will enhance accessibility to AI research and innovations, leveraging Google's contributions to open AI research and open source tools. Google Cloud customers will gain new experiences for training and deploying Hugging Face models within Google Kubernetes Engine (GKE) and Vertex AI, while Hugging Face Hub users will benefit from collaborative efforts in open science, open source, and Google Cloud throughout 2024.
Source: Hugging Face
Apple is incorporating major artificial intelligence features into iOS 18, as suggested by code found in the first beta of iOS 17.4. The code reveals Apple's use of OpenAI's ChatGPT API for internal testing, specifically in the development of a new version of Siri powered by large language model technology. The private SiriSummarization framework in iOS 17.4 makes calls to the ChatGPT API, indicating its role in internal testing of new AI features. Apple is concurrently developing its own AI models, such as "Ajax," and comparing their results with external models like ChatGPT and FLAN-T5, showcasing the company's commitment to integrating large language models into iOS.
Source: 9to5Mac