The New York State Department of Financial Services (DFS) has issued a circular letter addressing the use of Artificial Intelligence Systems (AIS) and External Consumer Data and Information Sources (ECDIS) in insurance underwriting and pricing. The letter acknowledges the potential of these technologies to streamline underwriting and pricing but raises concerns about bias and unfair discrimination. It outlines the DFS's expectations for insurers, including governance and risk management frameworks, fairness principles, and transparency through disclosure and notice. The circular also clarifies consumer rights regarding accelerated underwriting processes and reminds insurers that their use of AI technologies must comply with applicable laws and regulations.
Source: New York State Department of Financial Services
The Responsible AI Institute (RAI Institute) has partnered with Armilla AI to give RAI Institute members access to Armilla AI's warranty, which is backed by global insurers and aligns Armilla's Product Assessment with the NIST AI Risk Management Framework. The collaboration aims to offer both assurance and insurance to enterprises procuring AI solutions, promoting responsible AI practices and strengthening trust in AI products amid growing concerns about AI risk and regulatory compliance.
Source: Business Insider
The New York State Office of Information Technology Services has released a policy outlining the acceptable use of Artificial Intelligence (AI) technologies by State Entities (SE). The guidelines aim to facilitate responsible AI adoption among SEs, promoting innovation while ensuring privacy, risk management, and accountability. The policy covers the authority granted by the State Technology Law, defines the scope of application, and emphasizes the importance of transparency, fairness, and human oversight in AI systems. SEs are required to conduct risk assessments, maintain an AI inventory, and comply with privacy and security standards outlined in the policy.
Source: New York State Office of Information Technology Services
Italy, as the current G7 chair, plans to address concerns about Russia's actions in Ukraine and reaffirm the West's commitment to Kyiv. Beyond geopolitical issues, Italy will make AI a priority of its G7 presidency, emphasizing its impact on jobs and inequality. Prime Minister Giorgia Meloni intends to discuss the dangers of AI at the June summit, proposing ethical guidelines and the creation of a steering committee to coordinate G7 policy on AI development. The G7 leaders broadly agree on other major issues, including policy toward China and promoting economic development in Africa.
Source: Reuters
The World Health Organization (WHO) has released a set of over 40 recommendations addressing the ethics and governance of large multi-modal models (LMMs) in healthcare AI. LMMs, a rapidly growing type of generative artificial intelligence, have applications in diagnosis, clinical care, patient guidance, clerical tasks, medical education, and scientific research. The guidance emphasizes the need for transparent information, policies, and engagement of various stakeholders, including governments, technology companies, healthcare providers, patients, and civil society, to ensure the responsible development and deployment of LMMs in healthcare. The recommendations also highlight potential risks, such as biases and inaccuracies, and call for regulatory oversight and ethical considerations.
Source: World Health Organization
Australia's Minister for Industry and Science, Ed Husic, has announced the government's response to AI regulation, which focuses on high-risk areas to prevent potential harms. Rather than enacting a comprehensive AI law, the approach targets sectors such as workplace discrimination, justice, surveillance, and self-driving cars. Questions remain, however, about how 'high-risk' will be defined, how low-risk applications will be handled, and why no permanent advisory board is planned. The government intends to form a temporary expert group but faces challenges in managing existing AI tools and in providing guidance on appropriate adoption outside the designated high-risk areas.
Source: The Conversation
The article explores the intersection of the effective altruism (EA) movement and AI security policy circles, highlighting concerns raised by critics about the focus on existential risks to the detriment of current AI challenges. Notable connections between the EA movement and influential figures in AI startups and policy think tanks are examined. Interviews with leaders from companies like Cohere and AI21 Labs provide insights into their perspectives on model weights and AI risks. The article also discusses the growing influence of EA in Washington DC and its impact on AI security debates.
Source: VentureBeat
Meta CEO Mark Zuckerberg is setting a new goal of creating artificial general intelligence (AGI) and is leaning towards open sourcing it in the future. The move is part of a broader trend in the tech industry, with companies like OpenAI and Google sharing the same ambition. Zuckerberg plans to leverage Meta's vast resources, including a substantial investment in Nvidia GPUs, to advance AGI research. He emphasizes a commitment to open-source development, distinguishing Meta's approach from some competitors, and highlights the importance of building for general intelligence in AI talent acquisition.
Source: The Verge
OpenAI has announced its first collaboration with a university: Arizona State University will gain full access to ChatGPT Enterprise starting in February. The partnership, in development for at least six months, aims to apply ChatGPT to coursework, tutoring, and research. ASU plans to build a personalized AI tutor, use AI avatars for study help, and expand its prompt engineering course, while emphasizing student privacy and intellectual property protection.
Source: CNBC
Parcel delivery company DPD faced a social media uproar after a system update caused its AI-powered chatbot to swear at and criticize a customer. The company swiftly disabled the problematic AI element and is working on updates. The incident, widely shared on social media, highlights the challenges companies face when integrating AI into customer service platforms and the potential for unintended, sometimes comical, consequences. It follows a broader pattern in which chatbots built on large language models occasionally produce unexpected or biased responses.
Source: BBC