Armilla Review - AI at the Crossroads: Regulation, Education, and Innovation in the Frontier of Artificial Intelligence

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
July 18, 2023
5 min read

Frontier AI Regulation: Addressing the Challenges and Setting Safety Standards

Regulating frontier AI models is crucial because of their potential to severely threaten public safety and global security. These highly capable foundation models may possess dangerous capabilities, such as assisting in the design of chemical weapons or spreading persuasive disinformation at scale. The next generation of these models, trained with far greater computational resources, is expected to pose even greater risks. Governments must address the challenges posed by AI, including unfair and inaccurate decision-making, safety risks in critical domains, and the spread of disinformation. Targeted regulation, potentially including licensing requirements, is necessary to ensure responsible development and deployment.

Regulating frontier AI models is challenging for three core reasons. First, the capabilities of new AI models are difficult to predict or understand without intensive testing; dangerous capabilities can emerge unexpectedly, posing a risk if not detected and addressed in advance. Second, AI systems can cause harm unintentionally because specifying desired behaviors and preventing misuse remain difficult; reliably controlling AI behavior is still an open technical challenge. Finally, frontier AI models can proliferate rapidly: they are far harder to create than to use or copy, so once built they are difficult to keep out of the hands of bad actors, particularly given the potential for theft and modification. Preventing such proliferation is crucial but challenging.

To establish effective frontier AI regulation, several building blocks are proposed. Mechanisms should be in place to create and update standards through multi-stakeholder processes. Regulators need visibility into AI developments, which can be achieved through disclosure regimes, monitoring, and whistleblower protections. Compliance with safety standards may require government intervention, including legally binding rules and sanctions for non-compliance. Thorough risk assessments, external expert scrutiny, and guidelines for deployment based on risk are crucial for responsible development and deployment of frontier AI models.

Open questions remain in the design of a regulatory regime for frontier AI. Measures must be taken to reduce the risk of regulatory capture and to clarify how tort liability should complement regulation. The regime should adapt to industry evolution and emerging risks while avoiding ineffective standards. Practical implementation considerations include whether to establish a new regulatory body or adapt existing frameworks. Additional safety standards, such as strict cybersecurity requirements, should be considered and continuously updated. While a licensing regime may not be immediately warranted, preparations should be made for future needs.

In conclusion, regulating frontier AI models is crucial for public safety and global security. Challenges exist in predicting capabilities, ensuring deployment safety, and preventing proliferation. Establishing regulatory building blocks, such as standards, visibility, compliance, and safety assessments, is essential. Preliminary safety standards for frontier AI development and release, as well as ongoing exploration of uncertainties, are necessary to strike a balance between innovation and risk mitigation.


Source: Center for the Governance of AI


Inside the White-Hot Center of A.I. Doomerism: Anthropic's Battle against an A.I. Apocalypse

Anthropic, an A.I. start-up focused on safety, is preparing to release its latest A.I. chatbot, Claude, while grappling with the fear of an A.I. apocalypse. The small San Francisco-based company, backed by over $1 billion in funding, is dedicated to building powerful A.I. models while ensuring they do not cause harm. Employees at Anthropic are deeply concerned about the potential dangers of uncontrolled A.I. systems and the risks they pose to humanity. The article provides an inside look at Anthropic's culture of anxiety, its commitment to A.I. safety, and the challenges it faces.

The Rise of A.I. Doomerism: In recent years, concerns about the power and potential risks of A.I. have gained traction. Large language models, such as ChatGPT, have raised alarms among tech leaders and A.I. experts. Regulators are racing to impose rules, and many experts have likened A.I. risks to those of pandemics and nuclear weapons. Within Anthropic, these fears are amplified and anxiety runs high. The company's employees genuinely worry about the destructive possibilities of A.I., often comparing themselves to modern-day Robert Oppenheimers facing moral dilemmas.

Anthropic's Safety Approach: Anthropic's commitment to A.I. safety is not just rhetoric; it is woven into the fabric of the company. Their latest chatbot, Claude, is designed with Constitutional A.I., which involves giving the model a set of principles to follow and using another A.I. model to evaluate and correct its behavior. This approach aims to create a chatbot that is less likely to say harmful things. Although the safety-focused efforts are commendable, critics have questioned Anthropic's motivations and the potential conflict between their safety mission and building ever more powerful A.I. models.
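To make the idea concrete, the critique-and-revise pattern behind constitutional approaches can be sketched in a few lines of Python. This is only an illustrative sketch, not Anthropic's implementation: the principles, the generate() function, and the loop structure are placeholder assumptions standing in for real model calls.

# Minimal illustrative sketch of a constitution-guided critique-and-revise loop.
# NOTE: this is not Anthropic's implementation; generate() is a hypothetical
# stand-in for any call to a language model.

CONSTITUTION = [
    "Prefer responses that are least likely to help someone cause harm.",
    "Prefer responses that are honest and not misleading.",
]

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a language-model call."""
    raise NotImplementedError("Plug in a real model call here.")

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    answer = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            "Critique the answer below according to this principle.\n"
            f"Principle: {principle}\nAnswer: {answer}"
        )
        answer = generate(
            "Rewrite the answer to address the critique.\n"
            f"Critique: {critique}\nOriginal answer: {answer}"
        )
    return answer

In practice, a second model (or the same model acting as an evaluator) performs the critique and rewrite steps, so the released chatbot is less likely to produce harmful output.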

Effective Altruism and A.I. Safety: Anthropic has strong ties to the effective altruism (E.A.) movement, which focuses on using data analysis and logic to maximize positive impact. The company's founders and early employees have connections to E.A., and their safety concerns regarding A.I. align with the movement's focus on preventing existential risks. However, skeptics argue that Anthropic's pursuit of becoming an A.I. juggernaut may compromise its original safety mission. The company defends itself by emphasizing the practical need to build advanced A.I. models for better safety research and the intertwined nature of A.I. dangers and solutions.

The Importance of A.I. Safety Doomerism: While the pervasive anxiety at Anthropic may seem extreme, it serves as a reminder of the risks associated with rapidly advancing A.I. technology. The tech industry could benefit from a greater focus on safety and ethical considerations. Anthropic's relentless dedication to addressing A.I. risks and their drive to outdo other companies in A.I. safety set a positive example. By prioritizing safety and continually raising awareness of the potential dangers, the company aims to ensure a future where powerful A.I. models coexist responsibly with humanity.

Anthropic's battle against an A.I. apocalypse reflects the growing concerns surrounding A.I. safety. Their commitment to developing safer A.I. models is commendable, although questions remain about the balance between their mission and building advanced models. The rise of A.I. doomerism within the industry calls for a greater emphasis on safety and ethical considerations in A.I. development. Anthropic's anxiety serves as a cautionary reminder that a healthy dose of fear can help prevent potential catastrophes. By striving for A.I. safety, the tech industry can pave the way for a more secure and responsible future.


Source: The New York Times


Navigating the Path of Artificial Intelligence: Balancing Opportunities and Risks for Leaders

The article asks whether artificial intelligence (AI) is a goldmine or a minefield for leaders. On one hand, AI looks like a goldmine: it can save time and money, increase market share, and boost productivity, and it has the potential to revolutionize industries such as manufacturing and healthcare by automating tasks, enabling more precise diagnoses, and accelerating drug discovery. On the other hand, AI presents real risks. Biased or incomplete data can lead to unreliable outputs and exacerbate social inequalities, and AI raises copyright, cybersecurity, and privacy concerns. The automation of human tasks can displace jobs and disrupt certain professions and businesses, and AI also poses challenges for human connection and communication.

Experts differ on how to weigh these factors. Some argue that the advantages outweigh the disadvantages and that executives who embrace AI will gain a competitive edge. Others highlight the potential disruptions and emphasize the need for leaders to assess how AI will affect their business models. The article counsels caution in navigating AI's opportunities and risks: leaders should be open to learning, ask powerful questions, and carefully evaluate AI responses. Upskilling workers, balancing reskilling with job displacement, and fostering a human-centered AI workplace are important considerations.


Source: Forbes

Empowering National Security Leaders: MIT Educates on Artificial Intelligence

MIT's School of Engineering, Schwarzman College of Computing, and Sloan Executive Education have launched a custom program called "Artificial Intelligence for National Security Leaders" (AI4NSL). The program aims to educate military and government leaders without technical backgrounds on the fundamentals of AI, machine learning, and data science and their intersection with national security. The three-day course covers both technical aspects of AI and organizational challenges, helping leaders understand the benefits, risks, and implementation strategies of AI in their respective units. The program fosters collaboration and draws faculty from different MIT schools, ensuring a comprehensive and impactful learning experience. As AI continues to rapidly evolve, the curriculum will be regularly updated to address the changing dynamics of AI in national security.


Source: MIT


The Pope's Guide to AI Ethics

The Pope, in collaboration with Santa Clara University's Markkula Center for Applied Ethics, has established the Institute for Technology, Ethics, and Culture (ITEC) as a platform to explore the impact of technology on humanity. As part of this initiative, ITEC has released a handbook called "Ethics in the Age of Disruptive Technologies: An Operational Roadmap," aimed at helping tech companies navigate the ethical challenges posed by artificial intelligence (AI) and other disruptive technologies. The handbook acknowledges the desire of tech leaders to maintain high ethical standards and emphasizes the importance of technology being at the service of humanity. While the handbook does not replace government regulation, it provides guidelines to assist companies in prioritizing consumer health and ethics. The collaboration between the Catholic Church and Silicon Valley reflects the recognition of AI's transformative influence and the need for ethical considerations in its development and implementation.


Source: Futurism


Bard Expands: New Languages, Features, and Productivity Tools

Bard, the AI-powered tool for creative exploration and idea generation, has announced its largest expansion yet. It is now available in over 40 languages, including Arabic, Chinese, German, Hindi, and Spanish, and has expanded to more regions, including Brazil and countries across Europe. Users can now customize their experience by adjusting Bard's responses to different tones and styles. New features include the ability to listen to Bard's responses, pin and rename conversations for future reference, export Python code to platforms like Replit, share chat sessions with others, and incorporate images into prompts for creative inspiration. Bard's expansion is guided by responsible AI principles and a commitment to user feedback, privacy, and data protection.


Source: Google


Elon Musk Launches xAI: A New AI Company to Compete with ChatGPT

Elon Musk has unveiled xAI, a new artificial intelligence company that aims to understand the true nature of the universe and provide an alternative to OpenAI's ChatGPT chatbot. xAI's team, led by Musk himself, comprises executives with experience at companies such as Google's DeepMind, Microsoft, and Tesla, as well as academic institutions such as the University of Toronto. Musk's past involvement with OpenAI and his reservations about the direction of AI development drove him to create the new venture. The company is actively recruiting engineers and researchers to work in the Bay Area. Musk has reportedly acquired processors from Nvidia Corp. for the project, and discussions with investors about funding are underway.


Source: The Age


Shutterstock Expands Partnership with OpenAI to Develop Generative AI Tools

Shutterstock has announced an expansion of its partnership with OpenAI to provide training data for AI models. Over the next six years, OpenAI will license images, videos, music, and metadata from Shutterstock, while Shutterstock gains priority access to OpenAI's latest technology and new editing capabilities. The collaboration aims to bring generative AI capabilities to mobile users through Giphy, which Shutterstock recently acquired. While generative AI poses challenges to stock media marketplaces, Shutterstock is actively embracing the technology, having already partnered with OpenAI to develop an image creator powered by DALL-E 2. The company has also established licensing agreements with other entities to advance generative AI across various domains. To address concerns from artists, Shutterstock operates a "contributor fund" that compensates artists for their contributions to training generative AI models.


Source: TechCrunch


Microsoft and KPMG Announce $2 Billion Partnership to Develop AI Tools

Microsoft and KPMG have entered into a strategic partnership involving a $2 billion investment over the next five years to co-develop cloud and generative AI tools. The collaboration will impact all of KPMG's businesses, including audit, tax, and advisory services. The partnership aims to enhance productivity and efficiency by leveraging generative AI to analyze large volumes of text, provide insights for regulatory compliance, and test proposals for various outcomes. KPMG's 265,000 global employees will also receive training on how to effectively utilize AI and the new tools. With generative AI applications expected to grow significantly, the investment is projected to create $12 billion in global revenue opportunities for KPMG.


Source: Axios



Meta's Open-Source LLaMA: A New Challenger Emerges in the AI Race

Meta, formerly known as Facebook, is planning to release a commercial version of LLaMA, its open-source large language model (LLM) that generates text and code. LLaMA, initially released as a foundation model for researchers, will now be available for developers and businesses to build generative AI applications. This move puts Meta in direct competition with Microsoft-backed OpenAI and Google. The open-source nature of LLaMA gives businesses the opportunity to adapt and improve the model, potentially accelerating technological innovation across various sectors. The launch marks a significant step in the AI field and may challenge the dominance of closed, proprietary models in the market.


Source: ZDNET