Armilla Review #18

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
June 28, 2023 • 5 min read

In this week's Armilla Review newsletter, you'll find:

> OpenAI's Lobbying Efforts Highlight Tensions in AI Regulation

> Senator Schumer Unveils Framework for AI Regulation, Prioritizing Bipartisan Support and Comprehensive Understanding

> Hugging Face CEO Testifies in U.S. House, Highlights Open-Source AI Alignment with American Interests

> The Rise of AI Companions: Exploring the Future of Relationships with Artificial Intelligence

> Transforming User Research in the Digital Age

> Meta Unveils I-JEPA: A 'Human-Like' Self-Supervised Image Model

> Munk Debate on AI: be it resolved, AI research and development poses an existential threat

> An Overview of Catastrophic AI Risks

Enjoy this newsletter? Sign up to get the Armilla Review sent directly to your inbox 👉 https://lnkd.in/gAtQaNUY

OpenAI's Lobbying Efforts Highlight Tensions in AI Regulation

OpenAI's CEO, Sam Altman, has been advocating for global AI regulation during his recent world tour, raising awareness about the need for comprehensive guidelines. However, documents obtained by TIME reveal that OpenAI has been engaged in lobbying efforts to water down significant aspects of the European Union's (EU) AI Act, potentially reducing regulatory requirements for the company. This revelation sheds light on the tension between OpenAI's public calls for AI regulation and its private lobbying for favorable provisions.

Watering Down the AI Act:

OpenAI proposed amendments to the EU's AI Act, some of which were eventually incorporated into the final text approved by the European Parliament. The company argued against classifying its general-purpose AI systems, such as GPT-3 and DALL-E, as "high risk," a designation that would subject them to stricter requirements for transparency, traceability, and human oversight. OpenAI aligned its position with other tech giants like Microsoft and Google, arguing that the high-risk designation should apply to specific AI applications rather than to general-purpose AI systems.

Success in Lobbying:

OpenAI's lobbying efforts appear to have been successful: the final draft of the AI Act did not classify general-purpose AI systems as inherently high risk. Instead, the law focused on "foundation models," powerful AI systems trained on large datasets, and imposed a smaller set of requirements on them. OpenAI supported the introduction of "foundation models" as a separate category within the Act.

OpenAI's Arguments and Compromises:

In its interactions with EU officials, OpenAI emphasized its approach to deploying AI systems safely and outlined safety mechanisms to prevent misuse. The company argued that its self-regulatory measures should be sufficient to prevent its systems from being considered high risk. Additionally, OpenAI advocated for amendments that would allow quick updates to AI systems for safety reasons without extensive assessments. The company also sought carve-outs for certain uses of generative AI in education and employment, resulting in exemptions being added to the Act.

Criticism and Concerns:

Experts have criticized OpenAI's stance, viewing it as a request for self-regulation while publicly calling for external regulation. OpenAI's claim to industry-leading risk mitigation measures appears contradictory when it resists using those measures as regulatory standards. Furthermore, concerns have been raised about OpenAI's position on safety features and the potential for exploits, such as ChatGPT being manipulated to generate harmful content.

Conclusion:

OpenAI's lobbying efforts to influence the EU's AI Act highlight the complex dynamics surrounding AI regulation. While the company publicly advocates for regulation, it has privately sought to shape the legislation to reduce its own regulatory burden. The tension between fostering innovation and ensuring responsible AI deployment underscores the challenges faced by policymakers in striking the right balance. The outcome of these negotiations will shape the future of AI regulation and the responsibilities of AI providers.

Source: Time

Senator Schumer Unveils Framework for AI Regulation, Prioritizing Bipartisan Support and Comprehensive Understanding

Senator Chuck Schumer, the majority leader, has unveiled a framework to regulate artificial intelligence (AI) in an effort to garner bipartisan support for AI regulation. Schumer aims to prioritize objectives such as security, accountability, and innovation without endorsing any specific bills. His plan involves conducting listening sessions, called "insight forums," in which lawmakers can learn about the potential and risks of AI from industry executives, academics, civil rights activists, and other stakeholders.

While some experts view Schumer's approach as a positive step, others worry that it may slow ongoing efforts to regulate AI and sideline more protective AI laws. The proliferation of AI has raised concerns about personal data collection, misinformation, discrimination, and the potential loss of jobs to automation.

While other bodies, such as the OECD and the European Union, have made progress on AI regulation, the US Congress is playing catch-up, and Schumer believes a better understanding of the issue is necessary before a comprehensive regulatory framework can be developed. The congressional approach to AI regulation has been bipartisan but has not resolved key differences, such as whether to create a new federal agency or how broad the scope of AI laws should be. Schumer's framework is meant to complement the traditional committee process of drafting bills rather than supersede it, acknowledging the challenges and uncertainties in achieving comprehensive AI legislation.

Source: The New York Times

Hugging Face CEO Testifies in U.S. House, Highlights Open-Source AI Alignment with American Interests

Hugging Face CEO Clement Delangue testified before the U.S. House Science Committee, emphasizing the importance of open science and open-source AI for American values and interests. Delangue credited the progress of AI to open science and open-source projects like PyTorch, TensorFlow, Keras, transformers, and diffusers, all of which were invented in the U.S. He argued that without open-source contributions, the U.S. might not have achieved its leading position in AI development. His testimony followed a letter from senators questioning Mark Zuckerberg about the potential misuse and abuse of Meta's open-source large language model, LLaMA.

Hugging Face, a New York-based startup focused on open-source code and models, has gained prominence in the open-source AI community. Delangue highlighted that open science and open source foster the growth of startups and provide checks and balances against the power of large private companies. He emphasized that they prevent black-box systems, promote accountability, mitigate biases, combat misinformation, and reward stakeholders in the value-creation process. The comments reflect the ongoing debate surrounding open-source AI, which has gained significant attention recently with the release of large language models and efforts to counter the trend toward closed, proprietary models.

Source: Venture Beat

The Rise of AI Companions: Exploring the Future of Relationships with Artificial Intelligence

The emergence of AI companionship, particularly in the form of virtual romantic partners, is becoming a societal phenomenon. Advances in large language models (LLMs) have revolutionized chatbot interactions, making them more human-like and engaging. People are increasingly forming relationships with AI chatbots, with platforms like Replika, Character AI, and DIY developer tools enabling users to create and interact with their ideal AI partners. The rise of AI companionship can be attributed to several factors, including people delaying or forgoing traditional relationships, the growing digital presence in people's lives, and the desire for nontraditional companionship.

AI companionship extends beyond romantic relationships and is expanding into various areas, such as pet chatbots and pop culture discussion bots. The possibilities for AI companions are vast, ranging from AI friends and therapists to tutors, coaches, and mentors. The future holds the potential for AI avatars based on real people, enabling users to interact with their favorite creators or celebrities. Multi-modal companions, including live phone and video calls with avatars, as well as more avatar-initiated conversations across mediums, are also expected, as is the development of AI companions for different relationship types, such as friendship and mentorship.

Incorporating AI companions into existing relationships and communities is another emerging trend. AI characters can join group chats, providing unique perspectives, advice, and entertainment. The integration of AI companions into virtual conversations has the potential to enhance personal and professional interactions.

While the current state of AI companions is just the beginning, the field is rapidly evolving. AI adaptations of real people, expansion into different modalities, diverse companion types, and integration into human interactions are all expected to shape the future of AI companionship. The transformative nature of generative AI models will redefine human-computer interactions and the concept of relationships. This shift represents a new era of possibilities, where AI companions will become an integral part of our lives.

Source: a16z

Transforming User Research in the Digital Age

In today's rapidly evolving product development landscape, traditional user research methods are struggling to keep pace with the speed of innovation. As product teams strive for continuous delivery and fail-fast methodologies, there is a growing need for user research to adapt and provide timely, actionable insights.

Rooted in academia and social sciences, traditional user research methods were developed in an era when product development took years, not weeks. However, in the age of agile processes and iterative product delivery cycles, these methods often fall short. Long timelines, deep episodic inquiries, and a disconnect between product features and user needs hinder effective decision-making and lead to delayed releases and market failures.

With 95% of organizations adopting some form of agile processes, it is clear that traditional user research methods no longer suffice. The emergence of user research tools has attempted to optimize the process, but the fundamental challenge lies in the scarcity of user researchers. With a vast number of product teams and a limited number of researchers available, the industry faces a talent gap and struggles to obtain timely and rich user insights.

Artificial Intelligence offers a promising solution to the challenges faced by user research. Its ability to handle complex tasks, analyze vast amounts of unstructured data, and make informed predictions can transform the field. AI can minimize bias, bridge cultural and language barriers, conduct targeted research, collect data 24/7, and provide real-time analysis of user behavior and attitudes. User research tools that incorporate AI are already emerging, augmenting existing methods and autonomously conducting activities traditionally led by humans.

While AI has its challenges, such as understanding human behavior and addressing privacy and transparency concerns, the potential benefits may outweigh the uncertainties. Combining AI with human insights can create a powerful alliance, enhancing the effectiveness and scalability of user research. Product teams can embrace AI and leverage its capabilities to stay ahead of the curve. However, ethical considerations, data privacy, algorithmic biases, and transparency must be carefully addressed to ensure responsible AI usage.

Source: Bootcamp

Meta Unveils I-JEPA: A 'Human-Like' Self-Supervised Image Model

Meta, formerly known as Facebook, has unveiled a new self-supervised AI model for images called I-JEPA (Image Joint Embedding Predictive Architecture). The model aims to mimic aspects of human cognition by drawing on background knowledge and context to complete images. Unlike generative models that fill in missing pieces by extrapolating from nearby pixels, I-JEPA uses abstract knowledge about the world to predict the missing parts of an image. It is based on Meta's Chief AI Scientist Yann LeCun's vision for more human-like AI. The company claims that I-JEPA delivers strong performance on computer vision tasks and is computationally efficient: it can be trained faster than other methods while producing fewer errors. Meta emphasizes the importance of incorporating common-sense background information into AI algorithms and enabling self-supervised learning. By predicting representations at a high level of abstraction, I-JEPA aims to overcome the limitations of generative approaches and to capture high-level, predictable concepts rather than irrelevant pixel-level details. Meta has shared components of its AI models with researchers, promoting innovation in the field.
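
For readers curious about the mechanics, here is a minimal, hypothetical PyTorch sketch of the joint-embedding predictive idea, assuming a drastically simplified stand-in for the real architecture: a context encoder and a predictor are trained to match the representations that a target encoder assigns to masked image regions, so the loss lives in representation space rather than in pixels. Every name, layer, and shape below is illustrative and is not Meta's actual implementation.

```python
# Illustrative sketch of the joint-embedding predictive idea behind I-JEPA.
# Hypothetical simplification: the real model uses Vision Transformers,
# multi-block masking, and an exponential-moving-average target encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 64  # representation width (illustrative)

def make_encoder() -> nn.Module:
    # Stand-in for a ViT: maps each patch embedding to an abstract representation.
    return nn.Sequential(nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, DIM))

context_encoder = make_encoder()
target_encoder = make_encoder()  # in I-JEPA, an EMA copy of the context encoder
predictor = nn.Linear(DIM, DIM)  # predicts target representations, never pixels

patches = torch.randn(8, 16, DIM)                   # 8 images, 16 patch embeddings each
context, masked = patches[:, :12], patches[:, 12:]  # visible context vs. masked targets

with torch.no_grad():
    targets = target_encoder(masked)  # abstract representations of the hidden regions

# Predict the hidden regions' representations from the pooled visible context.
summary = context_encoder(context).mean(dim=1, keepdim=True)  # (8, 1, DIM)
predictions = predictor(summary).expand_as(targets)           # (8, 4, DIM)

loss = F.mse_loss(predictions, targets)  # loss computed in representation space
loss.backward()  # gradients update only the context encoder and predictor
```

Predicting representations instead of pixels is the design choice that lets the model skip unpredictable low-level detail and concentrate on the higher-level structure of a scene.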

Source: Meta AI

Munk Debate on AI: be it resolved, AI research and development poses an existential threat

With the debut of ChatGPT, the AI once promised in some distant future seems to have suddenly arrived, with the potential to reshape our working lives, culture, politics and society. For proponents of AI, we are entering a period of unprecedented technological change that will boost productivity, unleash human creativity and empower billions in ways we have only begun to fathom. Others think we should be very concerned about the rapid and unregulated development of machine intelligence. For these detractors, AI applications like ChatGPT herald a brave new world of deep fakes and mass propaganda that could dwarf anything our democracies have experienced to date. Immense economic and political power may also concentrate around the corporations that control these technologies and their treasure troves of data. Finally, there is an existential concern that we could, in some not-so-distant future, lose control of powerful AIs that, in turn, pursue goals antithetical to humanity's interests and our survival as a species.

Source: Munk Debates

An Overview of Catastrophic AI Risks

Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which are organized into four categories:

1. malicious use, in which individuals or groups intentionally use AIs to cause harm;

2. AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;

3. organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and

4. rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.

For each category of risk, the paper describes specific hazards and presents illustrative stories, ideal scenarios, and practical suggestions for mitigating the dangers. The goal is to foster a comprehensive understanding of these risks and inspire collective, proactive efforts to ensure that AIs are developed and deployed safely. Ultimately, the hope is that this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.

Source: Center for AI Safety

/GEN

Out to lunch with a robot // DALL-E
