Armilla Review #17

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community in AI evaluation, assurance, and risk.
June 21, 2023
5 min read

In this week's Armilla Review newsletter, you'll find:

  • Power Play: European Union's AI Act Challenges Tech Giants and Positions Europe as Global Leader in AI Regulation
  • OpenAI, DeepMind, and Anthropic Collaborate with UK Government to Enhance AI Research and Safety
  • OpenAI's Function Calling Update and Its Impact on Builders
  • GPT-4's Pitch Decks More Persuasive than Human-Created Ones
  • Voicebox: Breakthrough in Generative AI for Speech
  • Unleashing Productivity: How Generative AI Could Reshape Industries
  • Assigning AI: Seven Approaches for Students, with Prompts

Power Play: European Union's AI Act Challenges Tech Giants and Positions Europe as Global Leader in AI Regulation

In a significant development, the European Union (EU) has taken a decisive step toward regulating artificial intelligence (AI) and challenging the power of tech giants. The European Parliament recently approved the EU AI Act, a comprehensive legislative package aimed at safeguarding consumers from potential risks associated with AI applications. The move puts the EU ahead of the United States, where discussions on AI regulation have been protracted and legislation has yet to materialize. This piece explores the key provisions of the EU AI Act, its potential impact on tech companies, and the broader implications for global AI regulation.

Addressing Concerns and Protecting Consumers

The EU AI Act adopts a "risk-based approach" to regulation, targeting AI applications that lawmakers deem dangerous or high-risk. This includes banning systems that enable law enforcement to predict criminal behaviour and imposing limits on tools that could influence elections or recommend content on social networks. By taking these measures, the EU aims to prevent the misuse of AI for nefarious purposes such as surveillance, algorithmic discrimination, and the dissemination of misinformation that could undermine democratic processes.

Crackdown on Generative AI and Increased Transparency

The legislation specifically addresses the surge in generative AI, which includes technologies like ChatGPT that generate text or images with humanlike flair. To prevent the abuse of AI-generated content, companies would be required to label such content to mitigate the spread of falsehoods. Moreover, the EU AI Act mandates that firms disclose summaries of copyrighted data used to train their AI models, addressing concerns raised by publishers regarding unauthorized use of their materials.

Implications for Tech Companies

The EU's proactive stance on AI regulation has raised concerns among tech giants, with OpenAI, the creator of ChatGPT, even contemplating withdrawing from Europe due to potential restrictions. While the European Parliament's approval is a crucial step, the bill still needs to undergo negotiations involving the European Council. If adopted, the EU AI Act could have significant consequences for tech companies operating in Europe, influencing global AI standards and prompting companies to adjust their practices internationally to avoid a fragmented regulatory landscape.

EU as a Global Leader in Tech Regulation

With the EU AI Act, the EU solidifies its position as a global leader in tech regulation. In recent years, the EU has been at the forefront of imposing fines on tech giants for antitrust violations and implementing data privacy regulations. Regulatory alignment between the EU and the United States has also become more apparent, as both sides recognize the need to address the abuses of tech giants. Still, the EU's progress in AI legislation contrasts with the slower pace in the U.S. Congress, where discussions are only beginning to take shape.

Collaboration and Urgency in AI Regulation

European lawmakers and their U.S. counterparts have been engaging in discussions on AI regulation for years. Recently, there has been a sense of urgency among U.S. policymakers, driven by concerns over the rapid evolution of large language models like ChatGPT. While the U.S. Congress is still in the early stages of grappling with AI regulation, the EU has been actively developing its legislation, demonstrating a more comprehensive understanding of the risks and challenges posed by AI.

The approval of the EU AI Act by the European Parliament signifies a major milestone in the regulation of AI and sets the stage for the EU to become the standard-setter in global AI regulation. The legislation's risk-based approach, crackdown on generative AI, and focus on transparency highlight the EU's commitment to protecting consumers and addressing potential harms associated with AI applications. As the EU moves ahead, it challenges the power of tech giants and paves the way for a global dialogue on responsible AI.

Source: The Washington Post

OpenAI, DeepMind, and Anthropic Collaborate with UK Government to Enhance AI Research and Safety

During London Tech Week, Prime Minister Rishi Sunak announced that Google DeepMind, OpenAI, and Anthropic have agreed to provide the U.K. government with access to their AI models for research and safety purposes. This move aims to facilitate better evaluations of AI systems and enhance understanding of the associated opportunities and risks. The U.K. government's emphasis on ensuring safety and addressing concerns about AI aligns with its pro-innovation approach outlined in the AI white paper published earlier this year. Sunak expressed the ambition for the U.K. to become the global hub for AI safety regulation, without providing specific regulatory proposals at this time. To support this goal, the U.K. plans to host a global summit on AI safety in the fall, and a Foundation Model Taskforce will conduct research on AI safety and assurance techniques with £100 million in funding.

Additionally, Sunak highlighted other areas of focus for the U.K., including semiconductors, synthetic biology, and quantum technology. The U.K.'s balanced and agile regulatory approach is intended to attract investment, as evidenced by companies like Anthropic, OpenAI, and Palantir opening up European headquarters or AI research hubs in the U.K.


Source: Politico

OpenAI's Function Calling Update and Its Impact on Builders

OpenAI has made a significant update to its GPT API, introducing function calling to enable tool use within the language model. GPT models can now call out to external code, databases, or APIs to enhance their capabilities. By specifying a set of tools and their capabilities, developers let GPT decide when to use them to accomplish tasks more effectively. This update empowers GPT models by integrating traditional software functionality, reducing the need for third-party libraries that serve similar purposes.

Function calling expands the potential of GPT models, allowing them to perform tasks such as checking the weather, retrieving stock prices, accessing databases, and sending emails. Programmers can easily provide these capabilities as tools to GPT, which can intelligently utilize them when necessary. This update also benefits the development of language model agents, simplifying the process and improving their speed and reliability compared to previous methods. While this advancement provides builders with more power and streamlined development, it poses challenges for open-source libraries like Langchain, whose previous work might become outdated as OpenAI continues to release new features.
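A minimal sketch of that flow in Python: the tool schema follows the JSON Schema format the function-calling API expects, but the weather function, its arguments, and the model's reply are all illustrative stand-ins, and the live API request itself is omitted.

```python
import json

# Hypothetical local tool the model may choose to invoke.
def get_current_weather(location, unit="celsius"):
    # A stub standing in for a real weather-API lookup.
    return {"location": location, "temperature": 22, "unit": unit}

# Tool description in the JSON Schema format the function-calling API expects;
# in a live call this list is passed via the `functions` parameter.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Mocked assistant message: in a real integration this would come back from
# the chat completions endpoint when the model decides to call a function.
model_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Toronto", "unit": "celsius"}',
    },
}

# Dispatch: if the model requested a function, parse its JSON arguments,
# run the local code, and package the result as a "function" role message
# to send back to the model on the next turn.
available_tools = {"get_current_weather": get_current_weather}
call = model_message.get("function_call")
if call:
    result = available_tools[call["name"]](**json.loads(call["arguments"]))
    followup = {
        "role": "function",
        "name": call["name"],
        "content": json.dumps(result),
    }
```

The key design point is that the model never executes code itself: it only emits a structured request, and the application stays in control of which functions exist and how their results are fed back.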

The function calling update represents a transformative shift in building with language models, akin to sudden leaps in functionality observed in human children. Each upgrade expands the capabilities and competitive landscape of language models, making it easier to construct complex functionalities rapidly. However, it also renders a significant portion of code unnecessary or outdated, prompting infrastructure-layer companies like Langchain to adapt and navigate the changing landscape of language model development.

Source: Every Media

GPT-4's Pitch Decks More Persuasive than Human-Created Ones

The study highlights the superior performance of GPT-4, an advanced AI system, in generating pitch decks compared to humans. The key findings indicate that pitch decks created by GPT-4 were rated as 2x more convincing than those made by humans, and investors and business owners were 3x more likely to invest after reviewing a GPT-4 pitch deck. Additionally, 1 in 5 respondents expressed a willingness to invest $10,000 or more in projects pitched by GPT-4. The effectiveness of GPT-4 was observed across different industries, including finance, marketing, and tech, with higher ratings for persuasiveness, quality, and investment interest compared to human-created decks. The study suggests that leveraging AI-generated content can enhance entrepreneurs' success in securing funding and lead to greater diversity, creativity, and innovation across industries. The research methodology involved comparing AI-generated pitch decks with successful human pitches through a survey of investors and business owners, with participants unaware that AI was involved in creating the decks.

Source: Clarify Capital

Voicebox: Breakthrough in Generative AI for Speech

Meta AI researchers have achieved a significant breakthrough in generative AI for speech with the development of Voicebox. This pioneering model has the ability to generalize to speech-generation tasks that it was not specifically trained for, delivering state-of-the-art performance. Voicebox goes beyond traditional speech synthesis by producing high-quality audio clips, offering a wide range of styles and capabilities, including noise removal, content editing, style conversion, and diverse sample generation across six languages.

Voicebox is built upon the Flow Matching model, representing Meta's latest advancement in non-autoregressive generative models. This innovative approach enables Voicebox to learn from varied speech data without the need for careful labeling, significantly expanding the diversity and scale of training data. Trained on over 50,000 hours of recorded speech and transcripts from public domain audiobooks in multiple languages, Voicebox can predict speech segments based on surrounding audio and transcripts, allowing for seamless in-context synthesis.

Use Cases

  1. In-context text-to-speech synthesis: Voicebox can match the audio style of a two-second input sample and generate text-to-speech output accordingly. This capability opens up possibilities for enhancing communication for individuals who are unable to speak or customizing voices for non-player characters and virtual assistants.
  2. Cross-lingual style transfer: Given a speech sample and a text passage in various languages, Voicebox can produce a reading of the text in the desired language. This application holds promise for enabling natural, authentic communication between people who speak different languages.
  3. Speech denoising and editing: Leveraging its in-context learning, Voicebox excels at generating speech to seamlessly edit segments within audio recordings. It can remove short-duration noise or replace misspoken words without the need to rerecord the entire speech, offering convenience akin to popular image-editing tools for audio content.
  4. Diverse speech sampling: Having learned from diverse real-world data, Voicebox can generate speech that more closely resembles natural human speech across different languages. This capability could aid in generating synthetic data to improve the training of speech assistant models, with speech recognition models trained on Voicebox-generated speech yielding impressive results.

While the Voicebox model and code are not currently available publicly due to potential risks of misuse, Meta AI is committed to openness and responsible development. Audio samples and a research paper detailing the approach and results have been shared. Additionally, a highly effective classifier has been developed to distinguish between authentic speech and audio generated with Voicebox, mitigating potential risks associated with the technology.

Voicebox marks a significant milestone in generative AI for speech, introducing a versatile and efficient model capable of task generalization. As with other powerful AI innovations, responsible use and ethical considerations are crucial. Meta AI's decision to share their approach and results fosters collaboration and further discussions on responsible AI development. The impact of Voicebox in the audio domain and its potential applications are highly anticipated, encouraging continued exploration and advancement by researchers in the field.


Source: Meta AI

Unleashing Productivity: How Generative AI Could Reshape Industries

Generative AI, a form of artificial intelligence that can create new content or code based on prompts, has the potential to revolutionize productivity and significantly impact the global economy. According to recent research, the implementation of generative AI across various use cases could contribute an estimated $2.6 trillion to $4.4 trillion annually, equivalent to 15 to 40 percent of the overall impact of artificial intelligence. These projections could even double when considering the integration of generative AI into existing software for tasks beyond the analyzed use cases.

The study identified four key areas where generative AI could deliver the most value: customer operations, marketing and sales, software engineering, and research and development. Within these sectors, 63 specific use cases were examined, ranging from customer interactions to content creation and coding. The potential impact of generative AI extends to diverse industries, including banking, high tech, and life sciences. For instance, the banking industry could experience an additional annual value of $200 billion to $340 billion if these use cases were fully implemented, while the retail and consumer packaged goods sector could see a potential impact of $400 billion to $660 billion per year.

Generative AI has the capacity to transform work dynamics by automating various individual activities and augmenting workers' capabilities. Current generative AI technologies can potentially automate 60 to 70 percent of the tasks that occupy employees' time, surpassing previous estimates. This acceleration is attributed to generative AI's improved natural language understanding, which enables its application in knowledge work associated with higher-wage occupations. As a result, workforce transformation is expected to progress at a faster pace, with estimates suggesting that half of today's work activities could be automated between 2030 and 2060.

While generative AI offers the potential for substantial labor productivity gains, successful implementation will require investments to support workers in adapting to changing work activities or transitioning to new jobs. With the combination of generative AI and other technologies, work automation could contribute 0.2 to 3.3 percentage points annually to productivity growth. However, it is crucial to manage worker transitions and mitigate associated risks, ensuring that adequate support and training are provided. Overcoming challenges related to risk management, skill development, and rethinking core business processes are key factors in fully realizing the benefits of generative AI and fostering sustainable and inclusive economic growth.


Source: McKinsey

Assigning AI: Seven Approaches for Students, with Prompts

This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementarity of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.


Source: Social Science Research Network