Armilla Review #10

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community in AI evaluation, assurance, and risk.
May 3, 2023
5 min read

Members of European Parliament seal the deal on Artificial Intelligence Act

The European Parliament has reached a provisional political deal on the world's first AI rulebook, the AI Act, which aims to regulate AI based on its potential to cause harm. The proposal includes stricter obligations on foundation models, a sub-category of general-purpose AI, and a ban on AI-powered emotion recognition software in law enforcement, border management, the workplace, and education. AI systems falling under the critical areas listed in Annex III will be classified as high-risk, but a system will only be deemed high-risk if it also poses a significant risk of harm to health, safety, or fundamental rights. MEPs have also included extra safeguards to detect biases in the development of high-risk AI systems. The proposal sets out general principles that apply to all AI systems, including human agency and oversight; technical robustness and safety; privacy and data governance; transparency; social and environmental well-being; and diversity, non-discrimination, and fairness. High-risk AI systems will have to keep records of their environmental footprint, and foundation models will have to comply with European environmental standards. The proposal is expected to go to a plenary vote in mid-June after a key committee vote scheduled for May 11 (read more).

Also read: The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment

In this newsletter, you'll find:

  • FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI
  • Making Sense of the Promise (and Risks) of Large Language Models
  • AI and the engineering of consumer trust
  • OpenAI follows through on data protection promises
  • NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems
  • Hugging Face launches open-source alternative to ChatGPT, HuggingChat
  • PricewaterhouseCoopers to Pour $1 Billion Into Generative AI
  • ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
  • How to worry wisely about artificial intelligence
  • Eight Things to Know about Large Language Models
  • AI makes non-invasive mind-reading possible by turning thoughts into text
  • First proper AI-generated movie is tormenting, dystopian and scary
  • This AI-generated pizza ad is both better and worse than you’re expecting

Top Articles

FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI

Federal agencies including the Federal Trade Commission (FTC), the Department of Justice's Civil Rights Division, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission have pledged to enforce laws and regulations promoting responsible innovation in automated systems. The agencies are concerned about potentially harmful uses of automated systems, such as fraud and discrimination. FTC Chair Lina M. Khan emphasized that technological advances can deliver critical innovation but must not be a cover for lawbreaking. The agencies are committed to upholding fairness, equality, and justice as emerging automated systems become increasingly common in daily life (read more).

Open the Joint Statement on AI

Making Sense of the Promise (and Risks) of Large Language Models

The article makes the case that data professionals need to understand large language models (LLMs), even though much about how these models work remains opaque. It highlights several recent articles covering various aspects of LLMs, including the inner workings of proprietary models, the impact of recent advances in conversational AI, ethical concerns around biased or harmful text, and potential use cases for LLMs in data science workflows. It also recommends exploring transformer neural networks to better understand the architecture behind LLMs (read more).

AI and the engineering of consumer trust

The Federal Trade Commission (FTC) has raised concerns about companies using new generative AI tools to manipulate people's beliefs, emotions, and behavior. These tools can influence people's decision-making and lead them to make harmful choices in areas such as finances, health, education, housing, and employment. Companies should avoid design elements that trick people into making harmful decisions and should make it clear whether people are communicating with a real person or a machine. The FTC is focusing intensely on how companies use AI technology and advises firms to factor in the need to train staff and contractors, and to monitor and address the actual use and impact of any tools they eventually deploy (read more).

OpenAI follows through on data protection promises

OpenAI has introduced new features to ChatGPT, its AI-powered chatbot, including the ability to turn off chat history, a new ChatGPT Business subscription tier for enterprise customers, and a data export option. Conversations started with chat history disabled will not be used to train OpenAI's models, but will be retained for 30 days and reviewed only if necessary. ChatGPT Business will follow the data usage policies of OpenAI's API, meaning that end users' data will not be used to train models by default. The new features come amid increasing regulatory scrutiny of OpenAI's data practices, with Italy, France, Spain, and Germany all investigating ChatGPT's GDPR adherence (read more).

Also read: OpenAI previews business plan for ChatGPT, launches new privacy controls and OpenAI rolls out 'incognito mode' on ChatGPT

NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems

NVIDIA has released an open-source toolkit, NeMo Guardrails, to make it easier for developers to build trustworthy and secure large language model (LLM) conversational systems, including ones powered by OpenAI's ChatGPT. Guardrails are a set of programmable rules that sit between a user and an LLM, monitoring, affecting, and dictating their interactions. The toolkit natively supports LangChain and adds a layer of safety, security, and trustworthiness to LLM-based conversational applications. The Colang modeling language, on which NeMo Guardrails is built, provides a readable and extensible interface for defining and controlling the behavior of conversational bots in natural language. The toolkit supports three types of guardrails (topical, safety, and security) to ensure that conversations remain within the intended domain, that information is accurate and supported by credible sources, and that LLM-based attacks are mitigated.
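To make the setup concrete, here is a minimal, hypothetical sketch of wiring up a topical guardrail with the nemoguardrails Python package; the Colang flow, example utterances, and model settings below are illustrative assumptions, not code taken from NVIDIA's documentation.

```python
# Minimal sketch of a topical guardrail with NeMo Guardrails.
# Assumptions: the `nemoguardrails` package as released at launch, an
# OpenAI API key in the environment, and an illustrative Colang flow.
from nemoguardrails import LLMRails, RailsConfig

# Colang rails: recognize off-topic requests and refuse them, keeping
# the conversation inside the bot's intended domain.
colang_content = """
define user ask off topic
  "What do you think about politics?"
  "Can you give me financial advice?"

define bot refuse off topic
  "I'm a support assistant, so I can't help with that topic."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

# YAML config: which underlying LLM the rails should wrap (hypothetical choice).
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

# Every user turn now passes through the guardrails before and after the
# LLM call; off-topic requests trigger the refusal flow defined above.
response = rails.generate(
    messages=[{"role": "user", "content": "What do you think about politics?"}]
)
print(response["content"])
```

Because the rails run outside the model itself, the same Colang definitions can in principle be reused across different underlying LLMs.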

Learn more about the NVIDIA NeMo Framework and NeMo Guardrails (github)

Hugging Face launches open-source alternative to ChatGPT, HuggingChat

Hugging Face has launched HuggingChat, an open-source alternative to ChatGPT that allows users to interact with an open-source chat assistant called Open Assistant. The tool is essentially a user interface that will soon allow users to plug in new chat models, making it similar to other AI chatbot clients such as Poe. However, it is unclear whether HuggingChat can be used commercially due to licensing issues. Hugging Face CEO Clem Delangue emphasized that HuggingChat is version zero, and there are limitations, but the company is iterating quickly on the interface and safety mechanisms and plans to support rapidly improving open-source models in the future (read more).

Open HuggingChat

PricewaterhouseCoopers to Pour $1 Billion Into Generative AI

PwC has announced plans to invest $1bn in generative artificial intelligence (AI) technology in its US operations over the next three years, partnering with OpenAI, creator of ChatGPT, and Microsoft. The firm will fund recruitment and AI training and target AI software makers for potential acquisition, with the aim of embedding generative AI into its own technology stack and offering advisory services to other companies. The technology will help quickly write reports, prepare compliance documents, analyze business strategies, identify inefficiencies, and create marketing materials and sales campaigns, among other applications.

While acknowledging the benefits, Mark D. McDonald, senior director analyst at IT research and consulting firm Gartner Inc., said the use of generative AI in areas like tax preparation requires validation by a professional. It might create murky compliance issues, he said. “Referencing an algorithm as the rationale for tax decisions is not an excuse that auditors will accept,” Mr. McDonald said (read more).

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

Geoffrey Hinton, a leading pioneer in artificial intelligence (AI), recently resigned from Google, where he had worked for over a decade, citing concerns about the risks of generative AI. He worries that the technology could be used to spread misinformation and could ultimately pose a risk to humanity. His concerns reflect a growing trend among tech industry insiders who believe that new AI systems pose significant risks to society and humanity. Over 1,000 technology leaders and researchers recently signed an open letter calling for a six-month moratorium on the development of new systems, citing those same profound risks.

Hinton, who is often referred to as "the Godfather of AI," was one of the pioneers of neural networks, a mathematical system that learns skills by analyzing data. In 2018, he and two other colleagues received the Turing Award, often referred to as the "Nobel Prize of computing," for their work on neural networks (read more).

How to worry wisely about artificial intelligence

The Future of Life Institute, an NGO, has called for a six-month pause in the creation of the most advanced forms of artificial intelligence (AI), raising concerns about the dangers of the technology. The request follows rapid progress in the development of “large language models” (LLMs) such as ChatGPT, which can solve logic puzzles, write computer code, and identify films from plot summaries written in emoji. Today's AI systems can be trained on much larger data sets drawn from online sources, and some, including ChatGPT, are being incorporated into search engines and other products.

Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. Governments are taking three different approaches: imposing heavy regulation, putting in place safeguards, and taking a “hands-off” approach. Many think a sterner approach is needed, including dedicated regulators, strict testing and pre-approval of AI systems before public release (read more).

Eight Things to Know about Large Language Models

This paper surveys eight potentially surprising points related to large language models (LLMs):

  1. LLMs predictably become more capable with increasing investment, even without targeted innovation.
  2. Many important LLM behaviors emerge unpredictably as a byproduct of that increasing investment.
  3. LLMs appear to learn and use representations of the outside world.
  4. There are no reliable techniques for steering LLM behavior.
  5. Experts cannot interpret the inner workings of LLMs.
  6. LLM performance is not limited by human performance on a task.
  7. LLMs do not necessarily express the values of their creators or the values encoded in web text.
  8. Brief interactions with LLMs are often misleading.

AI makes non-invasive mind-reading possible by turning thoughts into text

Scientists at the University of Texas at Austin have developed an AI-based decoder that can translate brain activity into a continuous stream of text without invasive surgery. The system uses a large language model to interpret the meaning of speech, allowing the scientists to match patterns of brain activity to that meaning. The decoder was able to reconstruct speech with high accuracy, even when participants were silently imagining a story, and could accurately describe some of the content of silent videos, although it struggled with certain aspects of language. The technology could offer new ways to restore speech for patients with conditions like motor neuron disease or stroke (read more).

First proper AI-generated movie is tormenting, dystopian and scary

Watch the video


This AI-generated pizza ad is both better and worse than you’re expecting

The video uses a combination of AI tools (script by GPT-4, images by Midjourney, video by Runway’s Gen-2, voiceover by ElevenLabs) and the end result is... interesting.

Top Tweets


View thread


View thread

/GEN

An elementary school designed with natural materials in the style of Frank Gehry // DALL-E