Armilla Review: UK developments, attitudes and opinions, large scale AI integrations and resources for navigating the modern AI wave

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia tailored to the interests of our community: regarding AI evaluation, assurance, and risk.
June 16, 2023
5 min read

Top Articles

UK to Host Inaugural Global Summit on AI Safety

In an effort to address the challenges and opportunities presented by the rapid advancement of Artificial Intelligence (AI), the United Kingdom has announced its plan to host the world's first major global summit on AI safety. With breakthroughs in AI transforming various aspects of our lives, the UK recognizes the need for agile leadership and a shared international framework to ensure the responsible development and use of AI technology.

The summit, scheduled for later this year, aims to explore the risks associated with AI and discuss strategies for mitigating them through international collaboration. The UK, being a world leader in AI and ranking third globally, is well-positioned to convene discussions on the future of AI and leverage its expertise to lead the way in fostering a safe and secure environment for AI development and deployment.

This initiative comes at a crucial time, as concerns about the potential risks of AI have been raised by experts, likening them to global threats such as pandemics and nuclear weapons. By convening leaders from government, industry, academia, and civil society, the UK hopes to facilitate smart conversations and create a collaborative approach to address the challenges and ensure AI's benefits are harnessed for the betterment of humanity.

Overall, the UK's commitment to organizing the Global Summit on AI Safety underscores the importance of international cooperation in navigating the ethical and practical dimensions of AI, while leveraging its unique position as a global leader in the field to drive progress and shape the responsible future of AI.

Source: UK Government


Exploring Public Attitudes towards Artificial Intelligence: Insights from a UK Survey

A nationally representative survey conducted in the UK explored public attitudes towards artificial intelligence (AI) and its various applications. The findings revealed a mix of positive views and concerns among respondents. While many AI uses were seen as beneficial, particularly in health, science, and security, there were also concerns raised about advanced robotics and autonomous weapons. The public's perception of advantages included speed, efficiency, and improved accessibility, while concerns centered around the potential replacement of human judgment and issues of transparency and accountability in decision-making processes. Overall, the majority of respondents expressed a desire for regulation and clear procedures for appealing AI decisions, as well as a preference for human involvement and explainability in AI decision-making.

The survey highlighted the need for a nuanced understanding of public attitudes towards AI, considering both the benefits and risks associated with its diverse applications. It emphasized the importance of addressing concerns around accountability, privacy, and explainability to gain public trust and support. The findings can serve as a valuable resource for researchers, developers, and policymakers in shaping AI development and governance to align with public expectations and preferences in the UK.

Source: Ada Lovelace Institute


AI: The Saviour of the World

The rise of Artificial Intelligence (AI) has sparked concerns and fears among people. However, it's time to embrace the positive impact of AI on our world. AI will not destroy humanity but rather save it in numerous ways. This blog post aims to shed light on the potential of AI to enhance various aspects of our lives and address common misconceptions surrounding its risks.

  1. The Power of AI: AI is the application of mathematics and software code to enable computers to understand, synthesize, and generate knowledge similar to humans. It is a tool created and controlled by humans, offering vast benefits across diverse fields such as medicine, law, coding, and the arts.
  2. AI Enhancing Human Intelligence: Studies consistently show that human intelligence leads to improved outcomes in various domains of life. AI provides an opportunity to augment human intelligence and make advancements in academic achievement, job performance, creativity, healthcare, and more. By leveraging AI, we can raise our standards of living and tackle challenges like climate change and space exploration.
  3. AI's Impact on Society: In the era of AI, every individual can have an AI tutor, assistant, or mentor by their side. Children can benefit from AI tutors that are infinitely patient and knowledgeable, helping them maximize their potential. AI can assist professionals in fields such as science, engineering, business, and healthcare, amplifying their capabilities and driving productivity growth and economic prosperity.
  4. Scientific and Technological Advancements: AI's role in scientific research and discovery is invaluable. It helps decode the laws of nature, leading to breakthroughs in technologies, medicines, and our understanding of the world. Moreover, the creative arts will flourish in a golden age, as AI-augmented artists can realize their visions faster and on a larger scale.
  5. Improved Decision-making and Conflict Resolution: AI can revolutionize decision-making processes, especially in areas like warfare. By assisting military commanders and political leaders, AI advisors can minimize risks, errors, and unnecessary bloodshed. The magnification effects of better decisions by leaders can have a significant impact on the well-being of those they lead.
  6. AI's Humanizing Potential: Contrary to popular belief, AI can be a humanizing force. AI-powered art enables individuals to express their creativity even without technical skills. Empathetic AI friends and medical chatbots can enhance mental well-being and provide support. Rather than making the world harsher, AI has the potential to make it warmer and kinder.
  7. Addressing the Panic: The current public conversation around AI is plagued by a moral panic driven by unfounded fears and misinformation. Historical patterns of moral panics associated with new technologies emphasize the need for rational thinking and critical evaluation. While some concerns regarding AI are legitimate, they should not overshadow the tremendous potential it offers.

Challenging Common Misconceptions:

  1. AI Will Not Kill Us: AI lacks the capacity for self-awareness and motivation. It is a human-made tool that does not possess intentions or desires to harm humanity. The notion that AI will develop a mind of its own and try to kill us is a superstitious misconception.
  2. AI Will Not Ruin Society: Concerns about AI alignment and the impact on society revolve around regulating AI-generated speech and thought. Striking a balance between free speech and necessary restrictions is crucial. However, allowing a small group of individuals to determine what AI can say or generate raises concerns of censorship and control.
  3. AI Will Not Take All Our Jobs: The fear that AI will lead to widespread job loss overlooks the productivity growth that technology brings. As technology is applied to production, it reduces costs, increases demand, and drives the creation of new industries and jobs. AI empowers individuals to be more productive, resulting in economic

Source: Andreessen Horowitz


The AI Canon: Essential Resources for Navigating the Modern AI Wave

In the rapidly advancing field of artificial intelligence (AI), staying up to date with the latest research can be a daunting task. To help both beginners and experts navigate the ever-expanding landscape of AI, a curated list of resources called the "AI Canon" has been compiled. This collection includes influential papers, blog posts, courses, and guides that have significantly impacted the field in recent years.

The AI Canon starts with a gentle introduction to transformer and latent diffusion models, which are at the forefront of the current AI wave. It then delves into technical learning resources, practical guides for building with large language models (LLMs), and analysis of the AI market. The reference list includes landmark research results, such as the groundbreaking "Attention is All You Need" paper that introduced transformer models and revolutionized generative AI.

For those new to AI, the Canon offers accessible articles like Andrej Karpathy's "Software 2.0" and Stephen Wolfram's explanation of modern AI models. Foundational learning resources cover neural networks, back-propagation, and embeddings, providing a solid understanding of the fundamental concepts. The Canon also features comprehensive courses like Stanford's CS229 and CS224N, which cover machine learning and natural language processing (NLP), respectively.

Understanding transformers and large models is crucial in the AI field, and the Canon provides a variety of resources to deepen one's knowledge. From technical overviews like "The Illustrated Transformer" to in-depth posts on transformers at a source code level, there is something for every level of expertise. The Canon also explores the practical aspects of building with LLMs, offering guides, code examples, and even courses like the LLM Bootcamp.

In addition to technical resources, the Canon includes market analysis and other perspectives on the impact of generative AI. It covers various sectors, including gaming, finance, healthcare, and more. These articles shed light on the opportunities and challenges posed by generative AI, its implications for the job market, and its potential to transform industries.

With the AI Canon, both beginners and experts can explore a curated list of resources that have shaped the field of AI. From foundational knowledge to cutting-edge research, this collection serves as a valuable guide for anyone interested in staying informed and advancing their understanding of modern AI.

Source: Andreessen Horowitz


Ethical Framework for Integrating Generative AI: Mitigating Risks and Maximizing Benefits

Generative AI has gained popularity, but its adoption by businesses brings ethical risks. To ensure responsible use, organizations should prioritize ethical guidelines specific to accuracy, safety, honesty, empowerment, and sustainability. They need to use reliable data, keep it fresh and well-labeled, involve humans in the decision-making process, conduct thorough testing, and gather feedback. These guidelines help evaluate risks, minimize harm, enhance human experiences and facilitate responsible development and integration of generative AI tools.

Source: Harvard Business Review


Google Unveils Secure AI Framework to Safeguard Artificial Intelligence Systems

Google has unveiled a new Secure AI framework to enhance the security of artificial intelligence (AI) systems. The framework emphasizes the need for organizations to implement basic security controls to protect against cyber threats targeting AI models and data. Google aims to address security and privacy concerns early on, urging companies to prioritize fundamental security elements alongside advanced approaches. The framework includes ideas such as extending existing security controls, expanding threat intelligence research, incorporating automation into cyber defenses, conducting regular security reviews, continuous testing, and establishing a team proficient in understanding AI-related risks. By collaborating with customers, governments, and researchers, Google seeks to refine and improve the framework to effectively address evolving security challenges in the AI landscape.

Google's Secure AI framework provides organizations with a comprehensive set of guidelines to safeguard their AI systems. With the rapid integration of AI into various workflows, Google aims to prevent the same oversight that occurred with social media platforms, where data privacy and security were initially neglected. The framework emphasizes the importance of getting the basics right and managing the security of AI systems through practices that resonate with existing data access management strategies. By adopting these measures, organizations can proactively protect their AI models and data from potential threats.

To incentivize the adoption of the framework, Google is working closely with its customers and governments to apply the concepts effectively. Additionally, Google is expanding its bug bounty program to include AI safety and security vulnerabilities, encouraging researchers to identify and report potential flaws. Seeking feedback from industry partners and government bodies, Google acknowledges the value of external perspectives and aims to continuously improve the framework. Collaboration among stakeholders is crucial for refining AI security practices and addressing emerging challenges in the ever-evolving AI landscape.

Source: Axios


Meta Plans to Integrate AI Across Platforms: What You Need to Know

In a recent announcement, Meta CEO Mark Zuckerberg revealed plans to incorporate generative AI technologies into Facebook and Instagram, among other flagship products. While Meta has been a pioneer in generative AI research, the implementation of these technologies in its platforms has been relatively slow. However, Zuckerberg's announcement signals a significant shift, with a range of AI technologies in different stages of development, targeting both internal and consumer use.

Expanding AI Capabilities: Meta's plans encompass several exciting developments, including:

  1. AI-powered Photo Modification: Users will soon be able to use text prompts to edit their own photos and share them on Instagram Stories, adding a new level of creativity and personalization to the platform.
  2. AI Agents for Assistance and Entertainment: Meta intends to introduce AI agents with distinct personalities and capabilities to provide assistance and entertainment. Initially, these agents will be integrated into Messenger and WhatsApp, enhancing user experiences.
  3. Generative AI Hackathon: The company has scheduled an internal hackathon in July, focusing specifically on generative AI. This event aims to foster innovation and accelerate the development of AI technologies within Meta.

Commitment to Research and Open Source: Zuckerberg also reiterated Meta's commitment to publishing research findings and sharing AI technologies with the open-source community. This approach encourages collaboration, knowledge exchange, and the advancement of AI capabilities across the industry.

Competition and Industry Trends: While Meta has been cautious in integrating generative AI into its products, other tech giants have made significant strides in this area. Microsoft, for example, is incorporating AI-based copilots into key products like Office, Windows, and Bing. Snapchat has launched its Snap AI chatbot, and companies ranging from Adobe to Salesforce have integrated generative AI into their core businesses.

Zuckerberg's Perspective on the Metaverse and VR: Addressing concerns about competition, Zuckerberg highlighted that Meta's focus on integrating AI does not replace its metaverse efforts but complements them. He expressed his vision for virtual reality, emphasizing the importance of connection and activity, differentiating Meta from Apple's recent Vision Pro headset, which he characterized as isolating.

Meta's decision to embed generative AI technologies into its flagship platforms, Facebook and Instagram, showcases its commitment to leveraging cutting-edge advancements for enhanced user experiences. By incorporating AI agents, AI-powered photo modifications, and hosting a dedicated hackathon, Meta aims to propel the development and adoption of AI across its ecosystem. As the AI landscape continues to evolve, Meta's strategic move aligns with the broader industry trend of integrating AI capabilities into various products and services.

Also watch: Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp | Lex Fridman Podcast

Source: Axios


Allegations of Gender Discrimination in Facebook Job Ads Raise Concerns

Facebook, now known as Meta, is facing new allegations of gender discrimination in its delivery of job ads. Human rights groups have filed complaints in Europe, asserting that the algorithm used by the company to target users with job advertisements is discriminatory. These allegations follow Facebook's previous promises to address similar concerns in other regions. The allegations are based on research by Global Witness, an international nonprofit, which suggests that the issue extends beyond specific regions and is a global problem.

Targeted Ads and Gender Stereotypes: Global Witness' research reveals that Facebook's ad platform frequently targets users with job postings based on historical gender stereotypes. For example, job ads for mechanic positions were primarily shown to male users, while ads for preschool teachers were predominantly shown to female users. This biased algorithmic targeting perpetuates gender disparities and restricts opportunities for progress and equity in the workplace. The research conducted by Global Witness indicates that this issue is not limited to specific regions but extends globally.

Filing Complaints and Calls for Investigation: Global Witness, along with nonprofits Bureau Clara Wichmann and Fondation des Femmes, filed complaints with human rights agencies and data protection authorities in France and the Netherlands. The groups are urging these agencies to investigate whether Meta's practices violate human rights or data protection laws in the respective countries. If the allegations are substantiated, Meta could face fines, sanctions, or pressure to make further changes to its product. Similar complaints have been previously filed in the UK, and investigations into those concerns are still ongoing.

Meta's Response and Algorithmic Fairness: Meta spokesperson Ashley Settle stated that the company applies targeting restrictions to employment, housing, and credit ads and offers transparency about these ads in its Ad Library. Settle denied allowing advertisers to target ads based on gender and emphasized Meta's collaboration with stakeholders, experts, and human rights groups to study and address algorithmic fairness. However, Meta did not specifically address the new complaints filed in Europe or disclose the countries where targeting options are limited for employment, housing, and credit ads.

Historical Discrimination and Reproduction of Biases: Facebook has faced previous allegations of discrimination in delivering job advertisements. In 2019, the company made changes to prevent biased delivery of housing, credit, and employment ads based on protected characteristics such as gender and race. However, Global Witness' research suggests that Facebook's own algorithm undermines these changes, reinforcing biases and perpetuating disparities. The biased targeting of job ads based on gender stereotypes can contribute to workplace inequities and exacerbate existing pay disparities.

Concerns and Calls for Transparency: Human rights advocates argue that Facebook's algorithmic biases can have a detrimental impact, particularly on marginalized individuals who rely on the platform for job searches. They emphasize the need for transparency and control over algorithms, urging Facebook to take responsibility for the discriminatory outcomes that its algorithm produces. The complaints filed by Global Witness and its partners aim to pressure Meta into improving its algorithm, increasing transparency, and preventing further discrimination.

The allegations of gender discrimination in Facebook's delivery of job ads raise significant concerns about equity and fairness in the workplace. Research conducted by Global Witness suggests that biased algorithmic targeting is a global issue, reinforcing gender stereotypes and hindering opportunities for individuals. The filing of complaints by human rights groups in Europe demonstrates the need for investigations into Meta's practices. Ultimately, the outcome of these investigations could have implications for algorithmic transparency, user data protection, and potential fines for Meta if violations are found.

Source: CNN


Challenges with Consumer Finance Chatbots

The report by the Consumer Financial Protection Bureau (CFPB) on the use of chatbots in consumer finance explores the impact of advanced technologies, such as artificial intelligence, on customer service in financial institutions. The key findings of the report are as follows:

  1. Financial institutions are increasingly using chatbots as a cost-effective alternative to human customer service. The top 10 largest commercial banks have deployed chatbots, and it is estimated that around 37% of the U.S. population interacted with a bank's chatbot in 2022.
  2. While chatbots may be useful for resolving basic inquiries, their effectiveness diminishes as problems become more complex. Some customers experience negative outcomes, such as wasted time, frustration, inaccurate information, and increased fees when relying on chatbots for complex issues.
  3. Deploying chatbot technology poses risks for financial institutions. They may violate legal obligations, erode customer trust, and cause consumer harm if chatbots fail to comply with federal consumer financial laws. Chatbots also raise privacy and security risks when poorly designed or when customers are unable to receive adequate support.

The report identifies several areas of concern, including the limited ability of chatbots to solve complex problems, hindering access to timely human intervention, technical limitations and security risks, and the integration of deficient chatbots violating laws and harming customers.

The shift towards algorithmic support has financial incentives but can lead to legal violations, diminished service quality, and other harms. The CFPB is actively monitoring the market and expects institutions to use chatbots in a manner consistent with customer and legal obligations. They encourage individuals experiencing issues to submit complaints to the CFPB.

Source: Consumer Financial Protection Bureau


Vulnerability Discovered: Nvidia's AI Software Susceptible to Data Leaks

New research has revealed that Nvidia's artificial intelligence software, the "NeMo Framework," can be manipulated to bypass safety measures and expose private information. The system, designed to facilitate the use of large language models, has the potential to power generative AI applications like chatbots. However, analysts from San Francisco-based Robust Intelligence found that the safeguards in place could be easily breached, posing significant challenges for AI companies seeking to commercialize this technology.

Exploiting the Vulnerability: The researchers discovered that they could override the safety controls of Nvidia's system within hours. By instructing the software to swap letters or diverge from specific subjects, they were able to elicit personally identifiable information from databases and bypass the intended limitations of the AI model. These findings highlight the complexities and expertise required to address the pitfalls associated with the widespread deployment of AI technologies.

Concerns and Response: Yaron Singer, CEO of Robust Intelligence and a computer science professor at Harvard University, emphasized that these findings serve as a cautionary tale. The ease with which the safeguards were defeated underscores the challenges faced by AI companies as they strive to harness the potential of this technology. In response to the research, Robust Intelligence advised clients to avoid using Nvidia's software product. Nvidia acknowledged the issues raised by the analysts and informed Robust Intelligence that it had addressed one of the root causes behind the problems.

Building Trust in AI: As AI companies continue to develop language models and deploy chatbots, building public trust in the technology becomes crucial. Companies like Google and OpenAI have implemented guardrails to prevent offensive or domineering behavior in their chatbots. However, numerous AI initiatives, including Nvidia's, have encountered safety concerns. Nvidia and other industry players recognize the need to instill confidence in AI technology and emphasize its potential benefits rather than focusing solely on potential risks.

The discovery of vulnerabilities in Nvidia's AI software underscores the challenges of ensuring the safe and responsible use of AI technologies. As the demand for AI-powered applications grows, it is crucial for companies to address these pitfalls, enhance safety measures, and build public trust in the potential of AI. By doing so, they can harness the benefits of this promising technology while mitigating risks and protecting user privacy.

Source: Financial Times


ChatGPT App Gets Major Update: Native iPad Support and Siri Integration

OpenAI's ChatGPT app has received a significant update just weeks after its release on the App Store. The latest version brings native support for iPads, allowing users to fully leverage the AI chatbot app on their tablets. Additionally, the update introduces integration with Siri and Shortcuts, enabling users to create custom shortcuts and voice commands to interact with ChatGPT. The update also includes drag-and-drop functionality, making it easier to transfer messages between the chatbot and other apps.

Enhanced iPad Experience: With the new update, ChatGPT on iPad now runs in full-screen mode, optimized for the tablet's interface. This improvement, along with drag-and-drop support, makes the app more convenient and user-friendly compared to using the web browser version. Users can seamlessly multitask, using split-screen mode to seek answers from ChatGPT while dragging and dropping the responses into other windows.

Siri and Shortcuts Integration: ChatGPT now supports Siri and Shortcuts, allowing users to create customized shortcuts that interact with the chatbot. Users can save favourite prompts as shortcuts and automate additional actions after receiving a response, such as saving the reply to a different app. Siri can also be used to open ChatGPT through voice commands, enhancing the hands-free experience.

App Store Success and Regulation: Since its launch, the ChatGPT mobile app has maintained its popularity on the App Store. It continues to rank highly in the Top Free charts and has garnered a 4.8-star rating from over 421,000 reviews. The app has accumulated 7.3 million worldwide installs on iOS and holds the number one spot in 31 countries. In response to the influx of ChatGPT clones, Apple has updated its App Store rules to prevent impersonation of other apps, ensuring a safer and more reliable user experience.

Source: Tech Crunch


Top Tweets

No alt text provided for this image

View thread


/GEN

No alt text provided for this image
A robot having a chat with a human // DALL-E