Armilla Review - From Open Source Governance to Response Quality: Navigating Risks in Generative AI

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
July 25, 2023
5 min read

Navigating the Risks of Generative AI: A Playbook for Risk Executives

In the realm of Generative Artificial Intelligence (GenAI), risk executives are tasked with managing a wide range of potential risks and challenges that arise from its adoption. These risks span from privacy concerns and cybersecurity threats to regulatory compliance and legal obligations. To fully harness the benefits of this groundbreaking technology, businesses must implement governance strategies that carefully balance these risks with the rewards of innovation. Such an approach fosters trust among customers, investors, and other stakeholders and sets the stage for a competitive edge.

Key C-suite leaders play pivotal roles in driving responsible AI usage. The Chief Information Security Officer (CISO) must tackle cybersecurity threats, like sophisticated phishing and deep fake attacks. The Chief Data Officer (CDO) and Chief Privacy Officer (CPO) face the challenge of ensuring data quality, preventing unauthorized access, and complying with privacy regulations. The Chief Compliance Officer (CCO) must adapt to emerging regulatory requirements, while the Chief Legal Officer (CLO) must address legal risks arising from inadequate governance.

Internal Audit leaders play an essential role in confirming that AI systems align with the company's goals, requiring new methodologies and skill sets. Simultaneously, the Chief Financial Officer (CFO) and Controller must address financial risks associated with GenAI, such as hallucination risks and reasoning errors in numerical computation.

To achieve trusted AI, effective governance is vital, with stakeholders from diverse fields contributing their expertise. By investing in their people's knowledge and experience, businesses can critically evaluate the outputs of Generative AI models and ensure their responsible use. Ultimately, by following these principles and adopting responsible AI practices, companies can unlock GenAI's potential while maintaining trust and credibility with their stakeholders and society at large.


Learn more: Managing the risks of generative AI: A playbook for risk executives – beginning with governance


Source: PwC

EU A.I. Act: Embracing Open-Source Developers for Responsible A.I. Governance

The EU's A.I. Act is set to become the world's first comprehensive regulation for artificial intelligence, shaping the global landscape for A.I. development and deployment. With its risk-based approach, the Act aims to strike a balance between fostering responsible A.I. innovation and mitigating potential risks associated with the technology. Open-source developers play a crucial role in this process, as their contributions and tools, like model cards and datasheets, have paved the way for responsible A.I. development.

The Act acknowledges the importance of open-source innovation by granting a risk-based exemption for open-source developers. While the Act encourages the use of documentation best practices, compliance responsibilities fall on the entities that incorporate open-source components into A.I. applications. However, concerns remain about how open-source developers, academics, and non-profits can comply with obligations tailored for A.I. products, particularly those related to generative A.I. systems.

The EU's approach can serve as a model for other countries considering A.I. regulation. As A.I. technology and policies evolve globally, a risk-based approach that embraces open-source collaboration is essential for responsible A.I. development. Policymakers worldwide are taking note of the EU's efforts, recognizing the value of open-source innovation in shaping the future of A.I. governance. Inclusion of open-source developers in discussions and regulations will be vital to promote sustainable, inclusive, and transparent A.I. development practices worldwide.


Source: Fortune

Enhancing Risk Assessment for AGI Companies: Lessons from Safety-Critical Industries

This paper delves into the pressing need for improved risk management practices at AGI (Artificial General Intelligence) companies like OpenAI, Google DeepMind, and Anthropic, given the potential catastrophic risks associated with AGI development. To support these efforts, the paper reviews well-established risk assessment techniques utilized in safety-critical industries and suggests their application to assess catastrophic risks from AI. Three risk identification techniques (scenario analysis, fishbone method, and risk typologies and taxonomies), five risk analysis techniques (causal mapping, Delphi technique, cross-impact analysis, bow tie analysis, and system-theoretic process analysis), and two risk evaluation techniques (checklists and risk matrices) are thoroughly discussed. The paper provides insights into how AGI companies could adopt these techniques, along with their respective benefits, limitations, and recommendations.
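
As a simple illustration of the last of these techniques, a risk matrix combines an ordinal likelihood score with an ordinal severity score to produce a qualitative risk rating. The sketch below is a minimal Python rendering of that idea; the scales, thresholds, and labels are illustrative assumptions, not values taken from the paper.

```python
# Minimal risk-matrix sketch: likelihood x severity -> qualitative risk rating.
# The scales and thresholds are illustrative assumptions, not taken from the paper.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]  # scored 1-5
SEVERITY = ["negligible", "minor", "moderate", "major", "catastrophic"]    # scored 1-5

def risk_rating(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) pair onto a qualitative rating via their product."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (SEVERITY.index(severity) + 1)
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Example: a low-probability but catastrophic scenario still rates "high".
print(risk_rating("unlikely", "catastrophic"))  # -> high
```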


Source: arXiv

Alarming Decline in ChatGPT Response Quality Revealed by Researchers

Researchers from Stanford and UC Berkeley have documented a concerning decline in the quality of responses from ChatGPT, particularly in the latest version, GPT-4. For instance, GPT-4's accuracy at identifying prime numbers plummeted from 97.6% to a mere 2.4% between March and June 2023. The research paper, "How Is ChatGPT's Behavior Changing Over Time?", highlighted the degradation in ChatGPT's performance, raising questions about its reliability.
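
The article does not reproduce the paper's prompts or data, but an evaluation of this kind can be sketched: sample numbers, ask a given model snapshot whether each is prime, and score the answers against an exact primality check. In the sketch below, `ask_model` is a hypothetical placeholder for a call to the model version under test, and the number range and prompt wording are illustrative assumptions rather than the paper's setup.

```python
import random

def is_prime(n: int) -> bool:
    """Exact primality check by trial division (sufficient for small test numbers)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def ask_model(question: str) -> str:
    """Hypothetical placeholder for querying a specific dated model snapshot."""
    raise NotImplementedError("wire this up to the model/version under test")

def prime_identification_accuracy(num_samples: int = 500, seed: int = 0) -> float:
    """Fraction of yes/no primality questions a model snapshot answers correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(num_samples):
        n = rng.randrange(1_001, 20_000, 2)  # odd test numbers; range is an arbitrary choice
        reply = ask_model(f"Is {n} a prime number? Answer yes or no.")
        says_prime = reply.strip().lower().startswith("yes")
        correct += says_prime == is_prime(n)
    return correct / num_samples
```

Running the same harness against each dated snapshot (for example, the March and June 2023 versions) would make the kind of drift the researchers report directly measurable.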

GPT-4, which was recently touted as OpenAI's most advanced model, was released to paying API developers with promises of powering innovative AI products. However, the study reveals significant shortcomings in GPT-4's responses to straightforward queries, posing potential concerns for users who rely on ChatGPT-generated information.

The research team devised tasks to measure various qualitative aspects of ChatGPT's underlying large language models (LLMs), including math problem-solving, answering sensitive questions, code generation, and visual reasoning. The study found substantial discrepancies in the performance of both GPT-4 and GPT-3.5 between their March 2023 and June 2023 versions.

The fluctuations in response quality over such a short period raise questions about how these LLMs are updated and whether improvements in certain aspects negatively impact others. OpenAI may need to address these concerns and consider monitoring and publishing regular quality checks for its paying customers. Additionally, businesses and governmental organizations could take a proactive approach by monitoring basic quality metrics for these LLMs, given their potential commercial and research implications.


Source: Future US Inc

OpenAI's Privacy Concerns: Advanced Image Recognition in ChatGPT Raises Alarms

OpenAI's advanced version of ChatGPT, featuring highly capable image recognition, has raised privacy concerns that hinder its wider release. The AI-powered chatbot's ability to describe images, recognize specific individuals, and answer questions about visual content has exceeded expectations, presenting exciting possibilities for assisting blind users. However, concerns about potential misuse and safety issues have prompted OpenAI to revoke early access to the feature and obscure people's faces.

The collaboration between OpenAI and Danish startup Be My Eyes aims to create a virtual volunteer to help users with visual impairments, replacing human volunteers. Nonetheless, the chatbot's ability to identify public figures, and its "hallucination" of names for individuals on the verge of fame, have raised red flags. OpenAI fears the tool could make unsafe assessments of individuals, such as their gender or emotional state, which could lead to legal issues, especially in regions like Europe that require consent for the use of biometric data.

Microsoft, a significant investor in OpenAI with access to the visual analysis tool, supports OpenAI's cautious approach and commitment to responsible AI deployment. Nonetheless, relying on chatbots to self-regulate what they share remains imperfect, necessitating further safety measures as AI technology advances.

The power of image recognition in ChatGPT raises pertinent questions about responsible deployment and privacy safeguards in AI systems, especially when handling sensitive information such as facial data. As AI technology continues to evolve, ensuring robust privacy protection and ethical use of AI tools becomes a critical challenge for developers and tech companies.


Source: tech.co

Apple Developing AI Chatbot to Challenge Industry Giants

Apple is taking on industry rivals like OpenAI and Google by developing its own AI chatbot, internally known as "Apple GPT." The tech giant has built its own framework, called "Ajax," for creating large language models similar to OpenAI's ChatGPT and Google's Bard. Ajax runs on Google Cloud and is built on Google JAX, Google's machine learning framework. While Apple has yet to decide on a public release strategy, the company aims to make a significant AI-related announcement next year.

Initially halted due to security concerns, the chatbot has been gradually rolled out to more Apple employees, with access requiring special approval. Employees are using the chatbot for product prototyping, summarizing text, and answering questions based on its training data. The chatbot's capabilities resemble those of commercially available products like Bard and ChatGPT, with no distinctive additional features.

In line with this development, Apple is actively seeking generative AI talent through job postings. Despite having introduced AI features across its products, Apple is now catching up with rising consumer demand for generative AI tools that assist with tasks such as drafting essays and generating images.

Apple is committed to addressing potential privacy concerns associated with AI, as CEO Tim Cook emphasizes a thoughtful approach to AI integration across the company's offerings. With this new venture into AI chatbot technology, Apple aims to compete with its industry rivals and meet consumer expectations for innovative AI tools.


Source: TechCrunch

Meta Unveils LLaMA 2 AI Model for Commercial Use, Raising Competition in Generative AI Landscape

Meta, the parent company of Facebook, has made a significant announcement by unveiling LLaMA 2 (Large Language Model Meta AI), a freely available AI model for commercial use. LLaMA 2 represents a powerful leap from its predecessor, LLaMA, which was limited to research use. The move has potential implications for the rapidly evolving world of generative AI, giving enterprises a new, free option in the market and challenging rivals like OpenAI's ChatGPT Plus and Cohere.

Microsoft Azure will host LLaMA 2, which is noteworthy as Azure is also the primary home for OpenAI's GPT-3 and GPT-4 models, highlighting Microsoft's partnerships with both Meta and OpenAI. Meta's CEO, Mark Zuckerberg, emphasized the significance of open-sourcing LLaMA 2, expressing his belief that it will drive innovation and improve safety and security in AI development.

LLaMA 2 is a transformer-based, auto-regressive language model available in multiple sizes: 7, 13, and 70 billion parameters. It was trained on two trillion tokens, a dataset roughly 40% larger than its predecessor's, and its context length has doubled to 4,096 tokens. Meta claims LLaMA 2 performs better than its predecessor on the benchmarks it has published.
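
As an illustration of how such released checkpoints are typically consumed, the sketch below generates text with the smallest chat variant through the Hugging Face transformers library. The model identifier, the use of transformers, and the generation settings are assumptions of this example (the weights are gated behind Meta's license), not details from Meta's announcement.

```python
# Minimal sketch: text generation with a LLaMA 2 chat checkpoint via Hugging Face transformers.
# Assumes access to the gated "meta-llama/Llama-2-7b-chat-hf" weights has been granted,
# and that the accelerate package is installed (needed for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize in one sentence why model cards matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```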

Safety measures have been a priority in LLaMA 2's development. The model undergoes a series of supervised fine-tuning stages and reinforcement learning from human feedback, aiming to enhance safety and limit potential biases. Meta's research paper provides comprehensive details on the steps taken to ensure safety and transparency in the model.

Also read these POVs about how LLaMA 2 is not open source:

- Meta’s LLaMa 2 license is not Open Source | Voices of Open Source

- LLaMA2 isn't "Open Source" - and why it doesn't matter | Alessio Fanelli


Source: VentureBeat