Armilla Review - AI Governance & Innovation: A Comprehensive Roundup

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
October 4, 2023
5 min read

Manage the Risks of Third-Party AI Tools and Prioritize Responsible AI Practices

As artificial intelligence continues to advance, organizations face escalating risks related to its use, including financial, reputational, and legal consequences. A responsible AI framework is imperative to safeguard against these risks. A recent report by MIT Sloan Management Review and Boston Consulting Group reveals that 78% of organizations use third-party AI tools, with over half relying on them exclusively. Alarmingly, more than half of AI failures (55%) stem from these third-party tools. To mitigate these risks, the report suggests five key strategies:

  1. Expand Responsible AI Programs: To keep pace with responsible AI leaders, organizations should scale and institutionalize responsible AI programs throughout their operations.
  2. Properly Evaluate Third-Party Tools: As third-party tool usage grows, organizations must rigorously evaluate these tools using various methods, including assessing vendor responsible AI practices and regulatory compliance.
  3. Prepare for Regulation: Organizations, especially in regulated industries, should adopt structured risk management approaches, anticipating future regulations governing AI.
  4. Engage CEOs: Active CEO involvement in responsible AI efforts, including hiring, goal-setting, and product-level discussions, can yield substantial business benefits and risk mitigation.
  5. Invest in Responsible AI: Given the soaring adoption of AI and its associated risks, businesses should increase investment in ethical and responsible AI practices, as failure to do so could expose them to significant material risks.

In a landscape where AI's influence is expanding rapidly, prioritizing responsible AI practices is crucial for organizations to protect their interests and maintain public trust.

Read the report

Source: MIT Sloan

President Biden to Unveil Executive Order on AI, Emphasizing Responsible Innovation

President Joe Biden has announced plans to issue a significant executive order addressing artificial intelligence (AI) in the coming weeks, reinforcing his administration's commitment to responsible AI innovation. While specific details of the order remain undisclosed, it builds on a previous proposal for an "AI Bill of Rights." Civil society groups have urged the inclusion of this bill in the executive order, calling for federal agencies to implement it. In tandem with this effort, the U.S. Senate is actively educating lawmakers about AI in preparation for extensive legislative work. During a meeting of the Presidential Council of Advisors on Science and Technology, President Biden underscored his keen interest in AI's potential and risks, emphasizing the importance of harnessing AI's power for good while mitigating associated risks. He also highlighted the United States' dedication to collaborating with international partners, including the United Kingdom, to establish safeguards for AI. The meeting showcased various AI use cases, from predicting climate change-related extreme weather to advancing material science and exploring the origins of the universe. President Biden's forthcoming executive order signals a significant step toward shaping AI policy and regulation in the United States.

Source: CNN

Medium Takes a Stand: Default 'No' to AI Training on Writers' Stories

Medium CEO Tony Stubblebine has unveiled Medium's new policy regarding AI training on stories published on their platform, emphasizing the need for consent, credit, and compensation for writers. Stubblebine expressed concerns about the current state of AI-generated content, highlighting that AI companies often profit from writers' work without seeking permission, offering credit, or providing compensation. To address this, Medium has implemented a default "No" policy on AI training, taking measures to block AI companies from using writers' stories until fairness issues are resolved. The company is also exploring the formation of a coalition with other platforms to define fair AI use practices. Stubblebine outlined potential solutions for consent, credit, and compensation, including the option for writers to opt out of AI training in exchange for a 10% earnings boost and the possibility of allowing AI-based search engines to summarize content while giving proper credit. Medium's stance reflects its commitment to protecting writers' interests and promoting ethical AI practices.

Source: Medium

Google Introduces Google-Extended: Empowering Web Publishers with AI Content Control

Google has unveiled a new control called Google-Extended, which gives web publishers greater say over whether their content is used in the development of generative AI tools such as Bard and the Vertex AI generative APIs. Website administrators can now use Google-Extended, implemented through robots.txt, to decide whether their content may be used to improve these AI models. Google emphasizes the importance of transparent and scalable controls for web publishers and plans to work with the web and AI communities on additional machine-readable methods for content choice and control as the AI landscape evolves.
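As a rough illustration of the mechanism described above, a publisher who wants to opt a site out of use in these AI models could add a rule along the following lines to the site's robots.txt (a minimal sketch; the exact token name and semantics are defined in Google's documentation):

```
User-agent: Google-Extended
Disallow: /
```

Here `Disallow: /` opts out the entire site; narrower paths can be listed instead to withhold only parts of a site, following ordinary robots.txt conventions.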

Source: Google

Microsoft Launches Fifth AI Co-Innovation Lab in San Francisco to Foster Innovation

Microsoft is expanding its network of AI Co-Innovation Labs with the launch of its fifth lab in San Francisco. These labs provide startups and established companies with a collaborative environment to develop, prototype, and test AI solutions, with the aim of accelerating new AI products and services and growing the ecosystem of AI-driven solutions. Microsoft's commitment to AI development aligns with its principles of responsible AI and consumer privacy. The labs offer a platform for experimenting with AI development tools and applying emerging technologies to real-world business challenges. Through this collaborative model, Microsoft aims to empower organizations of all sizes to leverage AI in innovative ways. The San Francisco lab will welcome participants interested in Azure, AI use cases, and solving complex problems collaboratively.

Source: Microsoft

Cloudflare Launches Workers AI: Democratizing AI Inference on a Global Scale

Cloudflare, a developer platform with a substantial user base, has introduced Workers AI, an AI inference as a service platform. This innovation aims to simplify AI model deployment for developers, making it accessible, serverless, privacy-focused, and globally distributed. By leveraging Cloudflare's network of GPUs, developers can run well-known AI models with just a few lines of code. The platform offers off-the-shelf models and a REST API for easy integration into various development stacks. Cloudflare's commitment to privacy ensures that user data is not used to train models, making it suitable for both personal and business applications. Additionally, Cloudflare plans to expand its GPU coverage worldwide, making AI inference widely available. Workers AI represents a step towards democratizing AI and empowering developers to harness the potential of artificial intelligence in their applications.
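As an illustration of what "a few lines of code" can look like, the sketch below calls a hosted model through the Workers AI REST API from Python. The account ID, API token, model name, and response layout are placeholders and assumptions for illustration, not details taken from the announcement:

```python
# Minimal sketch: invoking a Workers AI model via Cloudflare's REST API.
# ACCOUNT_ID, API_TOKEN, and MODEL are hypothetical placeholders.
import requests

ACCOUNT_ID = "your_account_id"
API_TOKEN = "your_api_token"
MODEL = "@cf/meta/llama-2-7b-chat-int8"  # assumed example model name

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
headers = {"Authorization": f"Bearer {API_TOKEN}"}
payload = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain AI inference as a service in one sentence."},
    ]
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # the generated text is returned inside the JSON result
```

The same request can also be issued from inside a Worker using the platform's native binding; the REST form is shown here only because it drops into any existing development stack.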

Source: Cloudflare

AWS Unveils Five Innovations in Generative AI to Empower Organizations

Amazon Web Services (AWS) has announced five innovations in generative artificial intelligence (AI) aimed at enabling organizations of all sizes to harness the potential of generative AI.

  1. Amazon Bedrock: A fully managed service offering access to foundation models (FMs) from leading AI companies via a single API. It simplifies generative AI application development, ensuring privacy and security while allowing organizations to customize models.
  2. Amazon Titan Embeddings: A powerful language model that converts text into numerical representations called embeddings, supporting search, personalization, and retrieval-augmented generation (RAG) use cases (see the sketch below).
  3. Llama 2: Meta's next-generation large language model (LLM) available through Amazon Bedrock, optimized for dialogue use cases, offering improved training on large datasets and longer context length.
  4. Amazon CodeWhisperer: An AI-powered coding companion with a new capability that allows organizations to securely customize code suggestions based on their internal codebase, enhancing developer productivity.
  5. Generative BI in Amazon QuickSight: New capabilities enable business analysts to create customizable visuals using natural language commands, reducing the time spent on manual chart creation and calculations.

These innovations aim to democratize generative AI, providing organizations with secure, customizable, and efficient tools to harness the transformative potential of generative AI across various industries and use cases.
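To make the "single API" idea in items 1 and 2 concrete, here is a minimal sketch of generating a Titan text embedding through the Bedrock runtime with boto3. The model identifier, request payload, and response field are assumptions for illustration and may differ from the current AWS documentation:

```python
# Minimal sketch, assuming the Titan Embeddings model is enabled in the account.
import json

import boto3

# Bedrock exposes its hosted foundation models behind one runtime client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Ask Titan Embeddings to turn a piece of text into a numerical vector,
# e.g. for search, personalization, or retrieval-augmented generation (RAG).
response = client.invoke_model(
    modelId="amazon.titan-embed-text-v1",   # assumed model identifier
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Responsible AI reduces third-party risk."}),
)

result = json.loads(response["body"].read())
embedding = result["embedding"]             # assumed response field
print(f"Embedding length: {len(embedding)}")
```

Under Bedrock's single-API design, switching `modelId` (for example, to a Llama 2 chat model as in item 3) is intended to be the main change needed to target a different foundation model.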

Source: Business Wire

AI Trends: What to Expect in 2023 and Beyond

The artificial intelligence (AI) market is experiencing rapid growth, with a significant impact on various industries. McKinsey reports that 50% to 60% of organizations are already using AI-centric tools, and Forbes predicts a 37.3% compound annual growth rate (CAGR) in the AI market, reaching $1.81 trillion by the end of the decade. Several key trends are expected to shape the AI landscape in 2023 and beyond.

  1. Increased Use of AI Assistants: AI assistants are poised to automate and digitize various service sectors, including legal services, public administration, and citizen services. These assistants offer 24/7 availability, lower costs, and ease of use. They can streamline processes such as legal documentation, digital signatures, visa applications, and compliance-related tasks. Additionally, AI assistants can simplify complex technologies like blockchain and smart contracts.
  2. More Adoption Among Fortune 500 Companies: AI products are scaling rapidly, with innovations such as OpenAI's ChatGPT gaining over 100 million users within months. Fortune 500 companies are expected to embrace AI to enhance their strategies, especially in law, HR, and finance. The emergence of no-code solutions will democratize AI adoption, allowing businesses to integrate advanced technologies without extensive technical expertise.
  3. The Continued Rise of Generative AI: Generative AI, which uses machine learning and deep learning to produce original content, is gaining prominence. It has already been used to create text, images, audio, and video. Experts predict a future where synthetic media becomes ubiquitous, driving advances in entertainment, education, and accessibility.
  4. Growth of Natural Language Processing (NLP) Systems: NLP is a critical domain of AI that enables machines to understand and respond to human language naturally. It underpins technologies like search engines and voice-activated assistants. The NLP market is projected to grow at a CAGR of 40.4% from 2023 to 2030, reaching $439.85 billion by the end of the decade.
  5. AI in Healthcare: AI's impact on healthcare is expected to grow significantly, particularly in diagnosis and treatment. Machine learning will play a key role in drug discovery and medical research. The AI in drug discovery market is expected to reach $4 billion by 2027, and over 50% of American healthcare providers plan to use AI tools for medical processes.

As AI technologies like machine learning, deep learning, and NLP continue to advance, their application across various industries is set to drive a digitized and automated future.

Source: Cointelegraph