Armilla Review - Navigating the AI Landscape: From Copyright Battles to Trustworthy Solutions - A Deep Dive into the Latest Developments

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia tailored to the interests of our community: regarding AI evaluation, assurance, and risk.
September 6, 2023
5 min read

3 Key Questions for Assessing Responsibility and Trustworthiness When Purchasing AI Solutions

Navigating the intricate landscape of AI procurement demands a careful evaluation of suppliers to ensure both reliability and ethical standards. With the AI market's exponential growth, valued at $136 billion in 2022 and projected to soar with a CAGR of 37.3% from 2023 to 2030, businesses must prioritize responsible AI procurement. This entails mitigating the potential risks that come with inadequate AI partnerships, including financial setbacks, damage to reputation, and broader societal implications.

Responsible AI: A Necessity in the Digital Age

The importance of responsible AI procurement cannot be overstated. As the market continues to expand, the wrong AI choice could lead to severe consequences. To avoid these pitfalls, it is crucial for businesses to incorporate a responsible framework when acquiring AI solutions. This framework emphasizes transparency, accountability, equity, and safety, aligning with industry standards and forthcoming regulations.

Critical Questions for Evaluating AI Suppliers

As you delve into the world of AI procurement, three key questions emerge to help assess AI vendors in terms of responsible AI guidelines and regulatory compliance. These questions act as beacons to guide your Responsible AI Procurement strategy, creating a framework for discussions and documentation. The RAI Institute has developed these questions based on a comprehensive range of AI standards, best practices, and legal requirements, ensuring that businesses make informed decisions aligned with industry-leading frameworks like the ISO AI Management Standard 42001, the EU AI Act, and the NIST AI Risk Management Framework.

  1. Does your business integrate AI risk management, ethics, and/or responsibility in its protocols and documentation? Is there a regularly updated AI policy? The initial question focuses on the integration of AI risk management, ethics, and responsibility within your organization's protocols and documentation. The aim is to assess the depth of your organization's commitment to transparency, explainability, and equity, as well as its dedication to ongoing staff training and alignment with modern Responsible AI standards.
  2. How do you uphold responsibility in AI systems across their lifecycle, underscoring human oversight, transparency, and reliability? What processes and documentation are part of this strategy? Here, the focus shifts to the holistic strategy employed to ensure responsibility throughout an AI system's lifecycle. The response should reflect detailed processes, human oversight, bias audits, and ethical considerations. Emphasis is placed on clear roles, continuous accuracy monitoring, and adherence to data protection norms.
  3. How is the AI documentation handover managed to ensure responsible AI system use by the client? The final question centers on the transfer of AI documentation to the client, ensuring responsible AI system use. A comprehensive response should detail client-specific documentation, clear version tracking, accessible digital resources, training modules, and post-handoff support.

Expertise and References

The Responsible AI Institute's expertise, drawn from professionals and diverse communities, has culminated in these critical questions, which mirror the industry's most esteemed requirements. ISO AIMS emphasizes best practices, NIST underscores trustworthy AI development, and the EU AI Act ensures alignment with user safety and rights.

In conclusion, the process of AI procurement demands a meticulous examination of suppliers to ensure not only technological advancements but also ethical responsibility. By posing these key questions and consulting the insights provided by the Responsible AI Institute, businesses can make informed choices that foster trust, innovation, and societal well-being.

Source: Responsible AI Institute


OpenAI Responds to Lawsuits Alleging Copyright Infringement by ChatGPT

OpenAI has issued its response to a pair of class-action lawsuits filed by authors who claim that their works were illegally used to train ChatGPT. The lawsuits, including authors like Sarah Silverman and Paul Tremblay, argue that ChatGPT was trained on pirated copies of their books, thus infringing their copyrights. OpenAI's response, submitted in a US district court in California, aims to dismiss several claims, including vicarious copyright infringement, DMCA violations, unfair competition, negligence, and unjust enrichment.

In its motion to dismiss, OpenAI contends that the authors have a misunderstanding of the scope of copyright law. OpenAI argues that its use of copyrighted materials in training its AI models falls within the bounds of copyright limitations and exceptions, including fair use. It further asserts that its goal is to teach the AI models the underlying rules of human language, emphasizing applications that assist people in their daily lives or entertain them.

OpenAI also challenges the authors' claim of vicarious copyright infringement, which asserts that every output of ChatGPT constitutes a derivative work. OpenAI rebuts this by providing examples of simple responses that cannot be considered derivative works under copyright law. It likens certain ChatGPT outputs to book reports or reviews rather than derivative creations.

Regarding the Digital Millennium Copyright Act (DMCA) claims, OpenAI dismisses accusations that it intentionally removed copyright-management information (CMI) from its training data. It argues that any incidental removal of CMI during its technological processes doesn't indicate an attempt to conceal infringement or wrongdoing.

Furthermore, OpenAI seeks to invalidate the authors' claims of unfair competition, negligence, and unjust enrichment, asserting that these claims are preempted by federal copyright law.

OpenAI's motion to dismiss focuses on debunking the legal theories presented by the authors. While OpenAI seeks to narrow down the scope of the lawsuit, the authors' perspective highlights the concerns over AI's use of copyrighted material without proper attribution or authorization. The ongoing legal battle underscores the complexities and challenges of intellectual property rights in the era of AI-generated content.

Source: Ars Technica


US Copyright Office Initiates Public Input on AI and Copyright Issues

The US Copyright Office has commenced a public comment period to address the intricate intersection of AI technology and copyright regulations. The initiative, which began on August 30th, reflects the agency's intent to grapple with the complexities posed by artificial intelligence and its implications for copyright law.

The Copyright Office has outlined three pivotal questions for consideration. These queries revolve around how AI models employ copyrighted data during training, whether AI-generated content can receive copyright protection sans human involvement, and how copyright liability should be navigated in relation to AI-generated material. Additionally, the agency seeks insights into potential concerns about AI infringing upon publicity rights, although it clarifies that these issues are distinct from copyright matters.

Interested parties are encouraged to submit written comments by October 18th, with responses to comments due by November 15th. The significance of AI-generated content's copyright status has emerged as a contentious topic, garnering attention from an array of stakeholders including politicians, artists, authors, and civil rights groups. The Copyright Office acknowledges the increasing number of applications for copyright registration containing AI-generated content and aims to leverage public input to guide its decisions on future copyright determinations.

The Copyright Office's engagement in last year's lawsuit involving AI-generated images, where it declined to grant copyright, underscores the complexities at hand. The recent ruling supporting the Copyright Office's position highlights the agency's commitment to human involvement as a criterion for copyright.

Moreover, numerous cases have emerged where large language models powering AI tools have raised concerns about potential copyright infringement. Lawsuits involving AI art platforms and AI-powered language models have surfaced, alleging unauthorized use of copyrighted material for training purposes.

In response to these challenges, several news organizations have taken preventive measures by blocking OpenAI's web crawlers from scraping their data. This wave of legal actions and concerns has prompted lawmakers to engage with stakeholders to explore suitable avenues for AI regulation. The issue has gained significant momentum, with Senate Majority Leader Chuck Schumer urging expeditious rule making in the realm of AI governance.

Source: The Verge


Meta Offers Users Control Over Personal Data Used in AI Model Training

Meta (formerly Facebook) has introduced a form in its help resource center, allowing users to exercise control over their personal data used in training generative artificial intelligence (AI) models. The form, titled "Generative AI Data Subject Rights," allows users to submit requests related to their third-party information being utilized for training AI models. The move aims to address growing concerns around data privacy and AI model training.

The form enables users to take several actions regarding their personal data:

  1. Access and Alteration: Users can access, download, or correct personal information obtained from third-party sources that is used to train generative AI models.
  2. Deletion: Users can delete their personal information from third-party data sources used for AI model training.
  3. Other Issues: An option is provided for users with different issues or concerns regarding their data.

Meta's opt-out tool addresses data privacy as generative AI technology gains momentum. The company acknowledges that generative AI models utilize billions of pieces of data, including publicly available information from the internet and licensed sources, to create new content.

While the form allows users to control third-party data, it doesn't currently extend to user activity on Meta-owned platforms like Facebook and Instagram. Therefore, data from these platforms might still be employed for AI model training.

The initiative comes amid growing scrutiny from data protection agencies and advocates regarding the use of publicly available information for AI model training. A consortium of global data protection agencies recently issued a joint statement to companies like Meta, Alphabet, Microsoft, and others, emphasizing compliance with data protection and privacy laws while protecting user information from data scraping.

The introduction of this form aligns with the trend of increased focus on user data privacy and control, highlighting the need for transparency and options when it comes to the utilization of personal information in AI model development. Users are becoming more conscious of their data's applications and are seeking ways to manage its usage.

Source: CNBC


UK Officials Highlight Cyber Risks Associated with AI Chatbots

British authorities are cautioning organizations about the potential cybersecurity risks tied to integrating AI-driven chatbots into their operations. The National Cyber Security Centre (NCSC) of the UK issued a pair of blog posts expressing concerns that artificial intelligence-driven chatbots, also known as large language models (LLMs), could be manipulated into executing harmful actions due to inadequately addressed security issues.

LLMs, which use algorithms to generate human-like interactions, have gained traction as chatbots with applications ranging from customer service to sales calls. The NCSC highlighted the risks associated with plugging such models into various aspects of business processes. Researchers have shown that chatbots can be tricked by malicious actors into performing unauthorized actions or circumventing security measures.

The NCSC compared the integration of LLMs into services to using experimental software releases and emphasized the need for caution. It recommended organizations treat LLMs similarly to beta products and not fully trust them with tasks involving financial transactions or critical operations.

Authorities globally are grappling with the security implications of LLMs, exemplified by platforms like OpenAI's ChatGPT. Businesses have incorporated these models into various services, raising concerns about potential vulnerabilities. The US and Canada have reported instances of hackers leveraging AI for malicious purposes.

A recent Reuters/Ipsos poll revealed that many corporate employees are already using tools like ChatGPT for tasks such as drafting emails, summarizing documents, and conducting initial research. While some companies have explicitly banned external AI tools, a significant portion of respondents were unaware of their organization's stance on AI technology.

Oseloka Obiora, Chief Technology Officer at cybersecurity firm RiverSafe, warned that hastily integrating AI into business operations without implementing necessary security measures could have dire consequences. Obiora urged business leaders to carefully assess both the benefits and risks of AI adoption and to prioritize cybersecurity protection to safeguard their organizations from potential harm.

Source: Reuters


Google's Ambitious Plans to Revolutionize Healthcare with Generative AI

Google is making significant strides in the healthcare sector, leveraging the power of generative artificial intelligence to address complex challenges. The article explores Google's efforts to integrate AI into healthcare systems, especially in tasks such as patient handoffs, note-taking, and medical data analysis. It also delves into the potential benefits, challenges, and the competitive landscape Google faces.

At the heart of this endeavour is the application of generative AI to streamline critical healthcare processes. The example of patient handoffs, where the outgoing nurse transfers patient information to the incoming nurse, is highlighted. Google's Vertex AI, a machine learning model-building platform, serves as the foundation for these AI-driven solutions. HCA Healthcare, one of the largest healthcare systems in the US, partnered with Google to implement AI-driven handoff tools.

To cater specifically to healthcare needs, Google has developed a healthcare-specific large language model known as Med-PaLM 2. This model aims to enhance data summarization and organization in the medical context. However, Google is not alone; Amazon and Microsoft are also actively exploring AI-powered healthcare solutions, raising questions about which company will emerge as the industry leader.

Despite progress, industry experts note that the healthcare AI landscape is still evolving, and challenges remain. Notably, AI solutions need to prove themselves in complex and regulated sectors like healthcare to gain broader adoption. The need for domain-specific adaptation is crucial, as AI models cannot be expected to handle all scenarios off-the-shelf.

Concerns over AI's transparency and performance degradation over time are also discussed. Google acknowledges these concerns and is exploring niche large language models tailored to specific use cases. Additionally, the limitations of generative AI's static learning nature are acknowledged, as medical knowledge is continually evolving. The role of AI as an assistant to the care team and its potential in reducing administrative burdens are emphasized.

Privacy and consent remain paramount in healthcare AI adoption. Google's previous partnership with Ascension raised concerns over data privacy. To address these concerns, Google clarifies that models like PaLM and Med-PaLM are not trained on patient data from customers like HCA. Privacy measures are in place to ensure customer data is not shared or used to improve base models.

The article concludes on an optimistic note, stating that AI tools built on technology are poised to become an integral part of the standard toolkit for healthcare professionals. HCA's cautious approach to AI adoption aims to ensure that doctors view AI as a valuable partner before venturing into more complex applications.

Source: Forbes


Google DeepMind Co-Founder Calls for Stricter AI Standards in the US

Mustafa Suleyman, co-founder of Google DeepMind and CEO of Microsoft-backed AI startup Inflection AI, has urged the United States to enforce ethical standards in the use of artificial intelligence (AI) and to impose stricter regulations on Nvidia's AI chips. Suleyman emphasized that the U.S. should establish minimum global standards for AI usage and require companies purchasing Nvidia chips to commit to ethical guidelines similar to those adopted by leading AI firms in their voluntary pledge to the White House. These measures could include watermarking AI-generated content to enhance safety, an initiative that Google DeepMind has unveiled in SynthID.

Source: Reuters


Google DeepMind Enhances Vertex AI with Colab Enterprise and MLOps for Generative AI

Google DeepMind is introducing new features to its Vertex AI platform, focusing on enhancing productivity, collaboration, and management of machine learning operations (MLOps). These additions include Colab Enterprise, an enterprise-grade version of Google's popular cloud-based Jupyter notebook, and MLOps improvements tailored for generative AI.

Three years after launching Vertex AI, Google DeepMind is making strides to further broaden its capabilities. The platform now supports generative AI, offering tools for developing apps targeting common generative AI use cases. Additionally, it provides access to over 100 large models from Google, open-source contributors, and third parties, making it a comprehensive AI/ML platform.

Colab Enterprise: This managed service combines the ease-of-use of Google's Colab notebooks with enterprise-level security and compliance support. It enables data scientists to collaborate on AI workflows, access Vertex AI's capabilities, integrate with BigQuery for data access, and benefit from features like code completion and generation. Colab Enterprise also powers a notebook experience for BigQuery Studio, allowing seamless data exploration and analysis.

Ray on Vertex AI: The platform's open-source support is expanded with Ray, a unified compute framework for scaling AI and Python workloads. Ray on Vertex AI provides managed security and increased productivity, offering integrations with other Vertex AI products for a cohesive experience.

Advancing MLOps for Generative AI: Google DeepMind recognizes the challenges that generative AI introduces to MLOps and has developed a new MLOps Framework to address them. This framework encompasses managing AI infrastructure, customizing models with new techniques, handling new types of artifacts, monitoring generated output, and more. Google's existing AI/ML capabilities, such as a variety of hardware choices and support for various open-source frameworks, form the foundation for efficient MLOps in the era of generative AI.

With these new features, Google DeepMind aims to empower organizations to effectively leverage generative AI in their AI practice, while ensuring productivity, collaboration, and operational efficiency. The enhancements cater to both data scientists and organizations seeking to harness the power of AI technology.

Source: Google Cloud


The Sieve of Knowledge: Efficient Learning in the Age of AI

In the realm of knowledge acquisition and the burgeoning influence of artificial intelligence (AI), understanding the various forms of knowledge and their filtration has become paramount. This thought-provoking piece delves into the multifaceted nature of knowledge and its implications in the context of AI, and introduces the concept of "SocratiQ," a technology designed to revolutionize learning.

The piece delineates six distinct forms of knowledge:

  1. Inherited: Knowledge embedded in DNA, a result of eons of evolution.
  2. Absorbed: Knowledge gleaned from one's environment and immediate circles.
  3. Shared: Knowledge gained from social structures, communities, networks, and kin.
  4. Sensed: Knowledge accumulated through experiences, work, and interactions with nature.
  5. Sought: Active knowledge sought from schools, textbooks, blogs, and various sources.
  6. Imposed: Knowledge as an attack vector, imposed through external forces.

The piece reflects on humanity's historical "mining" of knowledge from diverse sources and acknowledges the growing efficiency brought by AI technologies. It underscores the potential of AI-driven knowledge mining to accelerate discoveries that once took generations. This accelerated knowledge acquisition could bring about an entirely new era, exceeding even the wildest imaginations of science fiction writers.

Enter "SocratiQ," a technology designed to establish a symbiotic partnership between humans and AI in the learning process. SocratiQ is presented as a tool that efficiently mines and refines knowledge, promoting effective learning by removing contaminants and producing insightful work. The article emphasizes SocratiQ's ability to foster personalized, exploratory, and equitable learning experiences while retaining the essence of human connection and interaction.

The author envisions teachers as facilitators in the SocratiQ ecosystem, creating explorations and pathways for students to delve into knowledge. This interactive learning process encourages collaboration and serendipitous discovery, mimicking the spontaneity of classroom discussions. SocratiQ's "Teach the World" project, akin to publishing on GitHub, enables learners to share their explorations with the global community, extending the reach of knowledge sharing.

The piece concludes with the notion that SocratiQ acts as a "Sieve of Knowledge," sifting through various forms of knowledge with the aid of capable facilitators, collaborative environments, and limitless workspace. As pilots for SocratiQ commence, the article invites educators and school administrators to partake in this innovative learning endeavor, potentially reshaping the landscape of education in the age of AI.

Try SocratiQ here

Source: SocratiQ


OpenAI Launches ChatGPT Enterprise: Elevating AI Assistance with Enhanced Security and Features

OpenAI is introducing ChatGPT Enterprise, an advanced version of their ChatGPT platform designed to meet the demands of enterprise environments. With a focus on security, customization, and powerful capabilities, ChatGPT Enterprise aims to provide AI assistance that is tailored to organizations' needs while safeguarding data privacy.

Enterprise-Grade Security and Privacy: ChatGPT Enterprise prioritizes data security and privacy. Unlike other models, it doesn't utilize customer prompts or company data to train the AI models. All conversations are encrypted both in transit and at rest. It is also certified SOC 2 compliant, offering a higher level of data protection.

Enhanced Features for Large-Scale Deployment: ChatGPT Enterprise comes with an admin console that enables efficient member management for larger teams. The console also supports single sign-on (SSO) and domain verification for added security. An analytics dashboard provides insights into usage patterns, facilitating better management of AI resources.

Most Powerful Version of ChatGPT: Users of ChatGPT Enterprise benefit from unlimited access to GPT-4, the latest iteration of OpenAI's generative model. The new version offers significantly higher speeds, up to two times faster, enabling faster AI responses. Moreover, ChatGPT Enterprise boasts a 32k token context window, allowing for processing of inputs or files that are up to four times longer.

Advanced Data Analysis and Collaboration: The platform includes unlimited access to advanced data analysis capabilities, previously known as Code Interpreter. This feature empowers both technical and non-technical teams to swiftly analyze data, aiding functions such as financial analysis, marketing insights, and more. The introduction of shareable chat templates promotes collaboration and the creation of common workflows within an organization.

Customization and Future Developments: OpenAI has additional features in the pipeline, including plans for customization that will allow companies to extend ChatGPT's knowledge using their internal data and applications. The roadmap also includes tailored solutions for specific roles such as data analysts, marketers, and customer support teams.

Source: OpenAI