Armilla Review - AI in the Spotlight: Privacy Lawsuits, Global Governance, and Future Trends
The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
September 12, 2023
5 min read
OpenAI and Microsoft Face New Consumer Privacy Lawsuit, Microsoft Vows to Protect Customers from AI Copyright Lawsuits
On September 6, OpenAI and its primary backer, Microsoft, were hit with a second class-action lawsuit in San Francisco federal court, alleging violations of privacy laws in the development of ChatGPT and other generative AI systems. The complaint, filed on behalf of two unnamed software engineers who use ChatGPT, accuses the companies of using stolen personal information from millions of internet users to train their AI technology.
This lawsuit, brought by Morgan & Morgan, closely resembles a previous complaint filed by the Clarkson Law Firm against OpenAI and Microsoft in June; the two complaints share substantial portions of their text. The attorneys involved aim to hold "BigAI" accountable for privacy violations.
In addition to these privacy cases, tech giants including Microsoft, OpenAI, Google, and Stability AI face lawsuits over scraping copyrighted materials and personal data from the internet to train their AI systems.
OpenAI's ChatGPT, which quickly gained 100 million active users after its launch, is at the center of the privacy lawsuit, with allegations of misusing personal data from social media platforms. The engineers behind the lawsuit seek unspecified damages and safeguards against data misuse.
Microsoft has responded by announcing the "Microsoft Copilot Copyright Commitment," vowing to defend customers against copyright infringement lawsuits related to its AI products. The company will protect customers who use its AI "Copilots" as long as they adhere to built-in content filters and guardrails. Microsoft also commits to covering related fines or settlements and ensuring Copilots respect copyright.
Generative AI applications, which create new content from existing material, have raised concerns among artists, writers, and software developers. Lawsuits are emerging, alleging unauthorized use of intellectual property. Microsoft, in partnership with OpenAI, is incorporating generative AI into major products like Office and Windows, potentially exposing customers to legal issues.
As generative AI evolves, questions about the fair use of copyrighted materials become increasingly complex, particularly following the Supreme Court's May ruling on fair use in Andy Warhol Foundation v. Goldsmith.
Microsoft's commitment to protect customers from copyright lawsuits mirrors its past efforts to provide legal shields, emphasizing customer loyalty and differentiation from competitors. The company is taking proactive steps to help customers ensure compliance with global AI regulations.
Overall, these legal developments highlight the growing challenges and controversies surrounding AI regulation, privacy, and copyright issues in the tech industry.
G7 Nations Forge Ahead with International AI Code of Conduct
Officials from the G7, a coalition of leading democratic nations, have announced their commitment to creating an international code of conduct for artificial intelligence (AI). The voluntary guidelines will encompass advanced AI technologies, including generative AI, and are aimed at establishing a unified but nonbinding international rulebook.
Unified AI Code of Conduct: The G7 countries have joined forces to develop a code of conduct that will guide the ethical development and deployment of AI. This initiative seeks to foster international cooperation on AI principles.
Voluntary and Nonbinding: The forthcoming code of conduct will be voluntary and nonbinding, emphasizing collaboration and shared values among G7 nations rather than imposing rigid regulations.
AI Industry Commitments: The code is expected to include commitments from companies involved in AI development. These commitments will focus on preventing societal harm caused by AI systems, enhancing cybersecurity controls during AI development, and implementing risk management systems to prevent misuse.
Global Context: This initiative aligns with efforts by other regions, such as the European Union and the United States, to address AI governance. The European Union aims to finalize its own AI legislation by year-end, while the United States has introduced voluntary commitments related to AI.
Recognition of Challenges and Opportunities: The G7 recognizes the need to manage the challenges and opportunities presented by advanced AI systems. Their goal is to strike a balance that safeguards individuals, society, and democratic values while harnessing the benefits of AI technology.
Meeting and Finalization: G7 officials will convene in Kyoto, Japan, in early October to further advance the development of the AI code of conduct. The digital ministers of the G7 nations are expected to finalize the code during a virtual meeting planned for November or December.
This collaborative effort among G7 countries reflects the growing global awareness of the importance of responsible AI governance, with a focus on both promoting AI's benefits and addressing potential risks to society and democratic values.
UK Government's Interim Report on AI Governance Highlights Challenges and Urges Global Collaboration
The UK government has released an interim report addressing the governance of artificial intelligence (AI). This House of Commons Committee report emphasizes the need for effective AI governance frameworks to harness the technology's benefits while safeguarding against potential harms. The report identifies twelve key challenges policymakers and frameworks must address, including issues related to bias, privacy, data access, and more.
Evolution of AI: The report acknowledges that AI has evolved significantly since its inception in the 1950s. Notably, large language models like ChatGPT have made AI a ubiquitous technology with widespread applications.
Benefits and Challenges: While AI offers significant benefits in fields like medicine, healthcare, and education, it also poses challenges such as bias, privacy concerns, and the potential for misrepresentation.
Rate of Development: The rapid pace of AI development has made governance and regulation discussions more significant and complex. Policymakers face the challenge of keeping up with technological innovation to ensure safety and ethics.
Twelve AI Governance Challenges: The report outlines twelve challenges that AI governance must address, including bias, privacy, misrepresentation, access to data and compute power, transparency, intellectual property, liability, employment disruption, international coordination, and existential threats.
UK's Proposed Approach: In March 2023, the UK government proposed a "pro-innovation approach to AI regulation" in a white paper. This approach includes five principles that guide regulatory activity without immediate statutory implementation.
Global Leadership: The report underscores the importance of the UK establishing itself as a leader in AI governance. It highlights the risk of falling behind other jurisdictions, particularly the European Union and the United States, in setting international AI standards.
Call for a Focused AI Bill: The report suggests that a focused AI Bill in the near future would support the UK's ambition to lead in AI governance. Rapid efforts are needed to establish the right governance frameworks and participate in international initiatives.
AI Safety Summit: A summit on AI safety, expected in November or December, is crucial to advancing a shared international understanding of AI challenges and opportunities. The report calls for broad international participation and the creation of a forum for like-minded countries to protect democratic values against potential threats.
This report underscores the global importance of addressing AI governance challenges and encourages international collaboration to ensure that AI benefits society while mitigating potential risks and ethical concerns.
California Governor Takes Bold Steps to Embrace and Regulate GenAI
Governor Gavin Newsom of California has signed an executive order aimed at preparing the state for the advancements in generative artificial intelligence (GenAI). Acknowledging California's global leadership in GenAI technology, the order focuses on ensuring that AI's potential benefits are harnessed while safeguarding against potential risks and ethical concerns.
GenAI Leadership: California has established itself as a global hub for GenAI innovation, home to many of the world's leading AI companies, a significant share of AI-related patents, and research institutions such as UC Berkeley and Stanford University.
Governor's Vision: Governor Newsom recognizes the transformative potential of GenAI, comparing it to the advent of the internet. He emphasizes the need to approach GenAI with a clear-eyed and humble perspective, addressing both its benefits and risks.
Executive Order Provisions: The executive order outlines several key provisions to deploy GenAI ethically and responsibly within state government:
Risk-Analysis Report: State agencies are directed to conduct a joint risk analysis of potential threats and vulnerabilities to California's critical energy infrastructure posed by GenAI.
Procurement Blueprint: Guidelines for public sector procurement and the responsible use of GenAI will be developed, building on existing frameworks such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework.
Beneficial Uses Report: State agencies will create a report identifying significant and beneficial applications of GenAI in the state while addressing potential harms and risks to communities and government.
Deployment and Analysis Framework: Guidelines will be established to assess the impact of adopting GenAI tools on vulnerable communities. The state will provide environments or "sandboxes" for testing GenAI projects.
State Employee Training: Training programs will be implemented to equip the state government workforce with GenAI skills to achieve equitable outcomes. Criteria will be established to evaluate GenAI's impact on the workforce.
GenAI Partnership and Symposium: A formal partnership with UC Berkeley and Stanford University will be formed to evaluate GenAI's impact on California and advance the state's leadership in the field. A joint summit in 2024 will facilitate discussions on GenAI's implications.
Legislative Engagement: Collaboration with legislative partners and stakeholders will occur to develop policy recommendations for responsible AI use, including guidelines, criteria, reports, and training.
Ongoing Evaluation: Regular evaluations of GenAI's impact on regulatory matters will be conducted, with necessary updates recommended as technology evolves.
Governor Newsom's executive order reflects California's commitment to embracing GenAI while ensuring it is developed and deployed ethically and transparently. The order sets the stage for California to continue leading the world in technological progress while addressing the challenges and opportunities presented by AI.
Global AI Legislation Tracker: Navigating the Complex Landscape of AI Governance
The "Global AI Legislation Tracker" offers insight into the rapidly evolving landscape of AI governance across the world. With the proliferation of AI-powered technologies, countries are enacting legislation to address the challenges and opportunities presented by artificial intelligence. This tracker highlights various legislative efforts, including comprehensive laws, use-case-specific regulations, and voluntary guidelines, in a subset of jurisdictions.
Diverse Legislative Approaches: Countries are adopting a range of approaches to regulate AI, reflecting the diversity of AI applications and concerns. This includes comprehensive legislation, targeted regulations for specific AI use cases, and the establishment of voluntary standards.
Partial Jurisdiction Coverage: The tracker focuses on selected jurisdictions and their AI-related policy developments. It acknowledges the limitations of providing a globally comprehensive overview due to the rapid pace of policymaking in this field.
AI Context Commentary: The tracker provides brief commentary on the broader AI context within specific jurisdictions, shedding light on the motivations and challenges driving their regulatory efforts.
Index Rankings: Tortoise Media's index rankings, which assess nations based on their AI investment, innovation, and implementation, are included in the tracker. These rankings offer additional context for understanding each jurisdiction's AI landscape.
Multilateral Coordination: While countries develop their own AI frameworks, many are also engaged in multilateral efforts to align and harmonize AI governance approaches. Bodies such as the Organisation for Economic Co-operation and Development (OECD), UNESCO, the International Organization for Standardization (ISO), the African Union, and the Council of Europe are developing multilateral AI governance frameworks.
AI Safety Summit: The U.K. government is taking a lead role by organizing the first AI Safety Summit, bringing together government and industry stakeholders to identify, assess, and monitor significant AI-related risks.
Strategic Importance: Tracking and understanding global AI governance is increasingly crucial for organizations. The IAPP AI Governance Center aims to provide professionals in this field with resources, training, networking opportunities, and certification to address the complex risks associated with AI governance.
The "Global AI Legislation Tracker" serves as a valuable resource for those navigating the intricate landscape of AI regulation, reflecting the growing importance of AI governance at the international level. Feedback and insights from AI governance professionals are welcomed to contribute to this evolving field.
Unlocking the Power of Artificial Intelligence in Legal Practice: A Comprehensive Guide
The legal sector is witnessing a significant transformation with the integration of artificial intelligence (AI) technologies. This comprehensive guide explores how AI, encompassing natural language processing, machine learning, and robotic process automation, is reshaping legal practices, increasing efficiency, and delivering faster client outcomes. The guide covers key applications of AI in legal research, contract management, document automation, e-discovery, and client support, providing insights into the technologies driving these innovations. Additionally, it addresses essential considerations such as data privacy, ethics, transparency, and overcoming barriers to AI adoption in the legal field.
Efficiency Through AI: Law firms are harnessing AI technologies to streamline legal research, contract management, document automation, and e-discovery. Natural language processing and machine learning enable quicker data analysis, automated contract review, and enhanced document management.
Cost Savings and Time Efficiency: AI solutions significantly reduce the time spent on manual tasks and improve efficiency in legal workflows. Automated document assembly, in particular, expedites the creation of legal documents, saving both time and resources.
Enhanced Predictive Analytics: Machine learning and predictive analytics offer valuable insights by analyzing historical data, making them invaluable for forecasting trends and predicting legal outcomes.
Robotic Process Automation (RPA): RPA is revolutionizing legal workflows by automating administrative tasks, improving accuracy, and freeing up resources for more strategic work.
AI-Powered Virtual Assistants: AI-powered virtual assistants and chatbots enhance client support, providing personalized responses, 24/7 availability, and preliminary cost estimates for legal services.
Data Privacy and Security: Adhering to data privacy and security regulations is paramount when implementing AI solutions in legal practice. Firms must ensure third-party services prioritize data protection.
Mitigating Bias in AI: Law firms must mitigate bias in AI algorithms to ensure fairness and transparency in decision-making processes. Consistent evaluation tests and ethical reviews are essential.
Transparency and Explainability: Transparent AI systems are crucial to ensure accountability and meaningful decision-making. AI models should generate interpretable rules and explanations for their outputs.
Legal Ethics: Legal ethics must be upheld when adopting AI in law firms. Practitioners should adhere to principles of candor, confidentiality, and integrity to maintain public trust in AI applications.
Barriers to Adoption: Law firms must address resistance to change, cost concerns, ROI expectations, and regulatory challenges when adopting AI technologies.
Future Transformation: AI adoption is positioned to bring transformative changes to the legal industry, turning AI from an expenditure into a strategic advantage.
This comprehensive guide serves as a roadmap for law firms looking to embrace AI's potential and navigate the evolving landscape of legal practice. It emphasizes the importance of ethical and regulatory considerations while highlighting the opportunities AI presents for innovation and efficiency in the legal sector.
Harnessing AI for Environmental Conservation: Deep Dives into Coral Reef Monitoring
Researchers from the University of Hawaii at Mānoa have developed an AI-powered tool that utilizes high-resolution satellite imagery to monitor the health of coral reefs in real time. This innovative approach, which focuses on identifying and tracking coral reef halos—distinctive rings of barren sand encircling reefs—has the potential to revolutionize global marine conservation efforts.
AI-Enabled Coral Reef Monitoring: The University of Hawaii researchers have harnessed deep learning models and high-resolution satellite imagery, powered by NVIDIA GPUs, to create a tool for monitoring coral reef health. This AI-based system can spot and track coral reef halos, visible patterns indicative of ecosystem health, from space.
Ecosystem Health Indicator: Coral reef halos, previously associated with fish grazing, have been found to offer insights into the health of predator-prey ecosystems. The presence and size of halos can reveal the abundance of food sources for a diverse population of marine life. Changes in halo shape can signal imbalances in the marine food web, indicating potential reef health issues.
Global Coral Reef Importance: Coral reefs cover less than 1% of the ocean but provide critical habitat, food, and nursery grounds for over one million aquatic species. They also hold substantial commercial value, contributing approximately $375 billion annually to industries like fishing, tourism, and coastal protection. Furthermore, coral reefs are sources of antiviral compounds for drug discovery research.
Threats to Coral Reefs: Coral reefs face numerous threats, including overfishing, nutrient pollution, ocean acidification, coral bleaching due to climate change, and infectious diseases. Over half of the world's coral reefs are already lost or severely damaged, and most remaining reefs are projected to be under threat by 2050.
AI-Driven Conservation: Tracking changes in reef halos is essential for conservation efforts. However, manual surveys are time-consuming and limited in frequency. The researchers developed an AI tool that leverages global satellite imagery, enabling proactive monitoring of reef degradation. Their dual-model framework employs convolutional neural networks (CNNs) for image segmentation, achieving quick and accurate halo detection.
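The underlying detection idea can be illustrated in miniature: spotting a bright annulus of barren sand around a dark reef patch amounts to scoring how strongly an image patch correlates with a ring-shaped, zero-mean template. The sketch below is a toy numpy illustration of that principle only — it is not the researchers' CNN segmentation pipeline, and the patch size and ring radii are arbitrary assumptions chosen for the example.

```python
import numpy as np

def ring_kernel(size, r_inner, r_outer):
    """Zero-mean annulus template: positive on the ring, slightly negative elsewhere."""
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    dist = np.hypot(yy - c, xx - c)
    k = ((dist >= r_inner) & (dist <= r_outer)).astype(float)
    return k - k.mean()  # zero mean so uniform (featureless) patches score ~0

def match_score(patch, kernel):
    """Correlation of the template with a same-sized image patch."""
    return float((patch * kernel).sum())

# Synthetic "satellite" patch: a bright sand halo (annulus) around a dark reef centre
size = 32
yy, xx = np.mgrid[:size, :size]
dist = np.hypot(yy - size // 2, xx - size // 2)
halo_patch = ((dist >= 6) & (dist <= 10)).astype(float)
blank_patch = np.zeros((size, size))

halo_k = ring_kernel(size, 6, 10)
print(match_score(halo_patch, halo_k) > match_score(blank_patch, halo_k))  # → True
```

A real system would slide such a detector (or, as in the study, a learned CNN segmenter) across large satellite mosaics and handle halos of varying radius and eccentricity; the zero-mean template here just captures why ring-shaped features stand out against both uniform sand and uniform reef.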
NVIDIA GPU Acceleration: To handle the AI tool's computational demands, the researchers used an NVIDIA RTX A6000 GPU with the cuDNN-accelerated PyTorch framework. The GPU significantly sped up image processing, reducing the time required for halo identification from hours to minutes.
Future Goals: The researchers aim to transform their AI tool into a robust monitoring tool capable of assessing halo size changes and correlating them with predator and herbivore population dynamics. They are investigating relationships between species composition, reef health, and halo presence and size, including the potential estimation of shark abundance from satellite data.
This AI-powered coral reef monitoring tool offers a promising approach to addressing the urgent need for real-time conservation efforts to protect these vital marine ecosystems.
Streamlining Responsible AI Implementation: A Comprehensive Matrix for Framework Selection
The report introduces a matrix designed to simplify the process of selecting responsible artificial intelligence (AI) frameworks for organizations. With an increasing array of tools available for implementing responsible AI systems, organizations often struggle to choose the right ones. The matrix systematically categorizes over 40 public process frameworks, offering clarity on their focus areas and intended user teams. By primarily catering to the Development and Production team and the Governance team, the matrix helps organizations pinpoint the frameworks that align best with their specific needs. This comprehensive resource empowers organizations to efficiently implement responsible AI practices and gain a deeper understanding of the utility and applicability of each framework.
TIME100 AI: Mapping the Minds Shaping the Future of Artificial Intelligence
TIME magazine's prestigious TIME100 AI list comprises 100 individuals who have been meticulously selected by editors and reporters after months of research and nominations. The list reflects the diverse and influential voices in the world of artificial intelligence, ranging from youthful activists advocating for ethical AI practices to seasoned experts like Geoffrey Hinton, who are voicing concerns about AI's risks. TIME's executive editor, Naina Bajekal, underscores the goal of spotlighting AI industry leaders, ethical thinkers, and innovators addressing societal challenges through AI. The TIME100 AI serves as a unique map of the key figures shaping the trajectory of AI, representing a broad spectrum of perspectives and roles within the AI ecosystem, including competitors, regulators, scientists, artists, advocates, and executives. These individuals collectively influence the future of this increasingly impactful technology.