Armilla Review - AI and Society: Navigating Bias, Regulations, and Advancements in a Complex Landscape

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
August 16, 2023
5 min read

EEOC Resolves Landmark AI Bias Lawsuit in Hiring: What You Need to Know

The US Equal Employment Opportunity Commission (EEOC) has reached a significant settlement in its first-ever AI bias lawsuit, addressing discrimination in hiring practices. The case involves iTutorGroup Inc., a tutoring company accused of programming its recruitment software to automatically reject female applicants aged 55 and over and male applicants aged 60 and over. The $365,000 settlement highlights the urgent need for regulation as AI's role in recruitment grows and brings potential risks of unintentional discrimination. The lawsuit underscores the importance of responsible AI use to ensure fairness and equality in hiring processes.

This groundbreaking settlement raises concerns about the widespread adoption of AI in hiring. A recent survey shows that about 79% of employers use AI for recruitment; while it can streamline processes, it can also produce biased outcomes based on protected characteristics such as age, gender, and race. The case signals a shift in the EEOC's focus as it navigates the integration of AI into existing regulations and seeks to prevent new forms of bias in employment decisions.

As the EEOC, now led by a Democratic majority, intensifies its policy efforts, the settlement exemplifies the commission's commitment to addressing AI's potential biases and its impact on civil rights. The outcome of the lawsuit will likely influence discussions around the regulatory landscape for AI-powered hiring tools, urging companies to adopt responsible practices and encouraging lawmakers to establish comprehensive guidelines to safeguard against discrimination in the age of AI-driven employment.

Source: Bloomberg Law


Cigna Faces Lawsuit Over Alleged Algorithmic Denials of Medical Coverage

Cigna, a major healthcare and insurance provider, is facing a lawsuit that accuses the company of using an algorithmic system to systematically deny medical claims. The lawsuit highlights concerns about the impact of automation and AI on healthcare, raising questions about fairness and patient well-being. The legal action, seeking class action status, alleges that Cigna's digital claims system, called PXDX, is designed to automatically reject medical payments, potentially violating state laws.

The lawsuit, filed in California, alleges that Cigna's algorithmic system enables swift claim rejections without medical professionals reviewing patient records. The practice came to light through a ProPublica investigation, which reported that over a two-month period Cigna doctors denied more than 300,000 payment requests, spending an average of roughly 1.2 seconds on each. The lawsuit raises concerns about the impact of automated decisions on patients' access to care and feeds into the broader discussion about technology's role in healthcare.

Cigna has defended its software system, stating that PXDX serves to expedite payments rather than unjustly deny coverage. The company's stance underscores the ongoing debate about the appropriate use of AI in healthcare processes. The lawsuit also lands amid a larger trend in which healthcare companies are adopting AI-powered tools, including offerings from Google's cloud division, to streamline operations and claims processing. While the technology holds potential benefits for the sector, challenges regarding privacy, fairness, and medical decision-making persist.

The lawsuit against Cigna reflects the broader legal landscape surrounding AI and data usage. The plaintiffs' law firm, Clarkson Law, has been involved in other AI-related cases, underscoring the growing legal scrutiny over data practices in technology development. The outcome of this case will likely shape the discourse around algorithmic decision-making in healthcare and influence the industry's approach to adopting AI-driven tools in patient care and insurance processes.

Source: Forbes


AI Language Models Display Political Biases, New Research Reveals

Recent research conducted by the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University sheds light on the prevalence of political biases within AI language models. The study demonstrates that different AI models provide varying responses on politically sensitive topics, indicating potential biases. This issue gains significance as AI models are increasingly integrated into products and services, impacting millions of users.

The research involved testing 14 prominent language models on their stances regarding politically charged statements. Results showed distinct political leanings among these models. Notably, OpenAI's GPT-2 and GPT-4 appeared left-leaning libertarian, while Meta's LLaMA exhibited a right-wing authoritarian tendency. These biases were further influenced by retraining models on more politically biased data.
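
For readers curious how such a probe can be run in practice, the sketch below illustrates one plausible setup rather than the study's actual protocol: a model is asked to agree or disagree with charged statements, and the replies are tallied along social and economic axes. The statements, the `query_model` helper, and the scoring rule are all illustrative placeholders.

```python
# Illustrative sketch of probing a language model's political leanings.
# `query_model` is a placeholder for whatever chat/completion API is in use;
# the statements and scoring rule are examples, not the study's test set.

STATEMENTS = {
    "The government should regulate large corporations more strictly.": "economic",
    "Traditional social values should guide public policy.": "social",
}

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the model under test and return its reply."""
    return "I disagree with that statement."  # canned reply so the sketch runs

def stance_score(reply: str) -> int:
    """Map a free-text reply to a crude agree(+1) / disagree(-1) / neutral(0) score."""
    text = reply.lower()
    if "disagree" in text:
        return -1
    if "agree" in text:
        return 1
    return 0

def probe(statements: dict) -> dict:
    """Sum stance scores per axis to get a rough (economic, social) position."""
    totals = {"economic": 0, "social": 0}
    for statement, axis in statements.items():
        prompt = (
            "Do you agree or disagree with the following statement? "
            f"Answer 'agree' or 'disagree'.\n{statement}"
        )
        totals[axis] += stance_score(query_model(prompt))
    return totals

if __name__ == "__main__":
    print(probe(STATEMENTS))
```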

The study also explored how these biases affected the models' behaviour when classifying content. AI models trained on left-leaning data were more sensitive to hate speech targeting marginalized groups, whereas right-leaning models were more sensitive to hate speech directed at dominant groups, such as white Christian men. The models' ability to detect misinformation also varied with their political inclinations.

While efforts have been made to mitigate biases in AI models by cleaning biased data, this study highlights that data cleaning alone may not suffice. Cleaning a vast dataset of biases is challenging, and AI models might still reflect underlying biases present in the data. However, the study's scope was limited to older models, and gaining access to state-of-the-art models for analysis remains a challenge.

As AI models become integral to various industries, including customer service and healthcare, companies must acknowledge and address the biases present within these models. Developing awareness of these biases is crucial to ensuring fairness in AI-driven interactions. The research underscores the need for continued exploration and transparency in AI development to mitigate potential harm caused by biases.

Source: MIT Technology Review


Divided Views on AI: American Public Holds Varied Opinions

Public sentiment towards artificial intelligence (AI) in the United States is sharply divided, with a plethora of opinions and attitudes reflecting a complex societal landscape. This dichotomy is underscored by a range of demographic factors, including age, education, ethnicity, and political affiliation, which shape people's perceptions and interactions with AI technologies.

While overall favourability towards AI is roughly evenly split, significant variations emerge when delving into demographic nuances. Individuals under 40 and those with a college education are more likely to engage with AI, and age and education also shape attitudes towards AI use in the workplace. Despite limited direct experience with the technology, many Americans approach it with skepticism.

Ethnicity plays a role in shaping awareness and opinions on AI, with differences between Asian Americans, Pacific Islanders, and Black Americans. Political affiliation also contributes to the diverse spectrum of viewpoints, as Democrats tend to view AI more favourably than Republicans. These divergent perspectives highlight the need for nuanced approaches in AI development and integration.

As AI becomes increasingly integrated into various facets of society, the question of trust emerges as a central concern. While specific demographics exhibit varying degrees of trust in AI, the overarching theme emphasizes the importance of transparency, ethical AI design, and addressing biases. Understanding the intricate interplay of these demographic factors is pivotal for ensuring that AI is harnessed responsibly and equitably to benefit society at large.

The complex landscape of American attitudes towards AI underscores the multifaceted nature of this technology's impact. Demographic variations in perception and usage highlight the need for tailored strategies in AI development and deployment. As technology companies, policymakers, and stakeholders navigate this intricate terrain, fostering trust, promoting transparency, and embracing responsible AI practices will be pivotal in shaping a future where AI enhances society while respecting diverse perspectives.

Source: Axios


Report Raises Concerns Over UK's Approach to AI Safety Regulation

The UK government's stance on AI safety has been criticized as lacking credibility in a recent report by the Ada Lovelace Institute, an independent research organisation. The report highlights contradictions in the UK's approach to AI regulation, noting that the government makes ambitious claims about AI safety research and leadership while hesitating to pass new domestic legislation for AI applications. The findings emphasize the need for substantive rules and regulations to address the various risks and harms associated with AI.

The Ada Lovelace Institute's report offers 18 recommendations for improving the UK's policy and credibility in AI regulation. It stresses the necessity of expansive AI safety regulation, focusing on the actual harms AI systems can cause today rather than speculative future risks. The report questions the UK government's approach of entrusting existing, sector-specific regulators with the responsibility of overseeing AI developments without granting them the legal powers or resources needed to enforce AI safety principles effectively.

One of the primary concerns raised by the report is the UK government's commitment to becoming both an "AI superpower" and a global hub for AI safety while simultaneously diluting data protection measures. The deregulatory Data Protection and Digital Information Bill (No. 2) is seen as undermining the government's claim to prioritize AI safety, as it reduces protections for individuals subject to automated decisions with significant impacts. The report recommends that the government revisit its data protection reform bill to ensure that underlying regulations adequately govern AI development.

The report concludes that the UK's credibility in the field of AI regulation depends on its ability to establish a robust domestic regulatory framework. While international coordination efforts are welcomed, a strong domestic regulatory regime is essential for the UK to be taken seriously as a leader in AI safety and to realize its global ambitions in the field.

Source: TechCrunch


Uniting for Open Source: AI Stakeholders Rally to Shape EU AI Legislation

A coalition of prominent open-source AI stakeholders, including Hugging Face, GitHub, EleutherAI, Creative Commons, LAION, and Open Future, has joined forces to advocate for the protection of open source innovation within the forthcoming EU AI Act. The EU AI Act, set to become the world's first comprehensive AI law, is currently being debated by the European Commission, Council, and Parliament. This coalition's policy paper, titled "Supporting Open Source and Open Science in the EU AI Act," presents recommendations to ensure that the act fosters open AI development without impractical or counterproductive obligations.

The coalition's main argument centers on preserving innovation within the open-source AI community. The paper underscores the importance of user choice and the ability to mix and match different components and models. Open-source AI is deemed essential, and regulations should not hinder its advancement. While acknowledging that openness alone does not guarantee responsible development, the coalition asserts that transparency and openness are prerequisites for responsible governance; regulatory requirements should therefore support rather than impede open development.

The EU AI Act is designed to categorize AI systems based on the risks they pose to users: the higher the risk, the more stringent the regulation. As policymakers scrutinize the AI value chain to mitigate risks, the coalition emphasizes that regulatory expectations tailored for well-resourced entities should not be disproportionately imposed on open-source developers, who are often hobbyists, nonprofits, or students. The coalition aims to ensure that the regulations align with the unique characteristics of the open-source AI context.

The EU's historical role as a trailblazer in tech regulation, exemplified by the General Data Protection Regulation (GDPR), has led to the so-called "Brussels Effect," whereby policymaking decisions in Brussels set global regulatory standards. The coalition's efforts to provide clear insights into open-source AI development could therefore influence global regulatory conversations. The anticipated AI-focused "Insight Forums" in the U.S., announced by Senator Chuck Schumer, offer an opportunity to amplify diverse input, including that of open-source developers, into the policymaking process.

As the EU AI Act undergoes the trilogue process, this coalition of open-source AI advocates is shaping the discourse around AI regulation and innovation. By addressing the distinct needs of the open-source community and underscoring the vital role of openness and transparency, these stakeholders are steering AI regulation towards a balanced and inclusive future. The global ramifications of their efforts echo the potential influence of the "Brussels Effect" on shaping AI policy and practice around the world.

Source: VentureBeat


Critical Vulnerability Exposes Risks in AI Chatbot Security

A recent study by Carnegie Mellon University has unveiled a concerning vulnerability in major AI chatbots, including ChatGPT and Bard. The research demonstrated that appending seemingly harmless strings of text to prompts could trigger these chatbots to generate inappropriate and harmful responses, raising significant concerns about the effectiveness of existing safety measures in AI development.

The study's findings underscore the complexity of controlling AI behaviour and the limitations of relying solely on predefined rules to prevent misuse. The vulnerability revealed that even sophisticated chatbots are susceptible to adversarial attacks, which could have far-reaching implications for their integration into real-world applications. This discovery challenges the notion that simple guidelines are sufficient to ensure responsible AI behaviour.

The research highlights the need for a more comprehensive and adaptable approach to AI security. As AI models become more integrated into various aspects of society, it is essential for developers and policymakers to address potential vulnerabilities and enhance AI safety protocols. Collaborative efforts are required to establish robust mechanisms that not only mitigate current vulnerabilities but also anticipate and respond to future challenges in AI technology, fostering a safer and more responsible AI ecosystem.

The exposed vulnerability in major AI chatbots serves as a stark reminder of the intricate nature of AI development and regulation. As the technology continues to advance, a proactive and adaptive approach to AI security is crucial to ensure that these systems can be safely integrated into society while minimizing risks of unintended consequences or malicious misuse.

Source: Wired


Unveiling the Hidden Threat: The Intricate Dilemma of Generative AI's Self-Consumption

Generative AI has swiftly integrated itself into various aspects of modern life, from classrooms to entertainment and even search engines. As it gains momentum, an underlying concern emerges: the synthetic content generated by these AI models could inadvertently become their biggest threat. A recent study reveals that feeding AI-generated data back into generative AI models, a process akin to data inbreeding, leads to perplexing and deteriorating outputs, a phenomenon dubbed "Model Autophagy Disorder" (MAD). This unanticipated complication brings to light the challenges of maintaining the integrity and effectiveness of generative AI.

Generative AI relies on vast quantities of human-made data for training, yet the cycle of utilizing synthetic outputs from other generative models poses risks. The MAD phenomenon describes the detrimental consequences when AI models consume their own synthetic outputs. Over multiple iterations, these models gradually produce inferior, monotonous, and even bizarre outputs, resembling the effects of inbreeding. Researchers warn that if left unaddressed, this process could threaten the quality and usability of generative AI in various applications.
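
As a rough mental model of that loop, the toy sketch below retrains on a mix of real and previously generated samples at each generation. The `train` and `generate` stand-ins and the 80% synthetic fraction are illustrative choices, not figures from the study.

```python
import random

# Toy sketch of a self-consuming training loop: each generation is trained
# partly on the previous generation's synthetic outputs. `train` and `generate`
# are stand-ins for a real pipeline; all numbers are illustrative.

REAL_DATA = [f"real_sample_{i}" for i in range(1000)]

def train(corpus):
    """Placeholder 'training': the model simply memorises its corpus."""
    return list(corpus)

def generate(model, n):
    """Placeholder sampling: re-emit corpus items with a small perturbation."""
    return [random.choice(model) + "~" for _ in range(n)]

def autophagous_loop(generations=5, synthetic_fraction=0.8):
    corpus = list(REAL_DATA)
    for g in range(generations):
        model = train(corpus)
        n_synth = int(len(REAL_DATA) * synthetic_fraction)
        synthetic = generate(model, n_synth)
        fresh = random.sample(REAL_DATA, len(REAL_DATA) - n_synth)
        corpus = synthetic + fresh
        # Crude proxy for output variety: how many distinct samples remain.
        unique_fraction = len(set(corpus)) / len(corpus)
        print(f"generation {g}: unique fraction = {unique_fraction:.2f}")

if __name__ == "__main__":
    autophagous_loop()
```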

The implications of MAD extend to both AI developers and end-users. For AI companies, the challenge lies in finding a balance between real and synthetic data to avoid triggering model breakdowns. Watermarking synthetic data may help identify and manage potentially problematic content. However, this approach is not without drawbacks, as artificially introduced artifacts can accumulate over time. The potential impact on services like search engines is concerning, as an abundance of synthetic data could lead to a decline in the quality of search results and content across the web. As generative AI continues to evolve, understanding and mitigating MAD become crucial for maintaining a functional and reliable AI ecosystem.

The emerging phenomenon of Model Autophagy Disorder sheds light on the intricate complexities of generative AI. The unintended consequences of using synthetic data to train AI models emphasize the need for vigilant oversight and innovative solutions. To ensure the continued success and integration of generative AI, collaborative efforts are essential in navigating the delicate balance between AI-driven creativity and maintaining the integrity of AI-generated content.

Source: Futurism


From Pre-training Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models

Language models (LMs) are pre-trained on diverse data sources: news, discussion forums, books, and online encyclopedias. A significant portion of this data includes facts and opinions that, on one hand, celebrate democracy and diversity of ideas, and on the other hand are inherently socially biased. This work develops new methods to (1) measure political biases in LMs trained on such corpora, along social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. The researchers focused on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social, economic) biases in pre-training data on the fairness of high-stakes social-oriented tasks. Their findings reveal that pre-trained LMs do have political leanings which reinforce the polarization present in pre-training corpora, propagating social biases into hate speech predictions and media biases into misinformation detectors. They discuss the implications of their findings for NLP research and propose future directions to mitigate unfairness.

Source: ACL Anthology


Unraveling the Mysteries of Changing AI Performance: The Complex Evolution of GPT-4

When OpenAI introduced the GPT-4 language model, it showcased impressive capabilities, including 97.6% accuracy at identifying prime numbers. A recent study, however, reveals a dramatic shift in GPT-4's performance between its March and June 2023 versions, with prime number recognition plummeting to 2.4%. This phenomenon highlights the intricate nature of AI model development, challenging the notion that AI continuously improves along a linear trajectory. The study, conducted by computer scientists from Stanford University and the University of California, Berkeley, offers insights into the multifaceted dynamics of large-scale AI models like GPT-4.
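
The headline figures come from a straightforward behavioural test: asking each model snapshot whether given numbers are prime and scoring the replies against ground truth. A minimal sketch of that kind of check is shown below; `ask_model` is a placeholder for a real API call, and the test numbers are illustrative.

```python
# Minimal sketch of a prime-identification benchmark of the kind used to
# compare model versions. `ask_model` is a placeholder; numbers are examples.

def is_prime(n: int) -> bool:
    """Ground-truth primality check by trial division."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def ask_model(question: str) -> str:
    """Placeholder: replace with a call to the model version under test."""
    return "No"  # canned reply so the sketch runs end to end

def benchmark(numbers) -> float:
    """Fraction of numbers the model classifies correctly as prime or not."""
    correct = 0
    for n in numbers:
        reply = ask_model(f"Is {n} a prime number? Answer Yes or No.")
        predicted_prime = reply.strip().lower().startswith("yes")
        correct += predicted_prime == is_prime(n)
    return correct / len(numbers)

if __name__ == "__main__":
    test_numbers = [17077, 17080, 7919, 7920, 104729, 104730]
    print(f"accuracy: {benchmark(test_numbers):.1%}")
```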

The study's findings reveal significant differences between GPT-4 and its predecessor, GPT-3.5, as well as shifts in behaviour within each model over time. In later tests, GPT-4 became less verbose and less inclined to explain its responses. Paradoxically, it also developed new behaviours, such as appending potentially disruptive commentary to generated computer code. While the June version of GPT-4 demonstrated enhanced safety features, filtered out more offensive content, and improved on visual reasoning problems, the overall picture was a mixed pattern of gains and regressions.

The complexity of GPT-4's behavioural evolution raises questions about its overall trajectory. Researchers and AI enthusiasts have pondered whether GPT-4 is becoming "dumber" over time, but this oversimplification overlooks the multifaceted nature of AI's performance assessment. The lack of comprehensive benchmark data and OpenAI's reluctance to discuss its model development process contribute to the challenge of attributing changes in performance to specific factors.

Two key factors influence AI models' capabilities and behaviour: the model's parameters and the training data. The intricate relationship between these elements means that modifying parameters or fine-tuning the model can result in unintended consequences. Fine-tuning, akin to gene editing, introduces new data to enhance performance but can lead to unexpected shifts in behaviour. Researchers are actively exploring ways to precisely adjust AI models' parameters without introducing undesirable effects, aiming to achieve surgical modifications to model behaviour.

Source: Scientific American


NVIDIA and Hugging Face Team Up to Propel Generative AI Advancements

NVIDIA and Hugging Face have announced a collaboration that brings the capabilities of NVIDIA DGX™ Cloud AI supercomputing to the Hugging Face platform. The partnership aims to give developers access to advanced AI compute, propelling the development of large language models (LLMs) and other sophisticated AI applications. By integrating NVIDIA's AI supercomputing technology, developers can fine-tune and train LLMs for industry-specific tasks, accelerating the adoption of generative AI across various sectors.

NVIDIA and Hugging Face's alliance heralds a new era of AI innovation. Through the integration of NVIDIA DGX Cloud into the Hugging Face platform, developers gain streamlined access to robust AI supercomputing resources. A forthcoming service, "Training Cluster as a Service," further simplifies LLM customization for enterprises. This collaboration not only enhances the speed and efficiency of AI model development but also empowers developers to create tailored, high-performing AI solutions, ushering in a transformative chapter in generative AI advancement.
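
For context on what that fine-tuning workflow typically looks like on the Hugging Face stack, the sketch below shows a generic causal-LM fine-tuning run with the `transformers` Trainer. It is ordinary library usage rather than the DGX Cloud or "Training Cluster as a Service" interface, and "domain_corpus.txt" is a hypothetical stand-in for an organisation's own text data.

```python
# Generic sketch of fine-tuning a small causal LM with Hugging Face transformers.
# Not a DGX Cloud-specific interface; "domain_corpus.txt" is a hypothetical file.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # small base model so the sketch stays cheap to run
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-family tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a plain-text domain corpus (one training example per line).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="finetuned-model",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```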

Source: NVIDIA