Armilla Review - AI in 2023: From Industries to Ethics - Exploring Generative Power, Political Impact, and Future Transformations

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
August 9, 2023
5 min read

The Rise of Generative AI in 2023: McKinsey Survey Highlights its Impact on Industries and Workforces

Generative AI (gen AI) has emerged as a game-changer in the AI landscape, as organizations rapidly adopt these powerful tools. According to the latest McKinsey Global Survey, a significant one-third of respondents state that their companies are regularly using gen AI in at least one business function, less than a year after its introduction. Company leaders, including C-suite executives, are actively embracing gen AI tools for their work, indicating the technology's growing importance. The survey also reveals that organizations are optimistic about the transformative potential of gen AI, foreseeing substantial disruptions to their industries and significant changes in their workforce.

Key Survey Findings:

  1. Early and Widespread Adoption: Despite being relatively new, gen AI has garnered significant interest across industries, regions, and seniority levels. A remarkable 79% of respondents have had some exposure to gen AI, either in their work or personal life, with 22% regularly using it for work tasks. The technology's adoption is highest in the technology sector and North America, reflecting its rapid integration into various businesses.
  2. High Performers Setting the Pace: AI high performers, organizations that attribute at least 20% of their EBIT to AI use, are at the forefront of gen AI adoption. These companies are leveraging both gen AI and traditional AI capabilities to achieve significant value. They are utilizing gen AI in various business functions, particularly in product development, service management, and risk analysis. AI high performers are also more likely to use AI in HR and organizational design.
  3. Shifting Talent Needs: As organizations embrace AI, the roles needed to support their AI ambitions are evolving. In the past year, data engineers, machine learning engineers, and AI data scientists were among the most sought-after roles. However, demand for AI-related software engineers, previously the most in-demand role, has declined. Meanwhile, prompt engineering roles have emerged alongside gen AI adoption.
  4. Steady AI Adoption and Impact: While gen AI tools have gained swift popularity, their adoption has not significantly impacted overall AI usage. The survey indicates that AI adoption remains steady at 55% of respondents, and the share of organizations adopting AI in multiple business functions is limited. Product development and service operations are the two areas where AI is most commonly adopted. However, only 23% of respondents report that at least 5% of their organizations' EBIT is attributed to AI use, suggesting untapped potential.

The McKinsey survey highlights generative AI's rapid rise in 2023 as it becomes an integral part of industries and organizational strategies. Leaders in AI high-performing companies are driving adoption of both gen AI and traditional AI capabilities to derive significant value. The workforce landscape is evolving with new roles emerging, signaling the transformative impact of gen AI on businesses. Yet despite gen AI's explosive growth, overall AI adoption remains steady, leaving ample room for companies to capture further value. As organizations continue to deploy and explore generative AI, the future of AI-driven innovation looks promising.

Source: McKinsey


The Role of AI in Politics: Embracing Opportunities and Overcoming Challenges

Artificial Intelligence (AI) is finding its way into professional politics, according to Eric Wilson, a digital strategist and managing partner at Startup Caucus. He sees the current wave of generative AI as a potential disruptive force in the 2024 election cycle. However, AI companies like OpenAI and Meta impose restrictions on certain political content, causing frustration among political operatives who believe tech platforms should loosen their approach to political use. Wilson argues that properly registered campaigns and committees should have access to AI's capabilities for tasks like drafting press releases and social media copy.

Despite concerns about AI's impact on politics, Wilson believes the integration is practical and mundane, enhancing tasks in various industries. He personally uses AI for blog post drafting, podcast transcript editing, and generating social media ideas. Campaigns are using open-source models and purpose-built tools to overcome restrictions imposed by AI companies and to aid in writing fundraising emails and nonpartisan copy.

Wilson acknowledges the risk of deepfake misuse but believes restricting access for legitimate political actors is not the solution. Instead, he advocates for providing them with the best tools available to address such challenges effectively. As AI policies evolve, finding a balance between allowing political use and addressing concerns will be crucial for AI's future in politics.

Source: POLITICO


Massachusetts Regulators Investigate AI Use in Securities Industry

Massachusetts securities regulators have launched a probe into the application of artificial intelligence (AI) by investment firms, expressing concerns about potential unchecked use and its impact on investors. Secretary of State Bill Galvin announced that his office sent letters of inquiry to several firms, including JPMorgan Chase and Morgan Stanley, seeking information on their use and development of AI for their businesses. The investigation aims to ensure that the technology is employed responsibly, with proper disclosure and consideration of conflicts of interest, to prevent potential harm to investors.

Massachusetts regulators are particularly interested in examining the supervisory procedures of these firms to ensure that AI will not prioritize the companies' interests over those of their investor clients. The investigation comes in the wake of the U.S. Securities and Exchange Commission's (SEC) proposal to eliminate conflicts of interest resulting from AI use on trading platforms. Prompted in part by the "meme stock" frenzy of 2021, in which predictive analytics contributed to the "gamification" of retail investors' behavior, the SEC's proposal is currently open to public comment.

Secretary of State Bill Galvin's office is taking a proactive approach to assess the disclosure processes and marketing materials used by firms that have already deployed AI. By scrutinizing these aspects, Massachusetts regulators seek to ensure that AI applications in the securities industry are conducted responsibly, transparently, and without jeopardizing investors' interests. As AI continues to play a significant role in the financial sector, regulatory efforts are critical to strike a balance between harnessing AI's potential and safeguarding investors from potential risks.

Source: Reuters


Unraveling the Anti-Black Bias in AI

As artificial intelligence (AI) continues to advance, so does the concern over its potential biases. The rise of generative AI platforms like ChatGPT has captivated the public, but there is a darker side that deserves scrutiny - the prevalence of anti-Black bias within AI systems.

Research has highlighted instances of anti-Black bias in various AI applications. In recruiting and hiring platforms, AI algorithms may inadvertently favor white applicants due to biased keyword identification in resumes. Facial recognition technology has also exhibited racial bias, leading to wrongful arrests of Black individuals based on misidentification. Social media filters and beauty apps perpetuate white beauty standards, while generative AI platforms like ChatGPT can produce racist outputs when given certain prompts.

The underrepresentation of Black individuals in the AI field contributes to the problem. To address these biases, AI systems must be programmed from an anti-racist and anti-oppressive perspective. Establishing more AI ethics committees and increasing diversity within the AI community are essential steps towards combating anti-Black bias in AI. As the use of AI becomes more widespread, it is crucial for the public to remain vigilant and informed about the potential pitfalls of these technologies.

Source: Forbes


AI in Medicine: Promising Possibilities and Ethical Challenges

The integration of artificial intelligence (AI) in medicine holds the promise of transforming healthcare, from early disease detection to treatment improvement. AI tools have shown potential in diagnosing and predicting various medical conditions, including cancer. However, while researchers and companies are enthusiastic about AI's capabilities, ethical concerns and technical limitations need addressing to realize its full potential.

The use of AI in healthcare is not new; early AI tools like MYCIN were developed in the 1970s to aid in diagnosing and treating bacterial infections. Today, machine learning, particularly artificial neural networks (ANNs), has become the de rigueur approach for medical applications. AI's standout advantage lies in medical imaging, where it excels at pattern recognition, and in leveraging other data in electronic health records to assess disease risks.

Although AI has made significant strides, there are challenges to overcome. The quality of, access to, and scarcity of data used to develop AI models remain key limitations. Training datasets can be biased, and those biases are then perpetuated in AI decisions. Explainability (XAI) is also a growing concern: clinicians need to understand the reasoning behind a model's recommendations and to allocate responsibility in case of errors.

Additionally, the delicate balance between data sharing and patient privacy is a major ethical consideration. While the potential benefits of AI are immense, the responsible handling of patient data must be ensured to avoid breaches and scandals.

As the medical AI space rapidly evolves and investments soar, the race between AI development and the supporting infrastructure will shape the future of healthcare. Addressing these challenges will be crucial to make AI a truly intelligent, accurate, and safe prediction tool for patients in clinical settings.

Source: Al Jazeera


Evidence, Ethics, and the Promise of AI in Psychiatry

In a thought-provoking paper on AI in psychiatry, the authors emphasize the significance of maintaining "epistemic humility" in clinical decision-making processes. They caution against privileging AI over patient perspectives and experiential knowledge, as this could lead to unintended consequences and perpetuate historical inequities faced by those with mental illnesses. The authors advocate for a balanced approach that combines AI's benefits with shared decision-making and patient involvement in the treatment process.

The paper does not diminish the role of AI in psychiatry but urges a collaborative effort between AI developers, clinicians, and individuals with mental illness to understand and address potential unintended consequences of AI algorithms. By involving patients in the development process and fostering open communication, health systems and clinicians can ensure a commitment to epistemic humility, where AI is integrated as a supplementary tool rather than an absolute authority. This approach aims to support a more equitable and compassionate mental health care system, where both AI and human clinical judgment work in harmony to improve patient outcomes.

Ultimately, the paper calls for a thoughtful integration of AI in psychiatry that respects the valuable perspectives of patients and clinicians. By embracing AI while preserving the human touch, we can create a healthcare landscape that leverages technology's advantages without sidelining the essential experiences and insights of those seeking mental health treatment.

Source: The British Medical Association


MIT's Liquid Neural Networks: A Compact Solution for AI Challenges in Robotics and Self-Driving Cars

MIT's Liquid Neural Networks (LNNs) offer a groundbreaking solution to AI challenges in fields like robotics and self-driving cars. While large language models (LLMs) dominate the AI landscape, not all applications can accommodate their computational demands. LNNs provide a compact, efficient, and adaptable alternative by using a unique mathematical formulation that stabilizes neurons during training and allows them to adapt to new situations post-training. Inspired by biological neurons in small organisms, LNNs require significantly fewer artificial neurons to accomplish tasks, making them more interpretable and suitable for small devices.
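
The "unique mathematical formulation" here refers to the liquid time-constant (LTC) neuron from the underlying MIT research, in which an input-dependent gate modulates each neuron's effective time constant. Below is a minimal Python sketch of one Euler-integrated LTC step; the variable names and the tanh gate are illustrative assumptions rather than the exact published parameterization:

```python
import numpy as np

def ltc_step(x, u, dt, tau, W, b, A):
    """One forward-Euler step of a liquid time-constant (LTC) layer.

    x: hidden state (n,), u: current input (m,), tau: base time constants (n,),
    W (n, n+m) and b (n,): parameters of the gating nonlinearity (illustrative),
    A (n,): per-neuron bias the state is pulled toward.
    """
    f = np.tanh(W @ np.concatenate([x, u]) + b)  # input-dependent gate
    # The gate shifts each neuron's effective time constant to
    # 1 / (1/tau + f), which is what lets the dynamics adapt to the
    # incoming data stream even after training.
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt
```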

The advantages of LNNs extend beyond their compactness. They demonstrate a better grasp of causal relationships than traditional deep learning models, enabling improved generalization to new scenarios. Unlike other neural networks, which can be distracted by irrelevant context, LNNs stay focused on the core task, making them highly adaptable even when settings change. This adaptability allows them to excel in safety-critical and computationally constrained environments. LNNs' unique architecture and mathematical equations also facilitate learning continuous-time models, further enhancing their ability to handle time-series data streams such as video or audio sequences.

MIT CSAIL has already tested LNNs in single-robot settings with promising results, making them a viable option for various applications in robotics and self-driving cars. Their ability to run efficiently on edge devices without cloud connectivity, and their interpretability, set them apart from traditional deep learning models. However, while LNNs excel at handling continuous data streams, their application to static datasets like ImageNet may not be as effective. As researchers extend their tests to multi-robot systems, these innovative neural networks hold the potential to revolutionize AI in safety-critical domains.

Source: VentureBeat


Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to fine-tune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, researchers (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Their work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
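
For readers unfamiliar with the technique the paper critiques, a common RLHF recipe (standard practice, not specific to this paper) first fits a reward model on human preference pairs, then fine-tunes the policy against that reward under a KL penalty. A minimal PyTorch sketch of the reward-model loss:

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss for fitting a reward model on human comparisons:
    pushes the score of the chosen response above the rejected one."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# The RL stage then maximizes the learned reward while a KL penalty
# (weight beta) keeps the policy close to the pretrained reference model:
#   objective = E[ r(x, y) ] - beta * KL(pi_theta || pi_ref)
```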

Source: arXiv


Google Assistant's AI Transformation: Embracing Generative Power for a Bright Future

Google is set to give its virtual assistant, Google Assistant, a major AI makeover, shifting its focus toward generative AI technologies similar to those behind ChatGPT and Google's own Bard chatbot. The move is expected to bring significant changes to how Assistant functions for consumers, developers, and Google's own employees. While the company will initially support both the new and old approaches, the emphasis will be on harnessing generative AI to create a more advanced, supercharged Assistant experience. The overhaul has already begun, starting with the mobile version of the product.

As part of the reorganization, Google is merging the Services and Surfaces teams and making leadership changes in various departments. However, this shift also means some job cuts within the Assistant team. The company has confirmed that a small number of layoffs will be made, affecting a fraction of the thousands of employees working on the Assistant. Despite the changes, Google remains committed to Assistant's future and is optimistic about the transformative potential of generative AI technology.

This move by Google aligns with a broader trend in the industry, as companies like Amazon are also working on AI-powered upgrades for their digital assistants, such as Alexa. It signifies a shift away from traditional digital assistant approaches and highlights the increasing significance of generative AI in shaping the future of conversational technology.

Source: Axios


Run Llama 2 on Your Mac: Unleashing the Power of Generative AI with LLM and Homebrew

Meta AI's recent release of Llama 2, a commercially usable, openly licensed large language model, has opened up new possibilities for Mac users. Using the LLM utility and Homebrew, developers can now install Llama 2 and other compatible llama-cpp models with ease. The new llm-llama-cpp plugin expands support for Llama-style models, enabling users to tap the potential of this advanced conversational AI. With a step-by-step guide on installation and usage, users can explore the capabilities of Llama 2 and other GGML models and experiment with new applications. Although in its early stages, this development presents a promising opportunity for further improvements and broader support across platforms.
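
To make the workflow concrete, here is a hedged sketch using the LLM tool's Python API; the setup commands in the comments paraphrase the post, and the model alias "llama2-chat" is an illustrative assumption rather than a fixed name:

```python
import llm

# Assumes prior setup along the lines the post describes:
#   brew install llm
#   llm install llm-llama-cpp llama-cpp-python
#   llm llama-cpp download-model <GGML model URL> --alias llama2-chat
model = llm.get_model("llama2-chat")  # resolve the locally registered alias
response = model.prompt("Five fun facts about llamas")
print(response.text())
```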

Source: Simon Willison