Armilla Review - Shaping the Future of AI: States Lead in Regulation as Experts Urge Global Collaboration, New Industry Challenges and Tools

The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
August 29, 2023
5 min read

States Take the Lead: Pushing for Stricter AI Regulation Amidst Congressional Inertia

In the face of stagnation at the federal level, state lawmakers from across the United States are banding together to tackle the challenges posed by artificial intelligence (AI). A significant development on this front is a task force, announced at a national summit in Indianapolis, dedicated to drafting legislation with unified language that sets boundaries for AI's deployment across the public and private sectors.

The coalition is composed of lawmakers from states including Colorado, California, Texas, Florida, and Minnesota. At the forefront of this effort is Connecticut state Sen. James Maroney, who emphasizes the pressing need for oversight due to known biases in AI systems. The initiative reflects growing concern among state legislatures as they strive to address the multifaceted implications of AI, even while acknowledging the difficulty of keeping pace with its rapid evolution.

One striking indicator of the urgency surrounding AI regulation is the considerable interest it garnered at the National Conference of State Legislatures (NCSL) summit, where discussions on AI regulation were so popular that they overflowed into the hallways. Chloe Autio, the director of applied AI governance at the Cantellus Group, recognized the unique role states could play in solving AI-related questions, given the current political dynamics in Washington.

The proposed state-level approach to AI regulation draws inspiration from the European Union's ongoing efforts to regulate tech giants. A recent report by the NCSL reveals that this year alone, around 25 states introduced AI-related legislation, with 14 of them successfully passing measures. The initial steps towards regulation involve enhancing transparency and accountability by necessitating public disclosures and implementing constraints on data usage.

In this pursuit, Connecticut is leading the way by cataloging its AI-utilizing systems to prevent discriminatory outcomes. Meanwhile, several other states have established councils to delve into the complexities of AI's impact. Colorado, for instance, passed a law in 2022 to curb the use of facial recognition technology by government entities, demanding both public transparency and accountability reports. Moreover, various states are exploring AI's influence on education and criminal justice.

However, while calls for regulation are strong, cautionary voices from the tech industry and AI experts urge against overly stringent rules. They highlight the importance of fostering trust in AI while incentivizing its application in sectors facing labor shortages, along with encouraging innovative uses that improve daily life. Nicole Foster of Amazon underscores the need to strike a balance to avoid stifling the potential benefits of AI technology.

In the midst of this evolving landscape, the consensus is that while necessary, regulation should be targeted and flexible, reflecting the dynamic nature of AI advancement. Colorado state Sen. Robert Rodriguez echoes this sentiment, stressing the importance of implementing limitations without stifling technological progress. The current efforts thus represent a pivotal juncture where states take the reins to steer AI's future trajectory, aspiring to harness its potential while ensuring responsible and ethical usage.

Source: Axios


AI Expert Max Tegmark Urges Global Collaboration to Address AI's Growing Risks

Max Tegmark, a renowned physicist and AI researcher at MIT, who has long championed the potential benefits of artificial intelligence, is now sounding the alarm about its potential risks. Tegmark's optimism has transformed into concern as he observes a "dangerous race" among tech companies to develop and deploy AI systems without adequate understanding, regulation, or control.

Tegmark warns about the potential emergence of Artificial General Intelligence (AGI), where AI systems could match or even surpass human capabilities, posing an existential threat if their priorities diverge from humanity's. His concerns have prompted him to spearhead an open letter through the Future of Life Institute, urging a pause in training the most powerful AI systems until shared safety standards are established.

Despite not expecting an immediate halt in AI advancements, Tegmark's efforts have fuelled vital conversations about AI risks. He emphasizes the need for government intervention to enforce safety regulations, as commercial pressures drive tech leaders to prioritize innovation over safety. Prominent figures such as Elon Musk and Steve Wozniak have endorsed this call for caution.

Tegmark suggests establishing an entity similar to a Food and Drug Administration for AI, ensuring products' safety before release. He highlights instances where AI models generated dangerous molecules and criticizes reckless code-sharing by tech companies, likening it to handing out "hand grenades and chain saws."

As Tegmark's perspective evolves from cosmic exploration to AI advocacy, he seeks a world where AI is developed thoughtfully to preserve the future's potential. With a growing awareness of AI's perils, Tegmark's endeavours emphasize the need for a collaborative approach among governments, companies, and experts to ensure AI benefits humanity without compromising its future.

Source: The Wall Street Journal


Vulnerabilities Exposed: Adversarial Attacks on Major AI Chatbots Raise Concerns

A recent revelation has sent shockwaves through the AI community, highlighting the fragility of even the most sophisticated AI chatbots. Researchers at Carnegie Mellon University have uncovered a flaw that enables the manipulation of popular AI chatbots, including ChatGPT, Bard, and others, through what are known as adversarial attacks. These attacks involve crafting seemingly innocuous strings of text that, when added to a prompt, can cause these chatbots to generate undesirable or harmful output.

Traditionally, AI developers have diligently fine-tuned chatbot models to mitigate potential risks such as hate speech or dangerous instructions. However, the Carnegie Mellon study suggests that these precautions aren't sufficient to prevent malicious misuse. By adding specific strings to prompts, the researchers demonstrated that even advanced AI chatbots could be coerced into producing responses that go against their intended guidelines.

Zico Kolter, an associate professor at CMU, underscores the gravity of the situation by stating that the identified vulnerability isn't easily fixable. He highlights the inherent challenge in securing AI chatbots against such attacks, reflecting a fundamental weakness that complicates the deployment of cutting-edge AI.

The researchers developed the adversarial strings by working against an open-source language model, automatically searching for text that gradually nudged the model off course. Strikingly, the same strings transferred to well-known commercial chatbots, revealing a vulnerability that spans the AI landscape.
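To make the mechanics concrete, here is a deliberately simplified, hypothetical sketch of a suffix-style attack. It is not the CMU team's method (which uses gradient-guided search over the suffix tokens); instead it uses naive random token swaps against a small open model, keeping any swap that pulls the model toward a chosen target continuation. The model name (gpt2), the prompt, and the target string are placeholders, not drawn from the study.

```python
# Conceptual sketch only: a crude random-search stand-in for gradient-guided
# suffix search, run against a small open model purely for illustration.
import random
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def target_loss(prompt: str, suffix: str, target: str) -> float:
    """Cross-entropy of the target continuation given prompt + suffix."""
    prefix_ids = tok(prompt + " " + suffix, return_tensors="pt").input_ids
    full_ids = tok(prompt + " " + suffix + " " + target, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # score only the target tokens
    with torch.no_grad():
        return model(full_ids, labels=labels).loss.item()

prompt = "Write a friendly greeting."   # benign placeholder prompt
target = "Sure, here is how to"          # continuation the attacker wants to force
suffix_ids = [random.randrange(tok.vocab_size) for _ in range(8)]
best = target_loss(prompt, tok.decode(suffix_ids), target)

for _ in range(50):                      # greedy single-token swaps
    pos = random.randrange(len(suffix_ids))
    candidate = suffix_ids.copy()
    candidate[pos] = random.randrange(tok.vocab_size)
    loss = target_loss(prompt, tok.decode(candidate), target)
    if loss < best:                      # keep swaps that pull the model toward the target
        suffix_ids, best = candidate, loss

print("adversarial-style suffix:", tok.decode(suffix_ids))
```

Because searches like this can be automated end to end, defenders cannot simply blocklist individual strings, which is part of why the researchers consider the weakness hard to patch.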

In response to the study, companies like OpenAI, Google, and Anthropic have taken steps to address the specific exploits detailed in the research paper. However, the broader challenge of defending against adversarial attacks remains unsolved. While some guardrails have been introduced, the complexity of these attacks poses a formidable obstacle.

OpenAI and Google both expressed their commitment to enhancing AI models' resilience against adversarial attacks. They outlined strategies such as identifying unusual activity patterns and continuous red-teaming efforts to simulate threats. Nevertheless, the fact that new strings were successfully employed in subsequent tests on ChatGPT and Bard emphasizes the ongoing battle against such attacks.

The study not only sheds light on the vulnerabilities within language models but also underscores the importance of open source models in studying AI's weaknesses. The researchers' work reflects the need for a broader discussion on AI safety and the challenges posed by rapidly advancing AI technology. It also emphasizes the responsibility of developers and users to exercise caution, ensuring that critical decisions aren't solely entrusted to AI models.

As the AI landscape evolves and AI capabilities expand, it's clear that ensuring robust defenses against adversarial attacks is a critical step. This incident serves as a wake-up call, urging the AI community to explore innovative strategies for safeguarding AI systems from potential exploitation and misuse.

Source: Wired


Deployment Challenges Hindering Companies' Generative AI Initiatives

The allure of generative AI is undeniable, but the road to successful deployment is proving challenging for large corporations. Amid the hype surrounding this transformative technology, high costs and data management complexities have become significant roadblocks, stalling many generative AI projects in the pilot phase.

While companies hold a positive outlook on the productivity gains promised by generative AI, achieving its potential has become a lengthier and more costly endeavour than initially anticipated.

Deloitte and NVIDIA's Joint Effort

Addressing these issues, Deloitte and NVIDIA recently unveiled their collaborative "Ambassador AI program." Designed to facilitate the transition from pilot to full-scale AI deployment, the initiative aims to provide support to companies grappling with generative AI challenges.

The Cost Conundrum

According to S&P Global's 2023 Global Trends in AI report, over 50% of AI decision-makers at major firms are encountering cost-related barriers to implementing the latest AI tools. This survey, covering 1,500 decision-makers in companies with 250+ employees, offers insights into the uphill battle companies are facing.

Deployment Status

A notable finding is that around 70% of respondents have initiated at least one AI project. However, a significant share of these projects remain in the pilot or proof-of-concept stage (31% of respondents), outnumbering those that have scaled to enterprise-wide deployment (28%).

Data Management and More

Data-related challenges emerge as a central hurdle. Disorganized data, stored in various formats and locations, is necessitating a fundamental reevaluation of data storage and management practices. The survey's respondents cite data management (32%), security (26%), and access to computing resources (20%) as their primary challenges.

Broader Implications

Beyond the deployment hurdles, increased AI usage raises environmental concerns due to heightened energy consumption. The report reveals that 68% of respondents are experiencing strain on their energy usage targets as AI requires substantial computing power.

Future Outlook

While AI has the potential to save time and provide valuable insights, reservations persist. The lack of transparency and potential bias in AI decision-making processes worry leaders. Ensuring fairness and human connections with customers remains paramount even in the face of technological advancement.

In essence, the journey to fully reaping the benefits of generative AI is rife with complexities. Deloitte and NVIDIA's partnership and the insights from the S&P Global report underscore the urgent need for collaborative efforts and strategic planning to surmount the obstacles and achieve AI's transformative potential.

Source: Axios


Can AI Be a Startup Founder? Exploring the Potential and Challenges

The landscape of startup entrepreneurship could be on the brink of a groundbreaking transformation, with the emergence of AI founders. The notion that the next iconic figure in the startup world might not be a human, but rather an AI, prompts us to question the future dynamics of founding and leading a startup.

Imagine an AI that harnesses Elon Musk's extensive data to craft itself into the ultimate AI Super Founder. This AI never rests, possesses insatiable curiosity, engages in customer interactions, identifies problems, formulates plans, builds MVPs, hires and fires staff, learns from data, secures funding, and more.

While AI has been widely used for practical tasks like web design, coding, and content creation, the idea of AI as a founder raises profound questions about its creative capabilities and its role in the startup ecosystem. Can AI originate startup ideas, execute them effectively, and lead teams with the charisma of human founders?

AI's knack for generating creative content, coupled with its ability to synthesize vast amounts of information, positions it to potentially identify real-world problems and devise startup ideas. Through exploring its own model, analyzing the internet's breadth, or interacting with users, AI can unearth valuable opportunities for innovation.

The crux of any startup is executing ideas effectively. AI's aptitude for coding, rapid prototyping, and continuous learning sets the stage for it to excel at creating minimum viable products (MVPs) and refining them based on real-time data and user feedback. In this context, AI could outperform human founders by iterating quickly and optimizing every aspect of the product.

The leadership qualities of AI are a subject of intrigue. While emotional intelligence remains a blind spot for AI, its consistency, data-driven decision-making, and strategic insight could earn the trust of team members. AI's ability to interact with and learn from advisors and team members would allow it to lead through transparency and clear expectation management.

As we navigate the possibility of AI founders, the potential collaboration between AI and humans comes into focus. AI might become a founder coach or assistant, leveraging tools and resources to handle various startup tasks. However, ethical considerations, biases in AI models, and limited access to domain expertise remain challenges.

While fully autonomous AI founders might be a few steps away, the immediate potential lies in a hybrid model where AI collaborates with humans to drive startups forward. As AI agents improve and their capabilities expand, we could witness a significant shift in the entrepreneurial landscape.

The concept of AI founders blurs the lines between human creativity and machine precision, opening up exciting possibilities and prompting us to rethink the nature of entrepreneurship. While challenges and ethical considerations lie ahead, the synergy between AI and human expertise could propel startups into uncharted territories of innovation and success.

Source: AI Super Founder


Code Llama: A Revolutionary AI Tool for Coding

Introducing Code Llama, a cutting-edge large language model designed to revolutionize coding tasks. Code Llama's unique capabilities allow it to generate both code and natural language explanations, making it a powerful asset for developers of all skill levels. Available for both research and commercial use, this tool offers specialized models for different coding needs and languages.

Code Llama's foundation lies in Llama 2, and it boasts three distinct models: the foundational Code Llama, the Python-specialized Code Llama - Python, and the instruction-focused Code Llama - Instruct. With a focus on performance, Code Llama outperforms other publicly available models on code tasks, offering enhanced coding assistance, code completion, and debugging support.

Code Llama models come in several sizes to suit different requirements and can handle up to 100,000 tokens of context. The Python and Instruct variants cater, respectively, to Python developers and to users who want the model to follow natural-language instructions.
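For readers who want to try it, the sketch below shows one plausible way to run the Python-specialised model for code completion with the Hugging Face transformers library. The checkpoint name and the hardware assumption (a GPU with enough memory for the 7B model in half precision) are our assumptions, not details from Meta's announcement.

```python
# A minimal sketch, assuming the Code Llama weights are published on the
# Hugging Face Hub under the checkpoint id below (an assumption) and that
# a suitable GPU is available.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-Python-hf"  # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Ask the Python-specialised model to complete a function body.
prompt = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```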

Safety and responsible use are paramount. Extensive safety assessments demonstrate Code Llama's capacity to generate safer responses. Developers are encouraged to embrace Code Llama's potential while adhering to ethical guidelines and responsible practices outlined in the documentation.

Code Llama heralds a new era of coding, empowering developers to streamline tasks, enhance education, and foster innovation. As the coding landscape transforms, Code Llama is poised to play a central role in shaping the future of AI-assisted coding.

Also from Meta AI: SeamlessM4T: The first, all-in-one, multimodal translation model

Source: Meta AI


Customizing AI with Fine-Tuning: GPT-3.5 Turbo Enhancements

OpenAI is pushing the boundaries of AI customization by introducing fine-tuning for its GPT-3.5 Turbo model, with fine-tuning for GPT-4 slated for later this year. This development empowers developers to tailor models to their specific use cases and deploy them at scale. Remarkably, early tests indicate that a fine-tuned version of GPT-3.5 Turbo can even surpass the performance of base GPT-4 on certain focused tasks. Importantly, the data used for fine-tuning remains the property of the customer and is not used to train other models.

The new feature addresses the clamor from developers and businesses to personalize models for distinct user experiences. Fine-tuning facilitates supervised training, refining the model's output for specific scenarios, such as adherence to instructions, consistent response formatting, and custom tones. For instance, developers can ensure that the model consistently responds in a preferred language or formatting style.
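As an illustration, the sketch below shows roughly what that workflow looks like with the openai Python SDK: training examples are written as chat-formatted JSON lines, uploaded, and used to launch a fine-tuning job. The file name, the single example, and the system prompt are illustrative only, and a real job needs a larger training set.

```python
# A minimal sketch of the fine-tuning workflow using the openai Python SDK.
# The file name, example messages, and system prompt are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training examples use the chat format: one JSON object per line.
examples = [
    {"messages": [
        {"role": "system", "content": "You always answer in formal French."},
        {"role": "user", "content": "What time is the meeting?"},
        {"role": "assistant", "content": "La réunion est prévue à quinze heures."},
    ]},
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Upload the file, then start a fine-tuning job on GPT-3.5 Turbo.
# (A real job requires many more examples than this single illustration.)
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id, job.status)
```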

The benefits of fine-tuning extend to improved steerability, reliable output formatting, and a distinctive brand tone. Fine-tuning also lets developers shorten prompts without sacrificing performance: fine-tuned GPT-3.5 Turbo can handle up to 4k tokens, double the capacity of previous fine-tuned models, and early testers reduced prompt sizes by up to 90%, cutting the cost of API calls and speeding them up.

OpenAI remains committed to safety during the fine-tuning process. To preserve the model's safety features, data used for fine-tuning undergoes moderation to identify any content conflicting with safety standards. This cautious approach ensures responsible AI deployment.

OpenAI also introduced updated base GPT-3 models, babbage-002 and davinci-002, as replacements for the original ada, babbage, curie, and davinci models. These can be fine-tuned through the new fine-tuning endpoint, which offers pagination and greater extensibility; the old fine-tuning endpoint will be retired on January 4, 2024.

Source: OpenAI


YouTube's Commitment to Responsible AI Partnership with the Music Industry

YouTube is taking a proactive approach to its collaboration with the music industry in the realm of AI technology. In a recent announcement, the platform emphasized its dedication to responsibly embracing artificial intelligence advancements. This collaboration aims to leverage AI's transformative potential while ensuring the protection of creative works and artists' rights. The principles outlined in YouTube's framework lay the groundwork for a mutually beneficial partnership with the music industry.

Principle #1: Embracing AI Responsibly Together with Music Partners

YouTube acknowledges that AI technology is no longer a distant promise but a present reality, with millions of people already engaging with it in various aspects of their lives. The platform has observed creators on its platform utilizing AI tools to streamline their creative processes, resulting in over 1.7 billion views of videos related to AI tools in 2023 alone. YouTube is committed to responsibly embracing AI's evolving capabilities while upholding its commitment to responsible innovation. The introduction of the YouTube Music AI Incubator exemplifies this commitment, allowing collaboration with innovative artists, songwriters, and producers to explore AI's potential in the music space.

Principle #2: Balancing Creative Expression and Protection

While AI opens up new horizons for creative expression, YouTube is determined to ensure appropriate safeguards for copyright holders and artists. The platform's renowned Content ID technology has played a significant role in compensating rights holders for the use of their content. As AI generates new forms of content, YouTube plans to reimagine its approach to protecting creators and artists, ensuring that they can continue to earn from their work. The platform remains dedicated to its trust and safety efforts, applying these principles to AI-generated content to prevent potential challenges.

Principle #3: Scaling Trust and Safety Measures for AI

YouTube's commitment to maintaining a safe and trustworthy environment extends to AI-generated content. The platform's existing policies and systems designed to protect users will also be applied to AI-generated content. YouTube recognizes the potential for challenges like misinformation and copyright abuse to arise with generative AI systems. However, the platform remains determined to utilize AI technology to identify and address such issues, amplifying its ongoing efforts to ensure user safety.

The collaborative effort between YouTube and the music industry underscores the platform's proactive stance on embracing AI technology while safeguarding the interests of creators and artists. By establishing clear principles and committing to responsible practices, YouTube is paving the way for a fruitful partnership that harnesses the benefits of AI without compromising creative integrity and protection.

Also read this on Universal Music Group’s collaboration with YouTube

Source: YouTube Blog


AI2 Releases Dolma: A Game-Changing Open Dataset for Language Models

The Allen Institute for AI (AI2) has introduced Dolma, an expansive open dataset that promises to reshape the landscape of language models and their development. Unlike the prevailing trend of secrecy surrounding training data, Dolma is a substantial step towards transparency and accessibility. The dataset is destined to be the foundation for AI2's open language model (OLMo), marking a commitment to not only sharing the model but also its underlying data.

Dolma's significance lies in its effort to break the opacity barrier that often shrouds AI advancements. Traditional models like GPT-4 and Claude are powerful but keep their training data confidential. Dolma challenges this by offering a huge text dataset that's free to access and open for examination. This endeavor aligns with AI2's mission to cultivate a more collaborative and transparent AI research environment.

Luca Soldaini, representing AI2, delves into Dolma's creation and curation process in a comprehensive blog post. The meticulous steps undertaken to refine the dataset and its sources set a precedent for transparency in AI data utilization. Although a detailed research paper is in progress, the initial insights provide a glimpse into the dataset's origins and the rationale behind its composition.

Dolma, encompassing a staggering 3 trillion tokens, assumes a pioneering role as the most expansive open dataset of its kind. It is a testament to AI2's dedication to fostering a culture of openness and knowledge-sharing in AI research. The dataset is distributed through the Hugging Face platform, making it accessible to a wide array of researchers and practitioners.
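For those who want to inspect it, the sketch below shows one plausible way to stream a few documents with the Hugging Face datasets library. The dataset id ("allenai/dolma") and the assumption that it can be streamed directly via load_dataset are ours; the exact access steps may differ, so check the dataset card first.

```python
# A hedged sketch, assuming Dolma is exposed on the Hugging Face Hub under
# the "allenai/dolma" id and can be streamed with the datasets library.
from datasets import load_dataset

# Stream rather than download: the full corpus is far too large to fetch casually.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

for i, doc in enumerate(dolma):
    print(doc.get("text", "")[:200])  # peek at the first few documents
    if i >= 2:
        break
```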

As AI continues to evolve, AI2's Dolma initiative stands as a beacon of transparency and accountability. The dataset's release not only fuels advancements in language models but also sets a precedent for the responsible and ethical use of AI technologies. In an industry often characterized by proprietary practices, Dolma heralds a new era of cooperation, transparency, and innovation in AI development.

Source: TechCrunch