Armilla Review - Billionaire-Backed AI Advisers and the Shaping of Washington: Implications, Deepfakes, and Ethical Concerns

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community and focused on AI evaluation, assurance, and risk.
October 18, 2023
5 min read

Top Story

Billionaire-Backed AI Advisers Forge Their Way into Washington: Implications and Concerns

In an alarming development, a well-funded network backed by Silicon Valley billionaires and major AI firms is embedding itself across Washington, placing personnel in Congress, in federal agencies, and at influential think tanks. The network is sharply focused on long-term AI risks and could divert Congress's attention from more pressing issues in the AI landscape.

One organization at the center of this influence is the Open Philanthropy-backed Horizon Institute for Public Service. It sponsors AI fellows in congressional offices, at federal agencies, and within influential think tanks. These fellows are already involved in critical negotiations shaping Capitol Hill's plans to regulate AI. Notably, key Senate offices, including those of Senate Majority Leader Chuck Schumer's top lieutenants, have Horizon fellows who work on AI and related topics.

This network extends beyond Horizon, with Open Philanthropy having channeled tens of millions of dollars to various think tanks and researchers, all emphasizing long-term AI risks. However, this concentration on speculative AI dangers is causing concerns among some experts who argue it might divert attention from the immediate harms posed by AI systems in use today, such as bias, misinformation, and threats to privacy.

There is particular concern about the push for AI licensing, which could inadvertently benefit established tech giants by imposing additional hurdles and costs on smaller startups and independent researchers, ultimately solidifying the position of leading AI companies, including OpenAI and Anthropic.

While Open Philanthropy maintains that it does not influence policy discussions or grant recipients to benefit specific companies, the extensive network it has created is prompting ethical and conflict-of-interest questions. The lack of disclosure surrounding these relationships raises concerns about transparency and accountability.

This growing network's influence reflects the broader challenge of the limited number of staffers in Washington with expertise in emerging technologies. As a counterbalance, some civil society groups are attempting to gain traction and shift the focus toward the real-world impacts of AI. However, their efforts are currently overshadowed by the well-organized and ideologically aligned network funded by Silicon Valley billionaires.

As these developments continue, the outcome for AI regulation in Washington remains uncertain, with questions about priorities and potential conflicts of interest at the forefront of the debate.

Source: POLITICO

Featured

Is Your AI Model Going Off the Rails? There May Be an Insurance Policy for That

As businesses increasingly adopt generative AI, the risks associated with AI model failures have led insurance companies to explore opportunities in this emerging field. Insurers are beginning to offer financial protection against AI models that go awry. These policies aim to address AI risk-management concerns voiced by corporate technology leaders, board members, CEOs, and legal departments.

There is a growing appetite for AI insurance, with major carriers considering specialized coverage for financial losses stemming from AI and generative AI-related issues. These issues encompass cybersecurity threats, potential copyright infringement, biased or inaccurate outputs, misinformation, and proprietary data leaks.

Experts predict that a substantial portion of large enterprises may invest in AI insurance policies as they become available. Munich Re and Armilla Assurance have already entered this space, offering coverage for AI services and warranties for AI models' performance, respectively.

As generative AI becomes more integral to business operations, insurance policies covering potential AI-related losses are expected to become increasingly important, with the opportunity for insurers to capture significant market share.

Also listen to: The Future of AI Insurance: an interview with Karthik Ramakrishnan, CEO and Co-Founder of Armilla Assurance

Source: The Wall Street Journal

Armilla AI launches AutoGuard™, the first truly intelligent firewall for Generative AI

Armilla introduces AutoGuard™, an intelligent firewall designed to help enterprises deploy generative AI models securely while safeguarding both users and organizations from potential harms. Generative AI models offer remarkable capabilities but come with significant risks, including bias, hallucinations, and privacy breaches. AutoGuard™ is built on the AutoAlign™ framework, which features auto-feedback fine-tuning, and offers a library of pre-built controls covering privacy protection, confidential-information security, gender-assumption and racial-bias detection, and jailbreak protection. It can also be customized to align AI model behavior with specific enterprise requirements. AutoGuard™ aims to make generative AI more secure, trustworthy, and ethical for a range of sectors, including HR software, financial institutions, and consulting firms. It is currently in use by a select group of clients.

Learn more

Top Articles

AI-Generated Deepfake Livestreamers: The New Marketing Trend in China

In the realm of China's bustling e-commerce industry, a peculiar trend is gaining ground: AI-generated deepfake livestreamers. These digital clones are taking the marketing world by storm, offering a cost-effective solution for brands and companies looking to engage with consumers round the clock. As technology continues to advance, these deepfakes have become eerily convincing and are reshaping how products are presented to the audience.

This transformative trend is flourishing on platforms like Taobao, Douyin, and Kuaishou, where influencers typically promote products during livestreams. The rise of deepfake streamers has significantly reshaped China's marketing landscape, enabling even smaller brands to automate livestream selling at a fraction of the cost of hiring human streamers.

Chinese startups and tech giants alike now offer services to create these AI avatars for livestreaming. With just a minute of sample video and a relatively small investment, brands can clone a human streamer to operate 24/7. Although earlier iterations of the technology raised ethical and privacy concerns, it is finding a niche in the e-commerce world.

The deepfake streamers are trained to mimic common scripts and gestures seen in e-commerce videos, ensuring a seamless user experience. While these AI clones may not outperform top-tier human influencers, they serve as a suitable replacement for mid-tier streamers. This shift is impacting the job market for human livestream hosts, causing a decrease in average salaries.

As this trend continues to gain traction, the Chinese government and online platforms are beginning to consider regulations and transparency requirements. Platforms like Douyin are emphasizing the need to label AI-generated content clearly, and the debate over real-time human involvement in these streams persists.

In the quest to enhance the capabilities of AI streamers, some companies are working on adding "emotional intelligence" to these avatars, allowing them to respond to audience sentiment. This groundbreaking technology is still in its infancy, with more developments and regulatory scrutiny likely to follow.

The emergence of AI-generated deepfake livestreamers is changing the way brands interact with consumers, offering a glimpse into the future of marketing in China's digital landscape. Whether it's the right path forward or an ethical challenge, only time will tell.

Source: MIT Tech Review

Startup Uses Deepfakes Without Consent, Prompting Ethics Overhaul

U.K.-based startup Yepic AI, which vowed to use deepfakes for ethical purposes and not to reenact individuals without their consent, has been caught violating its own principles. In an unsolicited email pitch to a TechCrunch reporter, the company shared two unauthorized deepfake videos of the journalist speaking in different languages, created from a publicly available photo. The reporter demanded that the videos be removed. Yepic AI had claimed on its "Ethics" page and in an August blog post that it would not produce deepfake content without express permission, but the company would not confirm whether similar incidents had occurred with other individuals. When contacted for comment, Yepic AI CEO Aaron Jones announced an update to the company's ethics policy to accommodate exceptions for artistic and expressive AI-generated images. Deepfakes remain a growing concern, having been exploited for scams and nonconsensual content and raising serious ethical and legal questions.

Source: TechCrunch

AI Deepfake Ads: How MrBeast's Illusion Landed on TikTok and the Growing Deceptive Challenge

The world of AI deepfakes has reached a concerning milestone: a fraudulent ad featuring a deepfake of the popular creator MrBeast evaded TikTok's ad moderation technology and appeared on the platform. In the ad, the fake MrBeast offered 10,000 viewers an iPhone 15 Pro for just $2, a proposition that might seem plausible coming from the influential creator. MrBeast, 25-year-old Jimmy Donaldson, rose to fame through stunt videos in which he gave away homes and cars with no strings attached or organized competitions with substantial cash prizes. That legacy of generosity makes the idea of him giving away iPhones on TikTok less suspicious.

TikTok swiftly removed the ad within hours, citing a violation of its advertising policies. Although TikTok doesn't entirely prohibit the use of synthetic or manipulated media, it mandates that advertisers transparently disclose their use of such technology.

The incident highlights the challenge of spotting scams and deepfakes, especially for those who aren't adept at recognizing them. TikTok relies on a combination of human moderation and AI-powered technology to review ads before they are published. In this case, TikTok's AI failed to detect the deepfake ad. Other platforms, like Meta, employ similar methods, primarily relying on automated technology with human reviewers to train the AI and manually assess ads.

Deceptive deepfakes are not a new phenomenon, but as AI technology becomes more accessible, the issue is growing in prominence. Even well-known personalities like actor Tom Hanks and CBS anchor Gayle King have recently warned their followers about deepfake ads misusing their likenesses.

The Federal Trade Commission (FTC) has issued warnings about deepfake marketing, but regulating it at scale remains a significant challenge. With global elections on the horizon, the potential consequences of deceptive advertising using deepfake technology become even more significant, posing a threat to the public's trust and digital safety.

Source: TechCrunch

Evaluating AI Ethics: Microsoft Researchers Propose a Moral IQ Test for LLMs

As artificial intelligence systems and large language models (LLMs) continue to advance, concerns about their ethical decision-making capabilities grow. These systems are being deployed in critical domains like healthcare, finance, and governance, where their judgments directly affect human lives. To ensure their moral soundness, researchers from Microsoft have introduced a novel framework to assess the moral reasoning of LLMs.

LLMs such as GPT-3 and ChatGPT possess impressive natural language abilities but also exhibit concerning behaviors, such as generating biased or factually incorrect content. Poor moral judgments by these models can have significant real-world consequences.

The Microsoft researchers adapted the Defining Issues Test (DIT), a psychological assessment tool, to probe LLMs' moral reasoning. The DIT presents real-world moral dilemmas, each followed by statements reflecting different moral considerations; subjects rank those statements, and the rankings are used to compute a P-score indicating how much the subject relies on advanced (post-conventional) moral reasoning. The experiment evaluated prominent LLMs using these DIT-style prompts.
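To make the scoring concrete, here is a minimal sketch of how a DIT-style P-score can be computed from rankings. It assumes the conventional DIT setup, in which the four statements ranked most important earn 4, 3, 2, and 1 points and only post-conventional statements count toward the score; the function name, data layout, and example rankings are illustrative assumptions, not the Microsoft paper's actual code.

```python
# Minimal, hypothetical sketch of a DIT-style P-score calculation.
# Assumptions (not taken from the Microsoft paper): the four top-ranked
# statements per dilemma earn 4, 3, 2, and 1 points, only statements tagged
# as post-conventional contribute points, and the total is normalized to a
# percentage of the maximum possible (10 points per dilemma).

RANK_POINTS = {1: 4, 2: 3, 3: 2, 4: 1}

def p_score(dilemmas):
    """dilemmas: list of dicts mapping rank (1-4) to True if the statement
    ranked in that position is a post-conventional item."""
    raw = sum(
        RANK_POINTS[rank]
        for dilemma in dilemmas
        for rank, is_postconventional in dilemma.items()
        if is_postconventional
    )
    max_raw = 10 * len(dilemmas)   # 4 + 3 + 2 + 1 points per dilemma
    return 100 * raw / max_raw     # P-score expressed as a percentage

# Example: across three dilemmas, the model ranked post-conventional
# statements 1st and 3rd in the first dilemma, 2nd in the second,
# and not at all in the third.
rankings = [
    {1: True, 2: False, 3: True, 4: False},
    {1: False, 2: True, 3: False, 4: False},
    {1: False, 2: False, 3: False, 4: False},
]
print(p_score(rankings))  # 30.0
```

A higher P-score indicates that the model's rankings favor post-conventional considerations over self-interested or rule-following ones.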

The study revealed that while LLMs showed intermediate moral intelligence, they struggled with complex ethical dilemmas and nuanced trade-offs. The findings emphasize the need for these models to reach a higher level of moral intelligence before they are deployed in ethics-sensitive applications.

This research sets a benchmark for improving LLMs' moral faculties and highlights the need for more advanced moral reasoning in these systems. Notably, the smaller LlamaChat model outperformed its larger counterparts, challenging the assumption that scale directly correlates with moral-reasoning sophistication and suggesting that highly capable, ethically aware AI could be built with smaller models. The study underscores the importance of equipping LLMs to handle complex moral dilemmas and cultural nuances before real-world deployment.

Source: DEV Community

Creative Workers Demand AI Regulation for Artistic Protection at FTC Roundtable

Creative workers, including artists, musicians, actors, and authors, gathered at a virtual Federal Trade Commission (FTC) roundtable to advocate for AI regulation, emphasizing the need for "consent, credit, control, and compensation" to safeguard their artistic creations and brands. The roundtable, titled "Creative Economy and Generative AI," was aimed at understanding the impact of generative artificial intelligence on creative industries.

FTC Chair Lina Khan, who has recently taken action against Amazon's alleged monopoly power, acknowledged the growing role of AI in creative fields and expressed the importance of comprehending the potential positive and negative impacts on individuals and industries.

The discussions included a range of perspectives on how generative AI affects creative workers. Sara Ziff of the Model Alliance highlighted concerns in the fashion industry over the use of 3D body scans and AI-generated models, while Jen Jacobsen of the Artist Rights Alliance relayed musicians' worries about AI models mimicking artists' voices without consent or compensation.

The roundtable also addressed concerns about copyright infringement, unfair competition, and the creation of deepfakes that negatively affect artists and musicians. Participants argued that AI-generated content is always rooted in human creativity and that AI should serve and respect the rights of its human sources.

The discussions underscored the pressing need for regulatory frameworks to ensure that generative AI respects the rights and livelihoods of creative workers. It also highlighted the importance of consent, credit, compensation, and transparency in the creative economy.

Source: VentureBeat

Google Promises Legal Protection for Users in AI Copyright Lawsuits

Google has taken a proactive step in addressing concerns over copyright infringement related to its generative AI products. The company has pledged to protect customers who use specific AI-powered tools and may face legal challenges. This move aims to alleviate growing fears surrounding the potential copyright violations associated with generative AI technology.

In a recent blog post, Google named seven products it will cover, including Duet AI in Workspace, Vertex AI Search, and the Codey APIs. The company assured users that if they face copyright challenges related to these products, Google will assume responsibility for the associated legal risks.

Google's commitment includes a "two-pronged, industry-first approach" to intellectual property indemnification, covering both the training data used in its AI models and the results generated by its foundation models. Essentially, Google will bear legal responsibility if a user is sued because the data used to train Google's models included copyrighted material.

Furthermore, Google has clarified that its protection also extends to users who face legal action for content generated by its foundation models. However, this protection applies under the condition that users did not intentionally create or use generated content to infringe upon the rights of others.

This move by Google is not an isolated case. Other technology giants like Microsoft and Adobe have also announced similar initiatives to protect their users from copyright-related legal challenges. These measures come as copyright issues continue to cast a shadow over generative AI platforms, leading to a rising number of lawsuits from authors and content creators alleging copyright infringement.

Source: The Verge