AI Policy Weekly #36
xAI releases Grok-2, Google integrates Gemini into products, and CNAS studies AI’s biological threats
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
xAI Moves Fast and Skimps on Safety
Elon Musk’s AI company, xAI, just released Grok-2, an advanced AI chatbot with capabilities rivaling top AI models from Google, Meta, OpenAI, and Anthropic.
Grok-2 is available to all X (previously known as Twitter) users who pay $16 per month for a Premium+ subscription. A smaller model, Grok-2 mini, is available at half the price with an ordinary Premium subscription.
With this release, xAI has cemented itself as a genuine player in the frontier AI industry.
This is no small feat, as the largest models cost hundreds of millions of dollars to build.
Furthermore, xAI is building a massive supercomputer in Memphis that boasts 100,000 top-of-the-line chips from NVIDIA. The chips alone are worth billions of dollars.
How did xAI raise so much money after launching just one year ago?
According to The Wall Street Journal and The Information, Musk personally invested $750 million in the company during early fundraising efforts, while also supplying $250 million worth of computing power from his separate social media company, X. That $1 billion sum alone is far more than most AI startups have been able to secure.
About half a year later, xAI announced an additional $6 billion in Series B funding, with backing from Andreessen Horowitz, Saudi Arabian Prince Alwaleed Bin Talal, and others.
After raising over $7 billion, xAI is one of the three best-resourced AI startups, alongside OpenAI and Anthropic. OpenAI has received $13 billion from Microsoft, and Anthropic has raised over $8 billion, including $4 billion from Amazon and $2 billion from Google, although all of these deals are under antitrust scrutiny.
Musk recently launched a renewed lawsuit against Sam Altman and OpenAI, arguing that Altman falsely promised to prioritize safety over shareholder value as “the hook for Altman’s long con.” The lawsuit claims that Altman’s “perfidy and deceit are of Shakespearean proportions.”
But at the moment, even OpenAI is doing more for AI safety than xAI.
For instance, the latest Grok-2 model lacks basic guardrails against misinformation. It reliably generates realistic images of presidential candidates conducting terrorist attacks, which users can easily post on the X platform.
Additionally, the Grok-2 announcement includes no information about how xAI plans to prevent users from abusing the model for malicious purposes like phishing attacks.
More generally, xAI has not published any plans for how it will keep increasingly capable AI models under control, despite committing to do so at the AI Seoul Summit.
At the Seoul Summit, xAI also committed to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” Not only has xAI published nothing on this topic, it also appears to have neglected to implement these protections in Grok-2.
xAI’s failure to adhere to voluntary commitments shows why voluntary commitments are insufficient: AI companies can simply ignore them. That’s why the Center for AI Policy believes it is critical to codify safety commitments into real requirements.
Google Integrates Gemini Into Smartphones and Earbuds
Like Apple’s adoption of AI in iPhones, iPads, and Macs, Google is bringing Gemini to its new Pixel 9 smartphones and Pixel Buds Pro 2 earbuds.
Google’s custom Tensor chips power these AI features.
One notable change is the addition of new AI editing tools for photos. For example, users can select a road in an image and generate a river in its place.
Arguably the biggest advancement is the addition of Gemini Live, a lifelike AI voice assistant akin to OpenAI’s GPT-4o.
Wall Street Journal columnist Joanna Stern tested the feature and quipped, “I’m not saying I prefer talking to Google’s Gemini Live over a real human. But I’m not not saying that either.”
Notably, the new Pixel Buds allow users to freely converse with Gemini through their earbuds, similar to technology in the 2013 sci-fi movie Her.
In related news, OpenAI expressed concern that its own voice assistant could cause users to exhibit emotional “over-reliance and dependence.”
With AI now literally whispering in our ears, the science fiction of yesterday is rapidly becoming today’s reality, underscoring the need for policymakers to monitor and address emerging risks.
CNAS Studies AI’s Impact on Biosecurity
AI has the potential to exacerbate biological threats. A new report from the Center for a New American Security (CNAS) highlights these risks and offers recommendations for policymakers.
The study identifies four key areas of concern: general-purpose AI models providing advanced biological instructions, automation reducing the need for hands-on expertise, progress in understanding genetic susceptibility to diseases, and advances in precise engineering of viral pathogens. Concerningly, future AI tools might enable the creation of bioweapons targeting specific genetic groups or regions.
To address these challenges, CNAS proposes strengthening gene synthesis provider screening, regularly assessing AI models’ bioweapons-related capabilities, investing in biodefense projects and technical AI safety mechanisms, and considering licensing for upcoming biological design tools if their capabilities become too dangerous.
The urgency of these recommendations is underscored by ongoing pandemic threats. Recently, the World Health Organization declared a public health emergency of international concern (PHEIC) for mpox in Africa due to the spread of a new virus strain.
As AI technology evolves rapidly, policymakers must act swiftly to address the intersection of AI and biosecurity.
Job Openings
The Center for AI Policy (CAIP) is hiring for three new roles. We’re looking for:
an entry-level Policy Analyst with demonstrated interest in thinking and writing about the alignment problem in artificial intelligence,
a passionate and effective National Field Coordinator who can build grassroots political support for AI safety legislation, and
a Director of Development who can lay the groundwork for the organization’s financial sustainability in the years to come.
News at CAIP
Claudia Wilson published a research paper titled “The EU AI Act and the Brussels Effect: How will American AI firms respond to General Purpose AI requirements?” The paper finds that large American companies are likely to remain in the EU market and be generally compliant with the EU AI Act, even when operating in the US.
Jason Green-Lowe wrote a blog post titled “You Can’t Win the AI Arms Race Without Better Cybersecurity,” reflecting on his experience at DEFCON 2024.
ICYMI: Episode 10 of the CAIP Podcast features Stephen Casper, a computer science PhD student at MIT researching technical and sociotechnical AI safety.
Quote of the Week
I cannot believe a board of directors of any company would knowingly allow innocent, vulnerable people to lose their life savings to enhance their own corporate profits. [...]
It has been a long battle since the investment scams started in 2019. They have only increased and become more prominent with artificial intelligence.
—Dr. Andrew Forrest, a prominent Australian businessman and philanthropist, speaking out against scam advertisements that misuse his image and name on Facebook
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub