Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
OpenAI and Google Announce AI Advancements and Integrations
What was the biggest AI announcement this week?
One clear candidate is GPT-4o, OpenAI’s upgrade to GPT-4. The new model generally outperforms GPT-4 across a range of benchmarks, although the improvement is not dramatic.
The distinguishing feature of GPT-4o is multimodality (the “o” stands for “omni”). The model accepts text, audio, image, and video inputs, processes them all with a single neural network, and then produces any combination of text, audio, and image outputs. GPT-4 already had some of these abilities, but it was significantly slower and less capable in non-textual formats.
This multimodality enables some uncanny behavior. In one demonstration, two GPT-4o models hosted on smartphones discussed the evolving features of their surroundings and then sang a song together.
Additionally, GPT-4o’s synthetic voice sounds remarkably human-like, prompting numerous comparisons to the AI romantic companion in the 2013 sci-fi movie Her. One OpenAI researcher said that rewatching the movie “felt a lot like rewatching Contagion in Feb 2020.” Indeed, with GPT-4o, human-like digital companions appear to be around the corner.
Alternatively, the week’s biggest announcement may not have been GPT-4o, but rather the slew of announcements Google made at its 2024 I/O Conference:
Veo is a rival to OpenAI’s Sora that can generate 1080p videos from textual prompts;
Imagen 3 is Google’s latest text-to-image model;
Gemini 1.5 Flash is a lightweight language model optimized for speed, yet still capable of processing roughly 700,000 words at once;
Gemini 1.5 Pro, arguably Google’s most capable chatbot, can now process approximately 1.4 million words at once, by far the most of any major chatbot;
Gemma 2 is an upgraded version of the company’s flagship open-weight AI model;
Project Astra is a new program that aims to build a “universal AI agent,” with a prototype already demonstrating multimodal reasoning akin to GPT-4o;
Music AI Sandbox is a suite of AI-powered music generation tools currently under development;
Trillium is the latest version of Google’s AI-specialized computing chip, which the company claims is nearly five times faster than its predecessor;
Google Search, Google Photos, Google Docs, and many other Google products will gain new AI features.
In total, Google released a blog post listing 100 different announcements, over half of which pertained to AI.
“More happened in months than has happened in years,” said a promotional video during the event’s keynote speech. Given the flurry of announcements, that trend looks set to repeat itself before the 2025 Google I/O Conference rolls around. These improvements are incremental, but they add up.
Whether this week’s most impressive AI advancement was GPT-4o, Google’s latest models, or some other development entirely, such as ElevenLabs’ music generator, Anthropic’s AI prompt engineer, or Google’s AI-enabled 3D mapping of every cell and connection in a sliver of the human brain, the pace of AI progress is remarkable. Accordingly, the technology’s societal impacts are evolving rapidly, which increases the urgency of establishing appropriate government oversight.
House Moves on Bills to Extend AI Export Controls
The Export Control Reform Act of 2018 (ECRA) strengthened and updated the US government’s authorities to control the export, reexport, and in-country transfer of a wide range of items, including dual-use and emerging technologies. These controls are aimed at preventing actions that could compromise US national security or foreign policy objectives.
Traditionally, export controls have focused on physical items. But in today’s digital age, sensitive items can be accessed, and potentially transferred to foreign entities, in intangible forms such as cloud computing power and AI model software.
Two new bills seek to amend ECRA to strengthen the government’s ability to control such non-physical items.
First, Representatives Mike Lawler (R-NY), Jeff Jackson (D-NC), Rich McCormick (R-GA), and Jasmine Crockett (D-TX) introduced the Remote Access Security Act, in an effort to control China’s access to cloud computing.
The bill would amend ECRA to give the President and the Commerce Department the power to regulate remote access of US-controlled items by foreign persons. This includes the ability to create regulations that could impose licensing requirements, penalties, and other controls on remote access.
This authority is likely necessary for a bill like the Closing Loopholes for the Overseas Use and Development of AI (CLOUD AI) Act, which the same quartet of representatives introduced last summer. The CLOUD AI Act would direct Commerce to prohibit US entities from supplying advanced AI chips’ computing capabilities to China through the cloud. The Commerce Department already restricts physical exports of these chips to China.
In a separate effort to control non-physical items, Representatives Michael McCaul (R-TX), Raja Krishnamoorthi (D-IL), John Moolenaar (R-MI), and Susan Wild (D-PA) introduced the Enhancing National Frameworks for Overseas Restriction of Critical Exports (ENFORCE) Act.
The ENFORCE Act would amend ECRA and the International Emergency Economic Powers Act (IEEPA) to target emerging and foundational technologies as well as “covered AI systems” that are deemed essential to US national security. The Commerce Department would ultimately define the scope of “covered AI systems”; in the interim, the definition focuses on AI that enables weapons of mass destruction or permits “the evasion of human control or oversight through means of deception or obfuscation.”
The House Foreign Affairs Committee (HFAC) is currently holding a markup meeting to consider the Remote Access Security Act, with another markup scheduled for next week to consider the ENFORCE Act. HFAC’s attention to these bills reflects the growing global importance of AI and the new challenges it poses for national security.
Schumer Quartet Presents an AI Roadmap
Senators Chuck Schumer (D-NY), Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN) have spent roughly a year studying AI together, holding hundreds of stakeholder meetings and convening nine AI “Insight Forums” where Senators could learn about the technology.
On Wednesday, the quartet unveiled its current recommendations for AI policy in a comprehensive “Roadmap for AI Policy in the US Senate.”
Many of the Roadmap’s proposals focus on innovation, such as encouraging appropriators to ramp up federal funding for non-defense AI research and development to at least $32 billion per year by 2026.
Additionally, several recommendations in the Roadmap would help reduce risks from AI.
For example, the Roadmap encourages investment in the US AI Safety Institute at the National Institute of Standards and Technology (NIST). It also encourages funding to strengthen the Bureau of Industry and Security (BIS) in the Department of Commerce, which would help to prevent US adversaries from using AI to threaten national security.
Another important component encourages measurements and tests to evaluate AI capabilities and risks, such as assessing “chemical, biological, radiological, and nuclear (CBRN) AI-enhanced threats.”
However, the Roadmap generally steers away from advocating for regulations to require safer AI systems, which significantly limits its ability to reduce AI risks.
News at CAIP
Save the date: we’re hosting an AI policy happy hour on Thursday, May 30th, from 5:30–7:30pm at Sonoma Restaurant & Wine Bar in DC.
We published a new blog post: “What’s Missing From NIST’s New Guidance on Generative AI?”
ICYMI: watch a video recording of Jason Green-Lowe speaking at an AI-focused panel discussion as part of the Social Security Administration’s National Disability Forum.
We’re hiring for two roles: External Affairs Director and Government Relations Director.
Quote of the Week
AI will exacerbate the threats of cyberattacks—more sophisticated spear phishing, voice cloning, deepfakes, foreign malign influence and disinformation. [...]
We know that adversaries like Russia, like China, like Iran, and others, are intent on interfering, manipulating, influencing our elections to undermine American confidence in the integrity of those elections and to stoke even more partisan discord. And we know that those efforts will be exacerbated by generative AI capabilities.
—Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA), in an interview with Axios
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub