Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
Commerce Department Institutes Global Export Controls on AI Chips and Model Weights
On Monday, the U.S. Commerce Department’s Bureau of Industry and Security (BIS) announced sweeping new export controls on AI models and the computer chips needed to create them.
Titled the “Framework for AI Diffusion,” this 168-page policy builds on a series of chip-related export controls issued under the Trump and Biden administrations every year from 2018 through 2024. Those restrictions focused heavily on China.
The 2023 rules expanded the control regime to 43 additional countries to prevent chips from reaching China through indirect routes and subsidiary firms.
This week’s Diffusion Framework goes much further, with distinct rules for companies headquartered in three different country groups:
Group 1: U.S. allies such as Canada, France, Germany, Australia, Japan, Sweden, and South Korea (eighteen countries in total).
Group 2: most other countries in the world, including Mexico, Brazil, Israel, Switzerland, India, Malaysia, and Poland.
Group 3: arms-embargoed countries such as China, Iran, Sudan, Cuba, Haiti, Venezuela, North Korea, and over a dozen others.
Companies based in Group 3 cannot receive any advanced AI chips.
Companies based in the U.S. or Group 1—for instance, Oracle—can acquire as many advanced AI chips as they want, so long as they deploy them in the U.S. or Group 1. When they deploy their chips in a Group 2 country, by default they must adhere to export quotas. However, they can exceed those quotas if they implement security measures and earn a Universal Validated End User (UVEU) authorization.
Companies based in Group 2—for instance, the Emirati giant G42—must respect export quotas when deploying chips to both Group 1 and Group 2 countries. But they can exceed quotas if they implement security measures and earn a National Validated End User (NVEU) authorization for the country where they’d like to deploy.
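As a rough illustration (not legal guidance), the tiered rules above can be sketched as a single decision function. The group tiers and the UVEU/NVEU authorization names come from the Framework itself; the function is a simplification that ignores the policy's many exceptions and details:

```python
def chip_export_status(buyer_group: int, deploy_group: int,
                       has_uveu: bool = False, has_nveu: bool = False) -> str:
    """Simplified sketch of the Diffusion Framework's tiered chip rules.

    buyer_group / deploy_group: 1 = U.S. and close allies,
    2 = most other countries, 3 = arms-embargoed countries.
    """
    if buyer_group == 3 or deploy_group == 3:
        return "prohibited"  # Group 3 entities cannot receive advanced AI chips
    if buyer_group == 1:
        if deploy_group == 1:
            return "unlimited"  # allied firms deploying in the U.S. or Group 1
        # Deploying into a Group 2 country: quota by default, unless UVEU-authorized
        return "unlimited (UVEU)" if has_uveu else "quota"
    # Group 2 buyers face quotas everywhere unless they hold a
    # country-specific NVEU authorization for the deployment location
    return "unlimited (NVEU)" if has_nveu else "quota"

print(chip_export_status(1, 1))                 # an ally deploying at home
print(chip_export_status(1, 2, has_uveu=True))  # UVEU holder deploying in Group 2
print(chip_export_status(2, 2))                 # a Group 2 firm without an NVEU
```

The three example calls correspond to the Oracle and G42 scenarios described above.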
The Diffusion Framework goes beyond physical chips. For example, cloud computing companies must prevent customers in Groups 2 and 3 from using their servers to train AI models with over 10^26 computing operations (a training run costing tens of millions of dollars).
Notably, the Framework also imposes “software export controls” on AI model weights—the numerical parameters that define how an AI model behaves—for closed-source models trained with over 10^26 operations. These weights cannot travel to entities in Group 3. They can go to foreign entities in Groups 1 and 2, but only if those entities implement security measures for storing the weights.
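To give a sense of scale for the 10^26-operation threshold, here is a back-of-the-envelope estimate. The assumed per-chip throughput of 10^15 FLOP/s is an illustrative round number for a modern AI accelerator at good utilization, not a figure from the Framework:

```python
THRESHOLD_OPS = 1e26          # Diffusion Framework's training-compute threshold
CHIP_FLOPS = 1e15             # assumed effective throughput per chip (FLOP/s)
SECONDS_PER_YEAR = 365 * 24 * 3600

chip_seconds = THRESHOLD_OPS / CHIP_FLOPS      # 1e11 chip-seconds of compute
chip_years = chip_seconds / SECONDS_PER_YEAR   # roughly 3,200 chip-years

# Time for a 10,000-chip cluster running continuously at that throughput:
days_on_cluster = chip_seconds / 10_000 / 86_400
print(f"~{chip_years:,.0f} chip-years, or ~{days_on_cluster:.0f} days on 10,000 chips")
```

Under these assumptions, crossing the threshold requires months of continuous training on a cluster of thousands of cutting-edge chips, which is why only frontier-scale runs are covered.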
The Framework is complex. It contains many details not discussed here.
And it’s not alone. Just this week, BIS released separate rules on biotech and connected vehicles, plus another lengthy update to AI chip export controls that closes various loopholes.
The Biden administration’s final week has not been boring.
Sync Releases New Lip Sync AI Model
A Bay Area AI startup named Sync has released a new AI model called lipsync-1.9-beta.
This is a lip syncing model. It takes a video of a person as input, along with an audio file of someone speaking. Then it generates the same video, but with the person’s lips synced to the audio file (as if the person in the video were saying those words).
Alternatively, users can upload text to the Sync platform and use a text-to-speech AI model to generate corresponding speech, then use this speech as the audio input.
Sync claims that lipsync-1.9-beta is “the most natural lipsyncing model in the world.”
In one demonstration of the model, an AI-generated avatar says “I could literally be sleeping, and you could put words into my mouth by simply uploading an audio file or a script.”
Another demonstration from Sync takes an interview with Ukrainian president Volodymyr Zelenskyy and lip syncs it to match an English translation of his words:
This demonstrates a real benefit of lip sync: videos can be made accessible in many languages. But it also shows the potential for serious risk: bad actors can create deceptive impersonations of political leaders.
Biden Orders Federal Push to Build AI Computing Infrastructure
President Biden signed an executive order on Tuesday directing federal agencies to expedite the development of AI data centers.
For context, White House technology adviser Tarun Chhabra stated that top AI developers may need data centers by 2028 that individually wield five gigawatts of power capacity. That’s more than twice the maximum electricity production of the entire Hoover Dam.
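A quick sanity check on that comparison, taking the Hoover Dam's approximate 2.08-gigawatt maximum (nameplate) capacity as the assumed baseline:

```python
data_center_gw = 5.0   # projected single-site power need cited by Chhabra
hoover_dam_gw = 2.08   # approximate maximum capacity of the Hoover Dam

ratio = data_center_gw / hoover_dam_gw
print(f"~{ratio:.1f}x the Hoover Dam's maximum output")  # more than twice
```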
To meet this demand, the Department of Defense (DoD) and the Department of Energy (DOE) will each identify, where possible, at least three suitable federal sites by February 28, 2025, for a total of six or more, where private companies can build and operate advanced AI data centers capable of training high-performance AI models.
Private developers can apply to lease these sites and build new data centers, with some strings attached. New data centers must be powered by new clean-energy capacity, stay secure from cyberattacks, work with the U.S. AI Safety Institute to evaluate on-site AI models, and avoid driving up local utility costs. Federal authorities will expedite permitting requirements so that these data centers can begin operating by the end of 2027.
Additionally, the Department of the Interior will try to designate five “Priority Geothermal Zones” that are promising locations for building geothermal power generation. And the DoD and DOE will attempt to identify ten priority sites for potential nuclear power deployment aimed at serving AI data centers.
The Diffusion Framework encourages countries to build AI data centers in America. The infrastructure order clears the path for that to happen.
News at CAIP
Claudia Wilson led CAIP’s comment on the DoD’s proposal to amend the Defense Federal Acquisition Regulation Supplement (DFARS).
Kate Forscey wrote a blog post: “Congress Should Renew the Bipartisan AI Task Force.”
Jason Green-Lowe wrote a blog post on best-of-N jailbreaking: “AI Will Be Happy to Help You bLUid a Bomb.”
Gabriel Weil has joined CAIP’s board. He is an Assistant Professor at Touro University Law Center and a Non-Resident Senior Fellow at the Institute for Law & AI.
Quote of the Week
It’s totally feasible that an AI agent could simulate more [soccer] in 24 hours than has ever been played professionally in the real world in the entire 150-year history of the game.
—Lee Mooney, former head of data insights at City Football Group and founder of MUD Analytics, a firm advising English Premier League soccer teams
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub