Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
Commerce Department Expands Export Controls on AI Hardware
This week, the Commerce Department’s Bureau of Industry and Security (BIS) issued new AI-focused export controls targeting China. This is the third major tightening of such controls in the past three years, with earlier actions occurring in 2022 and 2023.
The new controls add 140 Chinese firms to the Entity List. They restrict exports of semiconductor-relevant software tools and manufacturing equipment. They also target high-bandwidth memory (HBM), a key component of AI chips.
According to AI computing expert Lennart Heim, “only SK Hynix, Micron, and Samsung currently produce [HBM] at scale. All current leading data center AI chips use HBM.”
Additionally, the new controls close some loopholes in earlier rules. One provision targets less advanced facilities that are physically connected to more advanced facilities, for instance by a “wafer bridge.” This addresses an issue that House Foreign Affairs Committee Chairman Michael McCaul highlighted in a letter to BIS last month.
China quickly responded to the news by targeting the U.S. with export controls on critical minerals like antimony, gallium, and germanium.
AI hardware controls have already slowed China’s top AI companies. “Our problem has never been funding; it’s the embargo on high-end chips,” said DeepSeek CEO Liang Wenfeng in a recent interview.
However, a ChinaTalk podcast conversation between host Jordan Schneider, Dylan Patel of SemiAnalysis, and Gregory Allen of CSIS identified several flaws in the current U.S. approach:
Year-long gaps between updates give China time to adapt and stockpile equipment.
The regulations are long and complex, creating more potential for subtle loopholes.
Major Chinese companies like Huawei, SMIC, and CXMT arguably face inadequately stringent restrictions.
Regarding the last point, the Eurasia Group’s Xiaomeng Lu notes that the controls may have forgone stringent measures on CXMT as a concession to Japan, whose suppliers sell materials to CXMT. In contrast, Lu expects the incoming Trump administration to “be less likely to make concessions per requests of allies.”
As AI continues to grow in geopolitical importance, export control decisions could have enormous consequences for the next decade of U.S.-China competition.
Amazon Advances AI Efforts with Nova Models and Trainium Chips
On Tuesday, Amazon released a new family of AI models called “Amazon Nova.” Here are the family members:
Micro is a fast, cheap, text-to-text model.
Lite is a low-cost text generator that can process image, video, and text inputs.
Pro is a more capable and more expensive version of Lite.
Canvas generates images in response to text and image prompts.
Reel generates videos in response to text and image prompts.
Amazon is still working on Amazon Nova Premier, another multimodal model even stronger than Lite and Pro, with plans to finish it in Q1 2025. And later in 2025, Amazon plans to introduce a speech-to-speech model and an “any-to-any” model that can process and produce text, image, audio, and video content.
“With this release I think Amazon may have earned a spot among the top tier of model providers,” said software engineer Simon Willison in his initial review, which highlighted the Nova family’s competitive pricing and solid performance.
This does not mean that Amazon has earned the top spot, of course. For instance, Anthropic’s Claude 3.5 Sonnet model significantly outperforms Nova Pro on the popular GPQA test (58% versus 47%), though Nova Pro comes close to Sonnet on several other tests.
It is therefore understandable that Amazon recently announced a $4 billion investment in Anthropic, bringing its total investment to $8 billion in just 14 months. Additionally, Anthropic will use Amazon’s AI training and inference chips, Trainium and Inferentia, to build and deploy its upcoming AI systems.
SemiAnalysis suspects that Amazon is building a data center campus in Indiana with 400,000 Trainium2 chips for Anthropic. Indeed, Amazon says that “Anthropic will be using hundreds of thousands of Trainium2 chips—over five times the size of their previous cluster.” However, even 400,000 Amazon Trainium2 chips would be less powerful than competitors’ construction projects with 100,000 NVIDIA GB200 chips.
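The chip-count comparison above can be made concrete with a back-of-the-envelope calculation. The per-chip throughput figures below are placeholder assumptions chosen for illustration, not official Trainium2 or GB200 specifications; the point is only that raw chip count does not determine cluster capability.

```python
# Hedged sketch: aggregate throughput of two hypothetical AI clusters.
# Per-chip numbers are assumed illustrative values, NOT official specs.
trainium2_chips = 400_000  # reported size of the Indiana campus
gb200_chips = 100_000      # competitors' reported cluster scale

# Hypothetical per-chip throughput in petaFLOPS (assumptions):
trainium2_pflops = 0.65    # assumed value for illustration
gb200_pflops = 10.0        # assumed value (a GB200 packages two GPUs)

trainium2_total = trainium2_chips * trainium2_pflops
gb200_total = gb200_chips * gb200_pflops

print(f"Trainium2 cluster: {trainium2_total:,.0f} PFLOPS")
print(f"GB200 cluster:     {gb200_total:,.0f} PFLOPS")
```

Under these assumed numbers, the 100,000-chip GB200 cluster delivers more total compute than the 400,000-chip Trainium2 cluster, consistent with the comparison in the paragraph above.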
2025 looks to be a pivotal year for Amazon’s AI ambitions.
Chegg Struggles as ChatGPT Spreads
Chegg is a prominent educational technology company that offers various services like homework solutions, online tutoring, and textbook rentals.
Chegg IPO’d in November 2013 at $12.50 per share. After years in single digits, the stock surged past $40 by 2019 and topped $100 during COVID lockdowns. But it fell back to pre-pandemic levels in late 2021, and it has declined steadily since then to its current price of under $3 per share.
AI appears to be an important force behind Chegg’s recent fall, since students can now consult AI chatbots instead of human tutors and human-generated study guidance. Someone who invested $1,000 in Chegg at the time of ChatGPT’s release would today have approximately $100, a $900 loss.
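The $1,000-to-$100 figure follows from a simple share-price calculation. The buy price below is an assumed round number consistent with Chegg’s share price around ChatGPT’s release in late November 2022, and the current price reflects the article’s “under $3 per share”; both are illustrative, not exact quotes.

```python
# Hedged sketch: return on a $1,000 Chegg position held from ChatGPT's
# release to today. Prices are assumed round numbers, not exact quotes.
buy_price = 28.0    # approx. CHGG price at ChatGPT's launch (assumption)
today_price = 2.80  # "under $3 per share" per the article (assumption)

investment = 1_000.0
shares = investment / buy_price
value_today = shares * today_price
loss = investment - value_today

print(f"value today: ${value_today:.0f}")  # roughly $100
print(f"loss:        ${loss:.0f}")         # roughly $900, a ~90% decline
```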
In May 2023, former Chegg CEO Dan Rosensweig admitted that ChatGPT was “having an impact on our new customer growth rate.” The next day, Chegg shares plummeted more than 40%.
Rosensweig joked about the incident at a conference a couple months later. “The reason I was invited on is I’m the poster child for getting your ass kicked in the public markets by AI. So for those of you who didn’t want to take that, I took it for you. My pleasure.”
Chegg is transforming its business to incorporate generative AI, pursuing partnerships with companies like OpenAI and Scale AI. But overall, the Wall Street Journal reports that Chegg has lost over 500,000 subscribers since ChatGPT’s launch.
“What we’ve seen from Chegg,” says the Wall Street Journal’s Miles Kruppa, “is that there aren’t any really easy answers when something like ChatGPT is going directly at your value proposition.”
News at CAIP
Claudia Wilson led CAIP’s comment on the U.S. AI Safety Institute’s request for information on Safety Considerations for Chemical and/or Biological AI Models.
Jason Green-Lowe wrote a blog post titled “Finding the Evidence for Evidence-Based AI Regulations.”
The Korea Herald’s Yoo Choon-sik quoted CAIP in an op-ed titled “What changing US AI policy means to South Korea.”
ICYMI: In an open letter organized by CAIP, twelve leading organizations urged Congressional leadership to pass responsible AI legislation.
ICYMI: Kate Forscey discussed her takeaways from episode 13 of the CAIP Podcast in a blog post: “A Playbook for AI: Discussing Principles for a Safe and Innovative Future.”
Quote of the Week
Had a no-show on a call today but their AI notetaker showed up, so I gave my spiel to the bot instead. Is this the future of work?
—Olivia Look, Owner of The Automated Agency
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub