Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy (CAIP).
This issue is different.
After two years of advocating for strong AI risk management, the Center for AI Policy is shuttering operations due to a lack of funding. There’s still a chance that new donations will revive CAIP, and CAIP may still support some small-scale projects. But sadly, our small and hard-working nonprofit is essentially closed for the foreseeable future.
Though we’re winding down, we’ve worked hard to finish important projects in our last normal week:
Claudia Wilson completed a careful mathematical estimate of the total cost for the public and private sectors to implement CAIP’s model legislation and effectively curb catastrophic risk from advanced AI.
Episode 17 of the CAIP Podcast will be released soon, featuring Peter Wildeford from the Institute for AI Policy and Strategy.
Joe Kwon published an article in AI Policy Bulletin advocating for AI companies to adopt shared, transparent evaluation frameworks.
ICYMI: He also wrote a detailed research report titled “AI Agents: Governing Autonomy in the Digital Age,” just in time for our panel discussion on AI agents in the Rayburn House Office Building.
Jason Green-Lowe articulated “An Activist View of AI Governance” in great detail, explaining why AI advocates deserve funding.
Like CAIP, this newsletter’s future is uncertain. Jakub Kraus, the primary author, hopes to continue publishing similar analyses for this audience in his personal capacity. Stay tuned.
To commemorate the history of AI Policy Weekly, this edition will be special. Rather than examining the three biggest stories of the past week, the sections below will reflect on three broader trends from this newsletter’s lifetime—trends that encapsulate core beliefs informing CAIP’s perspective on AI.
Powerful Capabilities: Sci-Fi Machines Are Already Here
Current AI models are powerful. To people living 75 years ago, many AI systems would look like machines from science fiction.
To illustrate, here’s a list of things that AI can already do. Most of these capabilities arrived in the last three years.
Understand what shape comes next in this visual sequence: triangle, square, pentagon.
Generate a photo of a luminous winter gingerbread castle.
Process a 44-minute silent movie in seconds and answer questions about specific plot points and minute details of the film.
Learn to translate English to Kalamang, a language spoken by fewer than 200 people, after one prompt.
Look at protein, DNA, and RNA sequences, along with chemical structures of small molecules, ions, and modified residues. Then predict detailed 3D models of how these components likely assemble into large biomolecular complexes.
Have a telephone conversation.
Help create a 3D mapping of every cell and every connection in a sliver of the human brain.
Solve International Math Olympiad (IMO) problems at the level of a silver medalist.
Autonomously conduct simple AI research, including idea generation, code execution, and paper writing.
Smooth out foreign accents of customer service agents.
Support Amazon’s software codebase updates, automating work that would take thousands of years for one developer to do manually.
Preserve Darth Vader’s voice for posterity.
Give a good answer to just about any basic economics question.
Transform multiple documents into an engaging audio podcast summary.
Beat experienced bank executives in a computer game simulating the role of a car company CEO.
Taste a drink to see if it’s still fresh.
Write approximately 90,000 words per minute.
Recommend relevant products to buy online based on a photograph—i.e., “snap to shop.”
Convert a text prompt into a catchy pop song.
Outperform top weather forecasters.
Create and simulate physics-based environments for training robots and self-driving cars.
Identify where photos were taken by analyzing what’s in them.
Book a restaurant reservation.
Study hundreds of online sources to write a research paper with citations in minutes.
Design a new video game combining Tetris and Bejeweled.
Talk with another AI model using efficient computer bleeps instead of English.
Play Pokémon Red.
Sprint at a fast pace of over ten miles per hour.
Complete high school homework assignments.
Consult with patients—through a synchronous chat interface—at a level comparable to board-certified primary care physicians.
Convert text prompts into videos with corresponding audio, such as musicals, commercials, standup, dialogue, and ASMR.
Do nearly real-time speech translation between English and Spanish during a live video call.
If policymakers are to effectively prepare America for AI’s future, they must start by understanding AI’s current capabilities and where they’re heading.

Hardware Empires: Trillions of Dollars Will Fuel Stronger AI
In 2012, researchers trained one of the world’s best AI models with only two computer chips.
Today, top AI models use tens of thousands of chips and stretch the limits of the power grid.
That’s because AI performance improves significantly by using larger quantities of computing power to process large volumes of data. Additionally, engineers can use greater computing power to run more experiments, enabling them to discover algorithms that train AI models more efficiently.
In short, high-performance chips—and the energy that fuels them—are a major driver of modern AI progress.
This simple fact has prompted major efforts to advance and acquire AI hardware, as this newsletter has documented:
Meta bought hundreds of thousands of cutting-edge AI chips from NVIDIA in 2024.
NVIDIA started shipping its new generation of “Blackwell” AI chips, which are more than twice as fast as the previous generation.
NVIDIA ran a contest focused on using AI to automate important stages of the chip design process.
Microsoft signed a 20-year power purchase agreement with Constellation Energy to revive Three Mile Island’s Unit 1 nuclear reactor.
xAI, Elon Musk’s AI company, built the world’s most powerful computing cluster in Memphis, Tennessee.
Lawrence Berkeley National Laboratory projected that data centers will consume “approximately 6.7 to 12% of total U.S. electricity by 2028.”
President Trump declared a national energy emergency, one week after former president Biden signed an executive order on AI infrastructure.
Taiwan Semiconductor Manufacturing Company (TSMC), the manufacturer of NVIDIA’s top AI chips, announced plans to spend $165 billion on new U.S. facilities.
French President Emmanuel Macron announced that over €109 billion ($114 billion) will go into France’s AI sector over the next few years.
The European Union announced a new “InvestAI” initiative to “mobilize €200 billion for investment in AI, including a new European fund of €20 billion for AI gigafactories.”
The state-owned Bank of China announced an “Action Plan” to support China’s AI industry with no less than ¥1 trillion ($137 billion) over the next five years.
Amazon, Google, Meta, and Microsoft shared plans to spend over $300 billion total on capital expenditures in 2025, with a focus on AI.
OpenAI announced the Stargate Project, which “intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States.”
Subsequently, OpenAI announced a 1-gigawatt Stargate cluster in the UAE, which seems poised to quintuple in size over the coming years.
NVIDIA reported $39.1 billion in data center revenue from its most recent quarter, up from $3.3 billion just over three years ago.
As powerful supercomputers cover the planet, AI’s sci-fi capabilities will only continue to advance.

Acts of Congress: Urgent Preparations Remain Unfinished
Since this newsletter started in December 2023, the U.S. Congress has passed some AI-related legislation:
The National Defense Authorization Act (NDAA) for FY 2024 and 2025 established some defense-relevant programs and projects, like the Cyber Threat Tabletop Exercise Program.
FY 2024 appropriations bills included funding for AI adoption at the Department of Defense, while reducing funding for the Cybersecurity and Infrastructure Security Agency (CISA).
The ADVANCE Act aimed to streamline regulations and make it easier to build nuclear power in the U.S.
The Building Chips in America Act exempted certain CHIPS Act projects from National Environmental Policy Act (NEPA) regulatory review.
The Federal Aviation Administration (FAA) Reauthorization Act directed the FAA to establish the Unmanned and Autonomous Flight Advisory Committee.
The Protecting Americans from Foreign Adversary Controlled Applications Act might cause a TikTok ban or divestment.
The Take It Down Act curbed the publication and spread of nonconsensual intimate imagery (NCII) online.
Additionally, the Senate’s bipartisan AI working group published a roadmap for AI policy, and the House’s bipartisan AI task force published a 273-page report with more recommendations for AI governance.
However, many bipartisan bills were introduced but never passed. Here are some that the Center for AI Policy publicly endorsed:
The AI Whistleblower Protection Act would protect workers involved in AI systems from retaliation when they report security vulnerabilities, legal violations, or unaddressed AI dangers.
The Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act would support the development of standards for third-party assessments of AI systems.
The CREATE AI Act would expand access to AI computing infrastructure.
The Secure AI Act would improve government monitoring of AI security incidents.
The Nucleic Acid Standards for Biosecurity Act would improve standards for screening synthetic DNA orders.
The Chip Security Act would require the Commerce Department to mandate location verification mechanisms on advanced AI chips before they can be exported.
The Strategy for Public Health Preparedness and Response to Artificial Intelligence Threats Act would direct the Secretary of Health and Human Services to address biological threats from the misuse of AI.
The Preserving American Dominance in AI Act would establish federal oversight over frontier AI models to mitigate chemical, biological, radiological, nuclear (CBRN) and cyber risks.
The Center for AI Policy also published its own model legislation, the Responsible Artificial Intelligence Act of 2025 (RAIA), which would defend America against severe AI threats through independent verification, hardware security requirements, and a dedicated federal oversight body.
Congress should prioritize passing these bipartisan bills without delay to ensure America is prepared for highly capable AI systems.

Quote of the Week
These newsletters have always ended with a quote of the week, and this edition will be no different.
As the Center for AI Policy’s last Substack word, we offer this line from AI Policy Weekly #4—or, as it was known then, “3-Shot Learning #4” (IYKYK). It underscores our longstanding belief that the most powerful AI companies need better incentives to prevent their own creations from causing catastrophic harm to the American public.
Incentives are superpowers; set them carefully.
—Sam Altman, in a personal blog post shortly after being reinstated as OpenAI CEO
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub