AI Policy Weekly #42
Microsoft + Three Mile Island, Google’s NotebookLM, and House SS&T’s second September markup
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
AI’s Nuclear Future: The Resurrection of Three Mile Island
Once an infamous symbol of nuclear risk, Three Mile Island is set for rebirth as Microsoft harnesses its power to support data centers and AI efforts, signaling a seismic shift in energy and technology paradigms.
The tech giant plans to utilize the entire 835-megawatt output from the plant’s Unit 1 reactor—not the one involved in the 1979 meltdown, but its safe and reliable neighbor. Constellation Energy is investing $1.6 billion to transform the dormant facility into the (renamed) Crane Clean Energy Center.
Constellation is targeting a 2028 launch, pending regulatory approval. If the revival succeeds, Microsoft could end up paying over $700 million annually for the reactor’s energy. Given that Microsoft signed a 20-year power purchase agreement with Constellation, that electricity bill will quickly add up.
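The $700 million figure is easy to sanity-check. As a rough sketch (the contract price is not public; the ~$100 per megawatt-hour figure below is an illustrative assumption, and real output would be trimmed by refueling and maintenance downtime):

```python
# Back-of-the-envelope check of the reported ~$700 million annual cost,
# assuming the reactor ran at full output all year and a hypothetical
# contract price of $100 per megawatt-hour.
CAPACITY_MW = 835          # Three Mile Island Unit 1 output
HOURS_PER_YEAR = 8760
PRICE_PER_MWH = 100.0      # assumed, not from the article

annual_mwh = CAPACITY_MW * HOURS_PER_YEAR          # 7,314,600 MWh
annual_cost = annual_mwh * PRICE_PER_MWH           # ~$731 million
print(f"~{annual_mwh/1e6:.1f} TWh/year, ~${annual_cost/1e6:.0f} million/year")
```

Under those assumptions the bill lands just above $700 million a year, in line with the reported estimate; a realistic capacity factor in the low-to-mid 90s would pull it slightly lower.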
Microsoft’s initiative mirrors Amazon’s recent $650 million acquisition of a nuclear-powered data center, indicating a broader trend of AI companies seeking reliable, low-carbon energy sources to fuel their expanding computational needs. For years, nuclear plants like Three Mile Island have struggled to compete with cheap natural gas, particularly in shale-rich regions like Pennsylvania. Now, AI’s voracious energy appetite is breathing new life into the nuclear sector.
The scale of these ambitions is staggering. Following a White House meeting with industry CEOs, OpenAI proposed building multiple data centers that would each consume 5 gigawatts. That energy total would be equivalent to the output of about six Three Mile Island reactors and could power millions of homes, all dedicated to a single AI computing cluster.
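Those comparisons check out arithmetically. A quick sketch (the per-home figure is an assumption: an average US household uses roughly 10,500 kWh per year, i.e. a continuous draw of about 1.2 kilowatts):

```python
# Rough check of the 5-gigawatt comparisons in the text.
DATA_CENTER_GW = 5.0
REACTOR_MW = 835           # Three Mile Island Unit 1
AVG_HOME_KW = 1.2          # assumed average US household draw

reactors = DATA_CENTER_GW * 1000 / REACTOR_MW      # ~6 reactors
homes = DATA_CENTER_GW * 1e6 / AVG_HOME_KW         # ~4.2 million homes
print(f"~{reactors:.0f} reactors, ~{homes/1e6:.1f} million homes")
```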
Recognizing nuclear energy’s strategic importance, policymakers have responded. The recently enacted ADVANCE Act promises to streamline regulations and offer financial incentives to the nuclear industry.
Interestingly, the relationship between AI and nuclear energy is proving symbiotic. Microsoft is reportedly developing an AI to navigate complex nuclear regulations, potentially expediting approval processes.
The resurrection of Three Mile Island, once a symbol of nuclear energy’s risks, now stands as a beacon of AI’s transformative impact on US energy markets. In this evolving landscape, the future of AI power may well be atomic.
Google’s NotebookLM: AI-Generated Radio at Your Fingertips
Google’s NotebookLM, featuring the new Audio Overview feature, heralds a new era in AI-driven content creation. With a single click, this tool transforms documents into engaging audio discussions between AI hosts, seamlessly summarizing complex information and drawing insightful connections.
Listen to this sample of AI hosts discussing the Center for AI Policy’s new AI explainability report:
This innovation could profoundly reshape content creation and consumption. Traditional media makers face potential disruption as AI-generated audio becomes widely accessible. In education and research, information dissemination and comprehension could accelerate significantly. Further, the media landscape may soon be awash with AI-generated content, blurring the lines between human and machine-created material.
The pace of change could intensify further. Google’s Illuminate tool, similar to NotebookLM, converts research papers into digestible audio dialogues, currently focusing on computer science. As this tool improves, AI researchers could use it to quickly digest new research results, propelling further AI advancement in a reinforcing loop.
With the advent of AI-powered audio summaries, the way we process information is evolving rapidly. Airwaves are being reshaped by artificial intellects, one AI-generated conversation at a time.
House Science Committee Maintains AI Momentum with Second Markup
The US House Committee on Science, Space, and Technology continues to demonstrate bipartisan leadership in AI policy, holding its second AI-relevant markup in just one month. On Wednesday, the committee approved four bills, with two squarely focused on AI.
First, the AI Incident Reporting and Security Enhancement Act directs NIST to update the National Vulnerability Database for AI systems and study the need for voluntary reporting of AI security and safety incidents, similar to the Senate’s Secure AI Act. The bill calls for establishing common definitions for AI vulnerabilities and developing processes for managing them.
Second, the Department of Energy AI Act establishes a cross-cutting R&D program at the DOE to advance AI tools and capabilities. The bill authorizes $300 million annually from 2025 to 2030, directing research in areas such as large-scale simulations, applied mathematics, and the development of trustworthy AI systems.
While many forms of AI innovation are important, the Center for AI Policy (CAIP) would like to see more explicit prioritization of AI safety innovation within this budget, such as the AI risk and evaluation program in the Senate’s version of the bill. Another benefit of the Senate’s version is that it would formally establish the Office of Critical and Emerging Technologies at the DOE, with a mission that includes providing for “rapid response to emerging threats and technological surprise.”
CAIP calls on the Senate and House to align their bills on AI security and AI at the DOE and send both bills to the President’s desk for signature.
News at CAIP
Mark Reddish led CAIP’s new research report on AI explainability: “Decoding AI Decision-Making: New Insights and Policy Approaches.”
Jason Green-Lowe wrote a blog post responding to Gavin Newsom’s latest remarks on SB 1047: “There’s No Middle Ground for Gov. Newsom on AI Safety.”
Claudia Wilson wrote a blog post reviewing funding for global AI safety institutes: “The US Has Committed to Spend Far Less Than Peers on AI Safety.”
Jason Green-Lowe traveled to New York City to hear what local AI professionals have to say about AI’s risks and rewards. He wrote a blog post reflecting on the trip: “Reflections on AI in the Big Apple.”
ICYMI: A video recording is now available for our recent panel discussion titled “Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies.”
Quote of the Week
We’ll see more technological change, I argue, in the next 2 to 10 years than we have in the last 50 years.
Artificial intelligence is going to change our ways of life, our ways of work, and our ways of war. It could usher in scientific progress at a pace never seen before. And much of it could make our lives better.
But AI also brings profound risks, from deepfakes to disinformation to novel pathogens to bioweapons.
We have worked at home and abroad to define the new norms and standards. [...]
But let’s be honest. This is just the tip of the iceberg of what we need to do to manage this new technology.
Nothing is certain about how AI will evolve or how it will be deployed. No one knows all the answers.
—Joe Biden, President of the United States, discussing AI in his final address to the United Nations General Assembly as president
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub