AI Policy Weekly #30
Google’s electricity consumption, Runway’s text-to-video model, and the Department of Homeland Security’s report on AI’s CBRN threats
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Google Elevates Energy Consumption to Fuel AI
In 2020, Google proudly announced goals to use carbon-free energy “everywhere, at all times” in its business by 2030.
But four years later, Google seems poised to fall short of that voluntary commitment. Its 2023 greenhouse gas emissions were 13% higher than in 2022, and 48% higher than in 2019.
“This result was primarily due to increases in data center energy consumption and supply chain emissions,” explained Google in its 2024 Environmental Report. “As we further integrate AI into our products, reducing emissions may be challenging.”
Indeed, AI is driving a significant increase in electricity consumption throughout the tech industry. A recent Goldman Sachs report concluded that due to AI, “global data center power demand is poised to more than double by 2030 after being flattish in 2015–20.”
Data centers are large warehouses of computer hardware. In the case of AI, they contain tens of thousands of specialized computer chips such as NVIDIA’s graphics processing units (GPUs). These AI chips are necessary for both training AI models and running them.
For example, SemiAnalysis reports that OpenAI’s GPT-4 model “required 20,000 A100s for 90–100 days.” The A100 is an older generation of NVIDIA’s AI-specialized chips.
Today, NVIDIA’s premier AI chip is the H100 GPU. For comparison, SemiAnalysis estimates that 100,000 H100s could train GPT-4 in just three days.
The power draw for such a training run would be enormous. Operating all 100,000 H100s simultaneously would require approximately 150,000 kilowatts, or 150 megawatts. (A kilowatt is a unit of power: one thousand joules of energy per second.)
That figure follows from the fact that H100 data centers require approximately 1.5 kilowatts per GPU. Roughly half of this electricity (0.7 kilowatts) goes to the GPU itself, while the rest powers other computing hardware such as networking, storage, and non-GPU chips. (Note that data centers separately expend extra electricity on facility operations equipment like lighting, cooling, and power delivery.)
For reference, 1.5 kilowatts is the power draw of a standard electric tea kettle, enough to charge over a dozen MacBooks simultaneously.
Thus, one can loosely visualize the power draw of AI training as tens of thousands of tea kettles, working furiously in tandem to boil water for three months straight.
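As a rough sanity check, the arithmetic above can be reproduced in a few lines. This is a back-of-the-envelope sketch using only the figures quoted in this section; the per-GPU power and training duration are estimates, not measurements.

```python
# Assumptions from the text: 100,000 H100 GPUs, ~1.5 kW of data center
# IT power per GPU (~0.7 kW for the GPU itself), ~3 days of training.
NUM_GPUS = 100_000
KW_PER_GPU = 1.5       # total IT power per GPU, incl. networking and storage
TRAINING_DAYS = 3

total_power_kw = NUM_GPUS * KW_PER_GPU       # 150,000 kW
total_power_mw = total_power_kw / 1_000      # 150 MW

# Energy consumed over the whole run, in megawatt-hours
energy_mwh = total_power_mw * 24 * TRAINING_DAYS

print(f"Total power draw: {total_power_mw:,.0f} MW")
print(f"Energy over {TRAINING_DAYS} days: {energy_mwh:,.0f} MWh")
```

Under these assumptions, the run draws 150 MW continuously and consumes roughly 10,800 MWh of energy, before accounting for cooling and other facility overhead.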
Energy requirements can be even higher when deploying an AI system, since millions of users may query it simultaneously.
Looking ahead, AI companies’ hunger for electricity will only intensify as they build ever-larger AI models to improve AI capabilities.
For example, Microsoft and OpenAI are reportedly planning a $115 billion data center construction project, including a “Stargate” supercomputer that might use over 100 times the electricity of today’s largest AI training runs.
Indeed, some analysts expect AI’s energy demand to grow significantly faster than Goldman Sachs’ estimate of a doubling by 2030. For example, Arm CEO Rene Haas recently claimed that demand could quintuple by that time.
AI-driven electricity demand is just one of AI’s numerous societal impacts, underscoring the need for Congress to pay careful attention to this technology and mitigate its risks.
Runway Releases Gen-3 Alpha Text to Video
In February, the world was stunned by the capabilities of OpenAI’s new Sora system, which could generate highly realistic videos from text prompts. No other text-to-video system came close to Sora’s capability at the time.
Several months later, Runway, a generative AI startup, is hot on OpenAI’s tail.
The company recently unveiled its newest image and video creation system, Gen-3 Alpha. According to Runway, the model represents “a major improvement in fidelity, consistency, and motion” over its predecessor.
On Monday, Runway made its Gen-3 Alpha Text to Video system available to everyone willing to pay a subscription fee.
With remarkable video generation capabilities, AI systems like Sora and Gen-3 Alpha demonstrate that AI progress is continuing at a rapid pace.
Department of Homeland Security Shares Findings on AI’s CBRN Risks
The Department of Homeland Security (DHS) recently published its full report on AI’s intersection with chemical, biological, radiological, and nuclear (CBRN) threats.
The report was prepared in response to the White House’s Executive Order from last fall.
It primarily focuses on AI-enabled chemical and biological threats, with less emphasis on radiological and nuclear aspects.
DHS finds that “as AI technologies advance, the lower barriers to entry for all actors across the sophistication spectrum may create novel risks to the homeland from malign actors’ enhanced ability to conceptualize and conduct CBRN attacks.”
Furthermore, “known limitations in existing US biological and chemical security regulations and enforcement, when combined with increased use of AI tools,” could increase risk.
DHS outlines many steps the government can take to mitigate these risks.
For example, DHS recommends reinforcing “policies, norms, and codes of conduct” based on existing voluntary commitments that leading AI companies have made to the White House.
As AI models grow more intertwined with CBRN threats, the Center for AI Policy believes it is increasingly necessary to go beyond voluntary commitments and begin establishing regulatory requirements for cutting-edge AI companies.
News at CAIP
Episode 9 of the CAIP Podcast features Kelsey Piper, Senior Writer at Vox. Jakub and Kelsey discuss OpenAI’s recent incident involving exit documents, the extent to which OpenAI’s actions were unreasonable, and the broader significance of this story.
We issued a statement on the Supreme Court’s decision to overrule the Chevron doctrine. Given rapid AI progress, Congress needs to expressly delegate authority to a technically literate AI safety regulator to adjust US AI policies.
Jason Green-Lowe commented on the lack of AI-related questions in last week’s presidential debate, calling the omission a significant oversight that deprived voters of crucial information on this pressing issue.
A video recording and text transcript are now available from our recent panel discussion on AI and privacy.
On Wednesday, July 24th from 5:30–7:30 p.m. ET, we are hosting an AI policy happy hour at Sonoma Restaurant & Wine Bar. Anyone working on AI policy or related topics is welcome to attend. RSVP here.
On Monday, July 29th at 3 p.m. ET, the Center for AI Policy is hosting a webinar titled “Autonomous Weapons and Human Control: Shaping AI Policy for a Secure Future.” The event will feature a presentation and audience Q&A with Professor Stuart Russell, a leading researcher in artificial intelligence and the author (with Peter Norvig) of the standard text in the field. RSVP here.
Quote of the Week
The single most important thing today, if I were back in the frontline of politics, would be to engage with this revolution and to understand it.
—Sir Tony Blair, Prime Minister of the United Kingdom from 1997 to 2007, commenting on the potential for technological change to rewrite the principles of successful political leadership
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub