AI Policy Weekly #50
Musk’s 150 megawatts, Perplexity shopping, DHS framework for AI in critical infrastructure
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
TVA Approves Massive Electricity Deal for Elon Musk’s Memphis Datacenter
Colossus. A Gigafactory of Compute. The Memphis Supercluster.
These are some of the names being used to describe xAI’s new datacenter in Memphis, Tennessee.
xAI is Elon Musk’s AI startup, founded in July 2023. It has raised $7 billion since then, and the Wall Street Journal reports that it just raised $5 billion more. That brings the total to $12 billion, on par with competitors like OpenAI and Anthropic.
That money is crucial for buying AI chips. Musk reports that Colossus already has 100,000 NVIDIA H100 GPUs, which alone could easily cost over $2 billion, before accounting for the additional costs of constructing and operating a datacenter.
One operating expense is electricity. SemiAnalysis reports that a 100k H100 cluster requires over 150 megawatts of electricity to power its critical IT equipment, costing over $100 million per year. That’s enough to power over 100,000 U.S. households (setting aside technicalities like temporary surges in power demand).
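These figures are easy to sanity-check. The back-of-the-envelope sketch below assumes a per-H100 price of $25,000, an industrial electricity rate of roughly $0.08 per kilowatt-hour, and average U.S. household consumption of about 10,800 kWh per year; all three are illustrative assumptions, not figures from Musk or SemiAnalysis.

```python
# Back-of-the-envelope check on the Colossus figures above.
# The per-GPU price, electricity rate, and household usage
# are illustrative assumptions, not sourced numbers.
GPU_COUNT = 100_000
GPU_UNIT_COST = 25_000            # assumed price per H100, in dollars
POWER_MW = 150                    # critical IT power for a 100k H100 cluster
PRICE_PER_KWH = 0.08              # assumed industrial electricity rate, $/kWh
HOUSEHOLD_KWH_PER_YEAR = 10_800   # approximate average U.S. household usage

fleet_cost = GPU_COUNT * GPU_UNIT_COST            # $2.5 billion in chips
annual_kwh = POWER_MW * 1_000 * 24 * 365          # ~1.31 billion kWh
annual_cost = annual_kwh * PRICE_PER_KWH          # ~$105 million per year
households = annual_kwh / HOUSEHOLD_KWH_PER_YEAR  # ~122,000 households

print(f"Chip spend:           ${fleet_cost / 1e9:.1f} billion")
print(f"Annual energy:        {annual_kwh / 1e9:.2f} billion kWh")
print(f"Annual power cost:    ${annual_cost / 1e6:.0f} million")
print(f"Household equivalent: {households:,.0f}")
```

Under those assumptions, the arithmetic lands comfortably above the $2 billion, $100 million, and 100,000-household thresholds cited above.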
In line with that estimate, the Tennessee Valley Authority (TVA) met earlier this month and approved an agreement to supply 150 megawatts to Colossus. However, xAI still needs to build an electrical substation before it can actually wield that power. At the moment, xAI likely has only 50 megawatts available.
xAI moved quickly in constructing the datacenter. Musk claims the project was completed in 122 days, whereas typical datacenter buildouts can take over a year. The speed impressed even NVIDIA CEO Jensen Huang, who supplied chips and networking for Colossus.
The Information reports that top AI companies are anxious about xAI’s rapid progress. One anonymous competitor went so far as to fly a spy plane over Musk’s Memphis datacenter, stealthily taking pictures to “gain whatever insights they could.”
Additionally, after seeing a social media post about Colossus, OpenAI CEO Sam Altman allegedly “got into an argument with infrastructure executives at Microsoft, telling them he was concerned xAI was moving faster than Microsoft.” Altman, of course, is the target of an ongoing lawsuit from Musk, so the stakes are somewhat personal.
For top AI companies, it is reasonable to fret over the size of competitors’ datacenters, because computational resources are one of the core drivers of AI progress.
xAI’s top AI system, Grok-2, was trained with “roughly 15,000” H100 GPUs, according to Musk. The Memphis datacenter will be used to train Grok-3, which may be ready as soon as December.
“Grok-3 should be the most powerful AI in the world,” said Musk in July.
In preparation for Grok-3, xAI is hiring AI safety engineers. This illustrates a general principle: as AI systems grow more capable, AI safety becomes indispensable.
Perplexity AI Can Now Spend Money Online
Perplexity AI is an AI startup that offers an AI-powered search engine of the same name. The Perplexity system combines the functionality of traditional search engines with advanced language models, competing with products like Google’s AI Overviews and OpenAI’s ChatGPT search.
On Monday, Perplexity unveiled new shopping features. For example, users can “snap to shop” by taking a photo of an item they want and receiving relevant product recommendations.
Further, Perplexity introduced one-click checkout. Premium users in the U.S. can now “check out seamlessly right on our website or app for select products from select merchants.”
Under the hood, this feature may be using new tools that the financial services company Stripe recently released for “adding payments to your LLM agentic workflows.”
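For a sense of what such a workflow might look like, here is a minimal sketch of a payment action exposed to an AI agent as a callable tool. It is built on Stripe’s standard Python SDK rather than the new agent tools themselves, whose exact interface we won’t guess at; the function name, spending cap, and overall design are hypothetical illustrations, not Stripe’s or Perplexity’s implementation.

```python
# Illustrative sketch: exposing a purchase action to an LLM agent as a tool.
# Uses Stripe's standard Python SDK (pip install stripe); the spending cap
# and function name are hypothetical design choices, not Stripe's agent API.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

MAX_AGENT_SPEND_CENTS = 50_00   # guardrail: cap each agent purchase at $50


def agent_checkout(amount_cents: int, currency: str, payment_method: str):
    """Tool an agent can invoke to complete a one-click purchase."""
    if amount_cents > MAX_AGENT_SPEND_CENTS:
        raise ValueError("Purchase exceeds the agent's spending limit.")
    # Create and confirm a PaymentIntent in a single call.
    return stripe.PaymentIntent.create(
        amount=amount_cents,
        currency=currency,
        payment_method=payment_method,
        confirm=True,
        automatic_payment_methods={"enabled": True, "allow_redirects": "never"},
    )
```

The hard-coded spending cap gestures at a key design question for agentic payments: how to constrain what an AI system can buy without a human in the loop.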
On its own, the Perplexity announcement is a small step forward. But it heralds a future where AI systems will not simply answer questions, but also act in the real world with access to computers, websites, and cash.
DHS Shares Framework for AI in Critical Infrastructure
Last week, the Department of Homeland Security (DHS) released its “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure.” This complements AI risk management guidance in DHS’s Safety and Security Guidelines for Critical Infrastructure Owners and Operators.
The framework is the result of “considerable dialogue and debate” among DHS’s AI Safety and Security Board, which includes CEOs of major AI companies like OpenAI, Anthropic, and Google.
“As we move into the AI era, our foremost responsibility is ensuring these technologies are safe and beneficial,” said NVIDIA CEO Jensen Huang, a member of the Board.
The framework offers guidance for five groups: cloud providers, AI developers, infrastructure operators, civil society, and government entities. Each group should take actions to support five objectives: securing environments, designing responsible systems, managing data, ensuring safe deployment, and monitoring impacts.
For example, the framework says “AI developers should create clear reporting and evaluation processes to respond quickly to incidents, including insider threats.”
Importantly, this guidance is entirely voluntary. Given the importance of protecting critical infrastructure, the Center for AI Policy believes that AI developers should be not just nudged, but required to take basic safety and security measures.
News at CAIP
Marta Sikorski Martin shared takeaways from our recent Hill briefing, “AI and the Future of Music: Innovation, IP Rights, and Workforce Adaptation.”
A video recording is available on the CAIP YouTube channel.
ICYMI: Tristan Williams published an accompanying research report.
Mark Reddish wrote a blog post about the U.S. AI Safety Institute’s recent report on pre-deployment evaluations of Anthropic’s Claude 3.5 Sonnet.
Claudia Wilson wrote a blog post about military use of AI: “Biden and Xi’s Statement on AI and Nuclear Is Just the Tip of the Iceberg.”
Jason Green-Lowe wrote a blog post about claims of an AI scaling slowdown: “Slower Scaling Gives Us Barely Enough Time To Invent Safe AI.”
ICYMI: Claudia Wilson published an article on U.S.-China competition in Tech Policy Press: “The U.S. Can Win Without Compromising AI Safety.”
Quote of the Week
Having spent a lifetime trying to speak what I believe to be the truth, I am profoundly disturbed to find that these days my identity is being stolen by others, and greatly object to them using it to say whatever they wish.
—Sir David Attenborough, the 98-year-old naturalist who has been documenting Earth’s wildlife since the 1950s, reacting to an AI-generated imitation of his voice
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub