AI Policy Weekly #39
Billion-dollar datacenter projects, Magic’s massive context window, and US AISI’s agreements with OpenAI and Anthropic
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Billions of Dollars Continue Flowing Into Data Centers and AI
About ninety years ago, a consortium of construction companies joined forces to build the Hoover Dam on the Colorado River, between the states of Arizona and Nevada.
“At the time of its construction, Hoover Dam was mankind’s most massive masonry structure since the Great Pyramids,” says Bechtel, a company that participated in the consortium.
The dam cost “nearly $49 million—a staggering amount in the early 1930s.” In today’s dollars, $49 million would be worth approximately $1 billion.
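As a rough check on that conversion, here is a minimal sketch; the ~21x multiplier approximates cumulative US consumer price inflation since the early 1930s and is an illustrative assumption, not a figure from the article:

```python
# Rough inflation adjustment for the Hoover Dam's construction cost.
# The ~21x multiplier approximates cumulative US CPI growth from the
# early 1930s to today (an assumption for illustration, not a precise figure).
COST_1930S_USD = 49_000_000
CPI_MULTIPLIER = 21  # approximate early-1930s -> present price-level ratio (assumption)

cost_today = COST_1930S_USD * CPI_MULTIPLIER
print(f"~${cost_today / 1e9:.1f} billion in today's dollars")  # ~$1.0 billion
```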
This is a paltry sum compared to the lofty investments that companies are announcing every week in AI datacenters, large warehouses of computers for building and deploying AI models. To illustrate, let’s review some of the latest updates.
First, Meta announced an $800 million datacenter project in South Carolina. This is Meta’s 22nd datacenter in the US and 26th in the world, and it will be “optimized for AI workloads.”
Not to be outdone, Amazon is reportedly contemplating an additional $2 billion investment into datacenters in Telangana, India. According to a local official, the main reason for this expansion is AI.
Next, atNorth, an Icelandic company that operates seven datacenters in the Nordics, announced plans to build a “mega” datacenter in Ølgod, Denmark. The site will draw 250 megawatts to power “companies that run AI and High-Performance Computing workloads.”
Back in India, another company is planning to build “gigawatt-scale AI-ready data centers in Jamnagar.” One gigawatt would be enough to power hundreds of thousands of high-performance AI chips at once.
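A back-of-the-envelope calculation shows where that estimate comes from. This is a sketch under stated assumptions: the ~700 W figure is roughly an NVIDIA H100’s rated draw, and the facility overhead factor for cooling and networking is an assumed round number:

```python
# Estimate how many high-performance AI chips one gigawatt can power.
# Assumes ~700 W per chip (roughly an NVIDIA H100) plus a 1.5x facility
# overhead for cooling, networking, and other datacenter loads (assumptions).
SITE_POWER_WATTS = 1e9       # one gigawatt
CHIP_POWER_WATTS = 700       # per-chip draw (assumption)
FACILITY_OVERHEAD = 1.5      # facility power overhead factor (assumption)

chips = SITE_POWER_WATTS / (CHIP_POWER_WATTS * FACILITY_OVERHEAD)
print(f"~{chips:,.0f} chips")  # ~950,000, i.e. hundreds of thousands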
That level of computing power is not theoretical. On Monday, Elon Musk reported that his AI company, xAI, began operating a datacenter in Memphis containing 100,000 top-of-the-line chips from NVIDIA.
It’s unlikely that Musk’s datacenter currently draws enough electricity from utility companies to operate all those chips simultaneously. In the meantime, xAI appears to be sourcing energy from nearly twenty natural gas turbines. Local environmental groups allege that Musk failed to obtain a permit for this activity, which they criticize for polluting the Memphis air.
Setting electricity aside, the 100,000 AI chips in this datacenter could easily cost several billion dollars in total.
Other AI companies are also stockpiling expensive AI chips. Analysts at SemiAnalysis report that “the leading frontier AI model training clusters have scaled to 100,000 GPUs this year, with 300,000+ GPU clusters in the works for 2025.” And earlier this year, Microsoft and OpenAI reportedly began plotting a long-term project to construct a $100 billion supercomputer with millions of AI chips.
As an intermediate step towards its $100 billion vision, Microsoft is building a smaller datacenter in Mount Pleasant, Wisconsin. The tech giant plans to spend over $3 billion on this project by the end of 2026, and it just bought 28.6 more acres of land in the area.
Microsoft is not the only business willing to spend $100 billion. On Tuesday, The Information reported that two separate companies have approached North Dakota officials about $125 billion supercomputing projects.
Many AI supercomputers rely on NVIDIA hardware, and it shows. In NVIDIA’s most recent financial quarter, the company earned over $26 billion in datacenter revenue. To be clear, that is twenty-six billion dollars in three months, continuing a rapid rise from less than three years ago, when NVIDIA was earning about $3 billion per quarter from datacenters.
Funding for AI software is nothing to scoff at either. After launching just three months ago, the AI startup Safe Superintelligence Inc. (SSI) recently raised $1 billion in funding, about the same as the cost of the entire Hoover Dam. SSI currently employs ten people.
Meanwhile, OpenAI is talking with investors about raising billions of dollars in a new funding round. According to Axios, over 200 million people are now using the company’s ChatGPT service every week, up from 100 million last November.
Thus, as Congress contemplates rules of the road for AI, the clock is ticking. In a few years, new datacenters will be powering new AI systems that are even more capable, prevalent, and influential than they are today. Now is the time to begin preparing for this future with policies like the Center for AI Policy’s 2024 Action Plan.
Magic Decuples AI’s Contextual Capacity
Magic, a cutting-edge AI company with just 23 employees, has achieved a remarkable leap in AI capabilities with their LTM-2-mini model.
Their system boasts a 100-million-token context window, equivalent to processing about 750 novels or millions of lines of code simultaneously. To grasp this scale, imagine a chatbot that can hold and analyze the entire works of Shakespeare and the complete Harry Potter series at once, with room for hundreds more books.
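The 750-novel figure follows from simple token arithmetic. The sketch below uses common rules of thumb for novel length and tokens per word; these are assumptions, not Magic’s own numbers:

```python
# Translate a 100-million-token context window into "novels of text."
# Assumes ~100,000 words per novel and ~1.33 tokens per English word,
# both standard rules of thumb (assumptions, not figures from Magic).
CONTEXT_TOKENS = 100_000_000
WORDS_PER_NOVEL = 100_000    # assumption
TOKENS_PER_WORD = 1.33       # assumption

novels = CONTEXT_TOKENS / (WORDS_PER_NOVEL * TOKENS_PER_WORD)
print(f"~{novels:.0f} novels")  # ~750
```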
For reference, Google’s Gemini 1.5 Pro model, released earlier this year, processes over 1 million tokens at once—a major upgrade from 2023’s top AI models with context windows about a tenth as large. Google also reported success testing models with up to 10 million tokens.
Thus, Magic’s breakthrough represents at least a tenfold improvement over the cutting edge: ten times larger than Google’s experimental tests, and a hundred times larger than the best publicly available models.
However, Magic’s CEO tempered expectations, noting that the current LTM-2-mini system is “far away from frontier-scale compute budget and thus low IQ. Just wanted to share the update on research progress while we scale.”
To continue growing, Magic has secured $465 million in funding, including a recent $320 million investment round from funders like Eric Schmidt, Sequoia Capital, and Jane Street. The company is now training a larger LTM-2 model on a new supercomputer containing 8,000 NVIDIA H100 GPUs.
Magic plans to build two state-of-the-art supercomputers in partnership with Google and NVIDIA. According to Google, “these computers will be able to achieve 160 exaflops, a measure of computing performance so large, it’s roughly equal to 160 billion people each holding one billion calculators and running a computation at the same exact moment.”
To contextualize this power, 160 exaflops translates to over 10^20 mathematical operations per second. Over three months, that would be enough to train an AI model with well over 10^26 total operations, surpassing the threshold for reporting requirements set by the 2023 AI Executive Order.
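The threshold comparison checks out with a quick calculation. This is a sketch assuming sustained utilization at the quoted peak rate, which real training runs won’t fully achieve:

```python
# Check whether 160 exaflops sustained for three months exceeds the
# 10^26-operation reporting threshold in the 2023 AI Executive Order.
# Assumes full utilization of peak performance (an optimistic assumption).
OPS_PER_SECOND = 160e18                # 160 exaflops = 1.6e20 ops/sec
SECONDS_IN_3_MONTHS = 90 * 24 * 3600   # ~7.8 million seconds

total_ops = OPS_PER_SECOND * SECONDS_IN_3_MONTHS
print(f"{total_ops:.2e} total operations")  # ~1.24e27, well over 1e26
```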
Despite the excitement, Magic ends their announcement with a sobering note that “sufficiently advanced AI should be treated with the same sensitivity as the nuclear industry.” This stance underscores AI’s dual nature: immense potential coupled with the need for careful management.
Anthropic, OpenAI Pave Way for Safety Collaboration with US AI Safety Institute
The US AI Safety Institute (AISI) recently established formal agreements with AI industry leaders OpenAI and Anthropic. These partnerships grant AISI access to cutting-edge AI models before and after public release, allowing AISI to provide safety testing and feedback.
While OpenAI and Anthropic have embraced this proactive approach, other tech giants appear reluctant to collaborate with AI safety institutes in the US and UK.
“Everybody in Silicon Valley is very keen to see whether the US and UK institutes work out a way of working together before we work out how to work with them,” said Meta’s president of global affairs, Nick Clegg, in April.
It is unclear what further clarity Meta is waiting for, considering that the US and UK institutes already signed a memorandum of understanding on April 1st, formalizing their collaboration.
In contrast, Anthropic and OpenAI evidently view the US AISI as sufficiently developed for meaningful partnership. Anthropic already demonstrated the feasibility of such collaborations by subjecting its Claude 3.5 Sonnet model to pre-deployment testing with the UK AISI.
Notably, Anthropic and OpenAI are the first AI companies to establish public, formal agreements allowing the US AISI to support safety assessments of their AI models.
In the coming months, it will become apparent which other companies choose to follow this precedent, and which do not.
Job Openings
The Center for AI Policy (CAIP) is hiring for three new roles. We’re looking for:
an entry-level Policy Analyst with demonstrated interest in thinking and writing about the alignment problem in artificial intelligence,
a passionate and effective National Field Coordinator who can build grassroots political support for AI safety legislation, and
a Director of Development who can lay the groundwork for financial sustainability for the organization in years to come.
News at CAIP
Together with Public Citizen, the Institute for Strategic Dialogue, the Incarcerated Nation Network, and Accountable Tech, we organized and submitted a letter to California Governor Gavin Newsom urging him to sign SB 1047, a major AI safety bill.
Claudia Wilson and Brian Waldrip traveled together to South Dakota to talk with political representatives, academics, and everyday South Dakotans about how they view and approach AI. Read this blog post to understand key takeaways from the trip.
Claudia Wilson wrote a blog post on AI-powered price fixing titled “AI’s Shenanigans in Market Economics.”
Jason Green-Lowe wrote a blog post on Section 230 and the Anderson v. TikTok case titled “TikTok Lawsuit Highlights the Growing Power of AI.”
We endorsed the Nucleic Acid Standards in Biosecurity Act, a bipartisan bill from Representatives Yadira Caraveo (D-CO) and Rich McCormick (R-GA) that would support safety practices in synthetic biology. These practices are critical given AI’s potential to amplify biological threats.
Quote of the Week
Anyone can be a victim.
—Yoon Suk Yeol, President of South Korea, discussing the ongoing harms of AI-generated sexualized content
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub