Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
AI Software Businesses Raise Billions of Dollars
In addition to enormous investments in AI hardware and hardware supply chains, AI software startups are receiving considerable funding. Consider some of the announcements from the past ten weeks:
$135 million to EvenUp, which builds AI tools for personal injury lawyers. This is “one of the largest funding rounds in legal AI history.”
$491 million to KoBold Metals, which uses AI to find lithium and copper deposits. In Zambia, KoBold has already made what is “likely the largest copper discovery in more than a decade.”
$500 million to Poolside, which builds AI for software engineering. Poolside wants to progress towards artificial general intelligence (AGI), a theoretical form of AI that would be capable of automating most non-physical jobs.
$230 million to World Labs, which advances AI systems that perceive, generate, and interact with the 3D world. The goal is to endow machines with “spatial intelligence,” a kind of human intelligence that allows us to “understand and interact with the world around us.”
$260 million to Glean, which offers an AI platform that “connects and understands all your enterprise data, to generate trusted answers and automate work grounded in company knowledge.” Glean now has over $550 million at its disposal.
$1 billion to Safe Superintelligence Inc. (SSI), which has “one goal and one product: a safe superintelligence,” a theoretical form of AI that would vastly outperform AGI. The company was co-founded four months ago by Ilya Sutskever, one of the best AI researchers in the world.
$100 million to Sakana AI, which does AI R&D on “new kinds of foundation models [...] based on nature-inspired intelligence.” In August, Sakana introduced an “AI Scientist” prototype that can autonomously conduct simplistic kinds of AI research.
$150 million to Codeium, which offers an AI-powered platform for computer programming. Codeium faces competition from Microsoft and OpenAI’s GitHub Copilot.
$320 million to Magic, which aims to “automate AI research and code generation to improve models and solve alignment,” all on “a direct path to AGI.” The startup recently announced preliminary work on AI models with 100-million-token context windows, enough to process ~750 novels simultaneously (see the back-of-envelope sketch after this list).
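For readers curious how that last figure pencils out, here is a minimal back-of-envelope sketch. The words-per-novel and tokens-per-word values are rough rules of thumb we are assuming, not figures from Magic’s announcement.

```python
# Back-of-envelope check on the ~750-novel figure. The word and token
# counts below are rough assumptions, not numbers from Magic.
WORDS_PER_NOVEL = 100_000     # assumed length of a typical novel
TOKENS_PER_WORD = 1.33        # assumed average for English prose
CONTEXT_TOKENS = 100_000_000  # Magic's reported context window size

tokens_per_novel = WORDS_PER_NOVEL * TOKENS_PER_WORD  # ~133,000 tokens
novels = CONTEXT_TOKENS / tokens_per_novel
print(f"~{novels:.0f} novels")  # prints "~752 novels"
```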
Most notably, OpenAI raised $6.6 billion in funding, plus a $4 billion credit line from major banks. The company’s latest products are inspiring detailed strategic forecasts from Sequoia Capital and performing improvisational choral arrangements of the Beatles’ “Eleanor Rigby.”
OpenAI needs this money because frontier AI is expensive. According to The Information, the company is projected to spend billions of dollars annually on computing costs. OpenAI is also spending billions on salary expenses and stock compensation to retain talented AI employees. And in 2024 alone, the company will pay about $500 million to acquire training data.
As AI’s capabilities evolve rapidly, so too does the capital fueling its progress. The question now is not whether AI will reshape our world, but how soon and in what ways.
U.S. AI Safety Institute Seeks Input on Chem-Bio AI
The 2024 Nobel Prize in Chemistry was awarded to David Baker, Demis Hassabis, and John M. Jumper. The latter two scientists were recognized for their contributions to AlphaFold 2, an AI model from Google DeepMind. In 2022, DeepMind used this model to predict the 3D folded structures of nearly all 200+ million proteins known to science, a feat that would have taken roughly 1 billion years of manual PhD-student effort.
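For context, the billion-year comparison is simple arithmetic, sketched below. The five-years-per-structure figure is an assumption on our part, echoing the oft-cited estimate that experimentally solving a single protein structure can take an entire PhD.

```python
# Rough check on the "1 billion years" comparison. The five-year
# figure is assumed: roughly the length of one PhD, often cited as
# the time to solve one protein structure experimentally.
PROTEIN_STRUCTURES = 200_000_000  # structures predicted by AlphaFold 2
YEARS_PER_STRUCTURE = 5           # assumed PhD-years per structure

total_phd_years = PROTEIN_STRUCTURES * YEARS_PER_STRUCTURE
print(f"~{total_phd_years / 1e9:.0f} billion PhD-years")  # ~1 billion
```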
AlphaFold has tremendous potential to improve medical research and save lives. But major leaps in computational biology can also be a double-edged sword, since they enable new forms of biological misuse. This dynamic is part of the reason why the U.S. AI Safety Institute (AISI) has issued a Request for Information (RFI) on the responsible development of chemical and biological (chem-bio) AI models.
AISI defines these chem-bio systems as “AI models that can aid in the analysis, prediction, or generation of novel chemical or biological sequences, structures, or functions.” Here are some of the questions that AISI is asking:
“To what extent is it possible to have generalizable evaluation methodologies that apply across different types of chem-bio AI models?”
“How might the research community approach the development and use of public and/or proprietary chem-bio datasets that could enhance the potential harms of chem-bio AI models through fine tuning or other post-deployment adaptations?”
“What areas of research are needed to better understand the risks associated with the interaction of multiple chem-bio AI models [...] into an end-to-end workflow or automated laboratory environments for synthesizing chem-bio materials independent of human intervention?”
The Center for AI Policy commends AISI for working on this critical topic and calls on Congress to reconcile and pass the Senate and House bills that would formally authorize AISI. Both bills have passed out of committee; it’s time to move them into law.
CEO Automation Has Potential, Says Cambridge Study
At least three companies have appointed AI-powered CEOs in recent years, but the trend has not spread far. According to a new study in Harvard Business Review by Cambridge University professors and leaders at Strategize.inc, that could soon change.
The researchers created a computer game that simulates running a car company in the United States, and they invited 344 people to play this game. These players were a mix of university students and experienced bank executives.
Each player acted as the CEO of a car company and had to make major decisions: which cars to build, how to price them, and how to respond to market changes. The goal was to maximize the company’s value while avoiding getting fired by the board of directors.
After the human players finished their games, the researchers let an AI (specifically, OpenAI’s GPT-4o model) play the same game. They then compared how well the AI did against the top-performing human players.
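For readers curious what such a setup might look like, here is a purely illustrative sketch of handing one round of the game to GPT-4o through the OpenAI Python API. The study’s actual harness is not public, so the prompts and function names below are our own assumptions.

```python
# Illustrative only: we assume a game loop that sends each round's
# market state to GPT-4o and applies the model's decisions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ceo_turn(market_state: str) -> str:
    """Ask the model for one round of CEO decisions (hypothetical)."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": ("You are the CEO of a U.S. car company. "
                         "Maximize the firm's value and avoid being "
                         "fired by the board.")},
            {"role": "user",
             "content": (f"Market conditions this round:\n{market_state}\n"
                         "Decide which cars to build, how to price them, "
                         "and how to respond to the market.")},
        ],
    )
    return response.choices[0].message.content
```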
“GPT-4o’s performance as a CEO was remarkable,” said the researchers. “The LLM consistently outperformed top human participants on nearly every metric.”
However, the AI “struggled with black swan events — such as market collapses during the Covid-19 pandemic.” As a result, “GPT-4o was fired faster by the virtual board than the students who played the game.”
While this study doesn’t imply the imminent replacement of all human CEOs, it does underscore the impressive decision-making capabilities of current AI. If these capabilities continue growing, AI has serious potential to help shape executive-level strategy in the near future.
News at CAIP
Jason Green-Lowe responded to Geoffrey Hinton’s Nobel Prize win: “CAIP Congratulates AI Safety Advocate on Winning the 2024 Nobel Prize in Physics.”
Brian Waldrip wrote a blog post on the need to increase America’s emergency response capabilities for AI: “Preparedness: Key to Weathering Tech Disasters.”
Jason Green-Lowe wrote a blog post on “Sam Altman’s Dangerous and Unquenchable Craving for Power.” CAIP believes that no one man, woman, or machine should have this much power over the future of AI.
Claudia Wilson led CAIP’s comment on the Bureau of Industry and Security’s (BIS) proposed rule establishing reporting requirements for advanced AI models and supercomputers.
We published two research reports from our summer 2024 policy fellows.
Claudia Wilson spoke at a panel discussion covering AI policy and career development. The event was hosted by Young Professionals in Foreign Policy (YPFP).
Quote of the Week
I’m particularly proud of the fact that one of my students fired Sam Altman.
—Geoffrey Hinton, a pioneering AI researcher and seminal figure in the development of neural networks, speaking at a press conference after winning the 2024 Nobel Prize in Physics
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub