AI Policy Weekly #28
Sutskever’s superintelligence, the G7’s AI commitments, and Peters and Tillis’ PREPARED for AI Act
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Sutskever’s Safe Superintelligence
Ilya Sutskever is 37 years old. He is easily one of the most cited AI researchers of all time.
Sutskever was part of OpenAI's founding team in 2015. About eight years later, as a member of the OpenAI board, he attempted to oust CEO Sam Altman, although he quickly backtracked and said he regretted his actions.
Company management was not Sutskever’s main focus in 2023. Instead, he led the new Superalignment team, which aimed to make research breakthroughs to control “AI systems much smarter than us,” also known as “superintelligence.”
Sutskever added color to this definition of superintelligence during a Q&A event, saying that “It will be possible to build a computer—a computer cluster, a GPU farm—that is just smarter than any person, that can do science and engineering much, much faster than even a large team of really experienced scientists and engineers.”
Last month, the Superalignment team imploded and Sutskever left the company. He hinted that he would soon begin a “very personally meaningful” project.
Yesterday, the world learned what that project is.
Sutskever is launching Safe Superintelligence Inc. (SSI), a company that will strive to build superintelligence. Safely, of course.
The company has no public plan for how it will ensure safety, but Sutskever assured Bloomberg that “we mean safe like nuclear safety as opposed to safe as in ‘trust and safety.’” He elaborated that safe superintelligence “will not harm humanity at a large scale.” Notably, Sutskever signed an open letter last year stating that AI could cause human extinction.
SSI bills itself as “an American company,” though it has offices both inside and outside America: in California and in Israel.
It is “assembling a lean, cracked team of the world’s best engineers and researchers.” SSI will need considerable funding to do that, since top AI researchers command multi-million-dollar pay packages. Moreover, cutting-edge AI models require training data and supercomputers that cost hundreds of millions of dollars.
Sutskever promises that SSI will be different from other frontier AI companies. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then. It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
This is not the first time that a new AI company has claimed to be safer than its competitors. In fact, it is at least the fourth time.
In 2013, Elon Musk learned that Google was planning to acquire DeepMind. He was deeply concerned that Google CEO Larry Page did not take safety seriously enough, and he warned DeepMind CEO Demis Hassabis that “[t]he future of AI should not be controlled by Larry [Page].”
Google went on to acquire DeepMind.
In 2015, Musk received an email from Sam Altman, who said that he’d been “thinking a lot about whether it’s possible to stop humanity from developing AI. [...] If it’s going to happen, it seems like it would be good for someone other than Google to do it first.”
Altman proposed building an organization that would safely develop human-level AI, as a counterweight to Google DeepMind. Musk eventually agreed to support the group. The organization went on to become OpenAI.
In 2021, a group of OpenAI employees became concerned that the company was overly focused on pursuing profit. They left and formed their own company, Anthropic, which they branded as an “AI safety lab.” Last year, Anthropic raised over seven billion dollars.
In 2023, Musk launched his own AI company, xAI, with a mission to “understand the true nature of the universe.” He said he needed to start xAI because “as a spectator, one can’t do much to influence the outcome [of advanced AI].” Musk claimed his company was “not subject to market-based incentives, or the non market-based, ESG incentives.” Last month, xAI raised six billion dollars.
And now, in 2024, Ilya Sutskever is launching his own company to build general-purpose AI that surpasses human abilities.
The Center for AI Policy is skeptical of Sutskever’s claims about safety. Until we see further evidence, we expect SSI to be about as dangerous as the other AI companies racing to create a technology that could aid in chemical and biological weapons development.
G7 Leaders Commit to Hiroshima AI Plan
Last week, President Biden met with the leaders of the other G7 nations (Canada, France, Germany, Italy, Japan, and the United Kingdom), along with invited leaders from more than ten other countries, at the annual G7 summit.
One special attendee was Pope Francis, who warned of the dangers of AI. “We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives,” he said.
The White House published the G7 leaders’ communiqué summarizing takeaways from the summit, including a section on AI.
Importantly, the G7 leaders committed to advancing the International Code of Conduct for Organizations Developing Advanced AI Systems, a key document from the “Hiroshima AI Process” that the G7 launched last year.
The International Code of Conduct calls for AI developers to mitigate risks from their technologies, such as:
chemical, biological, radiological, and nuclear (CBRN) risks,
offensive cyber capabilities,
critical infrastructure disruptions,
models “making copies of themselves or ‘self-replicating’ or training other models,”
chain reactions that “could affect up to an entire city,”
threats to democracy, and more.
The communiqué describes plans to build a “reporting framework” for monitoring the effects and implementation of the Code of Conduct.
The G7 will also develop “a brand that can be used to identify organizations that are voluntarily participating” in the reporting framework.
The Center for AI Policy is pleased to see international collaboration towards reducing AI’s risks. However, we caution that voluntary guidelines are insufficient for ensuring responsible AI development.
Tillis, Peters Introduce the PREPARED for AI Act
Senators Thom Tillis (R-NC) and Gary Peters (D-MI) introduced the Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act.
The bill would require federal agencies to classify the risk levels of the AI systems they use, conduct testing to mitigate potential risks, and establish AI governance structures. It would also require government AI contracts to include specific safety and security terms.
“It is crucial federal agencies have a robust framework for procuring and implementing AI safely and effectively,” said Senator Tillis.
News at CAIP
Jason Green-Lowe authored an op-ed in the Capital Journal, a local South Dakota newspaper serving residents of Pierre, the state capital: “South Dakotans have a unique opportunity to call for safe AI legislation from Congress.”
Episode 8 of the CAIP Podcast features Tamay Besiroglu, Associate Director of Epoch AI. Jakub and Tamay discuss the factors shaping AI progress and analyze AI’s potential trajectories over the coming years.
Jason Green-Lowe wrote a blog post commenting on Senator Hickenlooper’s call for qualified third parties to effectively audit AI systems and verify their compliance with federal laws and regulations.
We’re hiring for an External Affairs Director.
Upcoming Event
On Wednesday, June 26th at 11 a.m., the Center for AI Policy will host a panel discussion titled “Protecting Privacy in the AI Era: Data, Surveillance, and Accountability.” Join us in the Rayburn House Office Building (Room 2044) for a conversation with four esteemed experts:
Ben Swartz, Senior Technology Advisor at the FTC
Brandon Pugh, Director of Cybersecurity and Emerging Threats Policy at the R Street Institute
Maneesha Mithal, Partner at Wilson Sonsini and co-chair of the firm’s privacy and cybersecurity practice
Mark MacCarthy, Adjunct Professor at Georgetown University and Nonresident Senior Fellow at Brookings
Quote of the Week
From our experiments, we have a lot of ideas about how such tasks are solved, and how the learning algorithms that underlie the acquisition of skilled behaviors are implemented.
We want to start using the virtual rats to test these ideas and help advance our understanding of how real brains generate complex behavior.
—Bence Ölveczky, Professor of Organismic and Evolutionary Biology at Harvard, commenting on research that trained an artificial neural network to control the movements of a simulated rodent and imitate the behaviors of real rats
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub