Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Senate Armed Services Committee Shares NDAA Text
On Monday, Senate Armed Services Committee (SASC) leaders Roger Wicker (R-MS) and Jack Reed (D-RI) filed S. 4638, the National Defense Authorization Act (NDAA) for Fiscal Year 2025.
The bill passed out of SASC on a bipartisan 22-3 vote last month, but the bill’s full text was unavailable to the public until this week. It still requires full Senate approval, and it will eventually need to unify with the NDAA bill that the House passed last month.
“This year’s NDAA results are a testament to the tradition of bipartisanship, vigorous debate, and good working order on which this committee prides itself,” said Senator Wicker.
Several provisions of SASC’s NDAA stand out for reducing AI’s risks.
First, the committee report directs the Director of Operational Test and Evaluation to brief SASC and the House Armed Services Committee (HASC) on testing infrastructure to validate AI in military applications, including AI systems’ ability to handle adverse events like malware, data poisoning, and cyber exploits.
Second, SASC directs the Comptroller General to submit a report to SASC and HASC on the Department of Defense’s (DOD’s) management of AI-related issues, including the DOD’s implementation of its Responsible AI Strategy.
Third, Section 1283 of SASC’s NDAA would establish an international AI working group for the US and its allies to implement an AI initiative, including several lines of effort to test and evaluate AI systems.
The multilateral group would also “advise member countries with respect to export controls applicable to [AI] systems.”
Fourth, Section 242 would expand the duties of the Chief Digital and Artificial Intelligence Officer Governing Council, which was established in last year’s NDAA.
The expanded duties are (1) to identify “advanced artificial intelligence models that could pose a national security risk if accessed by an adversary of the United States,” (2) to “develop strategies to prevent unauthorized access and usage of potent artificial intelligence models” by US adversaries, and (3) to advise Congress and relevant federal agencies on AI-related legislative and administrative action.
The NDAA’s provisions on export controls and national security risks underscore the dual-use nature of advanced AI systems. For example, the Department of Homeland Security recently released a report on AI’s intersection with chemical and biological threats, while the Department of Energy is studying AI’s intersection with radiological and nuclear threats.
Indeed, SASC’s NDAA committee report urges the DOD to recommend feasible actions that would “ensure nuclear system safety, security, and reliability is not negatively impacted by [AI].”
Separately, the House NDAA committee report directs the Secretary of Defense to brief HASC on “the potential impact of artificial intelligence on CBRN threats and threat mitigations.”
Overall, the Center for AI Policy is pleased to see the Senate and House NDAA bills contending with national security risks from advanced, dual-use AI capabilities.
However, most of the bills’ relevant provisions direct the US government to tackle these grave threats, rather than directing the private sector to do its fair share. To adequately protect US national security, Congress needs to require that billion-dollar AI companies adopt safety and security practices.
Amazon Hires Adept’s AI Talent to Build AGI
In April 2022, a new AI startup named Adept emerged from stealth with $65 million in funding.
It aimed to build artificial general intelligence (AGI), an anticipated form of “human-level” AI that can match human brainpower at just about any cognitive task.
In March 2023, Adept announced that it had raised an additional $350 million.
Last week, Adept’s AGI ambitions largely ended, as Amazon hired approximately two-thirds of Adept’s staff, including all remaining members of the startup’s 2022 founding team, who have joined Amazon’s AGI team.
Further, Amazon gained rights to “Adept’s agent technology, family of state-of-the-art multimodal models, and a few datasets.”
Adept will now focus on “solutions that enable agentic AI,” rather than building AI agents itself. Just twenty employees remain at the company.
This is the second time in 2024 that a big AI company has swallowed a small one.
In March, Microsoft hired most of the 70-person team at Inflection AI, another promising AI startup that had raised over $1.5 billion.
Microsoft paid $620 million for rights to Inflection’s AI model, plus $33 million for “a waiver from claims against it related to hiring Inflection employees.”
Factoring in the compensation packages of Inflection’s AI talent, the deal may have cost Microsoft over $1 billion. This unconventional move has attracted antitrust scrutiny from the FTC and the UK’s Competition and Markets Authority.
With Microsoft and Amazon snapping up Inflection and Adept, Big Tech companies are strengthening their influence over advanced general-purpose AI development.
A Hacker Stole Algorithmic Secrets From OpenAI
The New York Times recently reported that OpenAI suffered a trade secret theft in 2023 and failed to inform law enforcement.
The hacker “stole details about the design of the company’s AI technologies” from “an online forum where employees talked about OpenAI’s latest technologies.” This forum may have been the OpenAI company Slack, which contains over five million messages.
OpenAI shared the incident with its employees and board, but the company otherwise kept the breach private since “no information about customers or partners had been stolen” and the hacker appeared to lack foreign ties.
Accordingly, an OpenAI spokesperson told the New York Times that the company “addressed” this incident.
This claim is debatable, because OpenAI’s AI research is relevant to US national security. Software insights are a major driver of AI progress, and OpenAI has publicly reported that actors in China, Russia, and Iran are already using its models to assist in influence operations and cyber attacks.
OpenAI’s failure to report its cybersecurity incident to the US government underscores the need for Congress to require stronger security at top AI companies.
News at CAIP
Jason Green-Lowe wrote a letter to the editor of Reason regarding Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation.
Claudia Wilson wrote a new blog post about Boeing’s agreement to plead guilty: “What Boeing’s Negligence Reveals About Corporate Incentives.”
We released a statement on the introduction of the Expanding Partnerships for Innovation and Competitiveness (EPIC) Act in the Senate.
Jason Green-Lowe wrote a blog post with thoughts on Google’s latest AI principles. Like the links on the second page of Google’s search results, these principles are something of a mixed bag.
On Wednesday, July 24th from 5:30–7:30 p.m. ET, we are hosting an AI policy happy hour at Sonoma Restaurant & Wine Bar. Anyone working on AI policy or related topics is welcome to attend. RSVP here.
On Monday, July 29th at 3 p.m. ET, the Center for AI Policy is hosting a webinar titled “Autonomous Weapons and Human Control: Shaping AI Policy for a Secure Future.” The event will feature a presentation and audience Q&A with Professor Stuart Russell, a leading researcher in artificial intelligence and the author (with Peter Norvig) of Artificial Intelligence: A Modern Approach, the standard textbook in the field. RSVP here.
Quote of the Week
I faced the issues of AI early, but it will happen for others. It may not be a happy ending.
—Lee Saedol, a legendary Go player who shockingly lost to Google DeepMind’s AI system in 2016
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub
Thanks for your work.