Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
Congress Half-Funds the Government, Biden Makes Requests for FY25
On Saturday, the President signed into law the Consolidated Appropriations Act, 2024.
This “minibus” includes six of the twelve appropriations bills that were supposed to be completed last fall. Congress must pass the remaining six by March 22nd to fully fund the government and avoid a partial government shutdown.
Several features of the minibus are particularly relevant for AI policy. First, it appropriates $191M to the Bureau of Industry and Security (BIS) in the Department of Commerce, matching the agency’s FY2023 funding and falling $31M short of what BIS requested.
This is a shame, because BIS is responsible for controlling US adversaries’ access to AI hardware, a critical piece of the AI supply chain. One think tank report found that increasing the BIS budget is “likely to be one of the best opportunities available anywhere in US national security.”
Next, the bill provides $1.46B to the National Institute of Standards and Technology (NIST), another crucial agency for AI due to its work on measuring and managing AI risks. This is roughly a 10% cut from NIST’s FY2023 budget of approximately $1.63B.
However, the bill’s explanatory statement specifies that NIST’s AI research projects—which are only part of the agency’s full portfolio—should receive no less than last year’s enacted funding. Additionally, up to $10M of NIST’s AI funds are available for establishing the US AI Safety Institute.
Third, the bill increases the National Science Foundation (NSF) budget from approximately $8.84B to $9.06B, a 2.5% boost. But in real terms this is more like a cut than a raise, since inflation has stayed above 3% for the past couple of years.
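To make the inflation point concrete, here is a minimal sketch in Python of how a 2.5% nominal increase becomes a real-terms decrease. The flat 3% inflation rate is an illustrative assumption based on the “above 3%” figure, not an official deflator.

```python
# Illustrative only: comparing NSF's nominal budget change to an
# inflation-adjusted ("real") change. The 3% rate is an assumption
# drawn from the "above 3%" claim, not an official figure.
fy2023_budget = 8.84e9   # approximate NSF FY2023 budget, in dollars
fy2024_budget = 9.06e9   # approximate NSF FY2024 budget, in dollars
inflation = 0.03         # assumed annual inflation rate

nominal_change = (fy2024_budget - fy2023_budget) / fy2023_budget
# Deflate FY2024 dollars into FY2023 dollars before comparing.
real_change = (fy2024_budget / (1 + inflation) - fy2023_budget) / fy2023_budget

print(f"Nominal change: {nominal_change:+.1%}")  # roughly +2.5%
print(f"Real change:    {real_change:+.1%}")     # roughly -0.5%
```

In other words, under that assumption the FY2024 budget buys slightly less than the FY2023 budget did.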
A positive sign is that Congress’s explanatory statement encourages the NSF to invest in research on technical methods for interpreting and explaining the behavior of AI systems.
Meanwhile, even as Congress works to finish FY2024 appropriations, which only last until September 30th, the President released a budget request for FY2025.
Several features of the FY2025 request would promote more responsible AI development:
$70M for federal agencies to support Chief AI Officers who oversee their agencies’ adoption and use of AI.
$65M for Commerce to “safeguard and promote AI,” including $50M for the US AI Safety Institute.
$37M for the National Nuclear Security Administration (NNSA) within the Department of Energy (DOE) to annually assess the potential for AI to assist in the production of chemical, biological, radiological, and nuclear threats.
$32M for the US Digital Service (USDS), General Services Administration (GSA), and Office of Personnel Management (OPM) to help bring AI expertise into the federal government.
$30M for the ongoing pilot of the National AI Research Resource (NAIRR).
$5M for the Department of Homeland Security (DHS) to establish an AI office.
However, with an election approaching and Congress skeptical of increasing federal spending, it remains to be seen how many of these features will survive in the final FY2025 appropriations. Additionally, these investments pale in comparison to industry, where individual companies like Meta may be spending over a billion dollars a month on AI.
Intelligence Leaders Share Annual Threat Assessment
This week, the US Intelligence Community (IC) released its 2024 Annual Threat Assessment, which covers “the most direct, serious threats to the United States primarily during the next year.”
AI made several appearances in the report. For instance, Russia is “developing the capability to fool experts” with deepfakes, and China is pursuing AI for mass surveillance and “intelligent weapon platforms.”
Alongside the report’s release, IC leaders testified before Congress in four separate hearings: the Senate and House intelligence committees each held one classified and one unclassified session on the 2024 Threat Assessment.
At the Senate’s public hearing, the CIA Director shared that Al-Qaeda has used AI to generate videos aimed at inspiring terrorists to conduct lone-wolf attacks.
In response to examples like this, Senator Kirsten Gillibrand (D-NY) expressed concern that the IC lacks a plan for preparing the American people for AI-driven manipulation of elections.
Gladstone AI Unveils Action Plan for AI Risks
At the end of 2022, just before ChatGPT’s release, the State Department commissioned a study on the risks of increasingly advanced AI, awarding a $250,000 contract to Gladstone AI, a for-profit organization that teaches AI courses for the US Government.
Gladstone AI has finished its report, which is titled “An Action Plan to Increase the Safety and Security of Advanced AI.” It includes a historical survey of government interventions for controlling other technologies, an analysis of possible trajectories of future AI development, and concrete recommendations for addressing large-scale threats from upcoming AI systems.
One promising proposal is establishing an “AI Observatory” in the executive branch to track the current state of AI technology and where it’s heading. This office would stay up-to-date on AI capabilities so that the government has adequate time and information to prepare for future AI advancements.
Additionally, the government could build an understanding of warning signs that indicate when AI systems pose catastrophic-level risks, and it could create scenario-based contingency plans for responding to different AI developments that might trigger such warnings.
News at CAIP
We published model legislation for the Responsible Advanced AI Act, which was featured in the Gladstone AI Action Plan. View our legislation here.
We’ve redesigned our website with a fresh new look. Check it out at aipolicy.us.
We released three statements: one on the 2024 State of the Union, another on the President’s FY2025 Budget Request, and a third on the EU Parliament’s approval of the EU AI Act.
CAIP in the news: in an article on the Musk-Altman legal battle, the Financial Times quoted from our memo on the topic.
Quote of the Week
If we want to build very powerful AI systems but also have this strong constraint that they do not exacerbate catastrophic-level risks, then I think methods like unlearning are a critical step in that process.
—Alexandr Wang, CEO of Scale AI, commenting on the new Weapons of Mass Destruction Proxy (WMDP) benchmark for AI systems
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub