AI Policy Weekly #10
Hardware governance, Scott Wiener’s California bill, and a DoD bug bounty program
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
To Shape AI’s Impact, Govern Its Fuel
A 19-person team of researchers, including experts from OpenAI, GovAI, and the University of Cambridge, has published a 104-page report on compute governance. This broad topic encompasses the norms, laws, and institutions affecting computing power, also known as “compute.”
Compute refers broadly to the mathematical operations (e.g. addition and multiplication) that AI chips execute when AI systems are trained or run. Along with data, algorithms, and human talent, compute is one of the core inputs behind progress toward increasingly powerful AI systems.
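To make “operations” concrete, here is a minimal back-of-the-envelope sketch (our own illustration, not taken from the report) counting the multiplies and adds in a single dense neural-network layer; a frontier training run repeats operations like these an astronomical number of times.

```python
# Rough sketch: counting arithmetic operations in one dense layer.
# The layer size below is an illustrative assumption, not a figure from the report.

def dense_layer_ops(input_size: int, output_size: int) -> int:
    """Multiplies and adds needed for one forward pass through a dense layer."""
    multiplies = input_size * output_size        # one multiply per weight
    adds = (input_size - 1) * output_size        # summing each output's products
    return multiplies + adds

# Even a single modest layer requires tens of millions of operations per input.
print(f"{dense_layer_ops(4096, 4096):,} operations")  # ~33.5 million
```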
Compute governance covers efforts to influence the full, diverse range of resources that contribute to compute, including fabrication plants, AI supercomputers, cloud services, extreme ultraviolet lithography machines, and much more.
Compared to the other inputs to AI progress, compute stands out because of its detectable usage, its physical and quantifiable nature, and its highly concentrated supply chain.
Given the unique appeal of compute governance, it may be unsurprising that the US Government has already taken action in this area through activities like the CHIPS Act, the National AI Research Resource (NAIRR) Pilot, the chip export controls, the compute-based Executive Order reporting requirements, and the proposed Know-Your-Customer rules for cloud computing.
Building on this existing work, the compute governance report highlights numerous other promising governance mechanisms.
For instance, the US Government could analyze reports from large-scale AI developers and compute providers to forecast future growth in AI capabilities. Additionally, cloud computing providers could revoke compute access from actors who use their resources for malicious purposes.
Other measures seek to positively reshape the allocation of compute. For instance, policymakers could provide accessible computing resources for beneficial uses like education, agriculture, or AI safety.
The report does not shy away from more ambitious proposals. One section outlines how an international “compute reserve” could fuel an AI megaproject akin to the European Organization for Nuclear Research (CERN).
California Eyes Regulation for Powerful Future AI Systems
California State Senator Scott Wiener has introduced an important new proposal for AI regulation with his bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).
The bill proposes that developers of the largest and most costly AI models must assess their systems for safety. It includes provisions for whistleblower protections, Know-Your-Customer rules for compute providers, a new public computing resource called CalCompute, and the creation of a Frontier Model Division within the California Department of Technology.
The safety requirements aim to prevent severe harms, such as “mass casualties” or cyberattack damages exceeding $500 million. These requirements specifically target systems that train using over 10^26 operations—a threshold that, to date, no system has been publicly confirmed to reach.
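For a sense of scale, the sketch below is a back-of-the-envelope estimate (our own assumption-laden illustration, using the common heuristic that training compute is roughly 6 × parameters × training tokens) of how hypothetical training runs compare against the bill’s threshold.

```python
# Back-of-the-envelope check against SB 1047's 10^26-operation threshold.
# The 6 * parameters * tokens heuristic and the model sizes below are
# illustrative assumptions, not figures from the bill or from any developer.

THRESHOLD_OPS = 1e26

def training_ops(parameters: float, tokens: float) -> float:
    """Approximate total training operations via the common 6*N*D rule of thumb."""
    return 6 * parameters * tokens

hypothetical_runs = {
    "70B params, 2T tokens": training_ops(70e9, 2e12),    # ~8.4e23
    "1T params, 15T tokens": training_ops(1e12, 15e12),   # ~9.0e25
}

for name, ops in hypothetical_runs.items():
    status = "over" if ops > THRESHOLD_OPS else "under"
    print(f"{name}: {ops:.1e} operations ({status} the 1e26 threshold)")
```

Under these illustrative assumptions, even a very large hypothetical run lands just under the threshold, consistent with the observation that no publicly confirmed system has reached it yet.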
Before training a massive AI model, an AI company must self-assess the risk of the model causing extreme harm. If the model presents sufficiently large risks, the company must develop a comprehensive safety and security protocol before proceeding with training. After training, if the company still finds extreme risks, additional precautions are necessary.
Arguably, the bill does not go far enough: it contains no licensing or pre-approval requirements. It also says that developers must be able to shut down models that are causing harm, but only if those models are in their possession, so open source developers are largely off the hook.
Still, the bill contains meaningful measures to ensure companies don’t irresponsibly cause large-scale harm, and it represents a significant step forward.
DoD is Sponsoring Prizes to Fight AI Bias
The FY2024 NDAA directed the Department of Defense (DoD) to develop a bug bounty program. This program aims to encourage the discovery and resolution of flaws in advanced general-purpose AI systems.
In a move likely connected to this directive, the DoD recently launched the first of two bug bounty projects. This effort focuses on uncovering biases in open-source large language models.
The contest has a deadline of March 11th and offers $24,000 in total prize money, sponsored by the DoD, with $9,000 available for first place. For comparison, this sum is a tiny fraction of the DoD’s annual budget, which exceeds $800 billion.
Similarly, the FTC recently concluded a prize competition for solutions to address fraud from AI voice clones. Collectively, these competitions are a positive sign, since prize competitions can crowdsource innovation without significant government spending.
News at CAIP
Save the date: we’re hosting a happy hour for AI policy professionals at Sonoma on Wednesday, February 28th, from 5:30 to 7:30pm.
We hosted a panel discussion on AI’s electoral impacts in the Rayburn House Office Building, with insightful remarks from panelists Renée DiResta (Stanford Internet Observatory), Eric Heitzman (IOActive), Josh Goldstein (Center for Security and Emerging Technology), and Richard Anthony (Public Citizen). Recording and transcript coming soon at this page.
Learn all about AI-powered misinformation—including a curated list of the most significant incidents to date—in our latest report.
Quote of the Week
You can’t step on the accelerator without a brake.
—Akiko Murakami, the leader of Japan’s new AI Safety Institute, in a recent interview
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub