AI Policy Weekly #21
NIST gives guidance, DHS + DOE defend infrastructure, and Tillis + Warner introduce legislation for reporting AI incidents
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
NIST Drops 232 Pages of AI Guidelines
Partly in response to duties assigned by the 2023 AI Executive Order, the National Institute of Standards and Technology (NIST) released four new documents with guidelines for reducing AI risks.
First, NIST applied the principles of its original AI Risk Management Framework (RMF) to generative AI (GAI).
The resulting GAI profile outlines twelve risk areas exacerbated by GAI: data privacy, environmental impacts, human-AI interactions, misinformation, cybersecurity, intellectual property, and chemical, biological, radiological, or nuclear (CBRN) weapons information, as well as GAI systems confidently stating erroneous content (“hallucinations”), recommending harmful actions, generating obscene content, generating biased content, and relying on bad data or other third-party components.
The document then recommends straightforward actions that organizations working with GAI models can take to manage these risks. Here are some examples:
“Disclose use of GAI to end users.”
“Establish policies to identify and disclose GAI system incidents to downstream AI actors.”
“Bolster oversight of GAI systems with independent audits or assessments.”
“Re-evaluate risks when adapting GAI models to new domains.”
“Conduct adversarial role-playing exercises, AI red-teaming, or chaos testing to identify anomalous or unforeseen failure modes.”
And so on. What is remarkable is the sheer breadth of useful risk-reduction practices: in total, NIST recommends over 450 distinct actions.
Next, NIST adapted its Secure Software Development Framework (SSDF) in a second publication targeting GAI and large general-purpose AI models. Viewing AI systems through a software lens can be useful, even though AI differs from traditional software in important ways when it comes to risk management.
Here are some sample recommendations from the AI-specific SSDF:
“Store reward models separately from AI models and data.”
“Conduct periodic audits of AI models.”
“Be prepared to roll back to a previous AI model.”
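To make the rollback recommendation concrete, here is a minimal sketch of how a team might track model versions so that a previous one can be restored. The registry layout and function names are hypothetical illustrations, not part of NIST’s guidance.

```python
import json
from pathlib import Path

# Hypothetical layout: each deployed model version lives in its own directory
# under model_registry/, and a small manifest records which version is active.
REGISTRY = Path("model_registry")
MANIFEST = REGISTRY / "manifest.json"


def promote(version: str) -> None:
    """Mark a new model version as active, preserving the prior version history."""
    if MANIFEST.exists():
        manifest = json.loads(MANIFEST.read_text())
    else:
        manifest = {"active": None, "history": []}
    if manifest["active"] is not None:
        manifest["history"].append(manifest["active"])
    manifest["active"] = version
    REGISTRY.mkdir(exist_ok=True)
    MANIFEST.write_text(json.dumps(manifest, indent=2))


def rollback() -> str:
    """Revert to the most recently promoted earlier version and return its name."""
    manifest = json.loads(MANIFEST.read_text())
    if not manifest["history"]:
        raise RuntimeError("No earlier model version to roll back to.")
    manifest["active"] = manifest["history"].pop()
    MANIFEST.write_text(json.dumps(manifest, indent=2))
    return manifest["active"]
```

In practice, the serving layer would load whichever version the manifest marks as active, so rolling back is just a manifest update plus a redeploy.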
Lastly, NIST’s other two documents offer (1) plans for crafting global AI standards and (2) technical approaches for reducing risks posed by synthetic content.
If it wasn't clear already, the US Government is piling loads of critical AI work on NIST.
But NIST cannot enforce any of these suggestions, because it is not a regulatory agency.
With Big Tech companies already reneging on their voluntary commitments to the UK AI Safety Institute, it's looking increasingly important to create an AI regulatory authority to ensure that companies actually adopt these best practices.
DHS, DOE Prepare for AI Threats to Critical Infrastructure
Both the Department of Homeland Security (DHS) and the Department of Energy (DOE) delivered guidance on keeping infrastructure safe against AI-related risks.
DHS’s Safety and Security Guidelines for Critical Infrastructure Owners and Operators incorporate NIST's AI Risk Management Framework and aim to reduce three major kinds of risk: attacks using AI, attacks targeting AI systems, and AI system malfunctions.
Among possible attacks using AI, DHS is considering AI-enabled cyber attacks, psychological manipulation using deepfakes, autonomous robotic weaponry, and more.
Similarly, the DOE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) released an interim assessment outlining beneficial and harmful effects of AI on critical energy infrastructure. It considers risks from misaligned AI systems that “may end up prioritizing economic gain over, for example, grid reliability,” AI-driven autonomous malware, and more.
The DOE also announced that CESER will extend this work over the coming months in collaboration with energy stakeholders and technical experts.
Likewise, DHS will gather recommendations from relevant stakeholders through its new AI Safety and Security Board, which will advise “on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure.”
These projects mark the dawn of an era where critical infrastructure sectors must carefully assess their defenses against AI threats. With AI systems continuing to develop new offensive capabilities, that era has no end in sight.
Tillis and Warner Introduce the Secure Artificial Intelligence Act of 2024
Senators Thom Tillis (R-NC) and Mark Warner (D-VA) have a commonsense proposal: build infrastructure to track incidents in which AI causes harm or has security flaws.
Specifically, their bill would direct NIST to incorporate AI security vulnerabilities into the National Vulnerability Database (NVD), a comprehensive repository of publicly known cybersecurity vulnerabilities.
Similarly, the Cybersecurity and Infrastructure Security Agency (CISA) would update the Common Vulnerabilities and Exposures (CVE) program, which standardizes the identification and naming of publicly disclosed cybersecurity vulnerabilities.
Building on these adjustments, NIST and CISA would collaborate to create a voluntary database for tracking AI security and safety incidents.
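For context on what NVD integration looks like in practice, here is a minimal sketch of querying NVD’s existing public REST API for CVE records by keyword. The endpoint is real, but treat the response fields used below as assumptions about the current JSON schema; any AI-specific records the bill envisions do not exist yet.

```python
import requests

# NVD's public CVE API (version 2.0). Queries without an API key are
# rate-limited, which is fine for an illustrative sketch like this one.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def search_cves(keyword: str, limit: int = 5) -> list[dict]:
    """Return IDs and English summaries for CVE records matching a keyword."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        summary = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
            "",
        )
        results.append({"id": cve.get("id"), "summary": summary})
    return results


if __name__ == "__main__":
    # Hypothetical usage: look for vulnerabilities that mention machine learning.
    for record in search_cves("machine learning"):
        print(record["id"], "-", record["summary"][:80])
```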
Another key component of the bill would formally establish and extend the role of the National Security Agency’s (NSA’s) AI Security Center, which began operating in fall 2023 and recently issued its first publication.
Under the bill, the Security Center would provide testing resources to AI security researchers, develop guidance for defending AI systems against security exploits, promote secure AI use in national defense systems, and coordinate with NIST’s AI Safety Institute.
Overall, the bill would begin creating procedures for tracking existing AI harms and learning from them, which are critical practices for reducing risks. The Center for AI Policy firmly supports this bill.
News at CAIP
Jason Green-Lowe gave a statement of support for the Secure AI Act of 2024.
ICYMI: watch a video recording of our briefing on AI's workforce impacts, or read our latest research report on the topic.
The latest episode of the Center for AI Policy Podcast features our very own Jason Green-Lowe discussing legal liability for AI harms.
We released a memo in response to the latest AI activity in the Senate: US Senate Gets Ready to Pile More AI Responsibilities on NIST.
We were mentioned in TIME regarding advocacy and lobbying on Capitol Hill.
We replied to the Commerce Department's request for comments, offering suggestions to improve its proposed requirements for US cloud computing providers to identify their customers and report when foreign customers train massive dual-use AI models.
We’re hiring for two roles: External Affairs Director and Government Relations Director.
Quote of the Week
In AI research, there is a saying: Garbage in, garbage out. If the data that went into training an AI model is trash, that will be reflected in the outputs of the model. The more data points the AI model has captured of my facial movements, microexpressions, head tilts, blinks, shrugs, and hand waves, the more realistic the avatar will be.
Back in the studio, I’m trying really hard not to be garbage.
—Melissa Heikkilä, a senior reporter at MIT Technology Review, describing her experience of being cloned into a digital avatar
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub