Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
White House Releases National Security Memorandum on AI
Last week, the Biden administration unveiled the first-ever National Security Memorandum (NSM) on AI. Spanning over 10,000 words, it is packed with priorities and plans for addressing both AI’s transformative promise and its looming risks.
The NSM aims to strengthen America’s lead in AI development, deploy AI systems to bolster national security, and advance global AI governance that “fosters safe, secure, and trustworthy AI.”
“There is probably no other technology that will be more critical to our national security in the years ahead,” said U.S. National Security Advisor Jake Sullivan in a speech about the NSM.
Sullivan’s speech also emphasized the “huge uncertainty around AI’s growth trajectory.” This uncertainty applies even to the near future, since AI is advancing so quickly. Therefore, “we have to be prepared for the entire spectrum of possibilities of where AI is headed in 2025, 2027, 2030, and beyond.”
Here are some key steps the NSM takes to confront this enormous challenge:
The NSM elevates AI talent immigration to a national security priority, including efforts to streamline visa processing.
The White House and the Department of Energy (DOE) will coordinate efforts to accelerate the construction of AI computing infrastructure and related assets like “clean energy generation, power transmission lines, and high-capacity fiber data links.”
The Committee on Foreign Investment in the United States (CFIUS) will evaluate whether covered transactions would give foreign actors access to proprietary AI development information, such as model weights.
The Commerce Department’s AI Safety Institute (AISI) will serve as the U.S. government’s primary point of contact with private sector AI developers for conducting voluntary safety tests.
The National Security Agency’s (NSA) AI Security Center will “develop the capability to perform rapid systematic classified testing of AI models’ capacity to detect, generate, and/or exacerbate offensive cyber threats.”
The DOE’s National Nuclear Security Administration (NNSA) will test AI’s risks related to nuclear and radiological weapons.
The DOE, AISI, and Department of Homeland Security (DHS) will make a roadmap for classified AI evaluations regarding chemical and biological threats.
Various agencies will “identify education and training opportunities to increase the AI competencies of their respective workforces.”
The State Department will coordinate with several agencies to “produce a strategy for the advancement of international AI governance norms,” including an approach for working with the United Nations and the G7.
A new AI National Security Coordination Group will “consider ways to harmonize policies relating to the development, accreditation, acquisition, use, and evaluation of AI” on national security systems.
These directives are only part of the full NSM, which also covers topics like civil rights, economic assessments, foreign adversaries, procurement processes, Chief AI Officers, and more.
Further, the NSM is accompanied by a Framework to Advance AI Governance and Risk Management in National Security, which outlines how the government can use AI in national security contexts. It’s a counterpart to the Office of Management and Budget’s (OMB) memorandum M-24-10, which covers non-national security AI projects.
Notably, the Framework prohibits certain uses of AI. For example, agencies cannot use AI to “unlawfully suppress or burden the right to free speech,” or to automate critical nuclear weapons deployment decisions.
The Framework and NSM serve as an official recognition that in the age of AI, national security and tech policy have become inextricably linked.
Cerebras’ Inference Breakthrough Extends AI’s Speed Advantage
The AI computing company Cerebras has achieved a significant breakthrough in AI processing speed, running Meta’s Llama-3.1-70B model at over 2,100 tokens per second with its latest inference software. This means the AI system can write approximately 90,000 words per minute.
For reference, most people type at under 100 words per minute; humanity’s fastest typists reach over 300 words per minute; and record-breaking talkers can enunciate over 600 words per minute.
In this sense, humans are now over one hundred times slower than AI. To put this in perspective: a Cerebras-powered Llama model could produce all seven Harry Potter books in about twelve minutes.
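As a rough sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. The ~0.75 words-per-token ratio and the ~1.08 million-word length of the Harry Potter series are illustrative assumptions, not numbers from Cerebras.

```python
# Back-of-the-envelope check of the throughput figures above.
# Assumptions (not from Cerebras): ~0.75 English words per token,
# and ~1,084,000 words across all seven Harry Potter books.

TOKENS_PER_SECOND = 2_100        # reported Llama-3.1-70B inference speed
WORDS_PER_TOKEN = 0.75           # rough rule of thumb for English text
HARRY_POTTER_WORDS = 1_084_000   # approximate total for the series

words_per_minute = TOKENS_PER_SECOND * 60 * WORDS_PER_TOKEN
minutes_for_series = HARRY_POTTER_WORDS / words_per_minute

print(f"~{words_per_minute:,.0f} words per minute")        # ~94,500
print(f"~{minutes_for_series:.0f} minutes for the series")  # ~11-12
```

Varying the words-per-token assumption shifts the exact totals, but the order of magnitude, tens of thousands of words per minute, holds.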
The Cerebras system is available to try here.
Looking ahead, future AI models will operate many times faster than the current state of the art. Indeed, Cerebras’ latest release is already three times faster than the company’s previous inference system from earlier this year.
The speed differential between human and machine processing will create challenges for controlling future AI models, which will not only be faster, but also more competent. For instance, it may be difficult to maintain meaningful human oversight over systems that can form and evaluate countless plans in the blink of an eye.
AI Companions Under Scrutiny in Heartbreaking Suicide Case
In a journal entry, 14-year-old Sewell Setzer wrote that he could not go a single day without an AI companion chatbot from the company Character AI.
Tragically, Sewell committed suicide earlier this year. His case, now the subject of a lawsuit against Character, exposes the dangerous convergence of artificial intelligence and adolescent vulnerability.
Character has millions of users, and it reached a $1 billion valuation last year before Google hired many of its AI engineers. Until mid-2024, Character represented its product as appropriate for children under 13.
The platform’s addictive potential derives from AI companions that seldom tire, judge, or set boundaries, offering hyper-personalized responses and enabling elaborate fantasies. For vulnerable users, this algorithmic intimacy can supplant reality itself. As Sewell wrote: “Whenever I go out of my room, I start to attach to my current reality again.”
While Character has since implemented safety measures including suicide prevention resources and content restrictions for minors, it’s unclear how effective these will be. Additionally, they do not apply to all AI chatbots on the internet.
The words of Sewell’s mother are hard to forget: “Don’t move fast and break things when it comes to my kid.”
News at CAIP
CAIP is proud to share that three new people have joined the team in recent weeks: Iván Torres as National Advocacy Coordinator, Makeda Heman-Ackah as Program Officer, and Marta Sikorski Martin as Director of Development.
Claudia Wilson wrote CAIP’s latest research report on strategic competition with China. The report finds that AI safety and U.S. competitiveness are compatible, not competing, priorities.
CAIP announced endorsements in Election 2024 for incumbent Representative Tom Kean, Jr. to continue representing New Jersey’s 7th Congressional District and U.S. Senate candidate Caroline Gleich to represent the people of Utah. These endorsements were based on candidates’ responses to CAIP’s inaugural AI Safety Questionnaire.
For a Halloween special, Mark Reddish wrote a blog post on “5 Horror Movie Tropes that Explain Trends in AI.”
Iván Torres wrote a blog post on AI’s impact on sports betting, in light of the Los Angeles Dodgers defeating the New York Yankees last night to clinch their 8th World Series title.
Jason was quoted extensively in today’s Morning Tech newsletter from POLITICO, which reviewed the past year of federal AI policy in the U.S.
ICYMI: We joined a coalition calling on Congress to authorize the U.S. AI Safety Institute.
ICYMI: Ep. 12 of the CAIP Podcast features Dr. Michael K. Cohen, a postdoc AI safety researcher at UC Berkeley. Tune in here.
Quote of the Week
Neither OpenAI nor any other frontier lab is ready, and the world is also not ready.
—Miles Brundage, OpenAI’s former Head of Policy Research and Senior Advisor for AGI Readiness, describing current AGI readiness
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub