Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Waymo Expands Its Robotaxi Service, Reports Strong Safety Stats
“Autonomous vehicle fleets will quickly become widespread and will account for the majority of Lyft rides within 5 years.”
That’s a quote from Lyft co-founder John Zimmer. He was writing in September 2016, eight years ago.
Although Zimmer’s prediction ended up being quite inaccurate, autonomous vehicles are making impressive progress.
For example, the New York Times reports that Waymo’s self-driving taxis are “now completing over 100,000 rides each week in San Francisco, Phoenix and Los Angeles.” That’s up from 50,000 in June, and 10,000 in August of 2023.
In other words, weekly robotaxi rides have grown tenfold in just over a year.
Services will continue to expand. Earlier this year, Waymo began testing its vehicles in Washington, DC and Austin, TX, with more vehicle deployments planned for early 2025. Additionally, Alphabet, the parent company of both Waymo and Google, plans to invest $5 billion in Waymo over the next few years.
However, profitability remains a challenge. Waymo’s autonomous vehicles can cost up to $100,000 apiece, which would be enough to pay a full-time Uber driver for at least two years. Furthermore, modern self-driving taxis still require significant guidance from remote human employees to navigate unforeseen obstacles.
The Waymo rider experience can be a mixed bag. Passengers benefit from standard taxi fares, zero tipping, and freedom from driver-related concerns. A more ambiguous effect is the absence of small talk and human connection. Clearer drawbacks include longer trip times, privacy concerns related to interior cameras, and unwanted attention from curious onlookers.
Perhaps the biggest source of passenger discomfort comes when a robotaxi pauses to process an unexpected roadway challenge. That hesitation can put riders on edge.
Despite these tense moments, Waymo’s latest safety data suggests that its robotaxis are far safer than human drivers.
The reporter Timothy B. Lee scrutinized Waymo’s safety statistics so thoroughly that he identified a few errors, but the corrected numbers still look strong. Waymo vehicles trounce human drivers in safety, with injury-causing crashes occurring less than one-third as often, and airbag-triggering crashes occurring about one-sixth as often. When serious crashes do occur, Lee finds that they are often the fault of reckless human drivers:
“Out of the 23 most serious Waymo crashes, 16 involved a human driver rear-ending a Waymo. Three others involved a human-driven car running a red light before hitting a Waymo. There were no serious crashes where a Waymo ran a red light, rear-ended another car, or engaged in other clear-cut misbehavior.”
Part of the reason that autonomous vehicles are so safe is that they have been subject to legislation that promotes safety, reliability, and testing.
In contrast, very little safety legislation exists for more general-purpose AI systems like ChatGPT, even though these models are already causing problems like financial scams, malicious cyberattacks, and nonconsensual deepfakes.
Critically, AI progress is poised to continue, and upcoming general-purpose models will bring new challenges like biological misuse, persuasive deception, and autonomous decision-making. These threats could cause much more damage than robotaxi accidents.
As AI advances, safety strategies must evolve beyond the road.
House Science Committee Advances AI Safety Legislation
In late July, the U.S. Senate Commerce Committee held a markup meeting and approved numerous AI bills on a bipartisan basis.
Similarly, the House Committee on Science, Space, and Technology held its own AI markup this Wednesday, where it approved nine different AI bills.
The Center for AI Policy (CAIP) expressed approval for many of these bills, including strong endorsement for the Nucleic Acid Screening for Biosecurity Act, which would support safety practices in synthetic biology. These safeguards are critical given AI’s potential to amplify biological threats.
Another notable bill that CAIP strongly endorses is the AI Advancement and Reliability Act, which would establish a new Center for AI Advancement and Reliability within the National Institute of Standards and Technology (NIST). Essentially, this Center would serve as a formal successor to the U.S. AI Safety Institute.
The Center’s purpose is to advance “measurement science for artificial intelligence reliability, robustness, resilience, security, and safety.” Another goal is to support the government’s understanding of these topics, while collaborating with external stakeholders to design voluntary best practices for evaluating AI systems’ safety properties.
More specifically, the Center would research AI safety-related risks “across several timescales,” with a focus on misuse risks. It would also “assess scenarios in which artificial intelligence systems could be deployed to create risks for economic or national security.”
Importantly, the AI Advancement and Reliability Act authorizes $10 million in funding for the Center in fiscal year 2025.
Although $10 million may seem large, it pales in comparison to the more than $120 million the United Kingdom invested in its own AI safety institute last year. Representative Haley Stevens pointed this out, arguing that “we’re missing the mark.”
“The next big breakthrough on AI standard setting [...] could easily just shift abroad if we’re not investing. The rules of the road won’t be set here.”
CAIP supports this call for greater investment in America’s AI safety institute. But we also believe that voluntary guidelines alone are insufficient—Congress must go further by converting common-sense safety recommendations into safety requirements.
BIS Outlines AI Reporting Requirements
This week, the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) issued a Notice of Proposed Rulemaking to establish reporting requirements for advanced AI models and computing clusters.
The rule is based on the AI Executive Order (E.O.) from last fall, which directed the Commerce Department to collect reports and specified that the department should invoke its powers under the Defense Production Act to do so.
BIS collected an initial round of reports in late January from companies planning to build “dual-use foundation models.” The E.O. and the proposed rule define such models as large, data-intensive, general-purpose AI systems with capabilities that “pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”
BIS’s proposed rule applies to entities that develop an AI system with over 10^26 mathematical operations during training, a feat that requires heavy-duty supercomputers. Based on public reporting, no AI model has ever achieved this threshold.
Similarly, companies with sufficiently massive computing clusters are also subject to the rule. The specific threshold is 10^20 operations per second, enough to conduct 10^26 total operations within two weeks.
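For readers who want to see how the two thresholds line up, here is a minimal back-of-the-envelope check in Python (the variable names are ours for illustration, not terms from the proposed rule; the figures are the thresholds cited above):

```python
# Back-of-the-envelope check: a cluster running at the proposed per-second
# threshold reaches the proposed training-compute threshold in about two weeks.
CLUSTER_OPS_PER_SECOND = 1e20   # proposed computing-cluster threshold (ops/sec)
TRAINING_OPS_THRESHOLD = 1e26   # proposed training-compute threshold (total ops)

seconds_in_two_weeks = 14 * 24 * 60 * 60          # 1,209,600 seconds
total_ops = CLUSTER_OPS_PER_SECOND * seconds_in_two_weeks

print(f"Total operations in two weeks: {total_ops:.2e}")
print(f"Exceeds training threshold: {total_ops >= TRAINING_OPS_THRESHOLD}")
```

Running this prints roughly 1.21e+26, which is why a cluster at the per-second threshold crosses the training-compute threshold after about two weeks of continuous operation.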
If a company plans to surpass either threshold in the next six months, then it must notify BIS of its plans. In response, BIS will request information about the company’s cybersecurity, testing, safety, and reliability practices affecting dual-use foundation models.
This is the first time the U.S. government has proposed regulations targeting dual-use foundation models.
The Center for AI Policy supports these reporting requirements and urges Congress to explicitly authorize them. Leading AI developers plan to build stronger foundation models with capabilities that would pose catastrophic national security risks, while complex safety and security challenges remain unsolved. This unprecedented situation warrants careful, vigilant oversight.
Job Openings
The Center for AI Policy (CAIP) is hiring for three new roles. We’re looking for:
an entry-level Policy Analyst with demonstrated interest in thinking and writing about the alignment problem in artificial intelligence,
a passionate and effective National Field Coordinator who can build grassroots political support for AI safety legislation, and
a Director of Development who can lay the groundwork for financial sustainability for the organization in years to come.
News at CAIP
Tristan Williams published a research report on AI and education. The report offers an in-depth look at how AI is transforming classrooms, while outlining priorities for policymakers.
On Tuesday, September 10th, 2024, the Center for AI Policy hosted a panel discussion in the U.S. Capitol Visitor Center titled “Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies.” A video recording is coming soon to the CAIP YouTube channel, adding to our events playlist.
We released an AI Policy Scorecard for the 2024 presidential and Senate campaigns, which lets voters know which candidates have an AI position on their website. AI is an important issue, and the public deserves to know where the candidates stand. Read the scorecard here.
We published a blog post expressing disappointment that election campaigns are ignoring Americans’ concerns on AI. CAIP urges the presidential and Senate campaigns to promptly update their websites with clear AI policy positions so that voters better understand where they stand.
Ep. 11 of the CAIP Podcast features Ellen P. Goodman, a distinguished professor of law at Rutgers Law School. Jakub and Ellen discuss federal AI policy efforts, the NTIA’s AI accountability report, watermarking and data provenance, AI-generated content, risk-based regulation, and more. To tune in, read the transcript, or peruse relevant links, visit the CAIP Podcast Substack.
Mark Reddish published a blog post about the proposed PREPARED for AI Act and cybersecurity NDAA amendment: “Two Easy Ways for the Returning Senate to Make AI Safer.”
Jason Green-Lowe released a memo before the Harris-Trump debate, including three questions for the moderators to ask regarding AI.
Jason Green-Lowe published a blog post after the Harris-Trump debate: “Presidential Candidates Disappointingly Quiet on AI.”
We released a blog post following the House SS&T markup meeting that advanced several useful AI bills. CAIP calls on House leadership to promptly bring these bills to the floor for a vote so that they can be signed into law this year.
Quote of the Week
Artificial general intelligence. When I started looking at AI from a policy standpoint in DC in February of 2018, AGI was something that people whispered about in dark rooms, and those who did out loud were considered quacks and charlatans. So to hear industry leaders say they think it is actually around the corner absolutely surprised me. I didn’t think we were there. I was definitely not an AI doomer, but at the same time, I didn’t think it was going to be our new reality.
Talking to founders over the course of all these years, seeing them go on to build these AI companies and then report back, has opened my eyes. Talking to the people who are doing the building changed my mind on this.
—Kara Frederick, Director of the Heritage Foundation’s Tech Policy Center, answering the question “What has surprised you the most this year?” in an interview with POLITICO
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub