AI Policy Weekly #34
Congress advances AI bills, Schiffmann sells Friends, and AI earns silver in the IMO
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Congress Acts to Advance Responsible AI Legislation
AI was a priority item on Capitol Hill this week.
To start, the Senate passed the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act with unanimous approval.
The bill would allow individuals to sue in federal court if their likeness is used without permission in AI-generated intimate images.
“These fake explicit images can ruin a person’s life, especially if you are a child or a teenager,” said Senate Majority Leader Chuck Schumer (D-NY) in a speech on the Senate floor. “I’m very hopeful that the House will pass this bill quickly.”
Separately, the Senate Commerce Committee held a markup and approved numerous AI bills, including several that the Center for AI Policy (CAIP) supports for their role in promoting AI safety.
First, the Future of AI Innovation Act passed out of committee. The bill would formally authorize the US AI Safety Institute, create AI evaluation programs, develop international coalitions to support AI standards, and more.
Second, Senators Todd Young (R-IN) and John Hickenlooper (D-CO) added an amendment that would establish a Foundation for Standards and Metrology at NIST, akin to the EPIC Act from Representatives Jay Obernolte (R-CA) and Haley Stevens (D-MI) in the House.
Third, the Validation and Evaluation for Trustworthy (VET) AI Act would promote the development of technical guidelines for conducting in-house and third-party evaluations of AI models to mitigate harms, protect privacy, and more.
Fourth, the CREATE AI Act would authorize the National AI Research Resource (NAIRR), which would supply sorely needed computing resources to researchers conducting AI evaluations and studying trustworthy AI.
Fifth, the amended TEST AI Act would establish AI testing facilities for identifying vulnerabilities in AI systems that could impact critical infrastructure or national security.
CAIP is also pleased to see the AI Research, Innovation, and Accountability Act advance out of committee. The bill would require companies to self-certify their compliance with best practices. While CAIP has some concerns about who would check these self-certifications, the bill is a step in the right direction.
Not to be left out, the House is also taking bipartisan action on AI.
Representatives Frank Lucas (R-OK) and Zoe Lofgren (D-CA), leaders of the House Committee on Science, Space, and Technology, introduced the Workforce for AI Trust Act.
The bill would expand the NSF’s AI-related fellowship programs and direct NIST to support AI workforce development, with a focus on fostering interdisciplinary teams to advance trustworthy AI systems and AI risk management.
The Center for AI Policy commends Congress for advancing responsible AI legislation and urges continued action to pass these bipartisan bills.
$99 AI Friends Enter the Market
Avi Schiffmann, a Harvard dropout and AI entrepreneur, intends to “conquer” the market for wearable AI products.
Schiffmann recently unveiled an AI-powered pendant called “Friend,” designed to serve as a supportive AI companion for its users.
The friendly necklace operates in a short sequence of steps: the user pushes a button and speaks to the device, an AI model processes the audio and generates a response, and the reply goes straight to the user’s phone.
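To make that flow concrete, here is a minimal sketch of the button-to-phone pipeline in Python. Every function below is a hypothetical stub invented for illustration; none of it reflects Friend’s actual software or API.

```python
# Hypothetical sketch of a Friend-style pendant's loop.
# All functions are stand-in stubs for illustration only;
# this is not Friend's actual software.

def record_audio() -> bytes:
    """Stand-in for capturing speech after the button is pressed."""
    return b"<raw audio bytes>"

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text model."""
    return "I had a long day at work."

def generate_reply(text: str) -> str:
    """Stand-in for the companion AI model generating a response."""
    return "That sounds exhausting. Want to talk about it?"

def send_to_phone(reply: str) -> None:
    """Stand-in for pushing the reply to the paired phone app."""
    print(f"[phone notification] {reply}")

def on_button_press() -> None:
    # The pipeline described above: button -> audio -> AI -> phone.
    audio = record_audio()
    text = transcribe(audio)
    reply = generate_reply(text)
    send_to_phone(reply)

if __name__ == "__main__":
    on_button_press()
```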
Anyone can pre-order a Friend for $99 on friend.com.
The first Friends will ship in early 2025.
Whether this particular wearable succeeds or not, AI voice technology is continuing to improve, and these sorts of digital companions will only grow more persuasive, capable, and lifelike.
For example, OpenAI just released a stronger AI voice assistant to a select group of users.
According to OpenAI, “advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”
Ethan Mollick, Co-Director of the Generative AI Lab at the University of Pennsylvania, tested the voice and observed that “It is super weird. Lots of unconscious cues make it feel like talking to a person.”
These developments in AI companions and voice assistants are rapidly propelling us towards a world that once seemed confined to science fiction and Black Mirror. They underscore the critical need for Congress to pay careful attention to AI, ensuring appropriate regulations and safeguards are in place as we navigate these uncharted waters.
AI Earns Silver in International Math Olympiad
Two AI systems developed by Google DeepMind have collectively solved four out of six problems from this year’s International Mathematical Olympiad (IMO), matching the performance of a silver medalist.
“The IMO is the oldest, largest and most prestigious competition for young mathematicians, held annually since 1959,” explained DeepMind. “Each year, elite pre-college mathematicians train, sometimes for thousands of hours, to solve six exceptionally difficult problems.”
DeepMind’s combined AI system scored just one point shy of earning a gold medal, well ahead of many human competitors.
While the AI still required human assistance in translating problems into formal mathematical language, the breakthrough demonstrates rapid improvements in AI capabilities.
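For readers curious what “formal mathematical language” looks like, here is a toy example in Lean, the proof-assistant language DeepMind’s systems worked in. The statement below (adding zero to a natural number changes nothing) is deliberately trivial; real IMO problems demand far more intricate formalizations.

```lean
-- A toy formal statement and machine-checked proof in Lean 4.
-- Informal claim: "for every natural number n, n + 0 = n."
theorem add_zero_toy (n : Nat) : n + 0 = n := by
  rfl  -- holds by definition of addition on natural numbers
```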
These improvements are likely to continue. Jack Clark, co-founder of Anthropic, predicts that “within two years (by July 2026) we'll see an AI system beat all humans at the IMO, obtaining the top score.”
Clark also believes that “AI may successfully automate large chunks of scientific research before the end of the decade.”
If Clark is right, then only a few years remain before AI develops remarkably powerful—and dual-use—capabilities. This underscores the need for Congress to begin preparing for advanced AI with policies like CAIP’s 2024 Action Plan.
Job Openings
The Center for AI Policy (CAIP) is hiring for three new roles. We’re looking for:
an entry-level Policy Analyst with demonstrated interest in thinking and writing about the alignment problem in artificial intelligence,
a passionate and effective National Field Coordinator who can build grassroots political support for AI safety legislation, and
a Director of Development who can lay the groundwork for the organization’s long-term financial sustainability.
News at CAIP
We’re pleased to announce the newest member of the CAIP team: Mark Reddish has joined as Director of External Affairs. Mark is an attorney with more than a decade of experience in advocacy and policy development related to telecommunications and public safety.
We hosted a webinar titled “Autonomous Weapons and Human Control: Shaping AI Policy for a Secure Future,” featuring Professor Stuart Russell and Mark Beall. If you missed the event, you can watch a video recording here.
Tristan Williams authored a research report on autonomous weapons, analyzing current capabilities, emerging threats, and essential policy considerations. Read the full report here.
Jason Green-Lowe wrote a blog post about Meta’s limited safety testing of Llama 3.1.
Claudia Wilson wrote a blog post about the DEFIANCE Act of 2024, which the Senate recently passed. More remains to be done to protect victims of AI-powered sexual exploitation.
Brian Waldrip wrote a blog post titled “Cybersecurity Is Critical to Preserve American Leadership in AI.” CAIP proposes that AI companies report their cybersecurity protocols against a set of key metrics.
Jason Green-Lowe wrote a blog post assessing Amazon’s call for alignment on “global responsible AI policies.”
We joined over 45 prominent organizations in a call for Congress to authorize the US AI Safety Institute.
We issued a press statement on the Senate Commerce Committee’s approval of bipartisan legislation promoting responsible AI.
We endorsed the new Workforce for AI Trust Act, introduced by Ranking Member Zoe Lofgren (D-CA) and Chairman Frank Lucas (R-OK) of the House Committee on Science, Space, and Technology.
Quote of the Week
We’re already seeing sort of leaps and bounds in AI innovation with new capabilities, especially in the multimodal space, models that are increasingly scoring higher and higher on different scientific benchmarks used to assess capability. And we know that that pace of innovation is likely to continue. [...]
As we’re thinking about risks to national security and public safety, this is a pretty core government function. And we at the US AI Safety Institute are really the tip of the spear in terms of marshaling that expertise across government and bringing it to bear with the companies.
The last point I’d make is that because this is moving so quickly you’re seeing different standards and best practices coming out of different labs, different research institutions, different nonprofits. There really needs to be a sort of best-in-class gold standard.
And we really need to keep raising the bar so that the pace of AI safety research and guidance is keeping pace with the development that we’re seeing in AI.
—Elizabeth Kelly, Director of the US AI Safety Institute, discussing the need for the US government to build in-house AI safety capacity
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub