AI Policy Weekly #11
Sora and Gemini 1.5 Pro, the Munich Tech Accord, and the House AI Task Force
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
AI Capabilities Surge Forward
On the morning of February 15, 2024, Google proudly announced its latest AI system, Gemini 1.5 Pro. Released just a couple of months after the already-groundbreaking Gemini 1.0, the model is notable for its ability to draw on long swaths of input data while producing an output.
In industry jargon, Gemini 1.5 Pro has an extraordinarily long “context window.” In practice, this means Gemini can ingest a boatload of text, audio, and video in a single prompt, then draw on all of that information within seconds to inform its response.
To grasp the full extent of this capability, consider some of the Gemini 1.5 tech demos. First, Google fed Gemini a 402-page transcript of the Apollo 11 moon landing; Gemini then answered questions about particular conversations and events within the document. Next, Google crammed in an entire forty-four-minute silent film; with this context, Gemini answered questions about specific plot points and minute details of the movie.
Perhaps the most impressive example involved a language you’ve probably never heard of: Kalamang. This endangered language had little, if any, presence in the AI’s training data, so Gemini unsurprisingly performed extremely poorly when first asked to translate between Kalamang and English.
But the long context window came to the rescue. Once Google stuffed hundreds of pages of Kalamang resources into a single prompt to Gemini, the system’s translation quality instantly rose to nearly the level of a human language learner.
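For readers who want a concrete picture of what a long context window looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK. The file name, prompt, and exact model identifier are illustrative stand-ins, not details from Google’s demos.

```python
# A minimal sketch of a long-context prompt, assuming the
# google-generativeai Python SDK. The transcript file and the
# question are hypothetical placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

# Model name is illustrative; the exact identifier may vary by release.
model = genai.GenerativeModel("gemini-1.5-pro")

# Load a very large document -- e.g., a 400-page transcript --
# and pass the entire text as part of a single prompt.
with open("apollo11_transcript.txt") as f:
    transcript = f.read()

response = model.generate_content(
    [transcript, "List three notable moments in this transcript."]
)
print(response.text)
```

The key point is that the whole document travels inside one prompt, so the model can answer questions about any part of it without a separate retrieval step.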
With these impressive demonstrations, Google’s thunder seemed impossible to steal. But just a few hours after the Gemini 1.5 announcement, OpenAI announced its own creation: Sora, an AI system that converts textual prompts into corresponding videos.
Suddenly, the world’s attention pivoted back to the makers of ChatGPT, for understandable reasons. Many people had last observed AI-generated video in early 2023, when a comically low-quality clip of a spaghetti-eating Will Smith went viral. But in February 2024, less than one year later, OpenAI’s Sora was generating high-quality footage of art gallery tours, Victoria crowned pigeons, and pirate ships battling inside a coffee cup.
These AI-generated videos are remarkably difficult to distinguish from reality. Indeed, when Will Smith posted a real video of himself eating spaghetti as a TikTok joke, the most popular commenter admitted they had initially mistaken the footage for a Sora creation.
The Center for AI Policy believes that when pondering AI’s future impacts, it is worth remembering how quickly the technology can improve.
AI Companies Promise They’ll Take Steps to Protect Elections
While the Sora demonstrations were capturing attention, senior figures from around the world gathered for the annual Munich Security Conference (MSC), one of the world’s leading forums on international security policy.
The AI highlight from this year’s MSC came when AI companies—including giants like OpenAI, Google, and Meta—signed a voluntary pledge to combat AI-fueled threats to elections worldwide. The agreement’s full title is “A Tech Accord to Combat Deceptive Use of AI in 2024 Elections.”
The signatories agreed to eight commitments, which vary significantly in specificity and significance. For example, one rather hollow commitment is “continuing to engage with a diverse set of global civil society organizations, academics, and other relevant subject matter experts through established channels or events.”
In contrast, a more tangible commitment is to deploy methods for detecting AI-generated content and tracking its provenance, such as SynthID and C2PA. Another concrete pledge is to support public education so that citizens are prepared to handle deceptive AI content.
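For the technically curious, the core idea behind provenance standards like C2PA can be sketched in a few lines: sign a cryptographic hash of the content when it is created, and check the content against that signed credential later. The toy example below is a deliberate simplification for illustration only, not the actual C2PA manifest format or SynthID’s watermarking technique, and every name in it is invented.

```python
# A toy illustration (not the real C2PA format) of the provenance idea:
# bind a signed statement to a hash of the content, so later tampering
# or a missing credential can be detected.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # stand-in for a real signing certificate

def attach_credential(content: bytes) -> str:
    """Create a provenance tag tied to this exact content."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{signature}"

def verify_credential(content: bytes, credential: str) -> bool:
    """Check that the content still matches its signed credential."""
    digest, signature = credential.split(":")
    if hashlib.sha256(content).hexdigest() != digest:
        return False  # content was altered after signing
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

video = b"...original video bytes..."
tag = attach_credential(video)
print(verify_credential(video, tag))            # True
print(verify_credential(video + b"edit", tag))  # False: tampering detected
```

Real provenance systems use public-key certificates rather than a shared secret, but the basic verify-against-a-signed-hash logic is the same.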
The Tech Accord is entirely symbolic, with no enforcement mechanism, but hopefully the companies will take it seriously: elections in more than 40 countries are at stake in 2024, and over four billion people in total are selecting their leaders this year.
House Launches Bipartisan AI Task Force
“Advancements in artificial intelligence have the potential to rapidly transform our economy and our society.”
Those were the words of House Speaker Mike Johnson (R-LA) as he announced a new Bipartisan Task Force on AI, in collaboration with House Minority Leader Hakeem Jeffries (D-NY). The Task Force includes twelve Republicans, led by Jay Obernolte of California, and twelve Democrats, led by Ted Lieu of California.
The full range of goals for the House Task Force remains unclear, but one definite focus will be producing a “comprehensive report” with guiding principles and policy proposals for AI. The report will likely cover AI-generated deepfakes and their electoral impacts, since that was a consensus concern among eight members who spoke with Time.
Until now, the House has been relatively inactive on coordinated AI efforts compared to the Senate. Last fall, a bipartisan Senate quartet solicited expert opinions on AI across nine separate “Insight Forums,” and that team went on to introduce federal AI legislation that passed as part of the FY2024 NDAA. Hopefully the new House Task Force can likewise lead Congress towards action on AI.
News at CAIP
Save the date: we’re hosting a happy hour for AI policy professionals at Sonoma on Wednesday, February 28, from 5:30 to 7:30pm, on the second floor. Anyone working on AI or related topics is welcome to join.
We submitted a comment to the National AI Advisory Committee (NAIAC) in response to its call for public feedback on supporting American workers amidst AI-driven economic change. Check it out here.
Jason Green-Lowe testified in support of HB1062, a Maryland state bill that criminalizes nonconsensual illicit deepfakes, at a meeting of the Maryland General Assembly’s Judiciary Committee. Watch a video of the testimony here (starting at 4:25:05), or read a text version here.
Ahead of the Munich Security Conference, we shared three important questions for leaders to discuss. You can read the questions—and our views on them—at this link.
We issued a statement on the creation of the House AI Task Force.
Quote of the Week
We’re investing heavily in that direction, and I imagine others are as well. [...]
Once we get agent-like systems working, AI will feel very different to current systems, which are basically passive Q&A systems, because they’ll suddenly become active learners. Of course, they’ll be more useful as well, because they’ll be able to do tasks for you, actually accomplish them. But we will have to be a lot more careful. [...]
Maybe it’s going to be a couple of years, maybe sooner. But it’s a different class of systems.
—Google DeepMind CEO Demis Hassabis, commenting on the competition between AI companies to build AI systems that use tools and act more autonomously
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub