AI Policy Weekly #48
Dublin phantom parade, Physical Intelligence (Pi), Anthropic’s case for targeted regulation
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
AI-Generated Article Draws Thousands to Fake Halloween Parade
In Ireland, thousands of people recently flocked to Dublin’s city center to attend what they expected to be a magnificent Halloween parade.
Instead, they got nothing.
They were misinformed by an event-sharing website called MySpiritHalloween.com, which had published a popular, AI-generated article about the parade. WIRED reports that the article “promised ‘spectacular floats to thrilling street performances’ and described the route in detail.”
“We asked ChatGPT to write the article for us,” said the website’s owner, Nazir Ali. “You can easily write a whole website with AI.”
More generally, Ali told the New York Times that “much of the [website’s] information is produced by AI.” This allowed him to promote over 1,400 events in just a few months.
Alongside the AI-generated content, Ali’s team of remote (human) contractors worked hard to optimize the site for Google search, boosting it to the top of the first page of results. They made money by running ads.
On the morning of Halloween, their AI-powered site claimed that the Galway-based theater company Mácnas would host a parade in Dublin on October 31st at 7pm, as Mácnas has in the past. TikTok and Facebook users helped spread the news.
The only problem was that the event was fake. A MySpiritHalloween employee had mistakenly reused the date of a past parade, and the website team failed to catch the error in time.
That night, a dense crowd assembled in anticipation of festivities, causing disruptions to public transit. Irish state police posted on social media and visited the packed street, asking gatherers to disperse. Meanwhile, Mácnas performed a real parade for happy viewers in Galway.
Irish filmmaker Bertie Brosnan captured footage of the spectacle. “This is actually getting a little bit dystopian,” he said while recording. “To see how easily people are manipulated.”
Bezos’ Robotics Investments Help AI Get Physical
Physical Intelligence (Pi) is a new robotics startup that launched on March 12th, 2024. (Tragically, this is not quite Pi Day, the March 14th celebration of the number π.)
Last week, Pi made its first major announcement: π0, a new AI model that can operate various kinds of robotic equipment (e.g., a pair of arms) to accomplish simple tasks like bussing tables, assembling boxes, and, perhaps most excitingly, folding laundry.
The researchers write that “while recent work has illustrated a number of more complex and dexterous behaviors, such as tying shoelaces or cooking shrimp,” their software can “learn very long tasks, sometimes tens of minutes in length.”
In the longer term, Pi wants to build general-purpose AI models that can “control any robot to perform any task.”
Pi has ample cash to pursue this mission after completing a $400 million fundraising round led by Thrive Capital, Lux Capital, and Amazon chairman Jeff Bezos.
Bezos has joined investment rounds in several other promising robotics startups:
Figure AI aims to “deploy autonomous humanoid workers,” raising $675 million in February.
Similarly, Skild AI wants to build “the first truly intelligent embodied system,” raising $300 million in July.
Swiss-Mile is commercializing a wheeled-legged robot dog, raising $22 million in August.
Bezos’ company Amazon is also investing substantially in this area through its $1 billion venture program, supporting and partnering with companies like Agility Robotics. Moreover, Amazon recently hired many critical employees from the robotics startup Covariant, which had previously raised over $200 million.
While some of these investments point to future possibilities, increasingly independent AI systems are already entering the physical world. For instance, a robot dog from Boston Dynamics is part of the maintenance team at a Michelin tire facility in South Carolina. And the autonomous vehicle company Waymo just raised $5.6 billion as it continues to serve over 100,000 paid robotaxi rides every week in major U.S. cities.
AI is not confined to some faraway cloud—it’s got arms, legs, and a hefty wallet.
Anthropic Calls for Targeted AI Regulations to Prevent Catastrophic Risks
The AI company Anthropic has approximately 1,000 employees. Despite being only three years old and a much smaller business than competitors like Google, Meta, and Microsoft, its “Claude” AI models are among the best general-purpose chatbots in the world.
Last week, Anthropic published an article in support of “targeted regulation,” stating that “surgical, careful regulation will soon be needed.”
The company warns that its AI systems are advancing rapidly in sensitive domains like cyber, biological, and nuclear threats. It wants AI companies to develop preparedness plans for these potentially catastrophic risks, with safety and security measures that scale as AI systems grow more capable.
In terms of specifics, Anthropic advocates for requiring relevant companies to (1) develop and publish these plans, and (2) conduct and publish risk evaluations. Additionally, there should be mechanisms to incentivize truly effective plans and verify the accuracy of public statements, but Anthropic is unsure exactly what those mechanisms should be.
The Center for AI Policy is pleased to see an AI industry leader calling for basic safety requirements. We strongly agree with Anthropic that “governments should urgently take action on AI policy in the next eighteen months.”
News at CAIP
Jason Green-Lowe wrote a memo anticipating AI policy in the upcoming administration: “The AI Safety Landscape Under a New Donald Trump Administration.”
Mark Reddish wrote a blog post calling for Congress to authorize the U.S. AI Safety Institute.
Claudia Wilson led CAIP’s response to the National Telecommunications and Information Administration (NTIA)’s request for comments on “Bolstering Data Center Growth, Resilience, and Security.”
Jason Green-Lowe gave a talk titled “An Overview of Federal Legislative Efforts in AI Policy” at Harvard’s Berkman Klein Center for Internet and Society. Stay tuned for slides.
The San Francisco Chronicle quoted CAIP in an article titled “Bonta proposes warning labels on social media sites but says it’s still too early for AI.”
Decrypt quoted CAIP in an article titled “AI Advocates See Trump Re-election as Industry Boost but Urge Caution on Policy.”
Quote of the Week
You’ll be living in a world that you didn’t agree to, didn’t vote for, that you are co-inhabiting with a superintelligent alien species that answers to the goals and rules of a corporation.
—James Cameron, the filmmaker behind Avatar, Titanic, and The Terminator, describing his concerns about artificial general intelligence (AGI)
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub