AI Policy Weekly #9
Europe's AI Act, Hickenlooper's AI framework, and a huge deepfake heist in Hong Kong
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
EU Overcomes Tensions, Unveils AI Act Text
The EU cleared another milestone in the passage of its AI Act, which now appears likely to become law without significant alterations.
The EU reached a provisional agreement in early December, moving the text into legal-linguistic finalization, a stage that typically signals near-completion. Even so, several members of the Committee of the Permanent Representatives of the Governments of the Member States to the European Union (COREPER) initially said they could not commit to supporting the Act without reviewing further details.
Some of those same countries had caused drama in November by unexpectedly objecting to regulating advanced AI models like ChatGPT, partly due to lobbying from big AI companies, one of which came under scrutiny for conflicts of interest. The objections were strong enough that European Parliament representatives stormed out of a formal meeting two hours early.
Despite these warning signs, COREPER representatives from all 27 EU member states unanimously approved the Act.
The meeting webpage published the revised and potentially final AI Act text. And despite the earlier disputes, the text maintains numerous regulations that target advanced general-purpose AI models.
First, the Act establishes a tier system that applies more stringent regulations to general-purpose models that pose systemic risk. If a general-purpose AI model is trained using more than ten septillion (10^25) mathematical operations, it automatically qualifies as posing systemic risk.
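To make the threshold concrete, here is a minimal sketch of how one might estimate whether a model crosses the 10^25 mark, using the common rule of thumb that training compute is roughly 6 × parameters × training tokens. The heuristic and the example model sizes are illustrative assumptions on our part, not language from the Act.

```python
# Minimal sketch: estimating whether a model's training compute crosses the
# EU AI Act's 10^25-operation threshold for presumed systemic risk.
# The 6 * N * D approximation (compute ~ 6 x parameters x training tokens)
# is a widely used heuristic, not language from the Act; the example
# models below are hypothetical.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # operations, per the Act's tier system


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute with the common 6*N*D heuristic."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's 10^25 threshold."""
    return estimated_training_ops(parameters, training_tokens) > EU_SYSTEMIC_RISK_THRESHOLD


# Hypothetical examples (not real models):
for name, n_params, n_tokens in [
    ("smaller model", 7e9, 2e12),      # 7B params, 2T tokens  -> ~8.4e22 ops
    ("frontier model", 1e12, 15e12),   # 1T params, 15T tokens -> ~9.0e25 ops
]:
    ops = estimated_training_ops(n_params, n_tokens)
    print(f"{name}: ~{ops:.1e} ops, presumed systemic risk: "
          f"{presumed_systemic_risk(n_params, n_tokens)}")
```

Under this rough heuristic, only the most compute-intensive frontier training runs clear the bar, which is consistent with the tier's focus on the largest general-purpose models.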
The providers of such models must experimentally evaluate their systems to identify and mitigate systemic risks. They also need to ensure cybersecurity protections and report serious incidents. The Center for AI Policy is pleased to see these requirements outlined in Article 52d.
A scientific panel of independent experts will support the implementation and enforcement of the rules regarding general-purpose AI systems. One of its responsibilities will be monitoring for systemic risks.
If the panel identifies possible systemic risks, it will alert the European AI Office, a new government body that was recently established within the European Commission. In response to the panel’s alert, the AI Office has the power to conduct evaluations of the AI model that is causing concern.
Hickenlooper Announces “Trust But Verify” AI Framework
Senator John Hickenlooper (D-CO) outlined his new AI regulatory framework in a keynote speech at the Silicon Flatirons 2024 Flagship Conference.
Here are some of the principles and policies that Hickenlooper called for:
Be transparent about the data used in AI training, including personal data.
Notify consumers when AI is involved in news generation and hiring processes.
Educate consumers and workers on AI technologies.
Enact a comprehensive national data privacy law.
Develop international agreements and technical standards for global AI governance.
Attach clear, conspicuous labels on AI-generated images.
Require genuine consent from users before collecting their personal data to build AI products.
Hickenlooper also suggested developing criteria and programs for certifying independent auditors. These accredited third-party auditors could then be responsible for verifying AI models' compliance with regulatory requirements.
He was clear that independent oversight will eventually be necessary, arguing that “in the long-term we cannot rely on self-reporting alone from AI companies on compliance.”
Towards the end of his speech, Hickenlooper emphasized the urgency and importance of establishing guardrails.
“If we miss this opportunity the consequences will shape generations to come,” he said, warning of the AI industry’s efforts to build “Artificial General Intelligence” without any regulations or accountability.
Deepfake Fraud Drains Millions From Hong Kong Company
Criminals stole over $25M in what was perhaps the largest deepfake swindle to date. The scammers targeted a single employee in the Hong Kong branch of a multinational corporation.
The unlucky employee received a message claiming to be from one of the company’s senior financial officials in the UK. The message described the need to execute a confidential transaction, and the employee was understandably skeptical.
Shortly after, the employee joined video conference calls with what appeared to be colleagues and senior officers in the company. In reality, they were AI clones.
The targeted employee then “made fifteen transactions as instructed” by the deepfake conference attendees, according to a Hong Kong police official. And in that manner, $25M went out the window.
This serious incident resembles a smaller-scale attack in 2019, when fraudsters used AI to mimic a chief executive’s voice and trick a company leader into transferring $243,000. As AI fakery grows more convincing, incidents like these could grow in prevalence.
News at CAIP
Save the date: we’re hosting a panel discussion in the Rayburn House Office Building (Rayburn 2168) this coming Tuesday, February 13th at 1:30pm. The panel will focus on AI’s impact on elections.
In response to the National Institute of Standards and Technology (NIST) Request for Information (RFI) on NIST’s responsibilities under the AI Executive Order, we submitted comments describing a novel framework for classifying AI systems according to their capabilities and risks.
We’re rolling out a daily newsletter, AI Policy Daily, which curates news and insights on AI for policy pros. Read today’s issue here.
Quote of the Week
When you have a few notable leaders in this industry predicting human-level AI in as little as five years, we should recognize that where we currently are doesn’t necessarily tell us very much about where we might be going in terms of future AI and bioweapon risk.
—Gregory Allen, Director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), in a comment on OpenAI’s finding that GPT-4 provides an insignificant improvement over the internet for bioweapon creation
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub