Welcome to 3-Shot Learning, a weekly newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
Japan Announces AI Safety Institute and Proposes Guidelines
At the seventh meeting of Japan’s AI Strategy Council last month, Prime Minister Fumio Kishida announced plans to launch an “AI safety institute,” akin to emerging government bodies in the UK and US. Kishida said the institute will create AI standards and research safety evaluations, among other things. But the institute’s precise goals remain unclear, and the corresponding government website tells readers to wait for further information in January.
The meeting materials included a summary of the Hiroshima AI Process, an international collaboration between G7 countries (which met in Hiroshima last year for the 49th G7 Summit) to discuss AI governance. The key result from this collaboration is the Code of Conduct, which provides practical recommendations for organizations to consider when building and releasing AI systems. For instance, G7 leaders recommend evaluating advanced AI systems during development for a range of risks, including risks from AI models “making copies of themselves or ‘self-replicating’ or training other models.”
The meeting also covered a draft set of guidelines for all entities using AI in business activities. The 192-page document first lays a foundation of general principles on ten topics: human-centeredness, safety, fairness, privacy, security, transparency, accountability, education, competition, and innovation. Businesses utilizing more advanced AI systems are further advised to adhere to the G7’s Code of Conduct. The document then provides more tailored suggestions for AI developers, AI providers, and AI users. For Japan, which has few existing regulations on AI, the document marks a significant step towards more meaningful governance.
Government Restrictions Curb ASML’s Lithography Exports
ASML, a Dutch company that leads the world in lithography machines, stated on Monday that the Dutch government has partially blocked ASML’s exports of certain lithography machines to China. The restricted machines are the NXT:2050i and the NXT:2100i, which are deep ultraviolet (DUV) immersion scanners that play a critical role in the semiconductor supply chain. ASML’s statement also describes conversations with the US government to clarify the implications of the latest US export controls. For context, the US and Dutch governments have collaborated repeatedly in recent years to limit Chinese access to advanced lithography equipment.
New York Times Sues OpenAI and Microsoft for Copyright Infringement
The New York Times (NYT) sued AI giants OpenAI and Microsoft last week, arguing that the companies’ chatbots violated copyright law. The detailed complaint, spanning nearly seventy pages, revolves around two main issues: the use of NYT articles for training AI systems, and the generation of AI-created text that closely resembles NYT content. While other lawsuits have raised comparable concerns about AI, the NYT case is notable as the first lawsuit against OpenAI brought by a sizable company.
News at CAIP
Marc Ross has joined the team as our Communications Director. Marc’s experience ranges from political campaigns and grassroots advocacy to public speaking and communications strategy.
We released a blog post arguing that general-purpose AI cannot be regulated solely at the application layer.
Quote of the Week
Incentives are superpowers; set them carefully.
—reinstated OpenAI CEO Sam Altman, in his recent personal blog post