Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy (CAIP). Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
U.S. Copyright Office Report Examines AI Training and Copyright Infringement
The United States Copyright Office (USCO) recently released a pre-publication draft of its report examining whether AI training on copyrighted materials constitutes copyright infringement. This question looms at the center of numerous ongoing lawsuits.
The USCO shared a preliminary draft ahead of the final document due to “congressional inquiries and expressions of interest from stakeholders.” While technically not the final version, the Office expects that the forthcoming official publication will contain “no substantive changes [...] in the analysis or conclusions.”
The report follows an August 2023 notice of inquiry on AI and copyright that received over 10,000 public comments.
This third installment completes the USCO’s three-part series on AI and copyright, following earlier reports on digital replicas and the copyrightability of AI-generated content.
The USCO identifies four stages of AI development that potentially implicate copyright: data collection and curation, model training, retrieval augmented generation (RAG), and model outputs.
The report’s lengthiest section analyzes whether these activities qualify for “fair use” protection under four factors from the Copyright Act of 1976. Here are some USCO findings for each factor:
Purpose and character of the use: The USCO believes “training a generative AI foundation model on a large and diverse dataset will often be transformative,” meaning it creates something with a different purpose or character than the original work.
Nature of the copyrighted work: “Where the works involved are more expressive, or previously unpublished, the second factor will disfavor fair use.”
Amount used: “Where there is a need to train on a large volume of works to effectively generalize, the copying of entire works may be reasonable.”
Market effect: “The copying involved in AI training threatens significant potential harm to the market for or value of copyrighted works.”
The USCO concludes that fair use determinations “will depend on what works were used, from what source, for what purpose, and with what controls on the outputs.”
However, “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”
The Computer & Communications Industry Association (CCIA) expressed concern about the report’s conclusions, noting that courts will ultimately decide these issues.
Shortly after the report’s release, the Trump administration dismissed Register of Copyrights Shira Perlmutter, who leads the USCO. It remains to be seen whether this leadership change will affect the direction of future copyright policy.

Senator Cotton Introduces the Chip Security Act
Senator Tom Cotton (R-AR) recently introduced the Chip Security Act, legislation designed to defend America’s advanced chip exports against diversion, theft, and unauthorized use, such as smuggling to China.
The bill would require the Commerce Department to mandate chip security mechanisms on advanced AI chips before they can be exported, with an initial focus on location verification mechanisms.
It has support from Representatives John Moolenaar (R-MI), Raja Krishnamoorthi (D-IL), Rick Crawford (R-AR), Bill Foster (D-IL), Josh Gottheimer (D-NJ), Bill Huizenga (R-MI), Darin LaHood (R-IL), and Ted Lieu (D-CA), who have introduced companion legislation in the House.
Within 180 days of enactment, the legislation would require location verification technology on advanced chips before their export or reexport to foreign destinations, or their in-country transfer within those destinations.
Chip policy expert Tim Fist says this timeframe is feasible for NVIDIA chips through “constraint-based geolocation” using internet pings from trustworthy servers. However, it could take additional time for the Commerce Department’s Bureau of Industry and Security (BIS) to make effective use of the resulting data for export control enforcement.
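To illustrate the general idea behind constraint-based geolocation, here is a minimal sketch: because network signals cannot travel faster than light through fiber, a ping’s round-trip time from a trusted reference server places an upper bound on how far away the chip can be, and bounds from multiple servers can be checked against a claimed location. The server coordinates, round-trip times, and signal-speed constant below are illustrative assumptions, not details from the bill or from Fist’s analysis.

```python
# Illustrative sketch of constraint-based geolocation using ping round-trip times.
# All locations, RTT values, and the signal-speed constant are hypothetical.

import math

# Light travels through fiber at roughly two-thirds of c, about 200 km per millisecond.
# A measured round-trip time therefore caps the chip's distance from each trusted server.
SIGNAL_SPEED_KM_PER_MS = 200.0

def max_distance_km(rtt_ms: float) -> float:
    """Upper bound on chip-to-server distance implied by a round-trip time."""
    return (rtt_ms / 2) * SIGNAL_SPEED_KM_PER_MS

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points on Earth, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def consistent_with_claim(claimed_lat, claimed_lon, measurements) -> bool:
    """Check whether a claimed chip location satisfies every RTT-derived distance bound."""
    for (srv_lat, srv_lon, rtt_ms) in measurements:
        if haversine_km(claimed_lat, claimed_lon, srv_lat, srv_lon) > max_distance_km(rtt_ms):
            return False  # claimed location is farther than the signal could have traveled
    return True

# Hypothetical example: servers in Ashburn, Frankfurt, and Tokyo ping a chip
# that claims to be in a licensed Oregon data center.
measurements = [
    (39.0, -77.5, 70.0),    # Ashburn, VA: 70 ms RTT -> within ~7,000 km
    (50.1, 8.7, 160.0),     # Frankfurt: 160 ms RTT -> within ~16,000 km
    (35.7, 139.7, 110.0),   # Tokyo: 110 ms RTT -> within ~11,000 km
]
print(consistent_with_claim(45.6, -121.2, measurements))  # True if all bounds are satisfied
```

In practice, tighter bounds come from many reference servers and repeated measurements, which is part of why BIS would need capacity to process and act on the resulting data.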
The bill also mandates that exporters “promptly report” to BIS if they discover that their chips have been diverted to unauthorized locations or users, or if tampering attempts have been made to disable security mechanisms.
Beyond immediate requirements, the Commerce Department would have one year to assess additional security mechanisms that could enhance export control compliance, in coordination with the Department of Defense.
If new mechanisms are deemed appropriate, the Secretary of Commerce would have two years to require them on covered products, while prioritizing confidentiality.
Beginning two years after enactment and continuing annually for three years, the Commerce Department would assess newly developed chip security mechanisms and report findings to Congress. These reports would include recommendations for potentially modifying export controls to allow more flexibility for advanced chips incorporating approved security mechanisms.
The Center for AI Policy supports the Chip Security Act. Tracking the location of advanced AI chips is good policy, and we’re pleased to see Senator Cotton and others pushing hard to make that happen. We note that if Congress sticks with the bill’s implementation timeframe, then BIS will need additional funding to effectively process the resulting data.

AI Governance Researchers Propose Framework for Third-Party Compliance Reviews of AI Risk Policies
Many AI companies have published if-then preparedness plans for how they will respond to emerging threats as their AI models’ capabilities improve and surpass specific thresholds. These plans go by many names, from “Preparedness Framework” (OpenAI) to “Responsible Scaling Policy” (Anthropic) to “Frontier AI Risk Assessment” (NVIDIA).
However, there are currently no established procedures for external stakeholders to verify that companies are actually following these plans.
Researchers from the Center for the Governance of AI, the Institute for Law & AI, METR, and elsewhere recently published a paper that could help close this gap. The title is “Third-Party Compliance Reviews For Frontier AI Safety Frameworks.”
The basic idea is that companies can voluntarily hire an external group to review their compliance with their public plans. Anthropic and G42 have already committed to such reviews, which can improve compliance and provide assurance for both external and internal stakeholders.
But the devil is in the details: there are many possible implementations, and some bring heightened challenges such as security risks, demands on staff time, and the risk of inaccurate findings.
The paper analyzes six key aspects of compliance reviews:
Who conducts the review: Options include Big Four accounting firms, AI evaluation firms, consulting firms, AI audit firms, security audit firms, law firms, or combinations of these.
Information sources: Reviewers could inspect public documents, organizational charts, internal reports, staff interviews, company codebases, and more.
Compliance measurement: Reviewers could assess compliance against preparedness plans’ original language verbatim, create measurable yes/no checkpoints, or design more granular maturity scales ranging from “initial” to “optimized.”
External disclosure: Afterwards, companies could disclose nothing, acknowledge a review occurred, publish summary reports, or share detailed results with follow-up actions.
Internal response: If a review finds noncompliance, the company would need to decide whether to continue business as usual, delay development until it is compliant in key areas, or delay development until it is compliant across all areas.
Review frequency: Reviews can be conducted ad-hoc, at regular intervals (e.g., annually), or triggered by specific events like model releases.
The researchers outline three potential concrete approaches with different settings for these factors, ranging from minimalist to comprehensive.
As frontier AI governance evolves, compliance reviews could become standard practice, helping provide meaningful verification of risk management efforts.

CAIP News
On Tuesday, May 20th, from 3:30pm to 4:30pm, the Center for AI Policy will host a panel discussion in the Rayburn House Office Building on the Progress and Policy Implications of Advanced Agentic AI. The panel will address AI agents’ current capabilities, supervision needs, potential impacts on American jobs, legal liability considerations, and more. RSVP here.
Joe Kwon published an article in Tech Policy Press titled “Democracy in the Dark: Why AI Transparency Matters.”
ICYMI: Mark Reddish published an article on Firehouse.com titled “Building Resilience to AI’s Disruptions to Emergency Response.”
ICYMI: We released new model legislation, the Responsible AI Act of 2025. Read a press release here, a two-page executive summary here, a section-by-section explainer here, the full model legislation here, and a policy brief explaining our reasoning here.
From the archives… Last fall, Claudia Wilson wrote a blog post about the societal impacts of AI companions and grief bots: “AI Companions: Too Close for Comfort?”
Quote of the Week
Today, one of the most important challenges is to promote communication that can bring us out of the “Tower of Babel” in which we sometimes find ourselves, out of the confusion of loveless languages that are often ideological or partisan. [...] In looking at how technology is developing, this mission becomes ever more necessary. I am thinking in particular of artificial intelligence, with its immense potential, which nevertheless requires responsibility and discernment in order to ensure that it can be used for the good of all, so that it can benefit all of humanity.
—Pope Leo XIV, in his address to media representatives on May 12th
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub