Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy (CAIP). Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
AI Policy Stakeholders Submit Recommendations for 2025 U.S. AI Action Plan
President Trump’s January AI executive order directed a range of senior White House policy officials to prepare an AI action plan by July 22nd.
Public stakeholders could provide input on the action plan through a request for information (RFI) issued by the Networking and Information Technology Research and Development National Coordination Office (NITRD NCO) on behalf of the White House Office of Science and Technology Policy (OSTP). The comment window closed on March 15th.
In total, public stakeholders submitted over 8,000 responses.
At the time of writing, most responses are private. But a few dozen organizations, including the Center for AI Policy (CAIP), have published their comments for the world to see. Here are some notable proposals:
CAIP: Gather cyber incident data by recognizing frontier AI as a new critical infrastructure sector.
Foundation for American Innovation: “Designate a senior member of the National Security Council staff to coordinate the creation of an AI Operations Center—a ‘Situational Awareness’ room—to monitor frontier AI developments.”
Abundance Institute: “Direct the Attorney General to prepare a brief and litigation strategy to support a private dormant commerce clause case.” This would position the federal government to legally back private companies challenging state AI regulations as unconstitutional barriers to interstate commerce.
Center for Security and Emerging Technology: “The Bureau of Industry and Security (BIS) in the Department of Commerce should institute scenario planning assessments before implementing new export controls.”
Future of Life Institute: “Issue a moratorium on developing future AI systems with the potential to escape human control, including those with self-improvement and self-replication capabilities.”
TechNet: Create a National Digital Reserve Corps of individuals “with the credentials to address the digital and cybersecurity needs of Executive Agencies across the federal enterprise both before and when cyber incidents arise.”
American Bankers Association: “Governmental agencies must be transparent about their own use of AI, especially the ways in which it is brought to bear on regulated entities.”
Institute for Progress: “Establish ‘Special Compute Zones’—regions of the country where AI clusters at least 5 [gigawatts] in size can be rapidly built through coordinated federal and private action.”
Electronic Frontier Foundation: “Just as an agency would have to give notice and invite comment in order to change rules for deciding eligibility or action, it should be required to do so when adopting an AI or [algorithmic decision-making] tool that informs such a decision.”
R Street Institute: “Carefully calibrate immigration policies to ensure continued U.S. talent dominance in AI.”
Anthropic: “Elevate collection and analysis of adversarial AI development to a top intelligence priority.”
OpenAI: “Coordinate global bans on CCP-aligned AI infrastructure, including Huawei chips.”
Google: “[Coordinate] federal, state, local, and industry action on policies like transmission and permitting reform to address surging energy needs.”
Business Software Alliance: “Create alternative paths to AI careers that enable workers to develop high-demand technology skills without the need for a bachelor’s degree.”
Business Roundtable: “Make high-impact government datasets more widely available.”
ARC Prize: “The government needs a dedicated, centralized office with experts who can track, interpret, and apply AI benchmarks to support decision making.”
Center for a New American Security: “Prioritize regular engagement with China on AI security concerns through the U.S.-China AI Working Group.”
Center for Democracy & Technology: “Insofar as the Administration may deem restrictions on the export of certain AI models necessary, such restrictions should exclude open models from their scope.”
Stanford Institute for Human-Centered AI: “[Mandate] that AI companies adopt legal and technical safe harbors for good-faith AI safety and trustworthiness research.”
Information Technology Industry Council: Extend the CHIPS Act investment tax credit to cover chip design activities.
Machine Intelligence Research Institute: “Develop comprehensive emergency response protocols for AI-related crises.”
Information Technology & Innovation Foundation: “Coordinate AI-focused economic partnerships with African nations.”
Americans for Responsible Innovation: “Any AI labs that develop state-of-the-art models and receive U.S. government contracts should be required to demonstrate that their systems have robust safeguards against the creation of bioweapons and other weapons of mass destruction.”
U.S. Chamber of Commerce: “The National Science Foundation should routinely develop reports on education guidelines [...] to better prepare students for the use of AI.”
There are many other responses worth reading in full, from organizations like the Institute for AI Policy and Strategy, Open Source Initiative, Open Markets Institute, Federation of American Scientists, Center for AI Safety Action Fund, Nuclear Threat Initiative, Software & Information Industry Association, Semiconductor Industry Association, Wilson Center, Institute for Security and Technology, Consumer Technology Association, Bipartisan Policy Center, American Consumer Institute, American National Standards Institute, Middle East Institute, Frontier Model Forum, Future of Privacy Forum, GenAI Collective, News/Media Alliance, NetChoice, Hugging Face, IBM, Mozilla, MITRE, a16z, Palantir, Encode, and RAND.

Roblox Launches Open-Source 3D AI Generator
Roblox, a popular online platform where users create and play games, has unveiled “Cube 3D,” an AI system that converts text descriptions into 3D digital objects.
The company plans to open-source a version of the AI model, making it freely available to developers both on and off its platform.
Unlike some AI systems that generate 3D models by constructing them from 2D images, Cube 3D generates objects directly from text prompts. A developer typing “/generate a motorcycle” or “/generate orange safety cone” receives a corresponding digital object within seconds.
The system works through what Roblox calls “3D tokenization,” breaking down 3D shapes into individual datapoints similar to how language models process text. This allows the AI to “predict the next shape token to build a complete 3D object.”
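To make the analogy concrete, here is a minimal toy sketch of autoregressive "next shape token" decoding. Roblox has not published Cube 3D's implementation, so the vocabulary, scoring function, and decoding loop below are all hypothetical stand-ins for a trained model:

```python
# Illustrative sketch only; not Roblox's actual code or token vocabulary.
import random

SHAPE_VOCAB = ["<start>", "cube", "cylinder", "wedge", "sphere", "<end>"]

def score_next_token(prompt, sequence):
    """Stand-in for a trained model; a real system would condition its
    predictions on the text prompt and the shape tokens generated so far."""
    weights = [random.random() for _ in SHAPE_VOCAB]
    weights[SHAPE_VOCAB.index("<start>")] = 0.0  # never re-emit the start token
    total = sum(weights)
    return [w / total for w in weights]

def generate_object(prompt, max_tokens=16):
    """Decode shape tokens one at a time until <end>, mirroring how a
    language model predicts the next word in a sentence."""
    sequence = ["<start>"]
    for _ in range(max_tokens):
        probs = score_next_token(prompt, sequence)
        next_token = SHAPE_VOCAB[probs.index(max(probs))]  # greedy choice
        if next_token == "<end>":
            break
        sequence.append(next_token)
    return sequence[1:]  # downstream code would assemble these parts into a mesh

print(generate_object("/generate orange safety cone"))
```

The key idea is that once 3D shapes are serialized into a discrete token sequence, the same autoregressive machinery that powers text generation can be reused largely unchanged.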
Beyond individual objects, Roblox hopes to expand the technology to “enable creators to generate entire scenes based on multimodal inputs” including text, images, and other media types.
Many people will use these new tools. During the fourth quarter of 2024, over 85 million daily active users spent an average of 2.4 hours per day on the Roblox platform.
Virtual worlds are becoming as easy to create as they are to imagine.

AI-Assisted Cheating Appears Widespread Among Students as Educators Struggle to Respond
A recent Wall Street Journal article by Matt Barnum and Deepa Seetharaman investigated how AI tools are increasingly used for academic dishonesty in schools.
“This is a gigantic public experiment that no one has asked for,” Marc Watkins, assistant director of academic innovation at the University of Mississippi, told the reporters.
The Journal interviewed a 17-year-old New Jersey student who used ChatGPT and Google’s Gemini for dozens of assignments because the “work was boring or difficult,” she “wanted a better grade,” or she had “procrastinated and ran out of time.” She was caught only once.
AI companies largely deflect responsibility. “OpenAI did not invent cheating,” said Siya Raj Purohit from OpenAI’s education team. “People who want to cheat will find a way.”
AI-powered detection efforts face significant challenges. In a Journal experiment, a detection tool from Pangram Labs correctly identified ChatGPT-generated writing as AI-created. But after the piece was run through “humanizing” software, it passed as “fully human-written.”
Pangram Labs’ CEO Max Spero said the company is working to “defeat the humanizers.”
AI can complete even university-level assignments. In one experiment, researchers secretly submitted AI-written exam answers at a UK university and found that 94% went completely undetected. Furthermore, the AI submissions received grades that were, on average, several percentage points higher than submissions from real students.
The Center for AI Policy carefully studied AI’s role in education last fall. To learn more about this pressing issue, read our research report here.

CAIP Event
On Tuesday, March 25th at 12pm, CAIP will host a panel discussion on AI and Cybersecurity: Offense, Defense, and Congressional Priorities.
Our distinguished panel brings together leading experts from across industry, academia, and policy research:
Fred Heiding, Postdoctoral Researcher, Harvard University
Daniel Kroese, Vice President of Public Policy & Government Affairs, Palo Alto Networks
Krystal Jackson, Non-Resident Research Fellow, Center for Long-Term Cybersecurity
Kyle Crichton, Cyber AI Research Fellow, Center for Security and Emerging Technology (CSET)
The session will include a demonstration of an automated spear phishing AI agent, followed by discussion of current cybersecurity challenges, AI’s evolving impact, and policy recommendations for Congress.
RSVP here.
CAIP News
On Monday, CAIP shared suggestions with the U.S. Office of Management and Budget (OMB) regarding the revision of OMB Memorandum M-24-18. Our letter urges OMB to maintain and improve requirements for testing AI models as part of the government procurement process.
Claudia Wilson led CAIP’s comment on the second public draft of the U.S. AI Safety Institute (AISI) guidance on Managing Misuse Risk for Dual-Use Foundation Models.
Joe Kwon wrote a blog post on new research from Model Evaluation & Threat Research (METR): “The Rapid Rise of Autonomous AI.”
Just Security’s Clara Apt and Brianna Rosen mentioned CAIP’s recommendations in their overview of responses to the White House OSTP’s AI Action Plan RFI.
ICYMI: Here’s our full response to the RFI.
From the archives… NVIDIA’s 2025 GPU Technology Conference (GTC) is making headlines this week. To refresh your memory of NVIDIA’s 2024 GTC, just wind the clock back to AI Policy Weekly #15.
Quote of the Week
“Get your hands dirty, understand the technology deeply, and then prep yourself for a wide range of outcomes.”
—Nir Bar Dea, CEO of Bridgewater Associates, on how to prepare for AI’s upcoming impacts
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or music recommendations to pass along, please drop me a note at jakub@aipolicy.us.
—Jakub