Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for U.S. AI policy professionals.
New York Enacts the LOADinG Act, Regulating Government Use of AI
Last month, New York Governor Kathy Hochul signed into law the Legislative Oversight of Automated Decision-Making in Government Act (LOADinG Act).
“This is the first legislation of its kind that regulates government use of AI, while also protecting government employees from the displacement of these automated systems,” said sponsoring State Senator Kristen Gonzalez.
Here’s the gist of the LOADinG Act, which takes effect later this year:
An automated decision-making system is defined as “any software that uses algorithms, computational models, or artificial intelligence techniques, or a combination thereof, to automate, support, or replace human decision-making.”
Some software is explicitly excluded, such as calculators, spellcheck, and spreadsheets.
State agencies cannot use, procure, or buy an automated decision-making system unless there is “continued and operational meaningful human review.”
Further, agencies cannot use these systems to reduce employees’ rights, wages, or hours. Machines cannot displace humans.
Before an agency uses an automated decision-making system, it must conduct an impact assessment, and then it must repeat that assessment every two years.
The assessment must describe the system’s objectives, evaluate its ability to achieve those objectives, summarize the underlying algorithms and training data, and test for bias, discrimination, cybersecurity, privacy, safety, and “any reasonably foreseeable misuse.”
“If an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization [of the system].”
The agency must publish impact assessments on its website. If it redacts information, it must explain why.
For automated decision-making systems that are already in use, agencies must disclose that use to the state legislature.
Some critics of the LOADinG Act argue that it targets too many kinds of software and guards too strongly against worker displacement.
Perhaps for these reasons, state lawmakers are drafting legislation to remove the law’s human oversight requirements and sunset its worker displacement protections. Bloomberg reports that lawmakers agreed to these changes “in exchange” for Governor Hochul’s approval of the LOADinG Act.
Many states, including California, Utah, Colorado, and Tennessee, are passing AI laws. More than 100 AI bills became law across the country in 2024, a steep increase from previous years.
Lawmaking at the federal level has been slower. Besides some provisions in the annual appropriations and defense bills, the 118th Congress passed very little AI-focused legislation.
While Congress contemplates the optimal AI formula, states are making their own decisions.
AI Data Centers May Be Harming U.S. Household Power Quality
“AI Needs So Much Power, It’s Making Yours Worse,” claims a new analysis from Bloomberg.
Bloomberg tracked power quality issues across the U.S., specifically looking at homes where electrical waves become warped and choppy instead of flowing smoothly—a condition known as harmonic distortion. By mapping these locations, analysts could see how many affected homes were situated near data centers.
The authors find that “more than three-quarters of highly-distorted power readings across the country are within 50 miles of significant data center activity,” and more than half are within 20 miles. That’s based on home sensor data from Whisker Labs and data center locations from DC Byte.
Harmonic distortion is a serious problem—it can damage appliances and increase the risk of electrical fires.
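For readers curious about the measurement behind those “warped and choppy” waves, power engineers commonly summarize distortion with a total harmonic distortion (THD) metric. The formula below is the standard textbook definition, not a description of Bloomberg’s or Whisker Labs’ specific methodology:

$$\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} V_n^2}}{V_1}$$

where $V_1$ is the amplitude of the fundamental 60 Hz voltage wave and $V_n$ are the amplitudes of its higher harmonics. A perfectly smooth sine wave has a THD of zero; the more distorted the wave, the higher the percentage.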
Two utility companies contested Bloomberg’s analysis, arguing that their own data tells a different story. A Dominion Energy spokesman said Dominion’s harmonic distortion is consistently within industry standards, and a Commonwealth Edison (ComEd) spokesman questioned the accuracy of Whisker Labs’ sensor readings.
Data center energy demand will continue to grow. According to a new report from Lawrence Berkeley National Laboratory, “data centers consumed about 4.4% of total U.S. electricity in 2023 and are expected to consume approximately 6.7 to 12% of total U.S. electricity by 2028.”
As AI eats more power, utility companies may need to solve more than just capacity problems.
OpenAI Describes Plans to Restructure Into a For-Profit Corporation
OpenAI is not a normal company. Currently, it’s a nonprofit controlling a for-profit subsidiary called OpenAI Global, LLC. This subsidiary carries out OpenAI’s commercial activities. Investors have a cap on the maximum return they can receive—surplus returns must go to the nonprofit.
OpenAI created this complicated “capped-profit” structure in 2019. Before then, OpenAI was purely a nonprofit.
More recently, two days after Christmas 2024, OpenAI published a blog post detailing plans to again revamp its corporate structure.
“Our plan is to transform our existing for-profit into a Delaware Public Benefit Corporation (PBC) with ordinary shares of stock and the OpenAI mission as its public benefit interest,” wrote OpenAI.
“The PBC will run and control OpenAI’s operations and business, while the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science.”
Importantly, the nonprofit will receive shares in the PBC. The Information reported in October that this stake will likely exceed 25%. Given OpenAI’s most recent $157 billion valuation, that could be worth over $40 billion, making the nonprofit one of the world’s wealthiest charities.
Not everyone is thrilled about OpenAI’s transition. “A well-capitalized non-profit on the side is no substitute for PBC product decisions (e.g. on pricing + safety mitigations) being aligned to the original non-profit’s mission,” warned Miles Brundage, former Head of Policy Research at OpenAI.
Some opponents are taking action. The youth-led nonprofit Encode recently supported Elon Musk’s lawsuit against OpenAI, urging the U.S. District Court in Oakland to block OpenAI’s restructuring. Weeks earlier, Meta urged California’s attorney general to do the same.
This controversy highlights how AI’s future hinges not just on technological breakthroughs, but on the incentives that drive them.
News at CAIP
Jason Green-Lowe wrote an op-ed in The Hill about federal AI legislation: “How Congress dropped the ball on AI safety.”
Ep. 14 of the CAIP Podcast features Anton Korinek, a professor in the University of Virginia’s economics department and business school. Anton and Jakub discuss AI’s economic and workforce impacts.
ICYMI: Brian Waldrip and Jason Green-Lowe wrote a blog post on AI provisions in the National Defense Authorization Act: “FY 2025’s NDAA Is a Valuable Yet Incomplete Accomplishment.”
Quote of the Week
Ultimately, I want to see a robot dunking like Michael Jordan.
—Tomohiro Nomi, leader of Toyota’s CUE project, which built a robot that can dribble and shoot a basketball
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub