Welcome to 3-Shot Learning, a weekly newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for AI policy professionals.
CNAS Proposes Promising Hardware Governance Mechanisms for AI
On Monday, the Center for a New American Security (CNAS) released a critically important report outlining a new modality of AI policy: physical mechanisms on AI hardware that enable regulators to securely verify that these powerful (and dual-use) devices are being used lawfully, and to restrict access if they are not. For instance, these physical “on-chip mechanisms” could allow the Department of Commerce to check that AI chips are actually situated in a claimed location, enabling effective enforcement of US export controls on advanced chips, including mitigation of chip smuggling to China.
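To make location verification concrete, one approach discussed in the hardware governance literature is a challenge-response scheme: a verifier sends the chip a random nonce, the chip returns a keyed signature over it, and the round-trip time bounds how far away the chip can physically be, since the reply cannot travel faster than light. The sketch below is a minimal illustration of that idea, not code from the CNAS report; all names and parameters are hypothetical.

```python
import hashlib, hmac, os, time

SPEED_OF_LIGHT_KM_PER_MS = 300  # generous upper bound; real links are slower

def chip_respond(secret_key: bytes, nonce: bytes) -> bytes:
    """Simulates the on-chip mechanism: a keyed MAC over the verifier's nonce."""
    return hmac.new(secret_key, nonce, hashlib.sha256).digest()

def verify_location(secret_key: bytes, max_distance_km: float) -> bool:
    """Challenge the chip, then check authenticity and a latency-based distance bound."""
    nonce = os.urandom(32)
    start = time.perf_counter()
    response = chip_respond(secret_key, nonce)  # in practice, a network round trip
    elapsed_ms = (time.perf_counter() - start) * 1000
    expected = hmac.new(secret_key, nonce, hashlib.sha256).digest()
    authentic = hmac.compare_digest(response, expected)
    # The one-way distance is at most half the round-trip time times signal speed.
    distance_bound_km = (elapsed_ms / 2) * SPEED_OF_LIGHT_KM_PER_MS
    return authentic and distance_bound_km <= max_distance_km

key = os.urandom(32)  # stand-in for a secret provisioned at manufacture
print(verify_location(key, max_distance_km=500))
```

In a real deployment, the secret would live in tamper-resistant hardware and the latency measurement would need to tolerate network jitter; the point is simply that physics, rather than trust in the chip’s operator, does the verifying.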
The CNAS researchers do not just flex their technical chops (on-chip mechanisms are a rather technical topic); they also provide a set of clear, nontechnical action items for policymakers and other stakeholders. For example, the State Department should discuss on-chip governance mechanisms with international allies that control chokepoints in the chip supply chain, such as Japan. Additionally, CNAS recommends that a NIST-led interagency working group begin urgent work on securing the hardware supply chain. One particularly fascinating recommendation is to guarantee financial opportunities for chipmakers that implement on-chip mechanisms, a variation on the “advance market commitments” used to incentivize vaccine production; specifically, the Bureau of Industry and Security should relax export controls on chips that have specific security features.
Building and implementing an entirely novel class of AI governance techniques comes with challenges. The elephant in the room is that a significant amount of work will be required outside the government, especially to build and adopt on-chip mechanisms that can resist physical tampering aimed at disabling them, or at least record evidence of such tampering. Although intermediate steps will be beneficial, fully establishing a robust on-chip governance scheme will probably require at least five years. As a result, and as is often the case in AI policy, the root difficulty is the dwindling time that rapid AI progress leaves for policymakers and researchers to act. If the US government is to rise to the occasion, now is the time.
A Shift in Expectations: Experts Predict Earlier Development of Human-Level AI
Last Thursday, the research organization AI Impacts released results from its October 2023 survey of AI researchers who have published in top-tier venues. The survey is “probably the biggest ever survey of AI researchers,” according to AI Impacts. Arguably the most striking finding is that AI experts now expect machines to surpass human performance far earlier than they predicted in summer 2022, before ChatGPT. There is, of course, an important caveat: the respondents’ views may differ from those of the full AI community. But given the magnitude of the shift and the large number of researchers who responded, it seems likely that many experts have shortened their AGI timelines.
Beyer, Eshoo Introduce Bill Requiring Transparency From Major AI Companies
Just before Christmas, Representatives Don Beyer (D-VA) and Anna Eshoo (D-CA) introduced their AI Foundation Model Transparency Act. The bill would require companies building foundation models like GPT-4 to disclose specific kinds of information. The corresponding one-pager emphasizes the bill’s goals of reducing harms from biased decisions and copyright infringement. However, the bill would be beneficial in other ways; for instance, information about compute usage would help the government monitor AI progress and assess emerging risks. Indeed, information reporting requirements are a flexible policy tool that can serve a variety of purposes, as seen in section 4.2 of the US Executive Order and in company disclosures for the UK AI Safety Summit.
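To illustrate why compute reporting is informative, analysts often estimate training compute with the rule of thumb that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations. A quick back-of-the-envelope sketch follows; the model figures are invented for illustration, not drawn from the bill or any company filing.

```python
# Rule-of-thumb training compute estimate: FLOP ≈ 6 * parameters * tokens.
parameters = 70e9   # hypothetical 70B-parameter model
tokens = 2e12       # hypothetical 2 trillion training tokens
flop = 6 * parameters * tokens
print(f"Estimated training compute: {flop:.1e} FLOP")  # ~8.4e+23

# Executive Order section 4.2 sets an interim reporting threshold of 1e26 operations.
REPORTING_THRESHOLD_FLOP = 1e26
print("Above EO reporting threshold:", flop > REPORTING_THRESHOLD_FLOP)
```

Disclosures of this kind would let agencies track how close frontier training runs are coming to regulatory thresholds without needing access to the models themselves.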
News at CAIP
We hosted a briefing in the House to discuss the impact of the AI Executive Order. Jason Green-Lowe moderated a panel featuring Samuel Hammond (Foundation for American Innovation), Daniel Colson (AI Policy Institute), Bill Drexel (Center for a New American Security), and Elise Phillips (Public Knowledge). A full recording is available here.
The Texas Tribune interviewed Jason Green-Lowe for an article covering state government usage of AI and the technology’s risks.
Quote of the Week
We respectfully urge you to include $10 million for FY 2024 for the USAISI to support safe US development of AI and effective international standards coordination.
—Senators Martin Heinrich (D-NM), Todd Young (R-IN), Maria Cantwell (D-WA), and Mike Rounds (R-SD), in a letter to the Senate Appropriations Committee