AI Policy Weekly #43
Spruce Pine flooding, Dmytro Kuleba deepfake, and EU AI Pact participation
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
Hurricane Helene Floods Spruce Pine Mines, Testing Resilience of AI Chip Supply Chains
Hurricane Helene has caused immense harm and economic damage to the United States since making landfall in Florida last week.
One particular impact has been in Spruce Pine, North Carolina, which received 24 inches of rainfall in a three-day period. This caused a local river to flood, inflicting heavy damage on the town.
The hurricane’s tragic devastation has implications for the semiconductor industry, as Spruce Pine is home to two high-purity quartz mines owned by Sibelco and The Quartz Corporation. According to Bloomberg, these sites “account for more than 80% of the world’s supply of commercial high-purity quartz.” Both companies halted operations on September 26th.
Although quartz is abundant as a major component of ordinary sand, only extremely pure varieties are suitable for several critical processes in computer chip manufacturing. For example, Sibelco's top-of-the-line "Iota 8" quartz is 99.9992% pure and can sell for $10,000 per ton, thousands of times more expensive than regular construction sand. This material is essential for building the quartz crucibles used in the Czochralski method, which grows the single-crystal silicon ingots that are sliced into semiconductor wafers.
The pivotal Spruce Pine mines are currently closed, and it's unclear when they will resume operations. Sibelco has reportedly "sent 'force majeure' notices to customers—freeing it from liability if it can't fulfill orders." Depending on the extent of the flooding damage, a full recovery could take weeks or months.
Wafer manufacturers hold inventory stockpiles that should last at least a couple of months. In a pessimistic recovery scenario, those companies would turn to alternative sources of high-purity quartz, such as synthetic production or lower-purity mines. Those alternative suppliers would then need to ramp up production rapidly, since Spruce Pine currently accounts for more than 80% of global supply.
Time will tell whether the Spruce Pine flooding causes significant challenges for the chip industry, which experienced global shortages during the COVID-19 pandemic. At the very least, it’s possible that prices will increase.
“The folks I’ve spoken to in the industry in recent days seem relatively sanguine,” says Ed Conway, author of Material World. “But they’re certainly spending a lot of time checking their stockpiles and talking to suppliers. It’ll be a nervy few weeks.”
The Spruce Pine flooding serves as a stark reminder of the intricate global supply chains that underpin modern AI research. As AI companies reach for the stars, their feet remain planted firmly on Earth.
Deepfake Clone Targets U.S. Senator, Underscoring AI Risks
Senate Foreign Relations Committee Chairman Ben Cardin was recently targeted by a sophisticated deepfake operation. Last month, an unknown actor impersonated former Ukrainian Foreign Minister Dmytro Kuleba in a Zoom call, using advanced AI technology to create a convincing audio and video likeness.
According to a notice from the Senate’s security office, Cardin grew suspicious when the fake Kuleba “began acting out of character and firmly pressing for responses to questions like ‘Do you support long range missiles into Russian territory? I need to know your answer.’”
“After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities,” said Senator Cardin in a statement.
The FBI is investigating the incident.
“We have seen an increase of social engineering threats in the last several months and years,” said the Senate’s security office, adding that “this attempt stands out due to its technical sophistication and believability.”
This deepfake attack highlights AI’s potential for political manipulation and disinformation. As AI advances, protecting the integrity of political communications and democratic processes from AI-enabled threats must be a top priority.
EU Announces Initial AI Pact Pledges
The European Commission has unveiled the EU AI Pact, a voluntary initiative with over 100 initial signatories from various sectors. The Pact encourages early adoption of EU AI Act principles before the law’s full implementation over the next few years.
All participants pledge to adopt AI governance strategies, map potential high-risk AI systems, and promote AI literacy among their staff. Additionally, participants can voluntarily commit to further measures, such as:
“Put in place processes to identify possible known and reasonably foreseeable risks to health, safety and fundamental rights.”
“Clearly and distinguishably label AI generated content including image, audio or video constituting deep fakes.”
“Ensure that individuals are informed, as appropriate, when they are directly interacting with an AI system.”
The EU AI Office will publicly share the commitments that organizations intend to meet, and organizations will report on their implementation progress twelve months after that publication.
Current participants include leading AI companies like OpenAI, Google, Amazon, Microsoft, IBM, and Cohere. Some important AI companies have not yet signed, notably Meta, Anthropic, Nvidia, and Mistral.
The Pact remains open for new signatories, offering companies a chance to voluntarily implement safety measures. In the coming months, the public will find out which AI companies are willing to do that, and how much they are willing to do.
News at CAIP
Brian Waldrip and Makeda Heman-Ackah wrote a blog post about recent AI hearings on Capitol Hill: “Ignoring AI Threats Doesn’t Make Them Go Away.”
Tristan Williams and Brian Waldrip wrote a blog post about lobbying expenditures by OpenAI and Microsoft: “AI’s Lobbying Surge and Public Safety.”
Kate Forscey wrote a blog post highlighting the latest public comments on AI from Joe Biden and Ivanka Trump: “The Need for AI Safety Has Bipartisan Consensus at the Highest Ranks.”
Jason Green-Lowe released a statement on the SB 1047 veto in California: “CAIP Condemns Governor Newsom’s Veto of Critical AI Regulation Bill.”
Jason Green-Lowe was featured three times in Politico’s coverage of the SB 1047 veto: in a news article, in the Digital Future Daily newsletter, and in Tuesday’s Morning Tech newsletter.
Jason Green-Lowe published a memo ahead of the vice presidential debate: “Walz-Vance Debate and the Hope for Hearing AI Policy Positions.”
ICYMI: Mark Reddish led CAIP’s new research report on AI explainability: “Decoding AI Decision-Making: New Insights and Policy Approaches.”
Quote of the Week
Leopold Aschenbrenner’s SITUATIONAL AWARENESS predicts we are on course for Artificial General Intelligence (AGI) by 2027, followed by superintelligence shortly thereafter, posing transformative opportunities and risks.
This is an excellent and important read.
—Ivanka Trump, businesswoman and daughter of former U.S. President Donald Trump, in a social media post
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub