AI Policy Weekly #38
Disruptions to the Philippine BPO industry, software speedups from Amazon Q, and more OpenAI safety exits
Welcome to AI Policy Weekly, a newsletter from the Center for AI Policy. Each issue explores three important developments in AI, curated specifically for US AI policy professionals.
AI Begins Disrupting the Outsourcing Industry
You can see the future first in the Philippines.
“The World’s Call Center Capital Is Gripped by AI Fever—and Fear,” reads the title of an in-depth investigation from Bloomberg, which explores how AI is impacting the country’s Business Process Outsourcing (BPO) industry.
BPO is a big deal in the Philippines, which competes with India as the best country in the world for BPO services. The industry accounts for 1.7 million Philippine jobs and 8% of nationwide GDP.
“All of the major players [...] are rushing to rollout AI tools to stay competitive and defend their business models,” reports Bloomberg.
For example, “most have introduced some form of ‘AI copilot’” to handle tasks like summarizing the history of a customer’s contact with a company.
Others are using AI to onboard employees. In Metro Manila, an OpenAI product simulates upset customers so that new hires can practice handling difficult calls. “It does so in the guise of many different personas, whether that’s a Gen Z female, a boomer male, an irate caller or a tough bargainer.”
The company using this tool, [24]7.ai, employs 5,000 people in the Philippines. Its CEO, PV Kannan, says that “The unknown is at what speed AI will disrupt the industry. Two years?”
“Half the players are in denial and pretending there will be zero impact.”
Concentrix, a California-based Fortune 500 company, is also using AI in its main Manila office, which has 23 floors and tens of thousands of (human) agents. An AI copilot “listens to conversations, summarizes chats for other agents, analyzes negative sentiments and stands ready to flag to managers any non-compliant staff utterances.”
Meanwhile, the AI startup Sanas builds software that “helps harmonize accents and eliminates background noise—crowing roosters, ambulance sirens, office chatter—helping conversations flow smoothly.”
This particular application reveals a subtler form of AI disruption in the Philippines. One source of the country’s BPO dominance is its abundance of high-quality English speakers. If anyone in the world can have a “harmonized” accent, then this linguistic advantage could disintegrate.
The plainest form of AI disruption is potential job loss. Up to 300,000 Philippine BPO jobs could vanish within five years, according to the outsourcing advisory firm Avasant. The technology could also create up to 100,000 new jobs, but that leaves a net loss of 200,000.
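The net figure follows directly from Avasant's two upper-bound projections; a minimal sketch of the arithmetic:

```python
# Rough arithmetic behind Avasant's projection (both figures are the
# firm's upper-bound estimates cited in the text, not precise forecasts).
jobs_lost = 300_000      # Philippine BPO jobs that could vanish within five years
jobs_created = 100_000   # new jobs AI could create over the same period
net_loss = jobs_lost - jobs_created
print(net_loss)  # 200000
```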
As an illustrative example, the Swedish fintech company Klarna announced in February that its OpenAI-based customer service copilot was “doing the equivalent work of 700 full-time agents,” with customer satisfaction scores “on par” with human agents. Furthermore, the machine resolved errors both faster and more accurately than the humans, while working 24/7 in over 35 languages.
Klarna plans to cut its 3,800-person workforce down to 2,000 in the coming years. The firm employed 5,000 people a year ago.
CEO Sebastian Siemiatkowski told the BBC that it is “too simplistic” to brush aside job disruption concerns due to the creation of new jobs. “I mean, maybe you can become an influencer, but it’s hard to do so if you are 55 years old.”
Back in the Philippines, Bloomberg reports that “multiple training initiatives are underway, some government supported, others industry backed.” And last month, the country’s National Economic and Development Authority (NEDA) launched a Center for AI Research (CAIR) and AI Strategy Roadmap 2.0 to “support the country’s social and economic transformation.”
NEDA Secretary Arsenio Balisacan stressed the need for workforce adaptation. “If you don’t upskill, obviously, AI will replace you. That’s the challenge for us.”
Other officials, like Philippine Senator Risa Hontiveros, believe the government should “be more concerned and draw up a range of scenarios—including worst-case scenarios.”
As the Philippines grapples with AI’s implications for its outsourcing industry, the US government must also remain vigilant about potential impacts on its own labor market, a point the Center for AI Policy emphasized in our recent research report on AI and the future of work.
AI Assistants Write the Future’s Code
Amazon CEO Andy Jassy recently shared striking results from integrating the tech giant’s AI assistant, Amazon Q, into their internal systems.
With AI automation, Amazon modernized more than half of its production Java codebase in under six months. According to Jassy, Java software upgrades that once consumed 50 developer-days now take mere hours.
“We estimate this has saved us the equivalent of 4,500 developer-years of work (yes, that number is crazy but, real).”
This productivity is valuable. At standard Amazon engineer salaries, 4,500 developer-years of manual effort could have resulted in a wage bill exceeding $1 billion.
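The $1 billion figure can be sanity-checked with a back-of-the-envelope calculation; the $250,000 average fully loaded cost per developer-year below is our illustrative assumption, not a number from Amazon:

```python
# Back-of-the-envelope check of the ">$1 billion" wage-bill claim.
# The per-developer-year cost is an assumed round number for illustration.
developer_years = 4_500
cost_per_developer_year = 250_000  # USD, assumed fully loaded compensation
wage_bill = developer_years * cost_per_developer_year
print(f"${wage_bill:,}")  # $1,125,000,000
```

At any plausible figure above roughly $222,000 per developer-year, the total clears $1 billion.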
While making the Java upgrades, Amazon’s developers often approved the AI’s code reviews without making further changes. Jassy reports this was the case almost four times out of five.
In addition to saving time for developers, the Q-assisted Java upgrades have “enhanced security and reduced infrastructure costs.” These improvements will save an estimated $260 million per year.
Amazon’s experience with AI-assisted coding is not unique in the tech industry. For instance, Andrej Karpathy, a renowned AI engineer who previously led teams at OpenAI and Tesla, reports similar productivity gains using a different AI setup based on Anthropic’s Claude model.
“Programming is changing so fast…” said Karpathy. The AI transition is “a bit like learning to code all over again but I basically can’t imagine going back to ‘unassisted’ coding at this point, which was the only possibility just ~3 years ago.”
Importantly, AI coding tools are continuing to improve. Earlier this month, the AI startup Cosine claimed that its new AI system was “the best AI software engineer in the world by far,” with a score well ahead of its competitors, including Amazon Q, on a popular coding benchmark.
Thus, as software continues to “eat the world,” AI assistants appear poised to accelerate the ongoing digital revolution.
OpenAI’s Safety Exodus Continues
In December 2023, former Google CEO Eric Schmidt posted on X about his fresh $5 million donation to OpenAI’s new grant program for safeguarding “superintelligent” AI:
“This group from OpenAI are among the smartest people i have ever met. I’m very pleased to be one of their supporters, please review and apply to work with them !!!!!!!!!!!!”
Eight months later, many of those employees have left OpenAI, according to a Fortune interview with Daniel Kokotajlo, a former OpenAI researcher who may have been the first employee to outright refuse the company’s restrictive exit documents.
“The departures include Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and cofounder John Schulman.”
According to LinkedIn and X, at least four of those employees have since joined Anthropic, a rival company that appears to invest more in future-oriented safety research.
These exits add to a growing list of safety-minded employees who have left OpenAI since the company’s November board debacle.
Some of these employees have spoken out publicly against OpenAI. For example, Jan Leike, one of the leaders of the team that Schmidt funded (which OpenAI has since dissolved entirely), wrote that “safety culture and processes have taken a backseat to shiny products.”
This ongoing exodus of safety-focused researchers from OpenAI raises questions about the company’s commitment to AI safety. To address these concerns, the Center for AI Policy believes that basic safety practices should be mandatory, making responsible AI development a priority across the entire industry.
Job Openings
The Center for AI Policy (CAIP) is hiring for three new roles. We’re looking for:
an entry-level Policy Analyst with demonstrated interest in thinking and writing about the alignment problem in artificial intelligence,
a passionate and effective National Field Coordinator who can build grassroots political support for AI safety legislation, and
a Director of Development who can lay the groundwork for financial sustainability for the organization in years to come.
News at CAIP
Claudia Wilson and Brian Waldrip traveled to South Dakota to engage in an AI roundtable at Dakota State University. For more details on the event, check out this coverage from Dakota News Now.
Claudia Wilson wrote a blog post on the FCC’s proposed rule requiring disclosure regarding AI usage in political ads: “Somebody Should Regulate AI in Election Ads.”
At the Democratic National Convention (DNC) last week, the Center for AI Policy sponsored a mobile billboard to highlight the need for democratizing AI governance in the US. The 15-second ad—which can be viewed here—makes a simple point: “Sam Altman is not Uncle Sam. Don’t put him in charge of AI safety.”
ICYMI: The New York Times published Jason Green-Lowe’s letter to the editor in response to David Brooks’ recent op-ed about AI.
Quote of the Week
I’d always thought of creativity as the more intimate and human part.
—Gianfranco Sampo, a gelateria owner in Milan, commenting on his decision to use ChatGPT to invent new gelato flavors for his stores
This edition was authored by Jakub Kraus.
If you have feedback to share, a story to suggest, or wish to share music recommendations, please drop me a note at jakub@aipolicy.us.
—Jakub