In race to dominate AI, US researchers debate collaboration with China
Seattle
When does collaboration hurt competition? Every industry wrestles with that question. But it feels particularly high-stakes for artificial intelligence work today as U.S.-China partnerships come under scrutiny. Washington and Beijing view one another increasingly as strategic competitors and cast the race for AI dominance as a critical battleground, particularly amid China’s military buildup and intensifying political repression under President Xi Jinping.
Technology with military potential has caused concern, but so have other ethical issues around AI-powered surveillance and censorship in the world’s most populous authoritarian state. In the United States, researchers are “the first line of defense” to weigh AI advances in light of the public good, says University of Washington researcher Bernease Herman. In China, in contrast, debate over AI ethics remains “completely off limits” for scientists, says Jeffrey Ding of the University of Oxford.
More oversight is needed, experts say, but it’s also essential to boost teamwork on AI’s many beneficial uses. “U.S. companies and researchers need to be having uncomfortable conversations internally: What is the nature of the technology and the field? How do we have these conversations without cutting off collaboration?” says Samm Sacks, a researcher at the think tank New America.
Why We Wrote This
Military uses of artificial intelligence have raised concerns about working with Chinese researchers. But some U.S. experts also feel a duty to consider AI’s potential role in human rights abuses, in a society less free than their own.
From a corner office on the top floor of the University of Washington’s Physics and Astronomy Tower, data scientist Bernease Herman looks out on Seattle’s Portage Bay as it flows toward the city’s high-tech hub and Amazon headquarters.
A former Amazon employee, Ms. Herman decided to join dozens of prominent artificial intelligence (AI) researchers last month in urging Amazon to stop selling a facial recognition technology to law enforcement. She was troubled by a study by a Massachusetts Institute of Technology researcher that found Amazon’s Rekognition technology is less accurate in identifying women and people of color, a bias that risks infringing on civil liberties if police misuse the tool.
After Amazon disputed the study’s findings, Ms. Herman felt compelled to join other AI researchers in speaking out. Raising concerns about possible risks from AI “is a primary part of all of our roles,” says Ms. Herman, who researches bias in the AI field of machine learning at the UW’s eScience Institute.
“We are the first line of defense,” she says, underscoring AI researchers’ sense of responsibility to weigh AI advances in light of the public good.
Across the Pacific Ocean in China, in contrast, debate over topics such as Chinese security forces’ use of facial recognition technology to target minority groups remains “completely off limits” for AI scientists, says Jeffrey Ding, the lead China researcher at the Center for the Governance of AI at the University of Oxford’s Future of Humanity Institute.
That’s not to say there aren’t signs of defiance. In March, for example, China’s overworked software developers used GitHub, an encrypted code-sharing platform owned by Microsoft, to demand relief from their grueling work regimen known as “996” – 9 a.m. to 9 p.m., six days a week. The protest went viral. Dozens of Microsoft employees rallied in support, issuing a petition urging their company to keep the platform open even if it came under pressure from Beijing. (China’s government is reluctant to censor GitHub, however, because of the critical code-sharing service the platform provides.)
“There is more dissent than we can see on the surface level, and sometimes it bubbles up,” Mr. Ding says. Still, unlike in the United States, in China “there is no robust civil society pushing very strongly on this,” he says.
As the U.S. and China forge ahead as world powerhouses in the development and application of AI, the cautionary voices of researchers – and their choices about collaboration – could hold the key to promoting beneficial cooperation while preventing malicious or dangerous uses of the revolutionary and often unwieldy new technology, AI experts say.
In turn, their ability to distinguish between constructive and harmful AI sharing could help prevent a widening technological schism between the U.S. and China that, if allowed to grow, could spread globally as nations are forced to decide whether to align with the world’s leading democracy or its most populous authoritarian state.
High-tech battleground
AI collaboration between the two tech leaders is coming under scrutiny in Congress and elsewhere as Washington and Beijing increasingly view one another as strategic competitors, casting the race for AI as a critical battleground. Each country enjoys distinct strengths: The U.S. leads in research talent, critical hardware, and AI companies, while China has accumulated far more of the data needed to fuel AI and aims to lead the world in the technology by 2030.
China’s military buildup and intensifying political repression under President Xi Jinping have increased concern in Washington. “Artificial intelligence as a technology presents enormous economic benefits but also potentially enables military capabilities and ... surveillance,” says Elsa Kania, an expert on Chinese military technology at the Center for a New American Security, a D.C. think tank. “It’s clear there are a number of ethical and security concerns that come into play when we are talking about research collaborations.”
U.S. universities and companies have collaborated, sometimes unknowingly, with Chinese scholars who are actually military officers or affiliated with Chinese military universities. “There is a shockingly small amount of due diligence and oversight,” says Alex Joske, a researcher with the International Cyber Policy Center of the Australian Strategic Policy Institute. China has sent approximately 500 military scientists to U.S. universities since 2007, an estimate based on analysis of peer-reviewed publications co-authored by China’s People’s Liberation Army (PLA) scientists and overseas scientists, Mr. Joske says.
Last month, reports surfaced that academics at Microsoft Research Asia in Beijing had co-written papers with researchers affiliated with China’s National University of Defense Technology on AI methods that can be used for surveillance. Over two decades, Microsoft Research Asia has trained hundreds of top Chinese IT professionals and academics, and has graduated 7,000 alumni, many in the field of AI.
While analysts say most research collaboration doesn’t have specialized military applicability, they agree that U.S. researchers should avoid working with PLA scientists.
“Anything involving the Chinese military should be a bright red line,” Ms. Kania says.
Risks for citizens
Another area of concern is China’s use of artificial intelligence in surveillance, targeted propaganda, and enhanced censorship. For example, China’s nearly ubiquitous WeChat app last year began using an AI technology called optical character recognition to filter and delete images containing sensitive words, stifling a trick Chinese netizens had used to evade keyword censorship: posting text as images.
“Without artificial intelligence, you need a big army of human censors to identify and delete,” says Sarah Cook, a senior research analyst for East Asia at Freedom House, an independent watchdog organization that promotes democracy. “This is a way to refine censorship and cut off ways netizens have been able to circumvent censorship of keywords, and this is much cheaper, too.”
Given AI’s potential for military, surveillance, and censorship use, “U.S. companies and researchers need to be having uncomfortable conversations internally: What is the nature of the technology and the field? How do we have these conversations without cutting off collaboration?” says Samm Sacks, an expert on communication technology policy in China at New America, a nonpartisan think tank in Washington.
Indeed, Ms. Sacks and other AI experts note that amid concerns over malicious AI uses, many examples of benign and positive AI cooperation are overlooked.
“What doesn’t get reported are AI researchers in the U.S. and China working on collaborative projects that are beneficial to society, for example on the health care front,” says Baobao Zhang, a research affiliate at the Center for the Governance of AI and a doctoral candidate in political science at Yale University. One joint project used AI to help diagnose illness in children, including toddlers, according to a study published this year.
Food security is another area of fruitful AI cooperation. For example, Microsoft, Intel, and the Chinese tech giant Tencent last year took part in a cucumber-growing contest, exploring how AI could raise greenhouse productivity and advance indoor farming.
Oversight – and a welcome mat
As U.S. policymakers look for tools to prevent the leakage of sensitive AI know-how, they face unique challenges in regulating this and other related emerging technologies.
Congress has called for modernizing U.S. export controls that originated during the Cold War and for strengthening oversight of technology transfer to China. Yet that is difficult given the two countries’ deep economic connections and the transnational, commercial nature of most AI research and innovation. Given this “technological entanglement,” Ms. Kania says, blunt tools that risk cutting off the two-way flow of expertise are counterproductive. “Applying export controls to algorithms is antithetical.”
More effective tools to prevent unwanted AI transfer include improving cybersecurity protections, using expanded foreign investment rules to block risky foreign acquisitions of critical technology, and taking legal action against technology theft. Universities, moreover, should enforce regulations on foreign nationals conducting research in sensitive areas, AI experts recommend.
But stark responses, such as denying U.S. visas to Chinese scholars and researchers, are likely to backfire, they warn, especially given that by some measures foreign nationals make up more than half of the top U.S. AI talent pool. “News about Chinese students and researchers whose visas are denied has a chilling effect on the U.S. ability to draw top AI talent,” says Ms. Zhang.
Instead, the U.S. should leverage its strengths by boosting investment in AI research while expanding innovation and openness, as well as diversity and inclusion. That includes taking steps to ensure talent from China stays in the U.S.: welcoming students and scholars, and pushing back against organizations – such as Beijing-backed student associations – that experts say surveil and coerce Chinese academics in the U.S. Indeed, the overworked Chinese tech workers who turned to GitHub to evade China’s censorship “firewall” would clearly appreciate both more freedom of expression and fewer working hours.
“At a time when China is becoming more repressive under Xi Jinping, there is an opportunity for welcoming Chinese students, scientists, and entrepreneurs who don’t feel they can pursue research or build companies with as much freedom,” says Ms. Kania. “It’s a tragedy for China but can be framed as an opportunity for the U.S. to welcome some of the more free-thinking entrepreneurs.”
Working on machine-learning models in her UW office tower, Ms. Herman emphasizes that exercising discretion in research, and knowing who a technology’s end users will be, lies at the heart of protecting U.S. AI technology and preventing misuse.
She points to a recent example: When the San Francisco nonprofit OpenAI developed an AI system capable of writing articles on any subject based on a brief prompt, it broke with its usual practice of releasing the full model so as not to unleash a capability that could flood the internet with fake news.
“It’s certainly something that academics think about a lot – what’s the use of their technology,” says Ms. Herman. “It’s a pretty painstaking decision process.”