Can AI be ‘democratic’? Race is on for who will define the technology’s future.

Daniel Cole/Reuters
An aerial view shows construction underway on a Project Stargate AI infrastructure site, a collaboration between three large tech companies – OpenAI, SoftBank, and Oracle – in Abilene, Texas, April 23, 2025.

In January, OpenAI launched Stargate, a $500 billion investment in artificial intelligence infrastructure for the United States. On Wednesday, it announced a plan to bring this type of investment to other countries, too.

OpenAI, the company behind ChatGPT, says that by partnering with interested governments, it can spread what it calls “democratic AI” around the world. It views this as an antidote to the development of AI in authoritarian countries that might use it for surveillance or cyberattacks.

Yet the meaning of “democratic AI” is elusive. Artificial intelligence is being used for everything from personal assistants to national security, and experts say the models behind it are neither democratic nor authoritarian by nature. They merely reflect the data they are trained on. How AI affects politics worldwide will depend on who has a say in controlling the data and rules behind these tools, and how they are used.

Why We Wrote This

The U.S. and China want to lead the way in artificial intelligence development. But governments could use the technology more for their own ends than for the public good. Whose values will AI ultimately reflect?

OpenAI wants as many people as possible using AI, says Scott Timcke, a research associate at the University of Johannesburg’s Centre for Social Change. “I don’t necessarily get the sense [they are] thinking about mass participation at the level of design, or coding, or decision-making.”

Those sorts of decisions are shaping how AI permeates society, from the social media algorithms that can influence political races to the chatbots transforming how students learn.

He says people should consider, “What is our collective control over how these big scientific instruments are used in everyday life?”

“A challenge ... about exporting values”

In a blog post announcing the new initiative, OpenAI for Countries, the company defines democratic AI as artificial intelligence that “protects and incorporates long-standing democratic principles,” such as freedom for people to choose how they use AI, limits on government control, and the free market.

Working with the U.S. government, OpenAI is now offering to invest in the AI infrastructure of countries wishing to partner. That means building new data centers, providing a locally customized version of ChatGPT, and seeding national startup funds, while promising security controls in line with democratic values.

The Trump administration has been adamant about winning the AI race against China, which already has some of the leading firms in the field. Through the expansion of “friendly” AI in allied nations, OpenAI is becoming a major player in U.S. efforts to beat China and Russia in the technological race.

Kevin Wolf/AP
Sam Altman, co-founder and CEO of OpenAI (left), stands in front of photographers before the start of a Senate Committee on Commerce, Science, and Transportation hearing, May 8, 2025, in Washington.

“The challenge of who will lead on AI is not just about exporting technology, it’s about exporting the values that the technology upholds,” wrote OpenAI CEO Sam Altman in a Washington Post op-ed last year.

While the project may prove attractive to some governments, it also raises concerns about building an AI ecosystem whose infrastructure is controlled by American interests.

Others wonder whether the technology can be as democratic as companies like OpenAI hope. One foundation of democracy is transparency: people having access to the information they need to understand how decisions are made.

Many AI models are opaque, operating as “black boxes” whose inner workings are a mystery to most users. In some cases, these processes are concealed to protect intellectual property. And some algorithms are so complex that even developers don’t understand exactly how the machines arrive at their results.

That can make it difficult to trust the output of these models or to hold anyone accountable when things go wrong.
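To make the “black box” problem concrete, consider a toy example. The sketch below is a hypothetical illustration using the scikit-learn library on synthetic data – it does not depict any system mentioned in this article. It trains a tiny neural network and then prints what the model has “learned,” which turns out to be nothing more than grids of numbers with no human-readable explanation attached.

```python
# A toy illustration of the "black box" problem, using scikit-learn on
# synthetic data. This is a hypothetical sketch, not any real AI system;
# production models have billions of weights instead of a few hundred.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, y)

# The model classifies well, but its "reasoning" is just weight matrices.
print("accuracy:", model.score(X, y))
print("first-layer weights:", model.coefs_[0].shape)  # a 10-by-16 grid of numbers
print(model.coefs_[0][:2])  # no human-readable explanation in sight
```

Interpreting those numbers at the scale of a modern model, with billions of them, remains an open research problem.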

How transparent should AI technology be?

Companies could choose to make the code behind the systems available to everyone.

While OpenAI, Google, and Anthropic keep their flagship models proprietary, other companies have chosen the open-source path. The Chinese DeepSeek-R1 model, released this January, has enabled developers around the world to build small-scale, inexpensive AI models. Some see this as a way of democratizing the development of AI technology, making it more accessible to more people.
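For readers curious what “building on an open model” looks like in practice, here is a minimal sketch using the Hugging Face transformers library to load one of the small, publicly released DeepSeek-R1 distillations. The checkpoint name is real, but the prompt is illustrative, and running it locally assumes suitable hardware.

```python
# A minimal sketch of running a small open-weight model locally with the
# Hugging Face `transformers` library. The checkpoint below is one of the
# publicly released DeepSeek-R1 distillations; output quality and hardware
# requirements will vary.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain in one sentence why open-source AI models matter."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are freely downloadable, anyone with a laptop or a modest server can fine-tune or repurpose such a model – exactly the accessibility that open-source advocates point to.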

These tools can also bolster democratic participation. During Kenya’s mass protests last year, demonstrators created chatbots to explain complex legislation in plain language to help their peers understand its impact.
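The protesters’ actual code is not described here, but a plain-language legislation explainer can be strikingly small. The sketch below is hypothetical – it wraps the OpenAI Python SDK, and the model name, system prompt, and sample clause are all placeholders – yet it captures the basic shape of such a civic tool.

```python
# A hypothetical sketch of a plain-language legislation explainer, the kind
# of tool described above. It uses the OpenAI Python SDK; the model name,
# prompt, and clause are illustrative placeholders, not the protesters' code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def explain_bill(clause: str) -> str:
    """Restate a legal clause in plain, neutral language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Restate legislation in plain language that "
                        "a non-lawyer can understand. Stay neutral."},
            {"role": "user", "content": clause},
        ],
    )
    return response.choices[0].message.content

print(explain_bill("Section 12(3): The Commissioner may, by notice, impose a levy on digital transactions."))
```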

Others worry that without the right regulations, making AI widely accessible may do more harm than good. They point to AI-generated disinformation campaigns sowing division and confusion. And because the technology advances so quickly, private companies are setting their own rules of conduct faster than regulators can keep up – much as happened in the early days of the internet.

Just a handful of companies, such as OpenAI, Microsoft, Google, and Nvidia, control much of the critical hardware and software for AI’s current expansion. That has led to a push from researchers, nonprofits, and others for more public input and oversight.

The Collective Intelligence Project, which describes itself as a lab that designs “new governance models for transformative technology,” is partnering with leading AI companies and governments that want to “democratize” AI by bringing a broader range of voices into the conversation. It worked with Anthropic, maker of the chatbot Claude, to create a model trained on rules and values offered by 1,000 Americans from all walks of life – not just a group of software engineers.

Analysts also point out that many AI tools can be used to strengthen democracy, from digital identity documents to government service delivery.

“I don’t think we need to be scared of AI,” says Dr. Timcke, the research associate. “I think we need to be scared when there’s just so much power concentration in AI. ... Who’s controlling it? And do they have anyone overseeing them?”
