What turmoil over a CEO tells us about the future of AI
If ever someone makes a movie about how not to fire a CEO, they could base their script on the playbook of OpenAI. On Friday, the San Francisco artificial intelligence company fired its chief executive, triggered a revolt from employees who threatened to leave, and early Wednesday announced it had reached an agreement to reinstate him.
Because OpenAI owns ChatGPT, a leading AI language generator, or chatbot, each of the company’s head-spinning moves got plenty of attention. While the full details behind the firing of CEO Sam Altman are still not known, the turbulent events highlight wider societal questions over who will control this powerful transformative technology.
Will it be a few billionaire-owned corporations? A nonprofit consortium? The government through regulation?
Why We Wrote This
The company behind ChatGPT embodied a key question surrounding artificial intelligence: Will the profit motive face any constraints for a technology that carries risks as well as benefits?
OpenAI had tried a novel structure, as a nonprofit controlling a for-profit company – and with its board pledged to the mission of benefiting humanity. The upheaval at OpenAI represents, at least in part, an ongoing battle between the fear of AI’s potential dangers and the lure of its expected benefits and profits.
The outcome, with Mr. Altman reinstated as CEO and new people on the company’s board, may signal the powerful role that capitalists and entrepreneurs will play – at least in the United States – in shaping the future of this emerging technology.
“This is an early skirmish in a war for the future,” says Tim O’Reilly, founder and CEO of O’Reilly Media and a visiting professor at University College London’s Institute for Innovation and Public Purpose.
AI sprang into the public consciousness almost exactly a year ago when OpenAI released ChatGPT to the public. It surpassed all expectations as an overnight sensation. People around the world couldn’t wait to interact with a super-knowledgeable computer that talked the way they did.
Less than two months later, OpenAI backer Microsoft announced it was plowing $10 billion into the company and would incorporate ChatGPT into its products. That set off a corporate spending race as Google, Amazon, and other tech giants sped up their own AI projects and investments. Capitalism was outrunning ethical concerns – again – in a period of disruptive technological change.
But this time it came with a twist. The companies themselves began raising the specter of super-intelligent machines causing harm if regulators didn’t provide guardrails.
In the rush for capital, OpenAI’s nonprofit structure came under pressure. The board came to feel it couldn’t trust Mr. Altman, a co-founder as well as CEO, who pushed for rapid development and deployment of AI, releasing the technology to the public. In his view, that was the best way to democratize the technology, expose its faults, and accelerate its benefits. His reinstatement and the overhaul of the board suggest that this techno-optimism has won out at OpenAI.
The ongoing struggle between techno-optimism and doomerism gets exaggerated in every period of rapid technological change, says Benjamin Breen, a historian at the University of California, Santa Cruz, and author of an upcoming book on utopian science in the mid-20th century. No one knows where AI will take humanity. If history is any guide, he adds, the extremists on both sides tend to get it wrong.
For the foreseeable future, then, the battle over AI may not be whether the machines control people, but who and what controls the machines.
“Money wins a lot,” says Lilly Irani, a professor of communication at the University of California, San Diego. “Techno-optimism and techno-doomerism both miss the point about who has the voice at the table and gets to decide how the technology is designed, developed, and deployed.”