
What does OpenAI’s rapid unscheduled disassembly mean for the future of AI?

Swinging from an $80 billion valuation to an existential crisis, in less time than it takes to rewatch five seasons of “The Wire”? That’s Tronc-level management.

Forgive media companies if they felt a little schadenfreude this weekend. For the past two decades, they’ve been criticized (often rightly, sometimes wrongly) for making terrible management decisions in the face of digital disruption. They’ve seen a few tech giants devour what used to be their revenue streams and be praised as geniuses for building a new generation of corporations.

But I struggle to remember a three-day span in which any media company has set itself on fire as profoundly as OpenAI just did. In less than 72 hours, it accomplished the impossible: making Tronc seem well run by comparison.

And it was a complete own goal. There was no extant crisis demanding a risky reaction. One day, it’s a company worth $80 billion (maybe $90 billion) and the most exciting new tech company in a decade. And now — well, if OpenAI announced in a few days that 95% of its employees had resigned and it was winding up business, would anyone be shocked?

To be fair, no one knows what even the nearest-term future of OpenAI will bring. But that’s damning in itself: Important companies aren’t supposed to be ephemeral creatures whose existence over the next few days is up for debate. When you’ve got crypto companies taunting your terrible management — “the board just torched $80B of value, destroyed a shining star of American capitalism” — you know things are bad.

For anyone who missed all the roller coaster’s twists, read a tick-tock (not a TikTok) from The New York Times. Or just scan these headlines from The Verge:

Friday:

Sam Altman fired as CEO of OpenAI.

OpenAI co-founder Greg Brockman is leaving, too.

What happened to Sam Altman?

Saturday:

OpenAI’s COO told employees that Sam Altman wasn’t fired for “malfeasance.”

Sam Altman says he has a new venture in mind.

OpenAI board in discussions with Sam Altman to return as CEO.

The OpenAI board is waffling on resigning, and that might push Sam Altman to start a new company after all.

Sunday:

It’s the endgame for Sam Altman’s potential return to OpenAI.

Monday:

The deal to bring Sam Altman back to OpenAI appears to be going sideways.

Sam Altman isn’t coming back to OpenAI.

Microsoft hires former OpenAI CEO Sam Altman.

OpenAI employees are openly criticizing the company’s leadership.

We’re all trying to find the guy who did this.

Hundreds of OpenAI employees threaten to resign and join Microsoft.

The latest this morning is that fired OpenAI CEO Sam Altman and many of his top deputies have reunited at Microsoft, a.k.a. OpenAI’s biggest customer/funder/partner. More than 500 of OpenAI’s roughly 700 employees have threatened to resign and join Altman at Microsoft.

That list includes, astonishingly, Ilya Sutskever, the OpenAI co-founder who led the coup against Altman just three days ago. Sutskever just tweeted: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” (Cue the hot-dog-car memes.)

The most difficult thing about all this is that we don’t really know what the coup was about. Sure, there’s high-minded talk of the philosophical differences between AI optimists and AI doomers, between those anxious to build AI quickly and those worried about the Borg. But the stated reason the board gave Friday afternoon was that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

And when pressed for further detail, the board provided…none. An executive was left to tell employees that the ouster decision was “not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.” If the board was so concerned about the company’s direction under Altman, why was it willing to negotiate his return barely a day later? Even the Times is willing to say, in a news story, that the board “looks silly” now.

OpenAI has always been a strange animal — a nonprofit company that owns a for-profit company with $1 billion in annual revenue. So rather than maximizing shareholder value, the OpenAI board is tasked with advancing the organization’s mission, which includes safety and broadly distributed benefits: “to ensure [artificial general intelligence] is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power…Our primary fiduciary duty is to humanity.” That divide between mission and money was sharpened by the huge success of ChatGPT, as The Atlantic’s Karen Hao and Charlie Warzel report. Illustrating the split: OpenAI still calls ChatGPT a “research preview” rather than a billion-dollar product.

What impact will all this have on publishers, who produce some of the information these AI models are trained on and who are increasingly looking to OpenAI and its ilk for revenue? It’s too early to say with any certainty — but in general, a competitive AI marketplace with multiple players should generate better returns for publishers than one dominated by a single company. So an implosion at the most successful AI company would seem, at some level, beneficial. But that’s complicated by the reality that OpenAI’s strengths haven’t vanished — they’ve simply been delivered to Microsoft, a little mom-and-pop firm worth a mere $2.78 trillion. OpenAI’s decline would also hobble the strongest AI competitor that isn’t a pre-existing tech giant (Google, Meta, Apple, Microsoft) — making it more likely that the next generation of winners will look a lot like the last one.

Then there’s the question of projects like OpenAI’s $5 million partnership with the American Journalism Project “to explore ways in which the development of artificial intelligence (AI) can support a thriving, innovative local news field.” Will there even be an OpenAI left to write the checks? What about the partnership with the Associated Press that gives AP access to OpenAI’s technology in exchange for access to AP’s archives? Or the $395,000 it gave NYU to support “workshops and discussions on existing and emerging journalism ethics issues”? OpenAI was already several pages through the Google/Facebook playbook, throwing money around the news industry to try to counteract media complaints — will any of that continue? Or will Microsoft follow suit?

In their letter threatening to quit, the 500-plus OpenAI employees write something remarkable. They report that the OpenAI board had “informed the leadership team that allowing the company to be destroyed ‘would be consistent with the mission.’” In other words: The mission of OpenAI is to produce beneficial technology. If OpenAI is going to produce harmful technology, the correct response is to self-destruct. A rapid unscheduled disassembly, you might say.

Whether or not that was board members’ intent Friday morning, they seem to have accomplished it.

Photo of then-OpenAI CEO Sam Altman at TechCrunch Disrupt, October 3, 2019, by TechCrunch used under a Creative Commons license.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).
POSTED     Nov. 20, 2023, 1:25 p.m.