Now, in an in-depth piece for The New Yorker, writer Charles Duhigg—who was embedded inside OpenAI for months on a separate story—suggests that some board members found Altman "manipulative and conniving" and took particular issue with the way Altman allegedly tried to manipulate the board into removing fellow board member Helen Toner.
Board “manipulation” or “ham-fisted” maneuvering?
Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman's negative attention by co-writing a paper on different ways AI companies can "signal" their commitment to safety through "costly" words and actions. In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."
She also wrote that, "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."
Though Toner reportedly apologized to the board for the paper, Duhigg writes that Altman nonetheless started to approach individual board members urging her removal. In those talks, Duhigg says Altman "misrepresented" how other board members felt about the proposed removal, "play[ing] them off against each other by lying about what other people thought," according to one source "familiar with the board's discussions." A separate "person familiar with Altman's perspective" suggests instead that Altman's actions were just a "ham-fisted" attempt to remove Toner, and not manipulation.
That telling would line up with OpenAI COO Brad Lightcap's statement shortly after the firing that the decision "was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board." It might also explain why the board wasn't willing to go into detail publicly about arcane discussions of board politics for which there was little hard evidence.
At the same time, Duhigg's piece also gives some credence to the idea that the OpenAI board felt it needed to be able to hold Altman "accountable" in order to fulfill its mission to "make sure AI benefits all of humanity," as one unnamed source put it. If that was their goal, it seems to have backfired completely, with the result that Altman is now as close as you can get to a completely untouchable Silicon Valley CEO.
"It's hard to say if the board members were more terrified of sentient computers or of Altman going rogue," Duhigg writes.
The full New Yorker piece is worth a read for more about the history of Microsoft's involvement with OpenAI and the development of ChatGPT, as well as Microsoft's own Copilot systems. The piece also offers a behind-the-scenes view into Microsoft's three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board's moves "mind-bogglingly stupid."