Ilya Sutskever breaks silence on OpenAI departure: “I had a big new vision”

In a newly unsealed deposition, the AI pioneer reveals years of tension, doubts over Sam Altman’s leadership, and how it all led to his new venture, Safe Superintelligence.

Ilya Sutskever, the elusive scientist who co-founded OpenAI and helped build ChatGPT, has spoken publicly for the first time about the events that tore apart the world’s most influential AI lab. In a deposition taken on October 1, 2025, as part of Elon Musk’s lawsuit against OpenAI and Sam Altman, Sutskever described the breakdown of trust inside the company, his role in Altman’s temporary ouster, and the vision that ultimately drove him to leave.
“Ultimately, I had a big new vision,” Sutskever said. “And it felt more suitable for a new company.” That company is Safe Superintelligence (SSI), the secretive AI startup he founded last year, now valued at $32 billion after raising more than $3 billion in funding.
Ilya Sutskever sitting alongside Sam Altman at Tel Aviv University in 2023.
(Photo: Avigail Uzi)
At several points in the testimony, Sutskever acknowledged authoring detailed memos to OpenAI's independent directors accusing Altman of "a consistent pattern of lying" and "pitting his executives against one another." He admitted recommending Altman's termination in late 2023 and sending the documents as disappearing emails out of fear they might leak.
“I wanted them to become aware of it,” he said of the board. “But my opinion was that action was appropriate.”
The deposition also confirmed, for the first time under oath, that OpenAI’s board considered a merger with rival AI company Anthropic immediately after Altman’s removal in November 2023. Sutskever recalled a board call with Anthropic’s founders, Dario and Daniela Amodei, and said he was “very unhappy” about the proposal.
“I really did not want OpenAI to merge with Anthropic,” he testified. “I just didn’t want to.”
Other board members, he said, were "a lot more supportive" of the idea, particularly Helen Toner, then a key director. The discussions ended quickly after Anthropic raised "practical obstacles."
Sutskever’s testimony revisits an extraordinary week in Silicon Valley, when the OpenAI board’s abrupt firing of Altman triggered near-rebellion inside the company. Sutskever admitted that he had expected a muted reaction from staff: “I had not expected them to cheer, but I had not expected them to feel strongly either way.” Instead, hundreds of employees threatened to quit unless Altman was reinstated.
Within days, the board reversed itself and Altman returned, while Sutskever, once his ally, was effectively sidelined. He resigned the following spring.
Asked why he left, Sutskever’s answer was characteristically understated but telling: “I had a big new vision, and it felt more suitable for a new company.” He said SSI was founded to “do a new and different kind of research,” declining to elaborate further.