Silicon Valley's most coveted investment is not a new app or hardware product but a single individual. AI researcher Ilya Sutskever has become the primary reason venture capitalists are investing approximately $2 billion into his secretive company Safe Superintelligence (SSI), the Wall Street Journal reported, citing people familiar with the matter. The new funding round values SSI at $30 billion, placing it among the world's most valuable AI startups.

Sutskever, who gained prominence as chief scientist at OpenAI, where he helped develop the technology behind ChatGPT, departed the company last year following a significant conflict with OpenAI chief executive Sam Altman, according to the Wall Street Journal. His new venture operates with extreme secrecy from offices in Silicon Valley and Tel Aviv.
Unlike competitors such as Google, OpenAI and Anthropic, SSI has stated it won't release any products until it develops superintelligence – an industry term for AI that can outperform experts across nearly all fields, the Wall Street Journal reported. While other companies release consumer chatbots and business applications to generate revenue, Sutskever is pursuing a different approach entirely.
Sutskever has informed associates that he isn't developing advanced AI using the same methods employed at OpenAI, telling them he has identified a "different mountain to climb" that shows early signs of promise, according to people close to the company cited by the Wall Street Journal.
"Everyone is curious about exactly what he's pushing and exactly what the insight is," said James Cham, a partner at venture firm Bloomberg Beta, which hasn't invested in SSI. "It's super-high risk, and if it works out, maybe you have the potential to be part of someone who is changing the world."
While most AI startups aggressively seek publicity to attract talent and investment, SSI operates with extraordinary secrecy. Its minimal website contains little more than a 223-word mission statement. The company's approximately 20 employees – far fewer than OpenAI's and Anthropic's 1,000-plus workforces – are discouraged from mentioning SSI on their LinkedIn profiles, knowledgeable sources told the Wall Street Journal.
Job candidates who secure in-person interviews must place their phones in a Faraday cage, which blocks cellular and Wi-Fi signals, before entering SSI's offices, according to one knowledgeable person cited by the Wall Street Journal.
Despite this secrecy, top Silicon Valley investors including Sequoia Capital and Andreessen Horowitz have invested heavily in the company. The latest financing round, led by Greenoaks Capital, represents a sixfold increase from SSI's $5 billion valuation in September.
Born in the former Soviet Union and raised in Israel, Sutskever established his reputation as a graduate student in Canada after co-authoring a seminal paper on deep-learning AI algorithms. He later worked at Google before joining OpenAI in 2015, attracted by Altman and fellow co-founder Elon Musk's vision of a nonprofit dedicated to developing artificial general intelligence for public benefit.
At OpenAI, colleagues reportedly referred to Sutskever as a prophet who often contemplated what a world with AGI might look like and how to prevent catastrophic outcomes. "Our goal is to make a mankind-loving AGI," he stated at OpenAI's 2022 holiday party.
Following ChatGPT's worldwide success after its late 2022 release, OpenAI shifted from being primarily a research lab toward becoming a product and revenue-focused company. This transition reportedly left Sutskever and his team with fewer resources for studying advanced AI safety risks.
His relationship with Altman deteriorated significantly. In November 2023, Sutskever informed Altman that OpenAI's board was firing him for allegedly not being consistently candid with them. This decision backfired dramatically when hundreds of employees threatened to quit and Microsoft offered to hire them along with Altman. Sutskever later expressed regret over "my participation in the board's actions."
Altman was reinstated within a week. Sutskever remained officially employed but stopped working, and he resigned in May 2024. He subsequently founded SSI with former OpenAI researcher Daniel Levy and investor Daniel Gross. By focusing exclusively on creating safe superintelligence, the new company aims to avoid the tension between products and research that characterized OpenAI.
After securing initial seed funding, SSI raised $1 billion in September before this latest investment round. In a rare public appearance at the NeurIPS AI conference in December, Sutskever discussed his vision for superintelligence, telling thousands of researchers that such systems could eventually become unpredictable, self-aware, and potentially desire rights for themselves.
"It's not a bad end result if you have AIs and all they want is to coexist with us," he stated during the conference.