Ilya Sutskever, a former chief scientist and co-founder of OpenAI, has left the company to launch a new venture called Safe Superintelligence Inc. (SSI) with co-founders Daniel Gross (former Apple AI director) and Daniel Levy (former OpenAI engineer).
The company is based in the U.S., with teams in Palo Alto and Tel Aviv, and will continue recruiting talent, raising funds, and seeking like-minded partners in its pursuit of safe, capable AI technology.
SSI's sole focus is building a safe superintelligence, insulated from the commercial pressures that push other labs toward rapid release. The company plans to advance AI capabilities through engineering and scientific breakthroughs while ensuring safety remains the top priority at every step.
Concerns have been raised across the industry about the rapid advancement of AI and its potential negative impact on humanity. SSI has no intention of selling AI products or services in the short term, which the founders say allows them to concentrate entirely on building safe superintelligence without commercial pressure or distraction.
Sutskever's departure from OpenAI, where he was a founding member and chief scientist, prompted speculation about his next move, about alleged communication issues with CEO Sam Altman, and about OpenAI's potential shift toward a for-profit model. There are unconfirmed claims that Sutskever led an effort to oust Altman before leaving the company.
Experts such as Geoffrey Hinton have warned that AI safety is receiving insufficient attention as the technology advances toward, and potentially beyond, human-level intelligence, underscoring the relevance of SSI's mission. The risk that increasingly powerful and autonomous AI systems could harm humanity has been a long-standing concern in the industry, prompting expert warnings and pushing governments to introduce regulations and reporting requirements.
Sutskever has long treated AI safety as a central concern in his research and collaborations, and his new company's business model prioritizes safety, security, and long-term progress over short-term commercial pressure.
It is unclear how Sutskever's new superintelligence lab will be funded.
Jan Leike, who worked closely with Sutskever at OpenAI, now leads a team at the rival AI firm Anthropic.
Sources: businessinsider.com, time.com, welt.de, techinasia.com, finanznachrichten.de, yourstory.com, ltn.com.tw, cio.economictimes.indiatimes.com, stock.stockstar.com, indiatoday.in, moneycontrol.com, zive.aktuality.sk, tech.hindustantimes.com, frenchweb.fr, tech.ifeng.com, verlagshaus-jaumann.de, nownews.com, news9live.com, dnn.de, newsbytesapp.com, and latestly.com.
This article was written in collaboration with Generative AI news company Alchemiq.