OpenAI co-founder Ilya Sutskever recently announced the establishment of a new artificial intelligence company dedicated to developing “safe superintelligence.” The company, Safe Superintelligence Inc. (SSI), has two other co-founders: former OpenAI researcher Daniel Levy and former Apple AI executive Daniel Gross.
Safe Superintelligence Inc. holds that the emergence of “superintelligence” is imminent and that ensuring its safety is the most important technical challenge of our time. The company describes itself as a laboratory focused solely on safe superintelligence, putting safety first in its research and development.
“We are assembling a lean and efficient team comprised of the world’s best engineers and researchers, solely dedicated to building safe superintelligence and nothing else,” stated Safe Superintelligence Inc. in their announcement on X.
According to Bloomberg, Safe Superintelligence Inc. is a pure research organization with no current plans to commercialize AI products or services; its sole aim is to build a safe and powerful AI system.
In an interview with Bloomberg, Sutskever declined to disclose the names of the company’s financial backers or the total amount raised, though Gross said that raising capital would not be a problem for the company. Safe Superintelligence Inc. is headquartered in Palo Alto, California, with an office in Tel Aviv, Israel.
Before founding his own company, Sutskever left OpenAI in May 2024 following an internal dispute. He played a key role in the November 2023 “coup” that led to the brief removal of OpenAI CEO Sam Altman. Sutskever favors pure scientific research and technological innovation, prioritizing public AI safety over commercial interests, whereas Altman excels at business applications and market promotion, turning OpenAI’s research into tangible products and services. The two ultimately disagreed over strategic direction and the path of technological development.
In addition, Vox reported that OpenAI researchers Jan Leike and Gretchen Krueger recently left the company over concerns about AI safety. Vox also revealed that at least five “safety-conscious employees” have departed OpenAI since November 2023.
With the establishment of Safe Superintelligence Inc., Sutskever has once again drawn attention to the issue of AI safety. Building an AI system that is both powerful and safe is a major challenge for technological innovation, and a crucial preparation that must be made before AI becomes part of everyday human life.
Sources:
CryptoSlate, Bloomberg, Vox