Ilya Sutskever launches Safe Superintelligence Inc., an AI startup that will prioritize safety over 'commercial pressures'.
OpenAI co-founder and former chief scientist Ilya Sutskever is building a new AI safety startup.
The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” allowing the company to advance its AI system quickly. It also cites the external pressures that AI teams at companies such as Google and Microsoft often face, saying SSI's “singular focus” means it avoids “distraction by management overhead or product cycles.”
"Our business model means that security and progress are protected from short-term commercial pressures," the startup's announcement said. "That way we can scale in quiet conditions." In addition to Sutskever, SSI's co-founders are Daniel Gross, Apple's former head of artificial intelligence, and Daniel Levy, who previously worked as a technical staff member at OpenAI.
Last year, Sutskever led a push to oust OpenAI CEO Sam Altman. He left OpenAI in May and hinted at the start of a new project. Shortly after Sutskever's departure, AI researcher Jan Leike announced his resignation from OpenAI, saying that safety processes had "taken a backseat to shiny products." Gretchen Krueger, a policy researcher at OpenAI, also cited safety concerns when announcing her departure. While OpenAI pushes ahead with partnerships with Apple and Microsoft, SSI likely won't be pursuing similar deals anytime soon.