Where is Ilya?
This burning question, asked by every AI enthusiast since the boardroom drama at OpenAI late last year, finally has an answer.
Ilya Sutskever has officially unveiled his new company: Safe Superintelligence Inc. (SSI). The company's mission? Developing safe superintelligence. And that's it.
SSI is a pure research outfit, aiming to achieve safety through engineering breakthroughs built into its AI systems, rather than through guardrails bolted on after the fact. It has no near-term plans to sell AI products or services.
Ilya has repeatedly stated in public that his main work is figuring out how to make "superintelligence" safe and controllable. In other words, SSI's establishment allows Ilya to continue his work without the distractions faced by competitors like OpenAI, Google, and Anthropic.
Safe superintelligence is, in a sense, a return to OpenAI's original mission. OpenAI's initial goal was also to develop safe and beneficial artificial intelligence, free from commercial and competitive pressures, ensuring the safety of AI technology and long-term benefits for humanity.
However, as time passed, OpenAI and other companies came under commercial and competitive pressure to balance safety against market demands. The departure of Ilya and members of OpenAI's alignment team stemmed largely from irreconcilable differences over how much priority safety should receive at OpenAI. With SSI, he is attempting to resolve those tensions at a company of his own.
The new co-founder: a tech-enhanced Sam Altman?
SSI's other two co-founders are Daniel Gross and Daniel Levy. Levy is Ilya's former subordinate at OpenAI, where they collaborated on several important AI projects. He was also the leader of OpenAI's optimization team and is renowned for his work on GPT-4's architecture and optimization.
Daniel Gross's background is even more dramatic. Born in Jerusalem, Israel in 1991, Gross spent the first eighteen years of his life in his hometown, planning to join the Israel Defense Forces. However, during a pre-enlistment trip to San Francisco, he was swept up in Silicon Valley's startup fever. In 2010, Gross was accepted into Y Combinator, becoming the youngest founder in YC history. He never returned to Israel, forgoing military service.
Gross subsequently launched the search engine Greplin, later renamed Cue. After Apple acquired Cue in 2013, Gross joined Apple to lead AI work. In 2017, he became a partner at YC, focusing on artificial intelligence and creating the dedicated "YC AI" program.
In 2021, Daniel Gross and former GitHub CEO Nat Friedman established the early-stage venture capital fund AI Grant. In 2023, AI Grant deployed Andromeda, its own supercomputing cluster of 2,512 H100 GPUs, made available to the startups in its portfolio.
As an investor, Gross's portfolio is impressive, including investments in Uber, Coinbase, Figma, GitHub, Airtable, Notion, CoreWeave, Character.ai, Perplexity.ai, Pika, and Suno.
Gross's portfolio reflects a broad read of where the tech industry is heading, with repeated bets on projects that have disruptive potential. His choices show a particular focus on technology-driven innovation in AI and software: he tends to back startups that can reshape market landscapes and grow fast, especially those poised to change how people live and work.
In this newly established company, Ilya takes on the role of Chief Scientist, responsible for revolutionary breakthroughs, while Gross, in some ways, replaces Altman's role.
Comparing the two reveals some obvious similarities and differences. Both Gross and Altman "graduated" from Paul Graham's school, have strong ties to YC, and bring deep experience in tech entrepreneurship and investment. With their extensive networks and resources in Silicon Valley, either is well suited to be the public face of a company.
Moreover, compared to the more "politically savvy" Altman, who travels the world meeting legislators, regulators, and investors, Gross is closer in temperament to Ilya. Both are technical talents, and Gross's grasp of technology could provide crucial support for the company's strategy, technical direction, and resourcing. While Altman's commercial ambitions for OpenAI are well known, a Gross-Ilya partnership may be better suited to focusing on AI safety and the research itself, with Gross's business and investment experience covering the areas where Ilya is weakest.
So Gross's joining is a bit like Ilya finding a new Sam Altman after leaving OpenAI. One can only wonder what Altman might think of this.
"Daniel Gross, don't mess this up"
This isn't the first time OpenAI employees have left over safety disagreements to start their own AI company. In 2021, former OpenAI researchers Dario and Daniela Amodei founded Anthropic, which likewise set out to prioritize safety more than its industry competitors. By comparison, Ilya's SSI looks even purer and more mysterious.
In the AI industry, two camps face off: the cautious and the fast-moving. One follows Silicon Valley's "move fast and break things" ethos and is often labeled "effective accelerationism." The other, associated with "effective altruism," wants potentially disruptive AI technology to be proven safe, thoroughly tested in lab environments, and fitted with appropriate safeguards before release. Both sides hope the benefits of AI will reach everyone globally.
Ilya has long been a safety advocate in the AI field, much like his mentor Geoffrey Hinton, who resigned from Google partly to warn about AI risks and has said he regrets and fears his life's work. In recent years, Ilya's research has centered on superintelligence alignment: last year he created a superalignment team within OpenAI, aiming to ensure that future AI systems won't harm humans. He has grown increasingly concerned that AI could become dangerous, even existentially so, and SSI lets him focus on preventing that outcome.
The name Safe Superintelligence Inc sounds like it could come from the Fallout series, a post-apocalyptic game set in a world devastated by nuclear war. This is not only because Ilya mentioned that SSI aims for "nuclear safety" level security, but also because the game often deals with the potential risks and moral dilemmas brought by radical technological development.
This sense of mission also echoes the corporations of Fallout. In the game, a company called Vault-Tec builds shelters ostensibly meant to protect humanity from nuclear annihilation, but they often hide darker purposes and experiments. The name Safe Superintelligence Inc likewise conveys a clear mission, and just as Fallout shrouds the stories and motives behind its corporations in mystery, SSI leaves much to the imagination.
Even the most forward-thinking and protective tech companies can produce unexpected consequences due to internal experiments, decisions, or changes in the external environment. Whether SSI's mission can truly be realized, or whether it will deviate from its original intentions in practice, remains to be seen.
This might depend more on whether Daniel Gross will become another Sam Altman.