OpenAI co-founder and former chief scientist Ilya Sutskever announced he’s launching a new AI company that will focus primarily on developing a “safe superintelligence.”
Former OpenAI member Daniel Levy and former Apple AI lead Daniel Gross are also co-founders of the company, dubbed Safe Superintelligence Inc., according to the June 19 announcement.
According to the company, superintelligence is “within reach,” and ensuring that it is “safe” for humans is the “most important technical problem of our time.”
The company added that it intends to be a “straight-shot safe superintelligence (SSI) lab” with technology as its sole product and safety as its primary goal. It added:
“We’re assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
Safe Superintelligence Inc. said it aims to advance capabilities as quickly as possible while pursuing safety. The company’s focused approach means that management overhead, short-term commercial pressures, and product cycles will not divert it from its goal.
“This way, we can scale in peace.”
The company added that its investors are on board with its approach of prioritizing safe development over everything else.
In a Bloomberg interview, Sutskever declined to name financial backers or state the amount raised so far, while Gross commented broadly, saying that “raising capital is not going to be” a problem for the company.
Safe Superintelligence Inc. will be based in Palo Alto, California, with offices in Tel Aviv, Israel.
Launch follows safety concerns at OpenAI
The launch of Safe Superintelligence follows a dispute at OpenAI. Sutskever was part of the group that attempted to remove OpenAI CEO Sam Altman from his role in November 2023.
Early reporting, including from The Atlantic, suggested that safety was a concern at the company around the time of the dispute. Meanwhile, an internal company memo suggested Altman’s attempted firing was related to a communication breakdown between him and the firm’s board of directors.
Sutskever stayed out of the public eye for months after the incident and formally left OpenAI several weeks ago, in May. He did not cite any reasons for his departure, but recent developments at the AI firm have brought the issue of AI safety to the forefront.
OpenAI employees Jan Leike and Gretchen Krueger recently left the company, citing concerns about AI safety. Meanwhile, reports from Vox suggest that at least five other “safety-conscious employees” have left since November.
In an interview with Bloomberg, Sutskever said that he maintains a good relationship with Altman and that OpenAI is aware of the new company “in broad strokes.”