A former OpenAI leader who resigned this week said Friday that the company has prioritized flashy products over safety in artificial intelligence.
Jan Leike, who co-led OpenAI’s “Superalignment” team alongside a company co-founder who also resigned recently, wrote on the social media platform X that he had originally joined the San Francisco-based company because he believed it was the best place to advance AI research.
“However, I have been in disagreement with OpenAI’s leadership about the company’s main priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday.
Trained as an AI researcher, Leike said more attention should be paid to preparing for the next generation of AI models, including work on safety and analysis of the societal impacts of such technologies.
He emphasized that building machines smarter than humans is an inherently risky endeavor and that OpenAI shoulders an enormous responsibility on behalf of all humanity.
“OpenAI must prioritize safety as an AGI company,” wrote Leike, using the abbreviation for artificial general intelligence, which envisions machines performing tasks as competently as humans.
In response, OpenAI CEO Sam Altman said he was sad to see Leike go, acknowledged the company had more work to do, and pledged to write a longer post on the issue soon.
The company also confirmed Friday that it had dissolved Leike’s Superalignment team, which was formed last year to focus on AI risks, and is integrating its members into the company’s broader research efforts.