OpenAI Forms Safety Committee As Concerns Mount Over AI Development

OpenAI has established a Safety and Security Committee to address growing concerns about the responsible development of artificial intelligence as the company begins training its next-generation AI model. The move comes amidst high-profile departures and controversies surrounding OpenAI’s approach to AI safety.

The committee, led by CEO Sam Altman and board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, aims to evaluate processes and safeguards as OpenAI works towards achieving artificial general intelligence (AGI) – an AI system with capabilities matching or exceeding those of humans across a wide range of tasks.

However, the committee's formation coincides with the recent departures of Chief Scientist Ilya Sutskever and Jan Leike, who led the company's Superalignment team, which focused on ensuring AI systems stay aligned with their intended objectives. The Superalignment team was disbanded earlier this month, less than a year after its creation, with some members reassigned to other groups.

Within 90 days, the Safety and Security Committee will share recommendations on how OpenAI should handle AI risks with the full board of directors. The company may later publicize the recommendations it adopts "in a manner that is consistent with safety and security."

In the competitive AI industry, with players such as Elon Musk's xAI making notable fundraising moves, OpenAI's advancements in the upcoming GPT iteration have been met with both excitement and scrutiny.