OpenAI's new safety board has more power and no Sam Altman
The company's new board now has the power to pump the brakes on new AI model launches.
OpenAI has announced significant changes to its safety and security practices, including the establishment of a new independent board oversight committee. Notably, CEO Sam Altman is no longer part of the safety committee, a departure from the previous structure.
The newly formed Safety and Security Committee (SSC) will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other key members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation.
This restructured committee supersedes the version formed in June 2024, which counted Altman among its members and was tasked only with making recommendations on critical safety and security decisions for OpenAI projects and operations.
The SSC's responsibilities now extend beyond recommendations. It will oversee safety evaluations for major model releases and, crucially, will have the authority to delay a launch until safety concerns are adequately addressed.
This restructuring follows a period of scrutiny regarding OpenAI's commitment to AI safety. The company has faced criticism in the past for disbanding its Superalignment team and the departures of key safety-focused personnel. The removal of Altman from the safety committee appears to be an attempt to address concerns about potential conflicts of interest in the company's safety oversight.
OpenAI's latest safety initiative also includes plans to enhance security measures, increase transparency about its work, and collaborate with external organizations. The company has already reached agreements with the US and UK AI Safety Institutes to collaborate on researching emerging AI safety risks and standards for trustworthy AI.