Inside OpenAI’s 9-Person Safety Committee Led by All-Powerful Sam Altman

Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee, led by CEO Sam Altman and other board members, to guide its safety recommendations going forward, the startup revealed in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.

The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparing for the long-term risks of A.I., was disbanded, with its members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, some citing concerns about the company’s current trajectory.

The “Superalignment” team was led by Ilya Sutskever, OpenAI’s co-founder and former chief scientist who played a lead role in the unsuccessful ousting of Altman last November. Sutskever announced his resignation on May 14, ending his almost decade-long tenure at the company. Jan Leike, who co-ran the Superalignment team alongside Sutskever, left the startup shortly afterwards and in an X post claimed that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He recently joined Anthropic, a rival A.I. startup founded by former OpenAI employees Dario and Daniela Amodei.

“It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”

Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, the committee will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor, alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo was notably the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.

Meanwhile, former board members Helen Toner and Tasha McCauley recently called for increased A.I. regulation in an Economist article that described Altman as having “undermined the board’s oversight of key decisions and internal safety protocols.”

The new committee is filled with OpenAI insiders

OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.

OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.

While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.

“We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”
