Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress

Just days after OpenAI announced it’s training its next iteration of GPT, the company’s CEO Sam Altman said OpenAI doesn’t need to fully understand its product in order to release new versions. In a live interview today (May 30) with Nicholas Thompson, the CEO of The Atlantic, at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, Altman spoke about A.I. safety and the technology’s potential to benefit humanity. However, the CEO didn’t seem to have a good answer to the basic question of how GPT works. 

“We certainly have not solved interpretability,” Altman said. In the realm of A.I., interpretability—or explainability—is the understanding of how A.I. and machine learning systems make decisions, according to Georgetown University’s Center for Security and Emerging Technology. “If you don’t understand what’s happening, isn’t that an argument to not keep releasing new, more powerful models?” asked Thompson. Altman danced around the question, ultimately responding that, even without that full understanding, “these systems [are] generally considered safe and robust.”

“We don’t understand what’s happening in your brain at a neuron-by-neuron level, and yet we know you can follow some rules and can ask you to explain why you think something,” said Altman. By likening GPT to the human brain, Altman framed the model as a black box, accepting a degree of mystery about how it functions. Like human brains, generative A.I. systems such as GPT create new content based on existing data sets and can supposedly learn over time. GPT may not have emotional intelligence or human consciousness, but it can be difficult to understand how algorithms—and the human brain—come to the conclusions they do.

Earlier this month, OpenAI released GPT-4o and announced this week that it has “recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI [artificial general intelligence].”

As OpenAI continues its iterative deployment, safety remains a primary concern—particularly as the company recently disbanded its previous safety team, led by former chief scientist Ilya Sutskever, and created a new safety team led by Altman himself. Earlier this week, former OpenAI board members Helen Toner and Tasha McCauley published a joint opinion piece in The Economist on this decision, writing, “We believe that self-governance cannot reliably withstand the pressure of profit incentives.”

Altman reiterated at the summit that the new safety and security committee was formed to help OpenAI get ready for the next model. “If we are right that the trajectory of improvement is going to remain steep,” then figuring out what structures and policies companies and countries should put in place, with a long-term perspective, is paramount, Altman said.
