- Geoffrey Hinton isn't a fan of Sam Altman's leadership.
- Hinton said at a press conference on Tuesday that he thought Altman valued profits over safety.
- Hinton's protégé, OpenAI cofounder Ilya Sutskever, played a key role in Altman's brief ouster.
Newly minted Nobel Prize winner Geoffrey Hinton says he's proud that one of his former students played a part in Sam Altman's brief ouster from OpenAI in November 2023.
Hinton, who was awarded the 2024 Nobel Prize in Physics on Tuesday, weighed in on Altman and OpenAI at a press conference that same day.
"I'm particularly proud of the fact that one of my students fired Sam Altman," Hinton said, referencing his protégé and OpenAI's former chief scientist Ilya Sutskever.
Sutskever graduated with his Ph.D. in computer science from the University of Toronto in 2013. Hinton was Sutskever's doctoral supervisor.
"So OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe," Hinton said on Tuesday.
"And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that's unfortunate," he added.
Hinton told Nikkei Asia in an interview published in March that he thought Sutskever had foresight in recognizing AI's potential — and its dangers.
"In 2012, it still seemed that these digital intelligences were not nearly as good as people. They might be able to get about the same as people at recognizing objects and images, but at that time we didn't think they'd be able to deal with language and understanding complicated things," Hinton said.
"Ilya changed his mind before me. It turned out he was right," he added.
Sutskever left OpenAI in May, just months after OpenAI's board moved to fire Altman as CEO.
On November 17, 2023, the company's board said in a statement that it was removing Altman because he "was not consistently candid in his communications with the board." The board, however, didn't give further details about Altman's firing.
Sutskever, who cofounded OpenAI with Altman, later expressed regret for his role in Altman's ouster and joined other employees in calling for his reinstatement.
Following his departure, Sutskever said in June that he was starting a new AI company, Safe Superintelligence Inc.
Representatives for Altman at OpenAI, Hinton, and Sutskever at Safe Superintelligence Inc. didn't immediately respond to requests for comment from Business Insider sent outside regular business hours.
Hinton, who is known as the "godfather of AI" for his pioneering work in neural networks, has long warned about AI's dangers.
During an interview with CBS' "60 Minutes," which aired in October 2023, Hinton said it was possible that AI could eventually manipulate humans.
"They will be able to manipulate people, right?" Hinton told "60 Minutes."
"And these will be very good at convincing people cause they'll have learned from all the novels that were ever written — all the books by Machiavelli, all the political connivances, they'll know all that stuff. They'll know how to do it," he continued.
When approached for comment about the interview, Hinton told BI's Jordan Hart that it could take five to 20 years before AI becomes a real threat, while acknowledging that "it is still possible that the threat will not materialize."
To be sure, Hinton isn't the only one who has questioned Altman's leadership of OpenAI.
Under Altman, OpenAI, which was founded as a nonprofit in 2015, is set to transition into a for-profit company within the next two years.
Altman's fellow cofounder turned rival, Elon Musk, has repeatedly criticized him since Musk left the company's board in 2018.
"OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft," Musk wrote in an X post in February 2023.
"Not what I intended at all," Musk added.