Even as OpenAI CEO Sam Altman has laid out plans for regulation, calling for international licensing and oversight agencies in addition to responsible deployment, his development of the existentially risky technology is not slowing down.
And despite his regular calls for regulation, Altman said at a panel in London on May 24 that OpenAI has many criticisms of the EU’s AI Act, adding that the company would leave Europe if the Act became law in its current form.
Altman’s presence drew protesters who passionately criticized the idea of building artificial general intelligence (AGI) — a goal OpenAI is striving for — at all.
“Maybe he’s selling a grift, I sure as hell hope he is,” Gideon Futerman, one of the protesters, said. “And if he’s right and he’s building systems which are generally intelligent, the dangers are far, far bigger. And there’s a very legitimate question as to why they don’t stop.”
Futerman said in a Twitter thread that Altman took a few moments to speak to him and the other protesters, reinforcing his perception of a man who is dangerously misguided on AGI.
“Sam Altman is friendly, passionate and genuinely believes he is on a mission to save humanity,” Futerman wrote. “However, he fails to see that [the] technology isn’t inevitable, and fails to see that we could choose not to build systems with such risks in the first place.”
“Despite his good intentions, this mindset means he and OpenAI are still incredibly dangerous actors,” Futerman added.
The existential risk of AGI centers around “superintelligent machines beyond our control,” AI expert Gary Marcus wrote.
AGI models would possess intelligence equal to or greater than human intelligence, and though current AI models are nowhere near that level, it’s hard to tell how long it will be before AGI moves from science fiction to reality.
Futerman did not immediately respond to TheStreet’s request for comment.