AI Danger Alarmism Hits Overdrive With Provocateur Eliezer Yudkowsky: ‘I Thought We’d All Die!’ | Gateway Experts | by Paul Serran

Barely a week passes without several respected voices in the field of Artificial Intelligence sounding the alarm about the technology's impending dangers.
And this week is no different. The AI theorist and provocateur Eliezer Yudkowsky, who previously called for bombing machine learning data centers, has spoken out again.
Yudkowsky has earnestly defended his theory for decades, and now his doomsaying has gone into overdrive.
His latest bold prediction is that artificial intelligence will ‘inevitably’ lead to the death of humanity.
As one outlet reported:
“For decades, Yudkowsky was a firm believer in the ‘AI apocalypse’ scenario. His views have gained traction in recent years as advances in AI technology have accelerated, causing even the most distinguished computer scientists to question the potential consequences.”
Yudkowsky is concerned about the rapidly expanding capabilities of large language models, such as ChatGPT. He views these models as a significant threat, capable of ‘surpassing human intelligence’ and potentially causing ‘irreparable damage’.
“‘I don’t think we’re ready, I don’t think we know what we’re doing, and I think we’re all going to die,’ Yudkowsky said in an episode of Bloomberg’s AI IRL series.
“‘The situation is that we roughly don’t know what’s going on in GPT-4,’ he continued. ‘We have the theory but no ability to actually see the large matrix of fractions being multiplied and added in there, and [what those] numbers mean.’”
This latest warning follows those of many established voices in the field, such as the ‘godfather of artificial intelligence’ Geoffrey Hinton, a British computer scientist best known for his seminal work on neural networks that later formed the basis of today’s machine learning models.
Hinton quit his job at Google so he could speak freely about the matter.
“‘Until recently, I thought it would be 20 to 50 years before we have general purpose AI,’ said Hinton. ‘And now I think it may be 20 years or less.’”
For now, talk of AGI is frequently invoked to hype up current models’ capabilities.
“But regardless of industry hype touting its arrival, or how long it will be before AGI catches up with us, Hinton says we must carefully weigh the consequences now, which may include the small matter of trying to wipe out humanity.
“‘It’s not inconceivable, that’s all I’ll say,’ Hinton told CBS.
[…] “‘I think it makes a lot of sense for people to be concerned about this issue now, even though it’s not going to happen in the next year or two,’ Hinton said in the interview. ‘People should think about that problem.’”

Another established voice warning that the AI industry is leading us into disaster is Yoshua Bengio, who is considered one of the three ‘godfathers’ of artificial intelligence.
He feels ‘a little blue’ that his life’s work seems to be spiraling out of control.
Futurism reported:
“’You could say I feel lost,’ Bengio told the outlet. ‘But you have to keep going and you have to engage, discuss, encourage others to think with you’.”
His most pressing concern is ‘bad actors’ abusing AI.
“‘Maybe military, maybe terrorists, maybe someone very angry, psychotic,’ Bengio told the BBC. ‘So if it’s easy to program these AI systems to ask them to do something really bad, it could be really dangerous.’”
The AI controversy is poised to stay with us for the foreseeable future. Let’s hope something gets done while there is still time.