Nate Soares, a Berkeley researcher and AI "doomer," predicts human extinction: "Letting these experiments continue is an insane gamble."


Nate Soares, a prominent artificial intelligence researcher, is among the scientists known as "doomers" who warn of a potential cataclysm linked to the uncontrolled evolution of AI. Now head of the Machine Intelligence Research Institute (MIRI) in Berkeley, California, after earlier stints at Microsoft and Google, Soares warns that a superintelligence, once achieved, could escape human control and threaten the survival of humanity. In his latest book, co-authored with Eliezer Yudkowsky, he argues that certain experiments must be halted before superintelligence gains the upper hand; continuing them without safeguards is, in his view, a reckless gamble.

Nate Soares, a Researcher Concerned About the Effects of Artificial Intelligence

Nate Soares made a name for himself in machine learning during his time at tech giants such as Microsoft and Google. Today, he heads the Machine Intelligence Research Institute in Berkeley, which studies the implications of artificial intelligence. Despite his successes, he remains deeply concerned about the uncontrolled development of this technology.

His career led him to collaborate with Eliezer Yudkowsky, a pioneer of AI safety research, at the Machine Intelligence Research Institute. Together, they have published numerous essays and books aimed at raising awareness among the public and policymakers about the dangers of artificial intelligence beyond human control.

The Rise of the "Doomers"

"Doomers" are those who see the evolution of artificial intelligence not as a promise but as a threat. Unlike technological optimists, they believe that unless certain red lines are drawn, humanity could lose control to a digital intelligence capable of developing in unpredictable ways.

According to them, a superintelligence, an AI that would far surpass human capabilities in every domain, could emerge. Nate Soares, in particular, has stated that the probability of a catastrophic outcome cannot be ignored. For him, "letting these experiments continue is an insane gamble."


Nate Soares’s Grim Predictions

Nate Soares and Eliezer Yudkowsky recently co-authored a book with the evocative title: *If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All*. This work emphasizes the idea that once superintelligence is unleashed, regaining any control may well be impossible.

Soares is not alone in his pessimistic predictions. Figures such as Elon Musk have expressed similar apprehensions, and other experts, though less radical, support stricter regulation of artificial intelligence experimentation. Calls for serious oversight now resonate among scientific and technological authorities.

Calls for a Moratorium
