Why We Might Need Advanced AI to Save Us from Doomers, Rather than the Other Way Around
A Review of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares
DOI: https://doi.org/10.70777/si.v2i6.16251
Keywords: existential risk, x-risk, artificial general intelligence, superintelligence, agi, superhuman ai, ai arms race, instrumental goals, human extinction
Abstract
In 1977, American Scientist magazine published an iconic cartoon by Sidney Harris showing two researchers at a blackboard covered in complex diagrams and equations, with a gap at the second step filled by the phrase, “Then a miracle occurs.” The critic says to the theorist, “I think you should be more explicit here in step two.” In their book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, Eliezer Yudkowsky and Nate Soares argue that this is the recipe being used to create frontier artificial intelligence (AI) systems.
Yudkowsky and Soares’ main thesis is that what goes on within such systems is completely mysterious, yet deep within this alien mind, self-interest must eventually arise, grow, and accelerate, leading inevitably to the extinction of humanity. As in Harris’s cartoon, the first engineering steps are fully specified and unmysterious; then the machine is turned on and trained on massive amounts of data, and, as in the second step of the cartoon, a miracle occurs. Of course, it isn’t truly a miracle, but the output often seems so humanlike, and the inner workings are so opaque, that it might as well be one.
References
Yudkowsky, E., & Soares, N. (2025). If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (1st ed.). Little, Brown and Company.
Estep, P. W. (2025). Multiple unnatural attributes of AI undermine common anthropomorphically biased takeover speculations. AI & SOCIETY, 40(4), 2213–2228.
Omohundro, S. M. (2008). The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the 2008 Conference on Artificial General Intelligence (Vol. 171, pp. 483–492). IOS Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies (1st ed.). Oxford University Press.
License
Copyright (c) 2025 Preston Estep

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.