Why We Might Need Advanced AI to Save Us from Doomers, Rather than the Other Way Around

A Review of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares

Authors

  • Preston Estep, Chief Scientist, Mind First Foundation; Chief Safety Officer, Ruya AI

DOI:

https://doi.org/10.70777/si.v2i6.16251

Keywords:

existential risk, x-risk, artificial general intelligence, superintelligence, agi, superhuman ai, ai arms race, instrumental goals, human extinction

Abstract

In 1977, American Scientist magazine published an iconic cartoon by Sidney Harris showing two researchers at a blackboard covered in complex diagrams and equations, with a gap at the second step filled by the phrase, “Then a miracle occurs.” The critic says to the theorist, “I think you should be more explicit here in step two.” In their book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, Eliezer Yudkowsky and Nate Soares argue that this is the recipe being used to create frontier artificial intelligence (AI) systems.

Yudkowsky and Soares’ main thesis is that what goes on within such systems is completely mysterious, yet deep within this alien mind, self-interest must eventually arise, grow, and accelerate, leading inevitably to the extinction of humanity. As in Harris’s cartoon, the first engineering steps are fully defined and unmysterious; then the machine is turned on and trained on massive amounts of data, and, as in the second step of the cartoon, a miracle occurs. Of course, it isn’t truly a miracle, but the output often seems so humanlike, and the inner workings are so mysterious, that it might as well be one.

Author Biography

Preston Estep, Chief Scientist, Mind First Foundation; Chief Safety Officer, Ruya AI

Preston Estep is the founder and Chief Scientist of the Mind First Foundation and the Rapid Deployment Vaccine Collaborative (RaDVaC), and co-founder and Chief Safety Officer of Ruya AI. He is a co-founder of and/or adviser to multiple startup companies and non-profit organizations at the intersection of genetics and computing, and he has authored many publications across these disciplines. Since the beginning of the SARS-CoV-2 pandemic in 2020, he has focused on AI and AI safety and on decentralized pathogen countermeasures. He began writing and speaking about AI safety in the early 2000s and gave his first public talk on the subject at the 2008 Singularity Summit in San Jose, CA (an annual event organized by the Singularity Institute for Artificial Intelligence). Dr. Estep graduated from Cornell University and received his Ph.D. in Genetics at Harvard University in the laboratory of pioneering scientist George Church.

References

Yudkowsky, E., & Soares, N. (2025). If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (1st ed.). Little, Brown and Company.

Estep, P. W. (2025). Multiple unnatural attributes of AI undermine common anthropomorphically biased takeover speculations. AI & SOCIETY, 40(4), 2213–2228.

Omohundro, S. M. (2008). The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the 2008 Conference on Artificial General Intelligence (Vol. 171, pp. 483–492). IOS Press.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Published

2025-10-22

How to Cite

Estep, P. (2025). Why We Might Need Advanced AI to Save Us from Doomers, Rather than the Other Way Around: A Review of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares. SuperIntelligence - Robotics - Safety & Alignment, 2(6). https://doi.org/10.70777/si.v2i6.16251