Theories of Intelligence

17 Items

Theories of intelligence and mind. Prerequisites to artificial general intelligence. Evolution of intelligence on Earth and elsewhere. Cognitive science.

All Items

  • Critical Review: Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning

    Kris Carlson
    DOI: https://doi.org/10.70777/si.v2i4.15315
  • Evidence Integrity Before Capability: A Prerequisite for Safe Artificial Intelligence

    Jennifer Flygare Kinne
    DOI: https://doi.org/10.70777/si.v2i6.16393
  • International AI Safety Report: First Key Update: Capabilities and Risk Implications

    Yoshua Bengio, Benjamin Bucknall, Stephen Clare, Carina Prunkl, Maksym Andriushchenko, Philip Fox, Tiancheng Hu, Cameron Jones, Sam Manning, Nestor Maslej, Vasilios Mavroudis, Conor McGlynn, Malcolm Murray, Shalaleh Rismani, Charlotte Stix, Lucia Velasco, Nicole Wheeler, Daniel Privitera, Sören Mindermann, Daron Acemoglu, Thomas G. Dietterich, Fredrik Heintz, Geoffrey Hinton, Nick Jennings, Susan Leavy, Teresa Ludermir, Vidushi Marda, Helen Margetts, John McDermid, Jane Munga, Arvind Narayanan, Alondra Nelson, Clara Neppel, Sarvapali D. (Gopal) Ramchurn, Stuart Russell, Marietje Schaake, Bernhard Schölkopf, Alvaro Soto, Lee Tiedrich, Gaël Varoquaux, Andrew Yao, Ya-Qin Zhang
    DOI: https://doi.org/10.70777/si.v2i6.16253
  • LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures

    Francisco Aguilera-Martinez, Fernando Berzal
    DOI: https://doi.org/10.70777/si.v2i2.14441
  • Measuring AI Agent Autonomy: Towards a Scalable Approach with Code Inspection

    Peter Cihon, Merlin Stein, Gagan Bansal, Sam Manning, Kevin Xu
    DOI: https://doi.org/10.70777/si.v2i3.15295
  • Multiple Unnatural Attributes of AI Undermine Common Anthropomorphically Biased Takeover Speculations: Eight Fundamental Differences between Biologically Evolved Humans and Digital AI

    Preston Estep
    DOI: https://doi.org/10.70777/si.v2i1.13801
  • On the Limits of Self-Improving in LLMs and Why AGI, ASI and the Singularity Are Not Near Without Symbolic Model Synthesis

    Hector Zenil
    DOI: https://doi.org/10.70777/si.v2i4.17159
  • Responsible Agentic Reasoning and AI Agents: A Critical Survey Proposal for Safe Agentic AI via Responsible Reasoning AI Agents (R2A2)

    Shaina Raza, Ranjan Sapkota, Manoj Karkee, Christos Emmanouilidis
    DOI: https://doi.org/10.70777/si.v2i6.16169
  • Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?

    Yoshua Bengio, Michael Cohen, Damiano Fornasiere, Joumana Ghosn, Pietro Greiner, Matt MacDermott, Sören Mindermann, Adam Oberman, Jesse Richardson, Oliver Richardson, Marc-Antoine Rondeau, Pierre-Luc St-Charles, David Williams-King
    DOI: https://doi.org/10.70777/si.v2i5.15569
  • The Asymptotic Intelligence Thesis: Rethinking the Ceiling of AGI Cognition

    Jeffrey E. Arle, MD, PhD, FAANS, FCNS
    DOI: https://doi.org/10.70777/si.v2i6.16255
  • The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

    Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, Mehrdad Farajtabar
    DOI: https://doi.org/10.70777/si.v2i6.15919
  • Thinking Isn’t an Illusion: Overcoming the Limitations of Reasoning Models via Tool Augmentations

    Zhao Song, Song Yue, Jiahao Zhang
    DOI: https://doi.org/10.70777/si.v2i6.15961
  • Timeline to Artificial General Intelligence, 2025–2030+: A Prediction of How AI Will Progress, Year by Year. Updated Oct 30, 2025.

    Gil Syswerda
    DOI: https://doi.org/10.70777/si.v2i6.16375
  • Understanding Limitations of Large Language Models from First Principles: Computational Complexity Circuit Class TCk

    Kris Carlson
    DOI: https://doi.org/10.70777/si.v2i6.16549
  • Why AI Alignment Failure Is Structural: Learned Human Interaction Structures and AGI as an Endogenous Evolutionary Shock

    Didier Sornette, Sandro Claudio Lera, Ke Wu
    DOI: https://doi.org/10.70777/si.v2i4.17163
  • Why Today’s Humanoids Won’t Learn Dexterity

    Rodney Brooks
    DOI: https://doi.org/10.70777/si.v3i3.17351
  • Why We Might Need Advanced AI to Save Us from Doomers, Rather than the Other Way Around: A Review of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares

    Preston Estep
    DOI: https://doi.org/10.70777/si.v2i6.16251