The Perilous State of AI Governance, June 2025

Authors

Carlson, K. W.

DOI:

https://doi.org/10.70777/si.v2i2.14801

Keywords:

big beautiful bill, ai governance, CA Senate bill S813, agi safety & value alignment, legal loopholes

Abstract

The US is at a crossroads of AI governance. Too much regulation will impede AI development and could hand a possibly decisive advantage to US adversaries. Too little, or ineffective, regulation will expose the world to excessive risk. Overly complex regulation will exclude small and medium-sized providers who cannot afford the legal staff and consultants needed to navigate it.

References

In contrast to the medium-term goal of provably safe systems, which I strongly support: Tegmark, M., & Omohundro, S. (2023). Provably safe systems: The only path to controllable AGI. https://arxiv.org/abs/2309.01933

Chung, K.-S., & Fortnow, L. (2014). Loopholes. The Economic Journal. doi:10.1111/ecoj.12203.

Omohundro, S. (2014). Cryptocurrencies, Smart Contracts, and Artificial Intelligence. AI Matters, 1(2), 19-21. doi:10.1145/2685328.268533.

Frazier, K., & Thierer, A. (2025, May 9). 1,000 AI bills: Time for Congress to get serious about preemption.

Frazier, K. (2024, June 4). We're not ready for AI liability. AI Frontiers.

Figure: US AI regulation vs AI risks


Published

2025-06-09

How to Cite

Carlson, K. W. (2025). The Perilous State of AI Governance, June 2025. SuperIntelligence - Robotics - Safety & Alignment, 2(2). https://doi.org/10.70777/si.v2i2.14801
