
Artificial General Intelligence, Existential Risk, and Human Risk Perception

Authors: Mandel, David R.
Publication Year: 2023

Abstract

Artificial general intelligence (AGI) does not yet exist, but given the pace of technological development in artificial intelligence, it is projected to reach human-level intelligence within roughly the next two decades. After that, many experts expect it to far surpass human intelligence and to do so rapidly. The prospect of superintelligent AGI poses an existential risk to humans because there is no reliable method for ensuring that AGI goals stay aligned with human goals. Drawing on publicly available forecaster and opinion data, the author examines how experts and non-experts perceive risk from AGI. The findings indicate that the perceived risk of a world catastrophe or extinction from AGI is greater than for other existential risks. The increase in perceived risk over the last year is also steeper for AGI than for other existential threats (e.g., nuclear war or human-caused climate change). That AGI is a pressing existential risk is something on which experts and non-experts agree, but the basis for such agreement currently remains obscure.

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2311.08698
Document Type: Working Paper