If you worry about humanity, you should be more scared of humans than of AI.
- Source :
- Bulletin of the Atomic Scientists. Sep 2023, Vol. 79, Issue 5, p289-292. 4p.
- Publication Year :
- 2023
Abstract
- Advances in artificial intelligence (AI) have prompted extensive public concern about this technology's capacity to contribute to the spread of misinformation, algorithmic bias, and cybersecurity breaches, and, potentially, to pose existential threats to humanity. We suggest that although these threats are both real and important to address, the heightened attention to AI's harms has distracted from human beings' outsized role in perpetuating these same harms. We suggest the need to recalibrate standards for judging the dangers of AI in terms of their risks relative to those of human beings. Further, we suggest that, if anything, AI can aid human beings in decision making aimed at improving social equality, safety, and productivity, and at mitigating some existential threats. [ABSTRACT FROM AUTHOR]
- Subjects :
- *ARTIFICIAL intelligence
- *ALGORITHMIC bias
- *HUMANITY
- *EQUALITY
- *DECISION making
Details
- Language :
- English
- ISSN :
- 0096-3402
- Volume :
- 79
- Issue :
- 5
- Database :
- Academic Search Index
- Journal :
- Bulletin of the Atomic Scientists
- Publication Type :
- Academic Journal
- Accession number :
- 171899024
- Full Text :
- https://doi.org/10.1080/00963402.2023.2245242