First-person representations and responsible agency in AI
- Source :
- Synthese
- Publication Year :
- 2021
- Publisher :
- Springer Science and Business Media LLC
-
Abstract
- In this paper we investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, we identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. We clarify what this widespread assumption involves and conclude that the possibility of AISs’ moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.
- Subjects :
- Artificial intelligence
Consciousness
Moral agency
Agency (philosophy)
Moral responsibility
First-person representation
De se representation
Control
Philosophy of language
Philosophy of science
Epistemology
Philosophy
Psychology
Details
- ISSN :
- 1573-0964 (electronic) and 0039-7857 (print)
- Volume :
- 199
- Database :
- OpenAIRE
- Journal :
- Synthese
- Accession number :
- edsair.doi.dedup.....007f067204f0f5005134509b640556d9
- Full Text :
- https://doi.org/10.1007/s11229-021-03105-8