Humans and robots are nearly ethically equivalent
- Author
- Jorgenson, Corinne, Willems, Jurgen, Ozkes, Ali I., and Vanderelst, Dieter
- Abstract
Social care robots are increasingly used in health care. As the autonomy of these machines continues to increase, they will likely require some capacity for ethical reasoning. Current efforts to imbue robots with ethical decision-making capabilities tend to assume that people perceive robots and human actors as ethically equivalent entities. That is, it is assumed that the ethical behavior of a robot can be modeled on what is considered ethical human behavior. Despite its ubiquity, this assumption is a hypothesis that warrants empirical investigation. In this article, we partially replicate earlier work that tested this hypothesis. We run two online experiments assessing the ethical equivalence of robots and humans. In addition, we present a new analysis of the data reported by Hidalgo et al. (How Humans Judge Machines, MIT Press, 2021). We compare the current (and previous) data with these authors’ work to assess the tenability of the ethical equivalence hypothesis. Generally, respondents tend to rate robot actions as slightly less ethical than those performed by humans. However, the differences are minor and depend on the scenario. Therefore, we conclude that people treat humans and robots as nearly ethically equivalent. This implies that ethical rules for robots can be derived from existing frameworks and guidelines regulating human behavior.
- Published
- 2024