11 results
Search Results
2. AI, Suicide Prevention and the Limits of Beneficence.
- Author: Halsband, Aurélie and Heinrichs, Bert
- Abstract: In this paper, we address the question of whether AI should be used for suicide prevention on social media data. We focus on algorithms that can identify persons with suicidal ideation based on their postings on social media platforms and investigate whether private companies like Facebook are justified in using these. To find out whether that is the case, we begin by providing two examples of AI-based means of suicide prevention in social media. Subsequently, we frame suicide prevention as an issue of beneficence, develop two fictional cases to explore the scope of the principle of beneficence, and apply the lessons learned to Facebook's employment of AI for suicide prevention. We show that Facebook is neither acting under an obligation of beneficence nor acting meritoriously. This insight leads us to the general question of who is entitled to help. We conclude that private companies like Facebook can play an important role in suicide prevention if they comply with specific rules, which we derive from beneficence and autonomy as core principles of biomedical ethics. At the same time, public bodies have an obligation to create appropriate framework conditions for AI-based tools of suicide prevention. As an outlook, we depict how cooperation between public and private institutions can make an important contribution to combating suicide and, in this way, put the principle of beneficence into practice.
- Published: 2022
3. Human Goals Are Constitutive of Agency in Artificial Intelligence (AI)
- Author: Popa, Elena
- Published: 2021
4. On the Duty to Be an Attention Ecologist.
- Author: Aylsworth, Tim and Castro, Clinton
- Abstract: The attention economy, the market where consumers' attention is exchanged for goods and services, poses a variety of threats to individuals' autonomy, which, at minimum, involves the ability to set and pursue ends for oneself. It has been argued that the threat wireless mobile devices pose to autonomy gives rise to a duty to oneself to be a digital minimalist, one whose interactions with digital technologies are intentional such that they do not conflict with their ends. In this paper, we argue that there is a corresponding duty to others to be an attention ecologist, one who promotes digital minimalism in others. Although the moral reasons for being an attention ecologist are similar to those that motivate the duty to oneself, the arguments diverge in important ways. We explore the application of this duty in various domains where we have special obligations to promote autonomy in virtue of the different roles we play in the lives of others, such as parents and teachers. We also discuss the consequences of our arguments for employers, software developers, and policy makers.
- Published: 2022
5. The Future of Work: Augmentation or Stunting?
- Author: Furendal, Markus and Jebari, Karim
- Published: 2023
6. Manipulation, Algorithm Design, and the Multiple Dimensions of Autonomy.
- Author: Sass, Reuben
- Abstract: Much discussion of the ethics of algorithms has focused on harms to autonomy, especially harms stemming from manipulation. Nonetheless, although manipulation can often be harmful, we suggest that in certain contexts it may not impair autonomy. To fully assess the impact of algorithm design on autonomy, we argue for a need to move beyond a focus on manipulation towards a multidimensional account of autonomy itself. Drawing on the autonomy literature and recent data ethics, we propose a novel account which takes autonomy to supervene on three distinct but related elements: agency, authenticity, and individual control over decision-making. By distinguishing autonomy from control, the account can explain the variable effects of manipulation on user autonomy within algorithm-driven systems. In particular, it can explain why improving user control may improve autonomy in some contexts, while in other contexts, such as some kinds of newsfeeds, certain algorithm designs that instead reduce user control may nevertheless improve autonomy. As a result, the account can accommodate the sometimes convoluted interplay between control, autonomy, manipulation, and commercial versus prosocial design goals.
- Published: 2024
7. AI, Radical Ignorance, and the Institutional Approach to Consent.
- Author: Steinberg, Etye
- Abstract: More and more, we face AI-based products and services. Using these services often requires our explicit consent, e.g., by agreeing to the services' Terms and Conditions clause. Current advances introduce the ability of AI to evolve and change its own modus operandi over time in such a way that we cannot know, at the moment of consent, what it is in the future to which we are now agreeing. Therefore, informed consent is impossible regarding certain kinds of AI. Call this the problem of radical ignorance. Interestingly, radical ignorance exists in consent contexts other than AI, where it seems that individuals can provide informed consent. The article argues that radical ignorance can undermine informed consent in some contexts but not others because, under certain institutional, autonomy-protecting conditions, consent can be valid without being (perfectly) informed. By understanding these institutional conditions, we can formulate practical solutions to foster valid, albeit imperfectly informed, consent across various decision contexts and within different institutions.
- Published: 2024
8. The Right to be an Exception to Predictions: a Moral Defense of Diversity in Recommendation Systems
- Author: Viganò, Eleonora
- Published: 2023
9. Social Media and its Negative Impacts on Autonomy.
- Author: Sahebi, Siavosh and Formosa, Paul
- Abstract: How social media impacts the autonomy of its users is a topic of increasing focus. However, much of the literature that explores these impacts fails to engage in depth with the philosophical literature on autonomy. This has resulted in a failure to consider the full range of impacts that social media might have on autonomy. A deeper consideration of these impacts is thus needed, given the importance of both autonomy as a moral concept and social media as a feature of contemporary life. By drawing on this philosophical literature, we argue that autonomy is broadly a matter of developing autonomy competencies, having authentic ends and control over key aspects of your own life, and not being manipulated, coerced, and controlled by others. We show how the autonomy of users of social media can be disrespected and harmed through the control that social media can have over its users' data, attention, and behaviour. We conclude by discussing various recommendations to better regulate social media.
- Published: 2022
10. Moral Reasons Not to Posit Extended Cognitive Systems: a Reply to Farina and Lavazza.
- Author: Cassinadri, Guido
- Abstract: Given the metaphysical and explanatory stalemate between Embedded (EMB) and Extended (EXT) cognition, different authors have proposed moral arguments to overcome this deadlock in favor of EXT. Farina and Lavazza (2022) attribute to EXT and EMB a substantive moral content, arguing in favor of the former by virtue of its progressiveness and inclusiveness. In this treatment, I criticize four of their moral arguments. In Sect. 2, I focus on the argument from legitimate interventions (Sect. 2.1) and on the argument from extended agency (Sect. 2.2). Section 3 concerns the argument from better protection (Sect. 3.1) and the argument from better treatment (Sect. 3.2). Sections 4 and 5 are dedicated to counterarguments against each of these, respectively. By distinguishing between EXT (intended as an ontological claim on the extension of cognition) and the extended view (intended as a moral heuristic), I argue that it is sufficient to use this second version for directly addressing and evaluating moral problems on normative grounds, independently of the causal (EMB) or constitutive (EXT) cognitive influence of the external resource on the agents' minds. Moreover, I argue that the arguments and assumptions used by EXT theorists do not foster values of progressiveness and inclusiveness. To conclude, in Sect. 6, I show that the analysis of each argument converges on the conclusion that EXT does not have substantive moral content and implications per se, since these always depend on further assumptions.
- Published: 2022
11. Why Digital Assistants Need Your Information to Support Your Autonomy.
- Author: Heinrichs, Jan-Hendrik
- Subjects: Activities of daily living; Information needs; Hunger; Reflective learning; Mathematical optimization; Data analysis
- Abstract: This article investigates how human life is conceptualized in the design and use of digital assistants and how this conceptualization feeds back into the life really lived. It suggests that a specific way of conceptualizing human life, namely as a set of tasks to be optimized, is responsible for the much-criticized information hunger of these digital assistants. The data collection of digital assistants raises not just several issues of privacy, but also the potential for improving people's degree of self-determination, because the optimization model of daily activity is genuinely suited to a certain mode of self-determination, namely the explicit and reflective setting, pursuing, and monitoring of goals. Furthermore, optimization systems' need for generation and analysis of data overcomes one of the core weaknesses in human capacities for self-determination, namely problems with objective and quantitative self-assessment. It will be argued that critiques according to which digital assistants threaten to reduce their users' autonomy tend to ignore that the risks to autonomy are derivative of potential gains in autonomy. These critiques are based on an overemphasis of a success conception of autonomy. Counter to this conception, being autonomous does not require a choice environment that exclusively supports a person's "true" preferences, but the opportunity to engage with external influences, supportive as well as adverse. In conclusion, it will be argued that ethical evaluations of digital assistants should consider potential gains as well as potential risks for autonomy caused by the use of digital assistants.
- Published: 2021
Discovery Service for Jio Institute Digital Library