445 results for "Automated Decision-Making"
Search Results
2. Good decisions in an imperfect world: a human-focused approach to automated decision-making.
- Author
- Bacher, Bettina
- Subjects
- ARTIFICIAL intelligence; SOCIAL interaction; GENERAL Data Protection Regulation, 2016; DECISION making; HEURISTIC
- Abstract
Legal rules are based on an imagined regulatory scene that contains presumptions about the reality a regulation addresses. Regarding automated decision-making (ADM), these include a belief in the 'good human decision' that is mirrored in the cautious approach in the GDPR. Yet the 'good human decision' defies psychological insight into human weaknesses in decision-making. Instead, it reflects a general unease about algorithmic decisions. Against this background I explore how algorithms become part of human relationships and whether the use of decision systems causes a conflict with human needs, values and the prevailing socio-legal framework. Inspired by the concept of Human-Centered AI, I then discuss how the law may address the apprehension towards decision systems. I outline a human-focused approach to regulating ADM that focuses on improving the practice of decision-making. The interaction between humans and machines is an essential part of the regulation. It must address socio-legal changes caused by decision systems both to integrate them into the existing value system and adapt the latter to changes brought forth by ADM. A human-focused approach thus connects the benefits of technology with human needs and societal values. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Artificial Intelligence, Rationalization, and the Limits of Control in the Public Sector: The Case of Tax Policy Optimization.
- Author
- Mökander, Jakob and Schroeder, Ralph
- Subjects
- INCOME inequality; WEALTH inequality; SCHOLARLY method; ARTIFICIAL intelligence; FISCAL policy
- Abstract
In this paper, we first frame the use of artificial intelligence (AI) systems in the public sector as a continuation and intensification of long-standing rationalization and bureaucratization processes. Drawing on Weber, we understand the core of these processes to be the replacement of traditions with instrumental rationality, that is, the most calculable and efficient way of achieving any given policy objective. Second, we demonstrate how many of the criticisms directed towards AI systems, both among the public and in scholarship, spring from well-known tensions at the heart of Weberian rationalization. To illustrate this point, we introduce a thought experiment whereby AI systems are used to optimize tax policy to advance a specific normative end: reducing economic inequality. Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible. However, our analysis also highlights that AI-driven policy optimization (i) comes at the exclusion of other competing political values, (ii) overrides citizens' sense of their (non-instrumental) obligations to each other, and (iii) undermines the notion of humans as self-determining beings. Third, we observe that contemporary scholarship and advocacy directed towards ensuring that AI systems are legal, ethical, and safe build on and reinforce central assumptions that underpin the process of rationalization, including the modern idea that science can sweep away oppressive systems and replace them with a rule of reason that would rescue humans from moral injustices. That is overly optimistic: science can only provide the means – it cannot dictate the ends. Nonetheless, the use of AI in the public sector can also benefit the institutions and processes of liberal democracies. Most importantly, AI-driven policy optimization demands that normative ends are made explicit and formalized, thereby subjecting them to public scrutiny, deliberation, and debate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types.
- Author
- Klein, Uwe, Depping, Jana, Wohlfahrt, Laura, and Fassbender, Pantaleon
- Subjects
- ARTIFICIAL intelligence; RISK perception; TRUST; LABOR contracts; BIG data
- Abstract
Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. New scenarios are constructed as realistically as possible and focused on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) and are extended by individual descriptive elements of AI systems in comparison to the original studies. The first online experiment examines decisions made by artificial intelligence with varying degrees of impact. In the high-impact scenario, applicants are automatically selected for a job and immediately receive an employment contract. In the low-impact scenario, three applicants are automatically invited for another interview. In addition, the relationship between age and risk perception is investigated. The second online experiment tests subjects' perceived trust in decisions made by artificial intelligence either semi-automatically, with the assistance of human experts, or fully automatically. Two task types are distinguished: one that requires "human skills", represented by a performance evaluation situation, and one that requires "mechanical skills", represented by a work distribution situation. In addition, the extent of negative emotions in automated decisions is investigated. The results are related to the findings of Araujo et al. and Lee. Implications for further research activities and practical relevance are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Personalizace ceny vs. zásada rovnosti: je cenová diskriminace opravdu diskriminací? [Price Personalization vs. the Principle of Equality: Is Price Discrimination Really Discrimination?]
- Author
- Bónová, Kristýna
- Subjects
- CONSUMER behavior; INTERNET usage monitoring; CONSUMER protection; PRICE discrimination; CONSUMERS; CONSUMER preferences
- Published
- 2024
6. Decision-making power and responsibility in an automated administration
- Author
- Charlotte Langer
- Subjects
- Automated decision-making; Artificial intelligence; Public service; Administration; Non-delegation; Rule of law; Computational linguistics. Natural language processing; P98-98.5; Electronic computers. Computer science; QA75.5-76.95
- Abstract
The paper casts a spotlight on one of the manifold legal questions that arise with the proliferation of artificial intelligence. This new technology is attractive for many fields, including public administration, where it promises greater accuracy and efficiency, freeing up resources for better interaction and engagement with citizens. However, public powers are bound by certain constitutional constraints that must be observed, regardless of whether decisions are made by humans or machines. This includes the non-delegation principle, which aims to limit the delegation and sub-delegation of decisions affecting citizens’ rights in order to ensure governmental accountability, reviewability, and contestability. This puts some constraints on the automation of decision-making by public entities, as algorithmic decision-making entails delegating decisions to software development companies on the one hand, and to algorithms on the other. The present paper reveals and explains these constraints and concludes with suggestions to navigate these conflicts in a manner that satisfies the rule of law while maximizing the benefits of new technologies.
- Published
- 2024
- Full Text
- View/download PDF
7. Digital Governance and Neoliberalism: The Evolution of Machine Learning in Australian Public Policy
- Author
- Harry Jobberns and Michael Guihot
- Subjects
- automated decision-making; neoliberalism; legislation; review; Law in general. Comparative and uniform law. Jurisprudence; K1-7720
- Abstract
Increasingly, government agencies are using computers to automate decisions about citizens. Often, the reason proffered for this is increased efficiency. This article reveals the extent to which computer automated decision-making processes in government are being legislatively adopted, provides an overview of the risks associated with such incorporation and analyses the limited extent to which the parliament has considered these associated risks. We analysed 35 core pieces of Commonwealth legislation that allow computers to make decisions on behalf of the administrative decision-maker. The Commonwealth government has authorised at least 35 of its agencies to use computers to make decisions and in doing so has expanded its power. This has occurred without much debate. We reviewed the associated secondary materials – including second reading speeches and parliamentary debates – to understand the level of discussion on the issue. Surprisingly, our analysis reveals that most amendments have passed with little to no debate. While the drive for governmental efficiency is commendable, failing to adequately debate the risks involved in automating administrative decisions undermines core administrative law principles and exposes governmental agencies and, more importantly, the Australian population to the risk of serious error. This article calls for increased debate before these authorisations are made and the incorporation of audit and review processes to control the use of this power by government agencies ex post.
- Published
- 2024
- Full Text
- View/download PDF
8. Indeterminacy of Legal Language as a Guide Towards Ideally Algorithmisable Areas of Law
- Author
- Andrej Krištofík
- Subjects
- legal theory; algorithmic law; automated decision-making; artificial intelligence; indeterminate language; evolutive interpretation; Law in general. Comparative and uniform law. Jurisprudence; K1-7720
- Abstract
This article delves into the relationship between language in law and the automation of legal decision-making processes. The conflict arises between the indeterminate language of the law and the necessary precision of algorithmic language, where the first needs to be translated into the other during the process of automation. The article seeks to provide definitions as well as the necessary excursion into the role of indeterminacy in legal language to shed light on this process. In this regard, the article examines several problematic examples of currently utilised algorithmic systems in legal decision-making. Subsequently, the article sets forth its thesis: the indeterminate language used within legal rules sets out, as a negative demarcation, the perimeter of areas suitable for automation, determined by the very nature of such areas of law. Lastly, it provides language-based positive demarcations for such areas. Further, by examining the theory of legal indeterminacy, the article shows that the conclusion does not in fact rest on the technical impossibility of creating an indeterminate algorithm, but rather on the very purpose of such language – that of mandating a value judgement. The conclusion thus seeks to be technologically neutral: it is reached through an examination of purpose rather than through technological impossibility, which may change over time.
- Published
- 2024
- Full Text
- View/download PDF
9. “The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making.
- Author
- Szafran, Daria and Bach, Ruben L.
- Abstract
The increasing use of algorithms in allocating resources and services in both private industry and public administration has sparked discussions about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios like the legal justice system might lead to adverse societal outcomes, such as systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on technical fairness notions, they do not consider how members of the public, as the subjects most affected by algorithmic decisions, perceive fairness in ADM. To shed light on subjective fairness perceptions of individuals, this study analyzes individuals’ answers to open-ended fairness questions about hypothetical ADM scenarios that were embedded in the German Internet Panel (Wave 54, July 2021), a probability-based longitudinal online survey. Respondents evaluated the fairness of vignettes describing the use of ADM tools across different contexts. Subsequently, they explained their fairness evaluation by providing a textual answer. Using qualitative content analysis, we inductively coded those answers (N = 3697). Based on their individual understanding of fairness, respondents addressed a wide range of aspects related to fairness in ADM, which is reflected in the 23 codes we identified. We subsumed those codes under four overarching themes: Human elements in decision-making, Shortcomings of the data, Social impact of AI, and Properties of AI. Our codes and themes provide a valuable resource for understanding which factors influence public fairness perceptions about ADM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Introduction to the digital welfare state: Contestations, considerations and entanglements.
- Author
- van Toorn, Georgia, Henman, Paul, and Soldatić, Karen
- Subjects
- DATA analytics; WELFARE state; SOCIOLOGICAL research; STATE power; SOCIAL services
- Abstract
This article introduces a special issue of the Journal of Sociology focused on critical analysis of the digital welfare state. The digitalisation of welfare policy, institutions and service delivery has led to increased scrutiny, social sorting and surveillance of welfare recipients and other marginalised groups. This collection of papers contributes to current debates about digital welfare using sociological approaches which foreground the role of power relations and human agency in shaping these dynamics. We provide introductory insights into themes explored within the collection, including the connection of digitalisation with 'the social', the role of digital technologies in truth-making, and the importance of humans and their labour in operationalising digital welfare. In addition, we highlight the value of sociological research in revealing the various processes and relationships through which state power is constituted and expressed digitally. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Disablism, racism and the spectre of eugenics in digital welfare.
- Author
- van Toorn, Georgia and Soldatić, Karen
- Subjects
- SCIENTIFIC racism; DIGITAL technology; WELFARE state; CHILD welfare; ABLEISM; EUGENICS
- Abstract
This article explores the historical ties between the digital welfare state and eugenics, highlighting how the use of data infrastructures for classification and governance in the digital era has roots in eugenic data practices and ideas. Through an analysis of three domains of automated decision-making – child welfare, immigration and disability benefits – the article demonstrates how these automated systems perpetuate hierarchical divisions originally shaped by ableist eugenic race science. It underscores the importance of critically engaging with this historical context of data utilisation, emphasising its entanglement with eugenic perspectives on racial, physical and mental superiority, individual and social worth, and the categorisation of data subjects as deserving or undeserving. By engaging with this history, the article provides a deeper understanding of the contemporary digital welfare state, particularly in terms of its discriminatory divisions based on race and disability, which are deeply intertwined. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. 'This is NOT human services': Counter-mapping automated decision-making in social services in Australia.
- Author
- Sleep, Lyndal
- Subjects
- ARTIFICIAL intelligence; SOCIAL services; COST control; WELFARE state; BIG data
- Abstract
This paper offers a counter-map of automation in social services decision-making in Australia. It aims to amplify alternative discourses that are often obscured by power inequalities and disadvantage. Redden (2005) has used counter-mapping to frame an analysis of big data in government in Canada, contrasting with 'dominant outward facing government discourses about big data applications' to focus on how data practices are both socially shaped and shaping. This paper reports on a counter-mapping project undertaken in Australia using a mixed methods approach incorporating document analysis, interviews and web scraping to amplify divergent discourses about automated decision-making. It demonstrates that when the focus of analysis moves beyond dominant discourses of neoliberal efficiency, cost cutting, accuracy and industriousness, alternative discourses of service users' experiences of automated decision-making as oppressive, harmful, punitive and inhuman(e) can be located. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Decision-making power and responsibility in an automated administration.
- Author
- Langer, Charlotte
- Subjects
- ARTIFICIAL intelligence; TECHNOLOGICAL innovations; PUBLIC interest law; COMPUTER software development; CITIZENS
- Abstract
The paper casts a spotlight on one of the manifold legal questions that arise with the proliferation of artificial intelligence. This new technology is attractive for many fields, including public administration, where it promises greater accuracy and efficiency, freeing up resources for better interaction and engagement with citizens. However, public powers are bound by certain constitutional constraints that must be observed, regardless of whether decisions are made by humans or machines. This includes the non-delegation principle, which aims to limit the delegation and sub-delegation of decisions affecting citizens' rights in order to ensure governmental accountability, reviewability, and contestability. This puts some constraints on the automation of decision-making by public entities, as algorithmic decision-making entails delegating decisions to software development companies on the one hand, and to algorithms on the other. The present paper reveals and explains these constraints and concludes with suggestions to navigate these conflicts in a manner that satisfies the rule of law while maximizing the benefits of new technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Artificial Intelligence in Judicial Decision-Making: A Comparative Analysis of Recent Rulings in Colombia and The Netherlands.
- Author
- Rojas, Maria Lorena Flórez
- Subjects
- ARTIFICIAL intelligence; JUDICIAL process; CHATGPT; COURTS
- Abstract
Case T-323/2024, Colombian Constitutional Court, Second Review Chamber, File T-9.301.656; Case 8060619\CV EXPL 07-06/2024, Gelderland District Court, ECLI:NL:RBGEL:2024:3636. This case note addresses two judicial decisions that disclose the use of generative AI, specifically ChatGPT, within their decision-making process. The first case, originating in Colombia, arose from a guardianship action filed on behalf of a minor with autism against a health service provider. Both the initial and appellate courts ruled in favour of the minor, with the Circuit Court judge using ChatGPT to help draft the ruling. This raised concerns about the right to due process and judicial independence, prompting the Constitutional Court to review the case. The second case, from The Netherlands, involves a lower court judge using ChatGPT to determine factual details in a property dispute. This raises questions about the suitability of AI in legal contexts and the potential risks of relying on unverified information. Both cases highlight critical issues such as transparency, accountability, and the preservation of human judgment in legal decision-making processes. They serve as important precedents in the ongoing discussion about the role of AI in the judiciary and the safeguards necessary to maintain the integrity of legal systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making.
- Author
- Schaap, Gabi, Bosse, Tibor, and Hendriks Vettehen, Paul
- Subjects
- ACADEMIC debating; TRUST; AVERSION; DECISION making; DEFAULT (Finance)
- Abstract
While algorithmic decision-making (ADM) is projected to increase exponentially in the coming decades, the academic debate on whether people are ready to accept, trust, and use ADM as opposed to human decision-making is ongoing. The current research aims at reconciling conflicting findings on 'algorithmic aversion' in the literature. It does so by investigating algorithmic aversion while controlling for two important characteristics that are often associated with ADM: increased benefits (monetary and accuracy) and decreased user control. Across three high-powered, preregistered 2 (agent: algorithm/human) × 2 (benefits: high/low) × 2 (control: user control/no control) between-subjects experiments (total N = 1192) and two domains (finance and dating), the results were quite consistent: there is little evidence for a default aversion against algorithms and in favor of human decision makers. Instead, users accept or reject decisions and decisional agents based on their predicted benefits and the ability to exercise control over the decision. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A health-conformant reading of the GDPR's right not to be subject to automated decision-making.
- Author
- Kolfschooten, Hannah B van
- Subjects
- ARTIFICIAL intelligence in medicine; MEDICAL care; MEDICAL decision making; LEGAL status of patients; PATIENTS' rights; RIGHT of privacy; DATA protection laws
- Abstract
As the use of Artificial Intelligence (AI) technologies in healthcare is expanding, patients in the European Union (EU) are increasingly subjected to automated medical decision-making. This development poses challenges to the protection of patients' rights. A specific patients' right not to be subject to automated medical decision-making is not considered part of the traditional portfolio of patients' rights. The EU AI Act also does not contain such a right. The General Data Protection Regulation (GDPR) does, however, provide for the right 'not to be subject to a decision based solely on automated processing' in Article 22. At the same time, this provision has been severely critiqued in legal scholarship because of its lack of practical effectiveness. However, in December 2023, the Court of Justice of the EU first provided an interpretation of this right in C-634/21 (SCHUFA)—although in the context of credit scoring. Against this background, this article provides a critical analysis of the application of Article 22 GDPR to the medical context. The objective is to evaluate whether Article 22 GDPR may provide patients with the right to refuse automated medical decision-making. It proposes a health-conformant reading to strengthen patients' rights in the EU. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Digital Governance and Neoliberalism: The Evolution of Machine Learning in Australian Public Policy.
- Author
- Jobberns, Harry and Guihot, Michael
- Subjects
- COMPUTERS; GOVERNMENT agencies; DECISION making; LEGISLATION; ADMINISTRATIVE efficiency
- Abstract
Increasingly, government agencies are using computers to automate decisions about citizens. Often, the reason proffered for this is increased efficiency. This article reveals the extent to which computer automated decision-making processes in government are being legislatively adopted, provides an overview of the risks associated with such incorporation and analyses the limited extent to which the parliament has considered these associated risks. We analysed 35 core pieces of Commonwealth legislation that allow computers to make decisions on behalf of the administrative decision-maker. The Commonwealth government has authorised at least 35 of its agencies to use computers to make decisions and in doing so has expanded its power. This has occurred without much debate. We reviewed the associated secondary materials – including second reading speeches and parliamentary debates – to understand the level of discussion on the issue. Surprisingly, our analysis reveals that most amendments have passed with little to no debate. While the drive for governmental efficiency is commendable, failing to adequately debate the risks involved in automating administrative decisions undermines core administrative law principles and exposes governmental agencies and, more importantly, the Australian population to the risk of serious error. This article calls for increased debate before these authorisations are made and the incorporation of audit and review processes to control the use of this power by government agencies ex post. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Light in the Black Box?: On the Data Protection Obligation to Disclose Credit Scoring Algorithms.
- Author
- Brüggemann, Erik and Möller, Carl Christoph
- Subjects
- DATA protection; GENERAL Data Protection Regulation, 2016; TRADE secrets; EUROPEAN law; PERSONALLY identifiable information
- Abstract
Most consumers are scored by credit agencies. They do not know how, although they have a right to know. The right of access under the GDPR requires disclosure of how personal data is processed. This includes revealing information about the algorithms used to calculate score values for the creditworthiness of individuals, frequent counterarguments notwithstanding. Contrary to further widespread claims, this does not conflict with the protection of trade secrets under European law. Nor can European regulations on intellectual property or the freedom to conduct a business be invoked against making this information available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Paradata as a Tool for Legal Analysis: Utilising Data-on-Data Related Processes
- Author
- Enqvist, Lena, Bolisani, Ettore, Series Editor, Handzic, Meliha, Series Editor, Huvila, Isto, editor, Andersson, Lisa, editor, and Sköld, Olle, editor
- Published
- 2024
- Full Text
- View/download PDF
20. Unveiling the Potential of AI in Gastroenterology: Challenges and Opportunities
- Author
- Saxena, Esha, Parveen, Suraiya, Ahad, Mohd. Abdul, Yadav, Meenakshi, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Goar, Vishal, editor, Sharma, Aditi, editor, Shin, Jungpil, editor, and Mridha, M. Firoz, editor
- Published
- 2024
- Full Text
- View/download PDF
21. Automated Decision-Making in the Public Sector: A Multidisciplinary Literature Review
- Author
- Rizk, Aya, Lindgren, Ida, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Janssen, Marijn, editor, Crompvoets, Joep, editor, Gil-Garcia, J. Ramon, editor, Lee, Habin, editor, Lindgren, Ida, editor, Nikiforova, Anastasija, editor, and Viale Pereira, Gabriela, editor
- Published
- 2024
- Full Text
- View/download PDF
22. Algorithmic indirect discrimination, fairness and harm
- Author
- Thomsen, Frej Klem
- Published
- 2024
- Full Text
- View/download PDF
23. Governing the Automated Welfare State: Translations between AI Ethics and Anti-discrimination Regulation
- Author
- Ellinor Blom Lussi, Stefan Larsson, Charlotte Högberg, and Anne Kaun
- Subjects
- Automated Decision-Making; public-sector governance; AI ethics; discrimination; fairness; Social pathology. Social and public welfare. Criminology; HV1-9960
- Abstract
There is an increasing demand to utilize technological possibilities in the Nordic public sector. Automated decision-making (ADM) has been deployed in some areas towards that end. While ADM is associated with a range of benefits, research shows that its use, with elements of AI, also implicates risks of discrimination and unfair treatment, which has stimulated a flurry of normative guidelines. This article seeks to explore how a sample of these international high-level principled ideas on fairness translate into the specific governance of ADM in national public-sector authorities in Sweden. It does so by answering the question of how ideas about AI ethics and fairness are considered in relation to regulation on anti-discrimination in Swedish public-sector governance. By using a Scandinavian institutionalist approach to translation theory, we trace how ideas about AI governance and public-sector governance translate into state-authority practice; specifically, regarding the definition of ADM, how AI has impacted it as both discourse and technology, and the ideas of 'ethics' and 'discrimination'. The results indicate that there is a variance in how different organizations understand and translate ideas about AI ethics and discrimination. These tensions need to be addressed in order to develop AI governance practices.
- Published
- 2024
- Full Text
- View/download PDF
24. Black-Box Testing and Auditing of Bias in ADM Systems.
- Author
- Krafft, Tobias D., Hauer, Marc P., and Zweig, Katharina
- Abstract
For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: e.g., systems that compute decisions about future recidivism of criminals, creditworthiness, or the many small decision computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate: be it by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or other external parties. Scientific testing and auditing literature rarely focuses on the specific needs for such investigations and suffers from ambiguous terminologies. With this paper, we aim to support this investigation process by collecting, explaining, and categorizing methods of testing for bias, which are applicable to black-box systems, given that inputs and respective outputs can be observed. For this purpose, we provide a taxonomy that can be used to select suitable test methods adapted to the respective situation. This taxonomy takes multiple aspects into account, for example the effort to implement a given test method, its technical requirements (such as the need for ground truth) and social constraints of the investigation, e.g., the protection of business secrets. Furthermore, we analyze which test method can be used in the context of which black-box audit concept. It turns out that various factors, such as the type of black-box audit or the lack of an oracle, may limit the selection of applicable tests. With the help of this paper, people or organizations who want to test an ADM system for bias can identify which test methods and auditing concepts are applicable and what implications they entail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
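To make concrete the kind of method the taxonomy in the entry above categorises, here is a minimal, hypothetical sketch of one black-box bias test: with only observed inputs (including a protected attribute) and the system's outputs, compare positive-decision rates across groups (a demographic-parity check). The observation data and group names are placeholder assumptions, not taken from the paper, which surveys many such methods and their requirements.

```python
# A hypothetical demographic-parity check over observed input/output pairs
# from an opaque ADM system. All data below is fabricated for illustration.
from collections import defaultdict

# Observed (protected_attribute, decision) pairs; 1 = positive decision.
observations = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in observations:
    totals[group] += 1
    positives[group] += decision

# Positive-decision rate per group; a large gap flags potential bias.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"parity gap = {gap:.2f}")
```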
25. Rule-based versus AI-driven benefits allocation: GDPR and AIA legal implications and challenges for automation in public social security administration.
- Author
- Enqvist, Lena
- Subjects
- GENERAL Data Protection Regulation, 2016; DATA protection; SOCIAL security; ARTIFICIAL intelligence; AUTOMATION
- Abstract
This article focuses on the legal implications of the growing reliance on automated systems in public administrations, using the example of social security benefits administration. It specifically addresses the deployment of automated systems for decisions on benefits eligibility within the frameworks of the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). It compares how these two legal frameworks, each targeting different regulatory objects (personal data versus AI systems) and employing different protective measures, apply to two common system types: rule-based systems utilised for making fully automated decisions on eligibility, and machine learning AI systems utilised for assisting case administrators in their decision-making. It draws conclusions on the combined impact that the GDPR and the AIA will have on each of these types of systems, as well as on differences in how these instruments determine the basic legality of utilising such systems within social security administration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. AUTOMATED DECISION-MAKING AND ACCESS TO DATA.
- Author
- DACAR, Rok
- Subjects
- DECISION making; LEGAL instruments; ANTITRUST law; INTERNET marketing; EUROPEAN Union law; PERSONALLY identifiable information
- Abstract
This paper explores the mechanisms by which companies can gain access to data necessary for automated decision-making in scenarios without direct contractual agreements, focusing on market-driven approaches. It introduces the concept of the essential facilities doctrine under EU competition law and examines its applicability to sets of data, alongside an examination of current ex-ante regulatory instruments which grant data access rights, such as the Type Approval Regulation, the Open Data Directive, the Electricity Directive, the Digital Markets Act, and the Data Act. These legal instruments are analysed in terms of their ability to facilitate access to data necessary for the automation of decision-making processes. In addition, the study looks at the challenges and opportunities presented by these legal instruments, including the nuances of applying the essential facilities doctrine to data. The article concludes that the most efficient way for a company to gain access to sets of data required for automated decision-making (in the absence of a contractual agreement) is to base its data access claim on an act of ex-ante regulation. If, however, no such legal basis exists, a company could still base its data access claim on the essential facilities doctrine. The practical applicability of the doctrine to sets of data, however, remains unclear. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. The black box problem revisited. Real and imaginary challenges for automated legal decision making.
- Author
- Brożek, Bartosz, Furman, Michał, Jakubiec, Marek, and Kucharzyk, Bartłomiej
- Subjects
- ARTIFICIAL intelligence; OPACITY (Linguistics); DECISION making in law; ARTIFICIAL intelligence laws
- Abstract
This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We argue further that, contrary to often-defended claims, the opacity issue is not a genuine problem. We also dismiss the justification problem. Further, we describe the tensions involved in the strangeness and unpredictability problems and suggest some ways to alleviate them. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Decoding the algorithmic operations of Australia's National Disability Insurance Scheme.
- Author
- van Toorn, Georgia and Carney, Terry
- Abstract
In recent years, Australia has embarked on a digital transformation of its social services, with the primary goal of creating user‐centric services that are more attentive to the needs of citizens. This article examines operational and technological changes within Australia's National Disability Insurance Scheme (NDIS) as a result of this comprehensive government digital transformation strategy. It discusses the effectiveness of these changes in enhancing outcomes for users of the scheme. Specifically, the focus is on the National Disability Insurance Agency's (NDIA) use of algorithmic decision support systems to aid in the development of personalised support plans. This administrative process, we show, incorporates several automated elements that raise concerns about substantive fairness, accountability, transparency and participation in decision making. The conclusion drawn is that algorithmic systems exercise various forms of state power, but in this case, their subterranean administrative character positions them as “algorithmic grey holes”—spaces effectively beyond recourse to legal remedies and more suited to redress by holistic and systemic accountability reforms advocated by algorithmic justice scholarship. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. EU ADMINISTRATIVE DECISION-MAKING DELEGATED TO MACHINES - LEGAL CHALLENGES AND ISSUES.
- Author
- HUBKOVÁ, PAVLÍNA
- Abstract
Increasing computing power, the constant development of different types of digital tools, and even the use of AI systems all provide the EU administration with an opportunity to use automated decision-making (ADM) tools to improve the effectiveness and efficiency of administrative action. At the same time, however, the use of these tools raises several concerns, issues and challenges. From a legal perspective, there is a risk of compromising or reducing the accountability of public actors. The use of new technologies in decision-making may also affect fundamental values and principles of the EU as a whole. Automation, the use of large amounts of data and the extremely rapid processing of such data may affect or jeopardise the rights of individuals protected by EU law, including the fundamental rights guaranteed by the EU Charter. In order to keep administrative action within the limits of the law and to guarantee the rights of individuals, it is necessary to keep an eye on the various legal challenges associated with these phenomena. This article looks at three interconnected levels of automated decision-making – the data, the ADM tool and the way it is programmed, and the output and its reviewability – and presents the legal issues and challenges associated with each of these levels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis
- Author
- Francesca Palmiotto
- Subjects
- automated decision-making; EU Law; Fundamental Rights; Artificial Intelligence; Migration; Asylum; GDPR; AI Act; Law of Europe; KJ-KKZ; Law in general. Comparative and uniform law. Jurisprudence; K1-7720
- Abstract
This Article addresses the pressing issues surrounding the use of automated systems in public decision-making, specifically focusing on migration, asylum, and mobility. Drawing on empirical data, this Article examines the potential and limitations of the General Data Protection Regulation and the Artificial Intelligence Act in effectively addressing the challenges posed by automated decision-making (ADM). The Article argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications where automated systems assist human decision-makers rather than replace them entirely. To bridge the gap between ADM in law and practice, this Article proposes to move beyond the concept of “automated decisions” and complement the legal protection in the GDPR and AI Act with a taxonomy that can inform a fundamental rights analysis. This taxonomy enhances our understanding of ADM and makes it possible to identify the fundamental rights at stake and the sector-specific legislation applicable to ADM. The Article calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society.
- Published
- 2024
- Full Text
- View/download PDF
31. Where are the pandemic drones? On the 'failure' of automated aerial solutionism.
- Author
- Jackman, Anna, Richardson, Michael, and Veber, Madelene
- Subjects
- FAILURE (Psychology); PANDEMICS; COVID-19 pandemic
- Abstract
In the early days of the COVID-19 pandemic, excitement broke out around the potential for drones to generate aerial solutions to devilish pandemic problems. But despite the hype, pandemic drones largely failed to take to the sky, falling far short of the scale initially imagined. This article pursues the failure of the pandemic drone to materialise, showing how it nevertheless functioned as a locus of experimentation for remote logics and processes. As such, we shift focus away from what the pandemic drone is to if and where it – or its logics – can be found. To learn from the pandemic drone, we turn to three trajectories of failure: failure as experiment, failure as imaginary and failure as glitch. With particular attention to specific case studies, we show how failure enables drone logics and processes to migrate across various socio-technical forms, sites and applications of automated decision-making responses to the pandemic. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. The Adoption of Artificial Intelligence in Bureaucratic Decision-making: A Weberian Perspective: How the Weberian ideals of Hierarchy, Legal Certainty, Accountability and Due Process in bureaucracy invite a careful consideration of the integration of artificial intelligence into bureaucratic decision-making
- Author
- Cetina Presuel, Rodrigo and Martinez Sierra, Jose M.
- Subjects
- ARTIFICIAL intelligence; BUREAUCRACY; DUE process of law; DECISION making; DISCRIMINATION (Sociology); DIVISION of labor
- Abstract
This work questions AI's role in bureaucratic decision-making. The Weberian conception of bureaucracy, built around the ideal of a bureaucracy in which authority is distributed, delegated, clearly delimited and hierarchical, and which enshrines the following of formal rules, task specialization through the division of labor, legal certainty, and a predilection for efficiency in recordable and accountable decisions, can serve as a framework to orient how governments should approach the adoption of artificial intelligence (AI), given the many problems associated with its careless deployment. Using theoretical analysis, this work explains Weberian ideas of bureaucracy and contrasts them with real-life cases of the implementation of AI in bureaucratic decision-making, often with detrimental results for society. After identifying and framing issues related to AI, e.g., lack of transparency, attempts to shift accountability from humans to technology, the exacerbation of bias and the potential for systemic discrimination, the paper proposes Weberian prescriptions that should help public administration make careful decisions about the adoption of AI and the consequences of its implementation. The article also engages with Weber critically, rejecting the notion that public administrators do not engage in politics and asserting that AI decision-making is necessarily political as well, as it entails exercising power over citizens. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Unsupervised Learning-based Approach for Contextual Understanding of Web Material Around a New Domain of Algorithmic Government.
- Author
- Gupta, Rajan and Pal, Saibal K.
- Subjects
- CONTEXTUAL learning; K-means clustering; MACHINE learning; SEARCH engines; ARTIFICIAL intelligence
- Abstract
Grasping contextual nuances is fundamental for effective learning in a novel discipline through internet-based research, and such comprehension significantly augments decision-making by promoting well-grounded, informed choices. With the advent of machine learning approaches, enabling collaboration between machine algorithms and humans has become faster and more robust. However, human expertise still holds the key for a new domain, and this study proposes it as a key step in an unsupervised learning approach based on the k-means clustering technique. The domain search term and context terms for the new domain are added to the clustering technique, and the relevance of the resultant groups is tested. Context setting helps to analyse and understand the content of documents and other sources of information. For a new domain like algorithmic government, which does not have many documents on the web, contextual learning was found to be up to 40% more relevant than the normal learning approach. Experts judged the qualitative aspect of the clusters to be much better than the quantitative aspect, owing to the smaller number of available search documents. Scientific research was also found to support the groups formed during the contextual learning approach. This approach should help governments better understand and respond to the needs and concerns of their citizens by deriving data insights more quickly, and to make more informed, evidence-based decisions that are sensitive to the needs and values of different communities and stakeholders. Many stakeholders in the new domain can thus use this approach for exploration, research, policy formulation, strategizing, implementation, and testing of the various learned concepts. A total of 15 search engines were used in the experimental setting, with thousands of web pages crawled using the Carrot2 engine. Text embedding was done using the bag-of-words technique, and k-means clustering was implemented to produce 25 clusters across the two types of learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
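As an illustration of the contextual clustering step described in the entry above, the following is a minimal sketch: documents are embedded as bag-of-words vectors, clustered with k-means, and each resulting group is scored for relevance against a vector built from the domain and context terms. The documents, context terms, and k below are toy assumptions; the study itself crawled thousands of pages via the Carrot2 engine across 15 search engines and produced 25 clusters.

```python
# Hypothetical sketch of context-aware k-means clustering over bag-of-words
# vectors, loosely following the pipeline described in entry 33.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder web documents; the study used crawled search-engine results.
documents = [
    "algorithmic government applies automated decision-making to public services",
    "machine learning supports evidence-based policy decisions for citizens",
    "recipe blog with tips for baking sourdough bread at home",
    "travel guide to hiking trails and mountain scenery",
]

# Domain search term plus assumed context terms for the new domain.
context = "algorithmic government policy automated decision-making citizens"

# Bag-of-words embedding, as in the study.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# k-means clustering; the study used k = 25 on a much larger corpus.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Test the relevance of the resultant groups: score each cluster centroid
# against the context vector; higher similarity suggests domain relevance.
context_vec = vectorizer.transform([context]).toarray()
scores = cosine_similarity(kmeans.cluster_centers_, context_vec).ravel()

for k, score in enumerate(scores):
    members = [d for d, lab in zip(documents, labels) if lab == k]
    print(f"cluster {k} relevance={score:.2f}: {members}")
```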
34. When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis.
- Author
- Palmiotto, Francesca
- Subjects
- CIVIL rights; GENERAL Data Protection Regulation, 2016; ARTIFICIAL intelligence; TAXONOMY; PUBLIC spaces
- Abstract
This Article addresses the pressing issues surrounding the use of automated systems in public decision-making, specifically focusing on migration, asylum, and mobility. Drawing on empirical data, this Article examines the potential and limitations of the General Data Protection Regulation and the Artificial Intelligence Act in effectively addressing the challenges posed by automated decision-making (ADM). The Article argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications where automated systems assist human decision-makers rather than replace them entirely. To bridge the gap between ADM in law and practice, this Article proposes to move beyond the concept of "automated decisions" and complement the legal protection in the GDPR and AI Act with a taxonomy that can inform a fundamental rights analysis. This taxonomy enhances our understanding of ADM and makes it possible to identify the fundamental rights at stake and the sector-specific legislation applicable to ADM. The Article calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Just accountability structures – a way to promote the safe use of automated decision-making in the public sector.
- Author
- Hirvonen, Hanne
- Subjects
- PUBLIC sector; DECISION making; SYSTEM safety; LEGISLATION drafting; ARTIFICIAL intelligence
- Abstract
The growing use of automated decision-making (ADM) systems in the public sector and the need to control them have raised many legal questions in academic research and in policymaking. One timely means of legal control is accountability, which traditionally includes, as one dimension, the ability to impose sanctions on the violator. Even though many risks regarding the use of ADM have been noted and there is a common will to promote the safety of these systems, the relevance of safety research has received little discussion in this context. In this article, I evaluate the regulation of accountability over the use of ADM in the public sector in relation to the findings of safety research. I conducted the study by focusing on two ongoing regulatory projects regarding ADM: the Finnish draft ADM legislation and the EU proposal for the AI Act. The critical question raised in the article is what role sanctions should play. I ask whether official accountability could mean an opportunity to learn from mistakes, share knowledge and compensate for harm, rather than control via sanctions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Resh(AI)ping Good Administration: Addressing the Mass Effects of Public Sector Digitalisation.
- Author
- Sanchez-Graells, Albert
- Subjects
- PUBLIC sector; DIGITAL technology; EUROPEAN Union law; DUE process of law; ARTIFICIAL intelligence
- Abstract
Public sector digitalisation is transforming public governance at an accelerating rate. Digitalisation is outpacing the evolution of the legal framework. Despite several strands of international efforts to adjust good administration guarantees to new modes of digital public governance, progress has so far been slow and tepid. The increasing automation of decision-making processes puts significant pressure on traditional good administration guarantees, jeopardises individual due process rights, and risks eroding public trust. Automated decision-making has, so far, attracted the bulk of scholarly attention, especially in the European context. However, most analyses seek to reconcile existing duties towards individuals under the right to good administration with the challenges arising from digitalisation. Taking a critical and technology-centred doctrinal approach to developments under the law of the European Union and the Council of Europe, this paper goes beyond current debates to challenge the sufficiency of existing good administration duties. By stressing the mass effects that can derive from automated decision-making by the public sector, the paper advances the need to adapt good administration guarantees to a collective dimension through an extension and a broadening of the public sector's good administration duties: that is, through an extended ex ante control of organisational risk-taking, and a broader ex post duty of automated redress. These legal modifications should be urgently implemented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Algorithmic gender bias: investigating perceptions of discrimination in automated decision-making.
- Author
- Kim, Soojong, Oh, Poong, and Lee, Joomi
- Abstract
With the widespread use of artificial intelligence and automated decision-making (ADM), concerns are increasing about automated decisions biased against certain social groups, such as women and racial minorities. The public's skepticism and the danger of algorithmic discrimination are widely acknowledged, yet the role of key factors constituting the context of discriminatory situations is underexplored. This study examined people's perceptions of gender bias in ADM, focusing on three factors influencing responses to discriminatory automated decisions: the target of discrimination (subject vs. other), the gender identity of the subject, and the situational contexts that engender biases. Based on a randomised experiment (N = 602), we found stronger negative reactions to automated decisions that discriminate against the gender group of the subject than to those discriminating against other gender groups, evidenced by lower perceived fairness and trust in ADM, and greater negative emotion and tendency to question the outcome. The negative reactions were more pronounced among participants in underserved gender groups than among men. Also, participants were more sensitive to biases in economic and occupational contexts than in other situations. These findings suggest that perceptions of algorithmic biases should be understood in relation to the public's lived experience of inequality and injustice in society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Automating public administration: citizens' attitudes towards automated decision-making across Estonia, Sweden, and Germany.
- Author
- Kaun, Anne, Larsson, Anders Olof, and Masso, Anu
- Abstract
Although algorithms are increasingly used to automate tasks in the public administration of welfare states, citizens' knowledge of, experiences with, and attitudes towards automated decision-making (ADM) in public administration remain little known. This article strives to reveal the perspectives of citizens who are increasingly exposed to ADM systems, relying on a comparative analysis of a representative survey conducted in Estonia, Germany, and Sweden. The findings show that there are important differences between the three countries when it comes to awareness, trust, and perceived suitability of ADM in public administration, which map onto historical differences in welfare provisions or so-called welfare regimes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Hungarian administrative processes in the digital age: An attempt to a comprehensive examination.
- Author
- Csatlós, Erzsébet
- Subjects
- DIGITAL technology; PUBLIC services; HABIT; ADMINISTRATIVE acts; ADMINISTRATIVE procedure; SUSTAINABLE development
- Abstract
In a world of sustainable development, where digitalisation is among the priorities of all states, the question arises of how digital Hungarian public administrative procedure actually is. The study aims to give an overall insight into the state of affairs in Hungary in individual cases and to explore the level of digitalisation from two angles: statistics on clients' habits in using the available digital public services, to see how digitalisation appears in their everyday lives, and, from the authorities' perspective, the level of automation in decision-making. As a result, when assessing the extent of digital public services in Hungary, the focus tends to concentrate on levels 1–3, and possibly level 4, of the five-stage model of digital public services set as a goal by the European Union in 2002 and also used as a reference in the Hungarian Act on e-public administrative services. The numbers demonstrate that the use of digital public services, despite their availability, is not as widespread as it could be. The study also traces the emergence of automated decision-making by establishing categories based on the examples found in the very few normative regulations, to offer a picture of the status of digitalisation of Hungarian administrative proceedings. While complete automation is still a distant goal, rapid technological advancements and innovations are pushing the legal framework to keep up. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. CJEU: The Rating of a Natural Person's Creditworthiness by a Credit Rating Agency Constitutes Profiling and Can Be an Automated Decision under Article 22 GDPR.
- Author
- Horstmann, Jan
- Subjects
- GENERAL Data Protection Regulation, 2016; PERSONALLY identifiable information; ONLINE profiling; DATA protection
- Abstract
Case C-634/21 OQ v Land Hessen (Scoring), Judgment of the Court of Justice of the European Union (First Chamber) of 7 December 2023 Article 22 (1) of Regulation (EU) 2016/679 (General Data Protection Regulation) must be interpreted as meaning that the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes 'automated individual decision-making' within the meaning of that provision, where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Technology 3.0: Police Officers' Perceptions Towards Technology Shifts.
- Author
-
Aviram, Neomi Frisch, Correa, Catarina, and Oliviera, Roberto
- Subjects
POLICE ,BUREAUCRACY ,MILITARY police ,CIVIL service ,DISCRETION ,PSYCHOLOGICAL burnout - Abstract
Police units worldwide are going through a three-generational technological shift: from "street" to "screen" to "system" technologies. This paper focuses on how these digital shifts shape police officers' perceptions. First, concerning the change from "street" to "screen" policing, it examines how the shift changes police officers' perceptions of discretion and burnout. Second, concerning the shift from "screen" to "system" policing, it examines how perceptions of "screen" technologies shape receptivity to "system" technologies. We address these questions using a mixed-method approach to analyse Brazilian police officers' shift from the Military Police to the Environmental Military Police. Findings suggest that the change from "street" to "screen" policing reduces burnout and limits discretion among police officers. Moreover, perceived usefulness in achieving professional goals and perceptions of monitoring via "screen" technology predict receptivity to "system" technology. We conclude that street-level bureaucrats' perceptions of technological shifts are essential to acknowledge when planning and implementing such changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Barriers to adopting automated organisational decision-making through the use of artificial intelligence.
- Author
-
Booyse, Dawid and Scheepers, Caren Brenda
- Subjects
ARTIFICIAL intelligence ,SOCIAL dynamics ,DECISION making ,STRUCTURATION theory ,SKILLED labor - Abstract
Purpose: While artificial intelligence (AI) has shown promise in assisting human decision-making, barriers to adopting AI for decision-making remain. This study aims to identify barriers to the adoption of AI for automated organisational decision-making. AI plays a key role not only by automating routine tasks but also by moving into the realm of automating decisions traditionally made by knowledge or skilled workers. The study therefore selected respondents who had experienced the adoption of AI for decision-making. Design/methodology/approach: The study applied an interpretive paradigm and conducted exploratory research through qualitative interviews with 13 senior managers in South Africa from organisations involved in AI adoption, to identify potential barriers to using AI in automated decision-making processes. A thematic analysis was conducted in which AI-generated coding of the transcripts was compared with manual thematic coding, yielding insights into computer- versus human-generated coding. A conceptual framework was created based on the findings. Findings: Barriers to AI adoption in decision-making include human social dynamics, restrictive regulations, creative work environments, lack of trust and transparency, dynamic business environments, loss of power and control, and ethical considerations. Originality/value: The study uniquely applied the adaptive structuration theory (AST) model to the adoption of AI decision-making, illustrated the dimensions relevant to AI implementations, and made recommendations for overcoming barriers to AI adoption. AST offered a deeper understanding of the dynamic interaction between technological and social dimensions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Algorithmic Discrimination From the Perspective of Human Dignity
- Author
-
Carsten Orwat
- Subjects
algorithmic discrimination ,artificial intelligence ,automated decision‐making ,development of personality ,generalisation ,human dignity ,informed consent ,profiling ,statistical discrimination ,Sociology (General) ,HM401-1281 - Abstract
Applications of artificial intelligence, algorithmic differentiation, and automated decision‐making systems aim to improve the efficiency of decisions that differentiate between persons. However, they may also pose new risks to fundamental rights, including the risk of discrimination and potential violations of human dignity. Anti‐discrimination law is not only based on the principles of justice and equal treatment but also aims to ensure the free development of one’s personality and the protection of human dignity. This article examines developments in AI and algorithmic differentiation from the perspective of human dignity. Problems addressed include the expansion of the reach of algorithmic decisions, the potential for serious, systematic, or structural discrimination, the phenomenon of statistical discrimination and the treatment of persons not as individuals, deficits in the regulation of automated decisions and informed consent, the creation and use of comprehensive and personality‐constituting personal and group profiles, and the increase in structural dominance.
- Published
- 2024
- Full Text
- View/download PDF
44. Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices
- Author
-
Xukang Wang, Ying Cheng Wu, Xueliang Ji, and Hongpeng Fu
- Subjects
algorithmic discrimination ,regulatory measures ,automated decision-making ,computational intelligence ,AI and law ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Introduction: Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper aims to provide an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions. Methods: The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation. Results: We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia. Conclusion: Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies. (An illustrative code sketch of the disparate impact test appears after this record.)
- Published
- 2024
- Full Text
- View/download PDF
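The "disparate impact" category named in the preceding abstract has a standard operationalisation in US employment practice: the four-fifths rule, under which a selection rate for any protected group below 80% of the highest group's rate is treated as prima facie evidence of adverse impact. The following Python sketch is purely illustrative and not drawn from the paper; the `selection_rates` and `disparate_impact_ratio` helpers and the sample data are hypothetical.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs.

    `decisions` is an iterable of (group_label, bool) tuples, where the
    bool marks whether the automated system selected the candidate.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    Under the four-fifths rule, a ratio below 0.8 is treated as
    prima facie evidence of adverse impact.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 50 applicants per group,
# 30 of group_a selected (rate 0.60) and 18 of group_b (rate 0.36).
outcomes = [("group_a", i < 30) for i in range(50)] + \
           [("group_b", i < 18) for i in range(50)]

ratio = disparate_impact_ratio(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.36 / 0.60 = 0.60
assert ratio < 0.8  # below the four-fifths threshold, so flagged
```

In practice such a ratio check is only a screening heuristic; as the legal analyses surveyed in the paper suggest, questions of justification and context still have to be weighed separately.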
45. The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision‐Making
- Author
-
Stefan Larsson, James Merricks White, and Claire Ingram Bogusz
- Subjects
adm and risks of discrimination ,ai and accountability ,ai and risks of discrimination ,ai and transparency ,artificial intelligence ,automated decision‐making ,discrimination in recruitment ,indirect ai use ,platforms and discrimination ,Sociology (General) ,HM401-1281 - Abstract
Extant literature points to how the risk of discrimination is intrinsic to AI systems owing to the dependence on training data and the difficulty of post hoc algorithmic auditing. Transparency and auditability limitations are problematic both for companies’ prevention efforts and for government oversight, both in terms of how artificial intelligence (AI) systems function and how large‐scale digital platforms support recruitment processes. This article explores the risks and users’ understandings of discrimination when using AI and automated decision‐making (ADM) in worker recruitment. We rely on data in the form of 110 completed questionnaires with representatives from 10 of the 50 largest recruitment agencies in Sweden and representatives from 100 Swedish companies with more than 100 employees (“major employers”). In this study, we made use of an open definition of AI to accommodate differences in knowledge and opinion around how AI and ADM are understood by the respondents. The study shows a significant difference between direct and indirect AI and ADM use, which has implications for recruiters’ awareness of the potential for bias or discrimination in recruitment. All of those surveyed made use of large digital platforms like Facebook and LinkedIn for their recruitment, leading to concerns around transparency and accountability—not least because most respondents did not explicitly consider this to be AI or ADM use. We discuss the implications of direct and indirect use in recruitment in Sweden, primarily in terms of transparency and the allocation of accountability for bias and discrimination during recruitment processes.
- Published
- 2024
- Full Text
- View/download PDF
46. Automated algorithm aided capacity and confidence boost in surgical decision-making training for inferior clivus
- Author
-
Ke Tang, Bo Bu, Hongcheng Tian, Yang Li, Xingwang Jiang, Zenghui Qian, and Yiqiang Zhou
- Subjects
surgical planning ,high-risk ,surgical simulation ,automated decision-making ,confidence ,risk-benefit assessment ,Surgery ,RD1-811 - Abstract
Objective: To assess the impact of automated algorithms on trainees' decision-making capacity and confidence for individualized surgical planning. Methods: At Chinese PLA General Hospital, trainees were enrolled to undergo decision-making capacity and confidence training through three alternative visual tasks on an inferior clivus model generated by an automated algorithm, given consecutively in three exemplars. The rationale of the automated decision-making was used to instruct each trainee. Results: Following automated decision-making calculation in 50 skull base models, we identified three optimal plans, the infra-tubercle approach (ITA), trans-tubercle approach (TTA), and supra-tubercle approach (STA), for 41 (82.00%), 8 (16.00%), and 1 (2.00%) subjects, respectively. From September 1, 2023, through November 17, 2023, 62 trainees (median age [range]: 27 [26–28]; 28 [45.16%] female; 25 [40.32%] neurosurgeons) made a decision among the three plans for the three typical models (ITA, TTA, and STA exemplars). The confidence ratings had good test-retest reliability (Spearman's rho: 0.979; 95% CI: 0.970 to 0.988) and criterion validity with time spent (Spearman's rho: −0.954; 95% CI: −0.963 to −0.945). Following instruction in automated decision-making, time spent (initial test: 24.02 vs. 7.13 in ITA; 30.24 vs. 7.06 in TTA; 34.21 vs. 12.82 in STA) and total hits (initial test: 30 vs. 16 in ITA; 37 vs. 17 in TTA; 42 vs. 28 in STA) decreased significantly, and confidence ratings (initial test: 2 vs. 4 in ITA; 2 vs. 4 in TTA; 1 vs. 3 in STA) increased correspondingly. Statistically significant differences (P
(An illustrative sketch of the Spearman's rho checks appears after this record.)
- Published
- 2024
- Full Text
- View/download PDF
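As a side note on the statistics in the preceding abstract: test-retest reliability and criterion validity of ordinal confidence ratings are commonly assessed with Spearman's rank correlation. The sketch below shows one way such checks might be reproduced in Python with `scipy.stats.spearmanr`; all data are invented, and none of the resulting numbers come from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data for 62 trainees: confidence ratings (1-5 Likert)
# at test and retest, plus time spent on the task.
confidence_test = rng.integers(1, 6, size=62)
confidence_retest = np.clip(
    confidence_test + rng.integers(-1, 2, size=62), 1, 5
)
time_spent = 40 - 5 * confidence_test + rng.normal(0, 2, size=62)

# Test-retest reliability: rank correlation of the two rating passes
# (the abstract reports rho = 0.979 for the real data).
rho_reliability, p_rel = spearmanr(confidence_test, confidence_retest)

# Criterion validity: higher confidence should track shorter time
# spent, so a strongly negative rho is expected (the abstract
# reports rho = -0.954).
rho_validity, p_val = spearmanr(confidence_test, time_spent)

print(f"test-retest rho = {rho_reliability:.3f} (p = {p_rel:.2g})")
print(f"confidence vs time rho = {rho_validity:.3f} (p = {p_val:.2g})")
```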
47. Formal, Procedural, and Material Requirements of the Rule of Law in the Context of Automated Decision-Making
- Author
-
Suksi, Markku and Suksi, Markku, editor
- Published
- 2023
- Full Text
- View/download PDF
48. Legislating for Legal Certainty, with a Right to a Human Face, in an Automated Public Administration
- Author
-
Pöysti, Tuomas and Suksi, Markku, editor
- Published
- 2023
- Full Text
- View/download PDF
49. Automation in Administrative Decision-Making Concerning Social Benefits: A Government Agency Perspective
- Author
-
Sarlin, Riku and Suksi, Markku, editor
- Published
- 2023
- Full Text
- View/download PDF
50. Introduction
- Author
-
Suksi, Markku and Suksi, Markku, editor
- Published
- 2023
- Full Text
- View/download PDF