278 results for "ARTIFICIAL intelligence & ethics"
Search Results
2. Generative Artificial Intelligence as Hypercommons: Ethics of Authorship and Ownership.
- Author
- Islam, Gazi and Greenwood, Michelle
- Subjects
- GENERATIVE artificial intelligence; AUTHORSHIP; BUSINESS ethics; SCHOLARLY publishing; MORAL agent (Philosophy); ARTIFICIAL intelligence & ethics
- Abstract
In this editorial essay, we argue that Generative Artificial Intelligence programs (GenAI) draw on what we term a "hypercommons", involving collectively produced inputs and labour that are largely invisible or untraceable. We argue that automatizing the exploitation of common inputs, in ways that remix and reconfigure them, can lead to a crisis of academic authorship in which the moral agency involved in scholarly production is increasingly eroded. We discuss the relationship between the hypercommons and authorship in terms of moral agency and the ethics of academic production, speculating on different responses to the crisis of authorship as posed by GenAI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Voice in the Machine: Ethical Considerations for Language-Capable Robots: Parsing the promise of language-capable robots.
- Author
- Williams, T., Matuszek, Cynthia, Jokinen, Kristiina, Korpan, Raj, Pustejovsky, James, and Scassellati, Brian
- Subjects
- ARTIFICIAL intelligence & ethics; ETHICS; COMPUTATIONAL linguistics; NATURAL language processing; DISCRIMINATION (Sociology); PREJUDICES
- Abstract
The article discusses various ethical considerations for language-capable robots. These concerns include trust, influence, identity, and privacy, and will require consideration by researchers, practitioners, and the general public. Various potential negative outcomes are discussed including robot control over human morals, a default identity perception grounded in white heteropatriarchy, gendered and racialized language-capable robots, and the potential for robots to be used as mobile surveillance tools.
- Published
- 2023
- Full Text
- View/download PDF
4. Artificial Intelligence and the Political Legitimacy of Global Governance.
- Author
- Erman, Eva and Furendal, Markus
- Subjects
- ARTIFICIAL intelligence & ethics; LEGITIMACY of governments; DEMOCRACY; DECISION making in political science; INTERNATIONAL organization
- Abstract
Although the concept of "AI governance" is frequently used in the debate, it is still rather undertheorized. Often it seems to refer to the mechanisms and structures needed to avoid "bad" outcomes and achieve "good" outcomes with regard to the ethical problems artificial intelligence is thought to actualize. In this article we argue that, although this outcome-focused view captures one important aspect of "good governance," its emphasis on effects runs the risk of overlooking important procedural aspects of good AI governance. One of the most important properties of good AI governance is political legitimacy. Starting out from the assumptions that AI governance should be seen as global in scope and that political legitimacy requires at least a democratic minimum, this article has a twofold aim: to develop a theoretical framework for theorizing the political legitimacy of global AI governance, and to demonstrate how it can be used as a compass for critically assessing the legitimacy of actual instances of global AI governance. Elaborating on a distinction between "governance by AI" and "governance of AI" in relation to different kinds of authority and different kinds of decision-making leads us to the conclusions that much of the existing global AI governance lacks important properties necessary for political legitimacy, and that political legitimacy would be negatively impacted if we handed over certain forms of decision-making to artificial intelligence systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. IF WE COULD TALK TO THE ANIMALS, HOW SHOULD WE DISCUSS THEIR LEGAL RIGHTS?
- Author
- Torrance, Andrew W. and Tomlinson, Bill
- Subjects
- ANIMAL communication; ARTIFICIAL intelligence; NEURAL circuitry; MACHINE learning; ARTIFICIAL intelligence & ethics
- Abstract
The intricate tapestry of animal communication has long fascinated humanity, with the sophisticated linguistics of cetaceans holding a special place of intrigue due to the cetaceans' significant brain size and apparent intelligence. This Essay explores the legal implications of the recent advancements in artificial intelligence (AI), specifically machine learning and neural networks, that have made significant strides in deciphering sperm whale (Physeter macrocephalus) communication. We view the ability of a being to communicate as one--but not the only--potential pathway to qualify for legal rights. As such, we investigate the possibility that the ability to communicate should trigger legal rights for beings capable of communicating, whether they be cetaceans or other creatures. As the Cetacean Translation Initiative (CETI) project, which is actively working to unlock sperm whale language, moves closer to enabling meaningful human-cetacean dialogue, we stand on the precipice of a transformative understanding that may compel a radical reevaluation of animal legal rights and, perhaps, human legal rights as well. In fact, viewing eligibility for legal rights through a more objective lens, such as a communication criterion, may even improve our understanding of human legal rights, their origins, extent, application, and even entitlement itself. We begin with an overview of animal communication, emphasizing the complex acoustic patterns of sperm whale songs and clicks, which have been captured and analyzed through the collaborative efforts of marine biologists and computer scientists. This cross-disciplinary effort has yielded what the Dominica Sperm Whale Project has named "Flukebook"--a robust dataset that informs machine-learning models with acoustic signals, contextual behavioral data, genetic data, and geospatial information--that opens the door to the potential of an interspecies large language model (LLM) useful for communication among sperm whales and humans.
Having established that the prospect of communicating with another species is becoming increasingly feasible, we then delve into the philosophical and ethical considerations that accompany such a breakthrough. Drawing upon the perspectives of thinkers such as Jeremy Bentham, Professor Peter Singer, and Professor Martha Nussbaum, we investigate the ethical foundations for considering the legal rights of cetaceans, or other nonhuman animals. This investigation is juxtaposed with historical whaling laws and modern legal frameworks, probing the adequacy of current laws, norms, practices, and attitudes regarding emerging interspecies communication. Finally, we propose a novel legal paradigm that contends with the implications of cetacean communication capabilities. As we inch toward potentially understanding requests, preferences, or even rules or laws of sperm whales, the ethical imperative to reexamine their legal standing becomes undeniable. This Essay examines practical legal issues such as jurisdiction, standing, representation, autonomy, and the feasibility of animal citizenship. In fact, it envisions innovative legal constructs such as a "Magna Carta Cetacea" and a "United Species" extension of the United Nations. In addition, we endeavor to articulate an objective standard by which any being capable of the requisite communication qualifies for legal rights. In this potential legal frontier, the communication of preferences by an animal may necessitate that we seriously consider conferring legal rights to those animals. This groundbreaking dialogue could not only elevate the rights of whales, but also provoke a broader discussion about the principles underlying human legal rights themselves, challenging our current anthropocentric legal systems to evolve. 
As we decode the "codas" of sperm whales, we are challenged to reenvision the legal and normative matrix of life on Earth and our place within it, guided by potential principles such as mutual respect and legal recognition that transcend species boundaries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
6. Co-Authoring with an AI? Ethical Dilemmas and Artificial Intelligence.
- Author
- Jabotinsky, Hadar Y. and Sarel, Roee
- Subjects
- ARTIFICIAL intelligence & ethics; ARTIFICIAL intelligence; ETHICS; ONLINE chat
- Abstract
The Artificial Intelligence ("AI") revolution threatens to change the way in which legal articles are written. Authors who previously had to sort through vast amounts of scholarship can now get quick answers from algorithms in a click of a button. While this may speed up legal writing, it may also pose significant challenges for transparency: can editors still trust the content submitted by authors? Would AI influence also how editors review submissions? And are current submission guidelines adequate to deal with AI? Our Article addresses these questions in a three-step process. First, we engaged with ChatGPT--the fastest growing consumer application in history--only a few days after its release. By posing ChatGPT with questions on the ethics of using AI and asking it to convert the responses into an article, we derive new insights on the strength of AI to provide relevant answers as well as its weaknesses of "hallucinated" sources and limited accuracy. Second, we conducted a comparison of ChatGPT to its internet-connected variant of Microsoft's Bing Chat, using a similar exercise. This comparison enables us to identify the circumstances under which turning to AI would be useful. Finally, we apply our insights to different guidelines of major publishers, which have diverged on their approach toward AI-written text. The Article's contribution is two-fold: (1) it provides a novel examination of how Generative AI can be used in legal writing and the ethical boundaries of such practices, and (2) it compares the existing publishing guidelines and identifies how these may affect the behavior of authors and editors. We find that law reviews seem to ignore the issue altogether and emphasize the need to develop AI policies for legal writing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
7. Bridging the accountability gap of artificial intelligence – what can be learned from Roman law?
- Author
- Heine, Klaus and Quintavalla, Alberto
- Subjects
- ARTIFICIAL intelligence; ROMAN law; ENSLAVED persons; STAKEHOLDERS; ARTIFICIAL intelligence & ethics
- Abstract
This paper discusses the accountability gap problem posed by artificial intelligence. After sketching out the accountability gap problem we turn to ancient Roman law and scrutinise how slave-run businesses dealt with the accountability gap through an indirect agency of slaves. Our analysis shows that Roman law developed a heterogeneous framework in which multiple legal remedies coexist to accommodate the various competing interests of owners and contracting third parties. Moreover, Roman law shows that addressing the various emerging interests had been a continuous and gradual process of allocating risks among different stakeholders. The paper concludes that these two findings are key for contemporary discussions on how to regulate artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Does Information About AI Regulation Change Manager Evaluation of Ethical Concerns and Intent to Adopt AI?
- Author
- Cuéllar, Mariano-Florentino, Larsen, Benjamin, Lee, Yong Suk, and Webb, Michael
- Subjects
- ARTIFICIAL intelligence laws; ARTIFICIAL intelligence & ethics; TECHNOLOGICAL innovations; ARTIFICIAL intelligence; ORGANIZATIONAL transparency; PRIVACY; DISCRIMINATION (Sociology); EXECUTIVES
- Abstract
We examine the impacts of potential artificial intelligence (AI) regulations on managers' perceptions on ethical issues related to AI and their intentions to adopt AI technologies. We conduct a randomized online survey experiment on more than a thousand managers in the United States. We randomly present managers with different proposed AI regulations, and ask about ethical issues related to AI and their intentions related to AI adoption. We find that information about AI regulation increases manager perception of the importance of safety, privacy, bias/discrimination, and transparency issues related to AI. However, there is a tradeoff; regulation information reduces manager intent to adopt AI technologies. Moreover, information about regulation increases manager intent to spend on developing AI strategy including ethical issues at the cost of investing in AI adoption, such as providing AI training to current employees or purchasing AI software packages. (JEL: K24, L21, L51, O33, O38) [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning.
- Author
- Lanz, Lukas, Briker, Roman, and Gerpott, Fabiola H.
- Subjects
- LEADERSHIP ethics; HUMAN-artificial intelligence interaction; SUPERIOR-subordinate relationship; EMPLOYEE attitudes; MACHINE learning; SUPERVISION; ARTIFICIAL intelligence & ethics
- Abstract
The role of artificial intelligence (AI) in organizations has fundamentally changed from performing routine tasks to supervising human employees. While prior studies focused on normative perceptions of such AI supervisors, employees' behavioral reactions towards them remained largely unexplored. We draw from theories on AI aversion and appreciation to tackle the ambiguity within this field and investigate if and why employees might adhere to unethical instructions either from a human or an AI supervisor. In addition, we identify employee characteristics affecting this relationship. To inform this debate, we conducted four experiments (total N = 1701) and used two state-of-the-art machine learning algorithms (causal forest and transformers). We consistently find that employees adhere less to unethical instructions from an AI than a human supervisor. Further, individual characteristics such as the tendency to comply without dissent or age constitute important boundary conditions. In addition, Study 1 identified that the perceived mind of the supervisors serves as an explanatory mechanism. We generate further insights on this mediator via experimental manipulations in two pre-registered studies by manipulating mind between two AI (Study 2) and two human supervisors (Study 3). In (pre-registered) Study 4, we replicate the resistance to unethical instructions from AI supervisors in an incentivized experimental setting. Our research generates insights into the 'black box' of human behavior toward AI supervisors, particularly in the moral domain, and showcases how organizational researchers can use machine learning methods as powerful tools to complement experimental research for the generation of more fine-grained insights. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Investigating the Intersections of Ethics and Artificial Intelligence in the Collections as Data Position Papers.
- Author
- Osti, Giulia, Cushing, Amber, and Little, Suzanne
- Subjects
- ARTIFICIAL intelligence; DIGITAL technology; INFORMATION technology; TECHNOLOGICAL innovations; DATA privacy; INFORMATION science
- Abstract
A paradigm shift is currently underway with the emergence of the Collections as Data movement, which advocates the creation and dissemination of cultural heritage collections that are amenable to large‐scale computation to empower both collection managers and users. Although this discourse is beginning to gain some traction in the literature, critical evidence‐based assessments of the opportunities and risks of this process are underexplored. This paper presents the results of a content analysis of the official position statements (n = 83) produced in the Collections as Data forums and written by international professionals working with digital collections. Although preliminary, the analysis presented and discussed here sheds light on the initial reception of the idea of Collections as Data and its articulation in practice. The study represents the first systematic attempt to explore the complexities of the intersection of ethics and artificial intelligence in the context of cultural heritage, aiming at providing a valuable precedent for further elaboration and discussion. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Is Computing a Discipline in Crisis?
- Author
- Vardi, Moshe Y.
- Subjects
- ARTIFICIAL intelligence; ARTIFICIAL intelligence & ethics; ARTIFICIAL intelligence & society; CHATGPT; ARTIFICIAL intelligence research; SOCIAL responsibility of business
- Abstract
The article focuses on the risks of artificial intelligence (AI) since the introduction of ChatGPT and the social responsibility of computing professionals to address these concerns. The author discusses the lack of research on the topic at NeurIPS, the disconnect between the Association for the Advancement of Artificial Intelligence (AAAI) and the International Joint Conference on Artificial Intelligence (IJCAI), and the issue of for-profit Big Tech.
- Published
- 2024
- Full Text
- View/download PDF
12. NEXT GEN AI.
- Author
- Enis, Matt
- Subjects
- ARTIFICIAL intelligence in libraries; ARTIFICIAL intelligence & ethics; ARTIFICIAL intelligence; LIBRARIANS; ELECTRONIC data processing
- Abstract
The article discusses the role of librarians in helping patrons work with generative artificial intelligence (AI). Topics include the potential of this technology to automate repetitive, time-consuming tasks in many different fields, the growing concern about the potential impact of generative AI on the jobs of screenwriters and actors, and ethical concerns about generative AI.
- Published
- 2024
13. Data Science Meets Law: Learning Responsible AI together.
- Author
- Hod, Shlomi, Chagal-Feferkorn, Karni, Elkin-Koren, Niva, and Gal, Avidor
- Subjects
- ARTIFICIAL intelligence & ethics; LAW; LEARNING; EQUALITY; RIGHT of privacy; PROBLEM solving; INTERDISCIPLINARY teams in education
- Abstract
The authors discuss a course they designed concerning the intersection of what is termed responsible artificial intelligence (AI), law, ethics, and society. According to the article, the goal was to develop multidisciplinary collaborative skills for law and data science students by participating in joint problem-solving work including transparency, privacy, and equality.
- Published
- 2022
- Full Text
- View/download PDF
14. BIGGER THAN BIAS.
- Author
- Perrigo, Billy and Popli, Nik
- Subjects
- WOMEN computer scientists; ARTIFICIAL intelligence research; ARTIFICIAL intelligence & ethics; RESEARCH institutes; ARTIFICIAL intelligence laws; ARTIFICIAL intelligence in industry
- Abstract
The article discusses the life, career, and accomplishments of computer scientist Timnit Gebru particularly in the field of artificial intelligence (AI). Topics explored include her research on large language AI models which led to her departure from the ethical AI team of technology firm Google, the Distributed AI Research Institute (DAIR) she launched with the goal of making AI beneficial to everyone, and her support for the proper regulation of AI use in industry.
- Published
- 2022
15. When Machine Learning Goes Off the Rails.
- Author
- Babic, Boris, Cohen, I. Glenn, Evgeniou, Theodoros, and Gerke, Sara
- Subjects
- MACHINE learning; RISK management in business; ARTIFICIAL intelligence & ethics; DECISION making; HUMAN-artificial intelligence interaction; ARTIFICIAL intelligence software
- Abstract
Products and services that rely on machine learning—computer programs that constantly absorb new data and adapt their decisions in response—don’t always make ethical or accurate choices. Sometimes they cause investment losses, for instance, or biased hiring or car accidents. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology’s potential downside. Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they’re currently fed. And their complexity can make it hard to determine whether or why they made a mistake. A key question executives must answer is whether it’s better to allow smart offerings to continuously evolve or to “lock” their algorithms and periodically update them. In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it’s performing as intended. [ABSTRACT FROM AUTHOR]
- Published
- 2021
16. AI in History.
- Author
- Jones, Matthew L.
- Subjects
- ARTIFICIAL intelligence; HISTORICAL research methods; HISTORY & technology; ARTIFICIAL intelligence & society; ARTIFICIAL intelligence & ethics
- Abstract
The article discusses artificial intelligence (AI), with a particular focus given to its history as well as its use as a supplement in historical practice. Topics mentioned include data-focused machine learning, the historicization of AI systems, and the omnipresence of biases in historical automatic decision systems.
- Published
- 2023
- Full Text
- View/download PDF
17. Artificial Intelligence Applications and the Rules of Professional Conduct.
- Author
- Marshall, Romaine C. and Cohen, Gregory
- Subjects
- ARTIFICIAL intelligence & ethics; PRACTICE of law; INTEGRATED bar; LAWYERS; LEGAL ethics; CHATGPT; ATTORNEY & client
- Abstract
The article discusses the guidance issued by the Utah State Bar (USB) on the integration of generative artificial intelligence (AI) applications in legal practice. Topics explored include the adherence of attorneys to the USB Rules of Professional Conduct with regard to their use of the AI program ChatGPT, the concerns related to client confidentiality such as inappropriate disclosure of information and data breaches, and the potential impact of AI applications on attorney-client relations.
- Published
- 2023
18. AI and Course Work: Figuring out Ethical Strategies.
- Author
- Peters, Brian M.
- Subjects
- ARTIFICIAL intelligence in education; ACADEMIC workload of students; ARTIFICIAL intelligence & ethics; ASSESSMENT for learning (Teaching model)
- Abstract
The article offers information on the challenges instructors face regarding artificial intelligence (AI) in student work, discussing ongoing discussions and initiatives at Concordia University and Cegep levels. Topics include the detection of AI, ethical strategies, and the impact on pedagogy and assessment.
- Published
- 2023
- Full Text
- View/download PDF
19. Seeming Ethical Makes You Attractive: Unraveling How Ethical Perceptions of AI in Hiring Impacts Organizational Innovativeness and Attractiveness.
- Author
- da Motta Veiga, Serge P., Figueroa-Armijos, Maria, and Clark, Brent B.
- Subjects
- ARTIFICIAL intelligence; ARTIFICIAL intelligence & ethics; ORGANIZATIONAL behavior; INNOVATIONS in business; EMPLOYEE selection; EMPLOYMENT practices; SOCIAL sciences & ethics
- Abstract
More organizations use AI in the hiring process than ever before, yet the perceived ethicality of such processes seems to be mixed. With such variation in our views of AI in hiring, we need to understand how these perceptions impact the organizations that use it. In two studies, we investigate how ethical perceptions of using AI in hiring are related to perceptions of organizational attractiveness and innovativeness. Our findings indicate that ethical perceptions of using AI in hiring are positively related to perceptions of organizational attractiveness, both directly and indirectly via perceptions of organizational innovativeness, with variations depending on the type of hiring method used. For instance, we find that individuals who consider it ethical for organizations to use AI in ways often considered to be intrusive to privacy, such as analyzing social media content for traits and characteristics, view such organizations as both more innovative and attractive. Our findings trigger a timely discussion about the critical role of ethical perceptions of AI in hiring on organizational attractiveness and innovativeness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work.
- Author
- Bankins, Sarah and Formosa, Paul
- Subjects
- ARTIFICIAL intelligence & ethics; QUALITY of work life; HUMAN-artificial intelligence interaction; ARTIFICIAL intelligence in business; ORGANIZATIONAL change
- Abstract
The increasing workplace use of artificially intelligent (AI) technologies has implications for the experience of meaningful human work. Meaningful work refers to the perception that one's work has worth, significance, or a higher purpose. The development and organisational deployment of AI is accelerating, but the ways in which this will support or diminish opportunities for meaningful work and the ethical implications of these changes remain under-explored. This conceptual paper is positioned at the intersection of the meaningful work and ethical AI literatures and offers a detailed assessment of the ways in which the deployment of AI can enhance or diminish employees' experiences of meaningful work. We first outline the nature of meaningful work and draw on philosophical and business ethics accounts to establish its ethical importance. We then explore the impacts of three paths of AI deployment (replacing some tasks, 'tending the machine', and amplifying human skills) across five dimensions constituting a holistic account of meaningful work, and finally assess the ethical implications. In doing so we help to contextualise the meaningful work literature for the era of AI, extend the ethical AI literature into the workplace, and conclude with a range of practical implications and future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Biases in AI Systems.
- Author
- Srinivasan, Ramya and Chander, Ajay
- Subjects
- ARTIFICIAL intelligence; ARTIFICIAL intelligence & ethics; HUMAN facial recognition software; INSTITUTIONAL racism; MACHINE learning
- Abstract
The article critiques the ethics of artificial intelligence systems, focusing on alleged systemic racism within computer facial recognition software. It outlines the responsibilities of machine learning developers in preventing racial biases from being integrated into artificial intelligence processes and describes potential biases in phases of machine learning, including sampling, measurement, and negative sets.
- Published
- 2021
- Full Text
- View/download PDF
22. Ethical Machines.
- Author
- Blackman, Reid
- Subjects
- ARTIFICIAL intelligence & ethics; ORGANIZATIONAL ethics; LEADERS; MACHINE learning; SUBJECTIVITY; ORGANIZATIONAL aims & objectives
- Abstract
Artificial intelligence (AI), specifically machine learning (ML), is transforming the world, equipping organizations with the ability to process volumes of data efficiently to achieve insights and outcomes that would be impossible if done by hand. That ability can impact individuals' lives and organizational operations in ways both good and bad. In Ethical Machines, Reid Blackman instructs readers both in how to view AI ethics and in how to successfully embed it into operations--with the end goal of conducting AI for good.
- Published
- 2023
23. ¿TIENE ALGO QUE DECIR LA ÉTICA? Retos y oportunidades de la inteligencia artificial en la empresa. [Does Ethics Have Anything to Say? Challenges and opportunities of artificial intelligence in business.]
- Author
- Salcedo, Alejandro
- Subjects
- ARTIFICIAL intelligence; ARTIFICIAL intelligence & ethics; CORPORATE culture; RISK assessment; INDIVIDUAL development; JUSTICE; PERIODICAL articles
- Published
- 2023
24. Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions.
- Author
- Martin, Kirsten and Waldman, Ari
- Subjects
- ALGORITHMS; ARTIFICIAL intelligence; ARTIFICIAL intelligence & ethics; TECHNOLOGY & ethics; BUSINESS ethics; DECISION making; ETHICAL decision making; DECISION making in business
- Abstract
Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance mechanisms in practice today, such as notices and impact statements, do not lead to algorithmic decisions being perceived as more legitimate in general, and, consistent with legitimacy theory, that algorithmic decisions with good outcomes are perceived as more legitimate than bad outcomes. Yet, robust governance, such as offering an appeal process, can create a legitimacy dividend for decisions with bad outcomes. However, when arbitrary or morally dubious factors are used to make decisions, most legitimacy dividends are erased. In other words, companies cannot overcome the legitimacy penalty of using arbitrary or morally dubious factors, such as race or the day of the week, with a good outcome or an appeal process for individuals. These findings add new perspectives to both the literature on legitimacy and policy discussions on algorithmic decision-making in firms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. ChatGPT and global public health: Applications, challenges, ethical considerations and mitigation strategies.
- Author
- Parray, Ateeb Ahmad, Inam, Zuhrat Mahfuza, Ramonfaur, Diego, Haider, Shams Shabab, Mistry, Sabuj Kanti, and Pandya, Apurva Kumar
- Subjects
- CHATGPT; PUBLIC health research; ARTIFICIAL intelligence & ethics; PUBLIC health; DEEP learning; TECHNOLOGY
- Abstract
The advancement of deep learning and artificial intelligence has resulted in the development of state-of-the-art language models, such as ChatGPT. This technology can analyze large amounts of data, identify patterns, and assist in the analysis and understanding of risk factors for diseases. Despite its potential, the applications, challenges, and ethical considerations have not yet been fully explored in global health research. This paper examines the applications of ChatGPT in global health research, assesses the challenges in its use, and proposes mitigation strategies. Additionally, it describes the ethical considerations around the use of ChatGPT in global health research and suggests potential avenues for addressing these issues. The paper concludes that it is crucial to understand the capabilities and limitations of this technology in order to fully realize its potential and ensure its responsible integration into global health research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. MARC ANDREESSEN ON ARTIFICIAL INTELLIGENCE AND THE FUTURE.
- Author
-
MANGU-WARD, KATHERINE
- Subjects
- *
CHATGPT , *ARTIFICIAL intelligence , *ARTIFICIAL intelligence & ethics - Abstract
The article presents an interview with Marc Andreessen, co-founder of the venture capital powerhouse Andreessen Horowitz, about the future of artificial intelligence (A.I.) and its implications. Topics include the recent breakthroughs in A.I., the capabilities and limitations of large language models like ChatGPT, and the cultural and societal considerations surrounding A.I. development and values.
- Published
- 2023
27. Artificial Intelligence and Robots for the Library and Information Professions.
- Author
-
Luca, Edward, Narayan, Bhuva, and Cox, Andrew
- Subjects
- *
ARTIFICIAL intelligence , *ARTIFICIAL intelligence & ethics , *LIBRARIES - Abstract
An introduction is presented in which the editor discusses articles within the issue on topics including the ethical implications of artificial intelligence (AI) and related technologies, the representation of library activities in national AI plans, and a review of curricula in Australia to identify the framing of digital technologies.
- Published
- 2022
- Full Text
- View/download PDF
28. The Ethics of AI for Information Professionals: Eight Scenarios.
- Author
-
Cox, Andrew
- Subjects
- *
INFORMATION professionals , *ETHICS , *ARTIFICIAL intelligence & ethics , *ARTIFICIAL intelligence , *SOCIAL justice , *INNOVATION adoption , *PROFESSIONAL ethics - Abstract
Artificial Intelligence (AI) is central to transformative changes happening in many industries, perhaps amounting to a fourth industrial revolution, but it has also raised a storm of ethical concerns. Information professionals need to navigate these ethical issues effectively because they are likely to use AI in delivering services as well as contributing to the process of adoption of AI more widely in their organisations. Professional ethical codes are too high level to offer precise or complete guidance. In this context, the purpose of this paper is to review the relevant literature and describe eight ethics scenarios of AI which have been developed specifically for information professionals to understand the issues in a concrete form. The paper considers how AI might be defined and presents some of the applications relevant to the information profession. It then summarises the key ethical issues raised by AI in general, both those inherent to the technology and those arising from the nature of the AI industry. It considers existing studies that have discussed aspects of the ethical issues specifically for information professionals. It then describes a set of eight ethics scenarios that have been developed and shared in an open form to promote their reuse. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. The Ethics of Blockchain in Organizations.
- Author
-
Sharif, Monica M. and Ghodoosi, Farshad
- Subjects
BLOCKCHAINS ,TECHNOLOGY & ethics ,ORGANIZATIONAL ethics ,EMPLOYEE recruitment ,EMPLOYEE selection ,PERSONNEL management ,ARTIFICIAL intelligence & ethics ,EMPLOYEE retention - Abstract
Blockchain is an open digital ledger technology that has the capability of significantly altering the way that people operations (i.e., human resource management) are conducted in organizations. This research takes a first step in proposing several ways in which blockchain technology can be used to improve current organizational practices, while also considering the ethical implications. Specifically, the paper examines the role that blockchain technology plays in three primary areas of people operations: (1) entry to the organization (via recruitment and selection), (2) intraorganizational processes (including compensation via smart contracts, retention and motivation via shared leadership and conflict management via network-based dispute resolution, and performance management), and (3) exit (offboarding). In each section, the paper reviews the ethical implications from the lenses of virtue ethics, utilitarianism, deontology and contractarianism. The paper concludes that, on the whole, the implementation of blockchain technology in people operations processes can create a more ethical work environment. However, careful implementation is necessary and requires extensive examination of ethical implications in advance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence.
- Author
-
Telkamp, Jake B. and Anderson, Marc H.
- Subjects
ARTIFICIAL intelligence & ethics ,ORGANIZATIONAL ethics ,MORAL judgment ,ARTIFICIAL intelligence in business ,MORAL foundations theory - Abstract
Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization's use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person's moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI.
- Author
-
Giroux, Marilyn, Kim, Jungkeun, Lee, Jacob C., and Park, Jongwon
- Subjects
ARTIFICIAL intelligence & ethics ,GUILT (Psychology) ,HUMAN-artificial intelligence interaction ,SELF-service (Economics) ,PURCHASING ,INTENTION ,SHOPPING ,ARTIFICIAL intelligence in business - Abstract
Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers' moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. From Reality to World. A Critical Perspective on AI Fairness.
- Author
-
John-Mathews, Jean-Marie, Cardon, Dominique, and Balagué, Christine
- Subjects
ARTIFICIAL intelligence & ethics ,FAIRNESS ,REALITY ,ALGORITHMS ,MACHINE learning ,BIG data ,BUSINESS ethics - Abstract
Fairness of Artificial Intelligence (AI) decisions has become a big challenge for governments, companies, and societies. We offer a theoretical contribution to consider AI ethics outside of high-level and top-down approaches, based on the distinction between "reality" and "world" from Luc Boltanski. To do so, we provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is "realist", in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, we show that the limits of "realist" fairness corrections lead to the elaboration of "radical responses" to fairness, that is, responses that radically change the format of data. Third, we show that fairness correction is shifting to a "domination regime" that absorbs criticism, and we provide some theoretical and practical avenues for further development in AI ethics. Using an ad hoc critical space stabilized by reality tests alongside the algorithm, we build a shared responsibility model which is compatible with the radical response to fairness issues. Finally, this paper shows the fundamental contribution of pragmatic sociology theories, insofar as they afford a social and political perspective on AI ethics by giving an active role to material actors such as database formats on ethical debates. In a context where data are increasingly numerous, granular, and behavioral, it is essential to renew our conception of AI ethics on algorithms in order to establish new models of responsibility for companies that take into account changes in the computing paradigm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. From Greenwashing to Machinewashing: A Model and Future Directions Derived from Reasoning by Analogy.
- Author
-
Seele, Peter and Schultz, Mario D.
- Subjects
GREENWASHING (Marketing) ,ANALOGY ,KNOWLEDGE transfer ,ORGANIZATIONAL communication ,ARTIFICIAL intelligence & ethics ,DECEPTION ,CORPORATE political activity ,DECOUPLING (Organizational behavior) - Abstract
This article proposes a conceptual mapping to outline salient properties and relations that allow for a knowledge transfer from the well-established greenwashing phenomenon to the more recent machinewashing. We account for relevant dissimilarities, indicating where conceptual boundaries may be drawn. Guided by a "reasoning by analogy" approach, the article addresses the structural analogy and machinewashing idiosyncrasies leading to a novel and theoretically informed model of machinewashing. Consequently, machinewashing is defined as a strategy that organizations adopt to engage in misleading behavior (communication and/or action) about ethical Artificial Intelligence (AI)/algorithmic systems. Machinewashing involves misleading information about ethical AI communicated or omitted via words, visuals, or the underlying algorithm of AI itself. Furthermore, and going beyond greenwashing, machinewashing may be used for symbolic actions such as (covert) lobbying and prevention of stricter regulation. By outlining diverse theoretical foundations of the established greenwashing domain and their relation to specific research questions, the article proposes a machinewashing model and a set of theory-related research questions on the macro, meso, and micro-level for future machinewashing research. We conclude by stressing limitations and by outlining practical implications for organizations and policymakers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Moral Judgments in the Age of Artificial Intelligence.
- Author
-
Sullivan, Yulia W. and Fosso Wamba, Samuel
- Subjects
MORAL judgment ,ARTIFICIAL intelligence & ethics ,RESPONSIBILITY ,HARM (Ethics) ,BLAME ,INTENTION ,REASONING ,EMOTIONAL state - Abstract
The current research aims to answer the following question: "who will be held responsible for harm involving an artificial intelligence (AI) system?" Drawing upon the literature on moral judgments, we assert that when people perceive an AI system's action as causing harm to others, they will assign blame to different entity groups involved in an AI's life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the theory of mind perception, we hypothesized that two dimensions of mind: perceived agency—attributing intention, reasoning, pursuing goals, and communicating to AI, and perceived experience—attributing emotional states, such as feeling pain and pleasure, personality, and consciousness to AI—mediated the relationship between perceived intentional harm and blame judgments toward AI. We also predicted that people are likely to attribute higher mind characteristics to AI when harm is perceived to be directed to humans than when it is perceived to be directed to non-humans. We tested our research model in three experiments. In all experiments, we found that perceived intentional harm led to blame judgments toward AI. In two experiments, we found perceived experience, not agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles the findings by showing that perceived intentional harm directed to a non-human entity did not lead to increased attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Employee Perceptions of the Effective Adoption of AI Principles.
- Author
-
Kelley, Stephanie
- Subjects
EMPLOYEE psychology ,ARTIFICIAL intelligence in business ,EMPLOYEE attitudes ,ORGANIZATIONAL communication ,EMPLOYEE training ,ETHICS & compliance officers ,ORGANIZATIONAL structure ,ARTIFICIAL intelligence & ethics - Abstract
This study examines employee perceptions of the effective adoption of artificial intelligence (AI) principles in their organizations. Forty-nine interviews were conducted with employees of 24 organizations across 11 countries. Participants worked directly with AI across a range of positions, from junior data scientist to Chief Analytics Officer. The study found that there are eleven components that could impact the effective adoption of AI principles in organizations: communication, management support, training, an ethics office(r), a reporting mechanism, enforcement, measurement, accompanying technical processes, a sufficient technical infrastructure, organizational structure, and an interdisciplinary approach. The components are discussed in the context of business code adoption theory. The findings offer a first step in understanding potential methods for the effective adoption of AI principles in organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability.
- Author
-
Tóth, Zsófia, Caruana, Robert, Gruber, Thorsten, and Loebbecke, Claudia
- Subjects
ROBOTS ,ARTIFICIAL intelligence & ethics ,NORMATIVITY (Ethics) ,ARTIFICIAL intelligence in business ,DESIGNERS - Abstract
The business, management, and business ethics literature pays little attention to the topic of AI robots. The broad spectrum of potential ethical issues pertains to using driverless cars, AI robots in care homes, and in the military, such as Lethal Autonomous Weapon Systems. However, there is a scarcity of in-depth theoretical, methodological, or empirical studies that address these ethical issues, for instance, the impact of morality and where accountability resides in AI robots' use. To address this dearth, this study offers a conceptual framework that interpretively develops the ethical implications of AI robot applications, drawing on descriptive and normative ethical theory. The new framework elaborates on how the locus of morality (human to AI agency) and moral intensity combine within context-specific AI robot applications, and how this might influence accountability thinking. Our theorization indicates that in situations of escalating AI agency and situational moral intensity, accountability is widely dispersed between actors and institutions. 'Accountability clusters' are outlined to illustrate interrelationships between the locus of morality, moral intensity, and accountability and how these invoke different categorical responses: (i) illegal, (ii) immoral, (iii) permissible, and (iv) supererogatory pertaining to using AI robots. These enable discussion of the ethical implications of using AI robots, and associated accountability challenges for a constellation of actors—from designer, individual/organizational users to the normative and regulative approaches of industrial/governmental bodies and intergovernmental regimes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda.
- Author
-
Hunkenschroer, Anna Lena and Luetge, Christoph
- Subjects
ARTIFICIAL intelligence & ethics ,EMPLOYEE recruitment ,EMPLOYEE selection ,JOB advertising ,JOB resumes ,EMPLOYMENT interviewing ,HUMAN facial recognition software ,ARTIFICIAL intelligence in business - Abstract
Companies increasingly deploy artificial intelligence (AI) technologies in their personnel recruiting and selection process to streamline it, making it faster and more efficient. AI applications can be found in various stages of recruiting, such as writing job ads, screening of applicant resumes, and analyzing video interviews via face recognition software. As these new technologies significantly impact people's lives and careers but often trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. However, given the novelty of AI applications in recruiting practice, the subject is still an emerging topic in academic literature. To inform and strengthen the foundation for future research, this paper systematically reviews the extant literature on the ethicality of AI-enabled recruiting to date. We identify 51 articles dealing with the topic, which we synthesize by mapping the ethical opportunities, risks, and ambiguities, as well as the proposed ways to mitigate ethical risks in practice. Based on this review, we identify gaps in the extant literature and point out moral questions that call for deeper exploration in future research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Clicks, Lies and Videotape.
- Author
-
Borel, Brooke
- Subjects
- *
ARTIFICIAL intelligence & ethics , *ARTIFICIAL intelligence & society , *VIDEO editing , *DIGITAL audio , *FAKE news , *ACCURACY in journalism - Abstract
The article explores how artificial intelligence (AI) is making it possible, and easy, to manipulate audio and video. Particular focus is given to how this may increase the spread of disinformation on social media, thus having profound effects on public discourse and political stability. Additional topics include AI detection tools to flag fake videos, how fake news was a factor in the 2016 elections in the U.S., and how fake video is especially effective at provoking fear.
- Published
- 2018
- Full Text
- View/download PDF
39. Should AI-enabled medical devices be explainable?
- Author
-
Matulionyte, Rita, Nolan, Paul, Magrabi, Farah, and Beheshti, Amin
- Subjects
ARTIFICIAL intelligence in medicine ,MEDICAL equipment ,MACHINE learning ,PRODUCT liability ,DEEP learning ,ARTIFICIAL intelligence laws ,ARTIFICIAL intelligence & ethics - Abstract
Despite its exponential growth, artificial intelligence (AI) in healthcare faces various challenges. One of them is a lack of explainability of AI medical devices, which arguably leads to insufficient trust in AI technologies, quality, and accountability and liability issues. The aim of this paper is to examine whether, why and to what extent AI explainability should be demanded in relation to AI-enabled medical devices and their outputs. Relying on a critical analysis of interdisciplinary literature on this topic and an empirical study, we conclude that the role of AI explainability in the medical AI context is a limited one. If narrowly defined, the AI explainability principle is capable of addressing only a limited range of challenges associated with AI and is likely to reach fewer goals than sometimes expected. The study shows that, instead of technical explainability of medical AI devices, most stakeholders need more transparency around their development and quality assurance process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Is Having AI Generate Text Cheating?
- Author
-
Baquero, Carlos
- Subjects
- *
ARTIFICIAL intelligence , *AUTHORSHIP , *MACHINE learning , *ARTIFICIAL intelligence & ethics , *NATURAL language processing - Abstract
The author discusses using text generated by artificial intelligence (AI) and examines if it offers an unfair advantage to writers. He discusses the AI tool GPT-3, developed by research company OpenAI, explores how the tool utilizes language models, and notes the risk of blended writing with AI generated text.
- Published
- 2022
- Full Text
- View/download PDF
41. Dilemmas of Artificial Intelligence.
- Author
-
Denning, Peter J. and Denning, Dorothy E.
- Subjects
- *
ARTIFICIAL intelligence & society , *ARTIFICIAL intelligence & ethics - Abstract
The article lists 10 ethical and design dilemmas raised by AI (Artificial Intelligence) technology as of 2020 including the issue of tools for editing images, videos and soundtracks to produce convincing fakes, the high cost of reliable training data and the notion of surveillance capitalism.
- Published
- 2020
- Full Text
- View/download PDF
42. The art of appropriation.
- Author
-
Berlatsky, Noah
- Subjects
ARTIFICIAL intelligence ,ARTIFICIAL intelligence software ,ARTIFICIAL intelligence & ethics ,ARTIFICIAL intelligence research ,WORKS of art in art - Abstract
The article focuses on the concerns surrounding the use of artificial intelligence (AI) in art creation, particularly in relation to issues of exploitation and originality. Topics include the fear that AI threatens the core creative act, the concern about AI picking up material without payment, and the argument for embracing AI as a new form of creativity.
- Published
- 2023
43. Computer says "no comment".
- Author
-
Revell, Timothy
- Subjects
- *
ARTIFICIAL intelligence & ethics , *TECHNOLOGY & society , *ALGORITHMS , *MACHINE learning , *ARTIFICIAL intelligence & society - Abstract
The article questions the establishment of public trust in artificial intelligence algorithms. The author debates the impact of laws such as the European Union's General Data Protection Regulation on the use of algorithms and machine learning in decision-making areas including criminal justice administration, medicine, and autonomous vehicles and presents calls for algorithmic certification and transparency in computer systems.
- Published
- 2018
- Full Text
- View/download PDF
44. THE DIGITAL TRANSFORMATION OF LAW: ARE WE PREPARED FOR ARTIFICIALLY INTELLIGENT LEGAL PRACTICE?
- Author
-
Bridgesmith, Larry and Elmessiry, Adel
- Subjects
- *
ARTIFICIAL intelligence , *LEGAL professions , *TECHNOLOGY & law , *ARTIFICIAL intelligence in education , *ARTIFICIAL intelligence & ethics , *PROFESSIONAL ethics , *LEGAL ethics , *DIGITIZATION - Abstract
The article discusses the relationship between artificial intelligence (AI) and digital transformation of law and legal practice. Topics include digital transformation that corporate legal leaders seek to achieve, ethical concerns on the impact of AI on the legal profession and the Codes of Professional Responsibility, adoption of AI in law schools, and foundations of the development of ethical AI recommended by the European Union (EU).
- Published
- 2021
45. A Philosopher's Daughter Navigates a Career in AI.
- Subjects
- *
ARTIFICIAL intelligence research , *HUMAN-artificial intelligence interaction , *ARTIFICIAL intelligence & ethics , *EXISTENTIAL risk from artificial general intelligence - Abstract
The article presents the author's views and experience in working in artificial intelligence research and development. Topics include the author's introduction to the fields of technology and artificial intelligence, inventions and contributions to the field, funding of the ELLIS Unit Alicante Foundation, and the importance of steering the development of artificial intelligence for the benefit of humanity.
- Published
- 2023
- Full Text
- View/download PDF
46. Who Is Responsible Around Here?
- Author
-
Vardi, Moshe Y.
- Subjects
- *
ARTIFICIAL intelligence , *ARTIFICIAL intelligence & ethics , *HUMAN-artificial intelligence interaction , *CHATGPT - Abstract
The article focuses on the challenges of responsibility when it comes to artificial intelligence (AI). The author discusses the generative text programming known as ChatGPT, the "Blueprint for an AI Bill of Rights" put forth by the U.S. Office of Science and Technology Policy (OSTP), and the Association for Computing Machinery (ACM) policy statement.
- Published
- 2023
- Full Text
- View/download PDF
47. Guest Editorial: Business Ethics in the Era of Artificial Intelligence.
- Author
-
Haenlein, Michael, Huang, Ming-Hui, and Kaplan, Andreas
- Subjects
ARTIFICIAL intelligence & ethics ,BUSINESS ethics ,FAIRNESS - Abstract
An introduction is presented in which the authors discuss various reports within the journal on topics including the relationship between artificial intelligence (AI) and business ethics, AI fairness, and models of corporate responsibility.
- Published
- 2022
- Full Text
- View/download PDF
48. The Ethics of Artificial Intelligence: Review of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI by R. Blackman; Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenges by B.C. Stahl, D. Schroeder, and R. Rodrigues; and AI Ethics by M. Coeckelbergh: Ethical Machines: Your concise guide to totally unbiased, transparent, and respectful AI, Harvard Business Review Press, 2022, 224 pp., ISBN 9781647822811; Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenges, Springer Nature Switzerland AG, 2023, 116 pp., ISBN 9783031170409; AI Ethics, The MIT Press, 2020, 248 pp., ISBN 9780262538190
- Author
-
Goglin, Christian
- Subjects
ARTIFICIAL intelligence & ethics ,NONFICTION - Published
- 2023
- Full Text
- View/download PDF
49. THE FUTURE OF COMPUTING.
- Author
-
Hutson, Matthew
- Subjects
- *
HISTORY of computers , *COMPUTERS , *ARTIFICIAL intelligence & ethics , *COMPUTER algorithms - Abstract
The article discusses the extraordinary advances in computing over the last century, since 1833 when English mathematician Charles Babbage conceived a programmable machine that presaged today's computing architecture. Topics covered include the ways that computers have transformed human lives, and its implications for the future. Also discussed are the issue on ethical artificial intelligence and the ways that algorithms affect people's lives.
- Published
- 2022
50. What Will Happen When Your Company’s Algorithms Go Wrong?
- Author
-
YAMPOLSKIY, ROMAN V.
- Subjects
ARTIFICIAL intelligence in business ,SOFTWARE failures ,BEST practices ,STRATEGIC planning ,ALGORITHMS ,ARTIFICIAL intelligence & ethics - Abstract
An AI designed to do X will eventually fail to do X. Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric systems misrecognize people, transcription software fails to capture what is being said; overall, it is harder to find examples of AIs that don't fail. The failures of today's narrow domain AIs are just the tip of the iceberg; once we develop general artificial intelligence capable of cross-domain performance, embarrassment from such failures will be the least of our concerns. That's why we need to put best practices in place now. [ABSTRACT FROM AUTHOR]
- Published
- 2021