283 results for "Ethical AI"
Search Results
152. Modes of Deliberation in Machine Ethics
- Author
-
Gilbert, Thomas
- Subjects
Artificial intelligence, Philosophy, Computer science, AI Safety, Ethical AI, fair machine learning, political economy, political theory, vagueness
- Abstract
This dissertation is about the purpose of artificial intelligence (AI) research. New learning algorithms, scales of computation, and modes of sensory input make it possible to better predict or simulate decision-making than ever before. But this does not tell us whether or how AI systems should be built. In fact, there is much anxiety about how to build AI applications in ways that respect or enact the decision criteria of existing human institutions. But instead of asking how to better predict or protect how we decide things, my research question is: how can AI tools be used to reorganize the choices we make about how we want to live together? Answering this question requires investigating the conditions under which deliberation is possible about the systems being built—their models, their real-world performance, and their effects on human domains. These three modes of deliberation are philosophically outlined in the introduction and named sociotechnical specification, normative cybernetics, and machine politics. The first chapter pursues sociotechnical specification in the context of routing algorithms for autonomous vehicle (AV) fleets. It asks what it would mean to relate this emerging transportation model to the other legacy systems adjacent to the travel domain. It sketches proxies in terms of “known unknown” features of the driving environment, which would need to be monitored and serve as targets for optimization in order for the AV fleet's performance to be considered robustly good. The second chapter pursues a normative cybernetics of AVs in terms of a sustained internal critique of reinforcement learning (RL). This introduces new policy questions, whose answers would correspond to types of feedback between the behavior of AV firms and civil society or state organizations. The third chapter outlines the elements of machine politics in terms of concepts borrowed from contemporary analytic philosophy. Ruth Chang’s notion of parity is mobilized to demonstrate the possibility of domain deliberation at different stages of AI development. This comprises a critique of existing schools of thought, represented here as epistemicism (the notion that the structure of human activities can be passively learned and observed) and ontic incomparabilism (the notion that human activities cannot be organically modeled or developed by means of AI). The three types of feedback that are produced through active developmental inquiry are presented in terms of featurization, optimization, and integration, all of which comprise the structural choices at stake in machine politics.
- Published
- 2021
153. Considerations for a More Ethical Approach to Data in AI: On Data Representation and Infrastructure
- Author
-
Alice Baird and Björn Schuller
- Subjects
artificial intelligence, machine learning, ethical AI, decentralization, selection-bias, Information technology, T58.5-58.64
- Abstract
Data shapes the development of Artificial Intelligence (AI) as we currently know it, and for many years centralized networking infrastructures have dominated both the sourcing and subsequent use of such data. Research suggests that centralized approaches result in poor representation, and as AI is now integrated more into daily life, there is a need for efforts to improve on this. The AI research community has begun to explore managing data infrastructures more democratically, finding that decentralized networking allows for more transparency, which can alleviate core ethical concerns such as selection-bias. With this in mind, we present herein a mini-survey framed around data representation and data infrastructures in AI. We outline four key considerations (auditing, benchmarking, confidence and trust, explainability and interpretability) as they pertain to data-driven AI, and propose that reflection on them, along with improved interdisciplinary discussion, may aid the mitigation of data-based ethical concerns in AI and ultimately improve individual wellbeing when interacting with AI.
- Published
- 2020
- Full Text
- View/download PDF
154. ‘Data dregs’ and its implications for AI ethics: Revelations from the pandemic
- Author
-
Lim, Sun Sun and Bouffanais, Roland
- Published
- 2022
- Full Text
- View/download PDF
155. Quarantining online hate speech: technical and ethical perspectives.
- Author
-
Ullmann, Stefanie and Tomalin, Marcus
- Subjects
HATE speech, AUTOMATIC speech recognition, FREEDOM of expression, MALWARE, SPEECH synthesis, COMPUTER software, INTERNET safety
- Abstract
In this paper we explore quarantining as a more ethical method for delimiting the spread of Hate Speech via online social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received. The offensive posts are only subsequently removed if the complaints are upheld; therefore, they still cause the recipients psychological harm. In addition, this approach has frequently been criticised for delimiting freedom of expression, since it requires the service providers to elaborate and implement censorship regimes. In the last few years, an emerging generation of automatic Hate Speech detection systems has started to offer new strategies for dealing with this particular kind of offensive online material. Anticipating the future efficacy of such systems, the present article advocates an approach to online Hate Speech detection that is analogous to the quarantining of malicious computer software. If a given post is automatically classified as being harmful in a reliable manner, then it can be temporarily quarantined, and the direct recipients can receive an alert, which protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of Hate Speech. Crucially, it provides flexible options for obtaining a more justifiable balance between freedom of expression and appropriate censorship.
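To make the proposed mechanism concrete, here is a minimal sketch of the quarantining decision, assuming a hypothetical upstream classifier that emits a harm score; the threshold, names, and alert wording are illustrative, not from the paper:
```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def moderate(post: Post, harm_score: float, threshold: float = 0.9) -> str:
    """Quarantine rather than delete: hold back a post flagged as harmful
    and warn the recipient, analogous to quarantining suspicious software
    before it can run. harm_score is assumed to come from an upstream
    Hate Speech classifier (not specified here)."""
    if harm_score >= threshold:
        return (f"Quarantined: a message from {post.author} was flagged "
                f"as potentially harmful (score {harm_score:.2f}). View anyway?")
    return post.text  # deliver normally

print(moderate(Post("user42", "hello"), harm_score=0.20))  # delivered
print(moderate(Post("user42", "..."), harm_score=0.95))    # held in quarantine
```
The design point is that the recipient, rather than a central censor, makes the final viewing decision, which is how the framework seeks a more justifiable balance between protection and freedom of expression.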
- Published
- 2020
- Full Text
- View/download PDF
156. Exploring artificial intelligence bias : a comparative study of societal bias patterns in leading AI-powered chatbots.
- Author
-
Udała, Katarzyna Agnieszka
- Abstract
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and each other, both in society and in professional careers. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of artificial intelligence tools exploiting large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in said tools and explore its ethical implications. By reviewing and analysing responses to carefully crafted prompts generated by three different AI chatbot tools, the author intends to determine whether the content generated by these tools indeed exhibits patterns of bias related to various social identities, as well as to compare the extent to which such bias is present across all three tools. This study will contribute to the growing body of literature on AI ethics and inform efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research will shed light on the broader societal implications of AI and the role of technology in shaping our future.
- Published
- 2023
157. Swedish Cultural Heritage in the Age of AI : Exploring Access, Practices, and Sustainability
- Author
-
Gränglid, Olivia and Ström, Marika
- Abstract
This thesis aims to explore and gain an understanding of the current AI landscape within Swedish Cultural Heritage using purposive interviews with five cultural heritage institutions with ongoing AI projects. This study fills a knowledge gap in the practical implementation of AI at Swedish institutions in addition to the sustainable use of technologies for cultural heritage. The overarching discussion further includes related topics of ethical AI and long-term sustainability, framing it from a perspective of Information Practices and a socio-material entanglement. Findings show that AI technologies can play an important part in cultural heritage, with a range of practical applications if certain issues are overcome. Moreover, the utilisation of AI will increase. The study also indicates a need for regulations, digitisation efforts, and increased investments in resources to adopt the technologies into current practices sustainably. The conclusion highlights a need for the cultural heritage sector to converge and find collectively applicable solutions for implementing AI.
- Published
- 2023
158. Self-Reflection on Chain-of-Thought Reasoning in Large Language Models
- Author
-
Praas, Robert
- Abstract
A strong capability of large language models is Chain-of-Thought reasoning. Prompting a model to ‘think step-by-step’ has led to great performance improvements in solving problems such as planning and question answering, and the extended output provides some evidence about the rationale behind an answer or decision. In search of better, more robust, and interpretable language model behavior, this work investigates self-reflection in large language models. Here, self-reflection consists of feedback from large language models on medical question-answering, and the question is whether that feedback can be used to accurately distinguish between correct and incorrect answers. GPT-3.5-Turbo and GPT-4 provide zero-shot feedback scores for Chain-of-Thought reasoning on the MedQA (medical question-answering) dataset. The question-answering is evaluated on traits such as being structured, relevant and consistent. We test whether the feedback scores differ between questions that were answered correctly and those that were answered incorrectly by Chain-of-Thought reasoning. The potential differences in feedback scores are statistically tested with the Mann-Whitney U test. Graphical visualization and logistic regressions are performed to preliminarily determine whether the feedback scores are indicative of whether the Chain-of-Thought reasoning leads to the right answer. The results indicate that, among the reasoning objectives, the feedback models assign higher feedback scores to questions that were answered correctly than to those that were answered incorrectly. Graphical visualization shows potential for reviewing questions with low feedback scores, although logistic regressions that aimed to predict whether or not questions were answered correctly mostly defaulted to the majority class. Nonetheless, there seems to be a possibility for more robust output from self-reflecting language systems.
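As a rough illustration of the statistical comparison described in the abstract (not the authors' code), the Mann-Whitney U test can compare zero-shot feedback scores between correctly and incorrectly answered questions; the scores below are invented:
```python
from scipy.stats import mannwhitneyu

# Invented zero-shot feedback scores a model such as GPT-4 might assign
# to Chain-of-Thought answers, split by whether the answer was correct.
scores_correct = [8, 9, 7, 8, 9, 6, 8]
scores_incorrect = [5, 6, 7, 4, 6, 5, 7]

# Two-sided test: do the two score distributions differ?
stat, p_value = mannwhitneyu(scores_correct, scores_incorrect,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```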
- Published
- 2023
159. Taking Responsible AI from Principle to Practice : A study of challenges when implementing Responsible AI guidelines in an organization and how to overcome them
- Author
-
Hedlund, Matilda and Henriksson, Hanna
- Abstract
The rapid advancement of AI technology emphasizes the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. In light of more complex AI and its capacity to influence society, AI researchers and other prominent individuals are now indicating that AI evolution has to be regulated to a greater extent. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a tremendous number of abstract and high-level recommendations. However, there is an emerging demand to shift the focus toward studying their practical implementation. This study addresses the research question: ‘How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?’. The study utilizes the guidelines produced by the European Commission’s High-Level Expert Group on AI as a reference point, considering their influence on shaping future AI policy and regulation in the EU. The study is conducted in collaboration with the telecommunications company Ericsson (henceforth ‘the case organization’), which possesses a large global workforce and is headquartered in Sweden. Specific focus is given to the department that develops AI internally for other units with the purpose of simplifying operations and processes (henceforth ‘the AI unit’). Through an inductive interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed through a thematic analysis. The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, as well as (4) incons
- Published
- 2023
160. Finding differences in perspectives between designers and engineers to develop trustworthy AI for autonomous cars
- Author
-
Larsson, Karl Rikard and Jönelid, Gustav
- Abstract
In the context of designing and implementing ethical Artificial Intelligence (AI), varying perspectives exist regarding developing trustworthy AI for autonomous cars. This study sheds light on the differences in perspectives and provides recommendations to minimize such divergences. By exploring the diverse viewpoints, we identify key factors contributing to the differences and propose strategies to bridge the gaps. This study goes beyond the trolley problem to visualize the complex challenges of trustworthy and ethical AI. Three pillars of trustworthy AI have been defined: transparency, reliability, and safety. This research contributes to the field of trustworthy AI for autonomous cars, providing practical recommendations to enhance the development of AI systems that prioritize both technological advancement and ethical principles.
- Published
- 2023
161. Reflections on the human role in AI policy formulations: how do national AI strategies view people?
- Author
-
Salo-Pöntinen, Henrikki and Saariluoma, Pertti
- Published
- 2022
- Full Text
- View/download PDF
162. Using Social Signals to Predict Shoplifting: A Transparent Approach to a Sensitive Activity Analysis Problem
- Author
-
Shane Reid, Sonya Coleman, Philip Vance, Dermot Kerr, and Siobhan O’Neill
- Subjects
human behaviour analysis, social signal processing, video processing, bias detection, ethical AI, machine learning, Chemical technology, TP1-1185
- Abstract
Retail shoplifting is one of the most prevalent forms of theft and has accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting using surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on the use of black box deep learning models. While these models have been shown to achieve very high accuracy, this lack of understanding on how decisions are made raises concerns about potential bias in the models. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black box methods is inadmissible in court. There is an urgent need to develop models which can achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is through the use of social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for the problem of shoplifting prediction which has been trained and validated using a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state of the art black box methods.
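The abstract does not name the model, so as a hedged sketch of the transparency argument, consider an interpretable classifier over hand-annotated social-signal features (the feature names and data are invented); unlike a black-box network, its learned weights can be read off directly:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented per-clip social-signal annotations:
# [gaze_scanning, concealment_gesture, staff_proximity]
X = np.array([[0.9, 1.0, 0.1],
              [0.2, 0.0, 0.8],
              [0.8, 1.0, 0.3],
              [0.1, 0.0, 0.9]])
y = np.array([1, 0, 1, 0])  # 1 = shoplifting behaviour annotated

model = LogisticRegression().fit(X, y)
# Each weight states how much an annotated signal contributes to the
# prediction, giving the kind of inspectability the abstract says
# black-box methods lack.
for name, w in zip(["gaze_scanning", "concealment_gesture", "staff_proximity"],
                   model.coef_[0]):
    print(f"{name}: {w:+.2f}")
```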
- Published
- 2021
- Full Text
- View/download PDF
163. Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021
- Author
-
Chaudhry, Muhammad Ali and Kazim, Emre
- Published
- 2022
- Full Text
- View/download PDF
164. Achieving a Data-Driven Risk Assessment Methodology for Ethical AI
- Author
-
Felländer, Anna, Rebane, Jonathan, Larsson, Stefan, Wiggberg, Mattias, and Heintz, Fredrik
- Published
- 2022
- Full Text
- View/download PDF
165. Ethical Regulators and Super-Ethical Systems
- Author
-
Mick Ashby
- Subjects
ethical AI, superintelligence, empathy, sapience, utopia, third-order cybernetics, Systems engineering, TA168, Technology (General), T1-995
- Abstract
This paper combines the good regulator theorem with the law of requisite variety and seven other requisites that are necessary and sufficient for a cybernetic regulator to be effective and ethical. The ethical regulator theorem provides a basis for systematically evaluating and improving the adequacy of existing or proposed designs for systems that make decisions that can have ethical consequences; regardless of whether the regulators are humans, machines, cyberanthropic hybrids, organizations, or government institutions. The theorem is used to define an ethical design process that has potentially far-reaching implications for society. A six-level framework is proposed for classifying cybernetic and superintelligent systems, which highlights the existence of a possibility-space bifurcation in our future timeline. The implementation of “super-ethical” systems is identified as an urgent imperative for humanity to avoid the danger that superintelligent machines might lead to a technological dystopia. It is proposed to define third-order cybernetics as the cybernetics of ethical systems. Concrete actions, a grand challenge, and a vision of a super-ethical society are proposed to help steer the future of the human race and our wonderful planet towards a realistically achievable minimum viable cyberanthropic utopia.
- Published
- 2020
- Full Text
- View/download PDF
166. Essays on Machine Ethics
- Author
-
Cook, Tyler Blake
- Subjects
- Philosophy, Ethics, Artificial Intelligence, machine ethics, machine morality, robot ethics, ethical AI, artificial moral agents
- Abstract
This dissertation concerns some issues in machine ethics, the area of artificial intelligence (AI) research that is concerned with questions regarding the design of ethical machines or ethical AI. It consists of four chapters. Chapter 1 sets the stage for the rest of the dissertation by presenting a succinct overview of the field of machine ethics and summaries of each of the succeeding chapters. It also states how this dissertation relates to the current state of AI research and prevailing attitudes about the riskiness of AI. Chapter 2 argues that a certain kind of ethical AI, which I call end-constrained ethical AI, is preferable to merely safe AI and end-autonomous ethical AI. It raises concerns for both merely safe and end-autonomous ethical AI, and it contends that end-constrained ethical AI can evade such concerns. Chapter 3 provides a qualified defense of top-down design approaches in machine ethics. It offers some advantages and limitations of those approaches, and it discusses the proper domain of application for ethical AI trained via top-down approaches. Chapter 4 argues that the development of sophisticated ethical AI systems that are capable of theorizing about ethics would be risky. This is because such AI could opt to cause significant harm upon reaching certain metanormative conclusions.
- Published
- 2023
167. The Challenge of Ethical Interoperability
- Author
-
Danks, David and Trusilo, Daniel
- Published
- 2022
- Full Text
- View/download PDF
168. Opening the path to ethics in artificial intelligence
- Author
-
Forbes, Kelly
- Published
- 2021
- Full Text
- View/download PDF
169. Norm Compliance for Reinforcement Learning Agents
- Author
-
Neufeld, Emeric Alexander
- Subjects
Ethical AI, Normative Reasoning, Deontic Logic, Reinforcement Learning
- Abstract
With the impending advent of AI technologies that are deeply embedded in daily life -- such as autonomous vehicles, elder care robots, and robot nannies -- comes a natural apprehension over whether they can integrate smoothly with human society. From these concerns arises a question: can we impose norms -- be they ethical, legal, or social -- on these technologies while preserving the effectiveness of their performance? This proves a difficult question to answer in the presence of machine learning technologies, which are notoriously opaque and unpredictable. Reinforcement learning (RL) is a powerful machine learning technique geared toward teaching autonomous agents goal-directed behaviour in stochastic environments through a utility function. RL agents have proven capable of exhibiting complex behaviours on par with or beyond the abilities of expert human agents, and have also been a subject of interest for machine ethicists; it has been conjectured by many that RL might prove capable of delivering a positive answer to the above question. Indeed, there are already many attempts to implement an "ethical agent" with RL. However, these attempts largely ignore the complexities and idiosyncrasies of normative reasoning. Normative reasoning is the purview of the diverse field of Deontic Logic -- the logic of obligations and related notions -- which has yet to receive a meaningful place in the literature on "ethical" RL agents. In the following work, we will explore how RL can fall short of the goal of producing an ethical (or rather, normatively compliant) agent; this includes even more powerful developments like safe RL under linear temporal logic (LTL) constraints, due to the limits of LTL as a logic for normative reasoning. Even so, we provide a method for synthesizing LTL specifications that reflect the constraints deducible from certain normative systems. We will then present an alternative framework for imposing normative constraints from the perspective of altering the internal processes of an RL agent to ensure behaviour that complies (as much as possible) with a normative system. To actuate this process, we propose a module called the Normative Supervisor, which facilitates the translation of data from the agent and a normative system into a defeasible deontic logic, leveraging a theorem prover to provide recommendations and judgements to the agent. This allows us to present Online Compliance Checking (OCC) and Norm-Guided Reinforcement Learning (NGRL) for eliciting normatively compliant behaviour from an RL agent. OCC involves, in each state, filtering out from the agent's arsenal actions that do not comply with a normative system in that state, preventing the agent from taking actions that violate the normative system. When no compliant actions exist, a "lesser evil" solution is presented. In NGRL, the agent is trained with two objectives; its original task and a normative objective borne out in a utility function that punishes the agent when it transgresses the normative system. We show through a thorough series of experiments on RL agents playing simple computer games -- constrained by the wide variety of normative systems that we present -- that these techniques are effective, albeit flawed, and best utilized in tandem.
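A toy sketch of the Online Compliance Checking (OCC) idea described above: the compliance test here is a simple stand-in for the Normative Supervisor's defeasible-deontic-logic theorem prover, and all names and the "lesser evil" fallback are illustrative:
```python
import random

def compliant(state: dict, action: str) -> bool:
    """Stand-in for the Normative Supervisor's theorem prover, which
    would judge `action` against a normative system in `state`."""
    return action not in state.get("forbidden_actions", set())

def choose_action(agent_policy, state: dict, actions: list) -> str:
    """Online Compliance Checking: filter out non-compliant actions
    before the RL agent picks one."""
    allowed = [a for a in actions if compliant(state, a)]
    if not allowed:
        # "Lesser evil": no compliant action exists, so fall back to
        # some least-bad choice (arbitrary here, reasoned in the thesis).
        return random.choice(actions)
    return agent_policy(state, allowed)

# Usage with a trivial greedy policy over Q-values:
q = {"wait": 0.1, "advance": 0.9}
policy = lambda s, acts: max(acts, key=q.get)
state = {"forbidden_actions": {"advance"}}
print(choose_action(policy, state, ["wait", "advance"]))  # -> "wait"
```
NGRL, by contrast, would leave the action set intact and instead subtract a penalty from the reward whenever the chosen action violates the normative system.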
- Published
- 2023
- Full Text
- View/download PDF
170. How do decision support systems nudge?
- Author
-
Pedrazzoli, Francesco, D'Asaro, Fabio Aurelio, and Badino, Massimiliano
- Subjects
nudging, decision support systems, ethical AI
- Published
- 2023
171. Empirical Analysis of Ethical Principles Applied to Different AI Use Cases
- Author
-
Alfonso José López Rivero, M. Encarnación Beato, César Muñoz Martínez, and Pedro Gonzalo Cortiñas Vázquez
- Subjects
Statistics and Probability, Artificial Intelligence, Computer Networks and Communications, Signal Processing, digital transformation, IJIMAI, trust, Computer Vision and Pattern Recognition, ethical AI, artificial intelligence, ethics, Computer Science Applications
- Abstract
In this paper, we present an empirical study on the perception of the groups of ethical challenges of artificial intelligence identified in the classification made by the European Union (EU). The study seeks to identify the ethical principles that cause the greatest concern among the population, analyzing these perceptions among different actors. The main analysis compares Information and Communications Technology (ICT) professionals with the rest of the population. Alongside this, we conducted a gender study; in addition, we compared university students who, as future professionals, may work in Artificial Intelligence with other university students. We believe that this work is a starting point for an informed debate in the scientific community and industry on the ethical implications of artificial intelligence, based on the EU's classification of ethical principles, which can be extrapolated to any analysis of how data are used in algorithms based on Artificial Intelligence.
- Published
- 2022
172. From Nanobots to Neural Networks: Multifaceted Revolution of Artificial Intelligence in Surgical Medicine and Therapeutics.
- Author
-
Grezenko H, Alsadoun L, Farrukh A, Rehman A, Shehryar A, Nathaniel E, Affaf M, I Kh Almadhoun MK, and Quinn M
- Abstract
This comprehensive exploration unveils the transformative potential of Artificial Intelligence (AI) within medicine and surgery. Through a meticulous journey, we examine AI's current applications in healthcare, including medical diagnostics, surgical procedures, and advanced therapeutics. Delving into the theoretical foundations of AI, encompassing machine learning, deep learning, and Natural Language Processing (NLP), we illuminate the critical underpinnings supporting AI's integration into healthcare. Highlighting the symbiotic relationship between humans and machines, we emphasize how AI augments clinical capabilities without supplanting the irreplaceable human touch in healthcare delivery. A thoughtful analysis of the economic, societal, and ethical implications of AI's integration into healthcare underscores our commitment to addressing critical issues, such as data privacy, algorithmic transparency, and equitable access to AI-driven healthcare services. As we contemplate the future landscape, we project an exciting vista where more sophisticated AI algorithms and real-time surgical visualizations redefine the boundaries of medical achievement. While acknowledging the limitations of the present research, we shed light on AI's pivotal role in enhancing patient engagement, education, and data security within the burgeoning realm of AI-driven healthcare.
- Published
- 2023
- Full Text
- View/download PDF
173. Letter to the Editor: "How Can Biomedical Engineers Help Empower Individuals With Intellectual Disabilities? The Potential Benefits and Challenges of AI Technologies to Support Inclusivity and Transform Lives".
- Author
-
Di Nuovo A
- Subjects
- Humans, Artificial Intelligence, Biomedical Engineering, Communication, Technology, Intellectual Disability diagnosis
- Abstract
The rapid advancement of Artificial Intelligence (AI) is transforming healthcare and daily life, offering great opportunities but also posing ethical and societal challenges. To ensure AI benefits all individuals, including those with intellectual disabilities, the focus should be on technology that can adapt to the unique needs of the user. Biomedical engineers have an interdisciplinary background that helps them to lead multidisciplinary teams in the development of human-centered AI solutions. These solutions can personalize learning, enhance communication, and improve accessibility for individuals with intellectual disabilities. Furthermore, AI can aid in healthcare research, diagnostics, and therapy. The ethical use of AI in healthcare and the collaboration of AI with human expertise must be emphasized. Public funding for inclusive research is encouraged, promoting equity and economic growth while empowering those with intellectual disabilities in society.
- Published
- 2023
- Full Text
- View/download PDF
174. We need to talk about AI: the case for citizens’ think-ins for citizen-researcher dialogue and deliberation
- Author
-
Clarke, Emma L., Pandit, Harshvardhan J., and Wall, Patrick J.
- Subjects
Artificial intelligence, ethical AI, citizen engagement
- Abstract
Artificial Intelligence (AI) has become one of the most important and ubiquitous technologies across the world. On a daily basis, we interact with powerful AI-based technologies through our use of mobile phones, voice assistants, and even our cars. Despite the widespread adoption of AI, questions and concerns exist around the ethical use of these technologies and their potential to reconfigure our personal and working lives. The Science Foundation Ireland ADAPT Research Centre has developed the Citizens’ Think-Ins model of citizen-researcher dialogue. ADAPT’s Think-In series to date has focused specifically on AI and the role it increasingly plays in our lives, and its impact on culture and society. This white paper presents an analysis of the various discussions that took place within the Citizens’ Think-Ins series. The discussions are presented with specific reference to citizens and civic society, academia, industry, and policymakers, and provide concrete recommendations to each stakeholder group, to draw parallels between their requirements, and to encourage the periodic use of Citizens’ Think-Ins as part of a larger deliberative and participatory approach comprising all stakeholders.
- Published
- 2022
175. Beyond the promise: implementing ethical AI
- Author
-
Eitel-Porter, Ray
- Published
- 2021
- Full Text
- View/download PDF
176. Representation, justification, and explanation in a value-driven agent: an argumentation-based approach
- Author
-
Liao, Beishui, Anderson, Michael, and Anderson, Susan Leigh
- Published
- 2021
- Full Text
- View/download PDF
177. Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021
- Author
-
Emre Kazim and Muhammad Ali Chaudhry
- Subjects
2019-20 coronavirus outbreak, Learning science, Fairness, Coronavirus disease 2019 (COVID-19), Energy Engineering and Power Technology, 02 engineering and technology, Management Science and Operations Research, Education, Artificial Intelligence in Education (AIEd), Intelligent Tutoring Systems (ITS), Artificial Intelligence, Machine learning, 0202 electrical engineering, electronic engineering, information engineering, Sociology, Dimension (data warehouse), Original Research, Potential impact, Mechanical Engineering, 05 social sciences, Digital transformation, 050301 education, Workload, Learning sciences, Audience measurement, 020201 artificial intelligence & image processing, Ethical AI, 0503 education
- Abstract
In the past few decades, technology has completely transformed the world around us. Indeed, experts believe that the next big digital transformation in how we live, communicate, work, trade and learn will be driven by Artificial Intelligence (AI) [83]. This paper presents a high-level industrial and academic overview of AI in Education (AIEd). It presents the focus of the latest research in AIEd: reducing teachers’ workload, contextualized learning for students, revolutionizing assessments, and developments in intelligent tutoring systems. It also discusses the ethical dimension of AIEd and the potential impact of the Covid-19 pandemic on the future of AIEd’s research and practice. The intended readership of this article is policy makers and institutional leaders who are looking for an introductory state of play in AIEd.
- Published
- 2021
178. Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter?
- Author
-
Manjul Gupta, Carlos M. Parra, and Denis Dennehy
- Subjects
Artificial intelligence, Computer Networks and Communications, Culture, Racial bias, 02 engineering and technology, Recommender system, Affect (psychology), Article, Theoretical Computer Science, 020204 information systems, 0502 economics and business, Realm, Recommender systems, 0202 electrical engineering, electronic engineering, information engineering, Gender bias, Cultural values, Uncertainty avoidance, Responsible AI, 05 social sciences, Collectivism, Algorithmic bias, Masculinity, Ethical AI, Psychology, Social psychology, 050203 business & management, Software, Information Systems
- Abstract
Recommender systems, one realm of AI, have attracted significant research attention due to concerns about their devastating effects on society’s most vulnerable and marginalised communities. Both the media and the academic literature provide compelling evidence that AI-based recommendations help to perpetuate and exacerbate racial and gender biases. Yet, there is limited knowledge about the extent to which individuals might question AI-based recommendations when they are perceived as biased. To address this gap in knowledge, we investigate the effects of espoused national cultural values on AI questionability by examining how individuals might question AI-based recommendations due to perceived racial or gender bias. Data collected from 387 survey respondents in the United States indicate that individuals with espoused national cultural values associated with collectivism, masculinity and uncertainty avoidance are more likely to question biased AI-based recommendations. This study advances understanding of how cultural values affect AI questionability due to perceived bias, and it contributes to the current academic discourse about the need to hold AI accountable.
- Published
- 2021
179. Artificial Intelligence in Banking: Advanced Risk Management Techniques and Practical Applications for Enhanced Financial Security and Operational Efficiency
- Author
-
Abbasov, Ramin
- Abstract
The integration of artificial intelligence (AI) into the banking sector represents a paradigm shift in risk management, financial security, and operational efficiency. This research paper delves into the advanced AI-driven techniques employed in risk management within banking, emphasizing their transformative potential. AI's application in real-time fraud detection, credit scoring, market risk analysis, and regulatory compliance is examined in detail, showcasing how these technologies enhance financial security and streamline operations. Real-time fraud detection leverages machine learning algorithms to identify anomalous transactions, reducing the time between fraud detection and response, thus mitigating potential losses. Credit scoring models, enhanced by AI, utilize vast datasets and sophisticated algorithms to assess creditworthiness more accurately, providing banks with reliable risk assessments and reducing default rates. Market risk analysis is another area where AI exhibits significant potential. AI models can analyze vast amounts of financial data, detect patterns, and predict market trends with higher precision than traditional methods. This capability allows banks to make informed investment decisions and manage market risks effectively. Additionally, AI-driven tools for regulatory compliance ensure that banks adhere to complex regulations, automating compliance processes, and reducing the risk of non-compliance. The practical implementation of AI in banking systems is not without challenges. Integrating AI into existing infrastructures requires substantial investment in technology and personnel training. Moreover, the adoption of AI raises concerns regarding data privacy and security, necessitating robust cybersecurity measures. This paper also explores the ethical considerations of AI in banking, particularly the transparency and fairness of AI algorithms in decision-making processes. Bias in AI models can lead to discriminatory practices, making it im
- Published
- 2022
180. Cognitive architectures for artificial intelligence ethics
- Author
-
Bickley, Steve J. and Torgler, Benno
- Abstract
As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are met. Future AI will face many dilemmas and ethical issues unforeseen by their creators, beyond those commonly discussed (e.g., trolley problems and variants of it), to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and their implications for human society at large, when and if our AI make the “wrong” choice we need to understand how they got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake (e.g., health, well-being, finance, law) or when major individual or household decisions are taken. Doing so requires opening up the “black box” of AI, especially as AI systems act, interact, and adapt in a human world and interact with other AI in this world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI get to the solutions they do, and we should seek to do this on a deeper level in terms of the machine-equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society (and avoid the negatives), we need to understand AI more fully in the first place, and we expect this will simultaneously contribute towards greater understanding of their human counterparts.
- Published
- 2022
181. Towards Understanding and Practices of Ethical Artificial Intelligence
- Subjects
Machine Learning, Data Privacy, Ethical AI, Algorithmic Fairness, Natural Language Processing
- Abstract
The successes of machine learning (ML) and artificial intelligence (AI) models encourage their widespread deployments in high-stakes domains -- from public transportation to social decision-making such as autonomous driving, criminal justice, and company hiring. Such widespread deployments call for assessing and addressing the ethical concerns of AI systems. The thesis aims to develop practical techniques and theoretical understanding for building ethical AI systems. We divide the thesis into two parts. The first part of the thesis focuses on automatic information extraction using natural language processing (NLP) from policy documents. Policy documents are natural language documents about how different stakeholders (e.g., users and ML services providers) in internet services agree on how the services providers commit to the ethical usage of users' data. Specifically, we develop NLP techniques and benchmarks for privacy policies, a type of policy document describing the practices of using, sharing, and protecting users' data. Such developed NLP techniques could be extended to other natural language law documents describing ethical AI principles and help improve mutual trust among different parties. The second part of the thesis focuses on the theoretical understanding and development of algorithmic interventions for ethical artificial intelligence. In particular, we study the fairness problems for various machine learning tasks, such as classification, regression, and sequential decision-making: (1) we provide bias mitigation techniques for text classification using contrastive representation learning; (2) we provide the theoretical understanding and mitigation techniques for accuracy disparity problem in regression; (3) we propose a fairness notion that requires long-term equality on expected utility for different demographic groups for sequential decision-making and develop methods to achieve the proposed fairness notion. In addition, we also study adversarial representation learning, a technique that has been widely used for algorithmic fairness, and its implications for information obfuscation. We hope the research presented in the thesis will facilitate the practices of building ethical machine learning systems and help increase the understanding and trust among stakeholders towards the machine learning systems.
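As one concrete reading of the "accuracy disparity problem in regression" mentioned above (a hedged sketch, not the thesis's formulation), disparity can be measured as the gap in mean squared error between demographic groups:
```python
import numpy as np

def accuracy_disparity(y_true, y_pred, groups):
    """Gap in mean squared error between the worst- and best-served
    demographic groups; one simple reading of accuracy disparity."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    mse = {g: np.mean((y_true[groups == g] - y_pred[groups == g]) ** 2)
           for g in np.unique(groups)}
    return max(mse.values()) - min(mse.values()), mse

# Toy data: predictions are systematically worse for group "b".
gap, per_group = accuracy_disparity(
    y_true=[1.0, 2.0, 3.0, 4.0],
    y_pred=[1.1, 2.1, 2.0, 5.5],
    groups=["a", "a", "b", "b"])
print(per_group, gap)  # group "b" bears most of the error
```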
- Published
- 2022
- Full Text
- View/download PDF
182. The Case for Ethical AI in the Military
- Author
-
Galliott, Jai, Scholz, Jason, Dubber, Markus D., book editor, Pasquale, Frank, book editor, and Das, Sunit, book editor
- Published
- 2020
- Full Text
- View/download PDF
183. Policy Brief - UnLocking the potential of digital disruption for responsible, sustainable and trusted urban decisions
- Author
-
Susie McAleert, Pavel Kogut, Sara Mancini, and Noemi Luna Carmeno
- Subjects
AI, disruptive technology, ethical AI
- Abstract
This policy brief aims to guide public organisations in the trustworthy adoption of disruptive technologies (DTs), with the goal of increasing transparency towards both internal and external stakeholders about how DTs and data are used, what fundamental decisional processes are followed, and how potential risks are identified and addressed.
- Published
- 2022
- Full Text
- View/download PDF
184. Sustainability Budgets: A Practical Management and Governance Method for Achieving Goal 13 of the Sustainable Development Goals for AI Development
- Author
-
Rebecca Raper, Jona Boeddinghaus, Mark Coeckelbergh, Wolfgang Gross, Paolo Campigotto, and Craig N. Lincoln
- Subjects
Renewable Energy, Sustainability and the Environment, AI, artificial intelligence, sustainability, AI governance, ethics, ethical AI, differential privacy, Geography, Planning and Development, Management, Monitoring, Policy and Law
- Abstract
Climate change is a global priority. In 2015, the United Nations (UN) outlined its Sustainable Development Goals (SDGs), which stated that taking urgent action to tackle climate change and its impacts was a key priority. The 2021 World Climate Summit finished with calls for governments to take tougher measures towards reducing their carbon footprints. However, it is not obvious what practical implementations governments can make to achieve this goal. One challenge towards achieving a reduced carbon footprint is gaining awareness of how energy-exhaustive a system or mechanism is. Artificial Intelligence (AI) is increasingly being used to solve global problems, and its use could potentially solve challenges relating to climate change, but the creation of AI systems often requires vast amounts of up-front computing power and can thereby be a significant contributor to greenhouse gas emissions. If governments are to take the SDGs and calls to reduce carbon footprints seriously, they need to find a management and governance mechanism to (i) audit how much their AI system ‘costs’ in terms of energy consumption and (ii) incentivise individuals to act based upon the auditing outcomes, in order to avoid or justify politically controversial restrictions that may be seen as bypassing the creativity of developers. The idea is thus to find a practical solution, implementable in software design, that incentivises and rewards energy-efficient development while respecting the autonomy of developers and designers to come up with smart solutions. This paper proposes such a sustainability management mechanism by introducing the notion of ‘Sustainability Budgets’—akin to Privacy Budgets used in Differential Privacy—and by using these to introduce a ‘Game’ where participants are rewarded for designing systems that are ‘energy efficient’. Participants in this game are, among others, the Machine Learning developers themselves, which is a new focus for this problem that this text introduces. The paper later expands this notion to sustainability management in general and outlines how it might fit into a wider governance framework.
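A minimal sketch of how a 'Sustainability Budget' might be enforced in a training loop, by analogy with the privacy budgets the authors invoke; the class, the per-epoch energy estimate, and the limit are all invented for illustration:
```python
class SustainabilityBudget:
    """Track estimated energy use against a fixed training budget."""

    def __init__(self, budget_kwh: float):
        self.budget_kwh = budget_kwh
        self.spent_kwh = 0.0

    def charge(self, kwh: float) -> None:
        self.spent_kwh += kwh
        if self.spent_kwh > self.budget_kwh:
            raise RuntimeError(
                f"Sustainability budget exhausted: "
                f"{self.spent_kwh:.1f}/{self.budget_kwh:.1f} kWh")

# Hypothetical training loop charging a per-epoch energy estimate.
budget = SustainabilityBudget(budget_kwh=10.0)
try:
    for epoch in range(100):
        # train_one_epoch() would run here; the estimate might come
        # from a power meter or an energy-tracking tool.
        budget.charge(kwh=0.5)
except RuntimeError as stop:
    print(stop)  # developers are rewarded for fitting within the budget
```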
- Published
- 2022
185. Formalising Trade-Offs Beyond Algorithmic Fairness: Lessons from Ethical Philosophy and Welfare Economics
- Author
-
Lee, Michelle Seng Ah, Floridi, Luciano, and Singh, Jatinder
- Subjects
machine learning, KEI, algorithmic ethics, fairness, ethical AI, key ethics indicators, algorithmic fairness
- Abstract
There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented within narrow and targeted fairness toolkits for algorithm assessments that are difficult to integrate into an algorithm’s broader ethical assessment. In this paper, we derive lessons from ethical philosophy and welfare economics as they relate to the contextual factors relevant for fairness. In particular we highlight the debate around the acceptability of particular inequalities and the inextricable links between fairness, welfare and autonomy. We propose Key Ethics Indicators (KEIs) as a way towards providing a more holistic understanding of whether or not an algorithm is aligned to the decision-maker’s ethical values.
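To see why mathematical fairness definitions can conflict, which motivates the paper's turn to contextual Key Ethics Indicators, here is an invented toy example in which demographic parity holds while an equal-opportunity (true-positive-rate) gap remains:
```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def tpr_gap(y_true, y_pred, group):
    """Equal-opportunity gap: difference in true-positive rates."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

y_true = [1, 0, 1, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1]
group  = [0, 0, 0, 1, 1, 1]
print(demographic_parity_diff(y_pred, group))  # -> 0.0: parity satisfied
print(tpr_gap(y_true, y_pred, group))          # -> 0.5: opportunity gap remains
```
Satisfying one metric while violating the other on the same predictions is exactly the kind of reductionist conflict the paper argues cannot be resolved without context.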
- Published
- 2022
186. Ethical AI for Automated Bus Lane Enforcement
- Author
-
John D. Nelson, Caitriona Lannon, and Martin Cunneen
- Subjects
Risk analysis, Computer science, As is, Geography, Planning and Development, camera surveillance, TJ807-830, Management, Monitoring, Policy and Law, TD194-195, privacy, Renewable energy sources, bus lane enforcement, Use case, GE1-350, Enforcement, Downstream (petroleum industry), ethical risk, Environmental effects of industries and plants, Renewable Energy, Sustainability and the Environment, Frame (networking), ethical AI, Environmental sciences, Risk analysis (engineering), Bus lane, Social responsibility
- Abstract
There is an explosion of camera surveillance in our cities today. As a result, the risks of privacy infringement and erosion are growing, as is the need for ethical solutions to minimise the risks. This research aims to frame the challenges and ethics of using data surveillance technologies in a qualitative social context. A use case is presented which examines the ethical data required to automatically enforce bus lanes using camera surveillance and proposes ways of minimising the risks of privacy infringement and erosion in that scenario. What we seek to illustrate is that there is a challenge in using technologies in positive, socially responsible ways. To do that, we have to better understand the use case, and not just its present risks but also the downstream risks and the downstream ethical questions. There is a gap in the literature in this respect, as well as a gap in how researchers understand and respond to it. A literature review and a detailed risk analysis of automated bus lane enforcement are conducted. Based on this, an ethical design framework is proposed and applied to the use case. Several potential solutions are created and described. The final chosen solution may also be broadly applicable to other use cases. We show how it is possible to provide an ethical AI solution for detecting infringements that incorporates privacy-by-design principles, while being fair to potential transgressors. By introducing positive, pragmatic and adaptable methods to support and uphold privacy, we support access to innovation that can help us mitigate current emerging risks.
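A hedged sketch of one privacy-by-design pattern consistent with the approach described (the paper's actual solution may differ): nothing is retained about non-infringing vehicles, and identifiers are tokenised for candidate infringements pending review:
```python
import hashlib
from typing import Optional

def process_frame(frame_id: str, in_bus_lane: bool,
                  plate: Optional[str]) -> Optional[dict]:
    """Keep no record of vehicles that are not infringing; tokenise
    the plate of a candidate infringement for later human review."""
    if not in_bus_lane or plate is None:
        return None  # discarded immediately; no data retained
    # Hash instead of storing the raw plate (a real deployment would
    # also salt the hash and control access to the lookup step).
    token = hashlib.sha256(plate.encode()).hexdigest()
    return {"frame": frame_id, "plate_token": token}

print(process_frame("f001", in_bus_lane=False, plate="ABC123"))  # None
print(process_frame("f002", in_bus_lane=True, plate="ABC123"))   # token record
```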
- Published
- 2021
- Full Text
- View/download PDF
187. Using Social Signals to Predict Shoplifting: A Transparent Approach to a Sensitive Activity Analysis Problem
- Author
-
Sonya Coleman, Siobhan O'Neill, Philip Vance, Shane Reid, and Dermot Kerr
- Subjects
Computer science, Theft, social signal processing, TP1-1185, Machine learning, Biochemistry, video processing, Article, Analytical Chemistry, Task (project management), Electrical and Electronic Engineering, Instrumentation, Black box (phreaking), Signal processing, Deep learning, Chemical technology, Video processing, Transparency (human–computer interaction), ethical AI, Atomic and Molecular Physics, and Optics, machine learning, Bias detection, State (computer science), Artificial intelligence, human behaviour analysis, bias detection
- Abstract
Retail shoplifting is one of the most prevalent forms of theft and has accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting using surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on the use of black box deep learning models. While these models have been shown to achieve very high accuracy, this lack of understanding on how decisions are made raises concerns about potential bias in the models. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black box methods is inadmissible in court. There is an urgent need to develop models which can achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is through the use of social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for the problem of shoplifting prediction which has been trained and validated using a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state of the art black box methods.
- Published
- 2021
188. Governance of Ethical and Trustworthy AI Systems: Research Gaps in the ECCOLA Method
- Author
-
Ville Vakkuri, Mamia Agbese, Heidi Vainio-Pekka, Hannakaisa Isomäki, Erika Halme, Hanna-Kaisa Alanen, Marianna Jantunen, Jani Antikainen, Rebekah Rousi, Kai-Kristian Kemell, Yue, Tao, and Mirakhorli, Mehdi
- Subjects
FOS: Computer and information sciences, systems design, Knowledge management, AI governance, artificial intelligence, Data governance, Computer Science - Computers and Society, AI, Computers and Society (cs.CY), Health care, Information governance, Ethics, Corporate governance, ethicality, ECCOLA, ML, Trustworthiness, trust, Ethical concerns, Ethical AI, ethics, AI systems
- Abstract
Advances in machine learning (ML) technologies have greatly improved Artificial Intelligence (AI) systems. As a result, AI systems have become ubiquitous, with their application prevalent in virtually all sectors. However, AI systems have prompted ethical concerns, especially as their usage crosses boundaries in sensitive areas such as healthcare, transportation, and security. As a result, users are calling for better AI governance practices in ethical AI systems. Therefore, AI development methods are encouraged to foster these practices. This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems to determine if it enables AI governance in development processes through ethical practices. The results demonstrate that while ECCOLA fully facilitates AI governance in corporate governance practices in all its processes, some of its practices do not fully foster data governance and information governance practices. This indicates that the method can be further improved.
- Published
- 2021
189. Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations
- Author
-
van der Waa, J.S., Verdult, Sabine, van den Bosch, Karel, van Diggelen, Jurriaan, Haije, Tjalling, van der Stigchel, Birgit, and Cocu, Ioana
- Abstract
With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents do not have a legal position, humans should be held accountable if actions do not comply, implying humans need to exercise control. This is often labeled as Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem, defining the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent’s part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study. The first goal was to assess the ecological relevance of the simulation; the second, to explore the control that the human has over the agent to warrant morally compliant behavior in each proposed team design; and the third, to evaluate the role of agent explanations in the human’s understanding of the agent’s reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team’s moral compliance when consequences were quickly noticeable. When instead the consequences emerged much later, the experts experienced less control and felt less responsible. Possibly due to the time pressure implemented in the task, or to over-trust in the agent, the experts did not use explanations much during the task; when asked afterwards, they however considered these to be useful. It is concluded that a team des
- Published
- 2021
- Full Text
- View/download PDF
190. Green Artificial Intelligence: Towards an Efficient, Sustainable and Equitable Technology for Smart Cities and Futures
- Author
-
Yigitcanlar, Tan, Mehmood, Rashid, and Corchado, Juan
- Abstract
Smart cities and artificial intelligence (AI) are among the most popular discourses in urban policy circles. Most attempts at using AI to improve efficiencies in cities have nevertheless either struggled or failed to accomplish the smart city transformation. This is mainly due to short-sighted, technologically determined and reductionist AI approaches being applied to complex urbanization problems. Besides this, as smart cities are underpinned by our ability to engage with our environments, analyze them, and make efficient, sustainable and equitable decisions, the need for a green AI approach is intensified. This perspective paper, reflecting authors’ opinions and interpretations, concentrates on the “green AI” concept as an enabler of the smart city transformation, as it offers the opportunity to move away from purely technocentric efficiency solutions towards efficient, sustainable and equitable solutions capable of realizing the desired urban futures. The aim of this perspective paper is two-fold: first, to highlight the fundamental shortfalls in mainstream AI system conceptualization and practice, and second, to advocate the need for a consolidated AI approach (i.e., green AI) to further support smart city transformation. The methodological approach includes a thorough appraisal of the current AI and smart city literatures, practices, developments, trends and applications. The paper informs authorities and planners on the importance of the adoption and deployment of AI systems that address efficiency, sustainability and equity issues in cities.
- Published
- 2021
191. Green Artificial Intelligence: Towards an Efficient, Sustainable and Equitable Technology for Smart Cities and Futures
- Author
-
Tan Yigitcanlar, Juan M. Corchado, and Rashid Mehmood
- Subjects
green AI ,Geography, Planning and Development ,responsible AI ,TJ807-830 ,Management, Monitoring, Policy and Law ,artificial intelligence (AI) ,TD194-195 ,GeneralLiterature_MISCELLANEOUS ,Renewable energy sources ,explainable AI ,Urbanization ,Smart city ,Mainstream ,GE1-350 ,Sustainable development ,Conceptualization ,Environmental effects of industries and plants ,Renewable Energy, Sustainability and the Environment ,business.industry ,Equity (finance) ,Building and Construction ,ethical AI ,Environmental sciences ,Enabling ,Sustainability ,sustainable AI ,Artificial intelligence ,business - Abstract
Smart cities and artificial intelligence (AI) are among the most popular discourses in urban policy circles. Most attempts at using AI to improve efficiencies in cities have nevertheless either struggled or failed to accomplish the smart city transformation. This is mainly due to short-sighted, technologically determined and reductionist AI approaches being applied to complex urbanization problems. Besides this, as smart cities are underpinned by our ability to engage with our environments, analyze them, and make efficient, sustainable and equitable decisions, the need for a green AI approach is intensified. This perspective paper, reflecting authors’ opinions and interpretations, concentrates on the “green AI” concept as an enabler of the smart city transformation, as it offers the opportunity to move away from purely technocentric efficiency solutions towards efficient, sustainable and equitable solutions capable of realizing the desired urban futures. The aim of this perspective paper is two-fold: first, to highlight the fundamental shortfalls in mainstream AI system conceptualization and practice, and second, to advocate the need for a consolidated AI approach—i.e., green AI—to further support smart city transformation. The methodological approach includes a thorough appraisal of the current AI and smart city literatures, practices, developments, trends and applications. The paper informs authorities and planners on the importance of the adoption and deployment of AI systems that address efficiency, sustainability and equity issues in cities.
- Published
- 2021
192. The achievement of algorithmic accountability through the incorporation of machine learning monitoring methodology
- Author
-
Peeters, T.J.
- Subjects
Algorithmic accountability ,machine learning ,machine learning monitoring ,ethical AI - Abstract
As algorithmic decision-making becomes pervasive, it is increasingly important that these algorithms can be held accountable. Opacity as to how algorithmic decisions came to be seems to be the norm and, while principles to which algorithms should adhere have been formulated, these lack proven methods that translate them into practice. Drawing on theories of design science, this research aims to fill that gap through the design of an artefact in the form of a checklist of machine learning monitoring methods that can be used to incorporate algorithmic accountability goals into decision-support systems. A qualitative research approach was taken where, after identifying algorithmic accountability goals from the literature, experts in the field of data science were interviewed as to which machine learning monitoring methods could aid in the realisation of these goals. Findings from this stage were later validated using a focus group. Results indicate that the checklist, if embedded in an organisation in a similar vein to security or architectural principles, can aid professionals in incorporating algorithmic accountability goals into their decision-support systems.
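As a hedged illustration of what such a checklist might look like when embedded in code, the sketch below maps two invented accountability goals to simple monitoring checks; the goals, methods, and threshold are our own assumptions, not the artefact designed in this thesis.

    # Hypothetical machine learning monitoring checklist mapping accountability
    # goals to checks; items and the drift threshold are illustrative only.
    from typing import Callable
    import statistics

    def population_stability(reference: list[float], live: list[float]) -> float:
        # Crude drift signal: absolute shift in mean prediction score.
        return abs(statistics.mean(reference) - statistics.mean(live))

    checklist: dict[str, Callable[..., bool]] = {
        "predictions are logged for audit": lambda log: len(log) > 0,
        "input drift is below threshold": lambda ref, live: population_stability(ref, live) < 0.1,
    }

    audit_log = [{"input": [0.2, 0.7], "prediction": 1}]
    reference_scores, live_scores = [0.48, 0.52, 0.50], [0.47, 0.55, 0.51]

    print(checklist["predictions are logged for audit"](audit_log))                     # True
    print(checklist["input drift is below threshold"](reference_scores, live_scores))   # True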
- Published
- 2021
193. ReSPEcT: privacy respecting thermal-based specific person recognition
- Author
-
Ngai Seng Chan, Giovanni Pau, Su-Kit Tang, Rita Tse, Ka Ian Chan, and Xudong Jiang
- Subjects
Machine Learning ,Thermal Video Analytic ,Human–computer interaction ,Computer science ,Ethical AI ,Person recognition ,Privacy Preserving Computing - Abstract
Video analytic techniques have been used to extract high-level information from video streams. The technique leverages advances in machine learning to summarize complex image data into simple alert signals that attract the attention of human operators. For example, in an underground station, video analytics can help the operator focus on an event from a specific camera rather than leaving this only to the human eye. A concern with such techniques is privacy, as they expose people's identities and enable profiling of personal habits and orientations. This work introduces ReSPEcT (Privacy Respecting theRmal basEd Specific Person rECogniTion), a privacy-preserving video analytic system based on thermal video streams. ReSPEcT is able to identify a specific human in thermal video streams from low-cost, low-resolution cameras. The system leverages recent advances in machine learning (CNNs) and a plethora of pre-processing mechanisms, such as automatic image labeling, image segmentation, and image augmentation, to reduce the stream's background noise, improve resilience, strengthen human-body classification, and finally enable identification of a specific human target. ReSPEcT's automatic labeling tool significantly reduces labeling time by using a model that can be retrained through an interactive web application. The experimental evaluation shows that overall ReSPEcT achieves 96.83% accuracy in identifying a specific person. Furthermore, it is important to note that while ReSPEcT can identify a specific human, the tool is not aware of the person's real identity, as it operates only on thermal images. ReSPEcT paves the way for using video analytics in a variety of privacy-protected scenarios, such as confidential meetings, sensitive spaces, or even public toilets.
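The abstract does not disclose the network architecture; purely as an assumption-laden sketch of the general approach (a small CNN classifying low-resolution, single-channel thermal frames), one might write, in PyTorch:

    # NOT the ReSPEcT architecture, whose details the abstract does not specify:
    # a toy CNN for single-channel 32x32 thermal frames, for illustration only.
    import torch
    import torch.nn as nn

    class ThermalPersonNet(nn.Module):
        def __init__(self, num_identities: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_identities)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)              # (N, 32, 8, 8) for 32x32 input
            return self.classifier(x.flatten(1))

    model = ThermalPersonNet(num_identities=2)  # e.g. "target person" vs "other"
    frame = torch.randn(1, 1, 32, 32)           # one fake thermal frame
    print(model(frame).shape)                   # torch.Size([1, 2])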
- Published
- 2021
194. Ethical data curation for AI: An approach based on feminist epistemology and critical theories of race
- Author
-
Barry O'Sullivan, Susan Leavy, and Eugenia Siapera
- Subjects
Virtue ethics ,Data curation ,Guiding Principles ,Computer science ,Feminist theory ,Critical theories of race ,Social constructionism ,Social dynamics ,Reflexivity ,Feminist epistemology ,Engineering ethics ,Ethical AI - Abstract
The potential for bias embedded in data to lead to the perpetuation of social injustice through Artificial Intelligence (AI) necessitates an urgent reform of data curation practices for AI systems, especially those based on machine learning. Without appropriate ethical and regulatory frameworks there is a risk that decades of advances in human rights and civil liberties may be undermined. This paper proposes an approach to data curation for AI, grounded in feminist epistemology and informed by critical theories of race and feminist principles. The objective of this approach is to support critical evaluation of the social dynamics of power embedded in data for AI systems. We propose a set of fundamental guiding principles for ethical data curation that address the social construction of knowledge, call for inclusion of subjugated and new forms of knowledge, support critical evaluation of theoretical concepts within data and recognise the reflexive nature of knowledge. In developing this ethical framework for data curation, we aim to contribute to a virtue ethics for AI and ensure protection of fundamental and human rights.
- Published
- 2021
195. Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems
- Author
-
Schildt, Alexandra, and Luo, Jenny
- Abstract
AI has quickly grown from being a vast concept to an emerging technology that many companies are looking to integrate into their businesses, and is generally considered an ongoing “revolution” transforming science and society altogether. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is an increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations and authorities have proposed tools and strategies for working with ethical AI within organizations, there is a lack of a holistic perspective tying together the tools and strategies proposed in ethical, technical and organizational discourses. The thesis aims to contribute knowledge to bridge this gap by addressing the following purpose: to explore and present the different tools and methods companies and organizations should have in order to build machine learning applications in a fair and transparent manner. The study is of a qualitative nature and data collection was conducted through a literature review and interviews with subject matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process. Tools used outside the development process, such as ethical guidelines, appointed roles, workshops and trainings, have positive effects on alignment, engagement and knowledge, while providing valuable opportunities for improvement. Furthermore, the results indicate that it is critical that high-level principles are translated into measurable requirement specifications.
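As a minimal sketch of one "inside the development process" tool of the kind the thesis points towards, the example below computes a demographic parity gap as a measurable fairness check; the function, data, and threshold are invented for illustration.

    # Illustrative fairness check: demographic parity difference between groups.
    # Data and the 0.10 guideline threshold are invented assumptions.
    def demographic_parity_difference(preds: list[int], groups: list[str]) -> float:
        rates = {}
        for g in set(groups):
            members = [p for p, gg in zip(preds, groups) if gg == g]
            rates[g] = sum(members) / len(members)  # positive-prediction rate per group
        return max(rates.values()) - min(rates.values())

    predictions = [1, 0, 1, 1, 0, 1, 0, 0]
    group_labels = ["a", "a", "a", "a", "b", "b", "b", "b"]

    gap = demographic_parity_difference(predictions, group_labels)
    print(f"parity gap = {gap:.2f}")  # 0.50 here; a guideline might require < 0.10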
- Published
- 2020
196. Cognitive architectures for artificial intelligence ethics
- Author
-
Bickley, Steve J. and Torgler, Benno
- Subjects
Ethics ,Cognitive Architectures ,Artificial Intelligence ,ddc:330 ,Ethical AI ,Society ,Intelligent Systems - Abstract
As artificial intelligence (AI) thrives and propagates through modern life, a key question is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its "black box" characteristics: sometimes we do not know what really goes on inside, or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by its creators, beyond those commonly discussed (e.g., trolley problems and variants thereof), to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and their implications for human society at large, when and if our AI systems make the "wrong" choice we need to understand how they got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake (e.g., health, well-being, finance, law) or when major individual or household decisions are taken. Doing so requires opening up the "black box" of AI, especially as AI systems act, interact, and adapt in a human world and interact with other AI in this world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI systems arrive at the solutions they do, and we should seek to do this at a deeper level, in terms of the machine equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society (and avoid the negatives), we need to understand AI more fully in the first place, and we expect this will simultaneously contribute towards a greater understanding of its human counterparts.
- Published
- 2021
197. The Use and Misuse of Counterfactuals in Ethical Machine Learning
- Author
-
Andrew Smart and Atoosa Kasirzadeh
- Subjects
Counterfactual thinking ,FOS: Computer and information sciences ,ethics of AI ,Counterfactual conditional ,fairness ,Machine learning ,computer.software_genre ,Semantics ,explainable AI ,social ontology ,social category ,Computer Science - Computers and Society ,Race (biology) ,Computers and Society (cs.CY) ,Set (psychology) ,Social ontology ,philosophy ,business.industry ,social kind ,ethical AI ,algorithmic fairness ,philosophy of AI ,counterfactuals ,machine learning ,Falsity ,Causal inference ,Artificial intelligence ,explanation ,business ,Psychology ,computer - Abstract
The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can require an incoherent theory of what social categories are. Our findings suggest that most often the social categories may not admit counterfactual manipulation, and hence may not appropriately satisfy the demands for evaluating the truth or falsity of counterfactuals. This is important because the widespread use of counterfactuals in machine learning can lead to misleading results when applied in high-stakes domains. Accordingly, we argue that even though counterfactuals play an essential part in some causal inferences, their use for questions of algorithmic fairness and social explanations can create more problems than they resolve. Our positive result is a set of tenets about using counterfactuals for fairness and explanations in machine learning., Comment: 9 pages, 1 table, 1 figure
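To make the critiqued operation concrete, the hedged sketch below shows the naive "attribute flipping" counterfactual on a toy model; the model and data are hypothetical, and the paper's point is precisely that flipping a social-category feature in isolation presumes that category can be coherently manipulated while everything else stays fixed.

    # Sketch of the naive attribute-flipping counterfactual the paper warns about;
    # the scoring rule below is a hypothetical stand-in for a trained classifier.
    def toy_model(features: dict) -> int:
        score = 0.6 * features["income"] + 0.4 * (features["group"] == "A")
        return int(score > 0.5)

    applicant = {"income": 0.4, "group": "B"}
    counterfactual = {**applicant, "group": "A"}  # flip only the social category

    print(toy_model(applicant), toy_model(counterfactual))  # 0 1: prediction flips,
    # but whether this "counterfactual individual" is well-defined is the issue.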
- Published
- 2021
- Full Text
- View/download PDF
198. Explainable and Ethical AI: A Perspective on Argumentation and Logic Programming
- Author
-
Roberta Calegari, Giovanni Sartor, Andrea Omicini, Matteo Baldoni, and Stefania Bandini
- Subjects
business.industry ,Computer science ,Perspective (graphical) ,Intelligent decision support system ,Inductive LP ,Logic programming ,Sketch ,Argumentation theory ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Argumentation ,Explainable AI ,Ethical AI ,Abduction ,Software engineering ,business ,AND gate ,Probabilistic LP - Abstract
In this paper we sketch a vision of explainability for intelligent systems: a logic-based approach suitable to be injected into, and exploited by, the system actors once integrated with sub-symbolic techniques. In particular, we show how argumentation could be combined with different extensions of logic programming, namely abduction, inductive logic programming, and probabilistic logic programming, to address the issues of explainable AI as well as some ethical concerns about AI.
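As a toy illustration of one of the extensions discussed, abduction, the sketch below searches for the smallest set of abducible assumptions that makes a goal derivable from propositional Horn rules; the rules and the brute-force search are ours, not the paper's formalism.

    # Toy abduction over propositional Horn rules; the rules and search strategy
    # are illustrative assumptions, not the framework proposed in the paper.
    from itertools import combinations

    rules = {"explain": {"symptom", "cause"}}   # head <- body
    facts = {"symptom"}                         # what is already known
    abducibles = ["cause", "other_cause"]       # hypotheses we may assume

    def derivable(goal: str, known: set) -> bool:
        if goal in known:
            return True
        body = rules.get(goal)
        return body is not None and all(derivable(b, known) for b in body)

    def abduce(goal: str) -> set | None:
        # Smallest set of abducible assumptions that makes the goal derivable.
        for k in range(len(abducibles) + 1):
            for hyp in combinations(abducibles, k):
                if derivable(goal, facts | set(hyp)):
                    return set(hyp)
        return None

    print(abduce("explain"))  # {'cause'}: the assumption that explains the goal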
- Published
- 2021
199. O Regulamento Geral sobre a Proteção de Dados como um instrumento de regulação de uma inteligência artificial de confiança à luz das orientações éticas da Comissão Europeia
- Author
-
Leal, Victor Moreira Mulin, Abreu, Joana Rita Sousa Covelo, Novais, Paulo, and Universidade do Minho
- Subjects
Risk based regulation ,Ciências Sociais::Direito ,Regulação baseada em risco ,Avaliação de risco ,Princípios éticos ,Ethical AI ,Ethical principles ,IA ética ,Risk assessment - Abstract
Master's dissertation in Law and Informatics, It is undeniable that artificial intelligence (AI) generates significant opportunities for society, such as increasing the well-being of individuals and advancing the economy. However, if misused, AI can also pose risks to the interests of society and the fundamental rights of individuals. The lack of specific regulation for AI makes this scenario even more complex, since the harm to privacy, fairness, and safety can be irreversible. In this context, ethical and rights-based principles are put forward to build a trustworthy and ethical AI. However, in the absence of specific legislation, it becomes important to understand whether the General Data Protection Regulation (GDPR) is an appropriate instrument to turn the European design of an ethical and trustworthy AI into reality. In order to verify this possibility, the provisions of the GDPR, and mainly the Data Protection Impact Assessment (DPIA), will be analyzed in the light of an ethical and trustworthy European AI.
- Published
- 2021
200. Ethical Artificial Intelligence – On Principles and Processes
- Author
-
Jobin, Anna
- Subjects
Artificial Intelligence ,Ethical AI - Abstract
Forget Artificial Intelligence, it’s all about 'Ethical AI'. (© 2020 Ecowin Verlag by Benevento Publishing, reproduced with kind permission)
- Published
- 2020
- Full Text
- View/download PDF