319 results
Search Results
2. SOCIAL ISSUES IN MANAGEMENT Conference Paper Abstracts.
- Subjects
- MANAGEMENT science, SOCIAL responsibility of business, TRADE associations, BUSINESS ethics, SOCIAL networks, ORGANIZATIONAL behavior, SCANDALS, SOCIAL change, ENTREPRENEURSHIP, NEW business enterprises
- Abstract
The article presents several conference paper abstracts on social issues in management. "Constructing Corporate Social Responsibility: The Role of Industry Associations" discusses the nature of trade associations and the role they play in society. "Path Dependence in Firm-Stakeholder Relations" focuses on social responsibility by businesses. "Exploring Recent Business Scandals and Entrepreneurial Antecedents: Ethical Leadership Implications" investigates the relationship between scandals and entrepreneurship.
- Published
- 2005
- Full Text
- View/download PDF
3. Effects of explanations communicated in announcements of alleged labor abuses on valuation of a firm’s stock
- Author
- Daly, Joseph Patrick, Pouder, Richard W., and McNeil, Chris R.
- Published
- 2017
- Full Text
- View/download PDF
4. Explanations: if, when, and how they aid service recovery
- Author
- Bradley, Graham and Sparks, Beverley
- Published
- 2012
- Full Text
- View/download PDF
5. A cross‐cultural comparison of perceived informational fairness with service failure explanations
- Author
- Wang, Chen‐ya and Mattila, Anna S.
- Published
- 2011
- Full Text
- View/download PDF
6. The Impact of Explanations on Layperson Trust in Artificial Intelligence-Driven Symptom Checker Apps: Experimental Study
- Author
- Claire Woodcock, Grant Blank, Brent Mittelstadt, and Dan Busbridge
- Subjects
- FOS: Computer and information sciences, Counterfactual thinking, knowledge, virtual health care, digital health, Computer Science - Human-Computer Interaction, Health Informatics, Affect (psychology), symptom checker, Human-Computer Interaction (cs.HC), Treatment and control groups, diagnostics, eHealth, Humans, mHealth, clinical communication, Original Paper, mobile phone, conversational agent, business.industry, chatbot, trust, Social proof, artificial intelligence, Digital health, Exploratory factor analysis, Cross-Sectional Studies, explanations, symptoms, Artificial intelligence, Psychology, business, Delivery of Health Care, Software
- Abstract
Background: Artificial intelligence (AI)–driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can take many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople.
Objective: The aim of this study is to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease.
Methods: A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in the diagnosis of either a migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants' responses followed by comparison-of-means tests were used to evaluate group differences in trust.
Results: Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and subsequent mental model that the participants held. When varying explanation type by disease, migraine was found to be nonsignificant (P=.65) and temporal arteritis marginally significant (P=.09). Varying disease by explanation type resulted in statistical significance for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), with counterfactual explanation marginally significant (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that, to be successful, symptom checkers need to tailor explanations to each user's specific question and discount the diseases that they may also be aware of.
Conclusions: System builders developing explanations for symptom-checking apps should consider the recipient's knowledge of a disease and tailor explanations to each user's specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker to fully discount the diseases that they may be aware of and to close their information gap.
- Published
- 2022
7. Negotiating Knowledge Through Mathematical Activities in Classroom Interactions.
- Author
- Ingram, Jenni
- Abstract
Copyright of JMD: Journal für Mathematik-Didaktik is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
8. Demystifying XAI: Requirements for Understandable XAI Explanations.
- Author
- STODT, Jan, REICH, Christoph, and KNAHL, Martin
- Abstract
This paper establishes requirements for assessing the usability of Explainable Artificial Intelligence (XAI) methods, focusing on non-AI experts like healthcare professionals. Through a synthesis of literature and empirical findings, it emphasizes achieving optimal cognitive load, task performance, and task time in XAI explanations. Key components include tailoring explanations to user expertise, integrating domain knowledge, and using non-propositional representations for comprehension. The paper highlights the critical role of relevance, accuracy, and truthfulness in fostering user trust. Practical guidelines are provided for designing transparent and user-friendly XAI explanations, especially in high-stakes contexts like healthcare. Overall, the paper's primary contribution lies in delineating clear requirements for effective XAI explanations, facilitating human-AI collaboration across diverse domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Are there essential forms in the social domain?
- Author
- Jansen, Ludger
- Subjects
- SOCIAL history, OBJECTIONS (Evidence)
- Abstract
Traditionally, nature has often been thought to be structured by essential forms providing the generic features of natural things and thus the foundations for scientific explanations. In contrast, human history and the social domain have been thought to be the realm of ever‐changing appearances, where contingency prevails. The paper argues that the existence of essential forms is compatible with the contingent, mind‐dependent and historical character of the social world, and that essential forms can also be found in the social domain. Two categories of entities are discussed that suggest themselves to be identified as social forms, namely social kinds and social identities. To this end, it is shown that standard arguments for the existence of essential forms also apply to the social domain, and objections to the existence of social forms are rebutted. Particular attention is paid to the explanatory appeal of social forms. The paper concludes by suggesting that, far from being oppressive, essential social forms are presupposed by liberating social practices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Can Non-Causal Explanations Answer the Leibniz Question?
- Author
- Lemanski, Jens
- Subjects
- EXPLANATION, INTENTION, ARGUMENT
- Abstract
Leibniz is often cited as an authority when it comes to the formulation and answer strategy of the question "Why is there something rather than nothing?" Yet much current research assumes that Leibniz advocates an unambiguous question and strategy for the answer. In this respect, one repeatedly finds the argument in the literature that alternative explanatory approaches to this question violate Leibniz's intention, since he derives the question from the principle of sufficient reason and also demands a causal explanation to the question. In particular, the new research on non-causal explanatory strategies to the Leibniz question seems to concern this counter-argument. In this paper, however, I will argue that while Leibniz raises the question by means of the principle of sufficient reason, he even favours a non-causal explanatory strategy to the question. Thus, a more accurate Leibniz interpretation seems not only to legitimise but also to support non-causal explanations to the Leibniz question. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. "So, Why Were You Late Again?": Social Account's Influence on the Behavioral Transgression of Being Late to a Meeting.
- Author
- Allen, Joseph A., Eden, Emilee, Castro, Katherine C., Smith, McKaylee, and Mroz, Joseph E.
- Subjects
- SOCIAL accounting, INTERPERSONAL relations, FORGIVENESS, HELPING behavior, LEADERSHIP
- Abstract
People often offer an excuse or an apology after they do something wrong in an attempt to mitigate any potential negative consequences. In this paper, we examine how individuals employ social accounts when explaining their interpersonal transgression of meeting lateness to others in actual work settings. We examined the different combinations of social accounts and the social outcomes (forgiveness, helping behaviors, and intentions to continue interaction) of being late to a meeting. Across two studies using complementary experimental and survey methods, we found that a majority of late arrivers' explanations included remorse and that including remorse significantly influences helping behaviors. Furthermore, we found no interaction between excuses and offering remorse. Implications of these findings and future directions are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Explanations in Autonomous Driving: A Survey.
- Author
- Omeiza, Daniel, Webb, Helena, Jirotka, Marina, and Kunze, Lars
- Abstract
The automotive industry has witnessed an increasing level of development in the past decades; from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in Artificial Intelligence (AI), automotive companies now employ blackbox AI models to enable vehicles to perceive their environment and make driving decisions with little or no input from a human. With the hope to deploy autonomous vehicles (AV) on a commercial scale, the acceptance of AV by society becomes paramount and may largely depend on their degree of transparency, trustworthiness, and compliance with regulations. The assessment of the compliance of AVs to these acceptance requirements can be facilitated through the provision of explanations for AVs’ behaviour. Explainability is therefore seen as an important requirement for AVs. AVs should be able to explain what they have ‘seen’, done, and might do in environments in which they operate. In this paper, we provide a comprehensive survey of the existing work in explainable autonomous driving. First, we open by providing a motivation for explanations by highlighting the importance of transparency, accountability, and trust in AVs; and examining existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs and elicit their AV explanation requirements. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, vehicle control, and system management). Finally, we discuss pertinent challenges and provide recommendations including a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge required of researchers who are interested in explanation provisions in autonomous driving. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Explanations of Research Misconduct, and How They Hang Together.
- Author
- Haven, Tamarinde and van Woudenberg, René
- Subjects
- RATIONAL choice theory, PROSPECT theory, NEW public management, IMPLICIT bias
- Abstract
In this paper, we explore different possible explanations for research misconduct (especially falsification and fabrication), and investigate whether they are compatible. We suggest that to explain research misconduct, we should pay attention to three factors: (1) the beliefs and desires of the misconductor, (2) contextual affordances, (3) and unconscious biases or influences. We draw on the three different narratives (individual, institutional, system of science) of research misconduct as proposed by Sovacool to review six different explanations. Four theories start from the individual: Rational Choice theory, Bad Apple theory, General Strain Theory and Prospect Theory. Organizational Justice Theory focuses on institutional factors, while New Public Management targets the system of science. For each theory, we illustrate the kinds of facts that must be known in order for explanations based on them to have minimal plausibility. We suggest that none can constitute a full explanation. Finally, we explore how the different possible explanations interrelate. We find that they are compatible, with the exception of explanations based on Rational Choice Theory and Prospect Theory respectively, which are incompatible with one another. For illustrative purposes we examine the case of Diederik Stapel. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. The Effect of Online Review Portal Design: The Moderating Role of Explanations for Review Filtering.
- Author
- Guangxu WANG, David (Jingjun) Xu, and Stephen Shaoyi LIAO
- Subjects
- CONSUMER behavior, DECISION making, ALGORITHMS, PUBLIC opinion, CONJOINT analysis
- Abstract
The flood of non-constructive and fake online consumer reviews erects a considerable barrier to consumers making efficient decisions. Various review filtering algorithms have been developed to address this challenge, but the design of post-development review portals continues to lack a consensus. In review portals, disclosing more transparent reviews is efficient for enhancing users’ trust. However, it will cause users’ diminished focus on recommended reviews, leading to sub-optimal decisions. A research model is then developed to investigate users’ cognitive processes in their responses to three review exhibition designs (i.e., informed silent display design, filtered review display design, and composite display design) regarding trust in the review portal and perceived decision quality. We also suggest that explanations for review filtering play a moderating role in users’ perceptions, which appears to be a viable resolution to this dilemma. This paper provides significant theoretical and practical insights for the review portal design and implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
15. Research Agenda of Ethical Recommender Systems based on Explainable AI.
- Author
- Guttmann, Mike and Ge, Mouzhi
- Subjects
- RECOMMENDER systems, ARTIFICIAL intelligence, LITERATURE reviews, DIGITAL technology, EVIDENCE gaps
- Abstract
In the digital era, recommender systems (RS) have become an integral part of our daily interactions, exerting a significant impact on users and society. However, this also raises ethical challenges related to RS that should be considered. Addressing these challenges requires the application of explainable artificial intelligence (XAI) models to make RS more understandable. Based on the current state-of-the-art literature, this paper aims to provide a comprehensive overview of XAI for RS and its ethical implications, with the aim of proposing a research agenda for ethical RS based on XAI. The findings of the literature review show that neural network-based RS have received much attention in terms of offering explanations, while there is a research gap in explaining context-based RS and in evaluating explanations. In addition, a set of ethical challenges for RS are discussed by exploring how explanations for recommendations can contribute to the ethical use of RS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Clinical needs and preferences for AI-based explanations in clinical simulation training.
- Author
- Kathrine Kollerup, Naja, Johansen, Stine S., Tolsgaard, Martin Grønnebæk, Lønborg Friis, Mikkel, Skov, Mikael B., and van Berkel, Niels
- Abstract
Medical training is a key element in maintaining and improving today's healthcare standards. Given the nature of medical work, students must master not only theory but also develop their hands-on abilities and skills in clinical practice. Medical simulators play an increasing role in supporting the active learning of these students due to their ability to present a large variety of tasks allowing students to train and experiment indefinitely without causing any patient harm. While the criticality of explainable AI systems has been extensively discussed in the literature, the medical training context presents unique user needs for explanations. In this paper, we explore the potential gap of current limitations within simulation-based training, and the role Artificial Intelligence (AI) holds in supporting the needs of medical students in training. Through contextual inquiries and interviews with clinicians in training (N = 9) and subsequent validation with medical experts (N = 4), we obtain an understanding of the shortcomings in current simulation-based training and offer recommendations for future AI-driven training. Our results stress the need for continuous and actionable feedback that resembles the interaction between clinical supervisor and resident in real-world training scenarios while adjusting training material to the residents' skills and prior performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Positive and negative explanation effects in human–agent teams
- Author
- Lavender, Bryan, Abuhaimed, Sami, and Sen, Sandip
- Published
- 2024
- Full Text
- View/download PDF
18. Institutional history or ‘quid-pro-quo’? Exploring revenue collection in two Ugandan districts.
- Author
- Kjaer, Anne Mette
- Subjects
- INTERNAL revenue, TAX administration & procedure, TAX auditing, TAXATION
- Abstract
This paper explores why financial decentralization and political pressure to lower the graduated personal tax have had different impacts in two Ugandan districts. It examines three possible explanations for these differences, each focusing on a different aspect of the local context. The quid-pro-quo explanation focuses on whether services are delivered in return for the tax paid; different perceived service delivery levels would then explain differences in compliance and tax takes. The neo-patrimonial explanation argues that differences in the degree to which personal relations dominate over formal rules explain the differences in tax takes. The extractive capacity explanation focuses on established practices with regard to tax collection and stresses the fact that administrative autonomy differs among districts. The paper argues that the latter explanation is the most plausible. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
19. Knowing When to Be An Honest Broker: Impartiality and Third-Party Support for Peace Implementation After Civil Wars.
- Author
- Schmidt, Holger
- Subjects
- BROKERS, CIVIL war, VIOLENCE, INTERVENTION (Administrative procedure), HONESTY
- Abstract
Rationalist bargaining theory suggests that commitment problems often play a central role in both the outbreak and the perpetuation of violent conflict. Empirical studies indicate that commitment problems are especially pervasive in intrastate disputes, where war termination requires combatants to demobilize or disarm, leaving them highly vulnerable to unilateral defection by the opponent. Ending civil wars therefore often requires the involvement of third parties who help disputants overcome their fears of exploitation through outside monitoring and enforcement. Although this argument has come to be widely accepted, disagreement exists as to whether third parties need to be impartial to serve this function. Whereas some scholars believe that interveners must play the role of an "honest broker"; to succeed, others maintain that because of their greater willingness to commit resources, biased interveners make better guarantors. This paper suggests that these seemingly contradictory views can be reconciled with the help of two additional variables. Specifically, I argue that whether impartiality enhances or undermines the effectiveness of interventions aimed at resolving problems of credible commitment depends on (1) whether the intervener pursues an informational or enforcement strategy and (2) the type of commitment problem that the intervener seeks to address. The empirical part of the paper tests this argument against seventeen cases of third-party intervention aimed at supporting the implementation of negotiated civil war settlements. The results of this test provide substantial support for the hypotheses presented in the theoretical section of the paper, but also indicate that greater attention needs to be paid to the role of selection effects in shaping the relationship between impartiality and intervention outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
20. Tobacco Control in Comparative Perspective: Framing the Problems and the Puzzles.
- Author
- Marmor, Theodore R. and Lieberman, Evan S.
- Subjects
- PREVENTION of tobacco use, SMOKING prevention, POLITICAL planning, DEVELOPED countries
- Abstract
This paper comments on the similarities and differences in contemporary regulatory behavior towards smoking in eight industrial democracies. It takes as its central source of information extended case studies of Australia, Canada, Denmark, France, Germany, Japan, the United Kingdom, and the United States, written by specialists for a Robert Wood Johnson-supported research project on the control of tobacco use. Assessing the relevance of competing theories of public policy formation, the paper concludes that the structure of political institutions provides the best guide to differences in regulatory regimes. In particular, the suggestion is that federal regimes, as compared with unitary political structures, are more likely to have moralistic campaigns against tobacco use, with health reformers choosing more attractive settings to advance their anti-tobacco agenda. The international flow of information and regulatory strategies produces convergence in the arguments employed and strategies tried, but there are clear differences in the strength of current regulations of tobacco use among these rich democracies. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
21. Fictional mechanism explanations: clarifying explanatory holes in engineering science.
- Author
- Barman, Kristian González
- Abstract
This paper discusses a class of mechanistic explanations employed in engineering science where the activities and organization of nonstandard entities (voids, cracks, pits…) are cited as core factors responsible for failures. Given the use of mechanistic language by engineers and the manifestly mechanistic structure of these explanations, I consider several interpretations of these explanations within the new mechanical framework (among others: voids should be considered as shorthand expressions for other entities, voids should be reduced to lower-level mechanisms, or the explanations are simply abstract mechanistic explanations). I argue that these interpretations fail to solve several philosophical problems and propose an account of fictional mechanism explanations instead. According to this account, fictional mechanism explanations provide descriptions of fictional mechanisms that enable the tracking of counterfactual dependencies of the physical system they model by capturing system constraints. Engineers use these models to learn about and understand properties of materials, to build computational simulations of their behaviour, and to design new materials. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. The two images of economics: why the fun disappears when difficult questions are at stake?
- Author
- Aydinonat, N. Emrah
- Subjects
- ECONOMICS, FINANCIAL crises, PUBLIC opinion, LABOR incentives, SEAT belts
- Abstract
The image of economics got somewhat puzzling after the crisis of 2008. Many economists now doubt that economics is able to provide answers to some of its core questions. The crisis was not so fun for economics. However, this not so fun image of economics is not the only image in the eyes of the general public. When one looks at economics-made-fun (EMF) books (e.g. Freakonomics, The Undercover Economist, etc.), economics seems to be an explanatory science which is able to provide interesting, unconventional, entertaining and enlightening explanations for almost every aspect of our lives. Isn't there a great contradiction between these two images of economics? Not necessarily. The present paper explicates why. Nevertheless, the paper also shows that EMF books run the risk of creating a false sense of understanding and explains how one should read the basic insights provided by EMF books to remove this risk. The paper contrasts the EMF version of the explanation of the effects of mandatory seat belt laws with actual research concerning the subject to illustrate its arguments. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
23. Transparency as design publicity: explaining and justifying inscrutable algorithms
- Author
- Loi, Michele, Ferrario, Andrea, and Viganò, Eleonora
- Published
- 2021
- Full Text
- View/download PDF
24. Crying in a Russian preschool: Teachers' pragmatic acts in response to children's distress.
- Author
- Moore, Ekaterina
- Subjects
- CRYING, PRESCHOOL teachers, BEHAVIORAL assessment, DISCOURSE analysis, SOCIAL values, DATA analysis
- Abstract
The paper examines teacher pragmatic acts (verbal and embodied) in response to children's crying in a Russian preschool. It focuses on crying episodes associated with children's inability or unwillingness to follow the norms and expectations surrounding children's conduct (e.g., crying when not allowed to do something). The paper employs discourse analysis; the data are 40 h of video- and audio-recorded interactions. The analysis shows that teachers used directives and other pragmatic acts (e.g., assessments, explanations) that encouraged children to stop crying. The directives were produced in multi-party participation frameworks where peers were referred to or recruited as active participants. Teachers used positive and negative assessments of crying. Crying was presented in a negative light in contrast with positive non-crying behaviors; positive assessments (praise and affectionate touch) were offered when crying stopped. The consequences of crying were presented as negatively affecting others. It is argued that such discursive treatment of crying contributes to socializing children to conformity. The paper contributes to our understanding of how through the use of pragmatic acts, child crying is discursively treated in ways that invoke norms of pragmatic conduct (e.g., not crying when asked to participate in a planned activity) and social values regarding group membership. • Analysis examines teachers' pragmatic acts (verbal and embodied) in response to crying in a Russian preschool. • Methods utilize discourse analysis. • Findings explore teachers' use of directive trajectories and other pragmatic acts that encouraged children to stop crying. • Conclusions are discussed in relation to norms of pragmatic conduct and social values. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Advancing Explanation in Comparative Politics by Applying Social Mechanisms and Statistical Models.
- Author
- Zuckerman, Alan S.
- Subjects
- COMPARATIVE government, INDIVIDUALISM, STATISTICS, EMPIRICAL research, GERMAN politics & government, BRITISH politics & government
- Abstract
The paper addresses the problem of explanation in comparative politics. Accepting the principles of methodological individualism, the analysis equates social mechanisms with the general class of explanatory principles. Successful explanations join social mechanisms with rigorous empirical analyses. Advances in statistical modeling allow these methods of detailing and defending the validity of empirical claims to be applied to complex, reciprocal, and multi-level political phenomena, the stuff of comparative politics. The paper illustrates these general principles: it offers theoretical support for reciprocal political influence in households, tests this interpretation against alternative social mechanisms, matters of importance in the politics of Germany and Britain, and employs appropriate statistical methods to model these complex reciprocal relationships. More generally, seeking to demonstrate the explanatory power of its claims, and not just to offer a plausible account or a story in line with theoretical principles, the analysis insists on the need for both social mechanisms and rigorous empirical tests. [ABSTRACT FROM AUTHOR]
- Published
- 2006
26. To Excuse or Not to Excuse: Effect of Explanation Type and Provision on Reactions to a Workplace Behavioral Transgression
- Author
- Mroz, Joseph E. and Allen, Joseph A.
- Published
- 2020
- Full Text
- View/download PDF
27. Pronunciation, Grammar, and Vocabulary Explanations in Pedagogical Interaction
- Author
- Mark Romig
- Subjects
- explanations, conversation analysis, pronunciation, vocabulary, grammar, Theory and practice of education, LB5-3640, English language, PE1-3729
- Abstract
This article reviews conversation analytic research on explanations in pedagogical interaction, particularly in language learning classrooms. In reviewing this literature, this paper aims to provide a comprehensive account of what is interactionally involved when giving pedagogical explanations so that future research investigating the effectiveness of these kinds of explanations can be appropriately measured. The paper first discusses characteristics of explanation as interactional phenomena, namely that they are sequentially organized, either planned or unplanned, and either monologically or dialogically organized. Then, the paper details how explanations in three particular linguistic domains (i.e., pronunciation, grammar, and vocabulary) are accomplished interactionally. In doing so, this paper highlights similarities and differences across linguistic domains that are frequently found in language learning classrooms. The paper ends by identifying patterns across pedagogical explanations and by suggesting directions for future research.
- Published
- 2023
- Full Text
- View/download PDF
28. An explanation-based approach for experiment reproducibility in recommender systems.
- Author
-
Polatidis, Nikolaos, Papaleonidas, Antonios, Pimenidis, Elias, and Iliadis, Lazaros
- Subjects
RECOMMENDER systems ,STANDARD deviations - Abstract
The offline evaluation of recommender systems is typically based on accuracy metrics such as the Mean Absolute Error and the Root Mean Squared Error for rating prediction, and Precision and Recall for measuring the quality of the top-N recommendations. However, it is difficult to reproduce results, since various libraries can be used for running experiments, and even within the same library there are many different settings that, if not taken into consideration when replicating an experiment, can cause the results to vary. In this paper, we show that, within the same library, an explanation-based approach can be used to assist in the reproducibility of experiments. Our proposed approach has been experimentally evaluated on a real dataset using a wide range of recommendation algorithms, ranging from collaborative filtering to complicated fuzzy recommendation approaches that can solve the filter bubble problem; the results show that it is both practical and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency.
- Author
-
Sokol, Kacper and Flach, Peter
- Abstract
The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats. Furthermore, we discuss the challenges of mirroring the explainee's mental model, which is the main building block of intelligible human–machine interactions.
We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extracting information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend "Wizard of Oz" studies as a proxy for testing and evaluating standalone interactive explainability algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Recommending Learning Objects with Arguments and Explanations.
- Author
-
Heras, Stella, Palanca, Javier, Rodriguez, Paula, Duque-Méndez, Néstor, and Julian, Vicente
- Subjects
RECOMMENDER systems ,SMART materials ,USER interfaces ,ARGUMENT ,EXPLANATION ,VIRTUAL classrooms - Abstract
The massive presence of online learning resources leads many students to have more information than they can consume efficiently. Therefore, students do not always find adaptive learning material for their needs and preferences. In this paper, we present a Conversational Educational Recommender System (C-ERS), which helps students find the most appropriate learning resources given their learning objectives and profile. The recommendation process is based on an argumentation-based approach that selects the learning objects that allow a greater number of arguments to be generated to justify their suitability. Our system includes a simple and intuitive communication interface that provides an explanation for every recommendation. This allows the user to interact with the system and accept or reject the recommendations, providing reasons for such behavior. In this way, the user is able to inspect the system's operation and understand the recommendations, while the system is able to elicit the actual preferences of the user. The system has been tested online with a real group of undergraduate students at the Universidad Nacional de Colombia, showing promising results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
31. Towards a Theory of Explanations for Human–Robot Collaboration.
- Author
-
Sridharan, Mohan and Meadows, Ben
- Abstract
This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture's capabilities in the context of a simulated robot (a) moving target objects to desired locations or people; or (b) following recipes to bake biscuits. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
32. The influence of directive explanations on users’ business process compliance performance.
- Author
-
Hadasch, Frank, Maedche, Alexander, and Gregor, Shirley
- Subjects
BUSINESS process management ,LEGAL compliance ,INFORMATION technology ,KNOWLEDGE-based theory of the firm ,CLUSTER analysis (Statistics) - Abstract
Purpose – In organizations, individual users' compliance with business processes is important from a regulatory and efficiency point of view. The restriction of users' choices by implementing a restrictive information system is a typical approach in many organizations. However, restrictions and mandated compliance may affect employees' performance negatively, especially when users need a certain degree of flexibility in completing their work activities. The purpose of this paper is to introduce the concept of directive explanations (DEs). DEs provide context-dependent feedback to users, but do not force users to comply. Design/methodology/approach – The experimental study used in this paper aims at investigating how DEs influence users' process compliance. The authors used a laboratory experiment to test the proposed hypotheses. Every participant underwent four trials for which business process compliance was measured. Two trial blocks were used to cluster the four trials. Diagrammatic DEs were provided in one of the trial blocks, while textual DEs were provided in the other. Trial blocks were counterbalanced. Findings – The results of the experiment show that DEs influence a user's compliance, but the effect varies for different types of DEs. The authors believe this study is significant as it empirically examines design characteristics of explanations from knowledge-based systems in the context of business processes. Research limitations/implications – This study is certainly not without limitations. The sample used for this study was drawn from undergraduate information systems management students. The sample is thus not representative of the general population of organizations' IT users. However, a student sample adequately represents novice IT users, who are not very familiar with a business process. They are particularly suitable for studying how users react to first-time contact with a DE.
Practical implications – The findings of this study are important to designers and implementers of systems that guide users to follow business processes. As the authors have illustrated with a real-world scenario, an ERP system’s explanation can lack details on how a user can resolve a blocked activity. In situations in which users bypass restricted systems, DEs can guide them to comply with a business process. Particularly diagrammatic explanations, which depict actors, activities, and constraints for a business process, have been found to increase the probability that users’ behavior is business process compliant. Less time may be needed to resolve a situation, which can result in very efficient user-system cooperation. Originality/value – This study makes several important contributions to research on explanations, which are provided by knowledge-based systems. First, the authors conceptualized, designed, and investigated a novel type of explanations, namely, DEs. The results of this study show how dramatic the difference in process compliance performance is when exposed to certain types of DEs (in one group from 57 percent on the initial trial to 82 percent on the fourth trial). This insight is important to derive design guidelines for DE, particularly when multimedia material is used. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
33. Deathbed Visions: Visitors and Vistas.
- Author
-
Claxton-Oldfield, Stephen
- Subjects
DREAMS ,ATTITUDES toward death ,PSYCHOLOGICAL distress ,DEATH ,PALLIATIVE treatment ,HOSPITAL nursing staff ,HALLUCINATIONS ,TERMINALLY ill ,HOSPICE care - Abstract
This review article examines the recent (i.e., since the late-1990s) research on deathbed visions (DBVs). The reviewed material includes the features of DBV experiences, terminology and definitional issues in the literature, and prevalence reports of DBVs by family members/caregivers of dying persons, healthcare professionals, terminally ill patients, hospice palliative care volunteers, and nursing home staff. The impact of DBVs on dying persons, why deathbed visitors appear, and possible explanations for DBVs are also considered. The lessons learned from the literature review include the following: DBVs are common experiences that cannot be easily explained, and they typically have positive impacts on dying persons, not the least of which is lessening the fear of death. The literature review also highlights the need for training and education about DBVs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Using explanations for recommender systems in learning design settings to enhance teachers' acceptance and perceived experience.
- Author
-
Karga, Soultana and Satratzemi, Maya
- Subjects
EFFECTIVE teaching ,RECOMMENDER systems ,MOBILE learning ,DECISION making ,INFORMATION & communication technologies - Abstract
The reuse of Learning Designs can bring significant advantages to the educational community, such as the diffusion of best teaching practices and the improvement of teaching quality and learning outcomes. Although various tools, including Recommender Systems, have been developed to implement the notion of reusing Learning Designs, their adoption by teachers falls short of expectations. This paper investigates the results of providing teachers with explanations for Learning Design recommendations. To this end, we designed and implemented an explanatory mechanism incorporated into a Recommender System that proposes pre-existing Learning Designs to teachers. We then conducted a user-centric evaluation experiment. Overall, this study provides evidence that explanations should be incorporated into Recommender Systems that propose Learning Designs, as a way of improving the teacher-perceived experience and promoting their wider adoption by teachers. The more teachers accept and adopt Recommender Systems that propose Learning Designs, the more the educational community gains the benefits of reusing Learning Designs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
35. Representational gesturing as an epistemic tool for the development of mechanistic explanatory models.
- Author
-
Mathayas, Nitasha, Brown, David E., Wallon, Robert C., and Lindgren, Robb
- Subjects
SCIENTIFIC terminology ,MIDDLE school students ,SENSEMAKING theory (Communication) ,SCIENCE education ,PRIOR learning - Abstract
Constructing explanatory models, in which students learn to visualize the mechanisms of unobservable entities (e.g., molecules) to explain the working of observable phenomena (e.g., air pressure), is a key practice of science. Yet, students struggle to develop and utilize such models to articulate causal‐mechanistic explanations. In this paper, we argue that representational gesturing with the hands (i.e., gesturing that models semantic content) can support the development of explanatory models. Through case studies examining middle school students gesturing during sensemaking, we show that representational gestures can support students in at least four ways: (a) they make underlying mechanisms visible, (b) they facilitate translation of a spatial model to a verbal explanation, (c) they enable model articulation while relying less on scientific terminology, and (d) they present opportunities for students to embody causal agents. In these ways, representational gesturing can be considered an epistemic tool supporting students during sensemaking and communication. We argue that instruction should attend to students' gestures and, as appropriate, encourage students to gesture as a means of aiding the construction and articulation of causal‐mechanistic explanations. While our study explores one form of embodied representation, we encourage the field to explore embodied expressions as epistemic tools for learning. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
36. Explanatory Framework for Popular Physics Lectures.
- Author
-
Kapon, Shulamit, Ganiel, Uri, and Eylon, Bat Sheva
- Subjects
PHYSICS education ,LECTURES & lecturing ,THEORY of knowledge ,THEORY ,METAPHOR - Abstract
Popular physics lectures provide a ‘translation’ that bridges the gap between the specialized knowledge that formal scientific content is based on, and the audience's informal prior knowledge. This paper presents an overview of a grounded theory explanatory framework for Translated Scientific Explanations (TSE) in such lectures, focusing on one of its aspects, the conceptual blending cluster. The framework is derived from a comparative study of three exemplary popular physics lectures from two perspectives: the explanations in the lecture (as artifacts), and the design of the explanation from the lecturer's point of view. The framework consists of four clusters of categories: 1. Conceptual blending (e.g. metaphor). 2. Story (e.g. narrative). 3. Content (e.g. selection of level). 4. Knowledge organization (e.g. structure). The framework shows how the lecturers customized the content of the presentation to the audience's knowledge. Lecture profiles based upon this framework can serve as guides for utilizing popular physics lectures when teaching contemporary physics to learners lacking the necessary science background. These features are demonstrated through the conceptual blending cluster. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
37. Issue Importance in the 2004 Election: The Role of Policy-related Controversies Concerning Foreign Policy, Traditional Family Values, and Economic Inequality.
- Author
-
Shanks, J. Merrill, Strand, Douglas A., Carmines, Edward G., and Brady, Henry E.
- Subjects
INTERNATIONAL relations ,FAMILY values ,VIEWS ,VOTERS ,POLITICAL candidates ,UNITED States presidential elections - Abstract
A conference paper about the effect of policy-related controversies on the 2004 U.S. presidential election is presented. It presents evidence that policy-related controversies influence the views of the U.S. voters on the political candidates of their choice. It discusses controversies related to foreign policy, traditional family values, and economic inequality. It explains how voters' policy-related views concerning the three controversies contributed to the victory of George W. Bush in the popular vote.
- Published
- 2005
38. A Structuration Framework for Gendered Organizations: The Case of American Law Schools.
- Author
-
Rubineau, Brian
- Subjects
POWER (Social sciences) ,SOCIAL sciences ,SOCIOLOGY ,LAW schools - Abstract
Research on difference and inequality in organizations needs to take a practical turn, with an emphasis on process-oriented explanations of observed disparities. Such process-oriented explanations can help not only to understand the generation of inequality, but also to inform efforts to achieve its reduction. This paper uses a structuration framework to synthesize the gendered organizations literature with documented and original research on the gendered nature of American law schools. The result, gendered structuration, is a process-oriented framework of gendering in law schools that both indicts some unexplored factors and exculpates some prime suspects. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
39. Interpretable classifiers for prediction of disability trajectories using a nationwide longitudinal database
- Author
-
Wu, Yafei, Xiang, Chaoyi, Jia, Maoni, and Fang, Ya
- Published
- 2022
- Full Text
- View/download PDF
40. Effects of Recommendation Neutrality and Sponsorship Disclosure on Trust vs. Distrust in Online Recommendation Agents: Moderating Role of Explanations for Organic Recommendations.
- Author
-
Wang, Weiquan, Xu, Jingjun (David), and Wang, May
- Subjects
WEBSITES ,TECHNOLOGICAL innovations ,CORPORATE sponsorship ,CONSUMER behavior ,RHEUMATOID arthritis - Abstract
We extend the extant research on neutral recommendation agents (RAs) to those that lack recommendation neutrality and are biased toward sponsors. We first investigate the effects of recommendation neutrality on users' trust and distrust in RAs by comparing a biased RA with sponsorship disclosure with a neutral RA. We then apply a contingency approach to examine the effects of sponsorship disclosure on users' trust and distrust in biased RAs, with explanations for organic recommendations as a contingent factor. A laboratory experiment was conducted in the United States. We determine that users' trust in the biased RA with sponsorship disclosure is lower and that their distrust is higher than that in the neutral RA. Results also show that user trust in a biased RA increases only when explanations for organic recommendations and sponsorship disclosure are both provided. Users' perceived psychological contract violations of an RA have been verified as a key mediator of the examined effects. However, explanations for organic recommendations, sponsorship disclosure, or their combination fail to significantly lower users' distrust in a biased RA. A second experiment conducted in Hong Kong confirms the major findings of the experiment conducted in the United States. Theoretical contributions and practical implications for e-commerce RAs are discussed. This paper was accepted by Anandhi Bharadwaj, information systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. On the Rationale for Distinguishing Arguments from Explanations.
- Author
-
McKeon, Matthew
- Subjects
CRITICAL thinking ,CONVERGENT thinking ,DEBATE ,ARGUMENT ,REASONING - Abstract
Even with the lack of consensus on the nature of an argument, the thesis that explanations and arguments are distinct is near orthodoxy in well-known critical thinking texts and in the more advanced argumentation literature. In this paper, I reconstruct two rationales for distinguishing arguments from explanations. According to one, arguments and explanations are essentially different things because they have different structures. According to the other, while some explanations and arguments may have the same structure, they are different things because explanations are used for different purposes than arguments. I argue that both rationales fail to motivate a distinction between arguments and explanations. Since these are the only rationales for distinguishing arguments from explanations that I am prepared to take seriously, I don't see why we should exclude explanations from being arguments. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
42. Understanding Mechanistic Explanation as A Strategy of Analytical Sociology
- Author
-
Danjuma Sheidu Asaka and Olabode Awarun
- Subjects
analysis ,explanations ,social mechanism ,social reality ,strategy ,Social Sciences - Abstract
Although Analytical Sociology is not often used in mainstream Sociology, its history is nevertheless traceable to the classical works of scholars such as Emile Durkheim, Max Weber, and Alexis de Tocqueville, as well as contemporary sociological thinkers like Talcott Parsons and Robert Merton, among others. This paper provides a contemporary argument for the application of mechanistic explanation to the overall understanding of Analytical Sociology, using relevant and practical examples. In the course of this, attention is paid to the concept of explanation and its various types in sociological discourse. This paper therefore argues that social reality can be significantly understood only when explanations are systematic and detailed in content and context. The conclusion is that Analytical Sociology has the capacity to explain the actions of social actors within the social environment beyond social doubt, even though not all situations can be sufficiently explained with this strategy.
- Published
- 2020
- Full Text
- View/download PDF
43. Evaluating the effectiveness of explanations for recommender systems.
- Author
-
Tintarev, Nava and Masthoff, Judith
- Subjects
RECOMMENDER systems ,CONSUMER preferences ,FILTERING software ,SOFTWARE measurement ,SOFTWARE engineering - Abstract
When recommender systems present items, these can be accompanied by explanatory information. Such explanations can serve seven aims: effectiveness, satisfaction, transparency, scrutability, trust, persuasiveness, and efficiency. These aims can be incompatible, so any evaluation needs to state which aim is being investigated and use appropriate metrics. This paper focuses particularly on effectiveness (helping users to make good decisions) and its trade-off with satisfaction. It provides an overview of existing work on evaluating effectiveness and the metrics used. It also highlights the limitations of the existing effectiveness metrics, in particular the effects of under- and overestimation and recommendation domain. In addition to this methodological contribution, the paper presents four empirical studies in two domains: movies and cameras. These studies investigate the impact of personalizing simple feature-based explanations on effectiveness and satisfaction. Both approximated and real effectiveness is investigated. Contrary to expectation, personalization was detrimental to effectiveness, though it may improve user satisfaction. The studies also highlighted the importance of considering opt-out rates and the underlying rating distribution when evaluating effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
44. One ethnic minority, two cultural identities and more.
- Author
-
Feng-bing
- Subjects
MINORITIES ,CULTURAL identity ,CHILDREN of migrant laborers ,CHINESE people ,PARTICIPANT observation ,SELF-perception ,LIFESTYLES ,ETHNICITY ,EXPLANATION ,NARRATIVES ,HABITUS (Sociology) - Abstract
Most current research seems to proceed from a homogeneous conception of the Chinese ethnic minority in North America and Europe. The present paper focuses on ethnic Chinese migrant children living in Northern Ireland whose parents come from Mainland China and Hong Kong, respectively, and examines their intra-ethnic and inter-generational experiences based on in-depth interviews and participant observations. The purpose is to identify and explain both historical dynamics and intra-ethnic diversities within these Chinese children's accounts. The research employs culturally grounded versions of explanations, argumentations, narratives, 'habitus', etc. as analytical tools. The research discovers changes in, and differences between, these two Chinese sub-groups' views and values on self-concept, norms, country of origin, host society, and life-style choices. [ABSTRACT FROM AUTHOR]
- Published
- 2011
45. Games as formal tools versus games as explanations in logic and science.
- Author
-
Pietarinen, Ahti-Veikko
- Subjects
GAME theory ,LOGIC ,SCIENCE ,PHILOSOPHY ,COGNITIVE science - Abstract
This paper addresses the theoretical notion of a game as it arises across scientific inquiries, exploring its uses as a technical and formal asset in logic and science versus an explanatory mechanism. While games comprise a widely used method in a broad intellectual realm (including, but not limited to, philosophy, logic, mathematics, cognitive science, artificial intelligence, computation, linguistics, physics, economics), each discipline advocates its own methodology and a unified understanding is lacking. In the first part of this paper, a number of game theories in formal studies are critically surveyed. In the second part, the doctrine of games as explanations for logic is assessed, and the relevance of a conceptual analysis of games to cognition discussed. It is suggested that the notion of evolution plays a part in the game-theoretic concept of meaning. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
46. The methodological role of mechanistic-computational models in cognitive science
- Author
-
Harbecke, Jens
- Published
- 2021
- Full Text
- View/download PDF
47. Organizational Learning Supported by Machine Learning Models Coupled with General Explanation Methods: A Case of B2B Sales Forecasting.
- Author
-
BOHANEC, Marko, ROBNIK-ŠIKONJA, Marko, and KLJAJIĆ BORŠTNAR, Mirjana
- Subjects
ORGANIZATIONAL learning ,MACHINE learning ,BUSINESS-to-business transactions ,SALES forecasting ,PARTICIPATORY design - Abstract
Background and Purpose: The process of business-to-business (B2B) sales forecasting is a complex decision-making process. There are many approaches to support this process, but it is still based mainly on the subjective judgment of a decision-maker. The problem of B2B sales forecasting can be modeled as a classification problem. However, top-performing machine learning (ML) models are black boxes and do not support transparent reasoning. The purpose of this research is to develop an organizational model using an ML model coupled with general explanation methods. The goal is to support the decision-maker in the process of B2B sales forecasting. Design/Methodology/Approach: A participatory approach of action design research was used to promote acceptance of the model among users. The ML model was built following the CRISP-DM methodology and utilizes the R software environment. Results: The ML model was developed in several design cycles involving users. It was evaluated in the company for several months. Results suggest that, based on the explanations of the ML model predictions, the users' forecasts improved. Furthermore, when the users embrace the proposed ML model and its explanations, they change their initial beliefs, make more accurate B2B sales predictions, and detect other features of the process not included in the ML model. Conclusions: The proposed model promotes understanding, fosters debate and validation of existing beliefs, and thus contributes to single- and double-loop learning. Active participation of the users in the process of development, validation, and implementation has been shown to be beneficial in creating trust and promotes acceptance in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
48. A novel framework for augmenting the quality of explanations in recommender systems.
- Author
-
Karacapilidis, Nikos, Malefaki, Sonia, and Charissiadis, Andreas
- Subjects
RECOMMENDER systems ,AUGMENTED reality ,ROBUST control - Abstract
A significant challenge being faced in recommender systems research concerns the provision of robust explanations about why a particular option is suggested. These explanations may exploit diverse data types concerning the users and items under consideration. In line with the above, this paper introduces a novel framework for automatic explanations building in recommender systems. The proposed solution follows a hybrid approach that meaningfully integrates collaborative filtering and sentiment analysis features into classical multi-attribute based ranking. A comprehensive evaluation of the proposed solution advocates the exploitation of additional and diverse information in explanation building, since this better fulfils a series of recommendation related aims such as transparency, persuasiveness, effectiveness and satisfaction. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
49. Is This System Biased? - How Users React to Gender Bias in an Explainable AI System.
- Author
-
Jussupow, Ekaterina, Meza Martínez, Miguel Angel, Maedche, Alexander, and Heinzl, Armin
- Abstract
Biases in Artificial Intelligence (AI) can reinforce social inequality. Increasing transparency of AI systems through explanations can help to avoid the negative consequences of those biases. However, little is known about how users evaluate explanations of biased AI systems. Thus, we apply the Psychological Contract Violation Theory to investigate the implications of a gender-biased AI system on user trust. We allocated 339 participants into three experimental groups, each with a different loan forecasting AI system version: explainable gender-biased, explainable neutral, and nonexplainable AI system. We demonstrate that only users with moderate to high general awareness of gender stereotypes in society, i.e., stigma consciousness, perceive the gender-biased AI system as not trustworthy. Users with low stigma consciousness perceive the gender-biased AI system as trustworthy as it is more transparent than a system without explanations. Our findings show that AI biases can reinforce social inequality if they match with human stereotypes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
50. Counterfactual attribute-based visual explanations for classification
- Author
-
Gulshad, Sadaf and Smeulders, Arnold
- Published
- 2021
- Full Text
- View/download PDF