391 results
Search Results
152. Constructing Explanations of Flight: A Study of Instructional Discourse in Primary Science.
- Author
-
Rowell, Patricia M. and Ebbers, Margaretha
- Subjects
- *
SCIENCE education (Primary) , *EDUCATION , *LECTURE method in teaching , *LECTURES & lecturing , *CLASSROOMS - Abstract
In this paper, we examine the instructional discourse of science lessons in two primary classrooms for explanations of bird adaptations for flight. We draw on case study data to describe ways in which student construction of explanations is scaffolded by the teachers. We recognised three categories of explanations developed in the discourse: descriptive explanations, relational explanations and explanatory models. Descriptive explanations predominated. We argue that teachers cannot assume that, although the construction of oral explanations in science lessons may move from descriptive to relational or explanatory models, students will be able to write such explanations without specific attention to text structure. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
153. Explanations for over-constrained problems using QuickXPlain with speculative executions.
- Author
-
Vidal, Cristian, Felfernig, Alexander, Galindo, José, Atas, Müslüm, and Benavides, David
- Subjects
KNOWLEDGE base ,ALGORITHMS ,EXPLANATION ,DECISION making ,EXECUTIONS & executioners ,DIAGNOSIS - Abstract
Conflict detection is used in various scenarios ranging from interactive decision making (e.g., knowledge-based configuration) to the diagnosis of potentially faulty models (e.g., using knowledge base analysis operations). Conflicts can be regarded as sets of restrictions (constraints) causing an inconsistency. Junker's QuickXPlain is a divide-and-conquer based algorithm for the detection of preferred minimal conflicts. In this article, we present a novel approach to the detection of such conflicts which is based on speculative programming. We introduce a parallelization of QuickXPlain and empirically evaluate this approach on the basis of synthesized knowledge bases representing feature models. The results of this evaluation show significant performance improvements in the parallelized QuickXPlain version. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
154. Generating and evaluating explanations of attended and error-inducing input regions for VQA models.
- Author
-
Ray, Arijit, Cogswell, Michael, Lin, Xiao, Alipour, Kamran, Divakaran, Ajay, Yao, Yi, and Burachas, Giedrius
- Subjects
VISUAL analytics ,STATISTICS ,COMPUTATION laboratories ,REACTION time ,ARTIFICIAL intelligence - Abstract
Attention maps, a popular heatmap-based explanation method for Visual Question Answering, are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30% and that our proxy helpfulness metrics correlate strongly (correlation > 0.97) with how well users can predict model correctness. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
155. Benefits of Writing an Explanation During Pauses in Multimedia Lessons.
- Author
-
Lawson, Alyssa P. and Mayer, Richard E.
- Subjects
EDUCATIONAL outcomes ,ACTIVE learning ,LEARNING strategies ,LEARNING ,EXPLANATION - Abstract
Generative learning theory posits that learners engage more deeply and produce better learning outcomes when they engage in selecting, organizing, and integrating processes during learning. The present experiments examine whether the generative learning activity of generating explanations can be extended to online multimedia lessons and whether prompts to engage in this generative learning activity work better than more passive instruction. Across three experiments, college students learned about greenhouse gases from a 4-part online lesson involving captioned animations and subsequently took a posttest. After each part, learners were asked to generate an explanation (write-an-explanation), write an explanation using provided terms (write-a-focused-explanation), rewrite a provided explanation (rewrite-an-explanation), read a provided explanation (read-an-explanation), or simply move on to the next part (no-activity). Overall, students in the write-an-explanation group (Experiments 2 and 3), write-a-focused-explanation group (Experiment 2), and rewrite-an-explanation group (Experiment 3) performed significantly better on a delayed posttest than the no-activity group, but the groups did not differ significantly on an immediate posttest (Experiment 1). These results are consistent with generative learning theory and help identify generative learning strategies that improve online multimedia learning, thereby priming active learning with passive media. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
156. Consistency restoration and explanations in dynamic CSPs—Application to configuration
- Author
-
Amilhastre, Jérôme, Fargier, Hélène, and Marquis, Pierre
- Subjects
- *
CONSTRAINT satisfaction , *CONFIGURATION management - Abstract
Most of the algorithms developed within the Constraint Satisfaction Problem (CSP) framework cannot be used as such to solve interactive decision support problems, like product configuration. Indeed, in such problems, the user is in charge of assigning values to variables. Global consistency maintaining is only one among several functionalities that should be offered by a CSP-based platform in order to help the user in her task; other important functionalities include providing explanations for some user's choices and ways to restore consistency. This paper presents an extension of the CSP framework in this direction. The key idea consists in considering and handling the user's choices as assumptions. From a theoretical point of view, the complexity issues of various computational tasks involved in interactive decision support problems are investigated. The results cohere with what is known when Boolean constraints are considered and show that all the tasks are intractable in the worst case. Since interactivity requires short response times, intractability must be circumvented in some way. To this end, we present a new method for compiling configuration problems that can be generalized to valued CSPs. Specifically, an automaton representing the set of solutions of the CSP is first computed off-line, then this data structure is exploited so as to ensure both consistency maintenance and computation of maximal consistent subsets of user's choices in an efficient way. [Copyright Elsevier]
- Published
- 2002
- Full Text
- View/download PDF
157. Co-learner presence and praise alters the effects of learner-generated explanation on learning from video lectures
- Author
-
Pi, Zhongling, Liu, Caixia, Meng, Qian, and Yang, Jiumin
- Published
- 2022
- Full Text
- View/download PDF
158. The nature of the laws of nature.
- Author
-
Maturana, Humberto
- Subjects
NATURE ,ECOLOGY ,LAW ,NATURAL law ,LANGUAGE & languages - Abstract
We human beings live in the explanations of our existence as living beings. These explanations of our existence include what we call the ‘laws of nature’. Though we name them laws, we cannot claim that they have an existence independent of us. We human beings do not exist in nature, nature arises with us, and we ourselves arise with it. In this dynamic co-arising, we explain ourselves and our circumstances while operating as observers. The laws of nature are abstractions of the regularities of our operation as living systems that we distinguish as we explain our experiences with the coherences of our experiences. Copyright © 2000 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
159. A discussion frame for explaining records that are based on algorithmic output
- Author
-
Andresen, Herbjørn
- Subjects
Ethics ,Philosophy of science ,Explanations ,Computer science ,Records management ,Control (management) ,Frame (networking) ,Communication & media studies ,Library and Information Sciences ,Data science ,Management Information Systems ,Predictions ,Information systems ,Filterings ,Conceptual frame ,Affect (linguistics) ,Algorithms - Abstract
Purpose: The purpose of this paper is to raise attention within the records management community about evolving demands for explanations that make it possible to understand the content of records, also when they reflect output from algorithms. Design/methodology/approach: The methodological approach is a conceptual analysis based in records management theory and the philosophy of science. The concepts that are developed are thereafter applied to "the right to an explanation" and "an algorithmic ethics approach," respectively, to further examine their viability. Findings: Different forms of explanations, ranging from "certain" explanations to predictions, as well as varying degrees of control over the input data to algorithms, affect the nature of the explanations and what kinds of records the explanations may reside in. Originality/value: This paper contributes to a conceptual frame for discussing where explanations to algorithms may be documented, within different kinds of records, emanating from different kinds of processes.
- Published
- 2019
160. Two machine-learning techniques for mining solutions of the ReleasePlanner™ decision support system.
- Author
-
Du, Gengshen and Ruhe, Guenther
- Subjects
- *
MACHINE learning , *COMPUTER software , *DECISION support systems , *PROBLEM solving , *DECISION making , *ROUGH sets - Abstract
Abstract: Decision support systems (DSSs) perform complex computations to provide suggestions regarding decision-making and problem solving. Quite often, the DSS solutions are not fully accepted by users because DSSs work as a black box so that the users cannot fully understand where the results came from and how they were derived. Explanations of the generated DSS solutions are expected to mitigate this situation. In this paper, two machine-learning techniques, called rough set analysis (RSA) and dependency network analysis (DNA), are proposed for mining DSS solutions. The mining results are provided to the users as explanations for those solutions. Two parts of research results are described. First, a framework applying RSA and DNA for generating explanations for DSS solutions is presented. This framework is generic and applicable to many other DSSs. Second, as a proof-of-concept, the applications of RSA and DNA techniques are demonstrated through a case study of mining patterns from input-output pairs of ReleasePlanner™, a specific DSS for product release planning. Our evaluation indicates that the explanations generated by RSA and DNA improve the overall user acceptance of results provided by this specific DSS. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
161. Explaining the cumulative propagator
- Author
-
Schutt, Andreas, Feydy, Thibaut, Stuckey, Peter J., and Wallace, Mark G.
- Published
- 2011
- Full Text
- View/download PDF
162. "That's (not) the output I expected!" On the role of end user expectations in creating explanations of AI systems.
- Author
-
Riveiro, Maria and Thill, Serge
- Subjects
- *
ARTIFICIAL intelligence , *EXPECTATION (Psychology) , *EXPLANATION - Abstract
Research in the social sciences has shown that expectations are an important factor in explanations as used between humans: rather than explaining the cause of an event per se, the explainer will often address another event that did not occur but that the explainee might have expected. For AI-powered systems, this finding suggests that explanation-generating systems may need to identify such end user expectations. In general, this is a challenging task, not least because users often keep them implicit; there is thus a need to investigate the importance of such an ability. In this paper, we report an empirical study with 181 participants who were shown outputs from a text classifier system along with an explanation of why the system chose a particular class for each text. Explanations were either factual, explaining why the system produced a certain output, or counterfactual, explaining why the system produced one output instead of another. Our main hypothesis was that explanations should align with end user expectations; that is, a factual explanation should be given when the system's output is in line with end user expectations, and a counterfactual explanation when it is not. We find that factual explanations are indeed appropriate when expectations and output match. When they do not, neither factual nor counterfactual explanations appear appropriate, although we do find indications that our counterfactual explanations contained at least some necessary elements. Overall, this suggests that it is important for systems that create explanations of AI systems to infer what outputs the end user expected so that factual explanations can be generated at the appropriate moments. At the same time, this information is, by itself, not sufficient to also create appropriate explanations when the output and user expectations do not match. This is somewhat surprising given investigations of explanations in the social sciences, and will need more scrutiny in future studies. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
163. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.
- Author
-
Langer, Markus, Oster, Daniel, Speith, Timo, Hermanns, Holger, Kästner, Lena, Schmidt, Eva, Sesing, Andreas, and Baum, Kevin
- Subjects
- *
ARTIFICIAL intelligence , *CONCEPTUAL models , *INTERDISCIPLINARY research , *INTERDISCIPLINARY approach to knowledge , *GOAL (Psychology) , *HUMAN-computer interaction - Abstract
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these "stakeholders' desiderata") in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of different disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
164. Another View of Conventions
- Author
-
Tilly, Charles
- Published
- 2010
- Full Text
- View/download PDF
165. Multiple Explanations in Darwinian Evolutionary Theory
- Author
-
Bock, Walter J.
- Published
- 2010
- Full Text
- View/download PDF
166. "صيغة )تفعّل( عند الصرفيين ومعانيها في القرآن الكريم".
- Author
-
عبد الوهاب, نور اسلام, and عطاء الله
- Subjects
ACOUSTICS ,INJUNCTIONS ,CONFEDERATION of states ,EXPLANATION ,VOCABULARY - Abstract
The article discusses "Seeghat Tafaʿʿul" (the verb form tafaʿʿal) and its meanings in the light of Qur'anic usage. It compares scholarly opinions on this form and argues that its occurrences extracted from the Qur'an can be linked to the meanings identified by the morphologists, which include: mutawaʿah (مطاوعة), takalluf (تكلف), ittikhadh (الاتخاذ), tajannub (التجنب), tadreej (التدريج), ighnaa (الإغناء), taktheer (التكثير), and muwafaqah (الموافقة), among others. The schools of thought that have dealt with these words are also taken into account. The article further considers the views of mufassirin such as Allama Zamakhshari, Allama Baidawi, Allama Abu al-Su'ud, and Ibn Ashur in order to give a sound conclusion to the study of "Seeghat Tafaʿʿul". [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
167. Explaining inferences in Bayesian networks
- Author
-
Yap, Ghim-Eng, Tan, Ah-Hwee, and Pang, Hwee-Hwa
- Published
- 2008
- Full Text
- View/download PDF
168. Reproduction of Experiments in Recommender Systems Evaluation Based on Explanations
- Author
-
Polatidis, Nikolaos, Pimenidis, Elias, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Pimenidis, Elias, editor, and Jayne, Chrisina, editor
- Published
- 2018
- Full Text
- View/download PDF
169. Evaluating Explanations by Cognitive Value
- Author
-
Chander, Ajay, Srinivasan, Ramya, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Holzinger, Andreas, editor, Kieseberg, Peter, editor, Tjoa, A Min, editor, and Weippl, Edgar, editor
- Published
- 2018
- Full Text
- View/download PDF
170. Towards Complementary Explanations Using Deep Neural Networks
- Author
-
Silva, Wilson, Fernandes, Kelwin, Cardoso, Maria J., Cardoso, Jaime S., Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Stoyanov, Danail, editor, Taylor, Zeike, editor, Kia, Seyed Mostafa, editor, Oguz, Ipek, editor, Reyes, Mauricio, editor, Martel, Anne, editor, Maier-Hein, Lena, editor, Marquand, Andre F., editor, Duchesnay, Edouard, editor, Löfstedt, Tommy, editor, Landman, Bennett, editor, Cardoso, M. Jorge, editor, Silva, Carlos A., editor, Pereira, Sergio, editor, and Meier, Raphael, editor
- Published
- 2018
- Full Text
- View/download PDF
171. Mathematically and practically-based explanations: individual preferences and sociomathematical norms
- Author
-
Levenson, Esther, Tirosh, Dina, and Tsamir, Pessia
- Published
- 2006
- Full Text
- View/download PDF
172. EXPLANATIONS FROM INTELLIGENT SYSTEMS: THEORETICAL FOUNDATIONS AND IMPLICATIONS FOR PRACTICE.
- Author
-
Gregor, Shirley and Benbasat, Izak
- Subjects
- *
DECISION support systems , *MANAGEMENT information systems , *INFORMATION resources management , *DECISION making , *MANAGEMENT science , *DECISION theory , *INTELLIGENT agents , *EXPERT systems , *COGNITIVE learning , *INFORMATION technology , *KNOWLEDGE management , *INFORMATION resources - Abstract
Information systems with an "intelligent" or "knowledge" component are now prevalent and include knowledge-based systems, decision support systems, intelligent agents, and knowledge management systems. These systems are in principle capable of explaining their reasoning or justifying their behavior. There appears to be a lack of understanding, however, of the benefits that can flow from explanation use, and how an explanation function should be constructed. Work with newer types of intelligent systems and help functions for everyday systems, such as word-processors, appears in many cases to neglect lessons learned in the past. This paper attempts to rectify this situation by drawing together the considerable body of work on the nature and use of explanations. Empirical studies, mainly with knowledge-based systems, are reviewed and linked to a sound theoretical base. The theoretical base combines a cognitive effort perspective, cognitive learning theory, and Toulmin's model of argumentation. Conclusions drawn from the review have both practical and theoretical significance. Explanations are important to users in a number of circumstances: when the user perceives an anomaly, when they want to learn, or when they need a specific piece of knowledge to participate properly in problem solving. Explanations, when suitably designed, have been shown to improve performance and learning and result in more positive user perceptions of a system. The design is important, however, because it appears that explanations will not be used if the user has to exert "too much" effort to get them. Explanations should be provided automatically if this can be done relatively unobtrusively, or by hypertext links, and should be context-specific rather than generic. Explanations that conform to Toulmin's model of argumentation, in that they provide adequate justification for the knowledge offered, should be more persuasive and lead to greater trust, agreement, satisfaction, and acceptance, both of the explanation and possibly also of the system as a whole. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
173. Can explanations improve applicant reactions towards gamified assessment methods?
- Author
-
Georgiou, Konstantina
- Subjects
EMPLOYEE selection ,PROCEDURAL justice ,ORGANIZATIONAL justice ,EXPLANATION ,DISTRIBUTIVE justice ,GAMIFICATION ,TERRORIST recruiting - Abstract
Gamification is increasingly being used by organizations in hiring decisions. However, the use of gamification in assessment has advanced quicker than corresponding research. One area in need of research is how applicants' perceptions of fairness are formed when gamified assessments are used in employee selection. Therefore, two studies were conducted to explore the impact of using gamified assessments on applicants' justice perceptions and the role of providing explanations to applicants. Adopting an experimental design to explore the organizational justice model in the context of gamified assessments, results indicated that individuals' perceptions of job relatedness are higher when a situational judgment test (SJT) is used rather than a gamified version, leading to more positive perceptions of procedural fairness and organizational attractiveness (Study 1). The mediating effects of the procedural rules of ease of faking and opportunity to perform were not supported. Subsequently, a 2 × 2 design was used (Study 2) to explore the role of providing explanations. It seems that the provision of explanations on the assessment's faking difficulty generates more positive reactions towards gamified SJTs than text‐based SJTs, in relation to ease of faking and procedural justice, and a spillover effect, invoking favorable reactions to the recruiting organization as well (Study 2). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
174. Social Accounts in Conflict Situations: Using Explanations to Manage Conflict.
- Author
-
Sitkin, Sim B. and Bies, Robert J.
- Subjects
CONFLICT management ,CONFLICT theory ,CRISIS management ,NEGOTIATION ,INTERPERSONAL conflict ,SOCIAL conflict - Abstract
Considerable attention has been given to different behavioral strategies of conflict management (e.g., avoidance, compromise, collaboration). However, conflict theory and research has overlooked a simple, but effective strategy for managing conflict: the use of social accounts or explanations. In this paper, we review the literature on the use of social accounts in conflict situations and find it supports the argument that social accounts can be an effective conflict-management strategy. Based on this analysis, we propose several promising directions for future theory development and research concerning the role of social accounts in conflict situations. In addition, we identify tradeoffs and dilemmas created when social accounts are used to manage conflict. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
175. Same-Decision Probability: threshold robustness and application to explanation
- Author
-
Renooij, S., Kratochvíl, Václav, and Studený, Milan
- Subjects
threshold-based decisions ,explanations ,MathematicsofComputing_GENERAL ,Bayesian network classifiers ,robustness ,Same-decision probability - Abstract
The same-decision probability (SDP) is a confidence measure for threshold-based decisions. In this paper we detail various properties of the SDP that allow for studying its robustness to changes in the threshold value upon which a decision is based. In addition to expressing confidence in a decision, the SDP has proven to be a useful tool in other contexts, such as that of information gathering. We demonstrate that the properties of the SDP as established in this paper allow for its application in the context of explaining Bayesian network classifiers as well.
- Published
- 2018
177. Trends and Trajectories for Explainable, Accountable and Intelligible Systems
- Author
-
Abdul, Ashraf, Wang, Danding, Lim, Brian Y., Kankanhalli, Mohan S., Vermeulen, Jo, Mandryk, Regan, and Hancock, Mark
- Subjects
Topic model ,Explanations ,Interpretable machine learning ,Learnability ,Computer science ,Software engineering ,Data science ,Software ,Explainable artificial intelligence ,Intelligibility ,Accountability ,Human factors - Abstract
Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal.
- Published
- 2018
178. 'What Does My Classifier Learn?' A Visual Approach to Understanding Natural Language Text Classifiers
- Author
-
Winkler, Jonas Paul, Vogelsang, Andreas, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Frasincar, Flavius, editor, Ittoo, Ashwin, editor, Nguyen, Le Minh, editor, and Métais, Elisabeth, editor
- Published
- 2017
- Full Text
- View/download PDF
179. Lexical meaning in Albanian language textbooks of pre-university education.
- Author
-
Metani, Idriz and Dano, Sidita (Hoxhiq)
- Subjects
TEXTBOOKS ,LANGUAGE & languages ,SEMANTICS - Abstract
Lexical meaning, as an important and essential aspect of the word, has long attracted the attention of scholars, who, in trying to know its nature, have sometimes mystified it by seeing it as an inexplicable thing and sometimes simplified it, equating it with the function of the word, with the concept, even with the realia itself that it signifies. Each meaning is explained separately, but keeping in mind the other meanings of the word, when it is polysemantic. The purpose of explaining meaning is to connect us with the realia, or something of the reality, that the word signifies in that sense, to identify and distinguish it intact from other realities similar to it. In addition to the common semantic components, some differentiating semantic components are given that serve to distinguish the meanings of the respective words. By thoroughly analyzing the corpus of words unknown to students present in today's Albanian language textbooks of pre-university education, our article aims to provide an almost complete picture of the typology of explaining language units unknown to students, as it appears in the textbooks in question, while giving some methodological recommendations to improve the work with students' vocabulary in the future. In general, we can say from the beginning that textbook compilers, in order to explain language units unknown to students, have used at least four types of explanations: descriptive explanation, definition explanation, paraphrasing explanation and synonymous explanation, which will be discussed in more detail in this article. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
180. DIFF: a relational interface for large-scale data explanation.
- Author
-
Abuzaid, Firas, Kraft, Peter, Suri, Sahaana, Gan, Edward, Xu, Eric, Shenoy, Atul, Ananthanarayan, Asvin, Sheu, John, Meijer, Erik, Wu, Xi, Naughton, Jeff, Bailis, Peter, and Zaharia, Matei
- Abstract
A range of explanation engines assist data analysts by performing feature selection over increasingly high-volume and high-dimensional data, grouping and highlighting commonalities among data points. While useful in diverse tasks such as user behavior analytics, operational event processing, and root-cause analysis, today's explanation engines are designed as stand-alone data processing tools that do not interoperate with traditional, SQL-based analytics workflows; this limits the applicability and extensibility of these engines. In response, we propose the DIFF operator, a relational aggregation operator that unifies the core functionality of these engines with declarative relational query processing. We implement both single-node and distributed versions of the DIFF operator in MB SQL, an extension of MacroBase, and demonstrate how DIFF can provide the same semantics as existing explanation engines while capturing a broad set of production use cases in industry, including at Microsoft and Facebook. Additionally, we illustrate how this declarative approach to data explanation enables new logical and physical query optimizations. We evaluate these optimizations on several real-world production applications and find that DIFF in MB SQL can outperform state-of-the-art engines by up to an order of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
181. Explaining Review-Based Recommendations: Effects of Profile Transparency, Presentation Style and User Characteristics.
- Author
-
Hernandez-Bocanegra, Diana C. and Ziegler, Jürgen
- Subjects
RECOMMENDER systems ,INDIVIDUAL differences ,VISUALIZATION ,SENSORY perception - Abstract
Providing explanations based on user reviews in recommender systems (RS) may increase users' perception of transparency or effectiveness. However, little is known about how these explanations should be presented to users, or which types of user interface components should be included in explanations, in order to increase both their comprehensibility and acceptance. To investigate such matters, we conducted two experiments and evaluated the differences in users' perception when providing information about their own profiles, in addition to a summarized view on the opinions of other customers about the recommended hotel. Additionally, we also aimed to test the effect of different display styles (bar chart and table) on the perception of review-based explanations for recommended hotels, as well as how useful users find different explanatory interface components. Our results suggest that the perception of an RS and its explanations given profile transparency and different presentation styles, may vary depending on individual differences on user characteristics, such as decision-making styles, social awareness, or visualization familiarity. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
182. Visualizing image content to explain novel image discovery.
- Author
-
Lee, Jake H. and Wagstaff, Kiri L.
- Subjects
CONVOLUTIONAL neural networks ,BIG data - Abstract
The initial analysis of any large data set can be divided into two phases: (1) the identification of common trends or patterns and (2) the identification of anomalies or outliers that deviate from those trends. We focus on the goal of detecting observations with novel content, which can alert us to artifacts in the data set or, potentially, the discovery of previously unknown phenomena. To aid in interpreting and diagnosing the novel aspect of these selected observations, we recommend the use of novelty detection methods that generate explanations. In the context of large image data sets, these explanations should highlight what aspect of a given image is new (color, shape, texture, content) in a human-comprehensible form. We propose DEMUD-VIS, the first method for providing visual explanations of novel image content by employing a convolutional neural network (CNN) to extract image features, a method that uses reconstruction error to detect novel content, and an up-convolutional network to convert CNN feature representations back into image space. We demonstrate this approach on diverse images from ImageNet, freshwater streams, and the surface of Mars. Finally, we evaluate the utility of the visual explanations with a user study. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
183. “So, Why Were You Late Again?”: Social Account’s Influence on the Behavioral Transgression of Being Late to a Meeting
- Author
-
Allen, Joseph A., Eden, Emilee, Castro, Katherine C., Smith, McKaylee, and Mroz, Joseph
- Subjects
explanations ,excuses ,meetings ,interpersonal relationships - Abstract
People often offer an excuse or an apology after they do something wrong in an attempt to mitigate any potential negative consequences. In this paper, we examine how individuals employ social accounts when explaining their interpersonal transgression of meeting lateness to others in actual work settings. We examined the different combinations of social accounts and the social outcomes (forgiveness, helping behaviors, and intentions to continue interaction) of being late to a meeting. Across two studies using complementary experimental and survey methods, we found that a majority of late arrivers’ explanations included remorse and that including remorse significantly influences helping behaviors. Furthermore, we found no interaction between excuses and offering remorse. Implications of these findings and future directions are discussed.
- Published
- 2023
- Full Text
- View/download PDF
184. Prompting Children's Belief Revision About Balance Through Primary and Secondary Sources of Evidence.
- Author
-
Larsen, Nicole E., Venkadasalam, Vaunam P., and Ganea, Patricia A.
- Subjects
CENTROID ,PICTURE books ,EVIDENCE ,REVISIONS - Abstract
Prior evidence has shown that children's understanding of balance proceeds through stages. Children go from a stage where they lack a consistent theory (No Theory), to becoming Center Theorists at around age 6 (believing that all objects balance in their geometric center), to Mass Theorists at around age 8, when they begin to consider the distribution of objects' mass. In this study we adapted prior testing paradigms to examine 5-year-olds' understanding of balance and compared children's learning about balance from evidence presented through primary sources (a guided activity) or secondary sources (picture books). Most of the research on young children's understanding of balance has been conducted using a single object, weighted either proportionally (symmetrical object) or disproportionally (asymmetrical object). In this study, instead of using a single object, 5-year-olds (N = 102) were shown 4 pairs of objects, two with the same weight and two with different weights. Children were told to place the objects on a beam where they thought they would balance. We found evidence for an intermediate level of understanding. Transition Theorists represent children who have two distinct theories, one for balancing same weight objects, and one for balancing different weight objects, but one of these theories is incorrect. Following the assessment of children's understanding, we compared their learning about balance from evidence that was either presented through primary sources (a guided activity) or secondary sources (picture books). Children learned equally well from both sources of evidence. Findings are discussed in terms of theoretical and practical implications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
185. Leveraging Arguments in User Reviews for Generating and Explaining Recommendations.
- Author
-
Donkers, Tim and Ziegler, Jürgen
- Abstract
Review texts constitute a valuable source for making system-generated recommendations both more accurate and more transparent. Reviews typically contain statements providing argumentative support for a given item rating that can be exploited to explain the recommended items in a personalized manner. We propose a novel method called Aspect-based Transparent Memories (ATM) to model user preferences with respect to relevant aspects and compare them to item properties to predict ratings, and, by the same mechanism, explain why an item is recommended. The ATM architecture consists of two neural memories that can be viewed as arrays of slots for storing information about users and items. The first memory component encodes representations of sentences composed by the target user while the second holds an equivalent representation for the target item based on statements of other users. An offline evaluation was performed with three datasets, showing advantages over two baselines, the well-established Matrix Factorization technique and a recent competitive representative of neural attentional recommender techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
186. Embedding Scientific Explanations Into Storybooks Impacts Children's Scientific Discourse and Learning.
- Author
-
Leech, Kathryn A., Haber, Amanda S., Jalkh, Youmna, and Corriveau, Kathleen H.
- Subjects
DYADIC communication ,ELECTRIC circuits ,DISCOURSE ,EXPLANATION ,BOOKS & reading ,COMPREHENSION - Abstract
Children's understanding of unobservable scientific entities largely depends on testimony from others, especially through parental explanations that highlight the mechanism underlying a scientific entity. Mechanistic explanations are particularly helpful in promoting children's conceptual understanding, yet they are relatively rare in parent–child conversations. The current study aimed to increase parent–child use of mechanistic conversation by modeling this language in a storybook about the mechanism of electrical circuits. We also examined whether an increase in mechanistic conversation was associated with science learning outcomes, measured at both the dyadic- and child-level. In the current study, parents and their 4- to 5-year-old children (N = 60) were randomly assigned to read a book containing mechanistic explanations (n = 32) or one containing non-mechanistic explanations (n = 28). After reading the book together, parent–child joint understanding of electricity's mechanism was tested by asking the dyad to assemble electrical components of a circuit toy so that a light would turn on. Finally, child science learning outcomes were examined by asking children to assemble a novel circuit toy and answer comprehension questions to gauge their understanding of electricity's mechanism. Results indicate that dyads who read storybooks containing mechanistic explanations were (1) more successful at completing the circuit (putting the pieces together to make the light turn on) and (2) used more mechanistic language than dyads assigned to the non-mechanistic condition. Children in the mechanistic condition also had better learning outcomes, but only if they engaged in more mechanistic discourse with their parent. We discuss these results using a social interactionist framework to highlight the role of input and interaction for learning. We also highlight how these results implicate everyday routines such as book reading in supporting children's scientific discourse and understanding. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
187. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.
- Author
-
Selvaraju, Ramprasaath R., Cogswell, Michael, Das, Abhishek, Vedantam, Ramakrishna, Parikh, Devi, and Batra, Dhruv
- Subjects
ARTIFICIAL neural networks ,PATTERN recognition systems ,REINFORCEMENT learning ,FAILURE mode & effects analysis ,CONVOLUTIONAL neural networks ,SUPERIOR colliculus - Abstract
We propose a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say 'dog' in a classification network or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are robust to adversarial perturbations, (d) are more faithful to the underlying model, and (e) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show that even non-attention based models learn to localize discriminative regions of the input image. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names (Bau et al. in Computer vision and pattern recognition, 2017) to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a 'stronger' deep network from a 'weaker' one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo on CloudCV (Agrawal et al., in: Mobile cloud visual media computing, pp 265–290. Springer, 2015) (http://gradcam.cloudcv.org) and a video at http://youtu.be/COjUB9Izk6E. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
188. Explanations for medically unexplained symptoms: a qualitative study on GPs in daily practice consultations.
- Author
-
Terpstra, Tom, Gol, Janna M., Lucassen, Peter L. B. J., Houwen, Juul, van Dulmen, Sandra, Berger, Marjolein Y., Rosmalen, Judith G. M., and olde Hartman, Tim C.
- Subjects
MEDICALLY unexplained symptoms ,QUALITATIVE research ,GENERAL practitioners ,EXPLANATION ,THEMATIC analysis - Abstract
Background: General practice is the centre of care for patients with medically unexplained symptoms (MUS). Providing explanations for MUS, i.e. making sense of symptoms, is considered to be an important part of care for MUS patients. However, little is known about how general practitioners (GPs) do this in daily practice. Objective: This study aimed to explore how GPs explain MUS to their patients during daily general practice consultations. Methods: A thematic content analysis was performed of how GPs explained MUS to their patients based on 39 general practice consultations involving patients with MUS. Results: GPs provided explanations in nearly all consultations with MUS patients. Seven categories of explanation components emerged from the data: defining symptoms, stating causality, mentioning contributing factors, describing mechanisms, excluding explanations, discussing the severity of symptoms and normalizing symptoms. No pattern of how GPs constructed explanations with the various categories was observed. In general, explanations were communicated as a possibility and in a patient-specific way; however, they were not very detailed. Conclusion: Although explanations for MUS are provided in most MUS consultations, there seems to be room for improving the explanations given in these consultations. Further studies on the effectiveness of explanations and on the interaction between patients and GPs in constructing these explanations are required in order to make MUS explanations more suitable in daily primary care practice. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
189. Building relatedness explanations from knowledge graphs.
- Author
-
Kejriwal, Mayank, Lopez, Vanessa, Sequeda, Juan F., and Pirrò, Giuseppe
- Subjects
EXPLANATION ,ALGORITHMS ,MACHINERY - Abstract
Knowledge graphs (KGs) are a key ingredient to complement search results, discover entities and their relations and support several knowledge discovery tasks. We face the problem of building relatedness explanations, that is, graphs that can explain how a pair of entities is related in a KG. Explanations can be used in a variety of tasks; from exploratory search to query answering. We formalize the notion of explanation and present two algorithms. The first, E4D (Explanations from Data), assembles explanations starting from all paths interlinking the source and target entity in the data. The second algorithm E4S (Explanations from Schema) builds explanations focused on a specific relatedness perspective expressed by providing a predicate. E4S first generates candidate explanation patterns at the level of schema; then, it assembles explanations by proceeding to their verification in the data. Given a set of paths, found by E4D or E4S , we describe different criteria to build explanations based on information-theory, diversity, and their combination. As a concrete use-case of relatedness explanations, we introduce relatedness-based KG querying, which revisits the query-by-example paradigm from the perspective of relatedness explanations. We implemented all the machinery in the RECAP tool, which is based on RDF and SPARQL. We discuss an evaluation of the explanation building algorithms and a comparison of RECAP with related systems on real-world data. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
190. The Development of Narrative Discourse in French by 5 to 10 Years Old Children: Some Insights from a Conversational Interaction Method
- Author
-
Veneziano, Edy, Joshi, R. Malatesha, Series editor, Perera, Joan, editor, Aparici, Melina, editor, Rosado, Elisa, editor, and Salas, Naymé, editor
- Published
- 2016
- Full Text
- View/download PDF
191. What am I not Seeing? An Interactive Approach to Social Content Discovery in Microblogs
- Author
-
Kang, Byungkyu, Tintarev, Nava, Höllerer, Tobias, O’Donovan, John, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Spiro, Emma, editor, and Ahn, Yong-Yeol, editor
- Published
- 2016
- Full Text
- View/download PDF
192. IT TAKES TWO TO TANGO: CHOREOGRAPHING THE INTERACTIONS BETWEEN HUMAN AND ARTIFICIAL INTELLIGENCE.
- Author
-
Te'eni, Dov, Avital, Michel, Hevner, Alan, Schoop, Mareike, and Schwartz, David G.
- Subjects
ARTIFICIAL intelligence ,HUMAN intelligence (Intelligence service) ,AUTOMATION ,DECISION support systems ,COMPUTER algorithms - Abstract
Academics and policymakers alike are concerned with the potential impact and repercussions of artificial intelligence on our lives and the world we live in. In the light of the inherent chasm between human intelligence and artificial intelligence logics, the inevitable need to integrate human and artificial intelligence into symbiotic forms is particularly challenging to Information Systems researchers and designers. This panel aims to explore meaningful research directions on human-artificial intelligence, which could lead to a better understanding of its impact and better designs. Building on their expertise in design, HCI, AI, and generative systems, the panelists will explore the following challenges: * What is unique in the combination of human and artificial intelligence compared with systems that are solely based on one or the other? * Can we and should we insist on a similar range of considerations when studying and designing systems based on human-augmented artificial intelligence as we do when studying and designing systems based solely on human intelligence? * Can performance improvements expected of human-artificial intelligence, compared with AI, be effectively studied independently of considerations such as control and trust? The panel will seek to evoke provocative ideas and generative thinking that can initiate research on the relationship between human and artificial intelligence in the IS discipline and perhaps also contribute to the general discourse thereof. [ABSTRACT FROM AUTHOR]
- Published
- 2019
193. Counterfactual Explanations for Data-Driven Decisions.
- Author
-
Fernandez, Carlos, Provost, Foster, and Xintian Han
- Subjects
PREDICTION models ,DECISION making ,DATA structures ,ELECTRONIC data processing ,ELECTRONIC file management - Abstract
Users' lack of understanding of systems that use predictive models to make automated decisions is one of the main barriers to their adoption. We adopt the increasingly accepted view of a counterfactual explanation for a system decision: a set of the system inputs that is causal (meaning that removing them changes the decision) and irreducible (meaning that removing any subset of the inputs in the explanation does not change the decision). We generalize previous work on counterfactual explanations in three ways: we explain system decisions rather than model predictions; we do not enforce any specific method for removing inputs; and our explanations can incorporate inputs with arbitrary data structures. We also show how model-agnostic algorithms can be tweaked to find the most useful explanations depending on the context. Finally, we showcase our approach using a real data set to illustrate its advantages over other explanation methods when the goal is to understand system decisions better. [ABSTRACT FROM AUTHOR]
- Published
- 2019
194. A process framework for inducing and explaining Datalog theories
- Author
-
Gromowski, Mark, Siebers, Michael, and Schmid, Ute
- Published
- 2020
- Full Text
- View/download PDF
195. Recommendation Agents for Electronic Commerce: Effects of Explanation Facilities on Trusting Beliefs.
- Author
-
Wang, Weiquan and Benbasat, Izak
- Subjects
ELECTRONIC commerce ,CONSUMER behavior ,INTERNET marketing ,INTERNET sales ,INTERNET users ,TRUST ,ECONOMICS - Abstract
We empirically test the effects of explanation facilities on consumers' initial trusting beliefs concerning online recommendation agents (RAs). RAs provide online shopping advice based on user-specified needs and preferences. The characteristics of RAs that may hamper consumers' trust building in the RAs are identified, and the provision of explanation facilities is proposed as a knowledge-based approach to enhance consumers' trusting beliefs by dealing with these obstacles. This study examines the effects of three types of explanations about an RA and its use--how, why, and trade-off explanations--on consumers' trusting beliefs in an RA's competence, benevolence, and integrity. An RA was built as the experimental platform and a laboratory experiment was conducted. The results confirm the important role of explanation facilities in enhancing consumers' initial trusting beliefs and indicate that consumers' use of different types of explanations enhances different trusting beliefs: the use of how explanations increases their competence and benevolence beliefs, the use of why explanations increases their benevolence beliefs, and the use of trade-off explanations increases their integrity beliefs. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
196. The Use of Explanations in Knowledge-Based Systems: Cognitive Perspective and a Process-Tracing Analysis.
- Author
-
Mao, Ji-Ye and Benbasat, Izak
- Subjects
EXPERT systems ,ARTIFICIAL intelligence ,KNOWLEDGE management ,DECISION making ,INFORMATION resources management ,KNOWLEDGE base - Abstract
This exploratory research investigates the nature of explanation use and factors that influence it during users' interaction with a knowledge-based system (KBS) for decision-making. It draws upon several cognitive perspectives to help understand when, why, and how explanations are used. A verbal protocol analysis was conducted based on a laboratory experiment involving a KBS for financial analysis. Major categories of explanation use were identified, and accounted for with relevant cognitive perspectives. Results show that explanations were requested to deal with comprehension difficulties caused by various types of perceived anomalies in KBS output. There were qualitative and quantitative differences in the nature and extent of explanation use between novices and experienced professionals. These results offer new insights to why explanations are useful and important, what factors influence explanation use, and what information should be included in explanations. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
197. Disentangling Fairness Perceptions in Algorithmic Decision-Making: the Effects of Explanations, Human Oversight, and Contestability
- Author
-
Yurrita, Mireia, Draws, Tim, Balayn, Agathe, Murray-Rust, Dave, Tintarev, Nava, and Bozzon, Alessandro
- Subjects
contestability ,algorithmic decision-making ,explanations ,human oversight ,fairness perceptions - Abstract
Recent research claims that information cues and system attributes of algorithmic decision-making processes affect decision subjects’ fairness perceptions. However, little is still known about how these factors interact. This paper presents a user study (N = 267) investigating the individual and combined effects of explanations, human oversight, and contestability on informational and procedural fairness perceptions for high- and low-stakes decisions in a loan approval scenario. We find that explanations and contestability contribute to informational and procedural fairness perceptions, respectively, but we find no evidence for an effect of human oversight. Our results further show that both informational and procedural fairness perceptions contribute positively to overall fairness perceptions but we do not find an interaction effect between them. A qualitative analysis exposes tensions between information overload and understanding, human involvement and timely decision-making, and accounting for personal circumstances while maintaining procedural consistency. Our results have important design implications for algorithmic decision-making processes that meet decision subjects’ standards of justice.
- Published
- 2023
198. Strengthening the Rationale of Recommendations Through a Hybrid Explanations Building Framework
- Author
-
Charissiadis, Andreas, Karacapilidis, Nikos, Howlett, Robert J., Series editor, Jain, Lakhmi C., Series editor, and Neves-Silva, Rui, editor
- Published
- 2015
- Full Text
- View/download PDF
199. A Rule-Based Trust Negotiation System.
- Author
-
Bonatti, Piero, De Coi, J.L., Olmedilla, Daniel, and Sauro, Luigi
- Abstract
Open distributed environments, such as the World Wide Web, facilitate information sharing but provide limited support to the protection of sensitive information and resources. Trust negotiation (TN) frameworks have been proposed as a better solution for open environments, in which parties may get in touch and interact without being previously known to each other. In this paper, we illustrate Protune, a rule-based TN system. By describing Protune, we will illustrate the advantages that arise from an advanced rule-based approach in terms of deployment efforts, user friendliness, communication efficiency, and interoperability. The generality and technological feasibility of Protune's approach are assessed through an extensive analysis and experimental evaluations. [ABSTRACT FROM PUBLISHER]
- Published
- 2010
- Full Text
- View/download PDF
200. Explainability with Association Rule Learning for Weather Forecast
- Author
-
Coulibaly, Lassana, Kamsu-Foguem, Bernard, and Tangara, Fana
- Published
- 2021
- Full Text
- View/download PDF