391 results
Search Results
102. Making Recommendations More Effective Through Framings: Impacts of User- Versus Item-Based Framings on Recommendation Click-Throughs.
- Author
-
Gai, Phyliss Jia and Klesse, Anne-Kathrin
- Subjects
RECOMMENDER systems ,FRAMES (Social sciences) ,PRODUCT information management ,PRODUCT orientation ,EXPLANATION ,USER-generated content - Abstract
Companies frequently offer customers product recommendations derived from various algorithms. This research explores how companies should frame the methods they use to derive their recommendations, in an attempt to maximize click-through rates. Two common framings—user-based and item-based—might describe the same recommendation. User-based framing emphasizes the similarity between customers (e.g., "People who like this also like..."); item-based framing instead emphasizes similarities between products (e.g., "Similar to this item"). Six experiments, including two field experiments within a mobile app, show that framing the same recommendation as user-based (vs. item-based) can increase recommendation click-through rates. The findings suggest that user-based (vs. item-based) framing informs customers that the recommendation is based on not just product matching but also taste matching with other customers. Three theoretically derived and practically relevant boundary conditions related to the recommendation recipient, the products, and other users also offer practical guidance for managers regarding how to leverage recommendation framings to increase recommendation click-throughs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
103. Explaining any black box model using real data
- Author
-
Anton Björklund, Andreas Henelius, Emilia Oikarinen, Kimmo Kallonen, and Kai Puolamäki
- Subjects
XAI (explainable artificial intelligence) ,model-agnostic explanation ,interpretable machine learning ,local explanation ,explanations ,interpretability ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In recent years the use of complex machine learning has increased drastically. These complex black box models trade interpretability for accuracy. The lack of interpretability is troubling for, e.g., socially sensitive, safety-critical, or knowledge extraction applications. In this paper, we propose a new explanation method, SLISE, for interpreting predictions from black box models. SLISE can be used with any black box model (model-agnostic), does not require any modifications to the black box model (post-hoc), and explains individual predictions (local). We evaluate our method using real-world datasets and compare it against other model-agnostic, local explanation methods. Our approach solves shortcomings in other related explanation methods by only using existing data instead of sampling new, artificial data. The method also generates more generalizable explanations and is usable without modification across various data domains.
- Published
- 2023
- Full Text
- View/download PDF
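SLISE's central idea, fitting a linear model to the largest subset of *existing* data points while anchoring the model at the instance being explained, can be illustrated with a much-simplified sketch. This is a toy random-restart inlier search in NumPy, not the paper's actual graduated optimization, and the function name and parameters are illustrative:

```python
import numpy as np

def local_linear_explanation(X, y, x0, y0, epsilon=0.1, restarts=200, seed=0):
    """Find a linear model that passes exactly through the explained point
    (x0, y0) and agrees with as many existing data points as possible
    within tolerance epsilon (no artificial samples are generated)."""
    Xc, yc = X - x0, y - y0          # centering makes the model exact at (x0, y0)
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best_w, best_inliers = np.zeros(d), 0
    for _ in range(restarts):
        idx = rng.choice(len(X), size=d, replace=False)
        w, *_ = np.linalg.lstsq(Xc[idx], yc[idx], rcond=None)
        mask = np.abs(Xc @ w - yc) <= epsilon
        if mask.sum() > best_inliers:
            # refit on the full inlier set found by this candidate
            w, *_ = np.linalg.lstsq(Xc[mask], yc[mask], rcond=None)
            mask = np.abs(Xc @ w - yc) <= epsilon
            best_w, best_inliers = w, int(mask.sum())
    return best_w, best_inliers      # coefficients act as local feature importances
```

The returned coefficients play the role of a local explanation: features with large weights drive the black-box prediction near x0, and the inlier count indicates how far the explanation generalizes in the existing data.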
104. Knowledge Graph-Based Explainable Artificial Intelligence for Business Process Analysis.
- Author
-
Füßl, Anne, Nissen, Volker, and Heringklee, Stefan Horst
- Subjects
ARTIFICIAL intelligence ,KNOWLEDGE graphs ,BUSINESS intelligence ,MACHINE learning ,DECISION trees - Abstract
For critical operational decisions (e.g. consulting services), explanations and interpretable results of powerful Artificial Intelligence (AI) systems are becoming increasingly important. Knowledge graphs possess a semantic model that integrates heterogeneous information sources and represents knowledge elements in a machine-readable form. The integration of knowledge graphs and machine learning methods represents a new form of hybrid intelligent systems that benefit from each other's strengths. Our research aims at an explainable system with a specific knowledge graph architecture that generates human-understandable results even when no suitable domain experts are available. Against this background, we focus on the interpretability of a knowledge graph-based explainable AI approach for business process analysis. We design a framework of interpretation, show how interpretable models are generated in a single case study, and evaluate the applicability of our approach through expert interviews. Result paths on weaknesses and improvement measures related to a business process are used to produce stochastic decision trees, which improve the interpretability of results. This can lead to interesting consulting self-services for clients or be applied as a device for accelerating classical consulting projects. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
105. Children's developing understanding of economic inequality and their place within it.
- Author
-
Dickinson, Julie, Leman, Patrick J., and Easterbrook, Matthew J.
- Subjects
SOCIALIZATION ,ETHICS ,AGE distribution ,COGNITION ,SOCIOECONOMIC factors ,PARENTS ,CHILDREN - Abstract
Income inequality is growing in many parts of the world and, for the poorest children in a society, is associated with multiple, negative, developmental outcomes. This review of the research literature considers how children's and adolescents' understanding of economic inequality changes with age. It highlights shifts in conceptual understanding (from 'having and not having', to social structural and moral explanations), moral reasoning and the impact of the agents of socialization from parents to the media and cultural norms and discourses. It also examines how social processes affect judgements and the importance of an emerging sense of self in relation to questions of economic inequality. Finally, the review covers methodological considerations and suggests pathways for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
106. Assessing the quality of scientific explanations with networks.
- Author
-
Wagner, S. and Priemer, B.
- Subjects
SCHOOL children ,SCIENCE students ,SCIENCE education ,SCIENCE teachers ,ELEMENTARY schools ,STUDENT development - Abstract
This article introduces a network approach to describe the quality of written scientific explanations. Existing approaches evaluate explanations mainly on the level of sentences or as a whole but not on the elementary level of single terms. Moreover, evaluation of explanations is often based on highly inferential scoring techniques. We addressed both issues by converting the elementary structure of terms in explanations into networks (so-called element maps) and analysing these with mathematical measures, thus extracting the size and complexity of an explanation, adequacy, coherence, and use of key terms. A total of 65 explanations of experts and students were analysed quantitatively and qualitatively. Differences between expert and student maps' measures can be interpreted meaningfully against the background of existing research findings. Thus, we argue that our approach using network analysis provides a precise, fine-grained, and low-inferential tool that complements and refines existing approaches. Element maps have the potential to improve teaching and research by precisely revealing the strengths and weaknesses of explanations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
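The element-map idea above, treating the single terms of a written explanation as nodes and their co-occurrence as links, then reading off size and complexity from graph measures, can be sketched in a few lines. This is a toy construction that assumes key-term extraction has already been done; the paper's actual maps and measures are more elaborate:

```python
from itertools import combinations

def element_map(sentences):
    """Build a toy 'element map': each sentence is a list of key terms;
    terms become nodes, and terms co-occurring in a sentence become links."""
    nodes, links = set(), set()
    for terms in sentences:
        nodes.update(terms)
        links.update(frozenset(p) for p in combinations(sorted(set(terms)), 2))
    return nodes, links

def map_measures(nodes, links):
    """Simple graph measures as proxies for an explanation's size and complexity."""
    n, m = len(nodes), len(links)
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    return {"size": n, "links": m, "density": round(density, 3)}
```

Comparing the measures of an expert's and a student's map then quantifies differences such as missing key terms or sparser linkage, in the low-inference spirit the abstract describes.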
107. Declarative Approaches to Counterfactual Explanations for Classification.
- Author
-
BERTOSSI, LEOPOLDO
- Subjects
COUNTERFACTUALS (Logic) ,DISTRIBUTION (Probability theory) ,EXPLANATION ,CLASSIFICATION ,LOGIC - Abstract
We propose answer-set programs that specify and compute counterfactual interventions on entities that are input to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models, and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is on the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as maximum responsibility feature values in the original entity. We also extend the programs to bring into the picture semantic or domain knowledge. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP-solver, and run with it, are shown. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
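The notion of a best counterfactual, a minimum-size set of feature changes that flips the classifier's output, from which explanation scores are read off, can be illustrated without ASP by brute force over binary features. This is a hypothetical Python stand-in for the paper's DLV programs, and the simple 1/k score used here only approximates the causality-based responsibility scores:

```python
from itertools import combinations

def best_counterfactuals(x, predict):
    """Search for minimum-size sets of binary-feature flips that change
    the black-box prediction; assign each feature involved in a minimal
    flip-set a simple score of 1/k, where k is the minimal set size."""
    n, original = len(x), predict(x)
    for k in range(1, n + 1):
        hits = [S for S in combinations(range(n), k)
                if predict([1 - v if i in S else v for i, v in enumerate(x)]) != original]
        if hits:                       # the smallest k wins: best counterfactuals
            responsibility = {i: 1.0 / k for S in hits for i in S}
            return hits, responsibility
    return [], {}
```

A declarative encoding wins over this brute force precisely because the ASP solver prunes the exponential search and can incorporate domain constraints on which interventions are admissible.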
108. Towards Scenario-Based and Question-Driven Explanations in Autonomous Vehicles
- Author
-
Zhang, Yiwen, Guo, Weiwei, Chi, Cheng, Hou, Lu, Sun, Xiaohua, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Krömker, Heidi, editor
- Published
- 2022
- Full Text
- View/download PDF
109. MoReXAI - A Model to Reason About the eXplanation Design in AI Systems
- Author
-
de Oliveira Carvalho, Niltemberg, Libório Sampaio, Andréia, de Vasconcelos, Davi Romero, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Degen, Helmut, editor, and Ntoa, Stavroula, editor
- Published
- 2022
- Full Text
- View/download PDF
110. Explanation Plug-In for Stream-Based Collaborative Filtering
- Author
-
Leal, Fátima, García-Méndez, Silvia, Malheiro, Benedita, Burguillo, Juan C., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Rocha, Alvaro, editor, Adeli, Hojjat, editor, Dzemyda, Gintautas, editor, and Moreira, Fernando, editor
- Published
- 2022
- Full Text
- View/download PDF
111. Understanding Mechanistic Explanation as A Strategy of Analytical Sociology
- Author
-
Olabode Awarun and Danjuma Sheidu Asaka
- Subjects
social reality ,analysis ,Analytical sociology ,Social reality ,Social Sciences ,Social environment ,Context (language use) ,social mechanism ,Epistemology ,Argument ,explanations ,Mainstream ,Sociology ,strategy ,Content (Freudian dream analysis) - Abstract
Although Analytical Sociology is not often used in mainstream Sociology, its history is traceable to the classical works of scholars such as Emile Durkheim, Max Weber, Alexis de Tocqueville as well as contemporary sociological thinkers like Talcott Parsons and Robert Merton, among others. This paper provides a contemporary argument for the application of mechanistic explanation in the overall understanding of Analytical Sociology using relevant and practical examples. In the course of this, attention has been paid to the concept of explanation and its various types in a sociological discourse. This paper therefore argues that social reality can significantly be understood only when explanations are systematic and detailed in content and context. The conclusion is that analytical sociology has the capacity to explain the actions of social actors within the social environment beyond some social doubts, even though not all situations can be sufficiently explained with the strategy.
- Published
- 2020
112. Quod erat demonstrandum? - Towards a typology of the concept of explanation for the design of explainable AI.
- Author
-
Cabitza, Federico, Campagner, Andrea, Malgieri, Gianclaudio, Natali, Chiara, Schneeberger, David, Stoeger, Karl, and Holzinger, Andreas
- Subjects
ARTIFICIAL intelligence ,EXPLANATION ,MACHINE learning - Abstract
In this paper, we present a fundamental framework for defining different types of explanations of AI systems and the criteria for evaluating their quality. Starting from a structural view of how explanations can be constructed, i.e., in terms of an explanandum (what needs to be explained), multiple explanantia (explanations, clues, or parts of information that explain), and a relationship linking explanandum and explanantia, we propose an explanandum-based typology and point to other possible typologies based on how explanantia are presented and how they relate to explananda. We also highlight two broad and complementary perspectives for defining possible quality criteria for assessing explainability: epistemological and psychological (cognitive). These definition attempts aim to support the three main functions that we believe should attract the interest and further research of XAI scholars: clear inventories, clear verification criteria, and clear validation methods.
• We propose a framework for defining different types of explanations of AI systems.
• We contextualize current XAI discourses within the proposed framework.
• We highlight two broad perspectives for defining quality criteria for explainability.
• We discuss the relevance of our framework in light of current and upcoming AI regulation.
• We confer fundamental aspects for future research of XAI scholars. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
113. Generic preparation for upcoming explanations: intra- and inter-domain effects of a digital training intervention.
- Author
-
Hefter, Markus H., Roelle, Julian, Renkl, Alexander, and Berthold, Kirsten
- Subjects
COMPUTER assisted instruction ,COGNITIVE load ,TRANSFER of students ,EXPLANATION ,EDUCATIONAL outcomes - Abstract
Learning from instructional explanations is one of the most established, prevalent, and obvious ways of learning--but it carries the risk of shallow processing. Unlike previous research that focused on providing digital just-in-time support measures for learning with explanations, we strived to prepare learners on how to make the most of upcoming explanations. We thus developed a short-term computer-based training intervention on the focused processing of instructional explanations. In two experiments (N1 = 47, N2 = 42), we tested its effects on learning processes and outcomes of a subsequent learning phase. Our results revealed that the training intervention fostered domain-general knowledge about explanations. Furthermore, it enabled learners to benefit from future instructional explanations in other domains (inter-domain transfer for university students, Experiment 1) or at least on other topics (intra-domain transfer for primary school fourth graders, Experiment 2). The digital training intervention did not trigger more cognitive load in the subsequent learning phase. All in all, we describe an initial promising step toward a generic training effect that has the potential advantage of enhancing learning from explanations without altering the actual learning material. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
114. Conviction Narrative Theory: A theory of choice under radical uncertainty.
- Author
-
Johnson, Samuel G. B., Bilovich, Avri, and Tuckett, David
- Subjects
CHOICE (Psychology) ,SOCIAL perception ,MENTAL representation ,SOCIAL support ,RESEARCH personnel ,CAUSAL models - Abstract
Conviction Narrative Theory (CNT) is a theory of choice under radical uncertainty – situations where outcomes cannot be enumerated and probabilities cannot be assigned. Whereas most theories of choice assume that people rely on (potentially biased) probabilistic judgments, such theories cannot account for adaptive decision-making when probabilities cannot be assigned. CNT proposes that people use narratives – structured representations of causal, temporal, analogical, and valence relationships – rather than probabilities, as the currency of thought that unifies our sense-making and decision-making faculties. According to CNT, narratives arise from the interplay between individual cognition and the social environment, with reasoners adopting a narrative that feels "right" to explain the available data; using that narrative to imagine plausible futures; and affectively evaluating those imagined futures to make a choice. Evidence from many areas of the cognitive, behavioral, and social sciences supports this basic model, including lab experiments, interview studies, and econometric analyses. We identify 12 propositions to explain how the mental representations (narratives) interact with four inter-related processes (explanation, simulation, affective evaluation, and communication), examining the theoretical and empirical basis for each. We conclude by discussing how CNT can provide a common vocabulary for researchers studying everyday choices across areas of the decision sciences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
115. Causes of Outcome Learning: a causal inference-inspired machine learning approach to disentangling common combinations of potential causes of a health outcome.
- Author
-
Rieckmann, Andreas, Dworzynski, Piotr, Arras, Leila, Lapuschkin, Sebastian, Samek, Wojciech, Arah, Onyebuchi Aniweta, Rod, Naja Hulvej, and Ekstrøm, Claus Thorn
- Subjects
PUBLIC health ,ATTRIBUTION (Social psychology) ,IMPACT of Event Scale ,RESEARCH funding - Abstract
Nearly all diseases are caused by different combinations of exposures. Yet, most epidemiological studies focus on estimating the effect of a single exposure on a health outcome. We present the Causes of Outcome Learning approach (CoOL), which seeks to discover combinations of exposures that lead to an increased risk of a specific outcome in parts of the population. The approach allows for exposures acting alone and in synergy with others. The road map of CoOL involves (i) a pre-computational phase used to define a causal model; (ii) a computational phase with three steps, namely (a) fitting a non-negative model on an additive scale, (b) decomposing risk contributions and (c) clustering individuals based on the risk contributions into subgroups; and (iii) a post-computational phase on hypothesis development, validation and triangulation using new data before eventually updating the causal model. The computational phase uses a tailored neural network for the non-negative model on an additive scale and layer-wise relevance propagation for the risk decomposition through this model. We demonstrate the approach on simulated and real-life data using the R package 'CoOL'. The presentation focuses on binary exposures and outcomes, but the approach can also be extended to other measurement types. This approach encourages and enables researchers to identify combinations of exposures as potential causes of the health outcome of interest. Expanding our ability to discover complex causes could eventually result in more effective, targeted and informed interventions prioritized for their public health impact. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
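The three-step computational phase described in the abstract can be caricatured in plain NumPy: fit a non-negative model on an additive scale, decompose each individual's risk into per-exposure contributions, and group individuals by those contributions. This is a deliberately linear toy; CoOL's actual model is a tailored neural network with layer-wise relevance propagation, implemented in the R package 'CoOL':

```python
import numpy as np

def fit_additive_risk(X, y, steps=3000, lr=0.1):
    """Step (a): fit R(x) = b + sum_j w_j * x_j with w_j >= 0
    (projected gradient descent keeps the model non-negative and additive)."""
    n, d = X.shape
    w, b = np.zeros(d), float(y.mean())
    for _ in range(steps):
        resid = b + X @ w - y
        w = np.maximum(w - lr * (X.T @ resid) / n, 0.0)  # project onto w >= 0
        b -= lr * resid.mean()
    return w, b

def risk_contributions(X, w):
    """Step (b): each individual's excess risk decomposes into w_j * x_j terms."""
    return X * w

def subgroups(contrib):
    """Step (c), simplified: group individuals by their dominant risk contributor
    (CoOL clusters the full contribution vectors instead)."""
    return contrib.argmax(axis=1)
```

Because the model is additive and non-negative, the per-exposure contributions sum to each individual's risk above baseline, which is what makes the subgroup clustering interpretable as "common combinations of potential causes".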
116. Traditional lectures versus active learning - A false dichotomy?
- Author
-
Dietrich, Heiko and Evans, Tanya
- Subjects
ACTIVE learning ,LECTURES & lecturing ,MATHEMATICS education ,POSTSECONDARY education ,EDUCATIONAL psychology - Abstract
Traditional lectures are commonly understood to be a teacher-centered mode of instruction where the main aim is the provision of explanations by an educator to the students. Recent literature in higher education overwhelmingly depicts this mode of instruction as inferior compared to the desired student-centered models based on active learning techniques. First, using a four-quadrant model of educational environments, we address common confusion related to a conflation of two prevalent dichotomies by focusing on two key dimensions: (1) the extent to which students are prompted to engage actively and (2) the extent to which expert explanations are provided. Second, using a case study, we describe an evolution of tertiary mathematics education, showing how traditional instruction can still play a valuable role, provided it is suitably embedded in a student-centered course design. We support our argument by analyzing the teaching practice and learning environment in a third-year abstract algebra course through the lens of Stanislas Dehaene's theoretical framework for effective teaching and learning. The framework, comprising "four pillars of learning", is based on a state-of-the-art conception of how learning can be facilitated according to cognitive science, educational psychology and neuroscience findings. In the case study, we illustrate how, over time, the unit design and the teaching approach have evolved into a learning environment that aligns with the four pillars of learning. We conclude that traditional lectures can and do evolve to optimize learning environments and that the dichotomy "traditional instruction versus active learning" is no longer relevant. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
117. Effects of instructor-provided visuals on learner-generated explanations.
- Author
-
Kuhlmann, Shelbi and Fiorella, Logan
- Subjects
COLLEGE teachers ,LEARNING ,UNDERGRADUATES ,EMOTIONS ,EDUCATIONAL relevance - Abstract
This study explored whether different types of instructional visuals—knowledge maps and pictorial illustrations—encourage students to focus on specific types of conceptual relationships during learning. Undergraduates (n = 134) studied a text lesson on the human nervous system accompanied by maps (text-with-maps group), illustrations (text-with-illustrations group), or no visuals (text-only group). Then all students orally explained what they learned as if they were teaching a peer. The text-with-maps group generated more hierarchical relationships than the other two groups, and both visual groups generated more temporal relationships than the text-only group. The groups did not significantly differ in the number of structural relationships generated. On a subsequent post-test, only the text-with-maps group significantly outperformed the text-only group, and the two visual groups did not significantly differ from each other. These findings highlight how different visuals affect the types of relationships students focus on when learning from the same text. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
118. Causal reasoning with mental models
- Author
-
Sangeet Khemlani, Aron K. Barbey, and Philip Nicholas Johnson-Laird
- Subjects
causal reasoning ,lateral prefrontal cortex ,Mental Models ,explanations ,enabling conditions ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
This paper outlines the model-based theory of causal reasoning. It postulates that the core meanings of causal assertions are deterministic and refer to temporally-ordered sets of possibilities: A causes B to occur means that given A, B occurs, whereas A enables B to occur means that given A, it is possible for B to occur. The paper shows how mental models represent such assertions, and how these models underlie deductive, inductive, and abductive reasoning yielding explanations. It reviews evidence both to corroborate the theory and to account for phenomena sometimes taken to be incompatible with it. Finally, it reviews neuroscience evidence indicating that mental models for causal inference are implemented within lateral prefrontal cortex.
- Published
- 2014
- Full Text
- View/download PDF
119. A Conversion of Feature Models into an Executable Representation in Microsoft Excel
- Author
-
Le, Viet-Man, Tran, Thi Ngoc Trang, Felfernig, Alexander, Kacprzyk, Janusz, Series Editor, Stettinger, Martin, editor, Leitner, Gerhard, editor, Felfernig, Alexander, editor, and Ras, Zbigniew W., editor
- Published
- 2021
- Full Text
- View/download PDF
120. Explanatory Pluralism in Explainable AI
- Author
-
Yao, Yiheng, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Holzinger, Andreas, editor, Kieseberg, Peter, editor, Tjoa, A Min, editor, and Weippl, Edgar, editor
- Published
- 2021
- Full Text
- View/download PDF
121. Changing Salty Food Preferences with Visual and Textual Explanations in a Search Interface
- Author
-
Berge, A., Sjaen, V. V., Starke, A. D., and Trattner, C.
- Subjects
FOS: Computer and information sciences ,Marketing and Consumer Behaviour ,Explanations ,Design of information systems ,User behaviour ,Informasjons og kommunikasjonsteknologi ,VDP::Informasjons- og kommunikasjonsteknologi: 550 ,Salt ,Brukeradferd ,Computer Science - Human-Computer Interaction ,Sodium Replacement ,VDP::Information and communication technology: 550 ,Human-Computer Interaction (cs.HC) ,Search Interface ,Information and communication technology: 550 [VDP] ,Human-Computer Interaction ,Food Preferences ,Brukergrensesnitt ,Marktkunde en Consumentengedrag ,Design av informasjonssystemer ,Informasjons- og kommunikasjonsteknologi: 550 [VDP] ,Information and communication technology - Abstract
Salt is consumed at too high levels in the general population, causing high blood pressure and related health problems. In this paper, we present results of ongoing research that tries to reduce salt intake via technology and in particular from an interface perspective. In detail, this paper features results of a study that examines the extent to which visual and textual explanations in a search interface can change salty food preferences. An online user study with 200 participants demonstrates that this is possible in food search results by accompanying recipes with a visual taste map that includes salt-replacer herbs and spices in the calculation of salty taste. (8 pages, 6 figures; HEALTHI workshop in conjunction with the ACM IUI conference.)
- Published
- 2021
123. Beliefs that manifest through newspaper items in relation to peoples’ life challenges and their potential to enhance a sustainable learning environment in school science
- Author
-
Thapelo L. Mamiala
- Subjects
belief ,belief models ,teaching ,explanations ,Science ,Social Sciences - Abstract
The paper documents beliefs that manifest themselves through newspaper items and elaborates on their potential to enhance a sustainable learning environment in a school science lesson. “Learning environment” is depicted from different angles and includes virtual and real learning environments, school environments and classroom environments. Descriptive and item analyses were conducted on sixty-eight newspaper items that were identified. The nature of problems and prescriptions/solutions was categorised for each item and the paper further provides elaboration on the types of problems and recommended solutions. The results show that the “believed” structure the contents of their newspaper items to catch the attention of the “believer”. Lessons on the power of belief must be learnt by school science teachers if they are to succeed in creating a sustainable learning environment with improved performance in school science.
- Published
- 2013
- Full Text
- View/download PDF
124. Conflict resolution when axioms are materialized in semantic-based smart environments.
- Author
-
Gravier, Christophe, Subercaze, Julien, and Zimmermann, Antoine
- Subjects
SEMANTIC Web ,AMBIENT intelligence ,HUMAN-computer interaction ,USER-generated content ,WEB-based user interfaces - Abstract
In Semantic Web applications, reasoning engines that are data intensive commonly materialise inferences to speed up processing at query time. However, in evolving systems, such as smart environments, semantic-based context aware systems (SCAS) [6] or social software with user-generated data, knowledge does not grow monotonically: newer facts may contradict older ones, and knowledge may be deprecated, discarded or updated such that it must sometimes be retracted. We describe a technique for retracting explicit and inferred statements when some information becomes obsolete, as well as any statement whose presence would lead to re-deriving the removed explicit statements. This technique is based on OWL justifications and is triggered whenever a knowledge base becomes inconsistent, such that the system stays in a consistent state all the time, in spite of uncontrolled evolution. We prove termination and correctness of the algorithm, and describe the implementation and evaluation of the proposal. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
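The retraction mechanism can be sketched with a minimal justification store: after removing an explicit statement, keep only the statements that still have a complete chain of support, computed as a least fixpoint. This is a toy sketch; the paper works with OWL justifications and additionally blocks statements that would re-derive the removed ones, which this sketch omits:

```python
def retract(explicit, justifications, to_remove):
    """Return the statements that survive retraction: explicit statements
    not removed, plus inferred statements having at least one justification
    (set of premises) whose members all survive. Computed as a fixpoint."""
    supported = set(explicit) - set(to_remove)
    changed = True
    while changed:
        changed = False
        for fact, premise_sets in justifications.items():
            if fact not in supported and any(ps <= supported for ps in premise_sets):
                supported.add(fact)
                changed = True
    return supported
```

For example, if "c" is inferred from {"a", "b"} and "d" from {"c"}, retracting "a" removes both "c" and "d", since their only support chains pass through the removed statement.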
125. On adequate and constructive models of processing of meanings.
- Author
-
Pogossian, Edward
- Abstract
Humans promote themselves in the universe, the totality of their realities, while by processing meanings they enhance the effectiveness and efficiency of the promotion. Humans communicate by explanation and acquisition of meanings, while they acquire meanings by understanding and learning them. In this paper we provide language explanations of meanings of some realities, aimed at specifications of constructive and adequate models of meaning processing. We discuss constraints on meanings induced by the models and corresponding constraints on cognition of realities, and provide experimental evidence supporting the viability of the models. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
126. Personalising Explainable Recommendations: Literature and Conceptualisation
- Author
-
Naiseh, Mohammad, Jiang, Nan, Ma, Jianbing, Ali, Raian, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Rocha, Álvaro, editor, Adeli, Hojjat, editor, Reis, Luís Paulo, editor, Costanzo, Sandra, editor, Orovic, Irena, editor, and Moreira, Fernando, editor
- Published
- 2020
- Full Text
- View/download PDF
127. Exploring the Effect of Explanations During Robot-Guided Emergency Evacuation
- Author
-
Nayyar, Mollik, Zoloty, Zachary, McFarland, Ciera, Wagner, Alan R., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wagner, Alan R., editor, Feil-Seifer, David, editor, Haring, Kerstin S., editor, Rossi, Silvia, editor, Williams, Thomas, editor, He, Hongsheng, editor, and Sam Ge, Shuzhi, editor
- Published
- 2020
- Full Text
- View/download PDF
128. Exploiting Answer Set Programming for Building Explainable Recommendations
- Author
-
Teppan, Erich, Zanker, Markus, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Helic, Denis, editor, Leitner, Gerhard, editor, Stettinger, Martin, editor, Felfernig, Alexander, editor, and Raś, Zbigniew W., editor
- Published
- 2020
- Full Text
- View/download PDF
129. Explanation-based large neighborhood search
- Author
-
Prud’homme, Charles, Lorca, Xavier, and Jussien, Narendra
- Published
- 2014
- Full Text
- View/download PDF
130. Lazy Explanations for Constraint Propagators.
- Author
-
Gent, Ian P., Miguel, Ian, and Moore, Neil C. A.
- Abstract
Explanations are a technique for reasoning about constraint propagation, which have been applied in many learning, backjumping and user-interaction algorithms for constraint programming. To date, explanations for constraints have usually been recorded "eagerly" when constraint propagation happens, which leads to inefficient use of time and space, because many will never be used. In this paper we show that it is possible and highly effective to calculate explanations retrospectively when they are needed. To this end, we implement "lazy" explanations in a state-of-the-art learning framework. Experimental results confirm the effectiveness of the technique: we achieve reductions in the number of explanations calculated of up to a factor of 200 and reductions in overall solve time of up to a factor of 5. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
131. Explaining the Non-compliance between Templates and Agreement Offers in WS-Agreement.
- Author
-
Müller, Carlos, Resinas, Manuel, and Ruiz-Cortés, Antonio
- Abstract
A common approach to the process of reaching agreements is the publication of templates that guide parties to create agreement offers that are then sent for approval to the template publisher. In such a scenario, a common issue the template publisher must address is checking whether the received agreement offer is compliant with the template. Furthermore, in the case of non-compliance, an automated explanation of the reasons for it is very appealing. Unfortunately, although there are proposals that deal with checking compliance, the problem of providing an automated explanation for non-compliance has not yet been studied in this context. In this paper, we take a subset of the WS-Agreement recommendation as a starting point and we provide a rigorous definition of the explanation for the non-compliance between templates and agreement offers. Furthermore, we propose the use of constraint satisfaction problem (CSP) solvers to implement it and provide a proof-of-concept implementation. The advantage of using CSPs is that it allows expressive service level objectives inside SLAs. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
132. African-American, Hispanic, and White Explanations of the Black/White Gap in Socioeconomic Status, 1977-2000.
- Author
-
Hunt, Matthew O.
- Subjects
SOCIAL status ,AFRICAN American social conditions ,WHITE people ,RACIAL & ethnic attitudes ,RACE ,RACE relations ,SOCIAL history - Abstract
Research into lay perceptions of the causes of the black/white gap in socioeconomic status (SES) has neglected the beliefs of non-whites in favor of examining trends in the beliefs of whites, and links between whites' racial attitudes and policy preferences. While such research has been fruitful in mapping attitudinal change, dimensions of whites' beliefs, and in testing various explanations of the nature of contemporary race relations, it has been less enlightening from the standpoint of revealing what other important subgroups of Americans -- including blacks themselves, and a rapidly-growing Hispanic minority -- believe about the economic divide separating blacks and whites in the United States. In this study, I use data from the 1977-2000 General Social Surveys (GSS) to pursue two primary goals. First, I update our knowledge of whites' beliefs for the 1990s, analyzing whether trends documented for the two prior decades have continued, and exploring whether new trends have emerged. Second, I incorporate African Americans' and Hispanics' views into our understanding of the black/white SES gap. Results suggest continuation of some established trends (e.g., decline in popularity of attributions representing "traditional racism") as well as some heretofore undocumented patterns. Further, race/ethnic differences are found in both levels of support for, and in the determinants of, various beliefs about why African-Americans, compared with whites, continue to be relatively disadvantaged in areas such as housing, income, and jobs. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
133. ChemInformatics Model Explorer (CIME): exploratory analysis of chemical model explanations.
- Author
-
Humer, Christina, Heberle, Henry, Montanari, Floriane, Wolf, Thomas, Huber, Florian, Henderson, Ryan, Heinrich, Julian, and Streit, Marc
- Subjects
CHEMICAL models ,ANALYTICAL chemistry ,CHEMINFORMATICS ,ARTIFICIAL intelligence ,SMALL molecules - Abstract
The introduction of machine learning to small molecule research, an inherently multidisciplinary field in which chemists and data scientists combine their expertise and collaborate, has been vital to making screening processes more efficient. In recent years, numerous models that predict pharmacokinetic properties or bioactivity have been published, and these are used on a daily basis by chemists to make decisions and prioritize ideas. The emerging field of explainable artificial intelligence is opening up new possibilities for understanding the reasoning that underlies a model. In small molecule research, this means relating contributions of substructures of compounds to their predicted properties, which in turn also allows the areas of the compounds that have the greatest influence on the outcome to be identified. However, there is no interactive visualization tool that facilitates such interdisciplinary collaborations towards interpretability of machine learning models for small molecules. To fill this gap, we present CIME (ChemInformatics Model Explorer), an interactive web-based system that allows users to inspect chemical data sets, visualize model explanations, compare interpretability techniques, and explore subgroups of compounds. The tool is model-agnostic and can be run on a server or a workstation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
134. "Knowing me, knowing you": personalized explanations for a music recommender system.
- Author
-
Millecamp, Martijn, Conati, Cristina, and Verbert, Katrien
- Subjects
RECOMMENDER systems ,EXPLANATION ,CONDUCT of life - Abstract
Due to the prominent role of recommender systems in our daily lives, it is increasingly important to inform users why certain items are recommended and personalize these explanations to the user. In this study, we explored how explanations in a music recommender system should be designed to fit the preference of different personal characteristics. More specifically, we investigated three personal characteristics that influence the perception of explanations in music recommender system interfaces: need for cognition, musical sophistication, and openness. For each of these personal characteristics, we designed explanations for users with lower and higher levels of the personal characteristic. Afterward, we conducted for each personal characteristic a within-subject user study in which we compared the two explanations. Based on the results of these user studies, we provide design suggestions to adapt explanations to different levels of these three personal characteristics. In general, we suggest providing explanations up-front for all recommendations at once. For users low in need for cognition, displaying these explanations must be optional. To support users with low musical sophistication, we suggest providing brief explanations that do not require domain knowledge. For users with low openness, we suggest providing explanations with a lower number of explanation elements. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
135. Mapping an expanding territory: computer simulations in evolutionary biology.
- Author
-
Huneman, Philippe
- Subjects
- *
COMPUTER simulation , *ELECTRONIC artificial life games , *EXPLANATION , *MODELS & modelmaking , *BIOLOGICAL evolution - Abstract
The pervasive use of computer simulations in the sciences brings novel epistemological issues that have been discussed in the philosophy of science literature for about a decade. Evolutionary biology strongly relies on such simulations, and in relation to it there exists a research program (Artificial Life) that mainly studies simulations themselves. This paper addresses the specificity of computer simulations in evolutionary biology, in the context (described in Sect. 1) of a set of questions about their scope as explanations, the nature of validation processes and the relation between simulations and true experiments or mathematical models. After making distinctions, especially between a weak use where simulations test hypotheses about the world, and a strong use where they allow one to explore sets of evolutionary dynamics not necessarily extant in our world, I argue in Sect. 2 that (weak) simulations are likely to represent in virtue of the fact that they instantiate specific features of causal processes that may be isomorphic to features of some causal processes in the world, though the latter are always intertwined with a myriad of different processes and hence unlikely to be directly manipulated and studied. I therefore argue that these simulations are merely able to provide candidate explanations for real patterns. Section 3 concludes by placing strong and weak simulations in Levins' triangle, which conceives of simulations as devices trying to fulfil one or two of three incompatible epistemic values (precision, realism, genericity). [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
136. Trying to improve communication skills: the challenge of joint sense making in classroom interactions
- Author
-
Ingram, J, Andrews, N, and University of Oxford [Oxford]
- Subjects
explanations ,[SHS.EDU]Humanities and Social Sciences/Education ,ComputingMilieux_COMPUTERSANDEDUCATION ,classroom interaction ,video-based professional development ,mathematical communication ,[MATH]Mathematics [math] ,sense-making - Abstract
In this paper we examine the efforts of one teacher working to improve her students' communication skills as part of a collaborative project with teachers and teacher educators/researchers. The paper reports on a project meeting where the teacher presents a short video clip featuring two student explanations. Yet only one explanation is treated in the lesson as an example of good communication. Following discussion and multiple re-viewings of the video clip in the meeting, what counts as good communication is critiqued by the teachers. Driven by an emphasis on the two-way nature of communication, the need for joint sense-making between teacher and students, and privileging explanations that communicate mathematical understanding, alternative teacher actions are suggested during the meeting that are related to how different teachers interpreted the students' explanation.
- Published
- 2020
137. Knowledge graphs as tools for explainable machine learning: A survey.
- Author
-
Tiddi, Ilaria and Schlobach, Stefan
- Subjects
- *
KNOWLEDGE graphs , *MACHINE learning , *KNOWLEDGE representation (Information theory) , *MACHINE tools , *HYBRID systems , *ARTIFICIAL intelligence - Abstract
This paper provides an extensive overview of the use of knowledge graphs in the context of Explainable Machine Learning. As of late, explainable AI has become a very active field of research by addressing the limitations of the latest machine learning solutions that often provide highly accurate, but hardly scrutable and interpretable decisions. An increasing interest has also been shown in the integration of Knowledge Representation techniques in Machine Learning applications, mostly motivated by the complementary strengths and weaknesses that could lead to a new generation of hybrid intelligent systems. Following this idea, we hypothesise that knowledge graphs, which naturally provide domain background knowledge in a machine-readable format, could be integrated in Explainable Machine Learning approaches to help them provide more meaningful, insightful and trustworthy explanations. Using a systematic literature review methodology we designed an analytical framework to explore the current landscape of Explainable Machine Learning. We focus particularly on the integration with structured knowledge at large scale, and use our framework to analyse a variety of Machine Learning domains, identifying the main characteristics of such knowledge-based, explainable systems from different perspectives. We then summarise the strengths of such hybrid systems, such as improved understandability, reactivity, and accuracy, as well as their limitations, e.g. in handling noise or extracting knowledge efficiently. We conclude by discussing a list of open challenges left for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
138. Testing the limits of structural thinking about gender.
- Author
-
Yang, Xin, Naas, Ragnhild, and Dunham, Yarrow
- Subjects
GENDER ,GENDER differences (Sociology) ,CHILD psychology ,SOCIOCULTURAL factors ,LABOR market - Abstract
When seeking to explain social regularities (such as gender differences in the labor market) people often rely on internal features of the targets, frequently neglecting structural and systemic factors external to the targets. For example, people might think women leave the job market after childbirth because they are less competent or are better suited for child‐rearing than men, thereby eliding socio‐cultural and economic factors that disadvantage women. Across two studies (total N = 192) we probe 4‐ and 5‐year‐olds and 7‐ and 8‐year‐olds' internal versus structural reasoning about gender. We explore the evaluative and behavioral implications of this reasoning process with both novel gendered behaviors that were experimentally created and familiar gendered behaviors that exist outside of a lab context. We show that children generate more structural explanations, evaluate the structural explanation more positively, expect behaviors to be more mutable, and evaluate gender non‐conforming behaviors more positively when structural cues are provided. However, we also show that such information may be of limited effectiveness at reducing pre‐existing group‐based discriminatory behaviors: children continue to report less willingness to affiliate with peers who display non‐conforming behaviors even in the presence of structural cues. Taken together, these results provide evidence concerning children's structural reasoning about gender categories and shed new light on how such reasoning might affect social evaluations and behavioral intentions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
139. Hybrid Data Set Optimization in Recommender Systems Using Fuzzy T-Norms
- Author
-
Papaleonidas, Antonios, Pimenidis, Elias, Iliadis, Lazaros, Rannenberg, Kai, Editor-in-Chief, Sakarovitch, Jacques, Editorial Board Member, Goedicke, Michael, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Pras, Aiko, Editorial Board Member, Tröltzsch, Fredi, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, Kreps, David, Editorial Board Member, Reis, Ricardo, Editorial Board Member, Furnell, Steven, Editorial Board Member, Furbach, Ulrich, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, MacIntyre, John, editor, Maglogiannis, Ilias, editor, Iliadis, Lazaros, editor, and Pimenidis, Elias, editor
- Published
- 2019
- Full Text
- View/download PDF
140. Ambient Explanations: Ambient Intelligence and Explainable AI
- Author
-
Cassens, Jörg, Wegener, Rebekah, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Chatzigiannakis, Ioannis, editor, De Ruyter, Boris, editor, and Mavrommati, Irene, editor
- Published
- 2019
- Full Text
- View/download PDF
141. Explanation of Recommenders Using Formal Concept Analysis
- Author
-
Diaz-Agudo, Belen, Caro-Martinez, Marta, Recio-Garcia, Juan A., Jorro-Aragoneses, Jose, Jimenez-Diaz, Guillermo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bach, Kerstin, editor, and Marling, Cindy, editor
- Published
- 2019
- Full Text
- View/download PDF
142. Analysis of Abstention in the Elections to the Catalan Parliament by Means of Decision Trees
- Author
-
Armengol, Eva, Vicente, Zaida, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Torra, Vicenç, editor, Narukawa, Yasuo, editor, Pasi, Gabriella, editor, and Viviani, Marco, editor
- Published
- 2019
- Full Text
- View/download PDF
143. Formación de profesores de química a partir de la explicación de fenómenos cotidianos: una propuesta con resultados.
- Author
-
Pérez, Roy Waldhiersen Morales and Rodrígue, Franklin Alberto Manrique
- Subjects
- *
TRAINING of chemistry teachers , *CHEMISTRY study & teaching aids , *TEACHER education , *CHEMISTRY experiments , *EXPERIMENTAL design - Abstract
This paper shows the research results obtained with a group of pre-service chemistry teachers at Universidad Pedagógica Nacional, employing didactic units focused on everyday chemistry as a strategy for chemistry teaching. The explanations of pre-service chemistry teachers for the analyzed everyday chemical phenomena were characterized according to their admissibility and chemical levels of representation. The project allowed consolidating a space which coherently employs disciplinary and didactic aspects of chemistry, providing the pre-service teachers with resources and methodologies to transform the traditional forms of teaching, learning and evaluation of chemistry. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
144. Dealing with complex queries in decision-support systems
- Author
-
Fernández del Pozo, J.A. and Bielza, C.
- Subjects
- *
QUERYING (Computer science) , *DECISION support systems , *UNCERTAINTY (Information theory) , *DECISION making , *BAYESIAN analysis , *INFORMATION theory - Abstract
Abstract: In decision-making problems under uncertainty, a decision table consists of a set of attributes indicating the optimal decision (response) within the different scenarios defined by the attributes. We recently introduced a method to give explanations of these responses. In this paper, the method is extended. To do this, it is combined with a query system to answer expert questions about the preferred action for a given instantiation of decision table attributes. The main difficulty is to accurately answer queries associated with incomplete instantiations. Incomplete instantiations are the result of the evaluation of a partial model outputting decision tables that only include a subset of the whole problem, leading to uncertain responses. Our proposal establishes an automatic and interactive dialogue between the decision-support system and the expert to elicit information from the expert to reduce uncertainty. Typically, the process involves learning a Bayesian network structure from a relevant part of the decision table and computing some interesting conditional probabilities that are revised accordingly. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
145. Can we explain increases in young people’s psychological distress over time?
- Author
-
Sweeting, Helen, West, Patrick, Young, Robert, and Der, Geoff
- Subjects
- *
BEHAVIOR modification , *STATISTICAL correlation , *HEALTH behavior , *HIGH school students , *INTERVIEWING , *LONGITUDINAL method , *PARENT-child relationships , *RESEARCH funding , *SCALE analysis (Psychology) , *SELF-evaluation , *SEX distribution , *SOCIAL change , *PSYCHOLOGICAL stress , *PSYCHOLOGY of students , *TIME , *UNEMPLOYMENT , *FAMILY relations , *SOCIAL context , *AT-risk people , *ADOLESCENCE - Abstract
Abstract: This paper aims to explain previously described increases in self-reported psychological distress between 1987 and 2006 among samples identical in respect of age (15 years), school year and geographical location (West of Scotland). Such increases might be explained by changes in exposure (changes in levels of risk or protective factors) and/or by changes in vulnerability (changes in the relationship between risk/protective factors and psychological distress). Key areas of social change over this time period allow identification of potential explanatory factors, categorised as economic, family, educational, values and lifestyle and represented by variables common to each study. Psychological distress was measured via the 12-item General Health Questionnaire, Likert scored. Analyses were conducted on those with complete data on all variables (N = 3276 of 3929), and separately for males and females. Between 1987 and 2006, levels of almost every potential explanatory factor changed in line with general societal trends. Associations between explanatory factors and GHQ tended to be stronger among females, and at the later date. The strongest associations were with worries, arguments with parents, and, at the later date, school disengagement. The factors which best accounted for the increase in mean GHQ between 1987 and 2006 were arguments with parents, school disengagement, worry about school and, for females, worry about family relationships, reflecting both increasing exposure and vulnerability to these risk factors. A number of limitations to our analysis can be identified. However, our results reinforce the conclusions of others in highlighting the role of family and educational factors as plausible explanations for increases in young people's psychological distress. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
146. Scalable highly expressive reasoner (SHER).
- Author
-
Dolby, Julian, Fokoue, Achille, Kalyanpur, Aditya, Schonberg, Edith, and Srinivas, Kavitha
- Subjects
COMPUTER networks ,SCALABILITY ,COMPUTER architecture ,ONTOLOGIES (Information retrieval) ,SEMANTIC computing ,QUERYING (Computer science) ,COMPUTER algorithms ,SOURCE code - Abstract
Abstract: In this paper, we describe scalable highly expressive reasoner (SHER), a breakthrough technology that provides semantic querying of large relational datasets using OWL ontologies. SHER relies on a unique algorithm based on ontology summarization and combines a traditional in-memory description logic reasoner with a database backed RDF Store to scale reasoning to very large Aboxes. In our latest experiments, SHER is able to do sound and complete conjunctive query answering up to 7 million triples in seconds, and scales to datasets with 60 million triples, responding to queries in minutes. We describe the SHER system architecture, discuss the underlying components and their functionality, and briefly highlight two concrete use-cases of scalable OWL reasoning based on SHER in the Health Care and Life Science space. The SHER system, with the source code, is available for download (free for academic use) at: http://www.alphaworks.ibm.com/tech/sher. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
147. Explanations of unsupervised learning clustering applied to data security analysis
- Author
-
Corral, G., Armengol, E., Fornells, A., and Golobardes, E.
- Subjects
- *
MACHINE learning , *DATA security , *COMPUTER network security , *SELF-organizing maps , *ARTIFICIAL intelligence , *COMPUTER security , *COMPUTER-aided engineering - Abstract
Abstract: Network security tests should be periodically conducted to detect vulnerabilities before they are exploited. However, analysis of testing results is resource-intensive, involving large amounts of data, and requires expertise because it is an unsupervised domain. This paper presents how to automate and improve this analysis through the identification and explanation of device groups with similar vulnerabilities. Clustering is used for discovering hidden patterns and abnormal behaviors. Self-organizing maps are preferred due to their soft computing capabilities. Explanations based on anti-unification give comprehensive descriptions of clustering results to analysts. This approach is integrated in Consensus, a computer-aided system to detect network vulnerabilities. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
148. Using explanations for determining carcinogenecity in chemical compounds
- Author
-
Armengol, Eva
- Subjects
- *
MACHINE theory , *NEURAL computers , *ARTIFICIAL intelligence , *INTELLIGENT agents - Abstract
Abstract: The goal of predictive toxicology is the automatic construction of carcinogenicity models. The most common artificial intelligence techniques used to construct these models are inductive learning methods. In a previous work we presented an approach that uses lazy learning methods for solving the problem of predicting carcinogenicity. Lazy learning methods solve new problems based on their similarity to already solved problems. Nevertheless, a weakness of these kinds of methods is that sometimes the result is not completely understandable by the user. In this paper we propose an explanation scheme for a concrete lazy learning method. This scheme is particularly interesting for justifying predictions about the carcinogenesis of chemical compounds. In addition, we propose that these explanations could be used to build partial domain knowledge. In our particular case, we use the explanations to build general knowledge about carcinogenesis. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
149. An explanation-based tools for debugging constraint satisfaction problems.
- Author
-
Ouis, Samir and Tounsi, Mohamed
- Subjects
COMPUTER programming ,ELECTRONIC data processing ,MATHEMATICAL analysis ,COMPUTER algorithms ,MATHEMATICAL models - Abstract
Abstract: This paper describes explanation-based tools for a constraint programming system. These tools show the user the conflicts that arise during the solving process. Our tools simulate constraint additions and/or constraint relaxations without any propagation; they also determine whether a given constraint belongs to a conflict, and they provide a diagnosis tool (e.g., why can a variable not take value val?). With a more user-friendly representation of conflicts and explanations, our proposed tools give the user a better understanding of the problem. We show that the proposed tools are efficient, and since there is no debugging system that allows the user to interact with the solver, our explanation-based tools could be used for many other applications. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
150. An epistemological challenge to ontological bruteness
- Author
-
Brown, Joshua Matthan
- Published
- 2022
- Full Text
- View/download PDF