15 results for "causability"
Search Results
2. A Survey of Counterfactual Explanations: Definition, Evaluation, Algorithms, and Applications
- Author
-
Zhang, Xuezhong, Dai, Libin, Peng, Qingming, Tang, Ruizhi, Li, Xinwei, Xhafa, Fatos, Series Editor, Xiong, Ning, editor, Li, Maozhen, editor, Li, Kenli, editor, Xiao, Zheng, editor, Liao, Longlong, editor, and Wang, Lipo, editor
- Published
- 2023
- Full Text
- View/download PDF
3. Explainability and causability in digital pathology
- Author
-
Markus Plass, Michaela Kargl, Tim‐Rasmus Kiehl, Peter Regitnig, Christian Geißler, Theodore Evans, Norman Zerbe, Rita Carvalho, Andreas Holzinger, and Heimo Müller
- Subjects
digital pathology, artificial intelligence, explainability, causability, Pathology, RB1-214
- Abstract
Abstract The current move towards digital pathology enables pathologists to use artificial intelligence (AI)‐based computer programmes for the advanced analysis of whole slide images. However, currently, the best‐performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black‐box machine‐learning systems more transparent. These XAI methods are a first step towards making black‐box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what‐if’‐questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human‐in‐the‐loop and bringing medical experts' experience and conceptual knowledge to AI processes.
- Published
- 2023
- Full Text
- View/download PDF
4. Explainability and causability in digital pathology.
- Author
-
Plass, Markus, Kargl, Michaela, Kiehl, Tim‐Rasmus, Regitnig, Peter, Geißler, Christian, Evans, Theodore, Zerbe, Norman, Carvalho, Rita, Holzinger, Andreas, and Müller, Heimo
- Subjects
MACHINE learning, ARTIFICIAL intelligence, COMPUTER programming, USER interfaces, PATHOLOGY, FORENSIC pathology
- Abstract
The current move towards digital pathology enables pathologists to use artificial intelligence (AI)‐based computer programmes for the advanced analysis of whole slide images. However, currently, the best‐performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black‐box machine‐learning systems more transparent. These XAI methods are a first step towards making black‐box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive 'what‐if'‐questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human‐in‐the‐loop and bringing medical experts' experience and conceptual knowledge to AI processes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
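The review in records 3 and 4 argues for explanation interfaces that let the pathologist ask interactive 'what-if' questions of an AI system. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it re-runs a fitted classifier with one feature changed and reports how the predicted probability moves. The model object, feature index, and feature values are placeholders; only a scikit-learn-style predict_proba interface is assumed.

```python
import numpy as np

def what_if(model, x, feature_idx, new_value):
    """Re-run a fitted classifier on a copy of `x` with one feature changed.

    `model` is assumed to expose scikit-learn's predict_proba interface;
    the feature index and replacement value come from the user's question.
    """
    x = np.asarray(x, dtype=float)
    x_mod = x.copy()
    x_mod[feature_idx] = new_value
    p_before = model.predict_proba(x.reshape(1, -1))[0]
    p_after = model.predict_proba(x_mod.reshape(1, -1))[0]
    return p_before, p_after

# Hypothetical usage: "what if the mitotic count were 2 instead of 9?"
# p0, p1 = what_if(fitted_model, case_features, feature_idx=3, new_value=2.0)
# print(f"P(malignant) before: {p0[1]:.2f}, after: {p1[1]:.2f}")
```

A usable what-if interface would wrap such a call in a user interface that restricts the editable features to clinically meaningful ones, which is the contextual-understanding point the review makes.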
5. Logic and Pragmatics in AI Explanation
- Author
-
Tsai, Chun-Hua, Carroll, John M., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Holzinger, Andreas, editor, Goebel, Randy, editor, Fong, Ruth, editor, Moon, Taesup, editor, Müller, Klaus-Robert, editor, and Samek, Wojciech, editor
- Published
- 2022
- Full Text
- View/download PDF
6. The Causal Plausibility Decision in Healthcare.
- Author
-
RAJPUT, Vije Kumar, KALTOFT, Mette Kjer, and DOWIE, Jack
- Abstract
The explosion of interest in exploiting machine learning techniques in healthcare has brought the issue of inferring causation from observational data to centre stage. In our work in supporting the health decisions of the individual person/patient-as-person at the point of care, we cannot avoid making decisions about which options are to be included or excluded in a decision support tool. Should the researcher's routine injunction to use their findings 'with caution', because of methodological limitations, lead to inclusion or exclusion? The task is one of deciding, first on causal plausibility, and then on causality. Like all decisions, these are sensitive to error preferences (trade-offs). We engage selectively with the Artificial Intelligence (AI) literature on the causality challenge and on the closely associated issue of the 'explainability' now demanded of 'black box' AI. Our commitment to embracing 'lifestyle' as well as 'medical' options for the individual person leads us to highlight the key issue as that of who is to make the preference-sensitive decisions on causal plausibility and causality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.
- Author
-
Müller, Heimo, Holzinger, Andreas, Plass, Markus, Brcic, Luka, Stumptner, Cornelia, and Zatloukal, Kurt
- Subjects
IMAGE analysis, MEDICAL laws, DIAGNOSTIC imaging, MACHINE learning, ARTIFICIAL intelligence, MEDICAL equipment
- Abstract
Artificial Intelligence (AI) for the biomedical domain is gaining significant interest and holds considerable potential for the future of healthcare, particularly in the context of in vitro diagnostics. The European In Vitro Diagnostic Medical Device Regulation (IVDR) explicitly includes software in its requirements. This poses major challenges for In Vitro Diagnostic devices (IVDs) that involve Machine Learning (ML) algorithms for data analysis and decision support. It can increase the difficulty of applying some of the most successful ML and Deep Learning (DL) methods to the biomedical domain, simply because the required explanatory components are missing from the manufacturers. In this context, trustworthy AI has to empower biomedical professionals to take responsibility for their decision-making, which clearly raises the need for explainable AI methods. Explainable AI, such as layer-wise relevance propagation, can help in highlighting the relevant parts of inputs to, and representations in, a neural network that caused a result, and visualize these relevant parts. In the same way that usability encompasses measurements for the quality of use, the concept of causability encompasses measurements for the quality of explanations produced by explainable AI methods. This paper describes both concepts and gives examples of how explainability and causability are essential in order to demonstrate scientific validity as well as analytical and clinical performance for future AI-based IVDs.
• Causability has to be shown for features recognized by In Vitro Diagnostic devices.
• Analytical and clinical performance of AI algorithms must be monitored.
• Explainability and causability have to be demonstrated for performance monitoring.
• Experts need evidence of explainability/causability in order to take responsibility.
• AI needs causability measures to ensure the quality of explainability.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
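Record 7 cites layer-wise relevance propagation (LRP) as an explainable-AI method that highlights which inputs contributed to a network's output. Below is a minimal sketch of the LRP epsilon rule for a small fully connected ReLU network stored as plain NumPy weight matrices; it illustrates the general technique and is not code from the paper.

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Layer-wise relevance propagation (epsilon rule) for a dense ReLU net.

    weights[l] has shape (n_in, n_out); the function returns a relevance
    value per input feature for the network's top-scoring output class.
    """
    # Forward pass, keeping every layer's activations.
    activations = [np.asarray(x, dtype=float)]
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = activations[-1] @ W + b
        if l < len(weights) - 1:          # ReLU on hidden layers only
            z = np.maximum(z, 0.0)
        activations.append(z)

    # Relevance starts at the winning class and equals its output score.
    relevance = np.zeros_like(activations[-1])
    top_class = int(np.argmax(activations[-1]))
    relevance[top_class] = activations[-1][top_class]

    # Backward pass: redistribute relevance layer by layer (epsilon rule).
    for l in range(len(weights) - 1, -1, -1):
        a, W, b = activations[l], weights[l], biases[l]
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilised denominator
        s = relevance / z                            # relevance per unit output
        relevance = a * (W @ s)                      # share back to the inputs
    return relevance

# Hypothetical usage for a two-layer net:
# rel = lrp_epsilon([W1, W2], [b1, b2], x_features)
```

For whole-slide images the same redistribution is done through convolutional layers and the per-pixel relevance is rendered as a heatmap; the dense version above only shows the core bookkeeping.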
8. Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications.
- Author
-
Chou, Yu-Liang, Moreira, Catarina, Bruza, Peter, Ouyang, Chun, and Jorge, Joaquim
- Subjects
ARTIFICIAL intelligence, DEEP learning, DECISION support systems, COUNTERFACTUALS (Logic), ALGORITHMS, POPULAR literature
- Abstract
Deep learning models have achieved high performance across different domains, such as medical decision-making, autonomous vehicles, and decision support systems, among many others. However, despite this success, the inner mechanisms of these models are opaque because their internal representations are too complex for a human to understand. This opacity makes it hard to understand the how or the why of the predictions of deep learning models. There has been a growing interest in model-agnostic methods that make deep learning models more transparent and explainable to humans. Some researchers recently argued that for a machine to achieve human-level explainability, it needs to provide causally understandable explanations, also known as causability. A specific class of algorithms with the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing literature on counterfactuals and causability for explainable artificial intelligence (AI). We performed a Latent Dirichlet Allocation (LDA) topic modelling analysis under a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to find the most relevant literature articles. This analysis yielded a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications to real-world data. Our research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded on a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Furthermore, our findings suggest that the explanations derived from popular algorithms in the literature provide spurious correlations rather than cause/effect relationships, leading to sub-optimal, erroneous, or even biased explanations. Thus, this paper also advances the literature with new directions and challenges on promoting causability in model-agnostic approaches for explainable AI.
• A survey on model-agnostic counterfactual approaches for XAI.
• A novel taxonomy for model-agnostic counterfactual approaches for XAI.
• A set of properties for causability systems for XAI.
• Opportunities and challenges for causability systems based on model-agnostic counterfactual approaches.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
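Record 8 reviews model-agnostic counterfactual algorithms, which look for a minimally changed input that flips a black-box classifier's decision. The sketch below is a deliberately simple, hypothetical version of that search based on random perturbation; published methods add sparsity, plausibility, and actionability constraints that are omitted here.

```python
import numpy as np

def nearest_counterfactual(predict, x, n_samples=5000, scale=1.0, seed=0):
    """Random-search counterfactual for a black-box classifier.

    `predict` maps a 2-D array of inputs to class labels; the function
    returns the perturbed point closest to `x` (L2 distance) whose label
    differs from the original prediction, or None if the search fails.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    original_label = predict(x.reshape(1, -1))[0]

    # Sample candidates around the query point and keep the label flips.
    candidates = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    labels = predict(candidates)
    flipped = candidates[labels != original_label]
    if flipped.size == 0:
        return None                       # no counterfactual found at this scale
    distances = np.linalg.norm(flipped - x, axis=1)
    return flipped[np.argmin(distances)]  # minimal change that flips the decision
```

Real counterfactual generators typically also restrict which features may change and by how much, which is exactly where the survey's point about causal grounding (versus spurious correlation) becomes relevant.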
9. Kauzalność materialna umowy o ustanowieniu hipoteki. Glosa do postanowienia Sądu Najwyższego z dnia 8 grudnia 2017 roku, III CSK 273/16. [Material causality of the agreement establishing a mortgage: a commentary on the Supreme Court decision of 8 December 2017, III CSK 273/16.]
- Author
-
Biernat, Jakub
- Published
- 2020
- Full Text
- View/download PDF
10. Human-centered XAI: Developing design patterns for explanations of clinical decision support systems
- Author
-
Schoonderwoerd, Tjeerd A.J., Jorritsma, Wiard, Neerincx, M.A., and Van Den Bosch, Karel
- Abstract
Much of the research on eXplainable Artificial Intelligence (XAI) has centered on providing transparency of machine learning models. More recently, the focus on human-centered approaches to XAI has increased. Yet, there is a lack of practical methods and examples on the integration of human factors into the development processes of AI-generated explanations that humans prove to uptake for better performance. This paper presents a case study of an application of a human-centered design approach for AI-generated explanations. The approach consists of three components: Domain analysis to define the concept & context of explanations, Requirements elicitation & assessment to derive the use cases & explanation requirements, and the consequential Multi-modal interaction design & evaluation to create a library of design patterns for explanations. In a case study, we adopt the DoReMi-approach to design explanations for a Clinical Decision Support System (CDSS) for child health. In the requirements elicitation & assessment, a user study with experienced paediatricians uncovered what explanations the CDSS should provide. In the interaction design & evaluation, a second user study tested the consequential interaction design patterns. This case study provided a first set of user requirements and design patterns for an explainable decision support system in medical diagnosis, showing how to involve expert end users in the development process and how to develop, more or less, generic solutions for general design problems in XAI.
- Published
- 2021
- Full Text
- View/download PDF
11. Human-centered XAI: Developing design patterns for explanations of clinical decision support systems
- Author
-
Mark A. Neerincx, Karel van den Bosch, Wiard Jorritsma, and Tjeerd Schoonderwoerd
- Subjects
Decision support system, Decision-support system, User study, Computer science, Human Factors and Ergonomics, 02 engineering and technology, Interaction design, Requirements elicitation, Clinical decision support system, Education, Human-centered design, Interaction design pattern, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, Use case, User-centered design, Causability, General Engineering, Design patterns, Explainability, Data science, Human-Computer Interaction, Hardware and Architecture, Software design pattern, Explainable AI, 020201 artificial intelligence & image processing, Software, Clinical decision making
- Abstract
Much of the research on eXplainable Artificial Intelligence (XAI) has centered on providing transparency of machine learning models. More recently, the focus on human-centered approaches to XAI has increased. Yet, there is a lack of practical methods and examples on the integration of human factors into the development processes of AI-generated explanations that humans prove to uptake for better performance. This paper presents a case study of an application of a human-centered design approach for AI-generated explanations. The approach consists of three components: Domain analysis to define the concept & context of explanations, Requirements elicitation & assessment to derive the use cases & explanation requirements, and the consequential Multi-modal interaction design & evaluation to create a library of design patterns for explanations. In a case study, we adopt the DoReMi-approach to design explanations for a Clinical Decision Support System (CDSS) for child health. In the requirements elicitation & assessment, a user study with experienced paediatricians uncovered what explanations the CDSS should provide. In the interaction design & evaluation, a second user study tested the consequential interaction design patterns. This case study provided a first set of user requirements and design patterns for an explainable decision support system in medical diagnosis, showing how to involve expert end users in the development process and how to develop, more or less, generic solutions for general design problems in XAI.
- Published
- 2021
12. Human-centered XAI: Developing design patterns for explanations of clinical decision support systems.
- Author
-
Schoonderwoerd, Tjeerd A.J., Jorritsma, Wiard, Neerincx, Mark A., and van den Bosch, Karel
- Subjects
DECISION support systems, ARTIFICIAL intelligence, MACHINE learning, EXPLANATION
- Abstract
• Human-centered XAI design is required to create human-understandable explanations.
• We present a human-centered design approach to develop explanations for AI systems.
• User requirements for explanations from clinical support systems are derived.
• A set of re-usable explanation design patterns for decision-support systems is created.
Much of the research on eXplainable Artificial Intelligence (XAI) has centered on providing transparency of machine learning models. More recently, the focus on human-centered approaches to XAI has increased. Yet, there is a lack of practical methods and examples on the integration of human factors into the development processes of AI-generated explanations that humans prove to uptake for better performance. This paper presents a case study of an application of a human-centered design approach for AI-generated explanations. The approach consists of three components: Domain analysis to define the concept & context of explanations, Requirements elicitation & assessment to derive the use cases & explanation requirements, and the consequential Multi-modal interaction design & evaluation to create a library of design patterns for explanations. In a case study, we adopt the DoReMi-approach to design explanations for a Clinical Decision Support System (CDSS) for child health. In the requirements elicitation & assessment, a user study with experienced paediatricians uncovered what explanations the CDSS should provide. In the interaction design & evaluation, a second user study tested the consequential interaction design patterns. This case study provided a first set of user requirements and design patterns for an explainable decision support system in medical diagnosis, showing how to involve expert end users in the development process and how to develop, more or less, generic solutions for general design problems in XAI. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
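Records 10, 11, and 12 are the same case study, which produces a library of re-usable explanation design patterns for a clinical decision support system. Purely as an illustration of what one entry in such a library might look like in code, the dataclass below captures a pattern's trigger, content, and modality; the field names are the editor's guesses, not the schema used in the DoReMi case study.

```python
from dataclasses import dataclass, field

@dataclass
class ExplanationPattern:
    """One entry in a library of explanation design patterns.

    The fields are illustrative assumptions, not taken from the paper.
    """
    name: str                        # e.g. "why-this-diagnosis"
    trigger: str                     # when the CDSS should offer this explanation
    content: str                     # what information the explanation conveys
    modality: str = "text"           # text, table, highlighted image region, ...
    audience: str = "paediatrician"  # the intended end user of the explanation
    examples: list[str] = field(default_factory=list)

# Hypothetical usage:
# library = [ExplanationPattern(
#     name="why-this-diagnosis",
#     trigger="clinician opens the suggested diagnosis",
#     content="top contributing symptoms with their weights")]
```

Encoding patterns as structured records like this is one way to make a design-pattern library machine-checkable and reusable across use cases, which is the kind of generic solution the paper aims for.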
13. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI.
- Author
-
Shin, Donghee
- Subjects
ARTIFICIAL intelligence, ATTITUDE (Psychology), TRUST, ALGORITHMS, DECISION making
- Abstract
• This study examines the effect of explainability in AI on user trust and attitudes toward AI.
• Conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm and examines them in relation to trust.
• The dual roles of causability and explainability in terms of their underlying links to trust.
• Causability lends the justification for what and how should be explained.
• Causable explainable AI will help people understand the decision-making process of AI algorithms.
Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches to trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human factors' perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm and examines them in relation to trust by testing how they affect user-perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate users' trust, whereas causability, i.e. to what extent they can understand the explanations, affords users emotional confidence. Causability lends the justification for what and how should be explained, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. The natural language explanation algorithms for the lung cancer computer-aided diagnosis system.
- Author
-
Meldo, Anna, Utkin, Lev, Kovalev, Maxim, and Kasimov, Ernest
- Subjects
NATURAL languages, LUNG cancer, CANCER diagnosis, PULMONARY nodules, EXPLANATION
- Abstract
Two algorithms for explaining decisions of a lung cancer computer-aided diagnosis system are proposed. Their main peculiarity is that they produce explanations of diseases in the form of special sentences via natural language. The algorithms consist of two parts. The first part is a standard local post-hoc explanation model, for example, the well-known LIME, which is used for selecting important features from a special feature representation of the segmented lung suspicious objects. This part is identical for both algorithms. The second part is a model which aims to connect selected important features and to transform them to explanation sentences in natural language. This part is implemented differently for both algorithms. The training phase of the first algorithm uses a special vocabulary of simple phrases which produce sentences and their embeddings. The second algorithm significantly simplifies some parts of the first algorithm and reduces the explanation problem to a set of simple classifiers. The basic idea behind the improvement is to represent every simple phrase from vocabulary as a class of the "sparse" histograms. An implementation of the second algorithm is shown in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
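Record 14 describes a two-part pipeline: a LIME-like local post-hoc model selects the important features of a segmented lung nodule, and a second stage maps those features to natural-language phrases. The sketch below shows the general shape of such a pipeline under several assumptions (a binary classifier exposing predict_proba, a hand-made phrase vocabulary, and a single sentence template); it is not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_in_words(predict_proba, x, feature_names, phrases,
                     n_samples=2000, scale=0.5, top_k=3, seed=0):
    """LIME-style local explanation rendered as a template sentence.

    `phrases` maps a feature name to a short clinical phrase; both the
    mapping and the final template are illustrative, not the published
    vocabulary from the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)

    # 1. Perturb around the instance and query the black-box model.
    samples = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    target = predict_proba(samples)[:, 1]
    weights = np.exp(-np.linalg.norm(samples - x, axis=1) ** 2)

    # 2. Fit a weighted linear surrogate and take its strongest features.
    surrogate = Ridge(alpha=1.0).fit(samples, target, sample_weight=weights)
    top = np.argsort(-np.abs(surrogate.coef_))[:top_k]

    # 3. Turn the selected features into a sentence via the phrase vocabulary.
    parts = [phrases.get(feature_names[i], feature_names[i]) for i in top]
    return "The finding is suspicious mainly because of " + ", ".join(parts) + "."
```

The paper's second algorithm replaces the phrase-embedding step with a set of simple classifiers over "sparse" histograms; the template-filling step above is only a stand-in for that component.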
15. Causability and explainability of artificial intelligence in medicine.
- Author
-
Holzinger, Andreas, Langs, Georg, Denk, Helmut, Zatloukal, Kurt, and Müller, Heimo
- Subjects
ARTIFICIAL intelligence, MACHINE learning, DEEP learning, DEFINITIONS, DRUGS, PROBABILISTIC databases
- Abstract
Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use-case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
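Record 15 defines causability, by analogy to usability, as a measurable quality of explanations with respect to the human user. Purely as an illustration of "measurement" in this sense, the function below aggregates Likert-scale ratings of an explanation into a normalised 0-1 score, in the spirit of questionnaire instruments such as the later System Causability Scale; the scoring rule here is a generic assumption, not taken from the article.

```python
def causability_score(ratings, scale_max=5):
    """Aggregate per-item Likert ratings (1..scale_max) into a 0-1 score.

    Illustrative only: the normalisation mirrors common usability-style
    questionnaires and is not the article's own measurement procedure.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    if any(not 1 <= r <= scale_max for r in ratings):
        raise ValueError("ratings must lie between 1 and scale_max")
    return sum(r - 1 for r in ratings) / (len(ratings) * (scale_max - 1))

# e.g. causability_score([4, 5, 3, 4, 4, 2, 5, 4, 3, 4]) -> 0.70
```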