8 results for "Lim, Brian"
Search Results
2. Towards Relatable Explainable AI with the Perceptual Process
- Author
- Zhang, Wencan and Lim, Brian Y.
- Subjects
FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction, I.2.0, Human-Computer Interaction (cs.HC)
- Abstract
Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between examples or raw features, which remain difficult to interpret, since they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations. We investigated the application of vocal emotion recognition, and implemented a modular multi-task deep neural network to predict and explain emotions from speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful and further enhanced with semantic cues, but not saliency explanations. This work provides insights into providing and evaluating relatable contrastive explainable AI for perception applications. Comment: 14 pages, 7 figures, 4 tables; accepted to CHI 2022.
- Published
- 2022
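The abstract above describes a modular multi-task deep neural network that both predicts and explains vocal emotions. As a rough illustration of that idea, here is a minimal PyTorch sketch of a shared-encoder network with separate prediction and saliency heads; the class name, layer sizes, and head design are assumptions for exposition, not the published RexNet architecture.

```python
# Illustrative sketch only: a shared-encoder, multi-head ("modular multi-task")
# network in the spirit of the abstract above. Names and layer sizes are
# assumptions, not the published RexNet implementation.
import torch
import torch.nn as nn

class MultiTaskEmotionNet(nn.Module):
    def __init__(self, n_emotions=6):
        super().__init__()
        # Shared encoder over a log-mel spectrogram (batch, 1, n_mels, time)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Head 1: emotion prediction
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, n_emotions))
        # Head 2: a coarse saliency map over the input, which contrastive
        # ("why A rather than B") explanations could build on downstream
        self.saliency_head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        features = self.encoder(x)
        logits = self.classifier(features)
        saliency = torch.sigmoid(self.saliency_head(features))
        return logits, saliency

model = MultiTaskEmotionNet()
logits, saliency = model(torch.randn(2, 1, 64, 128))  # dummy spectrogram batch
```

Training such a model would combine a classification loss on the emotion logits with an explanation objective on the saliency head; the paper's contrastive and counterfactual explanation components are not reproduced here.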
3. Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
- Author
- Zhang, Wencan, Dimiccoli, Mariella, and Lim, Brian Y.
- Subjects
FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction, I.2.0, Human-Computer Interaction (cs.HC)
- Abstract
Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these become distorted and misleading when explaining predictions of images that are subject to systematic error (bias). Furthermore, the distortions persist despite model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations about these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness for a wide range of applications with data biases. Comment: This work was intended as a replacement of arXiv:2012.05567, and any subsequent updates will appear there.
- Published
- 2022
- Full Text
- View/download PDF
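The Debiased-CAM abstract above describes a multi-input, multi-task model trained with auxiliary tasks for explanation and bias-level prediction. The sketch below shows one plausible way to wire such a setup in PyTorch: a backbone with a classification head, a bias-level regression head, and a loss term that aligns the CAM computed on a biased image with the CAM from its unbiased counterpart. The network, loss weights, and CAM computation are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a multi-task training objective in the spirit of
# the Debiased-CAM abstract above. Heads, weights, and CAM computation are
# assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class DebiasedCamNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # conv feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512, n_classes)   # primary task: class prediction
        self.bias_head = nn.Linear(512, 1)            # auxiliary task: bias level

    def forward(self, x):
        fmap = self.features(x)                        # (B, 512, H, W)
        pooled = self.pool(fmap).flatten(1)            # (B, 512)
        logits = self.classifier(pooled)
        bias_level = self.bias_head(pooled).squeeze(1)
        # Class activation map for the predicted class (standard CAM weighting)
        weights = self.classifier.weight[logits.argmax(1)]  # (B, 512)
        cam = torch.einsum('bc,bchw->bhw', weights, fmap)
        return logits, bias_level, cam

def debiased_loss(logits, bias_pred, cam, labels, bias_true, cam_unbiased,
                  w_bias=0.1, w_cam=1.0):
    # Classification loss + auxiliary bias-level regression
    # + alignment of the biased image's CAM to the CAM of its unbiased version.
    return (F.cross_entropy(logits, labels)
            + w_bias * F.mse_loss(bias_pred, bias_true)
            + w_cam * F.mse_loss(cam, cam_unbiased))
```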
4. TExSS: Transparency and Explanations in Smart Systems
- Author
- Smith-Renner, Alison, Kleanthous, Styliani, Dodge, Jonathan, Dugan, Casey, Kyung Lee, Min, Lim, Brian, Kuflik, Tsvi, Sarkar, Advait, and Shulner-Tal, Avital
- Abstract
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources in order to support human decision-making and/or take direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest as means to provide more effective system training, better reliability, and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigating algorithmic biases that researchers can apply even without access to a given system's inner workings, such as awareness, data provenance, and validation.
- Published
- 2021
5. Exploiting Explanations for Model Inversion Attacks
- Author
- Zhao, Xuejun, Zhang, Wencan, Xiao, Xiaokui, and Lim, Brian Y.
- Subjects
FOS: Computer and information sciences, Computer Science - Computers and Society, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computers and Society (cs.CY), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG)
- Abstract
The successful deployment of artificial intelligence (AI) in many domains, from healthcare to hiring, requires its responsible use, particularly in model explanations and privacy. Explainable artificial intelligence (XAI) provides more information to help users understand model decisions, yet this additional knowledge exposes additional risks for privacy attacks; hence, providing explanations can harm privacy. We study this risk for image-based model inversion attacks and identify several attack architectures with increasing performance to reconstruct private image data from model explanations. We developed several multi-modal transposed CNN architectures that achieve significantly higher inversion performance than using the target model prediction alone. These XAI-aware inversion models were designed to exploit the spatial knowledge in image explanations. To understand which explanations have higher privacy risk, we analyzed how various explanation types and factors influence inversion performance. Even when a target model does not provide explanations, we demonstrate increased inversion performance by exploiting explanations of surrogate models through attention transfer: this method first inverts an explanation from the target prediction, then reconstructs the target image. These threats highlight the urgent and significant privacy risks of explanations and call for new privacy-preservation techniques that balance the dual requirements of AI explainability and privacy. Comment: ICCV 2021.
- Published
- 2021
- Full Text
- View/download PDF
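The abstract above describes multi-modal transposed CNN architectures that reconstruct private images from a target model's prediction together with its explanation. A minimal illustrative sketch of that idea follows: one branch embeds the prediction vector, another preserves the spatial layout of a saliency-map explanation, and a transposed-convolution decoder fuses them into a reconstructed image. The shapes, fusion strategy, and class name are assumptions, not the attack architecture from the paper.

```python
# Illustrative sketch only: a multi-modal inversion network in the spirit of
# the abstract above, combining a prediction vector with a saliency explanation
# to reconstruct the input image. All layer shapes are assumptions.
import torch
import torch.nn as nn

class XaiAwareInverter(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Branch 1: embed the prediction vector into a coarse feature map
        self.pred_branch = nn.Sequential(nn.Linear(n_classes, 64 * 8 * 8), nn.ReLU())
        # Branch 2: encode the 1-channel saliency explanation, keeping its spatial layout
        self.expl_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        # Decoder: transposed convolutions reconstruct the private image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pred, explanation):
        p = self.pred_branch(pred).view(-1, 64, 8, 8)
        e = self.expl_branch(explanation)
        return self.decoder(torch.cat([p, e], dim=1))  # fuse the two modalities

inverter = XaiAwareInverter()
fake_pred = torch.softmax(torch.randn(2, 10), dim=1)
fake_expl = torch.rand(2, 1, 32, 32)
reconstruction = inverter(fake_pred, fake_expl)  # (2, 3, 32, 32)
```

In an actual attack of this kind, the inverter would be trained on an auxiliary dataset to minimize reconstruction error against the original inputs.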
6. Quantitative Evidence for Revising the Definition of Primary Graft Dysfunction after Lung Transplant
- Author
- Cantu, Edward, Diamond, Joshua M, Suzuki, Yoshikazu, Lasky, Jared, Schaufler, Christian, Lim, Brian, Shah, Rupal, Porteous, Mary, Lederer, David J, Kawut, Steven M, Palmer, Scott M, Snyder, Laurie D, Hartwig, Matthew G, Lama, Vibha N, Bhorade, Sangeeta, Bermudez, Christian, Crespo, Maria, McDyer, John, Wille, Keith, Orens, Jonathan, Shah, Pali D, Weinacker, Ann, Weill, David, Wilkes, David, Roe, David, Hage, Chadi, Ware, Lorraine B, Bellamy, Scarlett L, Christie, Jason D, and Lung Transplant Outcomes Group
- Subjects
Graft Rejection, Adult, Male, Consensus, Time Factors, Respiratory System, Kaplan-Meier Estimate, Severity of Illness Index, Risk Assessment, Medical and Health Sciences, lung transplant outcomes, Lung Transplant Outcomes Group, Cohort Studies, Young Adult, Rare Diseases, Cause of Death, Humans, Acute Respiratory Distress Syndrome, Lung, Proportional Hazards Models, Retrospective Studies, Transplantation, Graft Survival, Reproducibility of Results, Organ Transplantation, Middle Aged, lung transplant, United States, Survival Rate, Logistic Models, Good Health and Well Being, primary graft dysfunction, Female, Biomarkers, Lung Transplantation
- Abstract
Rationale: Primary graft dysfunction (PGD) is a form of acute lung injury that occurs after lung transplantation. The definition of PGD was standardized in 2005. Since that time, clinical practice has evolved, and this definition is increasingly used as a primary endpoint for clinical trials; therefore, validation is warranted. Objectives: We sought to determine whether refinements to the 2005 consensus definition could further improve construct validity. Methods: Data from the Lung Transplant Outcomes Group multicenter cohort were used to compare variations on the PGD definition, including alternate oxygenation thresholds, inclusion of additional severity groups, and effects of procedure type and mechanical ventilation. Convergent and divergent validity were compared for mortality prediction and concurrent lung injury biomarker discrimination. Measurements and Main Results: A total of 1,179 subjects from 10 centers were enrolled from 2007 to 2012. Median length of follow-up was 4 years (interquartile range = 2.4-5.9). No mortality differences were noted between no PGD (grade 0) and mild PGD (grade 1). Significantly better mortality discrimination was evident for all definitions using later time points (48, 72, or 48-72 hours; P
- Published
- 2018
7. Improving Understanding and Trust with Intelligibility in Context-Aware Applications
- Author
- Lim, Brian Y.
- Subjects
Applied Computer Science
- Abstract
To facilitate everyday activities, context-aware applications use sensors to detect what is happening and use increasingly complex mechanisms (e.g., big rule-sets or machine learning) to infer the user's context and intent. For example, a mobile application can recognize that the user is in a conversation and suppress any incoming calls. When the application works well, this implicit sensing and complex inference remain invisible. However, when it behaves inappropriately or unexpectedly, users may not understand its behavior. This can lead users to mistrust, misuse, or even abandon it. To counter this lack of understanding and loss of trust, context-aware applications should be intelligible, capable of explaining their behavior. We investigate providing intelligibility in context-aware applications and evaluate its usefulness in improving user understanding and trust. Specifically, this thesis supports intelligibility in context-aware applications through the provision of explanations that answer different question types, such as: Why did it do X? Why did it not do Y? If I did W, what will it do? How can I get the application to do Y? This thesis takes a three-pronged approach to investigating intelligibility by (i) eliciting the user requirements for intelligibility, to identify what explanation types end-users are interested in asking context-aware applications, (ii) supporting the development of intelligible context-aware applications with a software toolkit and the design of these applications with design and usability recommendations, and (iii) evaluating the impact of intelligibility on user understanding and trust under various situations and levels of application reliability, and measuring how users use an interactive intelligible prototype. We show that users are willing to use well-designed intelligibility features, and that this can improve user understanding and trust in the adaptive behavior of context-aware applications.
- Published
- 2012
- Full Text
- View/download PDF
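The thesis abstract above organizes intelligibility around a set of explanation question types (Why, Why Not, What If, How To). The toy Python sketch below expresses those question types as a small interface around the call-suppression example from the abstract; it mirrors the idea of an intelligibility toolkit but is not that toolkit's actual API, and all names are hypothetical.

```python
# Illustrative sketch only: the explanation question types discussed in the
# abstract above, expressed as a tiny interface. Not the actual toolkit API.
from enum import Enum, auto
from typing import Any, Protocol

class QuestionType(Enum):
    WHY = auto()       # Why did it do X?
    WHY_NOT = auto()   # Why did it not do Y?
    WHAT_IF = auto()   # If I did W, what will it do?
    HOW_TO = auto()    # How can I get the application to do Y?

class Intelligible(Protocol):
    def explain(self, question: QuestionType, **context: Any) -> str: ...

class CallSuppressor:
    """Toy context-aware behavior: suppress calls during a conversation."""
    def decide(self, in_conversation: bool) -> str:
        return "suppress call" if in_conversation else "ring"

    def explain(self, question: QuestionType, **context: Any) -> str:
        if question is QuestionType.WHY:
            return "Calls were suppressed because a conversation was detected."
        if question is QuestionType.WHY_NOT:
            return "The phone did not ring because conversation confidence was above threshold."
        if question is QuestionType.WHAT_IF:
            hypothetical = bool(context.get("in_conversation"))
            return f"If in_conversation={hypothetical}, it would {self.decide(hypothetical)}."
        return "To make the phone ring, end the conversation or disable suppression."

print(CallSuppressor().explain(QuestionType.WHAT_IF, in_conversation=False))
```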
8. EXSS-ATEC: Explainable smart systems and algorithmic transparency in emerging technologies 2020
- Author
- Smith-Renner, Alison, Kleanthous, Styliani, Lim, Brian, Kuflik, Tsvi, Stumpf, Simone, Otterbacher, Jahna, Sarkar, Advait, Dugan, Casey, and Shulner, Avital
- Abstract
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources in order to support human decision-making and/or take direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest as means to provide more effective system training, better reliability, and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigating algorithmic biases that researchers can apply even without access to a given system's inner workings, such as awareness, data provenance, and validation.