Search

Your search for the keyword "Interpretability" returned a total of 14,260 results.

Search Constraints

Descriptor: "Interpretability"

Search Results

1. Universal representation learning for multivariate time series using the instance-level and cluster-level supervised contrastive learning.

2. Predicting and explaining parking space sharing behaviors using LightGBM and SHAP with individual heterogeneity considered.

3. Reliability and Interpretability in Science and Deep Learning.

4. Explainable artificial intelligence models for mineral prospectivity mapping.

6. Interpretable machine learning for evaluating risk factors of freeway crash severity.

7. LK-IB: a hybrid framework with legal knowledge injection for compulsory measure prediction.

8. Pattern‐centric transformation of omics data grounded on discriminative gene associations aids predictive tasks in TCGA while ensuring interpretability.

9. A binarization approach to model interactions between categorical predictors in Generalized Linear Models.

10. Partitioned least squares.

11. FairMOE: counterfactually-fair mixture of experts with levels of interpretability.

12. L2XGNN: learning to explain graph neural networks.

13. Improving interpretability via regularization of neural activation sensitivity.

14. Evaluating feature attribution methods in the image domain.

15. Interpretable representations in explainable AI: from theory to practice.

16. Mining Pareto-optimal counterfactual antecedents with a branch-and-bound model-agnostic algorithm.

17. Sparse oblique decision trees: a tool to understand and manipulate neural net features.

18. A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts.

19. An interpretable capacity prediction method for lithium-ion battery considering environmental interference.

20. Light Recurrent Unit: Towards an Interpretable Recurrent Neural Network for Modeling Long-Range Dependency.

21. Generated or Not Generated (GNG): The Importance of Background in the Detection of Fake Images.

22. Towards reconciling usability and usefulness of policy explanations for sequential decision-making systems.

23. Survey on Knowledge Representation Models in Healthcare.

24. Characterizing climate pathways using feature importance on echo state networks.

25. DKNN: deep kriging neural network for interpretable geospatial interpolation.

26. Transforming gradient-based techniques into interpretable methods.

27. Recurrent variational autoencoder approach for remaining useful life estimation.

28. On inductive biases for the robust and interpretable prediction of drug concentrations using deep compartment models.

29. Interpretability Analysis of Shear Capacity in Reinforced Recycled Aggregate Concrete Beams Using Tree Models.

30. A survey on interpretable reinforcement learning.

31. Ijuice: integer JUstIfied counterfactual explanations.

32. Interpretability of rectangle packing solutions with Monte Carlo tree search.

33. HIEF: a holistic interpretability and explainability framework.

34. Integrated machine learning and deep learning for predicting diabetic nephropathy model construction, validation, and interpretability.

35. Explainable artificial intelligence (XAI) in finance: a systematic literature review.

36. Explainable drug repurposing via path based knowledge graph completion.

37. A Student Performance Prediction Model Based on Hierarchical Belief Rule Base with Interpretability.

38. Adaptive Mask-Based Interpretable Convolutional Neural Network (AMI-CNN) for Modulation Format Identification.

39. Unsupervised Machine Learning‐Derived Anion‐Exchange Membrane Polymers Map: A Guideline for Polymers Exploration and Design.

40. Coupling Fault Diagnosis Based on Dynamic Vertex Interpretable Graph Neural Network.

41. Enhancing Interpretability in Medical Image Classification by Integrating Formal Concept Analysis with Convolutional Neural Networks.

42. Susceptibility Modeling and Potential Risk Analysis of Thermokarst Hazard in Qinghai–Tibet Plateau Permafrost Landscapes Using a New Interpretable Ensemble Learning Method.

43. Intelligent regional subsurface prediction based on limited borehole data and interpretability stacking technique of ensemble learning.

44. Essential hereditary undecidability.

46. MDUNet: deep-prior unrolling network with multi-parameter data integration for low-dose computed tomography reconstruction.

47. MultiPINN: multi-head enriched physics-informed neural networks for differential equations solving.

48. Neural network-driven interpretability analysis for evaluating compressive stress in polymer foams.

49. Sentiment Analysis Meets Explainable Artificial Intelligence: A Survey on Explainable Sentiment Analysis.

50. A fine-grained convolutional recurrent model for obstructive sleep apnea detection.
