24 results for "Denœux, Thierry"
Search Results
2. Deep evidential fusion with uncertainty quantification and contextual discounting for multimodal medical image segmentation
- Author: Huang, Ling, Ruan, Su, Decazes, Pierre, and Denoeux, Thierry
- Subjects: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition
- Abstract: Single-modality medical images generally do not contain enough information to reach an accurate and reliable diagnosis. For this reason, physicians generally diagnose diseases based on multimodal medical images such as PET/CT. The effective fusion of multimodal information is essential to reach a reliable decision and to explain how that decision is made. In this paper, we propose a fusion framework for multimodal medical image segmentation based on deep learning and the Dempster-Shafer theory of evidence. In this framework, the reliability of each single-modality image when segmenting different objects is taken into account through a contextual discounting operation. The discounted pieces of evidence from each modality are then combined by Dempster's rule to reach a final decision. Experimental results with a PET-CT dataset with lymphomas and a multi-MRI dataset with brain tumors show that our method outperforms state-of-the-art methods in accuracy and reliability. (See the code sketch after this entry.)
- Published: 2023
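A minimal sketch of the two operations named in this abstract, discounting a mass function and combining with Dempster's rule, on a toy two-class frame. The paper's contextual discounting learns one rate per class; this shows the classical single-rate special case, and all numbers are illustrative.

```python
# Mass functions are dicts mapping frozensets (focal sets) to masses.
OMEGA = frozenset({"tumor", "background"})

def discount(m, rate):
    """Move a fraction `rate` of every focal mass to Omega (total ignorance)."""
    out = {A: (1.0 - rate) * v for A, v in m.items()}
    out[OMEGA] = out.get(OMEGA, 0.0) + rate
    return out

def dempster(m1, m2):
    """Normalized Dempster combination of two mass functions."""
    joint, conflict = {}, 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            C = A & B
            if C:
                joint[C] = joint.get(C, 0.0) + v * w
            else:
                conflict += v * w
    return {A: v / (1.0 - conflict) for A, v in joint.items()}

# Evidence from two modalities at one voxel; PET is deemed less reliable here.
m_pet = {frozenset({"tumor"}): 0.8, OMEGA: 0.2}
m_ct = {frozenset({"background"}): 0.5, OMEGA: 0.5}
print(dempster(discount(m_pet, 0.3), discount(m_ct, 0.1)))
```

Discounting weakens the less reliable source before fusion, so conflicting evidence from it pulls the fused decision less strongly.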
3. An Evidential Neural Network Model for Regression Based on Random Fuzzy Numbers
- Author: Denoeux, Thierry
- Subjects: Computer Science - Machine Learning
- Abstract: We introduce a distance-based neural network model for regression, in which prediction uncertainty is quantified by a belief function on the real line. The model interprets the distances of the input vector to prototypes as pieces of evidence represented by Gaussian random fuzzy numbers (GRFNs) and combined by the generalized product-intersection rule, an operator that extends Dempster's rule to random fuzzy sets. The network output is a GRFN that can be summarized by three numbers characterizing the most plausible predicted value, variability around this value, and epistemic uncertainty. Experiments with real datasets demonstrate the very good performance of the method compared to state-of-the-art evidential and statistical learning algorithms. Keywords: Evidence theory, Dempster-Shafer theory, belief functions, machine learning, random fuzzy sets., Comment: 7th International Conference on Belief Functions (BELIEF 2022), Paris, France, October 26-28, 2022
- Published: 2022
4. Evidence fusion with contextual discounting for multi-modality medical image segmentation
- Author: Huang, Ling, Denoeux, Thierry, Vera, Pierre, and Ruan, Su
- Subjects: Computer Science - Computer Vision and Pattern Recognition
- Abstract: As information sources are usually imperfect, it is necessary to take their reliability into account in multi-source information fusion tasks. In this paper, we propose a new deep framework allowing us to merge multi-MR image segmentation results using the formalism of Dempster-Shafer theory while taking into account the reliability of different modalities relative to different classes. The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module, which assigns a vector of discount rates to each modality's evidence and combines the discounted evidence using Dempster's rule. The whole framework is trained by minimizing a new loss function based on a discounted Dice index to increase segmentation accuracy and reliability. The method was evaluated on the BraTS 2021 database of 1251 patients with brain tumors. Quantitative and qualitative results show that our method outperforms the state of the art, and implements an effective new idea for merging multimodal information within deep neural networks. (See the loss sketch after this entry.), Comment: MICCAI 2022
- Published: 2022
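A hedged sketch of how a Dice-based loss over fused, discounted voxel-wise evidence might be wired up for end-to-end training. The fusion below is the special case of two simple mass functions both focused on the tumor class (so Dempster's rule produces no conflict), with one learnable discount rate per modality rather than the paper's per-class rate vectors; shapes and values are illustrative.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss; `pred` holds fused tumor masses in [0, 1]."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fuse(m1, m2, rates):
    """Dempster fusion of two discounted simple mass functions focused on
    'tumor': with no conflict, fused tumor mass = 1 - (1 - s1)(1 - s2)."""
    s1 = (1 - rates[0]) * m1          # discounted tumor mass, modality 1
    s2 = (1 - rates[1]) * m2          # discounted tumor mass, modality 2
    return 1 - (1 - s1) * (1 - s2)

torch.manual_seed(0)
m1 = torch.rand(8, 1, 4, 4)           # stand-ins for per-voxel network outputs
m2 = torch.rand(8, 1, 4, 4)
target = (torch.rand(8, 1, 4, 4) > 0.5).float()
rates = torch.nn.Parameter(torch.tensor([0.2, 0.2]))  # learnable discount rates

loss = dice_loss(fuse(m1, m2, rates), target)
loss.backward()                       # gradients flow into the discount rates
print(float(loss), rates.grad)
```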
5. A Distributional Approach for Soft Clustering Comparison and Evaluation
- Author: Campagner, Andrea, Ciucci, Davide, and Denœux, Thierry
- Subjects: Computer Science - Machine Learning; Computer Science - Artificial Intelligence
- Abstract: The development of external evaluation criteria for soft clustering (SC) has received limited attention: existing methods do not provide a general approach to extend comparison measures to SC, and are unable to account for the uncertainty represented in the results of SC algorithms. In this article, we propose a general method to address these limitations, grounded in a novel interpretation of SC as distributions over hard clusterings, which we call distributional measures. We provide an in-depth study of complexity- and metric-theoretic properties of the proposed approach, and we describe approximation techniques that can make the calculations tractable. Finally, we illustrate our approach through a simple but illustrative experiment. (See the sketch after this entry.), Comment: This is the extended version of the article "A Distributional Approach for Soft Clustering Comparison and Evaluation", accepted at BELIEF 2022 (http://hebergement.universite-paris-saclay.fr/belief2022/). Please cite the proceedings version of the article.
- Published: 2022
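A sketch of the "distributions over hard clusterings" reading of this abstract: sample hard clusterings by drawing each object's label from its membership vector (independence across objects is my simplifying assumption) and estimate an expected Rand index by Monte Carlo, one of the approximation routes the abstract alludes to. All names are mine.

```python
import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Rand index between two hard clusterings given as label arrays."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

def expected_rand(P, Q, n_samples=2000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the expected Rand index between two soft
    clusterings P, Q (rows = objects, columns = membership degrees)."""
    total = 0.0
    for _ in range(n_samples):
        a = np.array([rng.choice(P.shape[1], p=row) for row in P])
        b = np.array([rng.choice(Q.shape[1], p=row) for row in Q])
        total += rand_index(a, b)
    return total / n_samples

P = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
Q = np.array([[1.0, 0.0], [0.7, 0.3], [0.0, 1.0], [0.4, 0.6]])
print(expected_rand(P, Q))
```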
6. Application of belief functions to medical image segmentation: A review
- Author: Huang, Ling, Ruan, Su, and Denoeux, Thierry
- Subjects: Computer Science - Computer Vision and Pattern Recognition
- Abstract: The investigation of uncertainty is of major importance in risk-critical applications such as medical image segmentation. Belief function theory, a formal framework for uncertainty analysis and the fusion of multiple pieces of evidence, has made significant contributions to medical image segmentation, especially since the development of deep learning. In this paper, we provide an introduction to medical image segmentation methods using belief function theory. We classify the methods according to the fusion step and explain how information with uncertainty or imprecision is modeled and fused with belief function theory. In addition, we discuss the challenges and limitations of present belief function-based medical image segmentation methods and propose directions for future research. Future research could investigate both belief function theory and deep learning to achieve more promising and reliable segmentation results., Comment: Accepted by Information Fusion
- Published: 2022
7. Reasoning with fuzzy and uncertain evidence using epistemic random fuzzy sets: general framework and practical models
- Author: Denoeux, Thierry
- Subjects: Computer Science - Artificial Intelligence; Statistics - Methodology
- Abstract: We introduce a general theory of epistemic random fuzzy sets for reasoning with fuzzy or crisp evidence. This framework generalizes both the Dempster-Shafer theory of belief functions and possibility theory. Independent epistemic random fuzzy sets are combined by the generalized product-intersection rule, which extends both Dempster's rule for combining belief functions and the product conjunctive combination of possibility distributions. We introduce Gaussian random fuzzy numbers and their multi-dimensional extensions, Gaussian random fuzzy vectors, as practical models for quantifying uncertainty about scalar or vector quantities. Closed-form expressions for the combination, projection and vacuous extension of Gaussian random fuzzy numbers and vectors are derived. (See the sketch after this entry.)
- Published: 2022
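One ingredient mentioned in this abstract, the product conjunctive combination of possibility distributions, is easy to check numerically for Gaussian possibility distributions: the renormalized product is again Gaussian-shaped, with precisions adding and modes combining as a precision-weighted mean. This is only the possibilistic special case, not the full closed-form GRFN calculus of the paper.

```python
import numpy as np

def gaussian_poss(x, mode, h):
    """Gaussian possibility distribution with given mode and precision h."""
    return np.exp(-0.5 * h * (x - mode) ** 2)

def combine(mode1, h1, mode2, h2):
    """Product conjunctive combination of two Gaussian possibility
    distributions, renormalized so the maximum equals 1: the result has
    precision h1 + h2 and mode (h1*mode1 + h2*mode2) / (h1 + h2)."""
    h = h1 + h2
    return (h1 * mode1 + h2 * mode2) / h, h

x = np.linspace(-5, 5, 1001)
mode, h = combine(0.0, 1.0, 2.0, 4.0)
# Sanity check against the pointwise, renormalized product:
prod = gaussian_poss(x, 0.0, 1.0) * gaussian_poss(x, 2.0, 4.0)
assert np.allclose(prod / prod.max(), gaussian_poss(x, mode, h), atol=1e-9)
print(mode, h)   # 1.6, 5.0
```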
8. Lymphoma segmentation from 3D PET-CT images using a deep evidential network
- Author: Huang, Ling, Ruan, Su, Decazes, Pierre, and Denoeux, Thierry
- Subjects: Computer Science - Computer Vision and Pattern Recognition
- Abstract: An automatic evidential segmentation method based on Dempster-Shafer theory and deep learning is proposed to segment lymphomas from three-dimensional Positron Emission Tomography (PET) and Computed Tomography (CT) images. The architecture is composed of a deep feature-extraction module and an evidential layer. The feature-extraction module uses an encoder-decoder framework to extract semantic feature vectors from 3D inputs. The evidential layer then uses prototypes in the feature space to compute a belief function at each voxel, quantifying the uncertainty about the presence or absence of a lymphoma at that location. Two evidential layers are compared, based on different ways of using distances to prototypes for computing mass functions. The whole model is trained end-to-end by minimizing the Dice loss function. The proposed combination of deep feature extraction and evidential segmentation is shown to outperform the baseline UNet model as well as three other state-of-the-art models on a dataset of 173 patients. (See the evidential-layer sketch after this entry.), Comment: Preprint submitted to the International Journal of Approximate Reasoning
- Published: 2022
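A hedged sketch of a distance-based evidential layer in the spirit of Denœux's evidential neural network: each prototype supports its class with strength alpha * exp(-gamma * d^2), and the resulting simple mass functions are conjunctively combined. The paper compares two variants of such a layer; the parameter values and two-class frame here are illustrative.

```python
import numpy as np

def combine_unnormalized(m1, m2):
    """Conjunctive combination on the frame {lymphoma, background}; the mass
    that would fall on the empty set (conflict) is dropped here for brevity,
    so the result may sum to less than 1."""
    lym = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]
    bg = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]
    omega = m1[2] * m2[2]
    return np.array([lym, bg, omega])

def evidential_layer(feat, prototypes, gamma=1.0, alpha=0.9):
    """Map a feature vector to a mass vector [m({lym}), m({bg}), m(Omega)].
    Each prototype (labeled 1 = lymphoma, 0 = background) contributes a simple
    mass function with strength alpha * exp(-gamma * d^2)."""
    m = np.array([0.0, 0.0, 1.0])      # start from the vacuous mass function
    for proto, label in prototypes:
        d2 = np.sum((feat - proto) ** 2)
        s = alpha * np.exp(-gamma * d2)
        mi = np.array([s if label == 1 else 0.0,
                       s if label == 0 else 0.0,
                       1.0 - s])
        m = combine_unnormalized(m, mi)
    return m

protos = [(np.array([0.0, 0.0]), 1), (np.array([3.0, 3.0]), 0)]
print(evidential_layer(np.array([0.2, -0.1]), protos))
```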
9. Clustering acoustic emission data streams with sequentially appearing clusters using mixture models
- Author: Ramasso, Emmanuel, Denoeux, Thierry, and Chevallier, Gael
- Subjects: Statistics - Machine Learning; Computer Science - Machine Learning; Computer Science - Sound; Statistics - Applications; Statistics - Methodology
- Abstract: The interpretation of unlabeled acoustic emission (AE) data classically relies on general-purpose clustering methods. While several external criteria have been used in the past to select the hyperparameters of those algorithms, few studies have paid attention to the development of dedicated objective functions in clustering methods able to cope with the specificities of AE data. We investigate how to explicitly represent cluster onsets in mixture models in general, and in Gaussian Mixture Models (GMMs) in particular. By modifying the internal criterion of such models, we propose the first clustering method able to provide, through parameters estimated by an expectation-maximization procedure, information about when clusters occur (onsets), how they grow (kinetics), and their level of activation through time. This new objective function accommodates continuous timestamps of AE signals and, thus, their order of occurrence. The method, called GMMSEQ, is experimentally validated on the characterization of the loosening phenomenon in bolted structures under vibration. A comparison with three standard clustering methods on raw streaming data from five experimental campaigns shows that GMMSEQ not only provides useful qualitative information about the timeline of clusters, but also performs better in terms of cluster characterization. In view of developing an open acoustic emission initiative, and in accordance with the FAIR principles, the datasets and code are made available to reproduce the research of this paper. (See the sketch after this entry.)
- Published: 2021
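The abstract does not give the parameterization, so the sketch below is speculative: one natural way to encode onsets and kinetics is a per-cluster sigmoid activation of time, renormalized into mixing proportions and plugged into a standard E-step. All function and parameter names are mine, not GMMSEQ's.

```python
import numpy as np

def mixing_proportions(t, onsets, kinetics):
    """Hypothetical time-dependent mixing proportions: cluster k switches on
    around onsets[k] at rate kinetics[k]; renormalized per timestamp."""
    act = 1.0 / (1.0 + np.exp(-kinetics[None, :] * (t[:, None] - onsets[None, :])))
    return act / act.sum(axis=1, keepdims=True)

def e_step(x, t, means, stds, onsets, kinetics):
    """E-step responsibilities of a 1-D mixture whose weights vary with time.
    Densities are unnormalized Gaussians; the constant cancels in the ratio."""
    pi = mixing_proportions(t, onsets, kinetics)
    dens = np.exp(-0.5 * ((x[:, None] - means[None, :]) / stds[None, :]) ** 2) / stds[None, :]
    resp = pi * dens
    return resp / resp.sum(axis=1, keepdims=True)

t = np.linspace(0.0, 10.0, 5)
x = np.array([0.1, 0.2, 2.9, 3.1, 3.0])
print(e_step(x, t, means=np.array([0.0, 3.0]), stds=np.array([1.0, 1.0]),
             onsets=np.array([0.0, 5.0]), kinetics=np.array([1.0, 2.0])))
```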
10. Fusion of evidential CNN classifiers for image classification
- Author: Tong, Zheng, Xu, Philippe, and Denoeux, Thierry
- Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence
- Abstract: We propose an information-fusion approach based on belief functions to combine convolutional neural networks. In this approach, several pre-trained DS-based CNN architectures extract features from input images and convert them into mass functions on different frames of discernment. A fusion module then aggregates these mass functions using Dempster's rule. An end-to-end learning procedure allows us to fine-tune the overall architecture using a learning set with soft labels, which further improves classification performance. The effectiveness of this approach is demonstrated experimentally using three benchmark databases. (See the sketch after this entry.)
- Published: 2021
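A toy illustration of combining mass functions expressed on different frames of discernment, as this abstract describes: classifier B only distinguishes "pet" from "other", so its masses are refined to the common frame ({pet} -> {cat, dog}) before applying Dempster's rule. Frames, refinement and numbers are invented for illustration.

```python
def dempster(m1, m2):
    """Normalized Dempster combination of mass functions over frozensets."""
    joint, conflict = {}, 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            C = A & B
            if C:
                joint[C] = joint.get(C, 0.0) + v * w
            else:
                conflict += v * w
    return {A: v / (1.0 - conflict) for A, v in joint.items()}

OMEGA = frozenset({"cat", "dog", "other"})
m_a = {frozenset({"cat"}): 0.6, frozenset({"dog"}): 0.2, OMEGA: 0.2}
m_b_refined = {frozenset({"cat", "dog"}): 0.7,   # B's mass on "pet", refined
               frozenset({"other"}): 0.1,
               OMEGA: 0.2}
print(dempster(m_a, m_b_refined))
```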
11. Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation
- Author: Huang, Ling, Denoeux, Thierry, Tonnelet, David, Decazes, Pierre, and Ruan, Su
- Subjects: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition
- Abstract: Lymphoma detection and segmentation from whole-body Positron Emission Tomography/Computed Tomography (PET/CT) volumes are crucial for surgical indication and radiotherapy. Designing automatic segmentation methods capable of effectively exploiting the information from PET and CT, as well as resolving their uncertainty, remains a challenge. In this paper, we propose a lymphoma segmentation model using a UNet with an evidential PET/CT fusion layer. Networks are first trained separately on single-modality volumes to obtain initial segmentation maps, and an evidential fusion layer is proposed to fuse the two pieces of evidence using Dempster-Shafer theory (DST). Moreover, a multi-task loss function is proposed: in addition to the Dice loss for PET and CT segmentation, a loss based on the concordance between the two segmentations is added to constrain the final result. We evaluate our proposal on a database of polycentric PET/CT volumes of patients treated for lymphoma, delineated by experts. Our method achieves accurate segmentation results with a Dice score of 0.726, without any user interaction. Quantitative results show that our method is superior to state-of-the-art methods. (See the loss sketch after this entry.), Comment: MICCAI 2021 Workshop MLMI
- Published: 2021
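A speculative sketch of the multi-task loss this abstract describes: Dice terms for the PET, CT and final segmentations plus a concordance penalty between the two single-modality maps. The abstract does not specify the concordance term, so a squared difference is my stand-in, and the fusion line is a conflict-free special case of DST fusion.

```python
import torch

def soft_dice(p, g, eps=1e-6):
    """Soft Dice coefficient between a probability map p and a binary mask g."""
    inter = (p * g).sum()
    return (2 * inter + eps) / (p.sum() + g.sum() + eps)

def multitask_loss(seg_pet, seg_ct, seg_final, gt, lam=0.5):
    """Hypothetical reading of the multi-task loss: per-branch and final Dice
    losses plus a concordance penalty between the two branch outputs."""
    dice = (1 - soft_dice(seg_pet, gt)) + (1 - soft_dice(seg_ct, gt)) \
         + (1 - soft_dice(seg_final, gt))
    concordance = ((seg_pet - seg_ct) ** 2).mean()
    return dice + lam * concordance

gt = (torch.rand(2, 1, 8, 8) > 0.5).float()
pet, ct = torch.rand(2, 1, 8, 8), torch.rand(2, 1, 8, 8)
final = 1 - (1 - pet) * (1 - ct)      # stand-in for the evidential fusion layer
print(float(multitask_loss(pet, ct, final, gt)))
```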
12. Evidential segmentation of 3D PET/CT images
- Author: Huang, Ling, Ruan, Su, Decazes, Pierre, and Denoeux, Thierry
- Subjects: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition
- Abstract: PET and CT are two modalities widely used in medical image analysis. Accurately detecting and segmenting lymphomas from these two imaging modalities are critical tasks for cancer staging and radiotherapy planning. However, this task is still challenging due to the complexity of PET/CT images and the computational cost of processing 3D data. In this paper, a segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images. The architecture is composed of a feature extraction module and an evidential segmentation (ES) module. The ES module outputs not only segmentation results (binary maps indicating the presence or absence of lymphoma in each voxel) but also uncertainty maps quantifying the classification uncertainty. The whole model is optimized by minimizing Dice and uncertainty loss functions to increase segmentation accuracy. The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma. Quantitative and qualitative results show that our method outperforms state-of-the-art methods., Comment: BELIEF 2021
- Published: 2021
13. Evidential fully convolutional network for semantic segmentation
- Author: Tong, Zheng, Xu, Philippe, and Denœux, Thierry
- Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence
- Abstract: We propose a hybrid architecture composed of a fully convolutional network (FCN) and a Dempster-Shafer layer for image semantic segmentation. In the so-called evidential FCN (E-FCN), an encoder-decoder architecture first extracts pixel-wise feature maps from an input image. A Dempster-Shafer layer then computes mass functions at each pixel location based on distances to prototypes. Finally, a utility layer performs semantic segmentation from mass functions and allows for imprecise classification of ambiguous pixels and outliers. We propose an end-to-end learning strategy for jointly updating the network parameters, which can make use of soft (imprecise) labels. Experiments using three databases (Pascal VOC 2011, MIT Scene Parsing and SIFT Flow) show that the proposed combination improves the accuracy and calibration of semantic segmentation by assigning confusing pixels to multi-class sets., Comment: 34 pages, 21 figures
- Published: 2021
14. An evidential classifier based on Dempster-Shafer theory and deep learning
- Author: Tong, Zheng, Xu, Philippe, and Denœux, Thierry
- Subjects: Computer Science - Artificial Intelligence; Computer Science - Machine Learning
- Abstract: We propose a new classifier based on Dempster-Shafer (DS) theory and a convolutional neural network (CNN) architecture for set-valued classification. In this classifier, called the evidential deep-learning classifier, convolutional and pooling layers first extract high-dimensional features from input data. The features are then converted into mass functions and aggregated by Dempster's rule in a DS layer. Finally, an expected utility layer performs set-valued classification based on mass functions. We propose an end-to-end learning strategy for jointly updating the network parameters. Additionally, an approach for selecting partial multi-class acts is proposed. Experiments on image recognition, signal processing, and semantic-relationship classification tasks demonstrate that the proposed combination of deep CNN, DS layer, and expected utility layer makes it possible to improve classification accuracy and to make cautious decisions by assigning confusing patterns to multi-class sets. (See the sketch after this entry.)
- Published: 2021
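A sketch of set-valued classification by expected utility, in the spirit of the expected utility layer described above: precise but risky singleton predictions compete with cautious multi-class sets. The utility form |A|^-gamma and the pessimistic (lower) expectation are my choices, not necessarily the paper's.

```python
from itertools import combinations

FRAME = ("car", "truck", "bus")

def subsets(frame):
    """All non-empty subsets of the frame, i.e., the candidate acts."""
    return [frozenset(c) for r in range(1, len(frame) + 1)
            for c in combinations(frame, r)]

def utility(act, truth, gamma=0.6):
    """Utility of predicting the set `act` when the truth is `truth`: correct
    but imprecise predictions are discounted as |act| ** -gamma (my choice)."""
    return len(act) ** -gamma if truth in act else 0.0

def lower_expected_utility(act, m):
    """Pessimistic expected utility of an act under a mass function m:
    each focal set contributes its worst-case utility."""
    return sum(v * min(utility(act, w) for w in B) for B, v in m.items())

m = {frozenset({"car"}): 0.5,
     frozenset({"car", "truck"}): 0.3,
     frozenset(FRAME): 0.2}
best = max(subsets(FRAME), key=lambda A: lower_expected_utility(A, m))
print(best)   # with these numbers, the cautious choice {car, truck} wins
```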
15. Belief function-based semi-supervised learning for brain tumor segmentation
- Author: Huang, Ling, Ruan, Su, and Denoeux, Thierry
- Subjects: Computer Science - Computer Vision and Pattern Recognition
- Abstract: Precise segmentation of a lesion area is important for optimizing its treatment. Deep learning makes it possible to detect and segment lesions using annotated data. However, obtaining precisely annotated data is very challenging in the medical domain. Moreover, labeling uncertainty and imprecision make segmentation results unreliable. In this paper, we address the uncertain-boundary problem with a new evidential neural network with an information fusion strategy, and the scarcity of annotated data with semi-supervised learning. Experimental results show that our proposal performs better than state-of-the-art methods., Comment: 5 pages, 4 figures, ISBI 2021 conference
- Published: 2021
16. Covid-19 classification with deep neural network and belief functions
- Author: Huang, Ling, Ruan, Su, and Denoeux, Thierry
- Subjects: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition
- Abstract: Computed tomography (CT) images provide useful information for radiologists to diagnose Covid-19. However, visual analysis of CT scans is time-consuming. Thus, it is necessary to develop algorithms for automatic Covid-19 detection from CT images. In this paper, we propose a belief function-based convolutional neural network with semi-supervised training to detect Covid-19 cases. Our method first extracts deep features, maps them into belief degree maps, and makes the final classification decision. Our results are more reliable and explainable than those of traditional deep learning-based classification models. Experimental results show that our approach is able to achieve good performance, with an accuracy of 0.81, an F1 score of 0.812 and an AUC of 0.875., Comment: medical image, Covid-19, belief function, BIHI conference
- Published: 2021
17. EGMM: an Evidential Version of the Gaussian Mixture Model for Clustering
- Author: Jiao, Lianmeng, Denoeux, Thierry, Liu, Zhun-ga, and Pan, Quan
- Subjects: Computer Science - Machine Learning; Statistics - Machine Learning
- Abstract: The Gaussian mixture model (GMM) provides a simple yet principled framework for clustering, with properties suitable for statistical inference. In this paper, we propose a new model-based clustering algorithm, called EGMM (evidential GMM), in the theoretical framework of belief functions, to better characterize cluster-membership uncertainty. With a mass function representing the cluster membership of each object, the evidential Gaussian mixture distribution, composed of components over the powerset of the desired clusters, is proposed to model the entire dataset. The parameters of EGMM are estimated by a specially designed Expectation-Maximization (EM) algorithm. A validity index allowing automatic determination of the proper number of clusters is also provided. The proposed EGMM is as simple as the classical GMM, but can generate a more informative evidential partition for the dataset under consideration. Experiments on synthetic and real datasets show that EGMM performs better than other representative clustering algorithms. Its superiority is further demonstrated by an application to multi-modal brain image segmentation.
- Published: 2020
18. NN-EVCLUS: Neural Network-based Evidential Clustering
- Author: Denoeux, Thierry
- Subjects: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Statistics - Machine Learning
- Abstract: Evidential clustering is an approach to clustering based on the use of Dempster-Shafer mass functions to represent cluster-membership uncertainty. In this paper, we introduce a neural network-based evidential clustering algorithm, called NN-EVCLUS, which learns a mapping from attribute vectors to mass functions, in such a way that more similar inputs are mapped to output mass functions with a lower degree of conflict. The neural network can be paired with a one-class support vector machine to make it robust to outliers and allow for novelty detection. The network is trained to minimize the discrepancy between dissimilarities and degrees of conflict for all or some object pairs. Additional terms can be added to the loss function to account for pairwise constraints or labeled data, which can also be used to adapt the metric. Comparative experiments show the superiority of NN-EVCLUS over state-of-the-art evidential clustering algorithms for a range of unsupervised and constrained clustering tasks involving both attribute and dissimilarity data. (See the loss sketch after this entry.)
- Published: 2020
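A minimal sketch of the quantity NN-EVCLUS trains against: the degree of conflict between pairs of output mass functions should match the (rescaled) pairwise dissimilarities. Here the mass functions are fixed dicts rather than network outputs, and the dissimilarities are assumed already rescaled into [0, 1].

```python
import numpy as np

def conflict(m1, m2):
    """Degree of conflict between two mass functions (dicts over frozensets):
    total product mass on pairs of focal sets with empty intersection."""
    return sum(v * w for A, v in m1.items() for B, w in m2.items() if not A & B)

def evclus_loss(masses, dissim):
    """Stress between pairwise degrees of conflict and dissimilarities."""
    n = len(masses)
    return sum((conflict(masses[i], masses[j]) - dissim[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

W1, W2 = frozenset({"c1"}), frozenset({"c2"})
OMEGA = W1 | W2
masses = [{W1: 0.8, OMEGA: 0.2}, {W1: 0.6, OMEGA: 0.4}, {W2: 0.9, OMEGA: 0.1}]
dissim = np.array([[0.0, 0.1, 0.9],
                   [0.1, 0.0, 0.7],
                   [0.9, 0.7, 0.0]])
print(evclus_loss(masses, dissim))
```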
19. Belief functions induced by random fuzzy sets: A general framework for representing uncertain and fuzzy evidence
- Author: Denoeux, Thierry
- Subjects: Computer Science - Artificial Intelligence; Mathematics - Statistics Theory
- Abstract: We revisit Zadeh's notion of "evidence of the second kind" and show that it provides the foundation for a general theory of epistemic random fuzzy sets, which generalizes both the Dempster-Shafer theory of belief functions and possibility theory. In this perspective, Dempster-Shafer theory deals with belief functions generated by random sets, while possibility theory deals with belief functions induced by fuzzy sets. The more general theory allows us to represent and combine evidence that is both uncertain and fuzzy. We demonstrate the application of this formalism to statistical inference, and show that it makes it possible to reconcile the possibilistic interpretation of likelihood with Bayesian inference.
- Published: 2020
20. From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning (Kay R. Amel group)
- Author: Bouraoui, Zied, Cornuéjols, Antoine, Denœux, Thierry, Destercke, Sébastien, Dubois, Didier, Guillaume, Romain, Marques-Silva, João, Mengin, Jérôme, Prade, Henri, Schockaert, Steven, Serrurier, Mathieu, and Vrain, Christel
- Subjects: Computer Science - Artificial Intelligence
- Abstract: This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have developed quite separately over the last three decades. Some common concerns are identified and discussed, such as the types of representations used, the roles of knowledge and data, the lack or the excess of information, and the need for explanations and causal understanding. Some methodologies combining reasoning and learning are then reviewed (such as inductive logic programming, neuro-symbolic reasoning, formal concept analysis, rule-based representations and ML, uncertainty in ML, and case-based reasoning and analogical reasoning), before discussing examples of synergies between KRR and ML (including topics such as belief functions on regression, the EM algorithm versus revision, the semantic description of vector representations, the combination of deep learning with high-level inference, knowledge graph completion, declarative frameworks for data mining, and preferences and recommendation). This paper is the first step of a work in progress aiming at a better mutual understanding of research in KRR and ML, and of how they could cooperate., Comment: 53 pages
- Published: 2019
21. An Interval-Valued Utility Theory for Decision Making with Dempster-Shafer Belief Functions
- Author: Denoeux, Thierry and Shenoy, Prakash P.
- Subjects: Computer Science - Artificial Intelligence
- Abstract: The main goal of this paper is to describe an axiomatic utility theory for Dempster-Shafer belief function lotteries. The axiomatic framework used is analogous to von Neumann-Morgenstern's utility theory for probabilistic lotteries as described by Luce and Raiffa. Unlike the probabilistic case, our axiomatic framework leads to interval-valued utilities, and therefore to a partial (incomplete) preference order on the set of all belief function lotteries. If the belief function reference lotteries we use are Bayesian belief functions, then our representation theorem coincides with Jaffray's representation theorem for his linear utility theory for belief functions. We illustrate our representation theorem using some examples discussed in the literature, and we propose a simple model for assessing utilities based on an interval-valued pessimism index representing a decision-maker's attitude to ambiguity and indeterminacy. Finally, we compare our decision theory with those proposed by Jaffray, Smets, Dubois et al., Giang and Shenoy, and Shafer. (See the sketch after this entry.)
- Published: 2019
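An interval-valued expected utility for a belief function lottery can be bounded by taking, for each focal set, the worst and best outcome utilities; a sketch with invented outcomes and masses follows. The width of the interval reflects ambiguity, and comparing intervals (e.g., f preferred to g only when the lower bound of f exceeds the upper bound of g) yields exactly the kind of partial preference order the abstract mentions.

```python
def utility_interval(m, u):
    """Lower/upper expected utility of a belief-function lottery: each focal
    set contributes its worst-case and best-case outcome utility."""
    low = sum(v * min(u[w] for w in A) for A, v in m.items())
    high = sum(v * max(u[w] for w in A) for A, v in m.items())
    return low, high

u = {"win": 1.0, "draw": 0.4, "lose": 0.0}
m = {frozenset({"win"}): 0.5,
     frozenset({"draw", "lose"}): 0.3,
     frozenset(u): 0.2}
print(utility_interval(m, u))   # (0.5, 0.82): 0.5*1 vs 0.5*1 + 0.3*0.4 + 0.2*1
```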
22. Calibrated model-based evidential clustering using bootstrapping
- Author: Denoeux, Thierry
- Subjects: Computer Science - Machine Learning; Statistics - Computation; Statistics - Machine Learning
- Abstract: Evidential clustering is an approach to clustering in which cluster-membership uncertainty is represented by a collection of Dempster-Shafer mass functions forming an evidential partition. In this paper, we propose to construct these mass functions by bootstrapping finite mixture models. In the first step, we compute bootstrap percentile confidence intervals for all pairwise probabilities (the probabilities for any two objects to belong to the same class). We then construct an evidential partition such that the pairwise belief and plausibility degrees approximate the bounds of the confidence intervals. This evidential partition is calibrated, in the sense that the pairwise belief-plausibility intervals contain the true probabilities "most of the time", i.e., with a probability close to the defined confidence level. This frequentist property is verified by simulation, and the practical applicability of the method is demonstrated using several real datasets. (See the sketch after this entry.)
- Published: 2019
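A sketch of the first step of the method, assuming scikit-learn's GaussianMixture as the finite mixture model: bootstrap the data, refit, and collect percentile intervals for the pairwise same-cluster probabilities. The second step, fitting an evidential partition whose pairwise belief/plausibility degrees approximate these bounds, is omitted. Note that same-cluster probabilities are invariant to label switching across bootstrap fits.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

def pairwise_same_cluster_prob(gmm, X):
    """P(i and j in the same class) = sum_k p_ik * p_jk, from posteriors."""
    P = gmm.predict_proba(X)
    return P @ P.T

boots = []
for _ in range(100):                      # bootstrap resamples of the data
    idx = rng.integers(0, len(X), len(X))
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X[idx])
    boots.append(pairwise_same_cluster_prob(gmm, X))

lo = np.percentile(boots, 5, axis=0)      # 90% percentile interval bounds,
hi = np.percentile(boots, 95, axis=0)     # to be approximated by pairwise
print(lo[0, 1], hi[0, 1])                 # belief and plausibility degrees
```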
23. Decision-Making with Belief Functions: a Review
- Author: Denoeux, Thierry
- Subjects: Computer Science - Artificial Intelligence
- Abstract: Approaches to decision-making under uncertainty in the belief function framework are reviewed. Most methods are shown to blend criteria for decision under ignorance with the maximum expected utility principle of Bayesian decision theory. A distinction is made between methods that construct a complete preference relation among acts and those that allow incomparability of some acts due to lack of information. Methods developed in the imprecise probability framework are applicable in the Dempster-Shafer context and are also reviewed. Shafer's constructive decision theory, which substitutes the notion of goal for that of utility, is described and contrasted with other approaches. The paper ends by pointing out the need for deeper investigation of fundamental issues related to decision-making with belief functions, and for assessing the descriptive, normative and prescriptive value of the different approaches.
- Published: 2018
24. Logistic Regression, Neural Networks and Dempster-Shafer Theory: a New Perspective
- Author: Denoeux, Thierry
- Subjects: Computer Science - Machine Learning; Statistics - Machine Learning
- Abstract: We revisit logistic regression and its nonlinear extensions, including multilayer feedforward neural networks, by showing that these classifiers can be viewed as converting input or higher-level features into Dempster-Shafer mass functions and aggregating them by Dempster's rule of combination. The probabilistic outputs of these classifiers are the normalized plausibilities corresponding to the underlying combined mass function. This mass function is more informative than the output probability distribution. In particular, it makes it possible to distinguish lack of evidence (when none of the features provides discriminant information) from conflicting evidence (when different features support different classes). This expressivity of mass functions allows us to gain insight into the role played by each input feature in logistic regression, and to interpret hidden unit outputs in multilayer neural networks. It also makes it possible to use alternative decision rules, such as interval dominance, which select a set of classes when the available evidence does not unambiguously point to a single class, thus trading reduced error rate for higher imprecision. (See the sketch after this entry.)
- Published: 2018
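The correspondence described in this abstract can be checked numerically in the two-class case: give each feature a simple mass function of weight |beta_j * x_j| supporting one class, combine by Dempster's rule, and the normalized plausibility of class 1 equals the logistic output sigmoid(beta . x). This sketch omits the intercept and assigns each feature's whole weight to one class, a special case of the paper's weight decomposition.

```python
import numpy as np

def simple_mass(weight):
    """Evidence of weight w supporting class 1 (w >= 0) or class 2 (w < 0):
    a simple mass function stored as [m({w1}), m({w2}), m(Omega)]."""
    s = 1.0 - np.exp(-abs(weight))
    return np.array([s, 0, 1 - s]) if weight >= 0 else np.array([0, s, 1 - s])

def dempster(m1, m2):
    """Normalized Dempster combination on the two-class frame."""
    w1 = m1[0] * (m2[0] + m2[2]) + m1[2] * m2[0]
    w2 = m1[1] * (m2[1] + m2[2]) + m1[2] * m2[1]
    om = m1[2] * m2[2]
    k = m1[0] * m2[1] + m1[1] * m2[0]          # conflict
    return np.array([w1, w2, om]) / (1.0 - k)

beta = np.array([1.2, -0.7])                    # illustrative coefficients
x = np.array([1.0, 2.0])
m = np.array([0.0, 0.0, 1.0])                   # vacuous mass function
for w in beta * x:                              # one piece of evidence per feature
    m = dempster(m, simple_mass(w))

pl = np.array([m[0] + m[2], m[1] + m[2]])       # plausibilities of both classes
p1 = pl[0] / pl.sum()                           # normalized plausibility
print(p1, 1 / (1 + np.exp(-(beta @ x))))        # the two values coincide
```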