Standardizing the Probabilistic Sources of Uncertainty for the sake of Safety Deep Learning
- Publication Year :
- 2023
- Publisher :
- CEUR Workshop Proceedings, 2023.
Abstract
- Nowadays, critical functionalities are increasingly tackled by autonomous decision-making systems, which depend on Artificial Intelligence (e.g. Deep Learning) models. Still, most of these models are designed to maximize generic performance rather than to prevent potentially irreversible errors. While robustness and reliability techniques have been developed in recent years to fill this gap, the sources of uncertainty in those decision models remain ambiguous. With a view to standardizing the uncertainty sources, in this paper we present a formal methodology to disentangle those sources from a probabilistic viewpoint for any supervised learning model, whether regression or classification. Once we associate a formula to each uncertainty type, we expose the terminological disagreement in the literature and propose a terminology aligned with previous works. Finally, based on the proposed formulation, we present an integrated visualization method that represents all the uncertainty sources in a single figure to, ultimately, assist the design of uncertainty-tailored actions.
- The research leading to these results has received funding from the Horizon Europe Programme under the SAFEXPLAIN Project (www.safexplain.eu), grant agreement num. 101069595, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773). Additionally, this work has been partially supported by Grant PID2019-107255GB-C21 funded by MCIN/AEI/10.13039/501100011033.
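- As a point of reference for the kind of probabilistic disentanglement the abstract describes, the sketch below shows a common entropy-based decomposition of predictive uncertainty for a classification ensemble (total = aleatoric + epistemic, where the epistemic term is the mutual information between the prediction and the model). This is a widely used formulation, not necessarily the exact one proposed in the paper; the function names and the toy ensemble are illustrative assumptions.

```python
import math

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a discrete probability vector."""
    return -sum(pi * math.log(pi + eps) for pi in p)

def decompose_uncertainty(member_probs):
    """Entropy-based uncertainty decomposition for a classification ensemble.

    member_probs: list of per-member class-probability vectors
                  (e.g. from MC dropout or a deep ensemble).
    Returns (total, aleatoric, epistemic) in nats, with
    total = aleatoric + epistemic by construction.
    """
    n = len(member_probs)
    k = len(member_probs[0])
    # Mean predictive distribution across ensemble members.
    mean_p = [sum(m[c] for m in member_probs) / n for c in range(k)]
    total = entropy(mean_p)                                # predictive (total) uncertainty
    aleatoric = sum(entropy(m) for m in member_probs) / n  # expected per-member entropy
    epistemic = total - aleatoric                          # mutual information (model uncertainty)
    return total, aleatoric, epistemic

# Members that agree but are individually unsure: aleatoric dominates.
t1, a1, e1 = decompose_uncertainty([[0.6, 0.4], [0.6, 0.4]])
# Confident members that disagree with each other: epistemic dominates.
t2, a2, e2 = decompose_uncertainty([[0.99, 0.01], [0.01, 0.99]])
```

In the first case the members agree, so the epistemic term is (numerically) zero; in the second, each member is confident but they contradict each other, so nearly all of the total uncertainty is epistemic. Mapping each such term to a distinct source is the kind of standardization the paper argues for.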
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.od......3484..cf1d36a2a9981ecf7ce76365bd3088ad