The content of task T6.3 is described as follows: UKON provides an interface for the in-silico models that manages the data flow between the models and the visualization interface. This task is undertaken in cooperation with WP5 (model development); the implementation of the visual tools on the final simulation framework will be undertaken in WP7 as part of the platform integration. The visualization tools proposed for the in-silico models will enable a visualization of the reliability of predictive data. Each calculated model introduces a certain degree of uncertainty, so experts in the medical field cannot fully rely on its results; this uncertainty is communicated effectively by an uncertainty score produced by the prediction algorithms. The uncertainty score reflects the model uncertainty by comparing the predictions with the outcomes recorded in the large-scale database. UKON will design a general uncertainty score, which will be added to the models as a further measure of the reliability of the prediction. To check whether the user-driven exploration interface is tailored to the users' needs, we provide several design prototypes during the project to ensure the appropriateness of the visual encodings. Recurring feedback sessions with experts are needed to confirm our design decisions for the visual tools and to ensure that they remain aligned with the domain experts' requirements.
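As a minimal sketch of how such a score could be computed, the snippet below combines per-prediction entropy (model uncertainty) with a global calibration penalty obtained by comparing predictions against observed outcomes from a reference database. The function and variable names (`predicted probabilities`, `observed outcomes`, the equal weighting of the two terms) are illustrative assumptions, not the project's actual uncertainty score or data interface.

```python
# Hypothetical sketch of a general uncertainty score; names and weighting
# are illustrative assumptions, not the project's actual definition.
import numpy as np


def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample entropy of the predicted class distribution (model uncertainty)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)


def brier_score(probs: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared difference between predictions and one-hot observed outcomes,
    computed against the reference database (calibration component)."""
    one_hot = np.eye(probs.shape[1])[outcomes]
    return float(np.mean(np.sum((probs - one_hot) ** 2, axis=1)))


def uncertainty_score(probs: np.ndarray, outcomes: np.ndarray) -> np.ndarray:
    """Combine normalized per-sample entropy with a global calibration penalty into [0, 1]."""
    entropy = predictive_entropy(probs) / np.log(probs.shape[1])  # normalize to [0, 1]
    calibration_penalty = brier_score(probs, outcomes)            # global term
    return np.clip(0.5 * entropy + 0.5 * calibration_penalty, 0.0, 1.0)


# Example: three binary predictions checked against outcomes from the database.
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
outcomes = np.array([0, 1, 1])
print(uncertainty_score(probs, outcomes))  # higher values indicate less reliable predictions
```

Such a per-prediction score could then be mapped to a visual encoding (e.g., opacity or a dedicated glyph) so that the reliability of each prediction is directly visible in the exploration interface.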
\"The role of uncertainty, awareness, and trust in visual analytics.\" IEEE transactions on visualization and computer graphics 22.1 (2015): 240-249.","[12] S. M. Lundberg and S.-I. Lee, \"A Unified Approach to Interpreting Model Predictions,\" in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., (2017): 4765–4774."]}