
Robust Probabilistic Model Checking with Continuous Reward Domains

Authors :
Ji, Xiaotong
Wang, Hanchun
Filieri, Antonio
Epifani, Ilenia
Publication Year :
2025

Abstract

Probabilistic model checking traditionally verifies properties on the expected value of a measure of interest. This restriction may fail to capture the quality of service of a significant proportion of a system's runs, especially when the probability distribution of the measure of interest is poorly represented by its expected value due to heavy-tailed behavior or multiple modes. Recent works inspired by distributional reinforcement learning use discrete histograms to approximate integer reward distributions, but they struggle with continuous reward spaces and present challenges in balancing accuracy and scalability. We propose a novel method for handling both continuous and discrete reward distributions in Discrete Time Markov Chains using moment matching with Erlang mixtures. By analytically deriving higher-order moments through Moment Generating Functions, our method approximates the reward distribution with theoretically bounded error while preserving the statistical properties of the true distribution. This detailed distributional insight enables the formulation and robust model checking of quality properties based on the entire reward distribution function, rather than restricting to its expected value. We include a theoretical foundation ensuring bounded approximation errors, along with an experimental evaluation demonstrating our method's accuracy and scalability on practical model-checking problems.

Comment: Accepted by the 20th International Conference on Software Engineering for Adaptive and Self-Managing Systems 2025
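To illustrate the core idea of moment matching against an Erlang-shaped approximation, here is a minimal sketch. It is not the paper's method: the toy reward distribution, the use of a single Erlang component (the paper fits Erlang *mixtures*), and the two-moment fit are all simplifying assumptions; in the paper, the moments would be derived analytically from the reward MGF of the DTMC.

```python
import math

# Hypothetical toy reward distribution of a small DTMC, given as
# (reward value, probability) pairs. Values are illustrative only.
rewards = [(1.0, 0.5), (3.0, 0.3), (8.0, 0.2)]

# First two raw moments E[R] and E[R^2]. In the paper these come
# analytically from the Moment Generating Function; here we compute
# them directly from the finite support.
m1 = sum(r * p for r, p in rewards)
m2 = sum(r * r * p for r, p in rewards)
var = m2 - m1 * m1

# Moment-match a single Erlang(k, lam): mean = k/lam, variance = k/lam^2,
# so k ~ mean^2 / variance (rounded to an integer shape), lam = k / mean.
k = max(1, round(m1 * m1 / var))  # integer shape parameter
lam = k / m1                      # rate chosen so the mean matches exactly

def erlang_pdf(t, k, lam):
    """Density of Erlang(k, lam); usable to approximate P(R <= t) by integration."""
    return lam**k * t**(k - 1) * math.exp(-lam * t) / math.factorial(k - 1)

print(f"fitted Erlang: k={k}, lam={lam:.4f} (target mean={m1}, var={var})")
```

With the toy distribution above, the fit yields k = 1 (an exponential), whose mean matches E[R] = 3 exactly; mixtures of Erlangs, as in the paper, allow matching higher-order moments and multi-modal shapes as well.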

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2502.04530
Document Type :
Working Paper