Performance metrics for models designed to predict treatment effect
- Authors
C. C. H. M. Maas, D. M. Kent, M. C. Hughes, R. Dekker, H. F. Lingsma, and D. van Klaveren
- Subjects
Heterogeneous treatment effect, Prediction models, Logistic regression, Causal forest, Medicine (General)
- Abstract
Background: Measuring the performance of models that predict individualized treatment effect is challenging because the outcomes of two alternative treatments are inherently unobservable in one patient. The C-for-benefit was proposed to measure discriminative ability, but measures of calibration and overall performance are still lacking. We aimed to propose metrics of calibration and overall performance for models predicting treatment effect in randomized clinical trials (RCTs).

Methods: Similar to the previously proposed C-for-benefit, we defined the observed pairwise treatment effect as the difference between outcomes in pairs of matched patients with different treatment assignments. We matched each untreated patient with the nearest treated patient based on the Mahalanobis distance between patient characteristics. We then defined the Eavg-for-benefit, E50-for-benefit, and E90-for-benefit as the average, median, and 90th quantile of the absolute distance between the predicted pairwise treatment effects and the local-regression-smoothed observed pairwise treatment effects. Furthermore, we defined the cross-entropy-for-benefit and Brier-for-benefit as the logarithmic and average squared distance between predicted and observed pairwise treatment effects. In a simulation study, the metric values of deliberately “perturbed models” were compared with those of the data-generating model, i.e., the “optimal model”. To illustrate these performance metrics, three modelling approaches for predicting treatment effect were applied to data from the Diabetes Prevention Program: (1) a risk modelling approach with restricted cubic splines; (2) an effect modelling approach including penalized treatment interactions; and (3) the causal forest.

Results: As desired, the performance metric values of the “perturbed models” were consistently worse than those of the “optimal model” (Eavg-for-benefit ≥ 0.043 versus 0.002, E50-for-benefit ≥ 0.032 versus 0.001, E90-for-benefit ≥ 0.084 versus 0.004, cross-entropy-for-benefit ≥ 0.765 versus 0.750, Brier-for-benefit ≥ 0.220 versus 0.218). Calibration, discriminative ability, and overall performance of the three different models were similar in the case study. The proposed metrics are implemented in the publicly available R package “HTEPredictionMetrics”.

Conclusion: The proposed metrics are useful for assessing the calibration and overall performance of models predicting treatment effect in RCTs.
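The sketch below is a minimal base-R illustration of how the matched-pair metrics described in the abstract could be computed; it is not the authors' HTEPredictionMetrics implementation. The simulated trial, the stand-in "model predictions", and the default lowess smoothing settings are illustrative assumptions, and the cross-entropy-for-benefit is omitted because it requires the probability-scale formulation from the full paper.

```r
set.seed(42)

## Toy RCT: two covariates, randomized treatment, binary outcome
n  <- 500
x1 <- rnorm(n)
x2 <- rnorm(n)
w  <- rbinom(n, 1, 0.5)                    # treatment assignment
p0 <- plogis(-0.5 + 0.8 * x1 + 0.4 * x2)   # outcome risk if untreated
p1 <- plogis(-1.0 + 0.8 * x1 + 0.4 * x2)   # outcome risk if treated
y  <- rbinom(n, 1, ifelse(w == 1, p1, p0)) # observed outcome

## Predicted individual treatment effect (risk reduction); here the true
## effect stands in for the output of any prediction model
pred_te <- p0 - p1

## Match each untreated patient to the nearest treated patient by the
## Mahalanobis distance between patient characteristics
X         <- cbind(x1, x2)
S_inv     <- solve(cov(X))
treated   <- which(w == 1)
untreated <- which(w == 0)
nearest <- sapply(untreated, function(i) {
  d2 <- mahalanobis(X[treated, , drop = FALSE], X[i, ], S_inv, inverted = TRUE)
  treated[which.min(d2)]
})

## Observed pairwise treatment effect: untreated minus treated outcome
## (1 = benefit, 0 = no effect, -1 = harm); predicted pairwise treatment
## effect: average of the two matched patients' predicted effects
obs_pair  <- y[untreated] - y[nearest]
pred_pair <- (pred_te[untreated] + pred_te[nearest]) / 2

## Calibration: smooth observed vs. predicted pairwise effects with local
## regression and summarize the absolute distance to the predictions
sm      <- lowess(pred_pair, obs_pair)
abs_err <- abs(sm$y - sm$x)
Eavg_for_benefit <- mean(abs_err)                     # average absolute distance
E50_for_benefit  <- median(abs_err)                   # median
E90_for_benefit  <- unname(quantile(abs_err, 0.90))   # 90th quantile

## Overall performance: average squared distance between predicted and
## observed pairwise treatment effects (Brier-for-benefit, per the abstract)
Brier_for_benefit <- mean((pred_pair - obs_pair)^2)

round(c(Eavg = Eavg_for_benefit, E50 = E50_for_benefit,
        E90 = E90_for_benefit, Brier = Brier_for_benefit), 3)
```

In practice one would replace `pred_te` with a model's predicted treatment effects and use the published R package for the exact metric definitions.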
- Published
2023