Tumor Response Evaluation Using iRECIST: Feasibility and Reliability of Manual Versus Software-Assisted Assessments.
- Author
- Ristow, Inka; Well, Lennart; Wiese, Nis Jesper; Warncke, Malte; Tintelnot, Joseph; Karimzadeh, Amir; Koehler, Daniel; Adam, Gerhard; Bannas, Peter; Sauer, Markus
- Subjects
- Blood vessels; Computed tomography; Digital diagnostic imaging; Clinical trials; Retrospective studies; Descriptive statistics; Mann-Whitney U test; Quantitative research; Metastasis; Computer-aided diagnosis; Reliability; Inter-observer reliability
- Abstract
Simple Summary: Quantitative assessment of therapy response in oncological patients undergoing chemo- or immunotherapy is becoming increasingly important, not only in clinical studies but also in clinical routine. To facilitate the sometimes complex and time-consuming oncological response assessment, dedicated software solutions for criteria such as (i)RECIST have been developed. Given the higher complexity of iRECIST, we investigated the benefit of software-assisted over manual assessments with respect to reader agreement, error rate, and reading time. iRECIST assessments were more feasible and reliable when supported by dedicated software. We conclude that oncologic response assessment in clinical trials should be performed with software assistance rather than manually.

Objectives: To compare the feasibility and reliability of manual versus software-assisted assessments of computed tomography scans according to iRECIST in patients undergoing immune-based cancer treatment.

Methods: Computed tomography scans of 30 tumor patients undergoing cancer treatment were evaluated by four independent radiologists at baseline (BL) and two follow-ups (FU1, FU2), resulting in a total of 360 tumor assessments (120 each at BL, FU1, and FU2). After image interpretation, tumor burden and response status were calculated either manually or semi-automatically by dedicated software. The reading time, calculated sum of longest diameters (SLD), and tumor response (e.g., "iStable Disease") were recorded for each assessment. After complete data collection, a consensus reading among the four readers established a reference standard for the correct response assignments. Reading times, error rates, and inter-reader agreement on SLDs were statistically compared between the manual and software-assisted approaches.

Results: Reading time was significantly longer for manual than for software-assisted assessments at both follow-ups (median [interquartile range] FU1: 4.00 min [2.17 min] vs. 2.50 min [1.00 min]; FU2: 3.75 min [1.88 min] vs. 2.00 min [1.50 min]; both p < 0.001). Regarding reliability, 2.5% of all response assessments were incorrect at FU1 (3.3% manual; 0% software-assisted), rising to 5.8% at FU2 (10% manual; 1.7% software-assisted), demonstrating higher error rates for manual readings. Quantitative inter-reader agreement on SLDs was lower for manual than for software-assisted assessments at both follow-ups (FU1: ICC = 0.91 vs. 0.93; FU2: ICC = 0.75 vs. 0.86).

Conclusions: Software-assisted assessments may facilitate the iRECIST response evaluation of cancer patients in clinical routine by decreasing reading time and reducing response misclassifications.
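The response labels and error rates above rest on simple target-lesion arithmetic: the sum of longest diameters (SLD) over all target lesions, compared against the baseline SLD (for response) and the nadir SLD (for progression). The following is a minimal Python sketch of that bookkeeping under the RECIST 1.1 thresholds that iRECIST builds on (≥30% SLD decrease from baseline for partial response; ≥20% and ≥5 mm increase over the nadir for progression, which iRECIST initially labels unconfirmed, "iUPD"). All function and variable names are illustrative, not taken from the study's software; node-specific rules, non-target lesions, and new lesions are omitted.

```python
from enum import Enum

class IResponse(str, Enum):
    ICR = "iComplete Response"
    IPR = "iPartial Response"
    ISD = "iStable Disease"
    IUPD = "iUnconfirmed Progressive Disease"  # becomes iCPD only if confirmed later

def sld(diameters_mm):
    """Sum of longest diameters (SLD) of all target lesions, in mm."""
    return sum(diameters_mm)

def classify(baseline_sld, nadir_sld, current_sld):
    """Assign a target-lesion response category (simplified iRECIST sketch)."""
    if current_sld == 0:
        return IResponse.ICR
    # Progression: >=20% relative AND >=5 mm absolute increase over the nadir.
    growth = current_sld - nadir_sld
    if growth >= 5 and growth / nadir_sld >= 0.20:
        return IResponse.IUPD
    # Partial response: >=30% decrease from the baseline SLD.
    if (baseline_sld - current_sld) / baseline_sld >= 0.30:
        return IResponse.IPR
    return IResponse.ISD

# Example: three target lesions at baseline and the first follow-up.
baseline = sld([22.0, 15.5, 31.0])   # 68.5 mm
fu1 = sld([20.0, 14.0, 28.0])        # 62.0 mm (~9% decrease)
print(classify(baseline, baseline, fu1).value)  # "iStable Disease"
```

The statistical comparisons reported in the Results could be run along similar lines, assuming per-assessment reading times and per-reader SLDs are available; again a sketch, not the authors' analysis code, and `manual_times`, `assisted_times`, and the long-format DataFrame `df` are hypothetical data:

```python
import pandas as pd
from scipy.stats import mannwhitneyu
import pingouin as pg

# Reading-time comparison (Mann-Whitney U, two-sided); times in minutes.
manual_times = [4.2, 3.8, 5.0, 4.0]      # hypothetical manual readings
assisted_times = [2.4, 2.6, 2.1, 2.5]    # hypothetical software-assisted readings
u_stat, p_value = mannwhitneyu(manual_times, assisted_times, alternative="two-sided")

# Inter-reader agreement on SLDs (intraclass correlation coefficient).
df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3, 4, 4],
    "reader":  ["A", "B"] * 4,
    "sld":     [68.5, 67.0, 42.0, 44.5, 55.0, 53.5, 80.0, 78.0],  # hypothetical, in mm
})
icc_table = pg.intraclass_corr(data=df, targets="patient", raters="reader", ratings="sld")
```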
- Published
- 2024