78 results for "Eli Gibson"
Search Results
2. 3D-2D GAN based brain metastasis synthesis with configurable parameters for fully 3D data augmentation
- Author
Gengyan Zhao, Youngjin Yoo, Thomas J. Re, Jyotipriya Das, Hesheng Wang, Michelle Kim, Colette Shen, Yueh Z. Lee, Douglas Kondziolka, Mohannad Ibrahim, Jun Lian, Rajan Jain, Tong Zhu, Hemant Parmar, James M. Balter, Yue Cao, Eli Gibson, and Dorin Comaniciu
- Published
- 2023
- Full Text
- View/download PDF
3. Stochastic Sequential Modeling: Toward Improved Prostate Cancer Diagnosis Through Temporal-Ultrasound
- Author
Farhad Imani, Mena Gaed, Layan Nahlawi, Hagit Shatkay, Jose A. Gomez, Eli Gibson, Madeleine Moussa, Aaron D. Ward, Purang Abolmaesumi, Aaron Fenster, and Parvin Mousavi
- Subjects
Computer science, Urology & nephrology, Biomedical engineering, Malignancy, Nuclear medicine & medical imaging, Time-domain analysis, Prostate cancer, Tissue characterization, Image-guided diagnosis, Hidden Markov models, Temporal information, TRUS-guided biopsies, Ultrasound, Cancer, Pattern recognition, Sequential modeling, Artificial intelligence
- Abstract
Prostate cancer (PCa) is a common, serious form of cancer in men that is still prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic-temporal framework, namely, hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applied to other types of cancer in which temporal-ultrasound can be recorded.
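A minimal sketch of the decision rule described above (train one HMM per tissue class, then label a new signal by whichever model assigns the higher log-likelihood), assuming the hmmlearn package; the three-state Gaussian HMM, signal shapes, and toy data are illustrative, not the authors' configuration:

```python
# Sketch: classify a TeUS signal by comparing log-likelihoods under two HMMs.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Toy stand-ins for TeUS time series: (n_signals, signal_length, 1)
benign = rng.normal(0.0, 1.0, size=(50, 100, 1))
malignant = rng.normal(0.5, 1.2, size=(50, 100, 1))

def fit_hmm(signals, n_states=3):
    X = signals.reshape(-1, 1)                       # concatenate sequences
    lengths = [signals.shape[1]] * signals.shape[0]  # per-sequence lengths
    model = hmm.GaussianHMM(n_components=n_states, n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

hmm_benign = fit_hmm(benign)
hmm_malignant = fit_hmm(malignant)

def predict(signal):
    # Higher log-likelihood wins, as in the paper's decision rule.
    return "malignant" if hmm_malignant.score(signal) > hmm_benign.score(signal) else "benign"

print(predict(malignant[0]))
```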
- Published
- 2020
- Full Text
- View/download PDF
4. Artificial Intelligence with Statistical Confidence Scores for Detection of Acute or Subacute Hemorrhage on Noncontrast CT Head Scans
- Author
Eli Gibson, Bogdan Georgescu, Pascal Ceccaldi, Pierre-Hugo Trigan, Youngjin Yoo, Jyotipriya Das, Thomas J. Re, Vishwanath RS, Abishek Balachandran, Eva Eibenberger, Andrei Chekkoury, Barbara Brehm, Uttam K. Bodanapally, Savvas Nicolaou, Pina C. Sanelli, Thomas J. Schroeppel, Thomas Flohr, Dorin Comaniciu, and Yvonne W. Lui
- Subjects
Radiological and Ultrasound Technology, Artificial Intelligence, Radiology, Nuclear Medicine and Imaging
- Abstract
PURPOSE: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. MATERIALS AND METHODS: This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score–defined subsets using bootstrapping. RESULTS: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers, and from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without confidence measures, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers, and by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). CONCLUSION: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022
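The calibrated classifier entropy score lends itself to a short generic sketch. Only the binary-entropy idea and the 80% retention fraction come from the abstract; the function names and probabilities below are assumptions:

```python
import numpy as np

def entropy_confidence(p):
    """Confidence = 1 - binary entropy of the calibrated ICH probability."""
    eps = 1e-12
    h = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return 1.0 - h  # near 1 when p is close to 0 or 1, 0 at p = 0.5

p = np.array([0.02, 0.45, 0.97, 0.60])  # toy calibrated probabilities
conf = entropy_confidence(p)

# Keep the 80% most confident studies, as in the stratified evaluation.
threshold = np.quantile(conf, 0.20)
high_conf = conf >= threshold
print(high_conf)
```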
- Published
- 2022
- Full Text
- View/download PDF
5. Automatically detecting anatomy
- Author
Florin C. Ghesu, Bogdan Georgescu, Eli Gibson, Sasa Grbic, and Dorin Comaniciu
- Published
- 2022
- Full Text
- View/download PDF
6. Brain midline shift detection and quantification by a cascaded deep network pipeline on non-contrast computed tomography scans
- Author
Youngjin Yoo, Pina C. Sanelli, Eli Gibson, Eva Eibenberger, Thomas J. Re, Yvonne W. Lui, Uttam Bodanapally, Jyotipriya Das, Savvas Nicolaou, Dorin Comaniciu, Nguyen P. Nguyen, Abishek Balachandran, Tommi A. White, Filiz Bunyak, Thomas J. Schroeppel, and Andrei Chekkoury
- Subjects
Midline shift, Pipeline (computing), Contrast (vision), Computed tomography, Artificial intelligence, Biomedical engineering
- Published
- 2021
- Full Text
- View/download PDF
7. Evaluating deep learning methods in detecting and segmenting different sizes of brain metastases on 3D post-contrast T1-weighted images
- Author
Eli Gibson, Yue Cao, Siqi Liu, Youngjin Yoo, James M. Balter, Thomas J. Re, and Pascal Ceccaldi
- Subjects
Image fusion, Recall, Deep learning, Magnetic resonance imaging, Image segmentation, Lesion, Radiology, Nuclear Medicine and Imaging, Segmentation, Artificial intelligence, Ultrasonic Imaging and Tomography, Brain metastasis
- Abstract
Purpose: We investigate the impact of various deep-learning-based methods for detecting and segmenting metastases of different lesion volumes on 3D brain MR images. Approach: A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak learner fusion of the prediction features generated by the 2.5D and the 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performances were analyzed based on lesion volume, total metastatic volume per patient, and number of lesions per patient. Results: For detection, the 2.5D and 3D U-Net methods achieved recall of [Formula: see text] and precision of [Formula: see text] for lesion volume [Formula: see text], but performance deteriorated as metastasis size decreased below [Formula: see text], to 0.58 to 0.74 in recall and 0.16 to 0.25 in precision. Comparing the two U-Nets' detection capability, higher precision was achieved by the 2.5D network, whereas higher recall was achieved by the 3D network, for all lesion sizes. The weak learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; in particular, it increased precision to 0.83 for lesion volumes of 0.1 to [Formula: see text] but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either the small or the large metastases, presumably because of the limited data size. Conclusions: Our study reports the performance of four deep learning methods in relation to lesion size, total metastasis volume, and number of lesions per patient, providing insight into the further development of deep learning networks.
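Stratifying per-lesion detection metrics by lesion volume, as in this evaluation, reduces to bucketing ground-truth lesions by size before averaging. A minimal sketch under assumptions: the volume bins, toy values, and the implied lesion-matching rule are illustrative, not the paper's protocol:

```python
import numpy as np

# Sketch: stratify per-lesion detection recall by lesion volume.
# `lesions` pairs each ground-truth lesion's volume (mm^3) with whether it was
# detected; the matching rule that produced the booleans (e.g. centroid inside
# a predicted mask) is assumed, not taken from the paper.
lesions = [(50.0, False), (120.0, True), (900.0, True), (4000.0, True)]
bins = [(0.0, 100.0), (100.0, 1000.0), (1000.0, np.inf)]

for lo, hi in bins:
    hits = [det for vol, det in lesions if lo <= vol < hi]
    recall = np.mean(hits) if hits else float("nan")
    print(f"volume [{lo}, {hi}): n={len(hits)}, recall={recall:.2f}")
```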
- Published
- 2021
- Full Text
- View/download PDF
8. The LUX-ZEPLIN (LZ) radioactivity and cleanliness control programs
- Author
Mikkel B. Johnson, B. Landerud, A. Biekert, B. N. Edwards, K. Hanzel, T. E. Tope, D. Curran, C. Chiller, J. Palmer, R. Leonard, P. Sutcliffe, E. D. Fraser, R. Bunker, J. So, A. A. Chiller, M. R. While, A. Dobi, D. Hamilton, M. G. D. van der Grinten, W. J. Wisniewski, J. Li, A. Fan, J.S. Saba, C. Lynch, Henrique Araujo, S. Luitz, J. Nesbit, M. Horn, C. D. Kocher, Catarina Silva, F. G. O’Neill, J. C. Davis, J. J. Silk, M. C. Carmona-Benitez, Simon Fayer, M. Pangilinan, K. O’Sullivan, D. Lucero, Q. Xiao, D. Hemer, B. Boxer, J. M. Lyle, C. Chan, C. E. Tull, J. Genovesi, A. Vaitkus, M. Arthurs, V. B. Francis, S. Kravitz, X. Liu, H. J. Birch, R. Linehan, S. Walcott, C. H. Faham, T. J. Anderson, A. B.M.R. Sazzad, D. White, A. Kamaha, M. S. Witherell, R. Studley, K. Sundarnath, R. Liu, H. Oh, L. Korley, H. Flaecher, R. Conley, K. Kamdin, P. Beltrame, S. Stephenson, C. Pereira, C. R. Hall, R. Cabrita, B. Holbrook, B. G. Lenardo, P. Majewski, T.M. Stiegler, I. B. Peterson, A. Manalaysay, A. Monte, C. Ghag, H. Kraus, C. Loniewski, J. Makkinje, X. Xiang, Robert A. Taylor, M. N. Irving, S. Uvarov, Michael Schubnell, J. Heise, R. Coughlen, A. Lambert, F. Froborg, L. Oxborough, D.C. Malling, S. Greenwood, J. Yin, S. J. Haselschwardt, H. J. Krebs, W. H. Lippincott, K. J. Palladino, R. E. Smith, V. A. Kudryavtsev, I. Olcina, C. M. Ignarra, A. Harrison, A. J. Bailey, Minfang Yeh, D. Bauer, W. Skulski, J. Keefner, O. Hitchcock, Ben Carlson, E. Leason, Benjamin Krikler, A. Cottle, E. Mizrachi, Michele Cascella, M. Khaleeq, M. Solmaz, T. J. Whitis, J. J. Wang, N. Angelides, S. Gokhale, K. Skarpaas, Daniel McKinsey, S. Dardin, S. Kyre, D. Santone, P. R. Scovell, T. Vietanen, S. Powell, Y. Wang, David Leonard, E. Morrison, N. Swanson, M. Sarychev, M. A. Olevitch, E. K. Pease, M. Elnimr, P. Brás, N.J. Gantos, R. G. Jacobsen, J. Migneault, Yeongduk Kim, W. Turner, S. D. Worm, Seth Hillbrand, T. Fruth, G. Gregerson, Wenzhao Wei, V. Kasey, L. Kreczko, J. R. Watson, A. Bhatti, D. Naim, Ethan Bernard, B. J. Mount, V. N. Solovov, C. Nedlik, K. Wilson, Elena Korolkova, G. R. C. Rischbieter, P. Ford, A. Stevens, D. J. Taylor, H. N. Nelson, F. Neves, S. Aviles, W. T. Emmet, K. Stifter, B. Birrittella, J. T. White, S. J. Patton, D. Molash, M. Severson, T. A. Shutt, A. Richards, D Kodroff, J. Lin, Kareem Kazkaz, T. P. Biesiadzinski, David Colling, J. Liao, J. Mock, J. A. Morad, E. Holtom, J. E. Y. Dobson, Bjoern Penning, C. E. Dahl, A. Dushkin, A. Konovalov, D. J. Markley, G. W. Shutt, N. Parveen, M. G. D. Gilchriese, Yanwen Liu, C. Carels, Martin Breidenbach, Kathrin C. Walker, V.V. Sosnovtsev, A. Naylor, K. T. Lesko, N. A. Larsen, C. Lee, A. Pagac, J. J. Cherwinka, N. Decheine, J. Bang, J. A. Nikoleyczik, Patrick Bauer, J.P. Rodrigues, S. Branson, T. J. R. Davison, B. Lopez Paredes, D. Pagenkopf, J. S. Campbell, M. Tan, K. C. Oliver-Mallory, M.J. Barry, J. Belle, D. Yu. Akimov, M. Timalsina, S. Shaw, Alexander Bolozdynya, W. Ji, Sridhara Dasu, D. Q. Huang, J. Edwards, F. L. H. Wolfs, K. E. Boast, J. Busenitz, Ren-Jie Wang, S. Fiorucci, N. Stern, C. Rhyne, V. Bugaev, A. Laundrie, G. Rutherford, G. Pereira, E. H. Miller, W. W. Craddock, S. Alsum, J.P. da Cunha, Richard J. Smith, A. Cole, W. Wang, Julie Harrison, I. Khurana, M. Utes, R. J. Gaitskell, J. Kras, D. Khaitan, R. L. Mannino, J. D. Wolfs, H. Auyeung, L. de Viveiros, E. Voirin, E. M. Boulton, N. I. Chott, I. Stancu, L. Tvrznikova, Richard Rosero, P. MarrLaundrie, D. R. Tronstad, T. Benson, Dongming Mei, T. J. Sumner, O. Jahangir, J. Va’vra, Ross G. White, L. 
Sabarots, A. Currie, A. R. Smith, W. L. Waldron, J. P. Coleman, E. Lopez-Asamar, Wolfgang Lorenzon, A. Piepke, Carl Gwilliam, S. Hans, T. Harrington, Laura Manenti, A. Greenall, F.-T. Liao, G. Cox, J. R. Bensinger, V. M. Gehman, H. J. Rose, Christopher Brew, X. Bai, P. Sorensen, A. Arbuckle, Y. Qie, R. C. Webb, R.M. Gerhard, T.W. Hurteau, K.J. Thomas, P. Rossiter, C. Hasselkus, W. G. Jones, J. Johnson, R. Gelfand, T. G. Gonda, C. O. Vuosalo, A. St. J. Murphy, Adam Bernstein, Chao Zhang, A. Nilima, R. M. Preece, T. K. Edberg, Q. Riffard, B. P. Tennyson, Yue Meng, C. Maupin, J. E. Cutter, J. Reichenbacher, J.Y-K. Hor, N. Marangou, D. Temples, Eli Gibson, M. Hoff, H. S. Lee, J. H. Buckley, Z. J. Minaker, M.I. Lopes, M. Koyuncu, P. A. Terman, J.R. Verbus, Bhawna Gomber, J. A. Nikkel, A. Alquahtani, I. M. Fogarty Florang, D. Seymour, A. V. Kumpan, Antonin Vacheret, C. Hjemfelt, M.R. Stark, S. Pierson, M. Racine, D. R. Tiedt, D. S. Akerib, A. Khazov, W. C. Taylor, J. Balajthy, A.V. Khromov, A. C. Kaboth, V. M. Palmaccio, Duncan Carlsmith, K. Pushkin, S. A. Hertel, S. N. Jeffery, E. Druszkiewicz, R. W. Schnee, S. Pal, R. Bramante, B. N. Ratcliff, M. E. Monzani, J. O'Dell, P. Zarzhitsky, L. Wang, P. Johnson, Matthew Szydagis, W. H. To, J. E. Armstrong, U. Utku, Mani Tripathi, D. Woodward, D. Garcia, W. R. Edwards, Carl W. Akerlof, Jilei Xu, C. Nehrkorn, Ian S. Young, J. McLaughlin, J. Thomson, S. R. Eriksen, R. Rucinski, T.J. Martin, C. Levy, Sergey Burdin, A. Baxter, A. Lindote, L. Reichhart, Juhyeong Lee, S. Balashov, C. T. McConnell, M. F. Marzioni, A. Tomás, W. T. Kim, S. Weatherly, and Science and Technology Facilities Council (STFC)
- Subjects
Particle physics, Photomultiplier, Instrumentation and Detectors, Physics and Astronomy (miscellaneous), Astrophysics, Scintillator, High Energy Physics - Experiment, WIMP, Gamma spectroscopy, Sensitivity (control systems), Scattering, Backgrounds, Detectors, Impurities, Construction materials
- Abstract
LUX-ZEPLIN (LZ) is a second-generation direct dark matter experiment with spin-independent WIMP-nucleon scattering sensitivity above $1.4 \times 10^{-48}$ cm$^{2}$ for a WIMP mass of 40 GeV/c$^{2}$ and a 1000 d exposure. LZ achieves this sensitivity through a combination of a large 5.6 t fiducial volume, active inner and outer veto systems, and radio-pure construction using materials with inherently low radioactivity content. The LZ collaboration performed an extensive radioassay campaign over a period of six years to inform material selection for construction and provide an input to the experimental background model against which any possible signal excess may be evaluated. The campaign and its results are described in this paper. We present assays of dust and radon daughters depositing on the surface of components as well as cleanliness controls necessary to maintain background expectations through detector construction and assembly. Finally, examples from the campaign to highlight fixed contaminant radioassays for the LZ photomultiplier tubes, quality control and quality assurance procedures through fabrication, radon emanation measurements of major sub-systems, and bespoke detector systems to assay scintillator are presented.
- Published
- 2020
- Full Text
- View/download PDF
9. Projected sensitivity of the LUX-ZEPLIN experiment to the 0νββ decay of ¹³⁶Xe
- Author
Richard Rosero, D. R. Tronstad, David Leonard, X. Bai, J. Johnson, X. Liu, J. Busenitz, P. Brás, C. Rhyne, A. Cole, R. J. Gaitskell, R. Linehan, Eli Gibson, C. Levy, Sergey Burdin, R. C. Webb, A. Baxter, L. de Viveiros, A. Alqahtani, A. Monte, C. Ghag, N. Angelides, P. Sorensen, S. Gokhale, S. Shaw, Catarina Silva, J. Palmer, B. Lopez Paredes, T. J. Sumner, N. Marangou, A. Manalaysay, K. E. Boast, O. Jahangir, R. W. Schnee, S. Pal, M. E. Monzani, Yue Meng, D. Naim, I. Stancu, J. Liao, J. J. Wang, K. O’Sullivan, V. Bugaev, N. I. Chott, A. Khazov, P. Zarzhitsky, B. J. Mount, T. A. Shutt, Matthew Szydagis, J. Lin, H. N. Nelson, K. C. Oliver-Mallory, Wolfgang Lorenzon, Ross G. White, J.J. Silk, J. R. Bensinger, E. Leason, Benjamin Krikler, M. G. D. Gilchriese, E. Druszkiewicz, Laura Manenti, J. E. Y. Dobson, C. Carels, E. Mizrachi, C. Chan, Henrique Araujo, J. A. Morad, L. Kreczko, J. R. Watson, F.-T. Liao, A. Vaitkus, S. Kravitz, W. C. Taylor, P. Rossiter, H. J. Birch, D. Khaitan, K. Stifter, A. Kamaha, J. Bang, J. A. Nikoleyczik, D. Seymour, W. Turner, U. Utku, E. D. Fraser, Mani Tripathi, K. T. Lesko, C. Nedlik, A. Biekert, J. Balajthy, V. A. Kudryavtsev, S. Fiorucci, A. C. Kaboth, J. McLaughlin, C. M. Ignarra, C. R. Hall, F. L. H. Wolfs, R. Cabrita, Kareem Kazkaz, G. Rutherford, D. R. Tiedt, A. Harrison, M. Horn, J. Li, T. P. Biesiadzinski, C. D. Kocher, M. G. D. van der Grinten, K. J. Palladino, M. F. Marzioni, M. C. Carmona-Benitez, J. Kras, Michele Cascella, Carl W. Akerlof, H. Kraus, C. E. Dahl, T. Fruth, T. J. Anderson, A. Lindote, J. M. Lyle, Jilei Xu, B. Boxer, C. Nehrkorn, A. B.M.R. Sazzad, A. Tomás, E. K. Pease, Juhyeong Lee, Michael Schubnell, D. S. Akerib, L. Korley, K. Kamdin, S. Balashov, Ethan Bernard, P. Majewski, W. H. Lippincott, V. N. Solovov, X. Xiang, Daniel McKinsey, N. Parveen, Minfang Yeh, A. St. J. Murphy, M. Solmaz, Adam Bernstein, Robert A. Taylor, E. Morrison, D. Woodward, A. Naylor, T. J. Whitis, S. J. Haselschwardt, D. Q. Huang, M.I. Lopes, N. Swanson, W. Wang, P. A. Terman, S. R. Eriksen, R. L. Mannino, Antonin Vacheret, M. Arthurs, S. Luitz, G. Pereira, Richard J. Smith, T. K. Edberg, Q. Riffard, D. Temples, K. Pushkin, J. E. Armstrong, J. H. Buckley, J. Genovesi, M. Tan, E. H. Miller, A. Cottle, I. Olcina, A. Bhatti, S. A. Hertel, C. Loniewski, Elena Korolkova, A. Stevens, G. R. C. Rischbieter, F. Neves, Bjoern Penning, M. Timalsina, L. Tvrznikova, Henning Flaecher, S. Alsum, I. Khurana, J. Y.K. Hor, J. E. Cutter, J. Reichenbacher, W. Ji, A. Fan, Duncan Carlsmith, D. Santone, A. Nilima, and R. Liu
- Subjects
Physics, Nuclear & particle physics, Active volume, Analytical chemistry, Sensitivity (electronics)
- Abstract
The LUX-ZEPLIN (LZ) experiment will enable a neutrinoless double β decay search in parallel to the main science goal of discovering dark matter particle interactions. We report the expected LZ sensitivity to ¹³⁶Xe neutrinoless double β decay, taking advantage of the significant (>600 kg) ¹³⁶Xe mass contained within the active volume of LZ without isotopic enrichment. After 1000 live-days, the median exclusion sensitivity to the half-life of ¹³⁶Xe is projected to be $1.06\times10^{26}$ years (90% confidence level), similar to existing constraints. We also report the expected sensitivity of a possible subsequent dedicated exposure using 90% enrichment with ¹³⁶Xe at $1.06\times10^{27}$ years.
- Published
- 2020
- Full Text
- View/download PDF
10. Prostate lesion delineation from multiparametric magnetic resonance imaging based on locality alignment discriminant analysis
- Author
Bernard Chiu, Aaron D. Ward, Tommy W. S. Chow, Derek W. Cool, Zahra Kassam, Huageng Liang, Mingquan Lin, Weifu Chen, Mingbo Zhao, Eli Gibson, and Matthew Bastian-Jordan
- Subjects
Male, Computer science, Feature vector, Prostate cancer, Prostate, Image Processing, Computer-Assisted, Humans, Effective diffusion coefficient, Segmentation, Radiation treatment planning, Multiparametric magnetic resonance imaging, Pixel, Discriminant analysis, Prostatic Neoplasms, Pattern recognition, General Medicine, Linear discriminant analysis, Linear models, Artificial intelligence, Algorithms
- Abstract
PURPOSE Multiparametric MRI (mpMRI) has shown promise in the detection and localization of prostate cancer foci. Although techniques have been previously introduced to delineate lesions from mpMRI, these techniques were evaluated in datasets with T2 maps available. The generation of T2 map is not included in the clinical prostate mpMRI consensus guidelines; the acquisition of which requires repeated T2-weighted (T2W) scans and would significantly lengthen the scan time currently required for the clinically recommended acquisition protocol, which includes T2W, diffusion-weighted (DW), and dynamic contrast-enhanced (DCE) imaging. The goal of this study is to develop and evaluate an algorithm that provides pixel-accurate lesion delineation from images acquired based on the clinical protocol. METHODS Twenty-five pixel-based features were extracted from the T2-weighted (T2W), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) images. The pixel-wise classification was performed on the reduced space generated by locality alignment discriminant analysis (LADA), a version of linear discriminant analysis (LDA) localized to patches in the feature space. Postprocessing procedures, including the removal of isolated points identified and filling of holes inside detected regions, were performed to improve delineation accuracy. The segmentation result was evaluated against the lesions manually delineated by four expert observers according to the Prostate Imaging-Reporting and Data System (PI-RADS) detection guideline. RESULTS The LADA-based classifier (60 ± 11%) achieved a higher sensitivity than the LDA-based classifier (51 ± 10%), thereby demonstrating, for the first time, that higher classification performance was attained on the reduced space generated by LADA than by LDA. Further sensitivity improvement (75 ± 14%) was obtained after postprocessing, approaching the sensitivities attained by previous mpMRI lesion delineation studies in which nonclinical T2 maps were available. CONCLUSION The proposed algorithm delineated lesions accurately and efficiently from images acquired following the clinical protocol. The development of this framework may potentially accelerate the clinical uses of mpMRI in prostate cancer diagnosis and treatment planning.
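A rough sketch of the classify-then-postprocess pipeline described above. LADA has no off-the-shelf implementation, so plain LDA from scikit-learn stands in for it here; the features, labels, and morphology parameters are placeholders:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.ndimage import binary_fill_holes
from skimage.morphology import remove_small_objects

rng = np.random.default_rng(0)
H, W, F = 64, 64, 25                      # 25 pixel features, as in the paper
features = rng.normal(size=(H * W, F))    # toy per-pixel feature vectors
labels = rng.integers(0, 2, size=H * W)   # toy lesion/non-lesion labels

# Plain LDA stands in for LADA; LADA applies discriminant analysis to local
# patches of the feature space and is not available in scikit-learn.
clf = LinearDiscriminantAnalysis().fit(features, labels)
mask = clf.predict(features).reshape(H, W).astype(bool)

# Postprocessing named in the abstract: drop isolated detections, fill holes.
mask = remove_small_objects(mask, min_size=20)
mask = binary_fill_holes(mask)
print(mask.sum())
```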
- Published
- 2018
- Full Text
- View/download PDF
11. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks
- Author
Ester Bonmati, Steve Bandula, Yipeng Hu, Dean C. Barratt, Stephen P. Pereira, Brian R. Davidson, Matthew J. Clarkson, Kurinchi Selvan Gurusamy, Francesco Giganti, and Eli Gibson
- Subjects
Radiography, Abdominal, Kidney, Humans, Segmentation, Electrical and Electronic Engineering, Esophagus, Radiation treatment planning, Radiological and Ultrasound Technology, Gallbladder, Stomach, Image segmentation, Computer Science Applications, Radiographic Image Interpretation, Computer-Assisted, Radiology, Tomography, X-Ray Computed, Pancreas, Digestive System, Algorithms, Spleen, Software
- Abstract
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
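The organ-wise comparison above rests on the Dice similarity coefficient, which is short to implement; a minimal version with a toy example:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True  # 4 voxels
ref = np.zeros((4, 4), bool);  ref[1:4, 1:4] = True   # 9 voxels, overlap 4
print(round(dice(pred, ref), 3))  # 2*4 / (4+9) ≈ 0.615
```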
- Published
- 2018
- Full Text
- View/download PDF
12. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures
- Author
Geri Keane, Yipeng Hu, Ester Bonmati, Stephen P. Pereira, Kurinchi Gurusami, Dean C. Barratt, Matthew J. Clarkson, Laura Uribarri, Brian R. Davidson, and Eli Gibson
- Subjects
Endoscopic ultrasound, Percentile, Computer science, Biomedical Engineering, Image registration, Health Informatics, Endosonography, Imaging, Three-Dimensional, Pancreatectomy, Robustness (computer science), Humans, Upper gastrointestinal, Radiology, Nuclear Medicine and Imaging, Computer vision, Pancreas, EUS, Retrospective Studies, Landmark, Ultrasound, Pancreatic cancer, General Medicine, Computer Graphics and Computer-Aided Design, Computer Science Applications, Pancreatic Neoplasms, Planning, Surgery, Computer-Assisted, Feature (computer vision), Computer-assisted interventions, Surgery, Computer Vision and Pattern Recognition, Tomography, X-Ray Computed
- Abstract
Purpose Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. Methods A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated (n = 9) or retrospective clinical (n = 1) EUS landmarks. Results The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value < 0.01).
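The Monte Carlo TRE estimation can be sketched with rigid point-set registration (the Kabsch algorithm) under repeated landmark perturbation. The 2 mm localisation error, landmark count, and iteration count below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning points P to Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, (4, 3))   # candidate EUS-visible landmarks (mm)
target = np.array([50.0, 50.0, 50.0])     # clinical target position

tres = []
for _ in range(1000):                     # Monte Carlo over localisation error
    noisy = landmarks + rng.normal(0, 2.0, landmarks.shape)  # 2 mm error (assumed)
    R, t = kabsch(landmarks, noisy)       # registration fitted to noisy landmarks
    tres.append(np.linalg.norm((R @ target + t) - target))

print(f"90th percentile TRE: {np.percentile(tres, 90):.2f} mm")
```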
- Published
- 2018
- Full Text
- View/download PDF
13. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks
- Author
Arnaud Arindra Adiyoso Setio, Dorin Comaniciu, Siqi Liu, Sasa Grbic, Eli Gibson, Bogdan Georgescu, and Florin C. Ghesu
- Subjects
Nodule detection, Lung Neoplasms, Computer science, Computer Vision and Pattern Recognition, Machine Learning, Image processing, Computed tomography, Medical imaging, Image Processing, Computer-Assisted, Humans, Electrical and Electronic Engineering, Lung cancer, Lung, Early Detection of Cancer, Radiological and Ultrasound Technology, Image and Video Processing, Cancer, Solitary Pulmonary Nodule, Pattern recognition, Computer Science Applications, Radiographic Image Interpretation, Computer-Assisted, Artificial intelligence, Tomography, X-Ray Computed, Software, Lung cancer screening
- Abstract
Detecting malignant pulmonary nodules at an early stage can allow medical interventions which may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Though such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be prone to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood that would generate nodules to decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress-tests on the false positive reduction networks by feeding different types of artificially produced patches. We show that the augmented networks are more robust both to under-represented nodules and to noise perturbations.
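Projected gradient descent, used twice above, is the standard bounded iterative attack. The sketch below is a generic input-space L-infinity PGD in PyTorch; the paper's latent-space variant additionally back-propagates through a differentiable nodule synthesizer, which is omitted here:

```python
import torch

def pgd_attack(model, x, y, eps=0.01, step=0.002, iters=10):
    """Generic L-infinity PGD: perturb x within an eps-ball to maximise the
    loss of `model`. This is the input-noise variant; the latent-code search
    described in the paper would run the same loop through a synthesizer."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into ball
    return x_adv.detach()

# Toy usage with a hypothetical 2-class model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16, 2))
x, y = torch.rand(4, 16), torch.randint(0, 2, (4,))
print(pgd_attack(model, x, y).shape)  # torch.Size([4, 16])
```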
- Published
- 2020
- Full Text
- View/download PDF
14. Simulations of Events for the LUX-ZEPLIN (LZ) Dark Matter Experiment
- Author
T. Fruth, D. Seymour, J. Lin, W. Ji, J. McLaughlin, J. Palmer, L. Tvrznikova, M. Tan, P. R. Scovell, J. Genovesi, Ethan Bernard, E. H. Miller, F. L. H. Wolfs, G. Rutherford, V. Bugaev, J. Kras, D. Woodward, B. Lopez Paredes, M. Solmaz, E. Morrison, A. Biekert, I. Stancu, A. Lindote, K. E. Boast, A. Fan, Kareem Kazkaz, Juhyeong Lee, I. Olcina, A. Bhatti, A. Alqahtani, M. G. D. Gilchriese, S. Balashov, D. Santone, C. Carels, A. Piepke, Ross G. White, A. Cottle, M. E. Monzani, F.-T. Liao, T. P. Biesiadzinski, C. Loniewski, J. Y.K. Hor, J. E. Cutter, J. Reichenbacher, M. G. D. van der Grinten, N. I. Chott, P. Zarzhitsky, Matthew Szydagis, C. Levy, Sergey Burdin, K. J. Palladino, P. Rossiter, Elena Korolkova, Laura Manenti, G. R. C. Rischbieter, A. Baxter, D. R. Tiedt, C. E. Dahl, B. Boxer, T. K. Edberg, S. A. Hertel, Q. Riffard, U. Utku, K. C. Oliver-Mallory, R. W. Schnee, S. Pal, G. Pereira, Richard J. Smith, Michele Cascella, David Leonard, E. Druszkiewicz, D. S. Akerib, Mani Tripathi, A. Nilima, Simon Fayer, M. Timalsina, E. K. Pease, D. Temples, K. Pushkin, N. Angelides, P. Majewski, M. C. Carmona-Benitez, T. J. Anderson, N. Marangou, J. Busenitz, W. H. Lippincott, Michael Schubnell, A. B.M.R. Sazzad, C. Rhyne, J. E. Armstrong, V. N. Solovov, A. Manalaysay, A. Khazov, J.J. Silk, A. St. J. Murphy, Adam Bernstein, A. Naylor, D. Naim, S. Shaw, J. Li, S. Luitz, S. Gokhale, A. Cole, R. J. Gaitskell, J. H. Buckley, L. Korley, N. Parveen, K. Kamdin, R. Linehan, Minfang Yeh, W. C. Taylor, D. Q. Huang, B. J. Mount, T. A. Shutt, Catarina Silva, Carl W. Akerlof, Jilei Xu, C. Nehrkorn, H. J. Birch, J. Balajthy, M.I. Lopes, O. Jahangir, A. Monte, Daniel McKinsey, A. C. Kaboth, J. E. Y. Dobson, X. Xiang, A. Richards, C. Ghag, V. A. Kudryavtsev, C. Chan, A. Vaitkus, S. Kravitz, D. Khaitan, J. J. Wang, C. M. Ignarra, P. A. Terman, D. Bauer, M. F. Marzioni, A. Tomás, L. Kreczko, J. R. Watson, A. Harrison, R. L. Mannino, Antonin Vacheret, S. J. Haselschwardt, M. Arthurs, F. Neves, W. Wang, E. D. Fraser, P. Sorensen, E. Leason, T. J. Sumner, P. Brás, Henning Flaecher, Benjamin Krikler, K. T. Lesko, Yue Meng, T. J. Whitis, X. Bai, E. Mizrachi, J. Johnson, S. R. Eriksen, Duncan Carlsmith, Wolfgang Lorenzon, X. Liu, N. Swanson, J. R. Bensinger, Eli Gibson, A. Kamaha, H. N. Nelson, W. Turner, C. Nedlik, Henrique Araujo, C. R. Hall, R. Cabrita, H. Kraus, R. Liu, Richard Rosero, D. R. Tronstad, R. C. Webb, L. de Viveiros, K. Stifter, S. Fiorucci, J. Liao, J. A. Morad, J. Bang, J. A. Nikoleyczik, M. Horn, C. D. Kocher, J. M. Lyle, Robert A. Taylor, S. Alsum, I. Khurana, A. Stevens, and Bjoern Penning
- Subjects
Physics, Particle physics, Instrumentation and Detectors, Monte Carlo method, Detector, Dark matter, Astronomy and Astrophysics, High Energy Physics - Experiment, WIMP, Sensitivity (control systems), Event (particle physics), Background radiation
- Abstract
The LUX-ZEPLIN dark matter search aims to achieve a sensitivity to the WIMP-nucleon spin-independent cross-section down to (1–2)$\times10^{-12}$ pb at a WIMP mass of 40 GeV/$c^2$. This paper describes the simulations framework that, along with radioactivity measurements, was used to support this projection, and also to provide mock data for validating reconstruction and analysis software. Of particular note are the event generators, which allow us to model the background radiation, and the detector response physics used in the production of raw signals, which can be converted into digitized waveforms similar to data from the operational detector. Inclusion of the detector response allows us to process simulated data using the same analysis routines as developed to process the experimental data.
- Published
- 2020
- Full Text
- View/download PDF
15. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans
- Author
T G W Boers, F. van der Heijden, Dean C. Barratt, Henkjan J. Huisman, Eli Gibson, Jasenko Krdzalic, J J Hermans, Yipeng Hu, and Ester Bonmati
- Subjects
Computer science, Interactive 3D, Pancreatic cancer, Computed tomography, U-net, Deep learning, Imaging, Three-Dimensional, Medical imaging, Humans, Radiology, Nuclear Medicine and Imaging, Segmentation, Pancreas, Pattern recognition, Interactive segmentation, Artificial intelligence, Tomography, X-Ray Computed
- Abstract
The increasing incidence of pancreatic cancer will make it the second deadliest cancer in 2030. Imaging-based early diagnosis and image-guided treatment are emerging potential solutions. Artificial intelligence (AI) can help provide and improve widespread diagnostic expertise and accurate interventional image interpretation. Accurate segmentation of the pancreas is essential to create annotated data sets to train AI, and for computer-assisted interventional guidance. Automated deep learning segmentation performance in pancreas computed tomography (CT) imaging is low due to poor grey value contrast and complex anatomy. A recent interactive deep learning segmentation framework for brain CT, which strongly improved initial automated segmentation with minimal user input, seemed a good solution. This method yielded no satisfactory results for pancreas CT, possibly due to a sub-optimal neural network architecture. We hypothesize that a state-of-the-art U-net neural network architecture is better, because it can produce a better initial segmentation and can likely be extended to work in a similar interactive approach. We implemented the existing interactive method, iFCN, and developed an interactive version of the U-net method, which we call iUnet. The iUnet is fully trained to produce the best possible initial segmentation. In interactive mode, it is additionally trained on a partial set of layers using user-generated scribbles. We compare the initial segmentation performance of iFCN and iUnet on a dataset of 100 CT scans using Dice similarity coefficient analysis. Secondly, we assessed the performance gain in interactive use with three observers on segmentation quality and time. Average automated baseline performance was 78% (iUnet) versus 72% (iFCN). Manual segmentation reached 87% in 15 min, and semi-automatic segmentation with iUnet reached 86% in 8 min. We conclude that iUnet provides a better baseline than iFCN and can reach expert manual performance significantly faster than manual segmentation in the case of pancreas CT. Our iUnet architecture is modality and organ agnostic and can be a potential solution for semi-automatic medical imaging segmentation in general.
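The interactive step, additional training of only a partial set of layers on user scribbles, corresponds to freezing most of the network before fine-tuning. A PyTorch sketch; the module name `out_conv`, the optimizer, and the learning rate are hypothetical choices, not taken from the paper:

```python
import torch

def freeze_all_but_head(model: torch.nn.Module, head_name: str = "out_conv"):
    """Freeze every parameter except those of the named final block, so the
    network can be fine-tuned on user-generated scribbles alone."""
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(head_name)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=1e-4)

# Toy usage with a hypothetical two-layer stand-in for a U-net.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.body = torch.nn.Conv2d(1, 8, 3, padding=1)
        self.out_conv = torch.nn.Conv2d(8, 2, 1)
    def forward(self, x):
        return self.out_conv(self.body(x))

opt = freeze_all_but_head(TinyNet())
```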
- Published
- 2020
16. Validation of a fully automated liver segmentation algorithm using multi-scale deep reinforcement learning and comparison versus manual segmentation
- Author
David J. Winkel, Daniel T. Boll, Hanns-Christian Breit, Thomas Weikert, Dorin Comaniciu, Tobias Heye, Eli Gibson, and Guillaume Chabin
- Subjects
Reproducibility, Liver Diseases, Univariate, Reproducibility of Results, General Medicine, Deep Learning, Liver, Robustness (computer science), Approximation error, Artificial Intelligence, Image Interpretation, Computer-Assisted, Humans, Radiology, Nuclear Medicine and Imaging, Segmentation, Tomography, X-Ray Computed, Algorithms, Retrospective Studies
- Abstract
Purpose To evaluate the performance of an artificial intelligence (AI)-based software solution for liver volumetric analyses and to compare the results with manual contour segmentation. Materials and methods We retrospectively obtained 462 multiphasic CT datasets with six series for each patient: three different contrast phases and two slice-thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep-reinforcement learning for 3D body marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically- and manually-derived volume relative to the mean manual volume. The mean processing time per dataset and method was recorded. Variations of liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically-derived results with the ground truth. Results The mean absolute error of the automatically-derived measurement was 44.3 mL (representing 2.37% of the averaged liver volumes). The liver volume was dependent neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 s with the algorithm, compared with 219.34 s for manual segmentation. We found excellent agreement between both approaches, with an ICC value of 0.996. Conclusion The results of our study demonstrate that AI-powered fully automated liver volumetric analyses can be done with excellent accuracy, reproducibility, robustness, speed, and agreement with manual segmentation.
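The reported error metric (absolute error of each derived volume relative to the mean manual volume) is simple to restate in code; the volumes below are toy values:

```python
import numpy as np

# Toy values: one patient, three manual (reader) volumes and one AI volume, in mL.
manual = np.array([1490.0, 1520.0, 1505.0])
auto = 1538.0

mean_manual = manual.mean()                    # reference volume
abs_err_auto = abs(auto - mean_manual)         # error of the AI-derived volume
abs_err_manual = np.abs(manual - mean_manual)  # errors of each manual volume

print(f"AI absolute error: {abs_err_auto:.1f} mL "
      f"({100 * abs_err_auto / mean_manual:.2f}% of the mean manual volume)")
```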
- Published
- 2019
17. Determination of the Association Between T2-weighted MRI and Gleason Sub-pattern: A Proof of Principle Study
- Author
Theo van der Kwast, Masoom A. Haider, Jenna Sykes, Eli Gibson, Michelle R Downes, and Aaron D. Ward
- Subjects
Adult, Male, Pathology, Biopsy, Prostate cancer, Prostate, Carcinoma, Humans, Sampling (medicine), Aged, Prostatectomy, Prostatic Neoplasms, Magnetic resonance imaging, Middle Aged, Radiology, Nuclear Medicine and Imaging, Cribriform, Neoplasm Grading
- Abstract
The study aimed to determine the relationship between T2-weighted magnetic resonance imaging (MRI) signal and histologic sub-patterns in prostate cancer areas with different Gleason grades. MR images of prostates (n = 25) were obtained prior to radical prostatectomy. These were processed as whole-mount specimens with tumors, and the peripheral zone was annotated digitally by two pathologists. Gleason grade 3 was the most prevalent grade and was subdivided into packed, intermediate, and sparse based on gland-to-stroma ratio. Large cribriform, intraductal carcinoma, and small cribriform glands (grade 4 group) were separately annotated but grouped together for statistical analysis. The log MRI signal intensity for each contoured region (n = 809) was measured, and pairwise comparisons were performed using the open-source software R version 3.0.1. The packed grade 3 sub-pattern has a significantly lower MRI intensity than the grade 4 group (P < 0.00001). Sparse grade 3 has a significantly higher MRI intensity than the packed grade 3 sub-pattern (P < 0.0001). No significant difference in MRI intensity was observed between the Gleason grade 4 group and the sparse sub-pattern grade 3 group (P = 0.54). In multivariable analysis adjusting for peripheral zone, the P values maintained significance (packed grade 3 group vs grade 4 group, P < 0.001; and sparse grade 3 sub-pattern vs packed grade 3 sub-pattern, P < 0.001). This study demonstrated that T2-weighted MRI signal is dependent on histologic sub-patterns within Gleason grades 3 and 4 cancers, which may have implications for directed biopsy sampling and patient management.
- Published
- 2016
- Full Text
- View/download PDF
18. Correction to: Stochastic Sequential Modeling: Toward Improved Prostate Cancer Diagnosis Through Temporal-Ultrasound
- Author
Aaron Fenster, Parvin Mousavi, Mena Gaed, Farhad Imani, Hagit Shatkay, Eli Gibson, Layan Nahlawi, Aaron D. Ward, Purang Abolmaesumi, Jose A. Gomez, and Madeleine Moussa
- Subjects
Male, Computer science, Prostate, Biomedical Engineering, Correction, Prostatic Neoplasms, Models, Theoretical, Prostate cancer, Markov Chains, Sequential modeling, Humans, Artificial intelligence
- Abstract
Prostate cancer (PCa) is a common, serious form of cancer in men that is still prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic-temporal framework, namely, hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applied to other types of cancer in which temporal-ultrasound can be recorded.
- Published
- 2020
- Full Text
- View/download PDF
19. Conditional Segmentation in Lieu of Image Registration
- Author
Yipeng Hu, Mark Emberton, Tom Vercauteren, Dean C. Barratt, J. Alison Noble, and Eli Gibson
- Subjects
Pixel, Computer science, Computer Vision and Pattern Recognition, Image and Video Processing, Machine Learning, Image registration, Image segmentation, Displacement field, Computer vision, Segmentation, Artificial intelligence
- Abstract
Classical pairwise image registration methods search for a spatial transformation that optimises a numerical measure that indicates how well a pair of moving and fixed images are aligned. Current learning-based registration methods have adopted the same paradigm and typically predict, for any new input image pair, dense correspondences in the form of a dense displacement field or parameters of a spatial transformation model. However, in many applications of registration, the spatial transformation itself is only required to propagate points or regions of interest (ROIs). In such cases, detailed pixel- or voxel-level correspondence within or outside of these ROIs often has little clinical value. In this paper, we propose an alternative paradigm in which the location of corresponding image-specific ROIs, defined in one image, within another image is learnt. This results in replacing image registration by a conditional segmentation algorithm, which can build on typical image segmentation networks and their widely-adopted training strategies. Using the registration of 3D MRI and ultrasound images of the prostate as an example to demonstrate this new approach, we report a median target registration error (TRE) of 2.1 mm between the ground-truth ROIs defined on intraoperative ultrasound images and those propagated from the preoperative MR images. Significantly lower (>34%) TREs were obtained using the proposed conditional segmentation compared with those obtained from a previously-proposed spatial-transformation-predicting registration network trained with the same multiple ROI labels for individual image pairs. We conclude this work by using a quantitative bias-variance analysis to provide one explanation of the observed improvement in registration accuracy.
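The proposed paradigm swaps a transformation-predicting network for a segmentation network conditioned on both images plus the moving-image ROI. A toy PyTorch sketch of that interface; the channel-concatenation layout and single-convolution backbone are placeholder assumptions:

```python
import torch

class ConditionalSegmenter(torch.nn.Module):
    """Toy stand-in: concatenates (moving image, fixed image, moving-image ROI)
    as channels and predicts the corresponding ROI in the fixed image. The
    real model is a full segmentation network; this conv is a placeholder."""
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv3d(3, 1, kernel_size=3, padding=1)

    def forward(self, moving, fixed, moving_roi):
        x = torch.cat([moving, fixed, moving_roi], dim=1)  # N, 3, D, H, W
        return torch.sigmoid(self.backbone(x))             # ROI in fixed image

net = ConditionalSegmenter()
out = net(torch.rand(1, 1, 8, 8, 8), torch.rand(1, 1, 8, 8, 8), torch.rand(1, 1, 8, 8, 8))
print(out.shape)  # torch.Size([1, 1, 8, 8, 8])
```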
- Published
- 2019
- Full Text
- View/download PDF
20. Stochastic Modeling of Temporal Enhanced Ultrasound: Impact of Temporal Properties on Prostate Cancer Characterization
- Author
Mena Gaed, Madeleine Moussa, Layan Nahlawi, Hagit Shatkay, Aaron Fenster, Parvin Mousavi, Eli Gibson, Farhad Imani, Purang Abolmaesumi, Caroline Goncalves, Aaron D. Ward, and Jose A. Gomez
- Subjects
Male, Computer science, Biomedical Engineering, Signal, Sensitivity and Specificity, Prostate cancer, Biopsy, Image Interpretation, Computer-Assisted, Humans, Hidden Markov model, Divergence (statistics), Ultrasonography, Stochastic Processes, Ultrasound, Prostate, Prostatic Neoplasms, Pattern recognition, Tissue characterization, Markov Chains, Ultrasonic imaging, Imaging technique, Artificial intelligence
- Abstract
Objectives: Temporal enhanced ultrasound (TeUS) is a new ultrasound-based imaging technique that provides tissue-specific information. Recent studies have shown the potential of TeUS for improving tissue characterization in prostate cancer diagnosis. We study the temporal properties of TeUS—temporal order and length—and present a new framework to assess their impact on tissue information. Methods: We utilize a probabilistic modeling approach using hidden Markov models (HMMs) to capture the temporal signatures of malignant and benign tissues from TeUS signals of nine patients. We model signals of benign and malignant tissues (284 and 286 signals, respectively) in their original temporal order as well as under order permutations. We then compare the resulting models using the Kullback–Leibler divergence and assess their performance differences in characterization. Moreover, we train HMMs using TeUS signals of different durations and compare their model performance when differentiating tissue types. Results: Our findings demonstrate that models of order-preserved signals perform statistically significantly better (85% accuracy) in tissue characterization than models of order-altered signals (62% accuracy). The performance degrades as more changes in signal order are introduced. Additionally, models trained on shorter sequences perform as accurately as models of longer sequences. Conclusion: The work presented here strongly indicates that temporal order has a substantial impact on TeUS performance; thus, it plays a significant role in conveying tissue-specific information. Furthermore, shorter TeUS signals can relay sufficient information to accurately distinguish between tissue types. Significance: Understanding the impact of TeUS properties facilitates its adoption in diagnostic procedures and provides insights into improving its acquisition.
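The Kullback–Leibler divergence between two HMMs has no closed form, so it is commonly estimated by sampling from one model and scoring the samples under both. The abstract does not state which estimator the authors used, so treat this hmmlearn sketch as one plausible reading:

```python
import numpy as np
from hmmlearn import hmm

def mc_kl(hmm_p, hmm_q, n_seq=100, seq_len=50, seed=0):
    """Monte Carlo estimate of KL(p||q) per observation between two HMMs:
    sample sequences from p and average the log-likelihood ratio."""
    rng = np.random.RandomState(seed)
    total = 0.0
    for _ in range(n_seq):
        X, _ = hmm_p.sample(seq_len, random_state=rng)
        total += (hmm_p.score(X) - hmm_q.score(X)) / seq_len
    return total / n_seq

# Toy models fitted on synthetic data, standing in for the tissue-class HMMs.
rng = np.random.default_rng(0)
a = hmm.GaussianHMM(n_components=2, random_state=0).fit(rng.normal(0, 1, (500, 1)))
b = hmm.GaussianHMM(n_components=2, random_state=0).fit(rng.normal(1, 1, (500, 1)))
print(mc_kl(a, b))
```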
- Published
- 2018
21. Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images
- Author
Nooshin Ghavami, Yipeng Hu, Ester Bonmati, Rachael Rodell, Eli Gibson, Caroline Moore, and Dean Barratt
- Subjects
Special Section on Artificial Intelligence in Medical Imaging
- Abstract
Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, which often require significant manual interaction and are subject to interoperator variability. Therefore, automating this step would lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to be able to incorporate 3-D spatial information by taking, as input, one or more neighboring TRUS slices in addition to each slice to be segmented. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). The segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients, on the 2-D images and corresponding 3-D volumes, respectively, as well as the 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve the segmentation performance in five out of six experiments, which varied the number of neighboring slices from 1 to 3 on either side. Up-sampling shortcuts in the network architecture reduced the overall training time from 253 min to 161 min.
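Feeding 3-D context to a 2-D network, as described above, amounts to stacking each slice with its neighbours as input channels. A small sketch; the boundary-clamping rule is an assumption rather than a detail from the paper:

```python
import numpy as np

def stack_neighbors(volume: np.ndarray, idx: int, k: int = 1) -> np.ndarray:
    """Build a (2k+1)-channel input for slice `idx` from its k neighbours on
    each side, clamping indices at the volume boundary (clamping is assumed)."""
    n = volume.shape[0]
    picks = [min(max(idx + d, 0), n - 1) for d in range(-k, k + 1)]
    return volume[picks]          # shape: (2k+1, H, W)

vol = np.random.rand(32, 64, 64)  # toy TRUS volume: slices x H x W
x = stack_neighbors(vol, idx=0, k=1)
print(x.shape)                    # (3, 64, 64)
```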
- Published
- 2018
22. Automatic slice segmentation of intraoperative transrectal ultrasound images using convolutional neural networks
- Author
-
Yipeng Hu, Eli Gibson, Ester Bonmati, Caroline M. Moore, Dean C. Barratt, Nooshin Ghavami, and Rachael Rodell
- Subjects
Prostate biopsy ,medicine.diagnostic_test ,business.industry ,Computer science ,Deep learning ,Image registration ,urologic and male genital diseases ,medicine.disease ,Convolutional neural network ,Cross-validation ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,030220 oncology & carcinogenesis ,medicine ,Computer vision ,Segmentation ,Artificial intelligence ,business - Abstract
Clinically important targets for ultrasound-guided prostate biopsy and prostate cancer focal therapy can be defined on MRI. However, localizing these targets on transrectal ultrasound (TRUS) remains challenging. Automatic segmentation of the prostate on intraoperative TRUS images is an important step towards automating most MRI-TRUS image registration workflows so that they become more acceptable in clinical practice. In this paper, we propose a deep learning method using convolutional neural networks (CNNs) for automatic prostate segmentation in 2D TRUS slices and 3D TRUS volumes. The method was evaluated on a clinical cohort of 110 patients who underwent TRUS-guided targeted biopsy. Segmentation accuracy was measured by comparison to manual prostate segmentation in 2D on 4055 TRUS images and in 3D on the corresponding 110 volumes, in a 10-fold patient-level cross validation. The proposed method achieved a mean 2D Dice similarity coefficient (DSC) of 0.91 ± 0.12 and a mean absolute boundary segmentation error of 1.23 ± 1.46 mm. Dice scores (0.91 ± 0.04) were also calculated for 3D volumes on the patient level. These results suggest a promising approach to aid a wide range of TRUS-guided prostate cancer procedures needing multimodality data fusion.
- Published
- 2018
- Full Text
- View/download PDF
23. Ultrasound-Based Characterization of Prostate Cancer Using Joint Independent Component Analysis
- Author
-
D. Robert Siemens, Farhad Imani, Mena Gaed, Cesare Romagnoli, Aaron D. Ward, Saman Nouranian, Silvia D. Chang, Michael Leveridge, Purang Abolmaesumi, Eli Gibson, Madeleine Moussa, Mahdi Ramezani, Amir Khojaste, Jose A. Gomez, Aaron Fenster, and Parvin Mousavi
- Subjects
Male ,Engineering ,Feature vector ,Feature extraction ,Wavelet Analysis ,030232 urology & nephrology ,Biomedical Engineering ,Cross-validation ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Image Processing, Computer-Assisted ,Humans ,Time series ,Ultrasonography ,Models, Statistical ,Receiver operating characteristic ,business.industry ,Prostate ,Prostatic Neoplasms ,Wavelet transform ,Pattern recognition ,Independent component analysis ,3. Good health ,Principal component analysis ,Artificial intelligence ,business - Abstract
Objective: This paper presents the results of a new approach for the selection of RF time series features based on joint independent component analysis for in vivo characterization of prostate cancer. Methods: We project three sets of RF time series features, extracted from the spectrum, fractal dimension, and wavelet transform of the ultrasound RF data, onto a space spanned by five joint independent components. Then, we demonstrate that the mixing coefficients obtained from a group of patients can be used to train a classifier, which can be applied to characterize cancerous regions of a test patient. Results: In a leave-one-patient-out cross validation, an area under the receiver operating characteristic curve of 0.93 and a classification accuracy of 84% are achieved. Conclusion: Ultrasound RF time series can be used to accurately characterize prostate cancer in vivo, without the need for an exhaustive search in the feature space. Significance: We use joint independent component analysis for the systematic fusion of multiple sets of RF time series features, within a machine learning framework, to characterize PCa in an in vivo study.
- Published
- 2015
- Full Text
- View/download PDF
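The sketch below loosely imitates the fusion recipe: concatenate the three RF time-series feature sets per region of interest, project onto five independent components, and classify using the resulting per-ROI coefficients. It assumes scikit-learn's FastICA and logistic regression on synthetic features, a simplification of the joint ICA formulation in the paper.

```python
# Hedged sketch: a loose stand-in for joint-ICA feature fusion. Synthetic
# features replace the real spectral/fractal/wavelet sets.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_rois = 200
y = rng.integers(0, 2, n_rois)               # 0 = benign, 1 = cancer
spectral = rng.normal(y[:, None], 1.0, (n_rois, 20))
fractal  = rng.normal(y[:, None], 1.0, (n_rois, 5))
wavelet  = rng.normal(y[:, None], 1.0, (n_rois, 15))

# Concatenate the feature groups, then project onto 5 independent components;
# the per-ROI component activations feed the classifier.
features = np.hstack([spectral, fractal, wavelet])
mixing = FastICA(n_components=5, random_state=0).fit_transform(features)

clf = LogisticRegression()
print("CV accuracy:", cross_val_score(clf, mixing, y, cv=5).mean())
```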
24. Development of a computer aided diagnosis model for prostate cancer classification on multi-parametric MRI
- Author
-
Mena Gaed, Joseph L. Chin, Stephen E. Pautler, D. Soetemans, Ryan Alfano, Madeleine Moussa, Eli Gibson, Jose A. Gomez, Aaron D. Ward, and Glenn Bauman
- Subjects
medicine.medical_specialty ,business.industry ,Linear classifier ,Feature selection ,medicine.disease ,Random forest ,Support vector machine ,Prostate cancer ,Prostate cancer screening ,Computer-aided diagnosis ,Medicine ,Radiology ,business ,Grading (tumors) - Abstract
Multi-parametric MRI (mp-MRI) is becoming a standard in contemporary prostate cancer screening and diagnosis, and has been shown to aid physicians in cancer detection. It offers many advantages over traditional systematic biopsy, which has been shown to have clinical false-negative rates of up to 23% at all stages of the disease. However beneficial, mp-MRI is relatively complex to interpret and suffers from inter-observer variability in lesion localization and grading. Computer-aided diagnosis (CAD) systems have been developed as a solution, as they have the power to perform deterministic quantitative image analysis. We measured the accuracy of such a system validated using accurately co-registered whole-mount digitized histology. We trained a logistic linear classifier (LOGLC), support vector machine (SVC), k-nearest neighbour (KNN), and random forest classifier (RFC) in a four-part ROI-based experiment against: 1) cancer vs. non-cancer, 2) high-grade (Gleason score ≥4+3) vs. low-grade cancer (Gleason score
- Published
- 2018
- Full Text
- View/download PDF
25. Class-Aware Adversarial Lung Nodule Synthesis in CT Images
- Author
-
Siqi Liu, Guillaume Chabin, Andrew F. Laine, Bogdan Georgescu, Eli Gibson, Arnaud Arindra Adiyoso Setio, Sasa Grbic, Dorin Comaniciu, Jie Yang, and Zhoubing Xu
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Context (language use) ,Malignancy ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Medical imaging ,business.industry ,Deep learning ,Supervised learning ,Pattern recognition ,Nodule (medicine) ,Real image ,medicine.disease ,Class (biology) ,ComputingMethodologies_PATTERNRECOGNITION ,Binary classification ,030220 oncology & carcinogenesis ,Artificial intelligence ,medicine.symptom ,business - Abstract
Though large-scale datasets are essential for training deep learning systems, it is expensive to scale up the collection of medical imaging datasets. Synthesizing the objects of interest, such as lung nodules, in medical images based on the distribution of annotated datasets can be helpful for improving supervised learning tasks, especially when the datasets are limited in size and class balance. In this paper, we propose a class-aware adversarial synthesis framework to synthesize lung nodules in CT images. The framework is built with a coarse-to-fine patch in-painter (generator) and two class-aware discriminators. By conditioning on the random latent variables and the target nodule labels, the trained networks are able to generate diverse nodules given the same context. Evaluating on the public LIDC-IDRI dataset, we demonstrate an example application of the proposed framework for improving the accuracy of lung nodule malignancy estimation as a binary classification problem, which is important in the lung screening scenario. We show that combining the real image patches and the synthetic lung nodules in the training set can improve the mean AUC classification score across different network architectures by 2%.
- Published
- 2018
- Full Text
- View/download PDF
26. Inter-site Variability in Prostate Segmentation Accuracy Using Deep Learning
- Author
-
Dean C. Barratt, Caroline M. Moore, Hashim U. Ahmed, Henkjan J. Huisman, Mark Emberton, Yipeng Hu, Eli Gibson, and Nooshin Ghavami
- Subjects
Training set ,Computer science ,business.industry ,Deep learning ,education ,Pattern recognition ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Medical imaging ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Prostate segmentation - Abstract
Deep-learning-based segmentation tools have yielded higher reported segmentation accuracies for many medical imaging applications. However, inter-site variability in image properties can challenge the translation of these tools to data from ‘unseen’ sites not included in the training data. This study quantifies the impact of inter-site variability on the accuracy of deep-learning-based segmentations of the prostate from magnetic resonance (MR) images, and evaluates two strategies for mitigating the reduced accuracy for data from unseen sites: training on multi-site data and training with limited additional data from the unseen site. Using 376 T2-weighted prostate MR images from six sites, we compare the segmentation accuracy (Dice score and boundary distance) of three deep-learning-based networks trained on data from a single site and on various configurations of data from multiple sites. We found that the segmentation accuracy of a single-site network was substantially worse on data from unseen sites than on data from the training site. Training on multi-site data yielded marginally improved accuracy and robustness. However, including as few as 8 subjects from the unseen site, e.g. during commissioning of a new clinical system, yielded substantial improvement (regaining 75% of the difference in Dice score).
- Published
- 2018
- Full Text
- View/download PDF
27. Adversarial Deformation Regularization for Training Image Registration Neural Networks
- Author
-
Nooshin Ghavami, Ester Bonmati, J. Alison Noble, Tom Vercauteren, Caroline M. Moore, Mark Emberton, Yipeng Hu, Dean C. Barratt, and Eli Gibson
- Subjects
medicine.diagnostic_test ,Artificial neural network ,Computer science ,business.industry ,Image registration ,Magnetic resonance imaging ,Pattern recognition ,medicine.disease ,Regularization (mathematics) ,Convolutional neural network ,Finite element method ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing the heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised, anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize similarity between the anatomical labels that drive image alignment and to minimize an adversarial generator loss that measures divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully-automated registration that requires only an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm, significantly lower than those from several other regularization methods.
- Published
- 2018
- Full Text
- View/download PDF
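A toy rendering of the training objective only: the discriminator learns to separate finite-element-simulated displacement fields from predicted ones, while the registration ("generator") loss combines label similarity with the adversarial term. Networks, tensor shapes, and the 0.1 weighting are placeholders, not the published configuration.

```python
# Hedged sketch: adversarial regularization of a registration loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

disc = nn.Sequential(nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))

def soft_dice(a, b, eps=1e-6):
    inter = (a * b).sum()
    return 1 - (2 * inter + eps) / (a.sum() + b.sum() + eps)

pred_ddf = torch.randn(1, 3, 16, 16, 16, requires_grad=True)  # predicted field
sim_ddf = torch.randn(1, 3, 16, 16, 16)                       # FEM-simulated field
warped_label = torch.rand(1, 1, 16, 16, 16)
fixed_label = torch.rand(1, 1, 16, 16, 16)

# Discriminator loss: real = simulated motion, fake = network prediction
# (optimizer updates omitted in this sketch).
d_loss = F.binary_cross_entropy_with_logits(disc(sim_ddf), torch.ones(1, 1)) + \
         F.binary_cross_entropy_with_logits(disc(pred_ddf.detach()), torch.zeros(1, 1))

# Registration ("generator") loss: align labels + fool the discriminator.
g_loss = soft_dice(warped_label, fixed_label) + \
         0.1 * F.binary_cross_entropy_with_logits(disc(pred_ddf), torch.ones(1, 1))
g_loss.backward()
print(float(d_loss), float(g_loss))
```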
28. A Graph-Based Multi-kernel Feature Weight Learning Framework for Detection and Grading of Prostate Lesions Using Multi-parametric MR Images
- Author
-
Bernard Chiu, Aaron D. Ward, Huagen Liang, Eli Gibson, Derek W. Cool, Weifu Chen, Qi Shen, Matthew Bastian-Jordan, Zahra Kassam, and Guocan Feng
- Subjects
medicine.diagnostic_test ,Computer science ,business.industry ,Feature extraction ,Pattern recognition ,Magnetic resonance imaging ,Regression analysis ,medicine.disease ,computer.software_genre ,Regression ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,Voxel ,030220 oncology & carcinogenesis ,medicine ,Artificial intelligence ,business ,computer ,Grading (tumors) - Abstract
Prostate cancer is the third leading cause of cancer death in men. However, the disease is curable if diagnosed early. During the past decades, multi-parametric magnetic resonance imaging (mpMRI) has been shown to be superior to trans-rectal ultrasound (TRUS) in detecting and localizing prostate cancer lesions to guide prostate biopsies and radiation therapies. The goal of this paper is to develop a simple and accurate graph-based regression framework for voxel-wise detection and grading of prostate cancer using mpMRIs. In the framework, groups of features were first extracted from the mpMRIs, and a graph-based multi-kernel model was proposed to learn the weights of the feature groups and the similarity matrix simultaneously. A Laplacian regression model was then used to estimate the PI-RADS score of each voxel, which characterizes how likely the voxel is to be cancerous. Experimental results for the detection and grading of prostate lesions, evaluated using six metrics, show that the proposed method yields convincing performance.
- Published
- 2017
- Full Text
- View/download PDF
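The sketch below conveys the flavour of the regression step: build a similarity graph from a weighted combination of per-feature-group kernels and solve a Laplacian-regularized system for voxel scores. The fixed kernel weights are an assumption; the paper learns the weights and similarity matrix jointly.

```python
# Hedged sketch: graph-Laplacian regression over combined feature-group kernels.
import numpy as np

rng = np.random.default_rng(0)
n = 60
groups = [rng.normal(size=(n, d)) for d in (8, 12, 6)]   # feature groups per voxel
y = rng.uniform(1, 5, n)                                  # PI-RADS-like scores

def rbf_kernel(X, gamma=0.5):
    # Pairwise Gaussian similarity within one feature group.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

weights = np.array([0.5, 0.3, 0.2])                       # assumed fixed here
W = sum(w * rbf_kernel(X) for w, X in zip(weights, groups))
L = np.diag(W.sum(1)) - W                                 # graph Laplacian

lam = 0.1
f = np.linalg.solve(np.eye(n) + lam * L, y)               # smoothed score estimates
print(f[:5])
```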
29. MP70-02 CORRELATION OF MPMRI CONTOURS WITH 3-DIMENSIONAL 5MM TRANSPERINEAL PROSTATE MAPPING BIOPSY WITHIN THE PROMIS TRIAL PILOT: WHAT MARGINS ARE REQUIRED?
- Author
-
Dean C. Barratt, Esther Bonmati, Richard Kaplan, Eli Gibson, Hashim U. Ahmed, Mark Emberton, Yipeng Hu, Clement Orczyk, Alex Kirkham, Yolana Coraco-Moraes, Shonit Punwani, Ahmed El-Shater Bosaily, Louise Brown, and Katie Ward
- Subjects
medicine.medical_specialty ,medicine.anatomical_structure ,medicine.diagnostic_test ,Prostate ,business.industry ,Urology ,Biopsy ,medicine ,Radiology ,business - Published
- 2017
- Full Text
- View/download PDF
30. MP38-07 SHOULD WE AIM FOR THE CENTRE OF AN MRI PROSTATE LESION? CORRELATION BETWEEN MPMRI AND 3-DIMENSIONAL 5MM TRANSPERINEAL PROSTATE MAPPING BIOPSIES FROM THE PROMIS TRIAL
- Author
-
Yipeng Hu, Dean C. Barratt, Esther Bonmati, Hashim U. Ahmed, Ahmed El-Shater Bosaily, Mark Emberton, Louise Brown, Shonit Punwani, Katie Ward, Alex Kirkham, Yolana Coraco-Moraes, Eli Gibson, Clement Orczyk, and Richard Kaplan
- Subjects
Lesion ,medicine.medical_specialty ,medicine.anatomical_structure ,Prostate ,business.industry ,Urology ,medicine ,Radiology ,medicine.symptom ,business - Published
- 2017
- Full Text
- View/download PDF
31. Deep residual networks for automatic segmentation of laparoscopic videos of the liver
- Author
-
C Schneider, Maria Robu, Kurinchi Selvan Gurusamy, Eli Gibson, Dean C. Barratt, Stephen A. Thompson, David J. Hawkes, Brian R. Davidson, Eddie Edwards, and Matthew J. Clarkson
- Subjects
medicine.medical_specialty ,Computer science ,Population ,02 engineering and technology ,Liver resections ,Residual ,Convolutional neural network ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Liver tissue ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,education ,Laparoscopy ,education.field_of_study ,medicine.diagnostic_test ,business.industry ,Deep learning ,Image segmentation ,medicine.disease ,020201 artificial intelligence & image processing ,Radiology ,Artificial intelligence ,Liver cancer ,business - Abstract
Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores ≥0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
- Published
- 2017
- Full Text
- View/download PDF
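A minimal sketch of a multi-resolution loss in the spirit of the architecture described above: soft Dice is evaluated on prediction/label pairs at several average-pooled scales and averaged. The scales and equal weighting are assumptions; the paper's exact loss may differ.

```python
# Hedged sketch: multi-resolution soft Dice loss for 2-D segmentation.
import torch
import torch.nn.functional as F

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def multires_dice_loss(pred, target, scales=(1, 2, 4)):
    losses = []
    for s in scales:
        p = F.avg_pool2d(pred, s) if s > 1 else pred
        t = F.avg_pool2d(target, s) if s > 1 else target
        losses.append(soft_dice_loss(p, t))
    return torch.stack(losses).mean()

pred = torch.rand(1, 1, 64, 64, requires_grad=True)   # sigmoid outputs
target = (torch.rand(1, 1, 64, 64) > 0.5).float()     # binary liver mask
loss = multires_dice_loss(pred, target)
loss.backward()
print(float(loss))
```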
32. Models of temporal enhanced ultrasound data for prostate cancer diagnosis: the impact of time-series order
- Author
-
Aaron D. Ward, Aaron Fenster, Parvin Mousavi, Farhad Imani, Hagit Shatkay, Madeleine Moussa, Mena Gaed, Caroline Goncalves, Purang Abolmaesumi, Layan Nahlawi, Eli Gibson, and Jose A. Gomez
- Subjects
Series (mathematics) ,medicine.diagnostic_test ,business.industry ,Computer science ,Ultrasound ,030232 urology & nephrology ,Cancer ,Pattern recognition ,Tissue characterization ,Malignancy ,medicine.disease ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Order (biology) ,Prostate ,Biopsy ,medicine ,Artificial intelligence ,Ultrasonography ,business ,Hidden Markov model - Abstract
Recent studies have shown the value of Temporal Enhanced Ultrasound (TeUS) imaging for tissue characterization in transrectal ultrasound-guided prostate biopsies. Here, we present results of experiments designed to study the impact of temporal order of the data in TeUS signals. We assess the impact of variations in temporal order on the ability to automatically distinguish benign prostate-tissue from malignant tissue. We have previously used Hidden Markov Models (HMMs) to model TeUS data, as HMMs capture temporal order in time series. In the work presented here, we use HMMs to model malignant and benign tissues; the models are trained and tested on TeUS signals while introducing variation to their temporal order. We first model the signals in their original temporal order, followed by modeling the same signals under various time rearrangements. We compare the performance of these models for tissue characterization. Our results show that models trained over the original order-preserving signals perform statistically significantly better for distinguishing between malignant and benign tissues, than those trained on rearranged signals. The performance degrades as the amount of temporal-variation increases. Specifically, accuracy of tissue characterization decreases from 85% using models trained on original signals to 62% using models trained and tested on signals that are completely temporally-rearranged. These results indicate the importance of order in characterization of tissue malignancy from TeUS data.
- Published
- 2017
- Full Text
- View/download PDF
33. Prostate lesion detection and localization based on locality alignment discriminant analysis
- Author
-
Bernard Chiu, Matthew Bastian-Jordan, Derek W. Cool, Eli Gibson, Tommy W. S. Chow, Zahra Kassam, Mingquan Lin, Aaron D. Ward, Weifu Chen, and Mingbo Zhao
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Feature vector ,030232 urology & nephrology ,Cancer ,Magnetic resonance imaging ,Image segmentation ,medicine.disease ,Linear discriminant analysis ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,Biopsy ,Medicine ,Computer vision ,Radiology ,Artificial intelligence ,business - Abstract
Prostatic adenocarcinoma is one of the most commonly occurring cancers among men in the world, and it is also the most curable cancer when it is detected early. Multiparametric MRI (mpMRI) combines anatomic and functional prostate imaging techniques, which have been shown to produce high sensitivity and specificity in cancer localization; this is important in planning biopsies and focal therapies. However, in previous investigations, lesion localization was achieved mainly by manual segmentation, which is time-consuming and prone to observer variability. Here, we developed an algorithm based on the locality alignment discriminant analysis (LADA) technique, which can be considered a version of linear discriminant analysis (LDA) localized to patches in the feature space. The sensitivity, specificity, and accuracy generated by the proposed LADA-based algorithm in five prostates were 52.2%, 89.1%, and 85.1%, respectively, compared to 31.3%, 85.3%, and 80.9% generated by LDA. The delineation accuracy attainable by this tool has the potential to increase the cancer detection rate in biopsies and to minimize collateral damage to surrounding tissues in focal therapies.
- Published
- 2017
- Full Text
- View/download PDF
34. Freehand Ultrasound Image Simulation with Spatially-Conditioned Generative Adversarial Networks
- Author
-
Weidi Xie, Li-Lin Lee, Tom Vercauteren, J. Alison Noble, Dean C. Barratt, Yipeng Hu, and Eli Gibson
- Subjects
FOS: Computer and information sciences ,Discriminator ,Pixel ,business.industry ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Real image ,Sample (graphics) ,Imaging phantom ,Machine Learning (cs.LG) ,Computer Science - Learning ,Range (mathematics) ,Position (vector) ,Computer vision ,Artificial intelligence ,business ,Spatial analysis - Abstract
Sonography synthesis has a wide range of applications, including medical procedure simulation, clinical training and multimodality image registration. In this paper, we propose a machine learning approach to simulate ultrasound images at given 3D spatial locations (relative to the patient anatomy), based on conditional generative adversarial networks (GANs). In particular, we introduce a novel neural network architecture that can sample anatomically accurate images conditionally on spatial position of the (real or mock) freehand ultrasound probe. To ensure an effective and efficient spatial information assimilation, the proposed spatially-conditioned GANs take calibrated pixel coordinates in global physical space as conditioning input, and utilise residual network units and shortcuts of conditioning data in the GANs' discriminator and generator, respectively. Using optically tracked B-mode ultrasound images, acquired by an experienced sonographer on a fetus phantom, we demonstrate the feasibility of the proposed method by two sets of quantitative results: distances were calculated between corresponding anatomical landmarks identified in the held-out ultrasound images and the simulated data at the same locations unseen to the networks; a usability study was carried out to distinguish the simulated data from the real images. In summary, we present what we believe are state-of-the-art visually realistic ultrasound images, simulated by the proposed GAN architecture that is stable to train and capable of generating plausibly diverse image samples. (Accepted to MICCAI RAMBO 2017.)
- Published
- 2017
- Full Text
- View/download PDF
35. Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks
- Author
-
Stephen P. Pereira, Brian R. Davidson, Eli Gibson, Yipeng Hu, Francesco Giganti, Ester Bonmati, Dean C. Barratt, Steve Bandula, Matthew J. Clarkson, and Kurinchi Selvan Gurusamy
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Stomach ,Abdominal ct ,Image registration ,Multi organ ,030218 nuclear medicine & medical imaging ,Endoscopy ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,medicine ,Segmentation ,Radiology ,Esophagus ,Pancreas ,business ,030217 neurology & neurosurgery - Abstract
Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because robust interpatient registration of abdominal images is necessary for existing multi-atlas- and statistical-shape-model-based segmentations, but remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared to a joint-label-fusion-based segmentation based on Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores 66 vs 37), stomach (83 vs 72) and esophagus (73 vs 54) and marginally less accurate segmentation for the liver (92 vs 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
- Published
- 2017
- Full Text
- View/download PDF
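The core ingredient named in the abstract, dilated convolution units with dense skip connections, can be sketched in PyTorch as below. Layer counts, growth rate, and dilation factors are illustrative; the spatial prior used in the paper is not reproduced.

```python
# Hedged sketch: dilated convolutions with dense skip connections in 3-D.
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    def __init__(self, in_ch, growth=8, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, 3, padding=d, dilation=d),
                nn.ReLU()))
            ch += growth              # dense: each layer sees all earlier maps

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseDilatedBlock(in_ch=1)
ct_patch = torch.randn(1, 1, 32, 64, 64)   # abdominal CT patch
print(block(ct_patch).shape)               # in_ch + 4 * growth output channels
```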
36. Fusion of multi-parametric MRI and temporal ultrasound for characterization of prostate cancer: in vivo feasibility study
- Author
-
Eli Gibson, Mena Gaed, Silvia D. Chang, Zahra Kassam, Derek W. Cool, Amir Khojaste, Michael Leveridge, Siavash Khallaghi, Cesare Romagnoli, Purang Abolmaesumi, Farhad Imani, Matthew Bastian-Jordan, Aaron Fenster, Parvin Mousavi, Madeleine Moussa, D. Robert Siemens, Aaron D. Ward, Jose A. Gomez, and Sahar Ghavidel
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,Receiver operating characteristic ,business.industry ,Ultrasound ,030232 urology & nephrology ,Cancer ,Magnetic resonance imaging ,medicine.disease ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,In vivo ,Biopsy ,medicine ,Medical physics ,Histopathology ,business ,Nuclear medicine - Abstract
Recently, multi-parametric Magnetic Resonance Imaging (mp-MRI) has been used to improve the sensitivity of detecting high-risk prostate cancer (PCa). Prior to biopsy, primary and secondary cancer lesions are identified on mp-MRI. The lesions are then targeted using TRUS guidance. In this paper, for the first time, we present a fused mp-MRI-temporal-ultrasound framework for characterization of PCa in vivo. Cancer classification results obtained using temporal ultrasound are fused with those achieved using consolidated mp-MRI maps determined by multiple observers. We verify the outcome of our study using histopathology following deformable registration of ultrasound and histology images. Fusion of temporal ultrasound and mp-MRI for characterization of PCa results in an area under the receiver operating characteristic curve (AUC) of 0.86 for cancerous regions with Gleason scores (GSs) ≥ 3+3, and an AUC of 0.89 for those with GSs ≥ 3+4.
- Published
- 2016
- Full Text
- View/download PDF
37. How does prostate biopsy guidance error impact pathologic cancer risk assessment?
- Author
-
Joseph L. Chin, Peter R. Martin, Mena Gaed, Derek W. Cool, Aaron D. Ward, Eli Gibson, Stephen E. Pautler, Madeleine Moussa, Jose A. Gomez, and Aaron Fenster
- Subjects
medicine.medical_specialty ,Prostate biopsy ,medicine.diagnostic_test ,business.industry ,Prostatectomy ,medicine.medical_treatment ,Ultrasound ,Image registration ,Magnetic resonance imaging ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,030220 oncology & carcinogenesis ,Biopsy ,medicine ,Medical physics ,Radiology ,Image-Guided Biopsy ,business - Abstract
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21–47% false negative rate of clinical 2D TRUS-guided sextant biopsy, but still has a substantial false negative rate. This could be improved via biopsy needle target optimization, accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As an initial step toward the broader goal of optimized prostate biopsy targeting, in this study we elucidated the impact of biopsy needle delivery error on the probability of obtaining a tumor sample, and on core involvement. Both are important parameters for patient risk stratification and for the decision between active surveillance and definitive therapy. We addressed these questions for cancer of all grades, and separately for high-grade (≥ Gleason 4+3) cancer. We used expert-contoured, gold-standard prostatectomy histology to simulate targeted biopsies using an isotropic Gaussian needle delivery error from 1 to 6 mm, and investigated the amount of cancer obtained in each biopsy core as determined by histology. Needle delivery error resulted in variability in core involvement that could influence treatment decisions; the presence or absence of cancer in 1/3 or more of each needle core can be attributed to a needle delivery error of 4 mm. However, our data showed that by making multiple biopsy attempts at selected tumor foci, we may increase the probability of correctly characterizing the extent and grade of the cancer.
- Published
- 2016
- Full Text
- View/download PDF
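The simulation logic is easy to reproduce in miniature: sample an isotropic Gaussian displacement of the needle, intersect the displaced core with a tumor mask, and tally hit probability and core involvement. The circular 2-D tumor, 18 mm core length, and point-sampled core below are stand-ins for the contoured whole-mount histology used in the study.

```python
# Hedged sketch: Monte Carlo effect of Gaussian needle delivery error on
# tumour sampling probability and core involvement, on a synthetic tumour.
import numpy as np

rng = np.random.default_rng(0)
tumor_radius = 5.0                       # mm, synthetic focus
core_len, n_core_pts = 18.0, 100         # biopsy core sampled as points on a line

def simulate(sigma_mm, n_trials=10_000):
    hits, involvement = 0, []
    axis = np.linspace(-core_len / 2, core_len / 2, n_core_pts)
    for _ in range(n_trials):
        dx, dy = rng.normal(0.0, sigma_mm, 2)       # isotropic delivery error
        # Core passes through the (displaced) target along the x-axis.
        pts = np.stack([axis + dx, np.full(n_core_pts, dy)], axis=1)
        inside = (pts ** 2).sum(1) < tumor_radius ** 2
        frac = inside.mean()                         # core involvement
        hits += frac > 0
        involvement.append(frac)
    return hits / n_trials, float(np.mean(involvement))

for sigma in (1, 2, 4, 6):
    p_hit, mean_inv = simulate(sigma)
    print(f"sigma={sigma} mm: P(tumour sampled)={p_hit:.2f}, "
          f"mean core involvement={mean_inv:.2f}")
```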
38. Classification of prostate cancer grade using temporal ultrasound: in vivo feasibility study
- Author
-
Eli Gibson, D. Robert Siemens, Silvia D. Chang, Aaron D. Ward, Farhad Imani, Mena Gaed, Madeleine Moussa, Michael Leveridge, Siavash Khallaghi, Purang Abolmaesumi, Jose A. Gomez, Sahar Ghavidel, Amir Khojaste, Aaron Fenster, and Parvin Mousavi
- Subjects
Receiver operating characteristic ,business.industry ,Computer science ,0206 medical engineering ,Ultrasound ,Cancer ,Feature selection ,Pattern recognition ,02 engineering and technology ,medicine.disease ,020601 biomedical engineering ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,In vivo ,Principal component analysis ,medicine ,Unsupervised learning ,Artificial intelligence ,Ultrasonography ,business - Abstract
Temporal ultrasound has been shown to have high classification accuracy in differentiating cancer from benign tissue. In this paper, we extend the temporal ultrasound method to classify lower-grade prostate cancer (PCa) against all other grades. We use a group of nine patients with mostly lower-grade PCa, where cancerous regions are also limited. A critical challenge is to train a classifier with limited aggressive cancerous tissue compared to low-grade cancerous tissue. To resolve the problem of imbalanced data, we use the Synthetic Minority Oversampling Technique (SMOTE) to generate synthetic samples for the minority class. We calculate spectral features of temporal ultrasound data and perform feature selection using random forests. In a leave-one-patient-out cross-validation strategy, an area under the receiver operating characteristic curve (AUC) of 0.74 is achieved, with an overall sensitivity and specificity of 70%. Using an unsupervised learning approach prior to the proposed method improves the sensitivity and AUC to 80% and 0.79, respectively. This work represents promising results for classifying lower- and higher-grade PCa with limited cancerous training samples, using temporal ultrasound.
- Published
- 2016
- Full Text
- View/download PDF
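A minimal sketch of the class-imbalance recipe: oversample the scarce higher-grade class with SMOTE, then rank features by random-forest importance. It assumes the imbalanced-learn and scikit-learn packages and synthetic spectral features; the class sizes and feature counts are illustrative.

```python
# Hedged sketch: SMOTE oversampling + random-forest feature ranking.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))                   # spectral features per ROI
y = np.r_[np.zeros(100, int), np.ones(20, int)]  # 1 = scarce higher-grade class
X[y == 1] += 0.8                                 # give the minority class signal

# Generate synthetic minority samples so both classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)

top = np.argsort(rf.feature_importances_)[::-1][:10]   # keep 10 best features
print("balanced counts:", np.bincount(y_bal), "selected features:", top)
```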
39. Prostate: Registration of Digital Histopathologic Images to in Vivo MR Images Acquired by Using Endorectal Receive Coil
- Author
-
Jose A. Gomez, Cesare Romagnoli, Eli Gibson, Glenn Bauman, Jacques Montreuil, Aaron D. Ward, Charles A. McKenzie, Aaron Fenster, Cathie Crukley, Joseph L. Chin, and Madeleine Moussa
- Subjects
Gadolinium DTPA ,Male ,medicine.medical_specialty ,medicine.medical_treatment ,Contrast Media ,Magnetic Resonance Imaging, Interventional ,Imaging, Three-Dimensional ,Fiducial Markers ,Prostate ,In vivo ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Prostatectomy ,medicine.diagnostic_test ,business.industry ,Prostatic Neoplasms ,Magnetic resonance imaging ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,Research studies ,Prostate surgery ,Radiology ,Mr images ,business ,Fiducial marker - Abstract
To develop and evaluate a technique for the registration of in vivo prostate magnetic resonance (MR) images to digital histopathologic images by using image-guided specimen slicing based on strand-shaped fiducial markers relating specimen imaging to histopathologic examination. The study was approved by the institutional review board (the University of Western Ontario Health Sciences Research Ethics Board, London, Ontario, Canada), and written informed consent was obtained from all patients. This work proposed and evaluated a technique utilizing the developed fiducial markers and real-time three-dimensional visualization in support of image guidance for ex vivo prostate specimen slicing parallel to the MR imaging planes prior to digitization, simplifying the registration process. Means, standard deviations, root-mean-square errors, and 95% confidence intervals are reported for all evaluated measurements. The slicing error was within the 2.2 mm thickness of the diagnostic-quality MR imaging sections, with a tissue block thickness standard deviation of 0.2 mm. Rigid registration provided negligible postregistration overlap of the smallest clinically important tumors (0.2 cm³) at histologic examination and MR imaging, whereas the tested nonrigid registration method yielded a mean target registration error of 1.1 mm and provided useful coregistration of such tumors. This method for the registration of prostate digital histopathologic images to in vivo MR images acquired by using an endorectal receive coil was sufficiently accurate for coregistering the smallest clinically important lesions with 95% confidence.
- Published
- 2012
- Full Text
- View/download PDF
40. A semi-automated method for identifying and measuring myelinated nerve fibers in scanning electron microscope images
- Author
-
Heather L. More, Mirza Faisal Beg, Eli Gibson, J. Maxwell Donelan, and Jingyun Chen
- Subjects
Time Factors ,Scanning electron microscope ,Computer science ,Myelinated nerve fiber ,business.industry ,General Neuroscience ,Pattern recognition ,Image segmentation ,Nerve Fibers, Myelinated ,Sciatic Nerve ,Axons ,Rats ,Myelin ,medicine.anatomical_structure ,nervous system ,Peripheral nerve ,Peripheral nervous system ,Microscopy, Electron, Scanning ,medicine ,Animals ,Artificial intelligence ,Axon ,business ,Neuroscience ,Automated method - Abstract
Diagnosing illnesses, developing and comparing treatment methods, and conducting research on the organization of the peripheral nervous system often require the analysis of peripheral nerve images to quantify the number, myelination, and size of axons in a nerve. Current methods that require manually labeling each axon can be extremely time-consuming, as a single nerve can contain thousands of axons. To improve efficiency, we developed a computer-assisted axon identification and analysis method that is capable of analyzing and measuring sub-images covering the nerve cross-section, acquired using a scanning electron microscope. This algorithm performs three main procedures: it first uses cross-correlation to combine the acquired sub-images into a large image showing the entire nerve cross-section, then identifies and individually labels axons using a series of image intensity and shape criteria, and finally identifies and labels the myelin sheath of each axon using a region growing algorithm with the geometric centers of axons as seeds. To ensure accurate analysis of the image, we incorporated manual supervision to remove mislabeled axons and add missed axons. The typical user-assisted processing time for a two-megapixel image containing over 2000 axons was less than 1 h. This was almost eight times faster than the time required to manually process the same image. Our method has proven to be well suited for identifying axons and their characteristics, and represents a significant time savings over traditional manual methods.
- Published
- 2011
- Full Text
- View/download PDF
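The third step of the pipeline, region growing from axon centers to label myelin, can be illustrated with scikit-image's flood fill on a synthetic bright-axon/dark-ring patch. The intensities, tolerance, and seed offset are assumptions for the toy image, not tuned values from the paper.

```python
# Hedged sketch: region growing for axon and myelin labeling on a synthetic
# "electron-microscope" patch.
import numpy as np
from skimage.segmentation import flood

rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
r = np.hypot(xx - 32, yy - 32)
img = np.where(r < 8, 0.9, np.where(r < 14, 0.2, 0.6))   # bright axon, dark ring
img += rng.normal(0, 0.02, img.shape)                    # mild imaging noise

axon = flood(img, (32, 32), tolerance=0.15)              # grow over the bright axon
# Seed the myelin just outside the axon boundary and grow over dark pixels.
myelin = flood(img, (32, 42), tolerance=0.15) & ~axon
print("axon px:", axon.sum(), "myelin px:", myelin.sum())
```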
41. Optic Nerve Head Registration Via Hemispherical Surface and Volume Registration
- Author
-
Eli Gibson, Mei Young, Marinko V. Sarunic, and Mirza Faisal Beg
- Subjects
medicine.diagnostic_test ,Contextual image classification ,Computer science ,business.industry ,Optic Disk ,Feature extraction ,Coordinate system ,Biomedical Engineering ,Image registration ,Image processing ,Grayscale ,Optical coherence tomography ,Data Interpretation, Statistical ,Image Processing, Computer-Assisted ,medicine ,Optic nerve ,Humans ,Computer vision ,Artificial intelligence ,Optical tomography ,business ,Tomography, Optical Coherence - Abstract
We present an automated method for nonrigid registration of the optic nerve head (ONH) surfaces extracted from 3-D optical coherence tomography images to give a one-to-one correspondence between two ONH surfaces. This allows development of population-average ONH surfaces, pooling of morphometric data measured on ONH surfaces from multiple subjects into a single chosen template surface, and statistical analysis (cross sectional, or longitudinal, or both) in a common coordinate system. An application of this coordinate system to construct an average ONH shape across an illustrative dataset is demonstrated, and the impact of template selection is assessed.
- Published
- 2010
- Full Text
- View/download PDF
42. Prostate Cancer: Improved Tissue Characterization by Temporal Modeling of Radio-Frequency Ultrasound Echo Data
- Author
-
Farhad Imani, Mena Gaed, Jose A. Gomez, Eli Gibson, Hagit Shatkay, Layan Nahlawi, Aaron D. Ward, Purang Abolmaesumi, Madeleine Moussa, Aaron Fenster, and Parvin Mousavi
- Subjects
medicine.medical_specialty ,business.industry ,0206 medical engineering ,Ultrasound ,Echo (computing) ,Cancer ,02 engineering and technology ,Tissue characterization ,Malignancy ,medicine.disease ,020601 biomedical engineering ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Medicine ,Radiology ,Radio frequency ,business ,Hidden Markov model - Abstract
Despite recent advances in clinical oncology, prostate cancer remains a major health concern in men, where current detection techniques still lead to both over- and under-diagnosis. More accurate prediction and detection of prostate cancer can improve disease management and treatment outcomes. Temporal ultrasound is a promising imaging approach that can help identify tissue-specific patterns in time series of ultrasound data and, in turn, differentiate between benign and malignant tissues. We propose a probabilistic-temporal framework, based on hidden Markov models, for modeling ultrasound time-series data obtained from prostate cancer patients. Our results show improved prediction of malignancy compared to previously reported results, where we identify cancerous regions with over 88% accuracy. As our models directly represent temporal aspects of the data, we expect our method to be applicable to other types of cancer in which temporal ultrasound can be captured.
- Published
- 2016
- Full Text
- View/download PDF
43. 2D-3D Registration Accuracy Estimation for Optimised Planning of Image-Guided Pancreatobiliary Interventions
- Author
-
Steven Bandula, Yipeng Hu, John H. Hipwell, Ester Bonmati, Stephen P. Pereira, Dean C. Barratt, David J. Hawkes, and Eli Gibson
- Subjects
Estimation ,3d registration ,business.industry ,Orientation (computer vision) ,Computer science ,Monte Carlo method ,030218 nuclear medicine & medical imaging ,Image (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,Computer vision ,In patient ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Simulation - Abstract
We describe a fast analytical method to estimate landmark-based 2D-3D registration accuracy to aid the planning of pancreatobiliary interventions in which ERCP images are combined with information from diagnostic 3D MR or CT images. The method analytically estimates a target registration error (TRE), accounting for errors in the manual selection of both 2D and 3D landmarks, that agrees with Monte Carlo simulation to within 4.5 ± 3.6% (mean ± SD). We also show how to analytically estimate a planning uncertainty incorporating uncertainty in patient positioning, and utilise it to support ERCP-guided procedure planning by selecting the optimal patient position and X-ray C-arm orientation that minimises the expected TRE. Simulated and derived planning uncertainties agreed to within 17.9 ± 9.7% when the root-mean-square error was less than 50°. We demonstrate the feasibility of this approach on clinical data from two patients.
- Published
- 2016
- Full Text
- View/download PDF
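The Monte Carlo half of the comparison can be reproduced in miniature: perturb the landmark picks with Gaussian noise, refit a rigid (Kabsch) registration each time, and read off the spread of a target point. The analytical TRE formula from the paper is not reproduced; the landmark layout and 1 mm picking noise are assumptions.

```python
# Hedged sketch: Monte Carlo estimate of landmark-based target registration error.
import numpy as np

rng = np.random.default_rng(0)
landmarks = rng.uniform(-40, 40, size=(6, 3))       # mm, fiducial-like points
target = np.array([10.0, -5.0, 20.0])

def rigid_fit(A, B):
    # Least-squares rotation + translation mapping A -> B (Kabsch algorithm).
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    if np.linalg.det((U @ Vt).T) < 0:                # keep a proper rotation
        Vt[-1] *= -1
    R = (U @ Vt).T
    return R, cb - R @ ca

tres = []
for _ in range(5000):
    noisy = landmarks + rng.normal(0, 1.0, landmarks.shape)   # 1 mm picking error
    R, t = rigid_fit(landmarks, noisy)
    tres.append(np.linalg.norm(R @ target + t - target))
print(f"Monte Carlo TRE: {np.mean(tres):.2f} +/- {np.std(tres):.2f} mm")
```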
44. Toward Prostate Cancer Contouring Guidelines on Magnetic Resonance Imaging: Dominant Lesion Gross and Clinical Target Volume Coverage Via Accurate Histology Fusion
- Author
-
Mena Gaed, Matthew Bastian-Jordan, Masoom A. Haider, Zahra Kassam, Stephen E. Pautler, Eli Gibson, Madeleine Moussa, Aaron Fenster, Jose A. Gomez, Glenn Bauman, Aaron D. Ward, Cesare Romagnoli, Derek W. Cool, Joseph L. Chin, and Cathie Crukley
- Subjects
Male ,Cancer Research ,medicine.medical_treatment ,Multimodal Imaging ,Sensitivity and Specificity ,030218 nuclear medicine & medical imaging ,Lesion ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Prostate ,Image Interpretation, Computer-Assisted ,medicine ,Effective diffusion coefficient ,Humans ,Radiology, Nuclear Medicine and imaging ,Multiparametric Magnetic Resonance Imaging ,Aged ,Prostatectomy ,Contouring ,Radiation ,medicine.diagnostic_test ,business.industry ,Margins of Excision ,Prostatic Neoplasms ,Reproducibility of Results ,Magnetic resonance imaging ,Middle Aged ,medicine.disease ,Magnetic Resonance Imaging ,Tumor Burden ,medicine.anatomical_structure ,Treatment Outcome ,Oncology ,Surgery, Computer-Assisted ,030220 oncology & carcinogenesis ,Practice Guidelines as Topic ,medicine.symptom ,business ,Nuclear medicine - Abstract
Purpose: Defining prostate cancer (PCa) lesion clinical target volumes (CTVs) for multiparametric magnetic resonance imaging (mpMRI) could support focal boosting or treatment to improve outcomes or lower morbidity, necessitating appropriate CTV margins for mpMRI-defined gross tumor volumes (GTVs). This study aimed to identify CTV margins yielding 95% coverage of PCa tumors for prospective cases with high likelihood. Methods and Materials: Twenty-five men with biopsy-confirmed clinical stage T1 or T2 PCa underwent pre-prostatectomy mpMRI, yielding T2-weighted, dynamic contrast-enhanced, and apparent diffusion coefficient images. Digitized whole-mount histology was contoured and registered to mpMRI scans (error ≤2 mm). Four observers contoured lesion GTVs on each mpMRI scan. CTVs were defined by isotropic and anisotropic expansion from these GTVs and from multiparametric (unioned) GTVs from 2 to 3 scans. Histologic coverage (proportions of tumor area on co-registered histology inside the CTV, measured for Gleason scores [GSs] ≥6 and ≥7) and prostate sparing (proportions of prostate volume outside the CTV) were measured. Nonparametric histologic-coverage prediction intervals defined minimal margins yielding 95% coverage for prospective cases with 78% to 92% likelihood. Results: On analysis of 72 true-positive tumor detections, 95% coverage margins were 9 to 11 mm (GS ≥6) and 8 to 10 mm (GS ≥7) for single-sequence GTVs and were 8 mm (GS ≥6) and 6 mm (GS ≥7) for 3-sequence GTVs, yielding CTVs that spared 47% to 81% of prostate tissue for the majority of tumors. Inclusion of T2-weighted contours increased sparing for multiparametric CTVs with 95% coverage margins for GS ≥6, and inclusion of dynamic contrast-enhanced contours increased sparing for GS ≥7. Anisotropic 95% coverage margins increased the sparing proportions to 71% to 86%. Conclusions: Multiparametric magnetic resonance imaging-defined GTVs expanded by appropriate margins may support focal boosting or treatment of PCa; however, these margins, accounting for interobserver and intertumoral variability, may preclude highly conformal CTVs. Multiparametric GTVs and anisotropic margins may reduce the required margins and improve prostate sparing.
- Published
- 2015
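The geometric core of the margin analysis can be sketched directly: expand a GTV by an isotropic margin (here via a Euclidean distance transform) and measure histologic coverage and prostate sparing. Toy circular masks stand in for the registered whole-mount histology and observer contours.

```python
# Hedged sketch: CTV margin expansion, coverage, and sparing on toy masks.
import numpy as np
from scipy import ndimage

voxel_mm = 1.0
yy, xx = np.mgrid[:100, :100]
prostate = (xx - 50) ** 2 + (yy - 50) ** 2 < 40 ** 2
tumor    = (xx - 60) ** 2 + (yy - 45) ** 2 < 12 ** 2    # "histology" truth
gtv      = (xx - 57) ** 2 + (yy - 47) ** 2 < 9 ** 2     # observer's MRI contour

def ctv_stats(margin_mm):
    # Isotropic expansion: every voxel within margin_mm of the GTV joins the CTV.
    ctv = ndimage.distance_transform_edt(~gtv) * voxel_mm <= margin_mm
    coverage = (ctv & tumor).sum() / tumor.sum()         # tumour area inside CTV
    sparing = 1 - (ctv & prostate).sum() / prostate.sum()
    return coverage, sparing

for m in (0, 4, 8, 10):
    cov, spare = ctv_stats(m)
    print(f"margin {m:2d} mm: coverage {cov:.2f}, prostate sparing {spare:.2f}")
```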
45. Using Hidden Markov Models to capture temporal aspects of ultrasound data in prostate cancer
- Author
-
Aaron Fenster, Parvin Mousavi, Farhad Imani, Mena Gaed, Purang Abolmaesumi, Hagit Shatkay, Eli Gibson, Madeleine Moussa, Aaron D. Ward, Layan Nahlawi, and Jose A. Gomez
- Subjects
Future studies ,business.industry ,Computer science ,Speech recognition ,Ultrasound ,Pattern recognition ,medicine.disease ,Prostate cancer ,medicine.anatomical_structure ,Prostate ,medicine ,Artificial intelligence ,business ,Hidden Markov model - Abstract
Recent studies highlight temporal ultrasound data as highly promising for differentiating between malignant and benign tissues in prostate cancer patients. Since hidden Markov models can be used to capture order and patterns in time-varying signals, we employ them to model temporal aspects of ultrasound data that are typically not incorporated in existing models. By comparing order-preserving and order-altering models, we demonstrate that the order encoded in the series is necessary to model the variability in ultrasound data of prostate tissues. In future studies, we will investigate the influence of order on the differentiation between malignant and benign tissues.
- Published
- 2015
- Full Text
- View/download PDF
46. Computer-Aided Prostate Cancer Detection Using Ultrasound RF Time Series: In Vivo Feasibility Study
- Author
-
Mena Gaed, Silvia D. Chang, Eli Gibson, Jose A. Gomez, Aaron Fenster, Parvin Mousavi, Cesare Romagnoli, Aaron D. Ward, Madeleine Moussa, Amir Khojaste, D. Robert Siemens, Michael Leveridge, Purang Abolmaesumi, and Farhad Imani
- Subjects
Male ,Pathology ,medicine.medical_specialty ,Feature extraction ,Wavelet ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Electrical and Electronic Engineering ,Time series ,Ultrasonography ,Radiological and Ultrasound Technology ,Receiver operating characteristic ,business.industry ,Ultrasound ,Prostate ,Prostatic Neoplasms ,Reproducibility of Results ,Pattern recognition ,3. Good health ,Computer Science Applications ,Hierarchical clustering ,Area Under Curve ,Computer-aided ,Feasibility Studies ,Artificial intelligence ,Radio frequency ,business ,Software - Abstract
This paper presents the results of a computer-aided intervention solution to demonstrate the application of RF time series for characterization of prostate cancer in vivo. Methods: We pre-process RF time series features extracted from 14 patients using hierarchical clustering to remove possible outliers. Then, we demonstrate that the mean central frequency and wavelet features extracted from a group of patients can be used to build a nonlinear classifier, which can be applied successfully to differentiate between cancerous and normal tissue regions of an unseen patient. Results: In a cross-validation strategy, we show an average area under the receiver operating characteristic curve (AUC) of 0.93 and a classification accuracy of 80%. To validate our results, we present a detailed ultrasound-to-histology registration framework. Conclusion: Ultrasound RF time series results in differentiation of cancerous and normal tissue with high AUC.
- Published
- 2015
47. Population-based prediction of subject-specific prostate deformation for MR-to-ultrasound image registration
- Author
-
Yipeng Hu, Eli Gibson, Hashim Uddin Ahmed, Caroline M. Moore, Mark Emberton, and Dean C. Barratt
- Subjects
Male ,Models, Anatomic ,Patient-Specific Modeling ,Statistical shape modelling ,Prostate ,Reproducibility of Results ,Kernel regression ,Magnetic Resonance Imaging ,Multimodal Imaging ,Sensitivity and Specificity ,Article ,Pattern Recognition, Automated ,Subtraction Technique ,Image Interpretation, Computer-Assisted ,Tissue deformation ,Humans ,Computer Simulation ,Organ motion ,Image registration ,Ultrasonography - Abstract
Highlights: • A novel framework for building population-predicted, subject-specific models of organ motion is presented. • Subject-specific PDFs are modelled without requiring knowledge of motion correspondence between training subjects. • A simple yet generalisable kernel regression scheme is employed. • A rigorous validation is presented using prostate MR-TRUS image registration data acquired on human patients. Statistical shape models of soft-tissue organ motion provide a useful means of imposing physical constraints on the displacements allowed during non-rigid image registration, and can be especially useful when registering sparse and/or noisy image data. In this paper, we describe a method for generating a subject-specific statistical shape model that captures prostate deformation for a new subject, given independent population data on organ shape and deformation obtained from magnetic resonance (MR) images and biomechanical modelling of tissue deformation due to transrectal ultrasound (TRUS) probe pressure. The characteristics of the models generated using this method are compared with corresponding models based on training data generated directly from subject-specific biomechanical simulations using a leave-one-out cross validation. The accuracy of registering MR and TRUS images of the prostate using the new prostate models was then estimated and compared with published results obtained in our earlier research. No statistically significant difference was found between the specificity and generalisation ability of prostate shape models generated using the two approaches. Furthermore, no statistically significant difference was found between the landmark-based target registration errors (TREs) following registration using different models, with a median (95th percentile) TRE of 2.40 (6.19) mm using models generated with the new method versus 2.42 (7.15) mm using a model built directly from patient-specific biomechanical simulation data (N = 800; 8 patient datasets; 100 registrations per patient). We conclude that the proposed method provides a computationally efficient and clinically practical alternative to existing complex methods for modelling and predicting subject-specific prostate deformation, such as biomechanical simulations, for new subjects. The method may also prove useful for generating shape models for other organs, for example, where only limited shape training data from dynamic imaging is available.
- Published
- 2015
48. Sci-Fri AM: MRI and Diagnostic Imaging - 04: How does prostate biopsy guidance error impact pathologic cancer risk assessment?
- Author
-
Joseph L. Chin, Aaron D. Ward, Aaron Fenster, Stephen E. Pautler, Madeleine Moussa, Peter R. Martin, Eli Gibson, Jose A. Gomez, Mena Gaed, and Derek W. Cool
- Subjects
medicine.medical_specialty ,Prostate biopsy ,medicine.diagnostic_test ,business.industry ,Prostatectomy ,medicine.medical_treatment ,Ultrasound ,Image registration ,Magnetic resonance imaging ,General Medicine ,Cancer risk assessment ,Biopsy ,medicine ,Medical imaging ,Radiology ,business - Abstract
Purpose: MRI-targeted, 3D transrectal ultrasound (TRUS)-guided prostate biopsy aims to reduce the 21–47% false negative rate [1] of clinical 2D TRUS-guided sextant biopsy, but still has a substantial false negative rate. This could be improved via biopsy needle target optimization, accounting for uncertainties due to guidance system errors and image registration errors. As an initial step toward this broader goal, we elucidated the impact of biopsy needle delivery error on the probability of obtaining tumour samples and on core involvement. Both are important parameters for patient risk stratification and treatment decisions. Methods: We investigated this for cancer of all grades, and separately for intermediate/high-grade (≥ Gleason 4+3) cancer. We used expert-contoured, gold-standard prostatectomy histology to simulate targeted biopsies using an isotropic Gaussian needle delivery error from 1 to 6 mm, and investigated the amount of cancer obtained in each biopsy core as determined by histology. Results: Needle delivery error resulted in core involvement variability that could influence treatment decisions; the presence or absence of cancer in 1/3 or more of each needle core can be attributed to a needle delivery error of 4 mm (as observed in practice [2]). Conclusions: Repeated biopsies of the same tumour target can yield percent core involvement measures with sufficient variability to influence the decision between active surveillance and treatment. However, this may be mitigated by making more than one biopsy attempt at selected tumour targets.
- Published
- 2016
- Full Text
- View/download PDF
49. Statistical Power in Image Segmentation: Relating Sample Size to Reference Standard Quality
- Author
-
Dean C. Barratt, Henkjan J. Huisman, and Eli Gibson
- Subjects
Data set ,Computer science ,Sample size determination ,Monte Carlo method ,Range (statistics) ,Image segmentation ,Data mining ,Function (mathematics) ,computer.software_genre ,computer ,Statistical power - Abstract
Ideal reference standards for comparing segmentation algorithms balance trade-offs between the data set size, the costs of reference standard creation and the resulting accuracy. As reference standard quality impacts the likelihood of detecting significant improvements (i.e. the statistical power), we derived a sample size formula for segmentation accuracy comparison using an imperfect reference standard. We expressed this formula as a function of algorithm performance and reference standard quality (e.g. measured with a high quality reference standard on pilot data) to reveal the relationship between reference standard quality and statistical power, addressing key study design questions: (1) How many validation images are needed to compare segmentation algorithms? (2) How accurate should the reference standard be? The resulting formula predicted statistical power to within 2% of Monte Carlo simulations across a range of model parameters. A case study, using the PROMISE12 prostate segmentation data set, shows the practical use of the formula.
- Published
- 2015
- Full Text
- View/download PDF
50. Spatially varying accuracy and reproducibility of prostate segmentation in magnetic resonance images using manual and semiautomated methods
- Author
-
Maysam Shahedi, Derek W. Cool, Cesare Romagnoli, Glenn S. Bauman, Matthew Bastian-Jordan, Eli Gibson, George Rodrigues, Belal Ahmad, Michael Lock, Aaron Fenster, and Aaron D. Ward
- Subjects
Male ,Observer Variation ,Imaging, Three-Dimensional ,Image Processing, Computer-Assisted ,Prostate ,Humans ,Prostatic Neoplasms ,Reproducibility of Results ,Magnetic Resonance Imaging ,Algorithms ,Software ,Pattern Recognition, Automated - Abstract
Three-dimensional (3D) prostate image segmentation is useful for cancer diagnosis and therapy guidance, but it is time-consuming to perform manually and involves varying levels of difficulty and interoperator variability within the prostatic base, midgland (MG), and apex. In this study, the authors measured accuracy and interobserver variability in the segmentation of the prostate on T2-weighted endorectal magnetic resonance (MR) imaging within the whole gland (WG), and separately within the apex, midgland, and base regions. The authors collected MR images from 42 prostate cancer patients. Prostate border delineation was performed manually by one observer on all images and by two other observers on a subset of ten images. The authors used complementary boundary-, region-, and volume-based metrics [mean absolute distance (MAD), Dice similarity coefficient (DSC), recall rate, precision rate, and volume difference (ΔV)] to elucidate the different types of segmentation errors observed, and evaluated both expert manual and semiautomatic segmentation approaches. Compared to manual segmentation, the authors' semiautomatic approach reduces the necessary user interaction by requiring only an indication of the anteroposterior orientation of the prostate and the selection of prostate center points on the apex, base, and midgland slices. Based on these inputs, the algorithm identifies candidate prostate boundary points using learned boundary appearance characteristics and performs regularization based on learned prostate shape information. The semiautomated algorithm required an average of 30 s of user interaction time (measured for nine operators) for each 3D prostate segmentation. The authors compared the segmentations from this method to manual segmentations in a single-operator study (mean whole gland MAD = 2.0 mm, DSC = 82%, recall = 77%, precision = 88%, and ΔV = −4.6 cm³) and a multioperator study (mean whole gland MAD = 2.2 mm, DSC = 77%, recall = 72%, precision = 86%, and ΔV = −4.0 cm³). These results compared favorably with observed differences between manual segmentations and a simultaneous truth and performance level estimation reference for this data set (whole gland differences as high as MAD = 3.1 mm, DSC = 78%, recall = 66%, precision = 77%, and ΔV = 15.5 cm³). The authors found that, overall, midgland segmentation was more accurate and repeatable than segmentation of the apex and base, with the base posing the greatest challenge. The main conclusions of this study were that (1) the semiautomated approach reduced interobserver segmentation variability; (2) the segmentation accuracy of the semiautomated approach, as well as the accuracies of recently published methods from other groups, were within the range of observed expert variability in manual prostate segmentation; and (3) further efforts in the development of computer-assisted segmentation would be most productive if focused on improving segmentation accuracy and reducing variability within the prostatic apex and base.
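For readers who want to reproduce metrics of this kind, the following is a minimal sketch (not the authors' implementation) of the five complementary measures the study reports, computed from two binary 3D masks. The metric definitions follow standard usage; the voxel spacing and the symmetrized form of MAD used here are assumptions.

```python
# Boundary-, region-, and volume-based segmentation metrics for two binary
# 3D masks: MAD, DSC, recall, precision, and volume difference (ΔV).
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels of a binary mask: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def evaluate(seg, ref, spacing_mm=(1.0, 1.0, 1.0)):
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.count_nonzero(seg & ref)
    dsc = 2 * tp / (np.count_nonzero(seg) + np.count_nonzero(ref))
    recall = tp / np.count_nonzero(ref)     # fraction of reference covered
    precision = tp / np.count_nonzero(seg)  # fraction of seg that is correct
    voxel_cm3 = np.prod(spacing_mm) / 1000.0
    dv = (np.count_nonzero(seg) - np.count_nonzero(ref)) * voxel_cm3

    # Mean absolute boundary distance, symmetrized over both surfaces:
    # distance maps are zero on each surface and grow away from it.
    d_to_ref = ndimage.distance_transform_edt(~surface(ref), sampling=spacing_mm)
    d_to_seg = ndimage.distance_transform_edt(~surface(seg), sampling=spacing_mm)
    mad = 0.5 * (d_to_ref[surface(seg)].mean() + d_to_seg[surface(ref)].mean())

    return dict(MAD_mm=mad, DSC=dsc, recall=recall,
                precision=precision, dV_cm3=dv)
```

Reporting these together, as the study does, separates boundary errors (MAD), overlap errors (DSC, recall, precision), and systematic over- or undersegmentation (ΔV), which a single metric would conflate.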
- Published
- 2014