157 results on '"Eli, Gibson"'
Search Results
52. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.
- Author
-
Eli Gibson, Aaron Fenster, and Aaron D. Ward
- Published
- 2013
- Full Text
- View/download PDF
53. Optic Nerve Head Registration Via Hemispherical Surface and Volume Registration.
- Author
-
Eli Gibson, Mei Young, Marinko Sarunic, and Mirza Faisal Beg
- Published
- 2010
- Full Text
- View/download PDF
54. Stochastic Sequential Modeling: Toward Improved Prostate Cancer Diagnosis Through Temporal-Ultrasound
- Author
-
Farhad Imani, Mena Gaed, Layan Nahlawi, Hagit Shatkay, Jose A. Gomez, Eli Gibson, Madeleine Moussa, Aaron D. Ward, Purang Abolmaesumi, Aaron Fenster, and Parvin Mousavi
- Subjects
Computer science, Biomedical Engineering, Malignancy, Time-domain analysis, Prostate cancer, Tissue characterization, Image guided diagnosis, Hidden Markov models, Temporal information, TRUS-guided biopsies, Ultrasound, Cancer, Pattern recognition, Sequential modeling, Artificial intelligence - Abstract
Prostate cancer (PCa) is a common, serious form of cancer in men that is still prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic-temporal framework, namely, hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applied to other types of cancer in which temporal-ultrasound can be recorded.
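As an illustration of the approach described in the abstract above (one hidden Markov model per tissue class, with classification by comparing log-likelihoods), here is a minimal Python sketch. It is not the authors' implementation: the synthetic one-dimensional signals, the HMM settings, and the use of the hmmlearn package are all assumptions made for the example.

# Minimal sketch (not the authors' code): classify a time series by comparing
# log-likelihoods under per-class hidden Markov models, as described for TeUS
# tissue characterization. Signal shapes and HMM settings are illustrative.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def synthetic_signals(n_signals, shift, length=100):
    """Generate toy 1-D temporal signals standing in for TeUS time series."""
    return [rng.normal(loc=shift, scale=1.0, size=(length, 1)) for _ in range(n_signals)]

def fit_hmm(signals, n_states=3):
    """Fit one Gaussian HMM to the signals of a single tissue class."""
    X = np.vstack(signals)
    lengths = [len(s) for s in signals]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

# One HMM per class, trained on (toy) signals from that class.
hmm_benign = fit_hmm(synthetic_signals(20, shift=0.0))
hmm_malignant = fit_hmm(synthetic_signals(20, shift=0.8))

def classify(signal):
    """Label a signal by whichever class HMM assigns it the higher log-likelihood."""
    return "malignant" if hmm_malignant.score(signal) > hmm_benign.score(signal) else "benign"

test_signal = synthetic_signals(1, shift=0.8)[0]
print(classify(test_signal))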
- Published
- 2020
- Full Text
- View/download PDF
55. Artificial Intelligence with Statistical Confidence Scores for Detection of Acute or Subacute Hemorrhage on Noncontrast CT Head Scans
- Author
-
Eli Gibson, Bogdan Georgescu, Pascal Ceccaldi, Pierre-Hugo Trigan, Youngjin Yoo, Jyotipriya Das, Thomas J. Re, Vishwanath RS, Abishek Balachandran, Eva Eibenberger, Andrei Chekkoury, Barbara Brehm, Uttam K. Bodanapally, Savvas Nicolaou, Pina C. Sanelli, Thomas J. Schroeppel, Thomas Flohr, Dorin Comaniciu, and Yvonne W. Lui
- Subjects
Radiological and Ultrasound Technology, Artificial Intelligence, Radiology, Nuclear Medicine and imaging, Original Research - Abstract
PURPOSE: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. MATERIALS AND METHODS: This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score–defined subsets using bootstrapping. RESULTS: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers and increasing them from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without confidence measures, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and shortening RTAT by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). CONCLUSION: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2022
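The calibrated-classifier entropy score in the abstract above can be illustrated with a short Python sketch: use the entropy of a calibrated probability as a confidence measure, keep the most confident 80% of cases, and compare Youden's index before and after that selection. This is a toy reconstruction with simulated labels and probabilities, not the paper's code or data.

# Minimal sketch (assumptions throughout): entropy-based confidence scoring and
# Youden's index (sensitivity + specificity - 1) on a confidence-defined subset.
import numpy as np

rng = np.random.default_rng(0)

# Toy calibrated probabilities and ground-truth labels for "ICH present".
labels = rng.integers(0, 2, size=1000)
probs = np.clip(labels * 0.7 + rng.normal(0.15, 0.25, size=1000), 0.01, 0.99)

def youden_index(y_true, y_prob, threshold=0.5):
    pred = y_prob >= threshold
    sens = (pred & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)
    spec = (~pred & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)
    return sens + spec - 1

# Binary-entropy confidence: low entropy means a confident prediction.
entropy = -(probs * np.log(probs) + (1 - probs) * np.log(1 - probs))
confident = entropy <= np.quantile(entropy, 0.80)  # keep the 80% most confident cases

print("Youden, all cases:       ", round(youden_index(labels, probs), 3))
print("Youden, confident subset:", round(youden_index(labels[confident], probs[confident]), 3))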
- Published
- 2022
- Full Text
- View/download PDF
56. Models of temporal enhanced ultrasound data for prostate cancer diagnosis: the impact of time-series order.
- Author
-
Layan Nahlawi, Caroline Goncalves, Farhad Imani, Mena Gaed, Jose A. Gomez, Madeleine Moussa, Eli Gibson, Aaron Fenster, Aaron D. Ward, Purang Abolmaesumi, Parvin Mousavi, and Hagit Shatkay
- Published
- 2017
- Full Text
- View/download PDF
57. Deep residual networks for automatic segmentation of laparoscopic videos of the liver.
- Author
-
Eli Gibson, Maria R. Robu, Stephen A. Thompson, Eddie Edwards, Crispin Schneider, Kurinchi Gurusamy, Brian R. Davidson, David J. Hawkes, Dean C. Barratt, and Matthew J. Clarkson
- Published
- 2017
- Full Text
- View/download PDF
58. Prostate lesion detection and localization based on locality alignment discriminant analysis.
- Author
-
Mingquan Lin, Weifu Chen, Mingbo Zhao, Eli Gibson, Matthew Bastian-Jordan, Derek W. Cool, Zahra Kassam, Tommy W. S. Chow, Aaron D. Ward, and Bernard Chiu
- Published
- 2017
- Full Text
- View/download PDF
59. Automatically detecting anatomy
- Author
-
Florin C. Ghesu, Bogdan Georgescu, Eli Gibson, Sasa Grbic, and Dorin Comaniciu
- Published
- 2022
- Full Text
- View/download PDF
60. Brain midline shift detection and quantification by a cascaded deep network pipeline on non-contrast computed tomography scans
- Author
-
Youngjin Yoo, Pina C. Sanelli, Eli Gibson, Eva Eibenberger, Thomas J. Re, Yvonne W. Lui, Uttam Bodanapally, Jyotipriya Das, Savvas Nicolaou, Dorin Comaniciu, Nguyen P. Nguyen, Abishek Balachandran, Tommi A. White, Filiz Bunyak, Thomas J. Schroeppel, and Andrei Chekkoury
- Subjects
Midline shift, Pipeline (computing), Contrast (vision), Computed tomography, Artificial intelligence, Biomedical engineering - Published
- 2021
- Full Text
- View/download PDF
61. How does prostate biopsy guidance error impact pathologic cancer risk assessment?
- Author
-
Peter R. Martin, Mena Gaed, José A. Gómez, Madeleine Moussa, Eli Gibson, Derek W. Cool, Joseph L. Chin, Stephen E. Pautler, Aaron Fenster, and Aaron D. Ward
- Published
- 2016
- Full Text
- View/download PDF
62. Classification of prostate cancer grade using temporal ultrasound: in vivo feasibility study.
- Author
-
Sahar Ghavidel, Farhad Imani, Siavash Khallaghi, Eli Gibson, Amir Khojaste, Mena Gaed, Madeleine Moussa, Jose A. Gomez, D. Robert Siemens, Michael Leveridge, Silvia D. Chang, Aaron Fenster, Aaron D. Ward, Purang Abolmaesumi, and Parvin Mousavi
- Published
- 2016
- Full Text
- View/download PDF
63. Impact of region contouring variability on image-based focal therapy evaluation.
- Author
-
Eli Gibson, Ian A. Donaldson, Taimur T. Shah, Yipeng Hu, Hashim Uddin Ahmed, and Dean C. Barratt
- Published
- 2016
- Full Text
- View/download PDF
64. Fusion of multi-parametric MRI and temporal ultrasound for characterization of prostate cancer: in vivo feasibility study.
- Author
-
Farhad Imani, Sahar Ghavidel, Purang Abolmaesumi, Siavash Khallaghi, Eli Gibson, Amir Khojaste, Mena Gaed, Madeleine Moussa, Jose A. Gomez, Cesare Romagnoli, Derek W. Cool, Matthew Bastian-Jordan, Zahra Kassam, D. Robert Siemens, Michael Leveridge, Silvia D. Chang, Aaron Fenster, Aaron D. Ward, and Parvin Mousavi
- Published
- 2016
- Full Text
- View/download PDF
65. A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis.
- Author
-
Yiwen Xu, J Geoffrey Pickering, Zengxuan Nong, Eli Gibson, John-Michael Arpino, Hao Yin, and Aaron D Ward
- Subjects
Medicine, Science - Abstract
Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error, to capture propagation of error through the stack of sections). Accumulated error measures were lower (p < 0.01) for the nucleus landmark technique and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic "banana-into-cylinder" effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue reconstructions for analysis of diseased microvasculature.
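The pairwise-versus-accumulated registration error distinction in the abstract above can be made concrete with a small Python sketch that composes noisy 2-D rigid transforms through a stack of sections and tracks a landmark's target registration error. The transforms, noise levels, and stack size are illustrative assumptions, not the study's data.

# Illustrative sketch: small pairwise registration errors propagate through a
# serial-section stack, so the accumulated error grows beyond the pairwise error.
import numpy as np

rng = np.random.default_rng(0)

def rigid_2d(theta, tx, ty):
    """Homogeneous 2-D rigid transform (rotation + translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

n_sections = 50
landmark = np.array([100.0, 50.0, 1.0])  # homogeneous coordinates (arbitrary units)

accumulated = np.eye(3)
pairwise_errors, accumulated_errors = [], []
for _ in range(n_sections):
    # Each pairwise registration leaves a small residual rotation/translation error.
    err = rigid_2d(rng.normal(0, 0.002), rng.normal(0, 0.5), rng.normal(0, 0.5))
    pairwise_errors.append(np.linalg.norm((err @ landmark - landmark)[:2]))
    accumulated = err @ accumulated
    accumulated_errors.append(np.linalg.norm((accumulated @ landmark - landmark)[:2]))

print("mean pairwise TRE:    ", round(float(np.mean(pairwise_errors)), 2))
print("final accumulated TRE:", round(accumulated_errors[-1], 2))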
- Published
- 2015
- Full Text
- View/download PDF
66. Evaluating deep learning methods in detecting and segmenting different sizes of brain metastases on 3D post-contrast T1-weighted images
- Author
-
Eli Gibson, Yue Cao, Siqi Liu, Youngjin Yoo, James M. Balter, Thomas J. Re, and Pascal Ceccaldi
- Subjects
Image fusion, Recall, Deep learning, Magnetic resonance imaging, Image segmentation, Lesion, Radiology, Nuclear Medicine and imaging, Segmentation, Artificial intelligence, Ultrasonic Imaging and Tomography, Brain metastasis - Abstract
Purpose: We investigate the impact of various deep-learning-based methods for detecting and segmenting metastases with different lesion volume sizes on 3D brain MR images. Approach: A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak learner fusion of the prediction features generated by the 2.5D and the 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performances were analyzed based on lesion volume, total metastatic volume per patient, and number of lesions per patient. Results: The performance of detection of the 2.5D and 3D U-Net methods had recall of [Formula: see text] and precision of [Formula: see text] for lesion volume [Formula: see text] but deteriorated as metastasis size decreased below [Formula: see text] to 0.58 to 0.74 in recall and 0.16 to 0.25 in precision. Comparing the two U-Nets for detection capability, high precision was achieved by the 2.5D network, but high recall was achieved by the 3D network for all lesion sizes. The weak learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; particularly, it increased precision to 0.83 for lesion volumes of 0.1 to [Formula: see text] but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either the small or large metastases presumably because of the limited data size. Conclusions: Our study provides the performances of four deep learning methods in relation to lesion size, total metastasis volume, and number of lesions per patient, providing insight into further development of the deep learning networks.
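A brief Python sketch of the kind of size-stratified bookkeeping described in the abstract above: grouping lesions into volume bins and computing detection recall per bin. The lesion volumes and detection outcomes below are simulated assumptions, not the study's results.

# Minimal sketch: per-bin detection recall as a function of lesion volume.
import numpy as np

rng = np.random.default_rng(0)

# Toy per-lesion records: volume (cm^3) and whether the detector found the lesion.
volumes = rng.lognormal(mean=-2.0, sigma=1.0, size=500)
p_detect = np.clip(0.55 + 0.25 * np.log10(volumes), 0.05, 0.98)  # larger lesions are easier to find
detected = rng.random(500) < p_detect

bin_edges = [0.0, 0.1, 0.5, 2.0, np.inf]
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = (volumes >= lo) & (volumes < hi)
    if in_bin.any():
        print(f"volume [{lo}, {hi}) cm^3: n = {in_bin.sum():3d}, recall = {detected[in_bin].mean():.2f}")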
- Published
- 2021
- Full Text
- View/download PDF
67. GAMER MRI: Gated-attention mechanism ranking of multi-contrast MRI in brain pathology
- Author
-
Alessandro Daducci, Youngjin Yoo, Matthias Weigel, Reza Rahmanzadeh, Po-Jui Lu, Zahi A. Fayad, Benjamin L. Odry, Eli Gibson, Riccardo Galbusera, Philippe C. Cattin, Meritxell Bach Cuadra, Ceccaldi Pascal, Pascal Spincemaille, Thanh D. Nguyen, Robin Sandkühler, Jens Kuhle, Francesco La Rosa, Ludwig Kappos, Cristina Granziera, Amish H. Doshi, Yi Wang, and Kambiz Nael
- Subjects
Pathology, Neurodegenerative, Fluid-attenuated inversion recovery, Brain Ischemia, Stroke, Brain, Quantitative susceptibility mapping, Magnetic Resonance Imaging, Detection, Neurology, Biomedical Imaging, F1 score, Multiple Sclerosis, Cognitive Neuroscience, Attention mechanism, Ranking (information retrieval), Deep learning, Quantitative MRI, Relative importance order, Effective diffusion coefficient, Radiology, Nuclear Medicine and imaging, Neurosciences, Brain Disorders, Diffusion Magnetic Resonance Imaging, Neurology (clinical) - Abstract
Highlights • The attention mechanism can rank MR measures by relative importance. • Proposed guideline for use of the attention mechanism with MR measures. • Attention weights and quantitative MR measures can potentially form new patterns., Introduction During the last decade, a multitude of novel quantitative and semiquantitative MRI techniques have provided new information about the pathophysiology of neurological diseases. Yet, selection of the most relevant contrasts for a given pathology remains challenging. In this work, we developed and validated a method, Gated-Attention MEchanism Ranking of multi-contrast MRI in brain pathology (GAMER MRI), to rank the relative importance of MR measures in the classification of well understood ischemic stroke lesions. Subsequently, we applied this method to the classification of multiple sclerosis (MS) lesions, where the relative importance of MR measures is less understood. Methods GAMER MRI was developed based on the gated attention mechanism, which computes attention weights (AWs) as proxies of importance of hidden features in the classification. In the first two experiments, we used Trace-weighted (Trace), apparent diffusion coefficient (ADC), Fluid-Attenuated Inversion Recovery (FLAIR), and T1-weighted (T1w) images acquired in 904 acute/subacute ischemic stroke patients and in 6,230 healthy controls and patients with other brain pathologies to assess if GAMER MRI could produce clinically meaningful importance orders in two different classification scenarios. In the first experiment, GAMER MRI with a pretrained convolutional neural network (CNN) was used in conjunction with Trace, ADC, and FLAIR to distinguish patients with ischemic stroke from those with other pathologies and healthy controls. In the second experiment, GAMER MRI with a patch-based CNN used Trace, ADC and T1w to differentiate acute ischemic stroke lesions from healthy tissue. The last experiment explored the performance of patch-based CNN with GAMER MRI in ranking the importance of quantitative MRI measures to distinguish two groups of lesions with different pathological characteristics and unknown quantitative MR features. Specifically, GAMER MRI was applied to assess the relative importance of the myelin water fraction (MWF), quantitative susceptibility mapping (QSM), T1 relaxometry map (qT1), and neurite density index (NDI) in distinguishing 750 juxtacortical lesions from 242 periventricular lesions in 47 MS patients. Pair-wise permutation t-tests were used to evaluate the differences between the AWs obtained for each quantitative measure. Results In the first experiment, we achieved a mean test AUC of 0.881 and the obtained AWs of FLAIR and the sum of AWs of Trace and ADC were 0.11 and 0.89, respectively, as expected based on previous knowledge. In the second experiment, we achieved a mean test F1 score of 0.895 and a mean AW of Trace = 0.49, of ADC = 0.28, and of T1w = 0.23, thereby confirming the findings of the first experiment. In the third experiment, MS lesion classification achieved test balanced accuracy = 0.777, sensitivity = 0.739, and specificity = 0.814. The mean AWs of T1map, MWF, NDI, and QSM were 0.29, 0.26, 0.24, and 0.22 (p
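The gated attention mechanism named in the abstract above can be sketched in a few lines of PyTorch: each MR contrast contributes one feature vector, a gated attention module assigns it a softmax-normalized weight, and the mean weights serve as a relative-importance ranking. This is a generic reimplementation of the gated-attention idea under assumed feature dimensions, not the GAMER MRI code.

# Minimal sketch: gated attention over per-contrast feature vectors.
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    def __init__(self, feat_dim=64, attn_dim=32):
        super().__init__()
        self.V = nn.Linear(feat_dim, attn_dim)   # "content" branch
        self.U = nn.Linear(feat_dim, attn_dim)   # gating branch
        self.w = nn.Linear(attn_dim, 1)

    def forward(self, features):
        # features: (batch, n_contrasts, feat_dim), one vector per MR contrast
        gated = torch.tanh(self.V(features)) * torch.sigmoid(self.U(features))
        weights = torch.softmax(self.w(gated).squeeze(-1), dim=1)   # (batch, n_contrasts)
        pooled = (weights.unsqueeze(-1) * features).sum(dim=1)      # attention-weighted pooling
        return pooled, weights

# Toy use: 4 contrasts (e.g. qT1, MWF, NDI, QSM), 64-dim features per contrast.
feats = torch.randn(8, 4, 64)
pooled, attn = GatedAttention()(feats)
print(attn.mean(dim=0))  # mean attention weight per contrast, read as an importance ranking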
- Published
- 2021
68. The LUX-ZEPLIN (LZ) radioactivity and cleanliness control programs
- Author
-
Mikkel B. Johnson, B. Landerud, A. Biekert, B. N. Edwards, K. Hanzel, T. E. Tope, D. Curran, C. Chiller, J. Palmer, R. Leonard, P. Sutcliffe, E. D. Fraser, R. Bunker, J. So, A. A. Chiller, M. R. While, A. Dobi, D. Hamilton, M. G. D. van der Grinten, W. J. Wisniewski, J. Li, A. Fan, J.S. Saba, C. Lynch, Henrique Araujo, S. Luitz, J. Nesbit, M. Horn, C. D. Kocher, Catarina Silva, F. G. O’Neill, J. C. Davis, J. J. Silk, M. C. Carmona-Benitez, Simon Fayer, M. Pangilinan, K. O’Sullivan, D. Lucero, Q. Xiao, D. Hemer, B. Boxer, J. M. Lyle, C. Chan, C. E. Tull, J. Genovesi, A. Vaitkus, M. Arthurs, V. B. Francis, S. Kravitz, X. Liu, H. J. Birch, R. Linehan, S. Walcott, C. H. Faham, T. J. Anderson, A. B.M.R. Sazzad, D. White, A. Kamaha, M. S. Witherell, R. Studley, K. Sundarnath, R. Liu, H. Oh, L. Korley, H. Flaecher, R. Conley, K. Kamdin, P. Beltrame, S. Stephenson, C. Pereira, C. R. Hall, R. Cabrita, B. Holbrook, B. G. Lenardo, P. Majewski, T.M. Stiegler, I. B. Peterson, A. Manalaysay, A. Monte, C. Ghag, H. Kraus, C. Loniewski, J. Makkinje, X. Xiang, Robert A. Taylor, M. N. Irving, S. Uvarov, Michael Schubnell, J. Heise, R. Coughlen, A. Lambert, F. Froborg, L. Oxborough, D.C. Malling, S. Greenwood, J. Yin, S. J. Haselschwardt, H. J. Krebs, W. H. Lippincott, K. J. Palladino, R. E. Smith, V. A. Kudryavtsev, I. Olcina, C. M. Ignarra, A. Harrison, A. J. Bailey, Minfang Yeh, D. Bauer, W. Skulski, J. Keefner, O. Hitchcock, Ben Carlson, E. Leason, Benjamin Krikler, A. Cottle, E. Mizrachi, Michele Cascella, M. Khaleeq, M. Solmaz, T. J. Whitis, J. J. Wang, N. Angelides, S. Gokhale, K. Skarpaas, Daniel McKinsey, S. Dardin, S. Kyre, D. Santone, P. R. Scovell, T. Vietanen, S. Powell, Y. Wang, David Leonard, E. Morrison, N. Swanson, M. Sarychev, M. A. Olevitch, E. K. Pease, M. Elnimr, P. Brás, N.J. Gantos, R. G. Jacobsen, J. Migneault, Yeongduk Kim, W. Turner, S. D. Worm, Seth Hillbrand, T. Fruth, G. Gregerson, Wenzhao Wei, V. Kasey, L. Kreczko, J. R. Watson, A. Bhatti, D. Naim, Ethan Bernard, B. J. Mount, V. N. Solovov, C. Nedlik, K. Wilson, Elena Korolkova, G. R. C. Rischbieter, P. Ford, A. Stevens, D. J. Taylor, H. N. Nelson, F. Neves, S. Aviles, W. T. Emmet, K. Stifter, B. Birrittella, J. T. White, S. J. Patton, D. Molash, M. Severson, T. A. Shutt, A. Richards, D Kodroff, J. Lin, Kareem Kazkaz, T. P. Biesiadzinski, David Colling, J. Liao, J. Mock, J. A. Morad, E. Holtom, J. E. Y. Dobson, Bjoern Penning, C. E. Dahl, A. Dushkin, A. Konovalov, D. J. Markley, G. W. Shutt, N. Parveen, M. G. D. Gilchriese, Yanwen Liu, C. Carels, Martin Breidenbach, Kathrin C. Walker, V.V. Sosnovtsev, A. Naylor, K. T. Lesko, N. A. Larsen, C. Lee, A. Pagac, J. J. Cherwinka, N. Decheine, J. Bang, J. A. Nikoleyczik, Patrick Bauer, J.P. Rodrigues, S. Branson, T. J. R. Davison, B. Lopez Paredes, D. Pagenkopf, J. S. Campbell, M. Tan, K. C. Oliver-Mallory, M.J. Barry, J. Belle, D. Yu. Akimov, M. Timalsina, S. Shaw, Alexander Bolozdynya, W. Ji, Sridhara Dasu, D. Q. Huang, J. Edwards, F. L. H. Wolfs, K. E. Boast, J. Busenitz, Ren-Jie Wang, S. Fiorucci, N. Stern, C. Rhyne, V. Bugaev, A. Laundrie, G. Rutherford, G. Pereira, E. H. Miller, W. W. Craddock, S. Alsum, J.P. da Cunha, Richard J. Smith, A. Cole, W. Wang, Julie Harrison, I. Khurana, M. Utes, R. J. Gaitskell, J. Kras, D. Khaitan, R. L. Mannino, J. D. Wolfs, H. Auyeung, L. de Viveiros, E. Voirin, E. M. Boulton, N. I. Chott, I. Stancu, L. Tvrznikova, Richard Rosero, P. MarrLaundrie, D. R. Tronstad, T. Benson, Dongming Mei, T. J. Sumner, O. Jahangir, J. Va’vra, Ross G. White, L. 
Sabarots, A. Currie, A. R. Smith, W. L. Waldron, J. P. Coleman, E. Lopez-Asamar, Wolfgang Lorenzon, A. Piepke, Carl Gwilliam, S. Hans, T. Harrington, Laura Manenti, A. Greenall, F.-T. Liao, G. Cox, J. R. Bensinger, V. M. Gehman, H. J. Rose, Christopher Brew, X. Bai, P. Sorensen, A. Arbuckle, Y. Qie, R. C. Webb, R.M. Gerhard, T.W. Hurteau, K.J. Thomas, P. Rossiter, C. Hasselkus, W. G. Jones, J. Johnson, R. Gelfand, T. G. Gonda, C. O. Vuosalo, A. St. J. Murphy, Adam Bernstein, Chao Zhang, A. Nilima, R. M. Preece, T. K. Edberg, Q. Riffard, B. P. Tennyson, Yue Meng, C. Maupin, J. E. Cutter, J. Reichenbacher, J.Y-K. Hor, N. Marangou, D. Temples, Eli Gibson, M. Hoff, H. S. Lee, J. H. Buckley, Z. J. Minaker, M.I. Lopes, M. Koyuncu, P. A. Terman, J.R. Verbus, Bhawna Gomber, J. A. Nikkel, A. Alquahtani, I. M. Fogarty Florang, D. Seymour, A. V. Kumpan, Antonin Vacheret, C. Hjemfelt, M.R. Stark, S. Pierson, M. Racine, D. R. Tiedt, D. S. Akerib, A. Khazov, W. C. Taylor, J. Balajthy, A.V. Khromov, A. C. Kaboth, V. M. Palmaccio, Duncan Carlsmith, K. Pushkin, S. A. Hertel, S. N. Jeffery, E. Druszkiewicz, R. W. Schnee, S. Pal, R. Bramante, B. N. Ratcliff, M. E. Monzani, J. O'Dell, P. Zarzhitsky, L. Wang, P. Johnson, Matthew Szydagis, W. H. To, J. E. Armstrong, U. Utku, Mani Tripathi, D. Woodward, D. Garcia, W. R. Edwards, Carl W. Akerlof, Jilei Xu, C. Nehrkorn, Ian S. Young, J. McLaughlin, J. Thomson, S. R. Eriksen, R. Rucinski, T.J. Martin, C. Levy, Sergey Burdin, A. Baxter, A. Lindote, L. Reichhart, Juhyeong Lee, S. Balashov, C. T. McConnell, M. F. Marzioni, A. Tomás, W. T. Kim, S. Weatherly, and Science and Technology Facilities Council (STFC)
- Subjects
Particle physics, Photomultiplier, Physics - Instrumentation and Detectors, Physics and Astronomy (miscellaneous), Scintillator, High Energy Physics - Experiment, WIMP, Gamma spectroscopy, Sensitivity (control systems), Detector, Scattering, Impurities, Nuclear & Particles Physics, Construction materials - Abstract
LUX-ZEPLIN (LZ) is a second-generation direct dark matter experiment with spin-independent WIMP-nucleon scattering sensitivity above $1.4 \times 10^{-48}$ cm$^{2}$ for a WIMP mass of 40 GeV/c$^{2}$ and a 1000 d exposure. LZ achieves this sensitivity through a combination of a large 5.6 t fiducial volume, active inner and outer veto systems, and radio-pure construction using materials with inherently low radioactivity content. The LZ collaboration performed an extensive radioassay campaign over a period of six years to inform material selection for construction and provide an input to the experimental background model against which any possible signal excess may be evaluated. The campaign and its results are described in this paper. We present assays of dust and radon daughters depositing on the surface of components as well as cleanliness controls necessary to maintain background expectations through detector construction and assembly. Finally, examples from the campaign to highlight fixed contaminant radioassays for the LZ photomultiplier tubes, quality control and quality assurance procedures through fabrication, radon emanation measurements of major sub-systems, and bespoke detector systems to assay scintillator are presented.
- Published
- 2020
- Full Text
- View/download PDF
69. Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment
- Author
-
Youngjin Yoo, R.S. Vishwanath, Eli Gibson, Subba R. Digumarthy, James M. Balter, Sasa Grbic, Dorin Comaniciu, Mannudeep K. Kalra, Yue Cao, Abishek Balachandran, Bogdan Georgescu, Awais Mansoor, Ramandeep Singh, and Florin C. Ghesu
- Subjects
Computer science, Radiography, Computer Vision and Pattern Recognition, Health Informatics, 2D ultrasound, Machine Learning, Robustness (computer science), Humans, Radiology, Nuclear Medicine and imaging, Computed radiography, Training set, Radiological and Ultrasound Technology, Image and Video Processing, Uncertainty, Probabilistic logic, Pattern recognition, Ambiguity, Rejection rate, Magnetic Resonance Imaging, Computer Graphics and Computer-Aided Design, Artificial intelligence, Artifacts - Abstract
The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy.
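The sample-rejection experiment in the abstract above can be illustrated with a small Python sketch: compute a predictive-entropy uncertainty for each case, discard the most uncertain fraction, and measure ROC-AUC on the retained cases. The simulated labels, probabilities, and rejection rate are assumptions for illustration only.

# Minimal sketch: uncertainty-based sample rejection and AUC on the retained cases.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

y_true = rng.integers(0, 2, size=2000)
# Toy predicted probabilities: noisier (more ambiguous) for a subset of cases.
noise = np.where(rng.random(2000) < 0.3, 0.45, 0.15)
y_prob = np.clip(y_true * 0.7 + rng.normal(0.15, noise), 0.001, 0.999)

# Binary predictive entropy as the uncertainty measure.
uncertainty = -(y_prob * np.log(y_prob) + (1 - y_prob) * np.log(1 - y_prob))

for reject_rate in (0.0, 0.25):
    keep = uncertainty <= np.quantile(uncertainty, 1.0 - reject_rate)
    auc = roc_auc_score(y_true[keep], y_prob[keep])
    print(f"rejection {reject_rate:.0%}: AUC on retained cases = {auc:.3f}")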
- Published
- 2020
70. Projected sensitivity of the LUX-ZEPLIN experiment to the 0νββ decay of 136Xe
- Author
-
Richard Rosero, D. R. Tronstad, David Leonard, X. Bai, J. Johnson, X. Liu, J. Busenitz, P. Brás, C. Rhyne, A. Cole, R. J. Gaitskell, R. Linehan, Eli Gibson, C. Levy, Sergey Burdin, R. C. Webb, A. Baxter, L. de Viveiros, A. Alqahtani, A. Monte, C. Ghag, N. Angelides, P. Sorensen, S. Gokhale, S. Shaw, Catarina Silva, J. Palmer, B. Lopez Paredes, T. J. Sumner, N. Marangou, A. Manalaysay, K. E. Boast, O. Jahangir, R. W. Schnee, S. Pal, M. E. Monzani, Yue Meng, D. Naim, I. Stancu, J. Liao, J. J. Wang, K. O’Sullivan, V. Bugaev, N. I. Chott, A. Khazov, P. Zarzhitsky, B. J. Mount, T. A. Shutt, Matthew Szydagis, J. Lin, H. N. Nelson, K. C. Oliver-Mallory, Wolfgang Lorenzon, Ross G. White, J.J. Silk, J. R. Bensinger, E. Leason, Benjamin Krikler, M. G. D. Gilchriese, E. Druszkiewicz, Laura Manenti, J. E. Y. Dobson, C. Carels, E. Mizrachi, C. Chan, Henrique Araujo, J. A. Morad, L. Kreczko, J. R. Watson, F.-T. Liao, A. Vaitkus, S. Kravitz, W. C. Taylor, P. Rossiter, H. J. Birch, D. Khaitan, K. Stifter, A. Kamaha, J. Bang, J. A. Nikoleyczik, D. Seymour, W. Turner, U. Utku, E. D. Fraser, Mani Tripathi, K. T. Lesko, C. Nedlik, A. Biekert, J. Balajthy, V. A. Kudryavtsev, S. Fiorucci, A. C. Kaboth, J. McLaughlin, C. M. Ignarra, C. R. Hall, F. L. H. Wolfs, R. Cabrita, Kareem Kazkaz, G. Rutherford, D. R. Tiedt, A. Harrison, M. Horn, J. Li, T. P. Biesiadzinski, C. D. Kocher, M. G. D. van der Grinten, K. J. Palladino, M. F. Marzioni, M. C. Carmona-Benitez, J. Kras, Michele Cascella, Carl W. Akerlof, H. Kraus, C. E. Dahl, T. Fruth, T. J. Anderson, A. Lindote, J. M. Lyle, Jilei Xu, B. Boxer, C. Nehrkorn, A. B.M.R. Sazzad, A. Tomás, E. K. Pease, Juhyeong Lee, Michael Schubnell, D. S. Akerib, L. Korley, K. Kamdin, S. Balashov, Ethan Bernard, P. Majewski, W. H. Lippincott, V. N. Solovov, X. Xiang, Daniel McKinsey, N. Parveen, Minfang Yeh, A. St. J. Murphy, M. Solmaz, Adam Bernstein, Robert A. Taylor, E. Morrison, D. Woodward, A. Naylor, T. J. Whitis, S. J. Haselschwardt, D. Q. Huang, M.I. Lopes, N. Swanson, W. Wang, P. A. Terman, S. R. Eriksen, R. L. Mannino, Antonin Vacheret, M. Arthurs, S. Luitz, G. Pereira, Richard J. Smith, T. K. Edberg, Q. Riffard, D. Temples, K. Pushkin, J. E. Armstrong, J. H. Buckley, J. Genovesi, M. Tan, E. H. Miller, A. Cottle, I. Olcina, A. Bhatti, S. A. Hertel, C. Loniewski, Elena Korolkova, A. Stevens, G. R. C. Rischbieter, F. Neves, Bjoern Penning, M. Timalsina, L. Tvrznikova, Henning Flaecher, S. Alsum, I. Khurana, J. Y.K. Hor, J. E. Cutter, J. Reichenbacher, W. Ji, A. Fan, Duncan Carlsmith, D. Santone, A. Nilima, and R. Liu
- Subjects
Physics, Nuclear & particle physics, Active volume, Analytical chemistry, Sensitivity (electronics) - Abstract
The LUX-ZEPLIN (LZ) experiment will enable a neutrinoless double β decay search in parallel to the main science goal of discovering dark matter particle interactions. We report the expected LZ sensitivity to 136Xe neutrinoless double β decay, taking advantage of the significant (>600 kg) 136Xe mass contained within the active volume of LZ without isotopic enrichment. After 1000 live-days, the median exclusion sensitivity to the half-life of 136Xe is projected to be 1.06 × 10^26 years (90% confidence level), similar to existing constraints. We also report the expected sensitivity of a possible subsequent dedicated exposure using 90% enrichment with 136Xe at 1.06 × 10^27 years.
- Published
- 2020
- Full Text
- View/download PDF
71. Prostate lesion delineation from multiparametric magnetic resonance imaging based on locality alignment discriminant analysis
- Author
-
Bernard Chiu, Aaron D. Ward, Tommy W. S. Chow, Derek W. Cool, Zahra Kassam, Huageng Liang, Mingquan Lin, Weifu Chen, Mingbo Zhao, Eli Gibson, and Matthew Bastian-Jordan
- Subjects
Male, Computer science, Feature vector, Prostate cancer, Prostate, Image Processing, Computer-Assisted, Humans, Effective diffusion coefficient, Segmentation, Radiation treatment planning, Multiparametric Magnetic Resonance Imaging, Pixel, Discriminant Analysis, Prostatic Neoplasms, Multiparametric MRI, Pattern recognition, General Medicine, Linear discriminant analysis, Magnetic Resonance Imaging, Linear Models, Artificial intelligence, Algorithms - Abstract
PURPOSE Multiparametric MRI (mpMRI) has shown promise in the detection and localization of prostate cancer foci. Although techniques have been previously introduced to delineate lesions from mpMRI, these techniques were evaluated in datasets with T2 maps available. The generation of T2 map is not included in the clinical prostate mpMRI consensus guidelines; the acquisition of which requires repeated T2-weighted (T2W) scans and would significantly lengthen the scan time currently required for the clinically recommended acquisition protocol, which includes T2W, diffusion-weighted (DW), and dynamic contrast-enhanced (DCE) imaging. The goal of this study is to develop and evaluate an algorithm that provides pixel-accurate lesion delineation from images acquired based on the clinical protocol. METHODS Twenty-five pixel-based features were extracted from the T2-weighted (T2W), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) images. The pixel-wise classification was performed on the reduced space generated by locality alignment discriminant analysis (LADA), a version of linear discriminant analysis (LDA) localized to patches in the feature space. Postprocessing procedures, including the removal of isolated points identified and filling of holes inside detected regions, were performed to improve delineation accuracy. The segmentation result was evaluated against the lesions manually delineated by four expert observers according to the Prostate Imaging-Reporting and Data System (PI-RADS) detection guideline. RESULTS The LADA-based classifier (60 ± 11%) achieved a higher sensitivity than the LDA-based classifier (51 ± 10%), thereby demonstrating, for the first time, that higher classification performance was attained on the reduced space generated by LADA than by LDA. Further sensitivity improvement (75 ± 14%) was obtained after postprocessing, approaching the sensitivities attained by previous mpMRI lesion delineation studies in which nonclinical T2 maps were available. CONCLUSION The proposed algorithm delineated lesions accurately and efficiently from images acquired following the clinical protocol. The development of this framework may potentially accelerate the clinical uses of mpMRI in prostate cancer diagnosis and treatment planning.
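The post-processing step mentioned in the abstract above (removing isolated detections and filling holes inside detected regions) is the kind of operation sketched below with SciPy's morphology tools; the toy mask, the minimum-size threshold, and other details are assumptions, not the paper's parameters.

# Minimal sketch: remove small isolated detections, then fill holes in what remains.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Toy binary "lesion" mask: one blob with a hole, plus scattered false-positive pixels.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
mask[28:32, 28:32] = False                  # hole inside the lesion
mask |= rng.random((64, 64)) < 0.01         # isolated noise pixels

def postprocess(binary_mask, min_size=20):
    labeled, n = ndimage.label(binary_mask)
    sizes = ndimage.sum(binary_mask, labeled, index=range(1, n + 1))
    keep = np.isin(labeled, [i + 1 for i, s in enumerate(sizes) if s >= min_size])
    return ndimage.binary_fill_holes(keep)

clean = postprocess(mask)
print("pixels before:", int(mask.sum()), "after:", int(clean.sum()))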
- Published
- 2018
- Full Text
- View/download PDF
72. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks
- Author
-
Ester Bonmati, Steve Bandula, Yipeng Hu, Dean C. Barratt, Stephen P. Pereira, Brian R. Davidson, Matthew J. Clarkson, Kurinchi Selvan Gurusamy, Francesco Giganti, and Eli Gibson
- Subjects
Radiography, Abdominal, Kidney, Humans, Segmentation, Electrical and Electronic Engineering, Esophagus, Radiation treatment planning, Radiological and Ultrasound Technology, Gallbladder, Stomach, Image segmentation, Computer Science Applications, Radiographic Image Interpretation, Computer-Assisted, Radiology, Tomography, X-Ray Computed, Pancreas, Digestive System, Algorithms, Spleen, Software - Abstract
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
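The Dice scores reported in the abstract above are computed per organ label; the short Python sketch below shows that calculation on a toy multi-organ label map. The synthetic volumes and labels are assumptions for illustration, not the study's data.

# Minimal sketch: per-organ Dice similarity coefficient on a toy label map.
import numpy as np

def dice(pred, ref, label):
    p, r = (pred == label), (ref == label)
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else np.nan

rng = np.random.default_rng(0)
reference = rng.integers(0, 4, size=(32, 32, 32))        # labels 0..3 (0 = background)
prediction = reference.copy()
flip = rng.random(reference.shape) < 0.05                 # perturb 5% of voxels
prediction[flip] = rng.integers(0, 4, size=flip.sum())

for organ_label in (1, 2, 3):
    print(f"label {organ_label}: Dice = {dice(prediction, reference, organ_label):.3f}")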
- Published
- 2018
- Full Text
- View/download PDF
73. A dimensionless dynamic contrast enhanced MRI parameter for intra-prostatic tumour target volume delineation: initial comparison with histology.
- Author
-
W. Thomas Hrinivich, Eli Gibson, Mena Gaed, Jose A. Gomez, Madeleine Moussa, Charles A. McKenzie, Glenn S. Bauman, Aaron D. Ward, Aaron Fenster, and Eugene Wong
- Published
- 2014
- Full Text
- View/download PDF
74. Multiparametric MR imaging of prostate cancer foci: assessing the detectability and localizability of Gleason 7 peripheral zone cancers based on image contrasts.
- Author
-
Eli Gibson, Mena Gaed, W. Thomas Hrinivich, José A. Gómez, Madeleine Moussa, Cesare Romagnoli, Jonathan Mandel, Matthew Bastian-Jordan, Derek W. Cool, Suha Ghoul, Stephen E. Pautler, Joseph L. Chin, Cathie Crukley, Glenn S. Bauman, Aaron Fenster, and Aaron D. Ward
- Published
- 2014
- Full Text
- View/download PDF
75. Accuracy and variability of tumor burden measurement on multi-parametric MRI.
- Author
-
Mehrnoush Salarian, Eli Gibson, Maysam Shahedi, Mena Gaed, José A. Gómez, Madeleine Moussa, Cesare Romagnoli, Derek W. Cool, Matthew Bastian-Jordan, Joseph L. Chin, Stephen E. Pautler, Glenn S. Bauman, and Aaron D. Ward
- Published
- 2014
- Full Text
- View/download PDF
76. 3D reconstruction of digitized histological sections for vasculature quantification in the mouse hind limb.
- Author
-
Yiwen Xu, J. Geoffrey Pickering, Zengxuan Nong, Eli Gibson, and Aaron D. Ward
- Published
- 2014
- Full Text
- View/download PDF
77. 3D prostate histology reconstruction: an evaluation of image-based and fiducial-based algorithms.
- Author
-
Eli Gibson, Mena Gaed, José A. Gómez, Madeleine Moussa, Cesare Romagnoli, Joseph L. Chin, Cathie Crukley, Glenn S. Bauman, Aaron Fenster, and Aaron D. Ward
- Published
- 2013
- Full Text
- View/download PDF
78. Toward quantitative digital histopathology for prostate cancer: comparison of inter-slide interpolation methods for tumour measurement.
- Author
-
Mehrnoush Salarian, Maysam Shahedi, Eli Gibson, Mena Gaed, José A. Gómez, Madeleine Moussa, Glenn S. Bauman, and Aaron D. Ward
- Published
- 2013
- Full Text
- View/download PDF
79. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks
- Author
-
Arnaud Arindra Adiyoso Setio, Dorin Comaniciu, Siqi Liu, Sasa Grbic, Eli Gibson, Bogdan Georgescu, and Florin C. Ghesu
- Subjects
Nodule detection, Lung Neoplasms, Computer science, Computer Vision and Pattern Recognition, Image processing, Computed tomography, Machine Learning, Medical imaging, Image Processing, Computer-Assisted, Humans, Electrical and Electronic Engineering, Lung cancer, Lung, Early Detection of Cancer, Radiological and Ultrasound Technology, Image and Video Processing, Cancer, Solitary Pulmonary Nodule, Pattern recognition, Computer Science Applications, Radiographic Image Interpretation, Computer-Assisted, Artificial intelligence, Tomography, X-Ray Computed, Software, Lung cancer screening - Abstract
Detecting malignant pulmonary nodules at an early stage can allow medical interventions which may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Though such approaches have been shown to outperform the conventional image processing based methods regarding the detection accuracy, CNNs are also known to be limited to generalize on under-represented samples in the training set and prone to imperceptible noise perturbations. Such limitations can not be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of the lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood that would generate nodules to decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress-tests on the false positive reduction networks by feeding different types of artificially produced patches. We show that the augmented networks are more robust to both under-represented nodules as well as resistant to noise perturbations.
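The projected gradient descent (PGD) attack referenced in the abstract above can be sketched in a few lines of PyTorch: starting from an input patch, repeatedly step along the sign of the loss gradient and project back into a small L-infinity ball. The tiny stand-in detector, patch sizes, and attack parameters are assumptions; this is not the authors' augmentation pipeline.

# Minimal sketch: a PGD perturbation that pushes a toy 3-D patch classifier
# away from the correct label while staying within an eps-bounded neighbourhood.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "nodule detector": a tiny CNN scoring a 3-D patch (1 = nodule present).
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1),
)

def pgd_attack(x, target, steps=10, eps=0.03, alpha=0.01):
    """Search within an L-infinity ball of radius eps for a perturbation that
    increases the loss against the true target."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.binary_cross_entropy_with_logits(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
    return x_adv.detach()

patch = torch.rand(2, 1, 16, 16, 16)   # toy CT patches in [0, 1]
labels = torch.ones(2, 1)              # pretend both contain a nodule
adv_patch = pgd_attack(patch, labels)
print("max perturbation:", (adv_patch - patch).abs().max().item())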
- Published
- 2020
- Full Text
- View/download PDF
80. Simulations of Events for the LUX-ZEPLIN (LZ) Dark Matter Experiment
- Author
-
T. Fruth, D. Seymour, J. Lin, W. Ji, J. McLaughlin, J. Palmer, L. Tvrznikova, M. Tan, P. R. Scovell, J. Genovesi, Ethan Bernard, E. H. Miller, F. L. H. Wolfs, G. Rutherford, V. Bugaev, J. Kras, D. Woodward, B. Lopez Paredes, M. Solmaz, E. Morrison, A. Biekert, I. Stancu, A. Lindote, K. E. Boast, A. Fan, Kareem Kazkaz, Juhyeong Lee, I. Olcina, A. Bhatti, A. Alqahtani, M. G. D. Gilchriese, S. Balashov, D. Santone, C. Carels, A. Piepke, Ross G. White, A. Cottle, M. E. Monzani, F.-T. Liao, T. P. Biesiadzinski, C. Loniewski, J. Y.K. Hor, J. E. Cutter, J. Reichenbacher, M. G. D. van der Grinten, N. I. Chott, P. Zarzhitsky, Matthew Szydagis, C. Levy, Sergey Burdin, K. J. Palladino, P. Rossiter, Elena Korolkova, Laura Manenti, G. R. C. Rischbieter, A. Baxter, D. R. Tiedt, C. E. Dahl, B. Boxer, T. K. Edberg, S. A. Hertel, Q. Riffard, U. Utku, K. C. Oliver-Mallory, R. W. Schnee, S. Pal, G. Pereira, Richard J. Smith, Michele Cascella, David Leonard, E. Druszkiewicz, D. S. Akerib, Mani Tripathi, A. Nilima, Simon Fayer, M. Timalsina, E. K. Pease, D. Temples, K. Pushkin, N. Angelides, P. Majewski, M. C. Carmona-Benitez, T. J. Anderson, N. Marangou, J. Busenitz, W. H. Lippincott, Michael Schubnell, A. B.M.R. Sazzad, C. Rhyne, J. E. Armstrong, V. N. Solovov, A. Manalaysay, A. Khazov, J.J. Silk, A. St. J. Murphy, Adam Bernstein, A. Naylor, D. Naim, S. Shaw, J. Li, S. Luitz, S. Gokhale, A. Cole, R. J. Gaitskell, J. H. Buckley, L. Korley, N. Parveen, K. Kamdin, R. Linehan, Minfang Yeh, W. C. Taylor, D. Q. Huang, B. J. Mount, T. A. Shutt, Catarina Silva, Carl W. Akerlof, Jilei Xu, C. Nehrkorn, H. J. Birch, J. Balajthy, M.I. Lopes, O. Jahangir, A. Monte, Daniel McKinsey, A. C. Kaboth, J. E. Y. Dobson, X. Xiang, A. Richards, C. Ghag, V. A. Kudryavtsev, C. Chan, A. Vaitkus, S. Kravitz, D. Khaitan, J. J. Wang, C. M. Ignarra, P. A. Terman, D. Bauer, M. F. Marzioni, A. Tomás, L. Kreczko, J. R. Watson, A. Harrison, R. L. Mannino, Antonin Vacheret, S. J. Haselschwardt, M. Arthurs, F. Neves, W. Wang, E. D. Fraser, P. Sorensen, E. Leason, T. J. Sumner, P. Brás, Henning Flaecher, Benjamin Krikler, K. T. Lesko, Yue Meng, T. J. Whitis, X. Bai, E. Mizrachi, J. Johnson, S. R. Eriksen, Duncan Carlsmith, Wolfgang Lorenzon, X. Liu, N. Swanson, J. R. Bensinger, Eli Gibson, A. Kamaha, H. N. Nelson, W. Turner, C. Nedlik, Henrique Araujo, C. R. Hall, R. Cabrita, H. Kraus, R. Liu, Richard Rosero, D. R. Tronstad, R. C. Webb, L. de Viveiros, K. Stifter, S. Fiorucci, J. Liao, J. A. Morad, J. Bang, J. A. Nikoleyczik, M. Horn, C. D. Kocher, J. M. Lyle, Robert A. Taylor, S. Alsum, I. Khurana, A. Stevens, and Bjoern Penning
- Subjects
Physics, Particle physics, Physics - Instrumentation and Detectors, Monte Carlo method, Detector, Dark matter, Astronomy and Astrophysics, High Energy Physics - Experiment, WIMP, Sensitivity (control systems), Event (particle physics), Background radiation - Abstract
The LUX-ZEPLIN dark matter search aims to achieve a sensitivity to the WIMP-nucleon spin-independent cross-section down to (1--2)$\times10^{-12}$\,pb at a WIMP mass of 40 GeV/$c^2$. This paper describes the simulations framework that, along with radioactivity measurements, was used to support this projection, and also to provide mock data for validating reconstruction and analysis software. Of particular note are the event generators, which allow us to model the background radiation, and the detector response physics used in the production of raw signals, which can be converted into digitized waveforms similar to data from the operational detector. Inclusion of the detector response allows us to process simulated data using the same analysis routines as developed to process the experimental data.
- Published
- 2020
- Full Text
- View/download PDF
81. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans
- Author
-
T G W Boers, F. van der Heijden, Dean C. Barratt, Henkjan J. Huisman, Eli Gibson, Jasenko Krdzalic, J J Hermans, Yipeng Hu, and Ester Bonmati
- Subjects
Computer science, Interactive 3D, Pancreatic cancer, Computed tomography, U-net, Deep Learning, Imaging, Three-Dimensional, Medical imaging, Humans, Radiology, Nuclear Medicine and imaging, Segmentation, Pancreas, Radiological and Ultrasound Technology, Pattern recognition, Interactive segmentation, Artificial intelligence, Tomography, X-Ray Computed - Abstract
The increasing incidence of pancreatic cancer will make it the second deadliest cancer in 2030. Imaging-based early diagnosis and image-guided treatment are emerging potential solutions. Artificial intelligence (AI) can help provide and improve widespread diagnostic expertise and accurate interventional image interpretation. Accurate segmentation of the pancreas is essential to create annotated data sets to train AI, and for computer-assisted interventional guidance. Automated deep learning segmentation performance in pancreas computed tomography (CT) imaging is low due to poor grey value contrast and complex anatomy. A promising candidate seemed to be a recent interactive deep learning segmentation framework for brain CT that strongly improved initial automated segmentation with minimal user input. This method yielded no satisfactory results for pancreas CT, possibly due to a sub-optimal neural network architecture. We hypothesize that a state-of-the-art U-net neural network architecture is better because it can produce a better initial segmentation and is likely to be extended to work in a similar interactive approach. We implemented the existing interactive method, iFCN, and developed an interactive version of the U-net method, which we call iUnet. The iUnet is fully trained to produce the best possible initial segmentation. In interactive mode it is additionally trained on a partial set of layers on user-generated scribbles. We compare initial segmentation performance of iFCN and iUnet on a dataset of 100 CT scans using Dice similarity coefficient analysis. Secondly, we assessed the performance gain in interactive use with three observers on segmentation quality and time. Average automated baseline performance was 78% (iUnet) versus 72% (FCN). Manual and semi-automatic segmentation performance was 87% in 15 min for manual segmentation and 86% in 8 min for iUnet. We conclude that iUnet provides a better baseline than iFCN and can reach expert manual performance significantly faster than manual segmentation in the case of pancreas CT. Our novel iUnet architecture is modality and organ agnostic and can be a potential novel solution for semi-automatic medical imaging segmentation in general.
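The "partial set of layers" refinement described in the abstract above can be sketched as follows in Python/PyTorch: freeze a pretrained segmentation network except for its last layer and fine-tune it with a loss restricted to the voxels the user scribbled. The toy network, scribbles, and hyperparameters are assumptions, not the iUnet implementation.

# Minimal sketch: interactive refinement by fine-tuning only the last layer
# on sparse user scribbles, with the loss masked to scribbled voxels.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in segmentation network (a real system would use a 3-D U-Net).
net = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 1),
)

# Freeze everything except the last convolution ("partial set of layers").
for p in net.parameters():
    p.requires_grad = False
for p in net[-1].parameters():
    p.requires_grad = True

image = torch.rand(1, 1, 16, 32, 32)
scribbles = torch.zeros(1, 1, 16, 32, 32)     # 1 = foreground scribble label
scribbles[0, 0, 8, 10:20, 10:20] = 1.0
scribble_mask = torch.zeros_like(scribbles)   # voxels the user actually labelled
scribble_mask[0, 0, 8, 5:25, 5:25] = 1.0

opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-3)
for step in range(20):
    opt.zero_grad()
    logits = net(image)
    loss_map = nn.functional.binary_cross_entropy_with_logits(logits, scribbles, reduction="none")
    loss = (loss_map * scribble_mask).sum() / scribble_mask.sum()   # loss only where scribbled
    loss.backward()
    opt.step()
print("final scribble loss:", loss.item())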
- Published
- 2020
82. The LUX-ZEPLIN (LZ) Experiment
- Author
-
P. Sorensen, J. C. Davis, J. J. Silk, M. C. Carmona-Benitez, E. Holtom, T. J. Anderson, A. B.M.R. Sazzad, V.V. Sosnovtsev, J. J. Cherwinka, V. B. Francis, A. Naylor, L. Korley, K. Kamdin, W. Craddock, P. Beltrame, X. Xiang, A. Manalaysay, P. R. Scovell, D. Q. Huang, Ethan Bernard, B. P. Tennyson, Yue Meng, I. B. Peterson, A. Cottle, A. Biekert, A. Kamaha, M. S. Witherell, R. Studley, M. E. Monzani, J. S. Campbell, Alexander Bolozdynya, C. Chiller, M. G. D. van der Grinten, Matthew Szydagis, C. O. Vuosalo, M. Tan, E. H. Miller, V. Bugaev, B. Boxer, R. Coughlen, J. Liao, I. Stancu, L. de Viveiros, P. MarrLaundrie, A. Lambert, L. Kreczko, J. R. Watson, Michael Schubnell, A. Piepke, S. Greenwood, J. P. Coleman, A. Geffre, D. Bauer, G. W. Shutt, W. H. Lippincott, Y. Qie, Dongming Mei, L. Sabarots, S. Hans, Bhawna Gomber, W. Skulski, J. A. Nikkel, S. A. Hertel, A. Arbuckle, F.-T. Liao, S. R. Eriksen, Daniel McKinsey, E. Druszkiewicz, Jilei Xu, P. Sutcliffe, H. J. Rose, C. Nehrkorn, K. T. Lesko, N. A. Larsen, C. Lee, B. Landerud, Martin Breidenbach, D. Molash, Candace Lynch, W. G. Jones, M. A. Olevitch, Ian S. Young, M. Sarychev, B. N. Edwards, A. Pagac, L. Tvrznikova, P. Ford, S. Luitz, T. J. R. Davison, D. Pagenkopf, P. Majewski, G. Gregerson, Chao Zhang, Mary Severson, A. Currie, Ross G. White, Carl Gwilliam, S. J. Patton, A. J. Bailey, David Colling, Minfang Yeh, K. Wilson, P. Rossiter, S. Alsum, G. Pereira, V. M. Gehman, A. Dushkin, I. Khurana, Richard J. Smith, Benjamin Krikler, N. Marangou, John Heise, C. Loniewski, J. Bang, J. A. Nikoleyczik, J. Belle, D. Yu. Akimov, E. Mizrachi, C. Levy, M. R. While, A. Dobi, H. N. Nelson, X. Bai, J. Johnson, J. O'Dell, W. T. Kim, S. Weatherly, L. Wang, P. Johnson, Elena Korolkova, S. Pierson, W. T. Emmet, K. Stifter, B. Birrittella, T. Tope, A. Richards, Yufeng Wang, Ren-Jie Wang, J. Genovesi, C. Maupin, Kathrin C. Walker, G. R. C. Rischbieter, Eli Gibson, Yeongduk Kim, W. Turner, S. D. Worm, E. M. Boulton, G. Rutherford, S. Kyre, J. Mock, Seth Hillbrand, D. White, Q. Xiao, Sridhara Dasu, R. Leonard, J. Y.K. Hor, B. Holbrook, C. Nedlik, S. Fiorucci, J. Keefner, J. Kras, J. Yin, F. Froborg, F. Neves, D. Khaitan, S. J. Haselschwardt, L. Oxborough, D.S. Hamilton, O. Hitchcock, J.P. da Cunha, J.S. Saba, Henrique Araujo, I. Olcina, David Leonard, A. Bhatti, U. Utku, W. J. Wisniewski, S. Powell, M. Pangilinan, F.L.H. Wolfs, J. Barthel, D. Naim, D. Hemer, M. Timalsina, B. J. Mount, T. A. Shutt, S. Shaw, J. A. Morad, N. Swanson, J.P. Rodrigues, J. Nesbit, W.L. Waldron, J. E. Y. Dobson, J. Busenitz, C. R. Hall, N. Stern, C. Rhyne, A. Cole, R. J. Gaitskell, T. K. Edberg, Q. Riffard, O. Jahangir, B. G. Lenardo, D. Temples, C. Chan, C. E. Tull, A. Vaitkus, D. Garcia, S. Kravitz, T. G. Gonda, M. Elnimr, K. Pushkin, M. Horn, Kareem Kazkaz, H. Kraus, J. Li, C. H. Faham, T.M. Stiegler, A. Monte, C. D. Kocher, T. P. Biesiadzinski, C. Ghag, J. E. Armstrong, H. S. Lee, W. R. Edwards, K. J. Palladino, T. Benson, J. H. Buckley, Z. J. Minaker, Carl W. Akerlof, R. Bunker, M. Solmaz, M. Koyuncu, J.R. Verbus, J. M. Lyle, C. E. Dahl, E. Morrison, J. E. Cutter, J. Reichenbacher, H.J. Krebs, J. So, S. Branson, Peter Bauer, H. Auyeung, D. Seymour, Michele Cascella, A. A. Chiller, Mani Tripathi, Richard Rosero, D. R. Tronstad, R. Liu, M. Khaleeq, N. Decheine, M. N. Irving, Yunpeng Liu, S. Uvarov, J. McLaughlin, A. R. Smith, S. Dardin, J. Thomson, K. Sundarnath, Robert A. Taylor, Ben Carlson, T. Harrington, C. T. McConnell, E. K. Pease, A. Greenall, T. J. Whitis, M. F. Marzioni, C. 
Silva, V. Kasey, D. Rynders, R. C. Webb, J. Palmer, P. Brás, V. N. Solovov, T.J. Martin, R. Conley, P. A. Terman, A. Tomás, B. Lopez Paredes, K. E. Boast, J. T. White, D. Markley, A. Konovalov, A. Lindote, A. Alquahtani, L. Reichhart, D.C. Malling, K. C. Oliver-Mallory, E. Leason, Juhyeong Lee, D. Curran, Henning Flaecher, S. Balashov, A.V. Khromov, B. N. Ratcliff, J. Migneault, A. V. Kumpan, Simon Fayer, Antonin Vacheret, C. Hjemfelt, R. Linehan, M. Arthurs, S. Stephenson, V. M. Palmaccio, N. I. Chott, M.R. Stark, Duncan Carlsmith, Wenzhao Wei, M. Utes, Laura Manenti, A. Stevens, D. J. Taylor, R. E. Smith, V. A. Kudryavtsev, S. Walcott, C. M. Ignarra, N.J. Gantos, R. G. Jacobsen, M. Racine, A. Fan, Bjoern Penning, J. Va’vra, A. Harrison, H. Oh, J. Makkinje, W. Ji, R. Rucinski, Sergey Burdin, A. Baxter, D. Santone, E. Voirin, R.M. Gerhard, T.W. Hurteau, K.J. Thomas, X. Liu, R. Gelfand, T. Vietanen, A. Nilima, R. M. Preece, A. Laundrie, Mikkel B. Johnson, K. Hanzel, F. G. O'Neill, A. Khazov, W. C. Taylor, J. Balajthy, M.J. Barry, K. O’Sullivan, D. Lucero, A. C. Kaboth, R. L. Mannino, J. D. Wolfs, J. Lin, C. Pereira, N. Angelides, S. Gokhale, A. St. J. Murphy, Adam Bernstein, M. Hoff, M.I. Lopes, M. G. D. Gilchriese, C. Carels, I. M. Fogarty Florang, J. Edwards, S. N. Jeffery, R. W. Schnee, S. Pal, R. Bramante, T. Fruth, T. J. Sumner, W. H. To, Wolfgang Lorenzon, J. R. Bensinger, J. J. Wang, D. Woodward, C. Hasselkus, K. Skarpaas, D. R. Tiedt, D. S. Akerib, and Science and Technology Facilities Council (STFC)
- Subjects
Nuclear and High Energy Physics ,Technology ,Physics - Instrumentation and Detectors ,0299 Other Physical Sciences ,Dark matter ,FOS: Physical sciences ,chemistry.chemical_element ,Scintillator ,01 natural sciences ,7. Clean energy ,Physics, Particles & Fields ,High Energy Physics - Experiment ,Nuclear physics ,LEAD ,High Energy Physics - Experiment (hep-ex) ,XENON ,Low energy ,Xenon ,WIMP ,0103 physical sciences ,0201 Astronomical and Space Sciences ,010306 general physics ,Nuclear Science & Technology ,Underground ,Instrumentation and Methods for Astrophysics (astro-ph.IM) ,Instrumentation ,Instruments & Instrumentation ,physics.ins-det ,Physics ,Dark matter detector ,Time projection chamber ,Science & Technology ,010308 nuclear & particles physics ,hep-ex ,Detector ,Instrumentation and Detectors (physics.ins-det) ,Nuclear & Particles Physics ,Neutron capture ,Physics, Nuclear ,chemistry ,Physical Sciences ,0202 Atomic, Molecular, Nuclear, Particle and Plasma Physics ,Liquid xenon ,EMISSION ,Astrophysics - Instrumentation and Methods for Astrophysics ,astro-ph.IM - Abstract
We describe the design and assembly of the LUX-ZEPLIN experiment, a direct detection search for cosmic WIMP dark matter particles. The centerpiece of the experiment is a large liquid xenon time projection chamber sensitive to low energy nuclear recoils. Rejection of backgrounds is enhanced by a Xe skin veto detector and by a liquid scintillator Outer Detector loaded with gadolinium for efficient neutron capture and tagging. LZ is located in the Davis Cavern at the 4850' level of the Sanford Underground Research Facility in Lead, South Dakota, USA. We describe the major subsystems of the experiment and its key design features and requirements.
- Published
- 2019
- Full Text
- View/download PDF
83. Validation of a fully automated liver segmentation algorithm using multi-scale deep reinforcement learning and comparison versus manual segmentation
- Author
-
David J. Winkel, Daniel T. Boll, Hanns-Christian Breit, Thomas Weikert, Dorin Comaniciu, Tobias Heye, Eli Gibson, and Guillaume Chabin
- Subjects
Reproducibility ,business.industry ,Liver Diseases ,Univariate ,Contrast (statistics) ,Reproducibility of Results ,General Medicine ,Deep Learning ,Liver ,Robustness (computer science) ,Approximation error ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,Medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Tomography ,business ,Tomography, X-Ray Computed ,Algorithm ,Algorithms ,Volume (compression) ,Retrospective Studies - Abstract
Purpose: To evaluate the performance of an artificial intelligence (AI) based software solution for liver volumetric analyses and to compare the results with manual contour segmentation. Materials and methods: We retrospectively obtained 462 multiphasic CT datasets with six series for each patient: three different contrast phases and two slice thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep reinforcement learning for 3D body marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically and manually derived volume relative to the mean manual volume. The mean processing time per dataset was recorded for each method. Variations of liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically derived results with the ground truth. Results: The mean absolute error of the automatically derived measurement was 44.3 mL (representing 2.37% of the averaged liver volumes). The liver volume depended neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 s with the algorithm, compared with 219.34 s for manual segmentation. We found excellent agreement between both approaches, with an ICC value of 0.996. Conclusion: The results of our study demonstrate that AI-powered, fully automated liver volumetric analyses can be performed with excellent accuracy, reproducibility, robustness, speed and agreement with manual segmentation.
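As a rough illustration of the agreement metrics reported above, the following sketch computes the absolute volume error as a percentage of the mean manual volume and a simple concordance measure on synthetic numbers; the volume values, and the use of Lin's concordance coefficient in place of the study's ICC, are assumptions made only for illustration.

```python
# Hedged sketch: agreement between automatic and manual liver volumes.
# All volume values below are synthetic placeholders, not study data.
import numpy as np

rng = np.random.default_rng(0)
manual = rng.normal(1800.0, 300.0, size=100)        # manual volumes in mL (synthetic)
auto = manual + rng.normal(0.0, 40.0, size=100)     # automatic volumes in mL (synthetic)

# Absolute error of each automatic volume, reported as a percentage of the
# mean manual volume (in the spirit of the abstract's 44.3 mL / 2.37% figure).
abs_err = np.abs(auto - manual)
mean_abs_err = abs_err.mean()
pct_of_mean_manual = 100.0 * mean_abs_err / manual.mean()

# Lin's concordance correlation coefficient as a simple agreement measure
# (the paper reports an ICC; CCC is used here only as an illustrative stand-in).
cov = np.cov(auto, manual, ddof=1)[0, 1]
ccc = 2 * cov / (auto.var(ddof=1) + manual.var(ddof=1) + (auto.mean() - manual.mean()) ** 2)

print(f"mean absolute error: {mean_abs_err:.1f} mL ({pct_of_mean_manual:.2f}% of mean manual volume)")
print(f"concordance correlation: {ccc:.3f}")
```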
- Published
- 2019
84. Determination of the Association Between T2-weighted MRI and Gleason Sub-pattern: A Proof of Principle Study
- Author
-
Theo van der Kwast, Masoom A. Haider, Jenna Sykes, Eli Gibson, Michelle R Downes, and Aaron D. Ward
- Subjects
Adult ,Male ,medicine.medical_specialty ,Pathology ,Biopsy ,medicine.medical_treatment ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Prostate ,medicine ,Carcinoma ,Humans ,Radiology, Nuclear Medicine and imaging ,Sampling (medicine) ,Aged ,Prostatectomy ,medicine.diagnostic_test ,business.industry ,Prostatic Neoplasms ,Magnetic resonance imaging ,Middle Aged ,medicine.disease ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,030220 oncology & carcinogenesis ,Cribriform ,Radiology ,Neoplasm Grading ,business - Abstract
The study aimed to determine the relationship between T2-weighted magnetic resonance imaging (MRI) signal and histologic sub-patterns in prostate cancer areas with different Gleason grades. MR images of prostates (n = 25) were obtained prior to radical prostatectomy. These were processed as whole-mount specimens, and tumors and the peripheral zone were annotated digitally by two pathologists. Gleason grade 3 was the most prevalent grade and was subdivided into packed, intermediate, and sparse based on gland-to-stroma ratio. Large cribriform, intraductal carcinoma, and small cribriform glands (grade 4 group) were separately annotated but grouped together for statistical analysis. The log MRI signal intensity for each contoured region (n = 809) was measured, and pairwise comparisons were performed using the open-source software R version 3.0.1. The packed grade 3 sub-pattern had a significantly lower MRI intensity than the grade 4 group (P < 0.00001). Sparse grade 3 had a significantly higher MRI intensity than the packed grade 3 sub-pattern (P < 0.0001). No significant difference in MRI intensity was observed between the Gleason grade 4 group and the sparse sub-pattern grade 3 group (P = 0.54). In multivariable analysis adjusting for peripheral zone, the P values maintained significance (packed grade 3 group vs grade 4 group, P < 0.001; and sparse grade 3 sub-pattern vs packed grade 3 sub-pattern, P < 0.001). This study demonstrated that T2-weighted MRI signal depends on histologic sub-patterns within Gleason grade 3 and 4 cancers, which may have implications for directed biopsy sampling and patient management.
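As an illustration of the kind of pairwise comparison described above, the sketch below applies Welch's t-test to synthetic log-intensity values for three sub-pattern groups. The group values and the test choice are assumptions; the published analysis was performed in R and included multivariable adjustment for the peripheral zone.

```python
# Hedged sketch: pairwise comparison of log T2-weighted intensity between
# Gleason sub-pattern groups. All values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
log_intensity = {
    "packed_g3": rng.normal(5.00, 0.3, 300),   # hypothetical region-level log intensities
    "sparse_g3": rng.normal(5.40, 0.3, 250),
    "grade4":    rng.normal(5.35, 0.3, 259),
}

pairs = [("packed_g3", "grade4"), ("sparse_g3", "packed_g3"), ("grade4", "sparse_g3")]
for a, b in pairs:
    t, p = stats.ttest_ind(log_intensity[a], log_intensity[b], equal_var=False)  # Welch's t-test
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.2g}")
```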
- Published
- 2016
- Full Text
- View/download PDF
85. 3D prostate histology image reconstruction: Quantifying the impact of tissue deformation and histology section location
- Author
-
Eli Gibson, Mena Gaed, José A Gómez, Madeleine Moussa, Stephen Pautler, Joseph L Chin, Cathie Crukley, Glenn S Bauman, Aaron Fenster, and Aaron D Ward
- Subjects
Correlative histopathology ,image registration ,prostate cancer imaging ,validation ,3D histology reconstruction ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Pathology ,RB1-214 - Abstract
Background: Guidelines for localizing prostate cancer on imaging are ideally informed by registered post-prostatectomy histology. 3D histology reconstruction methods can support this by reintroducing 3D spatial information lost during histology processing. The need to register small, high-grade foci drives a need for high accuracy. Accurate 3D reconstruction method design is impacted by the answers to the following central questions of this work. (1) How does prostate tissue deform during histology processing? (2) What spatial misalignment of the tissue sections is induced by microtome cutting? (3) How does the choice of reconstruction model affect histology reconstruction accuracy? Materials and Methods: Histology, paraffin block face and magnetic resonance images were acquired for 18 whole mid-gland tissue slices from six prostates. 7-15 homologous landmarks were identified on each image. Tissue deformation due to histology processing was characterized using the target registration error (TRE) after landmark-based registration under four deformation models (rigid, similarity, affine and thin-plate-spline [TPS]). The misalignment of histology sections from the front faces of tissue slices was quantified using manually identified landmarks. The impact of reconstruction models on the TRE after landmark-based reconstruction was measured under eight reconstruction models comprising one of four deformation models with and without constraining histology images to the tissue slice front faces. Results: Isotropic scaling improved the mean TRE by 0.8-1.0 mm (all results reported as 95% confidence intervals), while skew or TPS deformation improved the mean TRE by
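The target registration error (TRE) analysis described above can be illustrated with a small sketch: a least-squares similarity transform (rigid plus isotropic scale) is fitted to landmark pairs and the leave-one-out TRE is reported. The landmark coordinates, noise level and the closed-form Umeyama fit are illustrative assumptions, not the study's registration pipeline.

```python
# Hedged sketch: mean leave-one-out TRE after a landmark-based similarity
# registration, in the spirit of the deformation-model comparison above.
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src -> dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(src.shape[1])
    if np.linalg.det(U @ Vt) < 0:
        S[-1, -1] = -1.0
    R = U @ S @ Vt
    var_src = (sc ** 2).sum() / len(src)
    scale = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

def loo_tre(src, dst):
    """Mean leave-one-out target registration error over all landmarks."""
    errs = []
    for i in range(len(src)):
        keep = np.arange(len(src)) != i
        scale, R, t = fit_similarity(src[keep], dst[keep])
        mapped = scale * (R @ src[i]) + t
        errs.append(np.linalg.norm(mapped - dst[i]))
    return float(np.mean(errs))

rng = np.random.default_rng(2)
hist = rng.uniform(0, 40, size=(12, 2))                                  # histology landmarks (mm), synthetic
mri = 0.95 * hist @ np.array([[0.996, -0.087], [0.087, 0.996]]).T + 3.0  # scaled, rotated, shifted copy
mri += rng.normal(0, 0.5, mri.shape)                                     # landmark localization noise
print(f"mean leave-one-out TRE: {loo_tre(hist, mri):.2f} mm")
```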
- Published
- 2013
- Full Text
- View/download PDF
86. Correction to: Stochastic Sequential Modeling: Toward Improved Prostate Cancer Diagnosis Through Temporal-Ultrasound
- Author
-
Aaron Fenster, Parvin Mousavi, Mena Gaed, Farhad Imani, Hagit Shatkay, Eli Gibson, Layan Nahlawi, Aaron D. Ward, Purang Abolmaesumi, Jose A. Gomez, and Madeleine Moussa
- Subjects
Male ,business.industry ,Computer science ,Prostate ,Biomedical Engineering ,Correction ,Prostatic Neoplasms ,Models, Theoretical ,computer.software_genre ,medicine.disease ,Markov Chains ,Sequential modeling ,Prostate cancer ,medicine ,Humans ,Artificial intelligence ,business ,computer ,Natural language processing ,Ultrasonography - Abstract
Prostate cancer (PCa) is a common, serious form of cancer in men that is still prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic-temporal framework, namely, hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applied to other types of cancer in which temporal-ultrasound can be recorded.
- Published
- 2020
- Full Text
- View/download PDF
87. Stochastic Modeling of Temporal Enhanced Ultrasound: Impact of Temporal Properties on Prostate Cancer Characterization
- Author
-
Mena Gaed, Madeleine Moussa, Layan Nahlawi, Hagit Shatkay, Aaron Fenster, Parvin Mousavi, Eli Gibson, Farhad Imani, Purang Abolmaesumi, Caroline Goncalves, Aaron D. Ward, and Jose A. Gomez
- Subjects
Male ,Computer science ,030232 urology & nephrology ,Biomedical Engineering ,Signal ,Sensitivity and Specificity ,Article ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Biopsy ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Hidden Markov model ,Divergence (statistics) ,Ultrasonography ,Stochastic Processes ,medicine.diagnostic_test ,business.industry ,Ultrasound ,Prostate ,Prostatic Neoplasms ,Pattern recognition ,Tissue characterization ,medicine.disease ,Markov Chains ,Ultrasonic imaging ,Imaging technique ,Artificial intelligence ,business - Abstract
Objectives: Temporal enhanced ultrasound (TeUS) is a new ultrasound-based imaging technique that provides tissue-specific information. Recent studies have shown the potential of TeUS for improving tissue characterization in prostate cancer diagnosis. We study the temporal properties of TeUS, namely temporal order and length, and present a new framework to assess their impact on tissue information. Methods: We utilize a probabilistic modeling approach using hidden Markov models (HMMs) to capture the temporal signatures of malignant and benign tissues from TeUS signals of nine patients. We model signals of benign and malignant tissues (284 and 286 signals, respectively) in their original temporal order as well as under order permutations. We then compare the resulting models using the Kullback–Leibler divergence and assess their performance differences in characterization. Moreover, we train HMMs using TeUS signals of different durations and compare their model performance when differentiating tissue types. Results: Our findings demonstrate that models of order-preserved signals perform statistically significantly better (85% accuracy) in tissue characterization than models of order-altered signals (62% accuracy). The performance degrades as more changes in signal order are introduced. Additionally, models trained on shorter sequences perform as accurately as models of longer sequences. Conclusion: The work presented here strongly indicates that temporal order has a substantial impact on TeUS performance; thus, it plays a significant role in conveying tissue-specific information. Furthermore, shorter TeUS signals can relay sufficient information to accurately distinguish between tissue types. Significance: Understanding the impact of TeUS properties facilitates its adoption in diagnostic procedures and provides insights into improving its acquisition.
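A minimal sketch of the log-likelihood comparison described above, using the hmmlearn package: one Gaussian HMM is fitted per tissue class and an unseen signal is assigned to the class whose model scores it higher. The synthetic signals, state count and feature dimension are assumptions; the published models and features differ.

```python
# Hedged sketch: HMM log-likelihood classification of a TeUS-like signal.
# Signals are synthetic; class separation is induced by a small drift term.
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

rng = np.random.default_rng(3)

def make_signals(n, length, drift):
    """Synthetic 1-D TeUS-like sequences with a class-specific temporal drift."""
    return [np.cumsum(rng.normal(drift, 1.0, size=(length, 1)), axis=0) for _ in range(n)]

benign_train = make_signals(40, 100, drift=0.00)
malig_train = make_signals(40, 100, drift=0.05)

def fit_hmm(seqs, n_states=3):
    X = np.concatenate(seqs)                 # hmmlearn expects stacked samples plus lengths
    lengths = [len(s) for s in seqs]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

hmm_benign, hmm_malig = fit_hmm(benign_train), fit_hmm(malig_train)

test = make_signals(1, 100, drift=0.05)[0]   # one unseen "malignant-like" signal
label = "malignant" if hmm_malig.score(test) > hmm_benign.score(test) else "benign"
print("predicted:", label)
```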
- Published
- 2018
88. Integration of spatial information in convolutional neural networks for automatic segmentation of intraoperative transrectal ultrasound images
- Author
-
Nooshin, Ghavami, Yipeng, Hu, Ester, Bonmati, Rachael, Rodell, Eli, Gibson, Caroline, Moore, and Dean, Barratt
- Subjects
Special Section on Artificial Intelligence in Medical Imaging - Abstract
Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, which often require significant manual interaction and are subject to interoperator variability. Therefore, automating this step would lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking, as input, one or more TRUS slices neighboring each slice to be segmented, in addition to the slice itself. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). The segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients, on the 2-D images and corresponding 3-D volumes, respectively, as well as the 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve the segmentation performance in five out of six experiments, which included varying the number of neighboring slices from one to three on either side. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without the architectural addition.
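The sketch below illustrates two small pieces of the approach described above: assembling a multi-channel input from a slice and its neighbours, and scoring a segmentation with the Dice similarity coefficient. The array shapes, thresholds and neighbour count are placeholders, and the CNN itself is omitted.

```python
# Hedged sketch: neighbouring-slice input assembly and Dice scoring for TRUS
# segmentation. All data are synthetic; no network is trained here.
import numpy as np

def stack_neighbours(volume, idx, n_neighbours=1):
    """Return slice idx plus n_neighbours slices either side as channels (C, H, W)."""
    picks = np.clip(np.arange(idx - n_neighbours, idx + n_neighbours + 1),
                    0, volume.shape[0] - 1)
    return volume[picks]

def dice(pred, truth):
    """2-D Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(4)
trus = rng.random((32, 128, 128))                    # synthetic 3-D TRUS volume (slices, H, W)
x = stack_neighbours(trus, idx=10, n_neighbours=1)   # 3-channel input for the middle slice
pred = trus[10] > 0.70                               # placeholder "prediction" mask
truth = trus[10] > 0.65                              # placeholder reference mask
print(x.shape, f"Dice = {dice(pred, truth):.3f}")
```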
- Published
- 2018
89. Ultrasound-Based Characterization of Prostate Cancer Using Joint Independent Component Analysis
- Author
-
D. Robert Siemens, Farhad Imani, Mena Gaed, Cesare Romagnoli, Aaron D. Ward, Saman Nouranian, Silvia D. Chang, Michael Leveridge, Purang Abolmaesumi, Eli Gibson, Madeleine Moussa, Mahdi Ramezani, Amir Khojaste, Jose A. Gomez, Aaron Fenster, and Parvin Mousavi
- Subjects
Male ,Engineering ,Feature vector ,Feature extraction ,Wavelet Analysis ,030232 urology & nephrology ,Biomedical Engineering ,Cross-validation ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Image Processing, Computer-Assisted ,Humans ,Time series ,Ultrasonography ,Models, Statistical ,Receiver operating characteristic ,business.industry ,Prostate ,Prostatic Neoplasms ,Wavelet transform ,Pattern recognition ,Independent component analysis ,3. Good health ,Principal component analysis ,Artificial intelligence ,business - Abstract
Objective: This paper presents the results of a new approach for selection of RF time series features, based on joint independent component analysis, for in vivo characterization of prostate cancer. Methods: We project three sets of RF time series features, extracted from the spectrum, fractal dimension, and wavelet transform of the ultrasound RF data, onto a space spanned by five joint independent components. We then demonstrate that the mixing coefficients obtained from a group of patients can be used to train a classifier, which can be applied to characterize cancerous regions of a test patient. Results: In a leave-one-patient-out cross-validation, an area under the receiver operating characteristic curve of 0.93 and a classification accuracy of 84% are achieved. Conclusion: Ultrasound RF time series can be used to accurately characterize prostate cancer in vivo, without the need for an exhaustive search in the feature space. Significance: We use joint independent component analysis for the systematic fusion of multiple sets of RF time series features, within a machine learning framework, to characterize PCa in an in vivo study.
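As a loose, simplified stand-in for the fusion approach described above, the sketch below standardizes and concatenates three synthetic feature sets, reduces them to five components with scikit-learn's FastICA, and cross-validates a classifier on the resulting coefficients. FastICA on concatenated features only approximates joint ICA, and all data, dimensions and the classifier choice are assumptions.

```python
# Hedged sketch: ICA-based fusion of several RF time series feature sets,
# followed by a cross-validated classifier. Everything below is synthetic.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 200
spectral = rng.normal(size=(n, 8))                   # hypothetical spectral features
fractal = rng.normal(size=(n, 4))                    # hypothetical fractal-dimension features
wavelet = rng.normal(size=(n, 12))                   # hypothetical wavelet features
labels = rng.integers(0, 2, size=n)                  # 0 = benign, 1 = cancer (synthetic)

# In the published method the decomposition is learned from a training group
# only and applied to an unseen patient; here it is fitted on all data purely
# for brevity of illustration.
X = np.hstack([StandardScaler().fit_transform(f) for f in (spectral, fractal, wavelet)])
coeffs = FastICA(n_components=5, random_state=0, max_iter=1000).fit_transform(X)

auc = cross_val_score(SVC(kernel="rbf"), coeffs, labels, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC on synthetic data: {auc.mean():.2f}")
```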
- Published
- 2015
- Full Text
- View/download PDF
90. Class-Aware Adversarial Lung Nodule Synthesis in CT Images
- Author
-
Siqi Liu, Guillaume Chabin, Andrew F. Laine, Bogdan Georgescu, Eli Gibson, Arnaud Arindra Adiyoso Setio, Sasa Grbic, Dorin Comaniciu, Jie Yang, and Zhoubing Xu
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Context (language use) ,Malignancy ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Medical imaging ,business.industry ,Deep learning ,Supervised learning ,Pattern recognition ,Nodule (medicine) ,Real image ,medicine.disease ,Class (biology) ,ComputingMethodologies_PATTERNRECOGNITION ,Binary classification ,030220 oncology & carcinogenesis ,Artificial intelligence ,medicine.symptom ,business - Abstract
Though large-scale datasets are essential for training deep learning systems, it is expensive to scale up the collection of medical imaging datasets. Synthesizing the objects of interests, such as lung nodules, in medical images based on the distribution of annotated datasets can be helpful for improving the supervised learning tasks, especially when the datasets are limited by size and class balance. In this paper, we propose the class-aware adversarial synthesis framework to synthesize lung nodules in CT images. The framework is built with a coarse-to-fine patch in-painter (generator) and two class-aware discriminators. By conditioning on the random latent variables and the target nodule labels, the trained networks are able to generate diverse nodules given the same context. By evaluating on the public LIDC-IDRI dataset, we demonstrate an example application of the proposed framework for improving the accuracy of the lung nodule malignancy estimation as a binary classification problem, which is important in the lung screening scenario. We show that combining the real image patches and the synthetic lung nodules in the training set can improve the mean AUC classification score across different network architectures by 2%.
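The conditioning idea described above can be sketched by assembling the generator input: the masked CT patch, the in-painting mask, a random latent map, and the target nodule label broadcast as an extra channel. The shapes, channel layout and label encoding are illustrative assumptions; the published architecture is not reproduced here.

```python
# Hedged sketch: conditional in-painting input for class-aware nodule synthesis.
# The patch values, mask and label are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(6)
patch = rng.normal(size=(64, 64)).astype(np.float32)      # CT patch (synthetic HU-like values)
mask = np.zeros_like(patch); mask[24:40, 24:40] = 1.0     # region to in-paint with a nodule
label = 1                                                 # 0 = benign, 1 = malignant target class

masked_patch = patch * (1.0 - mask)                       # erase the region to be synthesised
latent = rng.normal(size=patch.shape).astype(np.float32)  # random latent variables
label_map = np.full_like(patch, float(label))             # label broadcast to a full channel

generator_input = np.stack([masked_patch, mask, latent, label_map])  # (4, 64, 64) channels
print(generator_input.shape)
```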
- Published
- 2018
- Full Text
- View/download PDF
91. A Graph-Based Multi-kernel Feature Weight Learning Framework for Detection and Grading of Prostate Lesions Using Multi-parametric MR Images
- Author
-
Bernard Chiu, Aaron D. Ward, Huagen Liang, Eli Gibson, Derek W. Cool, Weifu Chen, Qi Shen, Matthew Bastian-Jordan, Zahra Kassam, and Guocan Feng
- Subjects
medicine.diagnostic_test ,Computer science ,business.industry ,Feature extraction ,Pattern recognition ,Magnetic resonance imaging ,Regression analysis ,medicine.disease ,computer.software_genre ,Regression ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,Voxel ,030220 oncology & carcinogenesis ,medicine ,Artificial intelligence ,business ,computer ,Grading (tumors) - Abstract
Prostate cancer is the third leading cause of death in men. However, the disease is curable if diagnosed early. Over the past decades, multi-parametric magnetic resonance imaging (mpMRI) has been shown to be superior to trans-rectal ultrasound (TRUS) in detecting and localizing prostate cancer lesions to guide prostate biopsies and radiation therapies. The goal of this paper is to develop a simple and accurate graph-based regression framework for voxel-wise detection and grading of prostate cancer using mpMRI. In the framework, groups of features were first extracted from the mpMRI, and a graph-based multi-kernel model was proposed to learn the weights of the feature groups and the similarity matrix simultaneously. A Laplacian regression model was then used to estimate the PI-RADS score of each voxel, which characterizes the likelihood that the voxel is cancerous. Experimental results for detection and grading of prostate lesions, evaluated with six metrics, show that the proposed method yields convincing results.
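The final regression step named above can be illustrated with a Laplacian-regularized least-squares sketch: given a similarity matrix W over voxels, scores are smoothed by solving (I + lambda * L) f = y with L = D - W. A single fixed RBF kernel stands in for the learned multi-kernel similarity, and the features and targets below are synthetic.

```python
# Hedged sketch: Laplacian-regularised regression of a per-voxel score from a
# similarity graph. The multi-kernel weight learning step is omitted.
import numpy as np

rng = np.random.default_rng(7)
n = 300
features = rng.normal(size=(n, 6))                       # synthetic mpMRI voxel features
scores = rng.integers(1, 6, size=n).astype(float)        # synthetic PI-RADS-like targets

# Similarity matrix (RBF kernel) and graph Laplacian L = D - W.
d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * np.median(d2)))
L = np.diag(W.sum(1)) - W

# Laplacian-regularised least squares: minimise ||f - y||^2 + lam * f' L f,
# whose closed-form solution is f = (I + lam * L)^-1 y.
lam = 0.1
f = np.linalg.solve(np.eye(n) + lam * L, scores)
print("estimated scores for first five voxels:", np.round(f[:5], 2))
```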
- Published
- 2017
- Full Text
- View/download PDF
92. MP70-02 CORRELATION OF MPMRI CONTOURS WITH 3-DIMENSIONAL 5MM TRANSPERINEAL PROSTATE MAPPING BIOPSY WITHIN THE PROMIS TRIAL PILOT: WHAT MARGINS ARE REQUIRED?
- Author
-
Dean C. Barratt, Esther Bonmati, Richard Kaplan, Eli Gibson, Hashim U. Ahmed, Mark Emberton, Yipeng Hu, Clement Orczyk, Alex Kirkham, Yolana Coraco-Moraes, Shonit Punwani, Ahmed El-Shater Bosaily, Louise Brown, and Katie Ward
- Subjects
medicine.medical_specialty ,medicine.anatomical_structure ,medicine.diagnostic_test ,Prostate ,business.industry ,Urology ,Biopsy ,medicine ,Radiology ,business - Published
- 2017
- Full Text
- View/download PDF
93. MP38-07 SHOULD WE AIM FOR THE CENTRE OF AN MRI PROSTATE LESION? CORRELATION BETWEEN MPMRI AND 3-DIMENSIONAL 5MM TRANSPERINEAL PROSTATE MAPPING BIOPSIES FROM THE PROMIS TRIAL
- Author
-
Yipeng Hu, Dean C. Barratt, Esther Bonmati, Hashim U. Ahmed, Ahmed El-Shater Bosaily, Mark Emberton, Louise Brown, Shonit Punwani, Katie Ward, Alex Kirkham, Yolana Coraco-Moraes, Eli Gibson, Clement Orczyk, and Richard Kaplan
- Subjects
Lesion ,medicine.medical_specialty ,medicine.anatomical_structure ,Prostate ,business.industry ,Urology ,medicine ,Radiology ,medicine.symptom ,business - Published
- 2017
- Full Text
- View/download PDF
94. Freehand Ultrasound Image Simulation with Spatially-Conditioned Generative Adversarial Networks
- Author
-
Weidi Xie, Li-Lin Lee, Tom Vercauteren, J. Alison Noble, Dean C. Barratt, Yipeng Hu, and Eli Gibson
- Subjects
FOS: Computer and information sciences ,Discriminator ,Pixel ,business.industry ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Real image ,Sample (graphics) ,Imaging phantom ,Machine Learning (cs.LG) ,Computer Science - Learning ,Range (mathematics) ,Position (vector) ,Computer vision ,Artificial intelligence ,business ,Spatial analysis - Abstract
Sonography synthesis has a wide range of applications, including medical procedure simulation, clinical training and multimodality image registration. In this paper, we propose a machine learning approach to simulate ultrasound images at given 3D spatial locations (relative to the patient anatomy), based on conditional generative adversarial networks (GANs). In particular, we introduce a novel neural network architecture that can sample anatomically accurate images conditionally on the spatial position of the (real or mock) freehand ultrasound probe. To ensure effective and efficient assimilation of spatial information, the proposed spatially-conditioned GANs take calibrated pixel coordinates in global physical space as conditioning input, and utilise residual network units and shortcuts of conditioning data in the GANs' discriminator and generator, respectively. Using optically tracked B-mode ultrasound images, acquired by an experienced sonographer on a fetus phantom, we demonstrate the feasibility of the proposed method with two sets of quantitative results: distances were calculated between corresponding anatomical landmarks identified in the held-out ultrasound images and in the simulated data at the same locations unseen to the networks; and a usability study was carried out to distinguish the simulated data from the real images. In summary, we present what we believe are state-of-the-art, visually realistic ultrasound images, simulated by the proposed GAN architecture, which is stable to train and capable of generating plausibly diverse image samples.
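A small sketch of the conditioning input described above: pixel indices of a tracked B-mode frame are mapped through a hypothetical pixel-spacing calibration and probe-to-world transform to give per-pixel physical coordinates, which can then be stacked as conditioning channels. The transforms and image size are placeholders, not the study's calibration.

```python
# Hedged sketch: per-pixel global physical coordinates for a tracked B-mode frame.
# Spacing, pose and image size are illustrative assumptions.
import numpy as np

h, w = 120, 160
px_to_mm = np.diag([0.2, 0.2, 1.0, 1.0])                   # pixel spacing (mm), hypothetical
probe_to_world = np.eye(4)
probe_to_world[:3, 3] = [50.0, -20.0, 110.0]               # hypothetical tracker pose (translation only)

# Homogeneous pixel coordinates (row, col, 0, 1) for every pixel in the image.
rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
pix = np.stack([rows, cols, np.zeros_like(rows), np.ones_like(rows)], axis=-1).reshape(-1, 4).T

world = (probe_to_world @ px_to_mm @ pix)[:3].T.reshape(h, w, 3)   # (H, W, 3) coordinates in mm
conditioning = np.transpose(world, (2, 0, 1))                      # 3 coordinate channels
print(conditioning.shape)
```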
- Published
- 2017
- Full Text
- View/download PDF
95. Prostate: Registration of Digital Histopathologic Images to in Vivo MR Images Acquired by Using Endorectal Receive Coil
- Author
-
Jose A. Gomez, Cesare Romagnoli, Eli Gibson, Glenn Bauman, Jacques Montreuil, Aaron D. Ward, Charles A. McKenzie, Aaron Fenster, Cathie Crukley, Joseph L. Chin, and Madeleine Moussa
- Subjects
Gadolinium DTPA ,Male ,medicine.medical_specialty ,medicine.medical_treatment ,Contrast Media ,Magnetic Resonance Imaging, Interventional ,Imaging, Three-Dimensional ,Fiducial Markers ,Prostate ,In vivo ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Prostatectomy ,medicine.diagnostic_test ,business.industry ,Prostatic Neoplasms ,Magnetic resonance imaging ,Magnetic Resonance Imaging ,medicine.anatomical_structure ,Research studies ,Prostate surgery ,Radiology ,Mr images ,business ,Fiducial marker - Abstract
Purpose: To develop and evaluate a technique for the registration of in vivo prostate magnetic resonance (MR) images to digital histopathologic images by using image-guided specimen slicing based on strand-shaped fiducial markers relating specimen imaging to histopathologic examination. Materials and Methods: The study was approved by the institutional review board (the University of Western Ontario Health Sciences Research Ethics Board, London, Ontario, Canada), and written informed consent was obtained from all patients. This work proposed and evaluated a technique utilizing developed fiducial markers and real-time three-dimensional visualization in support of image guidance for ex vivo prostate specimen slicing parallel to the MR imaging planes prior to digitization, simplifying the registration process. Means, standard deviations, root-mean-square errors, and 95% confidence intervals are reported for all evaluated measurements. Results: The slicing error was within the 2.2 mm thickness of the diagnostic-quality MR imaging sections, with a tissue block thickness standard deviation of 0.2 mm. Rigid registration provided negligible postregistration overlap of the smallest clinically important tumors (0.2 cm³) at histologic examination and MR imaging, whereas the tested nonrigid registration method yielded a mean target registration error of 1.1 mm and provided useful coregistration of such tumors. Conclusion: This method for the registration of prostate digital histopathologic images to in vivo MR images acquired by using an endorectal receive coil was sufficiently accurate for coregistering the smallest clinically important lesions with 95% confidence.
- Published
- 2012
- Full Text
- View/download PDF
96. A semi-automated method for identifying and measuring myelinated nerve fibers in scanning electron microscope images
- Author
-
Heather L. More, Mirza Faisal Beg, Eli Gibson, J. Maxwell Donelan, and Jingyun Chen
- Subjects
Time Factors ,Scanning electron microscope ,Computer science ,Myelinated nerve fiber ,business.industry ,General Neuroscience ,Pattern recognition ,Image segmentation ,Nerve Fibers, Myelinated ,Sciatic Nerve ,Axons ,Rats ,Myelin ,medicine.anatomical_structure ,nervous system ,Peripheral nerve ,Peripheral nervous system ,Microscopy, Electron, Scanning ,medicine ,Animals ,Artificial intelligence ,Axon ,business ,Neuroscience ,Automated method - Abstract
Diagnosing illnesses, developing and comparing treatment methods, and conducting research on the organization of the peripheral nervous system often require the analysis of peripheral nerve images to quantify the number, myelination, and size of axons in a nerve. Current methods that require manually labeling each axon can be extremely time-consuming, as a single nerve can contain thousands of axons. To improve efficiency, we developed a computer-assisted axon identification and analysis method capable of analyzing and measuring sub-images covering the nerve cross-section, acquired using a scanning electron microscope. This algorithm performs three main procedures: it first uses cross-correlation to combine the acquired sub-images into a large image showing the entire nerve cross-section, then identifies and individually labels axons using a series of image intensity and shape criteria, and finally identifies and labels the myelin sheath of each axon using a region-growing algorithm with the geometric centers of axons as seeds. To ensure accurate analysis of the image, we incorporated manual supervision to remove mislabeled axons and add missed axons. The typical user-assisted processing time for a two-megapixel image containing over 2000 axons was less than 1 h, almost eight times faster than manually processing the same image. Our method has proven to be well suited for identifying axons and their characteristics, and represents a significant time saving over traditional manual methods.
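The myelin labeling step described above relies on region growing from axon centres; the sketch below implements a minimal 4-connected, intensity-tolerance region grower on a synthetic image. The tolerance, seed and image are assumptions, and the stitching and axon detection stages are omitted.

```python
# Hedged sketch: intensity-based region growing from an axon-centre seed.
# Image and parameters are synthetic; this is not the published pipeline.
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a 4-connected region around seed, accepting pixels within tol of the seed value."""
    grown = np.zeros(image.shape, dtype=bool)
    ref = image[seed]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if grown[r, c] or abs(image[r, c] - ref) > tol:
            continue
        grown[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and not grown[rr, cc]:
                queue.append((rr, cc))
    return grown

rng = np.random.default_rng(8)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] = rng.normal(0.8, 0.05, (20, 20))   # bright "myelin" blob around the axon
mask = region_grow(img, seed=(30, 30), tol=0.2)
print("pixels in grown region:", int(mask.sum()))
```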
- Published
- 2011
- Full Text
- View/download PDF
97. Toward Prostate Cancer Contouring Guidelines on Magnetic Resonance Imaging: Dominant Lesion Gross and Clinical Target Volume Coverage Via Accurate Histology Fusion
- Author
-
Mena Gaed, Matthew Bastian-Jordan, Masoom A. Haider, Zahra Kassam, Stephen E. Pautler, Eli Gibson, Madeleine Moussa, Aaron Fenster, Jose A. Gomez, Glenn Bauman, Aaron D. Ward, Cesare Romagnoli, Derek W. Cool, Joseph L. Chin, and Cathie Crukley
- Subjects
Male ,Cancer Research ,medicine.medical_treatment ,Multimodal Imaging ,Sensitivity and Specificity ,030218 nuclear medicine & medical imaging ,Lesion ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Prostate ,Image Interpretation, Computer-Assisted ,medicine ,Effective diffusion coefficient ,Humans ,Radiology, Nuclear Medicine and imaging ,Multiparametric Magnetic Resonance Imaging ,Aged ,Prostatectomy ,Contouring ,Radiation ,medicine.diagnostic_test ,business.industry ,Margins of Excision ,Prostatic Neoplasms ,Reproducibility of Results ,Magnetic resonance imaging ,Middle Aged ,medicine.disease ,Magnetic Resonance Imaging ,Tumor Burden ,medicine.anatomical_structure ,Treatment Outcome ,Oncology ,Surgery, Computer-Assisted ,030220 oncology & carcinogenesis ,Practice Guidelines as Topic ,medicine.symptom ,business ,Nuclear medicine - Abstract
Purpose: Defining prostate cancer (PCa) lesion clinical target volumes (CTVs) for multiparametric magnetic resonance imaging (mpMRI) could support focal boosting or treatment to improve outcomes or lower morbidity, necessitating appropriate CTV margins for mpMRI-defined gross tumor volumes (GTVs). This study aimed to identify CTV margins yielding 95% coverage of PCa tumors for prospective cases with high likelihood. Methods and Materials: Twenty-five men with biopsy-confirmed clinical stage T1 or T2 PCa underwent pre-prostatectomy mpMRI, yielding T2-weighted, dynamic contrast-enhanced, and apparent diffusion coefficient images. Digitized whole-mount histology was contoured and registered to mpMRI scans (error ≤2 mm). Four observers contoured lesion GTVs on each mpMRI scan. CTVs were defined by isotropic and anisotropic expansion from these GTVs and from multiparametric (unioned) GTVs from 2 to 3 scans. Histologic coverage (proportions of tumor area on co-registered histology inside the CTV, measured for Gleason scores [GSs] ≥6 and ≥7) and prostate sparing (proportions of prostate volume outside the CTV) were measured. Nonparametric histologic-coverage prediction intervals defined minimal margins yielding 95% coverage for prospective cases with 78% to 92% likelihood. Results: On analysis of 72 true-positive tumor detections, 95% coverage margins were 9 to 11 mm (GS ≥ 6) and 8 to 10 mm (GS ≥ 7) for single-sequence GTVs and were 8 mm (GS ≥ 6) and 6 mm (GS ≥ 7) for 3-sequence GTVs, yielding CTVs that spared 47% to 81% of prostate tissue for the majority of tumors. Inclusion of T2-weighted contours increased sparing for multiparametric CTVs with 95% coverage margins for GS ≥6, and inclusion of dynamic contrast-enhanced contours increased sparing for GS ≥7. Anisotropic 95% coverage margins increased the sparing proportions to 71% to 86%. Conclusions: Multiparametric magnetic resonance imaging–defined GTVs expanded by appropriate margins may support focal boosting or treatment of PCa; however, these margins, accounting for interobserver and intertumoral variability, may preclude highly conformal CTVs. Multiparametric GTVs and anisotropic margins may reduce the required margins and improve prostate sparing.
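The margin expansion and the coverage/sparing definitions above can be illustrated volumetrically: a binary GTV is dilated by an isotropic margin with a spherical structuring element, clipped to the prostate, and compared with a registered tumour mask. The masks, spacing and margin value are synthetic placeholders, and coverage is computed here on 3D volumes rather than on 2D histology area as in the study.

```python
# Hedged sketch: isotropic CTV margin expansion with coverage and sparing metrics.
# All masks below are synthetic placeholders.
import numpy as np
from scipy.ndimage import binary_dilation

def ball(radius_mm, spacing_mm):
    """Spherical structuring element for an isotropic expansion of radius_mm."""
    r = np.ceil(radius_mm / np.asarray(spacing_mm)).astype(int)
    zz, yy, xx = np.mgrid[-r[0]:r[0] + 1, -r[1]:r[1] + 1, -r[2]:r[2] + 1]
    dist = np.sqrt((zz * spacing_mm[0]) ** 2 + (yy * spacing_mm[1]) ** 2 + (xx * spacing_mm[2]) ** 2)
    return dist <= radius_mm

spacing = (2.0, 1.0, 1.0)                                  # voxel size in mm (z, y, x), hypothetical
prostate = np.zeros((40, 80, 80), dtype=bool); prostate[5:35, 15:65, 15:65] = True
gtv = np.zeros_like(prostate); gtv[18:24, 35:45, 35:45] = True
tumour = np.zeros_like(prostate); tumour[16:26, 33:48, 33:48] = True   # registered "histology" tumour

ctv = binary_dilation(gtv, structure=ball(8.0, spacing)) & prostate    # 8 mm isotropic margin
coverage = (tumour & ctv).sum() / tumour.sum()             # proportion of tumour inside the CTV
sparing = 1.0 - (ctv & prostate).sum() / prostate.sum()    # proportion of prostate outside the CTV
print(f"coverage = {coverage:.2f}, sparing = {sparing:.2f}")
```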
- Published
- 2015
98. Using Hidden Markov Models to capture temporal aspects of ultrasound data in prostate cancer
- Author
-
Aaron Fenster, Parvin Mousavi, Farhad Imani, Mena Gaed, Purang Abolmaesumi, Hagit Shatkay, Eli Gibson, Madeleine Moussa, Aaron D. Ward, Layan Nahlawi, and Jose A. Gomez
- Subjects
Future studies ,business.industry ,Computer science ,Speech recognition ,Ultrasound ,Pattern recognition ,medicine.disease ,Prostate cancer ,medicine.anatomical_structure ,Prostate ,medicine ,Artificial intelligence ,business ,Hidden Markov model - Abstract
Recent studies highlight temporal ultrasound data as highly promising for differentiating between malignant and benign tissues in prostate cancer patients. Since hidden Markov models can be used to capture order and patterns in time-varying signals, we employ them to model temporal aspects of ultrasound data that are typically not incorporated in existing models. By comparing order-preserving and order-altering models, we demonstrate that the order encoded in the series is necessary to model the variability in ultrasound data of prostate tissues. In future studies, we will investigate the influence of order on the differentiation between malignant and benign tissues.
- Published
- 2015
- Full Text
- View/download PDF
99. Computer-Aided Prostate Cancer Detection Using Ultrasound RF Time Series: In Vivo Feasibility Study
- Author
-
Mena Gaed, Silvia D. Chang, Eli Gibson, Jose A. Gomez, Aaron Fenster, Parvin Mousavi, Cesare Romagnoli, Aaron D. Ward, Madeleine Moussa, Amir Khojaste, D. Robert Siemens, Michael Leveridge, Purang Abolmaesumi, and Farhad Imani
- Subjects
Male ,Pathology ,medicine.medical_specialty ,Feature extraction ,Wavelet ,Image Interpretation, Computer-Assisted ,medicine ,Humans ,Electrical and Electronic Engineering ,Time series ,Ultrasonography ,Radiological and Ultrasound Technology ,Receiver operating characteristic ,business.industry ,Ultrasound ,Prostate ,Prostatic Neoplasms ,Reproducibility of Results ,Pattern recognition ,3. Good health ,Computer Science Applications ,Hierarchical clustering ,Area Under Curve ,Computer-aided ,Feasibility Studies ,Artificial intelligence ,Radio frequency ,business ,Software - Abstract
This paper presents the results of a computer-aided intervention solution demonstrating the application of RF time series for in vivo characterization of prostate cancer. Methods: We pre-process RF time series features extracted from 14 patients using hierarchical clustering to remove possible outliers. Then, we demonstrate that the mean central frequency and wavelet features extracted from a group of patients can be used to build a nonlinear classifier which can be applied successfully to differentiate between cancerous and normal tissue regions of an unseen patient. Results: In a cross-validation strategy, we show an average area under the receiver operating characteristic curve (AUC) of 0.93 and a classification accuracy of 80%. To validate our results, we present a detailed ultrasound-to-histology registration framework. Conclusion: Ultrasound RF time series enables differentiation of cancerous and normal tissue with high AUC.
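One of the feature families named above, the mean central frequency of an RF time series, can be sketched as a power-weighted spectral centroid. The frame rate, signal and exact feature definition below are assumptions for illustration and may differ from the published features.

```python
# Hedged sketch: "mean central frequency" of one RF time series as a spectral centroid.
# The signal and frame rate are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(9)
frame_rate = 22.0                                        # frames per second (hypothetical)
n_frames = 128
t = np.arange(n_frames) / frame_rate
signal = np.sin(2 * np.pi * 2.5 * t) + 0.3 * rng.normal(size=n_frames)   # one RF time series

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate)
central_freq = (freqs * spectrum).sum() / spectrum.sum()  # power-weighted mean frequency
print(f"mean central frequency: {central_freq:.2f} Hz")
```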
- Published
- 2015
100. Sci-Fri AM: MRI and Diagnostic Imaging - 04: How does prostate biopsy guidance error impact pathologic cancer risk assessment?
- Author
-
Joseph L. Chin, Aaron D. Ward, Aaron Fenster, Stephen E. Pautler, Madeleine Moussa, Peter R. Martin, Eli Gibson, Jose A. Gomez, Mena Gaed, and Derek W. Cool
- Subjects
medicine.medical_specialty ,Prostate biopsy ,medicine.diagnostic_test ,business.industry ,Prostatectomy ,medicine.medical_treatment ,Ultrasound ,Image registration ,Magnetic resonance imaging ,General Medicine ,Cancer risk assessment ,Biopsy ,medicine ,Medical imaging ,Radiology ,business - Abstract
Purpose: MRI-targeted, 3D transrectal ultrasound (TRUS)-guided prostate biopsy aims to reduce the 21–47% false negative rate [1] of clinical 2D TRUS-guided sextant biopsy, but still has a substantial false negative rate. This could be improved via biopsy needle target optimization, accounting for uncertainties due to guidance system errors and image registration errors. As an initial step toward this broader goal, we elucidated the impact of biopsy needle delivery error on the probability of obtaining tumour samples and on core involvement. Both are important parameters for patient risk stratification and treatment decisions. Methods: We investigated this for cancer of all grades, and separately for intermediate/high grade (≥Gleason 4+3) cancer. We used expert-contoured gold-standard prostatectomy histology to simulate targeted biopsies using an isotropic Gaussian needle delivery error from 1 to 6 mm, and investigated the amount of cancer obtained in each biopsy core as determined by histology. Results: Needle delivery error resulted in core involvement variability that could influence treatment decisions; the presence or absence of cancer in 1/3 or more of each needle core can be attributed to a needle delivery error of 4 mm (as observed in practice [2]). Conclusions: Repeated biopsies of the same tumour target can yield percent core involvement measures with sufficient variability to influence the decision between active surveillance and treatment. However, this may be mitigated by making more than one biopsy attempt at selected tumour targets.
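The effect of needle delivery error described above can be illustrated with a small Monte Carlo sketch: lateral needle offsets are drawn from an isotropic Gaussian and converted into the chord length a biopsy core would capture from a spherical tumour. The tumour radius, core length and error magnitude are hypothetical, and the study used registered prostatectomy histology rather than a geometric phantom.

```python
# Hedged sketch: Monte Carlo view of core involvement under Gaussian needle delivery error.
# All geometric parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
tumour_radius = 5.0        # mm, hypothetical spherical tumour
core_length = 18.0         # mm of tissue captured by the biopsy core (hypothetical)
sigma = 4.0                # mm, isotropic needle delivery error magnitude
n_trials = 100_000

# Lateral miss distance of the needle axis from the tumour centre (2-D Gaussian).
offsets = rng.normal(0.0, sigma, size=(n_trials, 2))
r = np.linalg.norm(offsets, axis=1)

# Chord length of the needle track through a sphere at lateral offset r.
chord = np.where(r < tumour_radius,
                 2.0 * np.sqrt(np.maximum(tumour_radius ** 2 - r ** 2, 0.0)), 0.0)
involvement = chord / core_length                      # fraction of the core containing tumour

print(f"probability of sampling the tumour at all: {np.mean(chord > 0):.2f}")
print(f"core involvement: median {np.median(involvement):.2f}, "
      f"90th percentile {np.percentile(involvement, 90):.2f}")
```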
- Published
- 2016
- Full Text
- View/download PDF