90 results on '"Dean C. Barratt"'
Search Results
2. Prostate Radiofrequency Focal Ablation (ProRAFT) Trial: A Prospective Development Study Evaluating a Bipolar Radiofrequency Device to Treat Prostate Cancer
- Author
-
Rachael Rodell, Neil McCartan, Clement Orczyk, Mark Emberton, Hashim U. Ahmed, Chris Brew-Graves, Alex Freeman, Navin Ramachandran, Norman R. Williams, Ingrid Potyka, Yipeng Hu, and Dean C. Barratt
- Subjects
Male, Radiofrequency Ablation, Biopsy, Urology, Prostate cancer, Prostate, Biomarkers, Tumor, Humans, Bipolar radiofrequency, Prospective Studies, Multiparametric Magnetic Resonance Imaging, Aged, Neoplasm Staging, Prostatic Neoplasms, Equipment Design, Middle Aged, Prostate-Specific Antigen, Ablation, Focal ablation, Neoplasm Grading, Radiology - Abstract
We determined the early efficacy of bipolar radiofrequency ablation with a coil design for focal ablation of clinically significant localized prostate cancer visible at multiparametric magnetic resonance imaging. A prospective IDEAL phase 2 development study (Focal Prostate Radiofrequency Ablation, NCT02294903) recruited treatment-naïve patients with a single focus of significant localized prostate cancer (Gleason 7, or 4 mm or more of Gleason 6) concordant with a lesion visible on multiparametric magnetic resonance imaging. The intervention was focal ablation with a bipolar radiofrequency system (Encage™) encompassing the lesion and a predefined margin, using nonrigid magnetic resonance imaging-ultrasound fusion. The primary outcome was the proportion of men with absence of significant localized disease on biopsy at 6 months. Trial followup consisted of serum prostate specific antigen and multiparametric magnetic resonance imaging at 1 week, and 6 and 12 months post-ablation. Validated patient reported outcome measures for urinary, erectile and bowel function, and an adverse events monitoring system, were used. Analyses were done on a per-protocol basis. Of the 21 patients recruited, 20 received the intervention. Baseline characteristics were a median age of 66 years (IQR 63-69) and a preoperative median prostate specific antigen of 7.9 ng/ml (IQR 5.3-9.6). A total of 18 patients (90%) had Gleason 7 disease, with a median maximum cancer length of 7 mm (IQR 5-10) and a median multiparametric magnetic resonance imaging lesion volume of 2.8 cc (IQR 1.4-4.8). Targeted biopsy of the treated area (median number of cores 6, IQR 5-8) showed absence of significant localized prostate cancer in 16/20 men (80%), concordant with multiparametric magnetic resonance imaging.
Patient reported outcome measures analysis showed a low profile of side effects, and there were no serious adverse events. Focal therapy of significant localized prostate cancer associated with a magnetic resonance imaging lesion using bipolar radiofrequency showed early efficacy to ablate cancer with low rates of genitourinary and rectal side effects.
- Published
- 2021
3. False Positive Multiparametric Magnetic Resonance Imaging Phenotypes in the Biopsy-naïve Prostate: Are They Distinct from Significant Cancer-associated Lesions? Lessons from PROMIS
- Author
-
Louise Brown, Richard Hindley, Derek J. Rosario, Maneesh Ghei, Shonit Punwani, Elena Frangou, Solon Karapanagiotis, Hashim U. Ahmed, Iqbal S. Shergill, Mathias Winkler, Lina M. Carmona Echeverria, Alex Kirkham, Alastair Henderson, Tim Dudderidge, Simon Bott, Francesco Giganti, Vasilis Stavrinides, Tom Syer, Richard Kaplan, Mark Emberton, Chris Parker, Dean C. Barratt, Raj Persad, Joseph M. Norris, Hayley C. Whitaker, Ahmed El-Shater Bosaily, Robert Oldroyd, Nicholas Burns-Cox, Alex Freeman, and Yipeng Hu
- Subjects
Male, Biopsy, Urology, Medical Overuse, PROMIS, Prostate cancer, Prostate, Multiparametric Magnetic Resonance Imaging, Humans, False Positive Reactions, Prostatic Neoplasms, Cancer, Magnetic Resonance Imaging, Prostate-Specific Antigen, False positive lesions, Phenotype, Radiology - Abstract
Background: False positive multiparametric magnetic resonance imaging (mpMRI) phenotypes prompt unnecessary biopsies. The Prostate MRI Imaging Study (PROMIS) provides a unique opportunity to explore such phenotypes in biopsy-naïve men with raised prostate-specific antigen (PSA) and suspected cancer. Objective: To compare mpMRI lesions in men with/without significant cancer on transperineal mapping biopsy (TPM). Design, setting, and participants: PROMIS participants (n = 235) underwent mpMRI followed by a combined biopsy procedure at University College London Hospital, including 5-mm TPM as the reference standard. Patients were divided into four mutually exclusive groups according to TPM findings: (1) no cancer, (2) insignificant cancer, (3) definition 2 significant cancer (Gleason ≥3 + 4 of any length and/or maximum cancer core length ≥4 mm of any grade), and (4) definition 1 significant cancer (Gleason ≥4 + 3 of any length and/or maximum cancer core length ≥6 mm of any grade). Outcome measurements and statistical analysis: Index and/or additional lesions present in 178 participants were compared between TPM groups in terms of number, conspicuity, volume, location, and radiological characteristics. Results and limitations: Most lesions were located in the peripheral zone. More men with significant cancer had two or more lesions than those without significant disease (67% vs 37%). Take Home Message: Significant cancer-associated magnetic resonance imaging lesions in biopsy-naïve men with suspected prostate cancer are larger, more conspicuous, and more diffusion restricted than false positives. Prostate-specific antigen density and apparent diffusion coefficient are predictors of significant disease, and could help guide decisions to biopsy men with indeterminate phenotypes.
- Published
- 2021
4. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study
- Author
-
Matthew J. Clarkson, David J. Hawkes, Crispin Schneider, Sebastian Ourselin, Johannes Totz, M. H. Sodergren, Kurinchi Selvan Gurusamy, Moustafa Allam, Danail Stoyanov, Yi Song, Adrien E. Desjardins, Brian R. Davidson, Dean C. Barratt, and Stephen A. Thompson
- Subjects
Adult, Male, Computer-assisted surgery, Patient safety, Semi-automatic registration, Clinical endpoint, Image-guided surgery, Humans, Medical physics, Aged, Aged, 80 and over, Augmented Reality, Iterative closest point, Usability, Stereoscopic surface reconstruction, Middle Aged, Liver, Surgery, Computer-Assisted, Feasibility Studies, Female, Surgery, Laparoscopic liver surgery - Abstract
Background: The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems (IGS) may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. Methods: Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was defined as successful registration as determined by the operating surgeon. Secondary endpoints were system usability, as assessed by a surgeon questionnaire, and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. Results: The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation, but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound.
Conclusion: The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
- Published
- 2020
5. Accuracy of Transperineal Targeted Prostate Biopsies, Visual Estimation and Image Fusion in Men Needing Repeat Biopsy in the PICTURE Trial
- Author
-
Charles Jameson, Alex Freeman, Lucy A.M. Simmons, Susan C. Charman, Neil McCartan, Yipeng Hu, Hashim U. Ahmed, Shonit Punwani, Abi Kanthabalan, Mark Emberton, Caroline M. Moore, David J. Hawkes, Dean C. Barratt, Tim Briggs, Jan van der Muelen, and Manit Arya
- Subjects
Image-Guided Biopsy, Male, Urology, Magnetic Resonance Imaging, Interventional, Perineum, Prostate cancer, Prostate, Biopsy, Image Processing, Computer-Assisted, Humans, Prospective Studies, Ultrasonography, Interventional, Multiparametric Magnetic Resonance Imaging, Aged, Image fusion, Prostatic Neoplasms, Cancer, Middle Aged, Prostate-specific antigen, Treatment Outcome, Feasibility Studies, Biopsy, Large-Core Needle, Radiology - Abstract
We evaluated the detection of clinically significant prostate cancer using magnetic resonance imaging targeted biopsies, and compared visual estimation to image fusion targeting, in patients requiring repeat prostate biopsies. The prospective, ethics committee approved PICTURE trial (ClinicalTrials.gov NCT01492270) enrolled 249 consecutive patients from January 11, 2012 to January 29, 2014. Men underwent multiparametric magnetic resonance imaging and were blinded to the results. All underwent transperineal template prostate mapping biopsies. In 200 men with a lesion this was preceded by visual estimation and image fusion targeted biopsies. As the primary study end point, clinically significant prostate cancer was defined as Gleason 4 + 3 or greater and/or any grade of cancer with a length of 6 mm or greater. Other definitions of clinically significant prostate cancer were also evaluated. Mean ± SD patient age was 62.6 ± 7 years, median prostate specific antigen was 7.17 ng/ml (IQR 5.25-10.09), and mean primary lesion size was 0.37 ± 1.52 cc, with a mean of 4.3 ± 2.3 targeted cores per lesion on visual estimation and image fusion combined and a mean of 48.7 ± 12.3 transperineal template prostate mapping biopsy cores. Transperineal template prostate mapping biopsies detected 97 clinically significant prostate cancers (48.5%) and 85 insignificant cancers (42.5%). Overall, multiparametric magnetic resonance imaging targeted biopsies detected 81 clinically significant prostate cancers (40.5%) and 63 insignificant cancers (31.5%). In 18 cases (9%), clinically significant prostate cancer found on magnetic resonance imaging targeted biopsies was benign or clinically insignificant on transperineal template prostate mapping biopsy. Clinically significant prostate cancer was detected in 34 cases (17%) on transperineal template prostate mapping biopsy but not on magnetic resonance imaging targeted biopsies, and approximately half of these cancers were present in nontargeted areas.
Clinically significant prostate cancer was found on visual estimation and image fusion in 53 (31.3%) and 48 (28.4%) of the 169 patients, respectively (McNemar test p = 0.5322). Visual estimation missed 23 clinically significant prostate cancers (13.6%) detected by image fusion, while image fusion missed 18 clinically significant prostate cancers (10.8%) detected by visual estimation. Magnetic resonance imaging targeted biopsies are accurate for detecting clinically significant prostate cancer and reducing the overdiagnosis of insignificant cancers. To maximize detection, visual estimation as well as image fusion targeted biopsies are required.
- Published
- 2018
6. Image quality assessment for closed-loop computer-assisted lung ultrasound
- Author
-
Zachary M. C. Baum, Andrew Walden, Ester Bonmati, David J. Hawkes, Lorenzo Cristoni, Baris Kanber, Yipeng Hu, Geoffrey J. M. Parker, Claudia A. M. Wheeler-Kingshott, Ferran Prados, and Dean C. Barratt
- Subjects
Computer science, Image quality, Deep learning, Machine learning, Novelty detection, Intensive care, Anomaly detection, Sensitivity, Artificial intelligence, Test data - Abstract
We describe a novel, two-stage computer assistance system for lung anomaly detection using ultrasound imaging in the intensive care setting, intended to improve operator performance and patient stratification during coronavirus pandemics. The proposed system consists of two deep-learning-based models: a quality assessment module that automates predictions of image quality, and a diagnosis assistance module that determines the likelihood of anomaly in ultrasound images of sufficient quality. Our two-stage strategy uses a novelty detection algorithm to address the lack of control cases available for training the quality assessment classifier. The diagnosis assistance module can then be trained with data that are deemed of sufficient quality, as guaranteed by the closed-loop feedback mechanism from the quality assessment module. Using more than 25,000 ultrasound images from 37 COVID-19-positive patients scanned at two hospitals, plus 12 control cases, this study demonstrates the feasibility of the proposed machine learning approach. We report an accuracy of 86% when the quality assessment module classifies between sufficient- and insufficient-quality images. For data of sufficient quality, as determined by the quality assessment module, the mean classification accuracy, sensitivity, and specificity in detecting COVID-19-positive cases were 0.95, 0.91, and 0.97, respectively, across five holdout test data sets unseen during the training of any networks within the proposed system. Overall, the integration of the two modules yields accurate, fast, and practical acquisition guidance and diagnostic assistance for patients with suspected respiratory conditions at point-of-care.
- Published
- 2021
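The closed-loop design in entry 6, a quality gate placed in front of a diagnosis module so that only sufficient-quality images reach the classifier, can be sketched in a few lines. Everything here is illustrative: the scoring functions, the threshold, and all names are stand-ins for the paper's trained networks, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quality_score(image):
    # Stand-in for the quality assessment module: here, mean intensity
    # acts as a proxy for a learned image-quality probability.
    return float(image.mean())

def diagnose(image):
    # Stand-in for the diagnosis assistance module: returns a
    # likelihood-of-anomaly score (here, just intensity spread).
    return float(image.std())

def closed_loop(images, quality_threshold=0.3):
    """Route only sufficient-quality images to the diagnosis module;
    flag the rest for re-acquisition (the closed-loop feedback)."""
    results = []
    for img in images:
        if quality_score(img) >= quality_threshold:
            results.append(("diagnose", diagnose(img)))
        else:
            results.append(("reacquire", None))
    return results

# One dim (low-quality) frame and one normal frame.
images = [rng.random((8, 8)) * s for s in (0.2, 1.0)]
decisions = closed_loop(images)
```

The gating means the diagnosis model is never asked to score frames the quality module would reject, which is the guarantee the abstract describes.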
7. Combined 3D super-resolution, de-noising and partial volume correction for percutaneous ablation
- Author
-
Steve Bandula, Dean C. Barratt, Yipeng Hu, and Mark A. Pinnock
- Subjects
Artificial neural network, Computer science, Image quality, Noise reduction, Partial volume, Pattern recognition, Artificial intelligence, Spline interpolation, Convolutional neural network, Random forest, Interpolation - Abstract
Percutaneous cryoablation is becoming more popular for the treatment of renal cell carcinoma. Interventional computed tomography (iCT) is commonly used for guidance, but reducing the radiation dose and increasing the slice thickness make super-resolution (SR) essential for improving image quality. The proposed method takes low-quality (LQ), thick-slice images and converts them to high-quality (HQ), thin-slice images while performing denoising and partial volume correction in the z-direction. As LQ and HQ iCT images are challenging to pair up, we train a 3D U-Net equipped with an up-sampling module on simulated LQ (sLQ) data and then test on real LQ (rLQ) images, with cubic interpolation and random forest as comparisons. During validation on sLQ data, the U-Net outperformed interpolation and random forest (SSIM 0.9991 vs 0.9959 and 0.9985, respectively), but performance suffered when testing on the out-of-distribution rLQ images. The Dice score showed a substantial improvement when comparing needle segmentations performed on U-Net-generated images versus those from interpolation and random forest (0.4073 vs. 0.2919 and 0.3777, respectively), indicating that the U-Net reduces the z-direction partial volume effect to a greater degree than these techniques. We have shown that a neural network trained to perform SR on simulated data outperforms interpolation and random forest on real data in terms of localisation of clinically relevant objects such as needles, despite the differing data distributions.
- Published
- 2021
8. End-to-end forecasting of needle trajectory in percutaneous ablation
- Author
-
Dean C. Barratt, Mark A. Pinnock, Steve Bandula, and Yipeng Hu
- Subjects
Ground truth, Computer science, Monte Carlo method, Inference, Machine learning, Convolutional neural network, Test case, Sørensen–Dice coefficient, Trajectory, Artificial intelligence, Dropout (neural networks) - Abstract
Percutaneous techniques are becoming more widely adopted for treating solid organ tumours. With many of these techniques using image guidance, accuracy is essential to reduce the risk of complications and tumour recurrence. We propose a novel approach to needle trajectory forecasting using a 3D U-Net trained on interventional computed tomography (iCT) images from renal cryoablation procedures. The U-Net is trained to predict future needle locations from present iCT images, supervised by ground-truth labels at future time points. Furthermore, we demonstrate that needle trajectory forecasting may be substantially improved by Monte Carlo dropout (MCDO), which additionally generates uncertainty maps of the predictions. With 122 training iCT volumes, a Dice coefficient of 0.48 on highly elongated needle morphology was achieved on 41 unseen test cases, significantly outperforming two models not using MCDO at inference, which achieved Dice scores of 0.14 and 0.32. MCDO also greatly improved the predicted needle morphology and was able to incorporate directional information into the predictions. Our approach shows promise for improving accuracy and workflow in image-guided procedures, with an interesting research direction in predicting uncertain future interventional events by ensembling in supervised approaches.
- Published
- 2021
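The Monte Carlo dropout idea in entry 8, keeping dropout active at inference time and averaging many stochastic forward passes to obtain a mean prediction plus an uncertainty map, can be illustrated with a single linear layer. The layer, shapes, and dropout rate below are hypothetical stand-ins, not the paper's 3D U-Net.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_forward(x, weights, drop_prob=0.5):
    # One forward pass with dropout kept ON at inference:
    # each weight is zeroed with probability drop_prob, then
    # rescaled so the expected output matches the full network.
    mask = rng.random(weights.shape) >= drop_prob
    return x @ (weights * mask) / (1.0 - drop_prob)

def mc_dropout_predict(x, weights, n_samples=200):
    """Monte Carlo dropout: average many stochastic passes for the
    prediction; use their spread as a per-output uncertainty map."""
    samples = np.stack([stochastic_forward(x, weights) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.ones((1, 4))
weights = rng.random((4, 3))
mean_pred, uncertainty = mc_dropout_predict(x, weights)
```

The mean converges to the deterministic output as the sample count grows, while the standard deviation gives the uncertainty signal the abstract reports alongside the forecast.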
9. Morphological Change Forecasting for Prostate Glands using Feature-based Registration and Kernel Density Extrapolation
- Author
-
Qianye Yang, Francesco Giganti, Yipeng Hu, Vasilis Stavrinides, Tom Vercauteren, Caroline M. Moore, Matthew J. Clarkson, Yunguan Fu, Nooshin Ghavami, and Dean C. Barratt
- Subjects
Computer science, Computer Vision and Pattern Recognition, Kernel density estimation, Boundary smoothness, Prostate cancer, Prostate, Prostate disease, Medical diagnosis, Deep learning, Image and Video Processing, Pattern recognition, Artificial intelligence, Prostate gland - Abstract
Organ morphology is a key indicator for prostate disease diagnosis and prognosis. For instance, in longitudinal studies of prostate cancer patients under active surveillance, the volume, boundary smoothness, and their changes are closely monitored on time-series MR image data. In this paper, we describe a new framework for forecasting prostate morphological changes, as the ability to detect such changes earlier than is currently possible may enable timely treatment or avoid unnecessary confirmatory biopsies. In this work, an efficient feature-based MR image registration is first developed to align delineated prostate gland capsules and quantify the morphological changes using the inferred dense displacement fields (DDFs). We then propose to use kernel density estimation (KDE) of the probability density of the DDF-represented "future morphology changes", between current and future time points, before the future data become available. The KDE utilises a novel distance function that takes into account morphology, stage-of-progression and duration-of-change, which are considered factors in such subject-specific forecasting. We validate the proposed approach on image masks unseen during registration network training, without using any data acquired at the future target time points. The experimental results are presented on a longitudinal data set with 331 images from 73 patients, yielding an average Dice score of 0.865 on a holdout set between the ground truth and the image masks warped by the KDE-predicted DDFs. Accepted by ISBI 2021.
- Published
- 2021
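A minimal sketch of the kernel density extrapolation step in entry 9: predict an unseen displacement field as a kernel-weighted average of previously observed DDFs, with weights derived from a distance over subject descriptors. The Gaussian kernel, the 2-D descriptor layout, and the bandwidth are assumptions; the paper's actual distance function also encodes stage-of-progression and duration-of-change.

```python
import numpy as np

def kernel_weights(query, cases, bandwidth=1.0):
    # Gaussian kernel over a per-subject descriptor vector
    # (standing in for morphology / progression features).
    d2 = ((cases - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()

def forecast_ddf(query_descriptor, case_descriptors, case_ddfs):
    """Predict a dense displacement field (DDF) for an unseen future
    time point as a kernel-weighted average of observed DDFs."""
    w = kernel_weights(query_descriptor, case_descriptors)
    # Contract the case axis: sum_i w[i] * case_ddfs[i].
    return np.tensordot(w, case_ddfs, axes=1)

# Three observed cases: two near the query, one far away.
descriptors = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
ddfs = np.stack([np.full((4, 4, 2), v) for v in (0.1, 0.3, 2.0)])
pred = forecast_ddf(np.array([0.5, 0.0]), descriptors, ddfs)
```

The far-away case gets negligible weight, so the forecast sits near the average of the two similar cases, which is the subject-specific behaviour the distance function is designed to produce.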
10. Adaptable Image Quality Assessment Using Meta-Reinforcement Learning of Task Amenability
- Author
-
Yipeng Hu, Qianye Yang, J. Alison Noble, Dean C. Barratt, Zachary M. C. Baum, Geoffrey A. Sonn, Vasilis Stavrinides, Richard E. Fan, Shaheer U. Saeed, Mirabela Rusu, and Yunguan Fu
- Subjects
Contextual image classification, Artificial neural network, Image quality, Computer science, Deep learning, Machine learning, Reinforcement learning, Markov decision process, Artificial intelligence, Transfer learning - Abstract
The performance of many medical image analysis tasks is strongly associated with image data quality. When developing modern deep learning algorithms, rather than relying on subjective (human-based) image quality assessment (IQA), task amenability potentially provides an objective measure of task-specific image quality. To predict task amenability, an IQA agent is trained using reinforcement learning (RL) with a simultaneously optimised task predictor, such as a classification or segmentation neural network. In this work, we develop transfer learning or adaptation strategies to increase the adaptability of both the IQA agent and the task predictor so that they are less dependent on high-quality, expert-labelled training data. The proposed transfer learning strategy re-formulates the original RL problem for task amenability in a meta-reinforcement learning (meta-RL) framework. The resulting algorithm facilitates efficient adaptation of the agent to different definitions of image quality, each with its own Markov decision process environment, including different images, labels and an adaptable task predictor. Our work demonstrates that IQA agents pre-trained on non-expert task labels can be adapted to predict task amenability as defined by expert task labels, using only a small set of expert labels. Using 6644 clinical ultrasound images from 249 prostate cancer patients, our results for image classification and segmentation tasks show that the proposed IQA method can be adapted using data with as few as 19.7% and 29.6% expert-reviewed consensus labels, respectively, and still achieve comparable IQA and task performance, which would otherwise require a training dataset with 100% expert labels.
- Published
- 2021
11. Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network
- Author
-
Dean C. Barratt, Nina Montaña-Brown, Stephen P. Pereira, Yipeng Hu, Alexander Grimwood, Zachary M. C. Baum, Gavin Johnson, Ester Bonmati, João Ramalhinho, and Matthew J. Clarkson
- Subjects
Endoscopic ultrasound, Adversarial network, Computer science, Deep learning, Pattern recognition, Synthetic data, Image synthesis, Artificial intelligence - Abstract
Endoscopic ultrasound (EUS) is a challenging procedure that requires skill in both endoscopy and ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to automatically classify ultrasound images with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, is also difficult to produce retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data that can be used as a data augmentation strategy when EUS data are scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices, extracted in a manner such that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data obtained from CT data impose only a minor classification accuracy penalty and may help generalization to new, unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io.
- Published
- 2021
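The cycle-consistency constraint behind the unpaired CT-to-sEUS translation in entry 11 reduces to a simple penalty: translating a CT slice to the EUS domain and back should recover the original slice. A toy version follows, with stand-in generators (the real ones are trained CNNs; these lambdas are purely illustrative).

```python
import numpy as np

def cycle_consistency_loss(x, forward, backward):
    """L1 cycle loss ||backward(forward(x)) - x||_1: the term that keeps
    unpaired domain translation anatomically faithful to the input."""
    return float(np.abs(backward(forward(x)) - x).mean())

# A toy "CT slice" as a smooth intensity ramp.
ct_slice = np.linspace(0.0, 1.0, 16).reshape(4, 4)

# An invertible generator pair gives (near-)zero cycle loss ...
perfect = cycle_consistency_loss(ct_slice, lambda x: 2 * x + 1,
                                 lambda x: (x - 1) / 2)
# ... while a lossy forward map (quantisation) cannot be undone.
lossy = cycle_consistency_loss(ct_slice, np.round, lambda x: x)
```

In CycleGAN-style training this loss is added to the adversarial losses of both generators, penalising any translation that discards information needed to reconstruct the source image.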
12. Real-time multimodal image registration with partial intraoperative point-set data
- Author
-
Dean C. Barratt, Zachary M. C. Baum, and Yipeng Hu
- Subjects
Male, Computer science, Computation, Feature extraction, Health Informatics, Image Processing, Computer-Assisted, Humans, Point-set registration, Computer vision, Transformer, Image-guided interventions, Ultrasonography, Prostate, Prostatic Neoplasms, Magnetic Resonance Imaging, Prostate cancer, Medical image registration, Artificial intelligence, Algorithms - Abstract
Highlights:
- Network predicts non-rigid point displacements for MR-TRUS prostate volume registration.
- Adopts "model-free" deformation via data-driven learning without heuristic constraints.
- Network architecture accepts a variable number of points in training or at inference.
- Registration accuracy on sparse data is similar to complete data in MR-TRUS registration.
We present Free Point Transformer (FPT), a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested, and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
- Published
- 2021
13. Controlling False Positive/Negative Rates for Deep-Learning-Based Prostate Cancer Detection on Multiparametric MR Images
- Author
-
Rachael Rodell, Fernando J. Bianco, Zhe Min, Yipeng Hu, Qianye Yang, Wen Yan, and Dean C. Barratt
- Subjects
Artificial neural network, Computer science, Deep learning, Cancer, Magnetic resonance imaging, Pattern recognition, Computer-assisted diagnosis (CAD), Object detection, Prostate cancer, Artificial intelligence - Abstract
Prostate cancer (PCa) is one of the leading causes of death for men worldwide. Multi-parametric magnetic resonance (mpMR) imaging has emerged as a non-invasive diagnostic tool for detecting and localising prostate tumours by specialised radiologists. These radiological examinations, for example, for differentiating malignant lesions from benign prostatic hyperplasia in transition zones and for defining the boundaries of clinically significant cancer, remain challenging and highly skill-and-experience-dependent. We first investigate experimental results in developing object detection neural networks that are trained to predict the radiological assessment, using these high-variance labels. We further argue that such a computer-assisted diagnosis (CAD) system needs to have the ability to control the false-positive rate (FPR) or false-negative rate (FNR), in order to be usefully deployed in a clinical workflow, informing clinical decisions without further human intervention. However, training detection networks typically requires a multi-tasking loss, which is not trivial to be adapted for a direct control of FPR/FNR. This work in turn proposes a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function, to manage the lesion- and slice-level costs, respectively. Our experiments based on 290 clinical patients concludes that 1) The lesion-level FNR was effectively reduced from 0.19 to 0.10 and the lesion-level FPR was reduced from 1.03 to 0.66 by changing the lesion-level cost; 2) The slice-level FNR was reduced from 0.19 to 0.00 by taking into account the slice-level cost; (3) Both lesion-level and slice-level FNRs were reduced with lower FP/FPR by changing the lesion-level or slice-level costs, compared with post-training threshold adjustment using networks without the proposed cost-aware training. 
For the PCa application of interest, the proposed CAD system can substantially reduce the FNR while largely preserving the FPR, and is therefore considered suitable for PCa screening applications.
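The cost-sensitive idea described above can be illustrated with a minimal sketch: a binary cross-entropy in which the false-negative (missed-lesion) and false-positive terms carry separate costs. This is an assumed, simplified form for illustration only, not the paper's multi-task detection loss; the function name and cost values are hypothetical.

```python
import numpy as np

def cost_sensitive_bce(p, y, fn_cost=1.0, fp_cost=1.0, eps=1e-7):
    """Binary cross-entropy with separate costs for false negatives (missed
    lesions, the y=1 term) and false positives (false alarms, the y=0 term).
    Raising fn_cost pushes a detector trained with this loss towards a lower
    false-negative rate, at the expense of more false positives."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-fn_cost * y * np.log(p)
                         - fp_cost * (1 - y) * np.log(1 - p)))

p = np.array([0.9, 0.2, 0.6])   # predicted lesion probabilities
y = np.array([1.0, 0.0, 1.0])   # reference labels
base = cost_sensitive_bce(p, y, fn_cost=1.0)
fn_weighted = cost_sensitive_bce(p, y, fn_cost=3.0)
assert fn_weighted > base        # missed-lesion errors now dominate the loss
```

Unlike post-training threshold adjustment, the cost enters the training objective itself, which is the distinction the abstract's third result draws.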
- Published
- 2021
14. PSA density and clinical outcome in MRI-based active surveillance for prostate cancer: A joint longitudinal-survival analysis
- Author
-
Shonit Punwani, Hayley C. Whitaker, D. Danks, Geoffrey A. Sonn, Vasilis Stavrinides, Clare Allen, Caroline M. Moore, Dean C. Barratt, Bruce J. Trock, Mark Emberton, Georgios Papageorgiou, Francesco Giganti, Alex Kirkham, Nora Pashayan, and Alex Freeman
- Subjects
Oncology ,medicine.medical_specialty ,Prostate cancer ,business.industry ,Urology ,Internal medicine ,Psa density ,medicine ,medicine.disease ,business ,Outcome (game theory) ,Survival analysis - Published
- 2021
15. Multimodality Biomedical Image Registration Using Free Point Transformer Networks
- Author
-
Yipeng Hu, Zachary M. C. Baum, and Dean C. Barratt
- Subjects
business.industry ,Computer science ,Variable size ,Deep learning ,Physics::Medical Physics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Biomedical image ,Point set registration ,030218 nuclear medicine & medical imaging ,Multimodality ,Extractor ,03 medical and health sciences ,0302 clinical medicine ,Spatial transformation ,030220 oncology & carcinogenesis ,Multilayer perceptron ,Computer vision ,Artificial intelligence ,business - Abstract
We describe a point-set registration algorithm based on a novel free point transformer (FPT) network, designed for points extracted from multimodal biomedical images for registration tasks, such as those frequently encountered in ultrasound-guided interventional procedures. FPT is constructed with a global feature extractor which accepts unordered source and target point-sets of variable size. The extracted features are conditioned by a shared multilayer perceptron point transformer module to predict a displacement vector for each source point, transforming it into the target space. The point transformer module assumes no vicinity or smoothness in predicting spatial transformation and, together with the global feature extractor, is trained in a data-driven fashion with an unsupervised loss function. In a multimodal registration task using prostate MR and sparsely acquired ultrasound images, FPT yields comparable or improved results over other rigid and non-rigid registration methods. This demonstrates the versatility of FPT to learn registration directly from real, clinical training data and to generalize to a challenging task, such as the interventional application presented.
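The displacement-prediction step of FPT can be caricatured in a few lines: each source point, concatenated with a global feature summarising both point sets, passes through the same small perceptron to yield a per-point displacement with no smoothness prior. The weights, sizes and feature dimension below are illustrative stand-ins, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp_displacements(points, global_feat, w1, w2):
    """Toy analogue of FPT's point transformer module: every source point is
    concatenated with one global feature vector and mapped by the SAME
    two-layer perceptron to a 3-D displacement, independently per point."""
    n = points.shape[0]
    x = np.concatenate([points, np.tile(global_feat, (n, 1))], axis=1)
    h = np.maximum(x @ w1, 0.0)   # ReLU hidden layer
    return h @ w2                 # one displacement vector per source point

src = rng.normal(size=(5, 3))     # unordered source points (variable count)
gfeat = rng.normal(size=(8,))     # global feature from source + target sets
w1 = rng.normal(size=(3 + 8, 16))
w2 = rng.normal(size=(16, 3))
moved = src + shared_mlp_displacements(src, gfeat, w1, w2)
assert moved.shape == src.shape
```

Because the perceptron is shared across points and the global feature absorbs the set sizes, the same module handles source sets of any cardinality, which is what lets FPT accept sparsely acquired ultrasound points.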
- Published
- 2020
16. Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification
- Author
-
Emma J. Harris, Helen McNair, Yipeng Hu, Alexander Grimwood, Dean C. Barratt, and Ester Bonmati
- Subjects
Contextual image classification ,Computer science ,business.industry ,medicine.medical_treatment ,Ultrasound guided ,030218 nuclear medicine & medical imaging ,Set (abstract data type) ,Radiation therapy ,03 medical and health sciences ,Range (mathematics) ,0302 clinical medicine ,Ultrasound probe ,medicine.anatomical_structure ,Prostate ,030220 oncology & carcinogenesis ,medicine ,Prostate radiotherapy ,Computer vision ,External beam radiotherapy ,Artificial intelligence ,Transperineal ultrasound ,business - Abstract
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up. Probe placement and ultrasound image interpretation are manual tasks contingent upon operator skill, leading to interoperator uncertainties that degrade radiotherapy precision. We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data. Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time. The first set identifies relevant prostate anatomy visible in the field of view using the classes: outside prostate, prostate periphery, prostate centre. The second set recommends a probe angular adjustment to achieve alignment between the probe and prostate centre with the classes: move left, move right, stop. The algorithm was trained and tested on 9,743 clinical images from 61 treatment sessions across 32 patients. We evaluated classification accuracy against class labels derived from three experienced observers at 2/3 and 3/3 agreement thresholds. For images with unanimous consensus between observers, anatomical classification accuracy was 97.2% and probe adjustment accuracy was 94.9%. The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from angle labels with full observer consensus, comparable to the 2.8° (2.6°) mean interobserver range. We propose such an algorithm could assist radiotherapy practitioners with limited experience of ultrasound image interpretation by providing effective real-time feedback during patient set-up.
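The two prediction sets described above amount to a multi-task head: one shared feature vector feeds two independent classifiers. The sketch below shows only that output structure with random illustrative weights; the recurrent image-and-position encoder, class definitions and training are the paper's, everything else here is assumed.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def two_head_predict(shared_feat, w_anatomy, w_adjust):
    """Toy multi-task head: a shared feature (e.g. from a recurrent
    image + probe-position encoder) feeds two classifiers, one over visible
    anatomy (outside prostate / periphery / centre) and one over the
    recommended probe adjustment (move left / move right / stop)."""
    return softmax(shared_feat @ w_anatomy), softmax(shared_feat @ w_adjust)

rng = np.random.default_rng(1)
feat = rng.normal(size=(32,))
p_anatomy, p_adjust = two_head_predict(
    feat, rng.normal(size=(32, 3)), rng.normal(size=(32, 3)))
assert np.isclose(p_anatomy.sum(), 1.0) and np.isclose(p_adjust.sum(), 1.0)
```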
- Published
- 2020
17. Prostate Motion Modelling Using Biomechanically-Trained Deep Neural Networks on Unstructured Nodes
- Author
-
Yipeng Hu, Mark Emberton, Dean C. Barratt, Mark A. Pinnock, Zeike A. Taylor, and Shaheer U. Saeed
- Subjects
Yield (engineering) ,020205 medical informatics ,Computer science ,business.industry ,Feature vector ,Deep learning ,Pattern recognition ,02 engineering and technology ,Displacement (vector) ,Finite element method ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,Point (geometry) ,Segmentation ,Artificial intelligence ,business - Abstract
In this paper, we propose to train deep neural networks with biomechanical simulations to predict the prostate motion encountered during ultrasound-guided interventions. In this application, unstructured points are sampled from segmented pre-operative MR images to represent the anatomical regions of interest. The point sets are then assigned point-specific material properties and displacement loads, forming the unordered input feature vectors. An adapted PointNet can be trained to predict the nodal displacements, using finite element (FE) simulations as ground-truth data. Furthermore, a versatile bootstrap aggregating mechanism is validated to accommodate the variable number of feature vectors arising from different patient geometries, comprising training-time bootstrap sampling and model-averaging inference. This results in a fast and accurate approximation to the FE solutions without requiring subject-specific solid meshing. Based on 160,000 nonlinear FE simulations on clinical imaging data from 320 patients, we demonstrate that the trained networks generalise to unstructured point sets sampled directly from holdout patient segmentations, yielding near-real-time inference and an expected error of 0.017 mm in the predicted nodal displacements.
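The model-averaging half of the bootstrap mechanism can be sketched as follows: a fixed-input-size predictor is applied to many bootstrap subsets of an arbitrary-size point set, and the per-point displacement predictions are averaged. This is a deliberately simplified assumption of how such aggregation could work, not the paper's implementation; the predictor here is a stand-in lambda.

```python
import numpy as np

def bagged_displacements(points, predict, n_sample=4, n_models=10, seed=0):
    """Average per-point displacement predictions over many random subsets,
    so that a predictor with a fixed input size can serve a point set of
    any size (illustrative sketch of model-averaging inference)."""
    rng = np.random.default_rng(seed)
    sums = np.zeros((points.shape[0], 3))
    counts = np.zeros(points.shape[0])
    for _ in range(n_models):
        idx = rng.choice(points.shape[0], size=n_sample, replace=False)
        disp = predict(points[idx])          # (n_sample, 3) displacements
        np.add.at(sums, idx, disp)           # accumulate per-point predictions
        np.add.at(counts, idx, 1)
    counts[counts == 0] = 1                  # never-sampled points stay at zero
    return sums / counts[:, None]

pts = np.arange(21.0).reshape(7, 3)          # 7 unstructured nodes
disp = bagged_displacements(pts, lambda p: 0.1 * p)
assert disp.shape == (7, 3)
```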
- Published
- 2020
18. DeepReg: a deep learning toolkit for medical image registration
- Author
-
Yipeng Hu, Shaheer U. Saeed, Stefano B. Blumberg, Qianye Yang, Juan Eugenio Iglesias, Zhe Min, Dean C. Barratt, Zachary M. C. Baum, Adrià Casamitjana, Ester Bonmati, Yunguan Fu, Nina Montaña Brown, Alexander Grimwood, Rémi Delaunay, Tom Vercauteren, Daniel C. Alexander, and Matthew J. Clarkson
- Subjects
FOS: Computer and information sciences ,Image fusion ,Artificial neural network ,Computer science ,business.industry ,Deep learning ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image registration ,Electrical Engineering and Systems Science - Image and Video Processing ,Python (programming language) ,030218 nuclear medicine & medical imaging ,3. Good health ,03 medical and health sciences ,0302 clinical medicine ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,business ,computer ,030217 neurology & neurosurgery ,computer.programming_language - Abstract
DeepReg (https://github.com/DeepRegNet/DeepReg) is a community-supported open-source toolkit for research and education in medical image registration using deep learning. (Accepted in The Journal of Open Source Software, JOSS.)
- Published
- 2020
- Full Text
- View/download PDF
19. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans
- Author
-
T G W Boers, F. van der Heijden, Dean C. Barratt, Henkjan J. Huisman, Eli Gibson, Jasenko Krdzalic, J J Hermans, Yipeng Hu, Ester Bonmati, Digital Society Institute, and Robotics and Mechatronics
- Subjects
Computer science ,Interactive 3d ,pancreatic cancer ,UT-Hybrid-D ,Computed tomography ,U-net ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Deep Learning ,Imaging, Three-Dimensional ,All institutes and research themes of the Radboud University Medical Center ,Medical imaging ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Segmentation ,Set (psychology) ,Pancreas ,Modality (human–computer interaction) ,Radiological and Ultrasound Technology ,medicine.diagnostic_test ,business.industry ,Deep learning ,22/2 OA procedure ,Pattern recognition ,interactive segmentation ,030220 oncology & carcinogenesis ,Urological cancers Radboud Institute for Health Sciences [Radboudumc 15] ,Artificial intelligence ,business ,Tomography, X-Ray Computed ,Rare cancers Radboud Institute for Health Sciences [Radboudumc 9] - Abstract
The increasing incidence of pancreatic cancer will make it the second-deadliest cancer by 2030. Imaging-based early diagnosis and image-guided treatment are emerging potential solutions. Artificial intelligence (AI) can help provide and improve widespread diagnostic expertise and accurate interventional image interpretation. Accurate segmentation of the pancreas is essential both for creating annotated data sets to train AI and for computer-assisted interventional guidance. Automated deep-learning segmentation performance on pancreas computed tomography (CT) imaging is low due to poor grey-value contrast and complex anatomy. A recent interactive deep-learning segmentation framework for brain CT, which strongly improved initial automated segmentations with minimal user input, seemed a promising solution; however, it yielded no satisfactory results for pancreas CT, possibly owing to a sub-optimal neural network architecture. We hypothesise that a state-of-the-art U-net architecture is better suited, because it can produce a better initial segmentation and can likely be extended to a similar interactive approach. We implemented the existing interactive method, iFCN, and developed an interactive version of U-net that we call iUnet. iUnet is fully trained to produce the best possible initial segmentation; in interactive mode, a partial set of its layers is additionally trained on user-generated scribbles. We compared the initial segmentation performance of iFCN and iUnet on a 100-scan CT dataset using Dice similarity coefficient analysis, and assessed the performance gain in interactive use with three observers in terms of segmentation quality and time. Average automated baseline performance was 78% (iUnet) versus 72% (iFCN). Manual and semi-automatic segmentation performance was 87% in 15 min for manual segmentation and 86% in 8 min for iUnet.
We conclude that iUnet provides a better baseline than iFCN and, in the case of pancreas CT, can reach expert manual performance significantly faster than manual segmentation. Our novel iUnet architecture is modality- and organ-agnostic and is a potential solution for semi-automatic medical image segmentation in general.
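The percentages quoted above are Dice similarity coefficients, the standard overlap measure for comparing a predicted segmentation mask against a reference. A minimal implementation, for reference:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = disjoint."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])   # 3 predicted foreground voxels
ref  = np.array([[1, 0, 0], [0, 1, 1]])   # 3 reference voxels, 2 shared
assert abs(dice(pred, ref) - 2 * 2 / (3 + 3)) < 1e-9   # ≈ 0.667
```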
- Published
- 2020
20. Longitudinal Image Registration with Temporal-Order and Subject-Specificity Discrimination
- Author
-
Yunguan Fu, Qianye Yang, Yipeng Hu, Tom Vercauteren, J. Alison Noble, Dean C. Barratt, Nooshin Ghavami, Qingchao Chen, and Francesco Giganti
- Subjects
FOS: Computer and information sciences ,Training set ,Similarity (geometry) ,business.industry ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,Computer Science - Computer Vision and Pattern Recognition ,Image registration ,Pattern recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,medicine.disease ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,FOS: Electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
Morphological analysis of longitudinal MR images plays a key role in monitoring disease progression for prostate cancer patients, who are placed under an active surveillance program. In this paper, we describe a learning-based image registration algorithm to quantify changes on regions of interest between a pair of images from the same patient, acquired at two different time points. Combining intensity-based similarity and gland segmentation as weak supervision, the population-data-trained registration networks significantly lowered the target registration errors (TREs) on holdout patient data, compared with those before registration and those from an iterative registration algorithm. Furthermore, this work provides a quantitative analysis of several longitudinal-data-sampling strategies and, in turn, we propose a novel regularisation method based on the maximum mean discrepancy between differently-sampled training image pairs. Based on 216 3D MR images from 86 patients, we report a mean TRE of 5.6 mm and show statistically significant differences between the different training data sampling strategies. (Accepted at MICCAI 2020.)
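The proposed regulariser is built on the maximum mean discrepancy (MMD), a kernel two-sample statistic that grows as two distributions move apart. The sketch below computes the standard biased squared-MMD estimate with an RBF kernel on toy feature vectors; how the paper embeds this into its training loss is not reproduced here.

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Biased squared maximum mean discrepancy with an RBF kernel:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd_rbf(rng.normal(size=(50, 2)), rng.normal(size=(50, 2)))
shifted = mmd_rbf(rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 2.0)
assert shifted > same   # distributions further apart -> larger MMD
```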
- Published
- 2020
21. Immunohistochemical biomarker validation in highly selective needle biopsy microarrays derived from mpMRI-characterized prostates
- Author
-
Jonathan Kay, Charles Jameson, Tim Briggs, Neil McCartan, Hashim U. Ahmed, Jonathan Olivier, Susan C. Charman, Vasilis Stavrinides, Abi Kanthabalan, Lina M. Carmona Echeverria, Mark Emberton, Dean C. Barratt, Hayley Pye, Yipeng Hu, Zeba Ahmed, Manit Arya, Jan van der Muelen, Caroline M. Moore, Alex Freeman, James Gelister, David J. Hawkes, Shonit Punwani, Susan Heavey, Lucy A.M. Simmons, Hayley C. Whitaker, and Wellcome Trust
- Subjects
Image-Guided Biopsy ,Male ,medicine.medical_specialty ,Urology ,030232 urology & nephrology ,Endocrinology & Metabolism ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Prostate ,Biopsy ,Biomarkers, Tumor ,medicine ,Humans ,DIAGNOSTIC-ACCURACY ,1112 Oncology and Carcinogenesis ,MSMB ,Oncology & Carcinogenesis ,Science & Technology ,tissue microarrays ,Tissue microarray ,medicine.diagnostic_test ,business.industry ,CONSTRUCTING TISSUE MICROARRAYS ,Prostatic Neoplasms ,Cancer ,1103 Clinical Sciences ,Urology & Nephrology ,prostate cancer ,medicine.disease ,CANCER ,Immunohistochemistry ,Magnetic Resonance Imaging ,SPECIMENS ,PATHOLOGY ,medicine.anatomical_structure ,Oncology ,030220 oncology & carcinogenesis ,1114 Paediatrics and Reproductive Medicine ,Cancer biomarkers ,Radiology ,Neoplasm Grading ,business ,Life Sciences & Biomedicine ,MRI - Abstract
Introduction: Diagnosing prostate cancer routinely involves tissue biopsy and, increasingly, image-guided biopsy using multiparametric MRI (mpMRI). Excess tissue after diagnosis can be used for research to improve the diagnostic pathway, and the vertical assembly of prostate needle biopsy cores into tissue microarrays (TMAs) allows the parallel immunohistochemical (IHC) validation of cancer biomarkers in routine diagnostic specimens. However, tissue within a biopsy core is often heterogeneous and cancer is not uniformly present, resulting in needle biopsy TMAs that suffer from highly variable cancer detection rates, which complicate parallel biomarker validation.

Materials and methods: The prostate cores with the highest tumor burden (in terms of Gleason score and/or maximum cancer core length) were obtained from 249 patients in the PICTURE trial who underwent transperineal template prostate mapping (TPM) biopsy at 5 mm intervals, preceded by mpMRI. From each core, 2 mm segments containing tumor or benign tissue (as assessed on H&E pathology) were selected, excised and embedded vertically into a new TMA block. TMA sections were then IHC-stained for the routinely used prostate cancer biomarkers PSA, PSMA, AMACR, p63, and MSMB, and assessed using the h-score method. H-scores in patient-matched malignant and benign tissue were correlated with the Gleason grade of the original core and the MRI Likert score for the sampled prostate area.

Results: A total of 2240 TMA cores were stained and IHC h-scores were assigned to 1790. There was a statistically significant difference in h-scores between patient-matched malignant and adjacent benign tissue that is independent of Likert score. There was no association between the h-scores and Gleason grade or Likert score within either the benign or the malignant group.

Conclusion: The construction of highly selective TMAs from prostate needle biopsy cores is possible.
IHC data obtained through this method are highly reliable and can be correlated with imaging. IHC expression patterns for PSA, PSMA, AMACR, p63, and MSMB are distinct in malignant and adjacent benign tissue but did not correlate with mpMRI Likert score.
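The h-score used above is the standard semi-quantitative IHC measure: the percentages of cells staining weakly (1+), moderately (2+) and strongly (3+) are weighted by intensity, giving a value between 0 and 300. A minimal computation, with illustrative percentages:

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """Standard immunohistochemistry h-score:
    1 x (% cells staining 1+) + 2 x (% 2+) + 3 x (% 3+), range 0-300."""
    assert 0 <= pct_weak + pct_moderate + pct_strong <= 100
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# e.g. 20% weak, 30% moderate, 10% strong staining (illustrative values):
assert h_score(20, 30, 10) == 20 + 60 + 30   # = 110
```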
- Published
- 2018
22. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks
- Author
-
Ester Bonmati, Steve Bandula, Yipeng Hu, Dean C. Barratt, Stephen P. Pereira, Brian R. Davidson, Matthew J. Clarkson, Kurinchi Selvan Gurusamy, Francesco Giganti, and Eli Gibson
- Subjects
Radiography, Abdominal ,medicine.medical_specialty ,Radiography ,Kidney ,Article ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Humans ,Segmentation ,Electrical and Electronic Engineering ,Esophagus ,Radiation treatment planning ,Radiological and Ultrasound Technology ,business.industry ,Gallbladder ,Stomach ,Image segmentation ,Computer Science Applications ,medicine.anatomical_structure ,Radiographic Image Interpretation, Computer-Assisted ,Radiology ,Tomography, X-Ray Computed ,business ,Pancreas ,Digestive System ,Algorithms ,Spleen ,030217 neurology & neurosurgery ,Software - Abstract
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
- Published
- 2018
23. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures
- Author
-
Geri Keane, Yipeng Hu, Ester Bonmati, Stephen P. Pereira, Kurinchi Gurusami, Dean C. Barratt, Matthew J. Clarkson, Laura Uribarri, Brian R. Davidson, and Eli Gibson
- Subjects
Endoscopic ultrasound ,Percentile ,Computer science ,Biomedical Engineering ,Image registration ,Health Informatics ,02 engineering and technology ,Endosonography ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Imaging, Three-Dimensional ,Pancreatectomy ,0302 clinical medicine ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Humans ,Upper gastrointestinal ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Pancreas ,EUS ,Retrospective Studies ,Landmark ,medicine.diagnostic_test ,business.industry ,Ultrasound ,Pancreatic cancer ,General Medicine ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Pancreatic Neoplasms ,Planning ,Surgery, Computer-Assisted ,Feature (computer vision) ,Computer-assisted interventions ,Original Article ,020201 artificial intelligence & image processing ,Surgery ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Tomography, X-Ray Computed ,business - Abstract
Purpose: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method.

Methods: A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provides a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised versus non-optimised planes, on manually segmented CT images with simulated (n = 9) or retrospective clinical (n = 1) EUS landmarks.

Results: The results show a lower 90th-percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p < 0.01).
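The Monte Carlo planning step can be illustrated with a heavily simplified sketch: landmark localisation noise is sampled many times, the induced registration error is estimated each time (here translation-only, so the landmark-noise mean is the translation error), and the 90th percentile of the resulting target errors is reported. This is an assumed toy model of the idea, not the study's simulation.

```python
import numpy as np

def predicted_tre_90(n_landmarks, loc_sd=1.0, n_trials=2000, seed=0):
    """Monte Carlo sketch: with a translation-only registration, the
    translation error induced by i.i.d. landmark localisation noise is the
    mean of that noise; report the 90th percentile of its magnitude.
    More (or better-localised) landmarks lower the predicted TRE."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=loc_sd, size=(n_trials, n_landmarks, 3))
    t_err = noise.mean(axis=1)                  # per-trial translation error
    return float(np.percentile(np.linalg.norm(t_err, axis=1), 90))

assert predicted_tre_90(12) < predicted_tre_90(3)  # more landmarks -> lower TRE
```

Comparing this predicted percentile across candidate imaging planes is, in spirit, how the optimal planes are ranked.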
- Published
- 2018
24. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound‐guided prostate interventions
- Author
-
Hashim U. Ahmed, Paul Martin, Dean C. Barratt, Yipeng Hu, Lianghao Han, Mark Emberton, Ian Donaldson, Rachael Rodell, Barbara Villarini, Ester Bonmati, and Caroline M. Moore
- Subjects
Image-Guided Biopsy ,Male ,Computer science ,030232 urology & nephrology ,Image registration ,Targeted biopsy ,Imaging phantom ,030218 nuclear medicine & medical imaging ,Lesion ,03 medical and health sciences ,Prostate needle biopsy ,Prostate cancer ,0302 clinical medicine ,Prostate ,Image Processing, Computer-Assisted ,medicine ,Humans ,Computer vision ,Ultrasonography ,medicine.diagnostic_test ,Transperineal approach ,business.industry ,Biopsy, Needle ,Ultrasound ,Cancer ,Magnetic resonance imaging ,General Medicine ,medicine.disease ,Magnetic Resonance Imaging ,Ultrasound guided ,3. Good health ,Focal therapy ,medicine.anatomical_structure ,Research Design ,Needle placement ,Artificial intelligence ,medicine.symptom ,business - Abstract
Purpose: Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally-invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. Methods: A set of 9 measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Results: Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0±1.0 mm (mean ± SD), and the overall system instrument targeting error 3.0±1.2 mm. Three needle deployments per target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. Conclusions: The application of a comprehensive, unbiased validation assessment for MR/TRUS-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behaviour of these systems.
- Published
- 2018
25. Development and Phantom Validation of a 3-D-Ultrasound-Guided System for Targeting MRI-Visible Lesions During Transrectal Prostate Biopsy
- Author
-
Shonit Punwani, Yipeng Hu, Caroline M. Moore, Matthew J. Clarkson, Hashim U. Ahmed, Stephen A. Thompson, David J. Hawkes, Lucy A.M. Simmons, Mark Emberton, Dean C. Barratt, Veeru Kasivisvanathan, and Taimur T. Shah
- Subjects
Male ,medicine.medical_specialty ,Prostate biopsy ,030232 urology & nephrology ,Biomedical Engineering ,Image registration ,Magnetic Resonance Imaging, Interventional ,Sensitivity and Specificity ,Article ,Imaging phantom ,Endosonography ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Biopsy ,medicine ,Humans ,Endoscopic Ultrasound-Guided Fine Needle Aspiration ,medicine.diagnostic_test ,Phantoms, Imaging ,business.industry ,Ultrasound ,Prostatic Neoplasms ,Reproducibility of Results ,Magnetic resonance imaging ,Equipment Design ,Image Enhancement ,Equipment Failure Analysis ,Transrectal biopsy ,Radiology ,business ,Nuclear medicine ,Guidance system - Abstract
Objective: Three- and four-dimensional transrectal ultrasound transducers are now available from most major ultrasound equipment manufacturers, but currently are incorporated into only one commercial prostate biopsy guidance system. Such transducers offer the benefits of rapid volumetric imaging, but can cause substantial measurement distortion in electromagnetic tracking sensors, which are commonly used to enable 3-D navigation. In this paper, we describe the design, development, and validation of a 3-D-ultrasound-guided transrectal prostate biopsy system that employs high-accuracy optical tracking to localize the ultrasound probe and prostate targets in 3-D physical space. Methods: The accuracy of the system was validated by evaluating the targeted needle placement error after inserting a biopsy needle to sample planned targets in a phantom using standard 2-D ultrasound guidance versus real-time 3-D guidance provided by the new system. Results: The overall mean needle-segment-to-target distance error was 3.6 ± 4.0 mm and mean needle-to-target distance was 3.2 ± 2.4 mm. Conclusion: A significant increase in needle placement accuracy was observed when using the 3-D guidance system compared with visual targeting of invisible (virtual) lesions using a standard B-mode ultrasound-guided biopsy technique.
- Published
- 2017
26. The PICTURE study: diagnostic accuracy of multiparametric MRI in men requiring a repeat prostate biopsy
- Author
-
Dean C. Barratt, Susan C. Charman, Shonit Punwani, Charles Jameson, Mark Emberton, Abi Kanthabalan, Hashim U. Ahmed, Caroline M. Moore, Yipeng Hu, Navin Ramachandran, Tim Briggs, Lucy A.M. Simmons, Jan van der Meulen, Neil McCartan, David J. Hawkes, Alex Freeman, James Gelister, and Manit Arya
- Subjects
Image-Guided Biopsy ,Male ,Oncology ,Cancer Research ,medicine.medical_specialty ,Prostate biopsy ,Biopsy ,030232 urology & nephrology ,Cohort Studies ,03 medical and health sciences ,Prostate cancer ,ULTRASOUND-GUIDED BIOPSY ,0302 clinical medicine ,Breast cancer ,Prostate ,Internal medicine ,medicine ,Humans ,Prospective Studies ,Oncology & Carcinogenesis ,MAPPING BIOPSY ,Ultrasound, High-Intensity Focused, Transrectal ,Aged ,Cervical cancer ,Science & Technology ,medicine.diagnostic_test ,business.industry ,Prostatic Neoplasms ,MULTI-PARAMETRIC MRI ,Middle Aged ,prostate cancer ,medicine.disease ,CANCER ,Magnetic Resonance Imaging ,3. Good health ,multiparametric magnetic resonance imaging (mpMRI) ,medicine.anatomical_structure ,030220 oncology & carcinogenesis ,Clinical Study ,Ultrasound-Guided Biopsy ,diagnostic accuracy ,Radiology ,business ,Life Sciences & Biomedicine ,1112 Oncology And Carcinogenesis - Abstract
Background: Transrectal prostate biopsy has limited diagnostic accuracy. Prostate Imaging Compared to Transperineal Ultrasound-guided biopsy for significant prostate cancer Risk Evaluation (PICTURE) was a paired-cohort confirmatory study designed to assess the diagnostic accuracy of multiparametric magnetic resonance imaging (mpMRI) in men requiring a repeat biopsy.

Methods: All men underwent 3 T mpMRI and transperineal template prostate mapping biopsies (TTPM biopsies). Multiparametric MRI was reported using Likert scores and radiologists were blinded to the initial biopsies; men were blinded to the mpMRI results. Clinically significant prostate cancer was defined as Gleason ≥ 4+3 and/or a cancer core length ≥ 6 mm.

Results: Two hundred and forty-nine men had both tests, with mean (s.d.) age 62 (7) years, median (IQR) PSA 6.8 ng/ml (4.98–9.50), median (IQR) number of previous biopsies 1 (1–2) and mean (s.d.) gland size 37 ml (15.5). On TTPM biopsies, 103 (41%) had clinically significant prostate cancer. Two hundred and fourteen (86%) had a positive prostate mpMRI using a Likert score ≥ 3; sensitivity was 97.1% (95% confidence interval (CI): 92–99), specificity 21.9% (15.5–29.5), negative predictive value (NPV) 91.4% (76.9–98.1) and positive predictive value (PPV) 46.7% (35.2–47.8). One hundred and twenty-nine (51.8%) had a positive mpMRI using a Likert score ≥ 4; sensitivity was 80.6% (71.6–87.7), specificity 68.5% (60.3–75.9), NPV 83.3% (75.4–89.5) and PPV 64.3% (55.4–72.6).

Conclusions: In men advised to have a repeat prostate biopsy, prostate mpMRI could be used to safely avoid a repeat biopsy, with high sensitivity for clinically significant cancers. However, such a strategy can miss some significant cancers and overdiagnose insignificant cancers, depending on the mpMRI score threshold used to define which men should be biopsied.
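The four diagnostic-accuracy measures reported above all derive from one 2×2 table. As a check, the counts below are back-calculated from the reported Likert ≥ 3 results (103 significant cancers, 214 positive scans, 249 men) and reproduce the published point estimates:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all diseased
        "specificity": tn / (tn + fp),   # true negatives / all non-diseased
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Counts back-calculated from the Likert >= 3 results above:
m = diagnostic_metrics(tp=100, fp=114, fn=3, tn=32)
assert round(m["sensitivity"], 3) == 0.971   # 97.1%
assert round(m["npv"], 3) == 0.914           # 91.4%
```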
- Published
- 2017
27. Distinct immunohistochemical findings for common biomarkers in malignant and adjacent benign prostate: A study on needle biopsy microarrays derived from mpMRI-characterized tissue
- Author
-
Hashim U. Ahmed, Susan Heavey, Jonathan Olivier, Vasilis Stavrinides, James Gelister, Shonit Punwani, N. Mc Cartan, Zeba Ahmed, Jonathan Kay, Yipeng Hu, Hayley C. Whitaker, J van der Meulen, Susan A. Charman, Abi Kanthabalan, Lucy A.M. Simmons, David J. Hawkes, Caroline M. Moore, Tim Briggs, Dean C. Barratt, A. Freeman, Manit Arya, Charles Jameson, Hayley Pye, and M. Emberton
- Subjects
Pathology ,medicine.medical_specialty ,business.industry ,Urology ,Needle biopsy ,medicine ,Immunohistochemistry ,DNA microarray ,business ,Benign prostate - Published
- 2018
28. Conditional Segmentation in Lieu of Image Registration
- Author
-
Yipeng Hu, Mark Emberton, Tom Vercauteren, Dean C. Barratt, J. Alison Noble, and Eli Gibson
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Pixel ,Computer science ,business.industry ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,Computer Science - Computer Vision and Pattern Recognition ,Image registration ,Image segmentation ,Electrical Engineering and Systems Science - Image and Video Processing ,Measure (mathematics) ,Machine Learning (cs.LG) ,030218 nuclear medicine & medical imaging ,Image (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,Displacement field ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer vision ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
Classical pairwise image registration methods search for a spatial transformation that optimises a numerical measure of how well a pair of moving and fixed images are aligned. Current learning-based registration methods have adopted the same paradigm and typically predict, for any new input image pair, dense correspondences in the form of a dense displacement field or parameters of a spatial transformation model. However, in many applications of registration, the spatial transformation itself is only required to propagate points or regions of interest (ROIs). In such cases, detailed pixel- or voxel-level correspondences within or outside of these ROIs often have little clinical value. In this paper, we propose an alternative paradigm in which the location of corresponding image-specific ROIs, defined in one image, within another image is learnt. This results in replacing image registration by a conditional segmentation algorithm, which can build on typical image segmentation networks and their widely-adopted training strategies. Using the registration of 3D MRI and ultrasound images of the prostate as an example to demonstrate this new approach, we report a median target registration error (TRE) of 2.1 mm between the ground-truth ROIs defined on intraoperative ultrasound images and those propagated from the preoperative MR images. Significantly lower (>34%) TREs were obtained using the proposed conditional segmentation compared with those obtained from a previously-proposed spatial-transformation-predicting registration network trained with the same multiple ROI labels for individual image pairs. We conclude this work by using a quantitative bias-variance analysis to provide one explanation of the observed improvement in registration accuracy. Accepted to MICCAI 2019.
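The target registration error reported above is a point distance between corresponding ROIs in the two images. As a minimal illustration (not the paper's evaluation code), TRE between a propagated ROI and its ground truth can be approximated by the distance between the binary-mask centroids:

```python
import numpy as np

def centroid(mask):
    """Centroid of a binary mask, in voxel coordinates."""
    coords = np.argwhere(mask)
    return coords.mean(axis=0)

def centroid_tre(gt_mask, pred_mask, voxel_size_mm=1.0):
    """Euclidean distance (mm) between the centroids of two binary masks."""
    return float(np.linalg.norm((centroid(gt_mask) - centroid(pred_mask)) * voxel_size_mm))

# Two masks whose centroids sit one voxel apart -> TRE of 1.0 mm at 1 mm voxels.
a = np.zeros((5, 5), bool); a[1, 1] = True
b = np.zeros((5, 5), bool); b[1, 2] = True
```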
- Published
- 2019
29. Registration of Untracked 2D Laparoscopic Ultrasound Liver Images to CT Using Content-Based Retrieval and Kinematic Priors
- Author
-
Henry F. J. Tregidgo, Brian R. Davidson, João Ramalhinho, Moustafa Allam, David J. Hawkes, Matthew J. Clarkson, Nikolina Travlou, Dean C. Barratt, and Kurinchi Selvan Gurusamy
- Subjects
Computer science ,business.industry ,Prior probability ,Ultrasound ,Laparoscopic ultrasound ,Computer vision ,Field of view ,Bayesian framework ,Artificial intelligence ,Kinematics ,business ,Resection ,Content based retrieval - Abstract
Laparoscopic Ultrasound (LUS) can enhance the safety of laparoscopic liver resection by providing information on the location of major blood vessels and tumours. Since many tumours are not visible in ultrasound, registration to a pre-operative CT has been proposed as a guidance method. In addition to being multi-modal, this registration problem is greatly affected by the differences in field of view between CT and LUS, and thus requires an accurate initialisation. We propose a novel method of registering smaller field of view slices to a larger volume globally using a Content-based retrieval framework. This problem is under-constrained for a single slice registration, resulting in non-unique solutions. Therefore, we introduce kinematic priors in a Bayesian framework in order to jointly register groups of ultrasound images. Our method then produces an estimate of the most likely sequence of CT images to represent the ultrasound acquisition and does not require tracking information nor an accurate initialisation. We demonstrate the feasibility of this approach in multiple LUS acquisitions taken from three sets of clinical data.
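Jointly registering a sweep of slices under kinematic priors amounts to finding the most probable sequence of CT slices given per-slice retrieval scores. Below is a generic dynamic-programming (Viterbi) sketch of that idea; the retrieval scores and the Gaussian penalty on slice jumps are illustrative assumptions standing in for the paper's Bayesian model.

```python
def most_likely_sequence(retrieval_logp, jump_sigma=1.0):
    """Most probable CT slice index per ultrasound frame.

    retrieval_logp[t][k]: log-probability that US frame t matches CT slice k.
    A Gaussian log-penalty on jumps between consecutive slices acts as the
    kinematic prior (an assumed, illustrative choice).
    """
    n_frames, n_slices = len(retrieval_logp), len(retrieval_logp[0])

    def log_prior(i, j):
        return -((i - j) ** 2) / (2.0 * jump_sigma ** 2)

    best = list(retrieval_logp[0])  # best log-score ending at each slice
    back = []                       # back-pointers for trace-back
    for t in range(1, n_frames):
        ptr, new = [], []
        for k in range(n_slices):
            scores = [best[j] + log_prior(j, k) for j in range(n_slices)]
            jstar = max(range(n_slices), key=lambda j: scores[j])
            ptr.append(jstar)
            new.append(scores[jstar] + retrieval_logp[t][k])
        best = new
        back.append(ptr)

    # Trace back the optimal path.
    k = max(range(n_slices), key=lambda j: best[j])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return list(reversed(path))

# Three frames whose individually best matches already form a smooth sweep:
path = most_likely_sequence([[0.0, -1.0, -9.0],
                             [-9.0, 0.0, -9.0],
                             [-9.0, -1.0, 0.0]])
# -> [0, 1, 2]
```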
- Published
- 2019
30. Technical note: automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network
- Author
-
Hans Peter Dietz, Jan D'hooge, Jan Deprest, Ester Bonmati, Yipeng Hu, Nikhil Sindhwani, Tom Vercauteren, and Dean C. Barratt
- Subjects
Similarity (geometry) ,Pelvic floor ,medicine.diagnostic_test ,Artificial neural network ,Biometrics ,Computer science ,business.industry ,Ultrasound ,Pattern recognition ,Convolutional neural network ,medicine.anatomical_structure ,medicine ,3D ultrasound ,Segmentation ,Artificial intelligence ,business - Abstract
Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics which are of importance for pelvic floor disorder assessment. In this work, we present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a 2D image extracted from a 3D ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalising activation function. SELU has important advantages such as being parameter-free and mini-batch independent. A dataset with 91 images from 35 patients, all labelled by three operators, is used for training and evaluation in a leave-one-patient-out cross-validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams’ index of 1.03), and outperforming a U-Net architecture without the need for batch normalisation. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semi-automatic approach.
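The SELU activation referred to above has a closed form with fixed self-normalising constants (from Klambauer et al.'s derivation); a plain-Python sketch, not the network code used in the paper:

```python
import math

# Standard SELU constants (to full precision, from the original derivation).
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit: scaled identity for x > 0,
    scaled exponential saturation for x <= 0."""
    return SELU_SCALE * (x if x > 0 else SELU_ALPHA * (math.exp(x) - 1.0))
```

Being parameter-free, the same constants are used at every layer; the saturation value for very negative inputs approaches -SELU_SCALE * SELU_ALPHA ≈ -1.758.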
- Published
- 2018
31. Automatic slice segmentation of intraoperative transrectal ultrasound images using convolutional neural networks
- Author
-
Yipeng Hu, Eli Gibson, Ester Bonmati, Caroline M. Moore, Dean C. Barratt, Nooshin Ghavami, and Rachel Rodell
- Subjects
Prostate biopsy ,medicine.diagnostic_test ,business.industry ,Computer science ,Deep learning ,Image registration ,urologic and male genital diseases ,medicine.disease ,Convolutional neural network ,Cross-validation ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,030220 oncology & carcinogenesis ,medicine ,Computer vision ,Segmentation ,Artificial intelligence ,business - Abstract
This paper, originally published on 12 March 2018, was replaced with a corrected/revised version on 1 June 2018. Clinically important targets for ultrasound-guided prostate biopsy and prostate cancer focal therapy can be defined on MRI. However, localizing these targets on transrectal ultrasound (TRUS) remains challenging. Automatic segmentation of the prostate on intraoperative TRUS images is an important step towards automating most MRI-TRUS image registration workflows so that they become more acceptable in clinical practice. In this paper, we propose a deep learning method using convolutional neural networks (CNNs) for automatic prostate segmentation in 2D TRUS slices and 3D TRUS volumes. The method was evaluated on a clinical cohort of 110 patients who underwent TRUS-guided targeted biopsy. Segmentation accuracy was measured by comparison to manual prostate segmentation in 2D on 4055 TRUS images and in 3D on the corresponding 110 volumes, in a 10-fold patient-level cross-validation. The proposed method achieved a mean 2D Dice similarity coefficient (DSC) of 0.91 ± 0.12 and a mean absolute boundary segmentation error of 1.23 ± 1.46 mm. Dice scores (0.91 ± 0.04) were also calculated for 3D volumes on the patient level. These results suggest a promising approach to aid a wide range of TRUS-guided prostate cancer procedures needing multimodality data fusion.
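The Dice similarity coefficient used as the accuracy metric above is twice the intersection of the two masks divided by the sum of their sizes; a minimal NumPy version for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|),
    for binary masks; returns 1.0 for two empty masks by convention."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Half-overlapping masks: intersection 1, sizes 2 + 2 -> DSC = 0.5.
score = dice([1, 1, 0, 0], [0, 1, 1, 0])
# -> 0.5
```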
- Published
- 2018
32. A pre-operative planning framework for global registration of laparoscopic ultrasound to CT images
- Author
-
Maria Robu, Kurinchi Selvan Gurusamy, Brian R. Davidson, David J. Hawkes, João Ramalhinho, Matthew J. Clarkson, Stephen A. Thompson, and Dean C. Barratt
- Subjects
Computer science ,Swine ,Biomedical Engineering ,Health Informatics ,Surgical planning ,Feature-based registration ,Global registration ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Laparoscopic ultrasound ,Animals ,Hepatectomy ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Reliability (statistics) ,Ultrasonography ,Rigid registration ,business.industry ,Whole liver ,Liver Neoplasms ,Reproducibility of Results ,Usability ,General Medicine ,Branching points ,Computer Graphics and Computer-Aided Design ,Pre operative ,3. Good health ,Computer Science Applications ,Tree (data structure) ,Liver ,030220 oncology & carcinogenesis ,Surgery ,Laparoscopy ,Original Article ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Tomography, X-Ray Computed - Abstract
Purpose Laparoscopic ultrasound (LUS) enhances the safety of laparoscopic liver resection by enabling real-time imaging of internal structures such as vessels. However, LUS probes can be difficult to use, and many tumours are iso-echoic and hence are not visible. Registration of LUS to a pre-operative CT or MR scan has been proposed as a method of image guidance. However, the field of view of the probe is very small compared to the whole liver, making the registration task challenging and dependent on a very accurate initialisation. Methods We propose the use of a subject-specific planning framework that provides information on the anatomical liver regions from which it is possible to acquire vascular data that is unique enough for a globally optimal initial registration. Vessel-based rigid registration on different areas of the pre-operative CT vascular tree is used in order to evaluate predicted accuracy and reliability. Results The planning framework is tested on one porcine subject from which we have taken 5 independent sweeps of LUS data from different sections of the liver. Target registration error of vessel branching points was used to measure accuracy. Global registration based on vessel centrelines is applied to the 5 datasets. In 3 out of 5 cases registration is successful and in agreement with the planning. Further tests with a CT scan under abdominal insufflation show that the framework can provide valuable information in all of the 5 cases. Conclusions We have introduced a planning framework that can guide the surgeon on how much LUS data to collect in order to provide a reliable globally unique registration without the need for an initial manual alignment. This could potentially improve the usability of these methods in clinic.
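Vessel-based rigid registration ultimately reduces to least-squares rigid alignment of corresponding 3D points, such as matched branching points. A generic Kabsch/SVD sketch of that building block (not the paper's centreline-matching algorithm) is:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment of paired 3D points (Kabsch method).
    Returns R (3x3) and t (3,) minimising ||src @ R.T + t - dst||."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation + translation applied to five landmarks.
pts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R0 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.]])
dst = pts @ R0.T + np.array([1., 2, 3])
R, t = rigid_align(pts, dst)
```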
- Published
- 2018
33. Inter-site Variability in Prostate Segmentation Accuracy Using Deep Learning
- Author
-
Dean C. Barratt, Caroline M. Moore, Hashim U. Ahmed, Henkjan J. Huisman, Mark Emberton, Yipeng Hu, Eli Gibson, and Nooshin Ghavami
- Subjects
Training set ,Computer science ,business.industry ,Deep learning ,education ,Pattern recognition ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Medical imaging ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Prostate segmentation - Abstract
Deep-learning-based segmentation tools have yielded higher reported segmentation accuracies for many medical imaging applications. However, inter-site variability in image properties can challenge the translation of these tools to data from ‘unseen’ sites not included in the training data. This study quantifies the impact of inter-site variability on the accuracy of deep-learning-based segmentations of the prostate from magnetic resonance (MR) images, and evaluates two strategies for mitigating the reduced accuracy for data from unseen sites: training on multi-site data and training with limited additional data from the unseen site. Using 376 T2-weighted prostate MR images from six sites, we compare the segmentation accuracy (Dice score and boundary distance) of three deep-learning-based networks trained on data from a single site and on various configurations of data from multiple sites. We found that the segmentation accuracy of a single-site network was substantially worse on data from unseen sites than on data from the training site. Training on multi-site data yielded marginally improved accuracy and robustness. However, including as few as 8 subjects from the unseen site, e.g. during commissioning of a new clinical system, yielded substantial improvement (regaining 75% of the difference in Dice score).
- Published
- 2018
34. Adversarial Deformation Regularization for Training Image Registration Neural Networks
- Author
-
Nooshin Ghavami, Ester Bonmati, J. Alison Noble, Tom Vercauteren, Caroline M. Moore, Mark Emberton, Yipeng Hu, Dean C. Barratt, and Eli Gibson
- Subjects
medicine.diagnostic_test ,Artificial neural network ,Computer science ,business.industry ,Image registration ,Magnetic resonance imaging ,Pattern recognition ,medicine.disease ,Regularization (mathematics) ,Convolutional neural network ,Finite element method ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine ,Segmentation ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from the motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures divergence between the predicted- and simulated deformation. The end-to-end trained network enables efficient and fully-automated registration that only requires an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm that is significantly lower than those from several other regularization methods.
- Published
- 2018
35. Biomechanical modeling constrained surface-based image registration for prostate MR guided TRUS biopsy
- Author
-
Wendy J. M. van de Ven, Jelle O. Barentsz, Yipeng Hu, Dean C. Barratt, Henkjan J. Huisman, and Nico Karssemeijer
- Subjects
medicine.medical_specialty ,Prostate biopsy ,medicine.diagnostic_test ,business.industry ,Image registration ,General Medicine ,Volume mesh ,Image segmentation ,Mesh generation ,medicine ,Segmentation ,Polygon mesh ,Radiology ,Thin plate spline ,business ,Biomedical engineering - Abstract
Purpose: Adding magnetic resonance (MR)-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound (US) by using MR-US registration. A common approach is to use surface-based registration. The authors hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular nonrigid surface-based registration method. The authors developed a novel method by extending a nonrigid surface-based registration algorithm with biomechanical finite element (FE) modeling to better predict internal deformations of the prostate. Methods: Data were collected from ten patients and the MR and TRUS images were rigidly registered to anatomically align prostate orientations. The prostate was manually segmented in both images and corresponding surface meshes were generated. Next, a tetrahedral volume mesh was generated from the MR image. Prostate deformations due to the TRUS probe were simulated using the surface displacements as the boundary condition. A three-dimensional thin-plate spline deformation field was calculated by registering the mesh vertices. The target registration errors (TREs) of 35 reference landmarks determined by surface and volume mesh registrations were compared. Results: The median TRE of a surface-based registration with biomechanical regularization was 2.76 (0.81–7.96) mm. This was significantly different than the median TRE of 3.47 (1.05–7.80) mm for regular surface-based registration without biomechanical regularization. Conclusions: Biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to a regular nonrigid surface-based registration algorithm and can help to improve the effectiveness of MR guided TRUS biopsy procedures.
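The thin-plate spline deformation field mentioned above interpolates displacements at mesh vertices using the radial basis U(r) = r² log r plus an affine term. A minimal 2D, single-component sketch (the paper fits a full 3D deformation field from the registered mesh vertices) is:

```python
import numpy as np

def _U(r):
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r * r * np.log(r), 0.0)

def tps_fit(ctrl, vals):
    """Fit a 2D thin-plate spline interpolating scalar `vals` at `ctrl` (n, 2).
    Returns the stacked weights [w_1..w_n, a0, ax, ay]."""
    n = len(ctrl)
    d = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), ctrl])
    L = np.block([[_U(d), P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([vals, np.zeros(3)])
    return np.linalg.solve(L, rhs)

def tps_eval(ctrl, w, pts):
    """Evaluate the fitted spline at query points `pts` (m, 2)."""
    d = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1)
    return _U(d) @ w[:-3] + w[-3] + pts @ w[-2:]

# The spline interpolates the control values exactly.
ctrl = np.array([[0., 0], [1, 0], [0, 1], [1, 1], [0.5, 0.3]])
vals = np.array([0., 1, 2, 3, 1.5])
w = tps_fit(ctrl, vals)
```

One such spline is fitted per displacement component; stacking three gives a dense 3D deformation field.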
- Published
- 2015
36. Focal Therapy: Patients, Interventions, and Outcomes—A Report from a Consensus Meeting
- Author
-
Arnauld Villers, Simon Bott, Neil McCartan, Richard Hindley, Peter A. Pinto, Viktor Berge, Mark Emberton, Thomas J. Polascik, Eric Barret, Hashim U. Ahmed, Lucy A.M. Simmons, Jan van der Meulen, David Bottomley, Caroline M. Moore, Roberto Alonzi, S Willis, Ian Donaldson, Scott E. Eggener, Dean C. Barratt, Tom Leslie, Alec Miners, and Behfar Ehdaie
- Subjects
Male ,medicine.medical_specialty ,Consensus ,Biopsy ,Urology ,Treatment outcome ,Psychological intervention ,Prostate cancer ,Prostate ,London ,Medicine ,Humans ,Intensive care medicine ,Aged ,Organ-sparing treatments ,Science & Technology ,business.industry ,Patient Selection ,Prostate Cancer ,Treatment options ,1103 Clinical Sciences ,Urology & Nephrology ,Middle Aged ,Prostate-Specific Antigen ,medicine.disease ,Magnetic Resonance Imaging ,PROSTATE-CANCER ,3. Good health ,Focal therapy ,Prostate-specific antigen ,Treatment Outcome ,medicine.anatomical_structure ,Consensus development conference ,Prostatic neoplasms ,business ,Life Sciences & Biomedicine - Abstract
Background Focal therapy as a treatment option for localized prostate cancer (PCa) is an increasingly popular and rapidly evolving field. Objective To gather expert opinion on patient selection, interventions, and meaningful outcome measures for focal therapy in clinical practice and trial design. Design, setting, and participants Fifteen experts in focal therapy followed a modified two-stage RAND/University of California, Los Angeles (UCLA) Appropriateness Methodology process. All participants independently scored 246 statements prior to rescoring at a face-to-face meeting. The meeting occurred in June 2013 at the Royal Society of Medicine, London, supported by the Wellcome Trust and the UK Department of Health. Outcome measurements and statistical analysis Agreement, disagreement, or uncertainty were calculated as the median panel score. Consensus was derived from the interpercentile range adjusted for symmetry level. Results and limitations Of 246 statements, 154 (63%) reached consensus. Items of agreement included the following: patients with intermediate risk and patients with unifocal and multifocal PCa are eligible for focal treatment; magnetic resonance imaging–targeted or template-mapping biopsy should be used to plan treatment; planned treatment margins should be 5 mm from the known tumor; prostate volume or age should not be a primary determinant of eligibility; foci of indolent cancer can be left untreated when treating the dominant index lesion; histologic outcomes should be defined by targeted biopsy at 1 yr; residual disease in the treated area of ≤3 mm of Gleason 3 + 3 did not need further treatment; and focal retreatment rates of ≤20% should be considered clinically acceptable but subsequent whole-gland therapy deemed a failure of focal therapy. All statements are expert opinion and therefore constitute level 5 evidence and may not reflect wider clinical consensus. 
Conclusions The landscape of PCa treatment is rapidly evolving with new treatment technologies. This consensus meeting provides guidance to clinicians on current expert thinking in the field of focal therapy. Patient summary In this report we present expert opinion on patient selection, interventions, and meaningful outcomes for clinicians working in focal therapy for prostate cancer., Take Home Message Focal therapy as an active treatment for prostate cancer is rapidly evolving. Expert opinion gathered in this report using robust consensus methodology captures current thinking and can help direct future research and clinical care.
- Published
- 2015
- Full Text
- View/download PDF
37. Multiattribute probabilistic prostate elastic registration (MAPPER): Application to fusion of ultrasound and magnetic resonance imaging
- Author
-
B. Nicolas Bloch, Daniel Moses, Dean C. Barratt, Lee Ponsky, Rachel Sparks, Anant Madabhushi, and Ernest J. Feleppa
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,medicine.medical_treatment ,Brachytherapy ,Image registration ,Magnetic resonance imaging ,General Medicine ,urologic and male genital diseases ,medicine.disease ,Prostate cancer ,medicine.anatomical_structure ,Prostate ,Biopsy ,Medicine ,Radiology ,business ,Image-Guided Biopsy ,Fiducial marker - Abstract
Purpose: Transrectal ultrasound (TRUS)-guided needle biopsy is the current gold standard for prostate cancer diagnosis. However, up to 40% of prostate cancer lesions appear isoechoic on TRUS. Hence, TRUS-guided biopsy has a high false negative rate for prostate cancer diagnosis. Magnetic resonance imaging (MRI) is better able to distinguish prostate cancer from benign tissue. However, MRI-guided biopsy requires special equipment and training and a longer procedure time. MRI-TRUS fusion, where MRI is acquired preoperatively and then aligned to TRUS, allows for advantages of both modalities to be leveraged during biopsy. MRI-TRUS-guided biopsy increases the yield of cancer-positive biopsies. In this work, the authors present multiattribute probabilistic prostate elastic registration (MAPPER) to align prostate MRI and TRUS imagery. Methods: MAPPER involves (1) segmenting the prostate on MRI, (2) calculating a multiattribute probabilistic map of prostate location on TRUS, and (3) maximizing overlap between the prostate segmentation on MRI and the multiattribute probabilistic map on TRUS, thereby driving registration of MRI onto TRUS. MAPPER represents a significant advancement over the current state-of-the-art as it requires no user interaction during the biopsy procedure, leveraging texture and spatial information to determine the prostate location on TRUS. Although MAPPER requires manual interaction to segment the prostate on MRI, this step is performed prior to biopsy and will not substantially increase biopsy procedure time. Results: MAPPER was evaluated on 13 patient studies from two independent datasets: Dataset 1 has 6 studies acquired with a side-firing TRUS probe and a 1.5 T pelvic phased-array coil MRI; Dataset 2 has 7 studies acquired with a volumetric end-firing TRUS probe and a 3.0 T endorectal coil MRI. MAPPER has a root-mean-square error (RMSE) for expert-selected fiducials of 3.36 ± 1.10 mm for Dataset 1 and 3.14 ± 0.75 mm for Dataset 2. State-of-the-art MRI-TRUS fusion methods report RMSEs of 2.07–3.06 mm. Conclusions: MAPPER aligns MRI and TRUS imagery without manual intervention, ensuring efficient, reproducible registration. MAPPER has a similar RMSE to state-of-the-art methods that require manual intervention.
- Published
- 2015
38. MP70-02 CORRELATION OF MPMRI CONTOURS WITH 3-DIMENSIONAL 5MM TRANSPERINEAL PROSTATE MAPPING BIOPSY WITHIN THE PROMIS TRIAL PILOT: WHAT MARGINS ARE REQUIRED?
- Author
-
Dean C. Barratt, Esther Bonmati, Richard Kaplan, Eli Gibson, Hashim U. Ahmed, Mark Emberton, Yipeng Hu, Clement Orczyk, Alex Kirkham, Yolana Coraco-Moraes, Shonit Punwani, Ahmed El-Shater Bosaily, Louise Brown, and Katie Ward
- Subjects
medicine.medical_specialty ,medicine.anatomical_structure ,medicine.diagnostic_test ,Prostate ,business.industry ,Urology ,Biopsy ,medicine ,Radiology ,business - Published
- 2017
39. MP38-07 SHOULD WE AIM FOR THE CENTRE OF AN MRI PROSTATE LESION? CORRELATION BETWEEN MPMRI AND 3-DIMENSIONAL 5MM TRANSPERINEAL PROSTATE MAPPING BIOPSIES FROM THE PROMIS TRIAL
- Author
-
Yipeng Hu, Dean C. Barratt, Esther Bonmati, Hashim U. Ahmed, Ahmed El-Shater Bosaily, Mark Emberton, Louise Brown, Shonit Punwani, Katie Ward, Alex Kirkham, Yolana Coraco-Moraes, Eli Gibson, Clement Orczyk, and Richard Kaplan
- Subjects
Lesion ,medicine.medical_specialty ,medicine.anatomical_structure ,Prostate ,business.industry ,Urology ,medicine ,Radiology ,medicine.symptom ,business - Published
- 2017
40. MP33-20 THE SMARTTARGET BIOPSY TRIAL: A PROSPECTIVE PAIRED BLINDED TRIAL WITH RANDOMISATION TO COMPARE VISUAL-ESTIMATION AND IMAGE-FUSION TARGETED PROSTATE BIOPSIES
- Author
-
Paul L. Martin, Sami Hamid, Barbara Villarini, Ingrid Potyka, Neil McCartan, Dean C. Barratt, Norman R. Williams, Chris Brew-Graves, Ester Bonmati, Rachel Rodell, Caroline M. Moore, Yipeng Hu, Ian Donaldson, Mark Emberton, Hashim U. Ahmed, and David J. Hawkes
- Subjects
Image fusion ,medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Urology ,Ethics committee ,020207 software engineering ,02 engineering and technology ,medicine.disease ,Clinical trial ,03 medical and health sciences ,Prostate cancer ,Fusion system ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,030220 oncology & carcinogenesis ,Biopsy ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Visual estimation ,Radiology ,business - Abstract
Multi-parametric MRI targeted prostate biopsies can improve detection of clinically significant prostate cancer and decrease the diagnosis of clinically insignificant cancers. There is debate whether visual estimated targeting is sufficient or whether image-fusion software is required. We conducted an ethics committee approved, prospective, blinded, paired validating clinical trial of visual estimated targeted biopsies compared to non-rigid MR/US image-fusion using an academically developed fusion system (SmartTarget®).
- Published
- 2017
41. Deep residual networks for automatic segmentation of laparoscopic videos of the liver
- Author
-
C Schneider, Maria Robu, Kurinchi Selvan Gurusamy, Eli Gibson, Dean C. Barratt, Stephen A. Thompson, David J. Hawkes, Brian R. Davidson, Eddie Edwards, and Matthew J. Clarkson
- Subjects
medicine.medical_specialty ,Computer science ,Population ,02 engineering and technology ,Liver resections ,Residual ,Convolutional neural network ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Liver tissue ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Segmentation ,education ,Laparoscopy ,education.field_of_study ,medicine.diagnostic_test ,business.industry ,Deep learning ,Image segmentation ,medicine.disease ,020201 artificial intelligence & image processing ,Radiology ,Artificial intelligence ,Liver cancer ,business - Abstract
Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores ≥0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
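The leave-one-patient-out cross-validation described above must split data at the patient level, so that no patient's video frames appear in both the training and test sets of any fold. A generic sketch, with hypothetical frame and patient identifiers:

```python
def leave_one_patient_out(frame_to_patient):
    """Yield (train_frames, test_frames) pairs, one fold per patient.
    All frames from the held-out patient go to the test set."""
    patients = sorted(set(frame_to_patient.values()))
    for held_out in patients:
        test = [f for f, p in frame_to_patient.items() if p == held_out]
        train = [f for f, p in frame_to_patient.items() if p != held_out]
        yield train, test

# Illustrative mapping of video frames to patients (hypothetical identifiers).
frames = {"f1": "p1", "f2": "p1", "f3": "p2", "f4": "p3"}
folds = list(leave_one_patient_out(frames))
# -> 3 folds; the first holds out both of p1's frames
```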
- Published
- 2017
42. Assessment of Electromagnetic Tracking Accuracy for Endoscopic Ultrasound
- Author
-
Ester Bonmati, Brian R. Davidson, Matthew J. Clarkson, Yipeng Hu, Kurinchi Selvan Gurusamy, Dean C. Barratt, and Stephen P. Pereira
- Subjects
Endoscopic ultrasound ,Accuracy and precision ,Channel (digital image) ,Endoscope ,medicine.diagnostic_test ,business.industry ,Computer science ,Tracking (particle physics) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Transducer ,030220 oncology & carcinogenesis ,medicine ,Ultrasonic sensor ,Computer vision ,Artificial intelligence ,business ,Jitter - Abstract
Endoscopic ultrasound (EUS) is a minimally-invasive imaging technique that can be technically difficult to perform due to the small field of view and uncertainty in the endoscope position. Electromagnetic (EM) tracking is emerging as an important technology in guiding endoscopic interventions and for training in endotherapy by providing information on endoscope location by fusion with pre-operative images. However, the accuracy of EM tracking could be compromised by the endoscopic ultrasound transducer. In this work, we quantify the precision and accuracy of EM tracking sensors inserted into the working channel of a flexible endoscope, with the ultrasound transducer turned on and off. The EUS device was found to have no significant effect on static tracking accuracy, although jitter increased significantly. A significant change in the measured distance between sensors arranged in a fixed geometry was found during a dynamic acquisition. In conclusion, EM tracking accuracy was not found to be significantly affected by the flexible endoscope.
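Static jitter, as measured above, is commonly quantified as the RMS deviation of repeated position samples of a nominally stationary sensor about their mean position; an illustrative definition, not necessarily the study's exact protocol:

```python
import numpy as np

def rms_jitter(positions_mm):
    """RMS deviation (mm) of (N, 3) position samples about their mean,
    for a sensor that is nominally static."""
    p = np.asarray(positions_mm, float)
    dev = p - p.mean(axis=0)
    return float(np.sqrt((dev ** 2).sum(axis=1).mean()))

# Two samples 2 mm apart along z: each deviates 1 mm from the mean.
j = rms_jitter([[0, 0, 0], [0, 0, 2]])
# -> 1.0
```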
- Published
- 2017
43. Freehand Ultrasound Image Simulation with Spatially-Conditioned Generative Adversarial Networks
- Author
-
Weidi Xie, Li-Lin Lee, Tom Vercauteren, J. Alison Noble, Dean C. Barratt, Yipeng Hu, and Eli Gibson
- Subjects
FOS: Computer and information sciences ,Discriminator ,Pixel ,business.industry ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Real image ,Sample (graphics) ,Imaging phantom ,Machine Learning (cs.LG) ,Computer Science - Learning ,Range (mathematics) ,Position (vector) ,Computer vision ,Artificial intelligence ,business ,Spatial analysis - Abstract
Sonography synthesis has a wide range of applications, including medical procedure simulation, clinical training and multimodality image registration. In this paper, we propose a machine learning approach to simulate ultrasound images at given 3D spatial locations (relative to the patient anatomy), based on conditional generative adversarial networks (GANs). In particular, we introduce a novel neural network architecture that can sample anatomically accurate images conditionally on the spatial position of the (real or mock) freehand ultrasound probe. To assimilate spatial information effectively and efficiently, the proposed spatially-conditioned GANs take calibrated pixel coordinates in global physical space as conditioning input, and utilise residual network units and shortcuts of conditioning data in the GANs' discriminator and generator, respectively. Using optically tracked B-mode ultrasound images, acquired by an experienced sonographer on a fetus phantom, we demonstrate the feasibility of the proposed method with two sets of quantitative results: distances were calculated between corresponding anatomical landmarks identified in the held-out ultrasound images and in the simulated data at the same locations unseen by the networks; and a usability study was carried out to distinguish the simulated data from the real images. In summary, we present what we believe are state-of-the-art visually realistic ultrasound images, simulated by the proposed GAN architecture, which is stable to train and capable of generating plausibly diverse image samples. (Accepted to MICCAI RAMBO 2017.)
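The conditioning input described above, calibrated pixel coordinates in global physical space, can be pictured as extra per-pixel channels appended to the network input. A hypothetical sketch under a simplified linear pose model (the real system uses optically tracked probe poses; the matrix form here is our own stand-in for the tracking-plus-calibration transform):

```python
import numpy as np

def coordinate_conditioning(pose: np.ndarray, height: int, width: int) -> np.ndarray:
    """Map every pixel of an ultrasound frame to global physical space.

    pose: (3, 3) matrix taking homogeneous pixel coordinates (i, j, 1) to a
    3D point on the image plane in physical space.
    Returns a (height, width, 3) tensor of physical coordinates that would be
    concatenated with the image channels as the GAN's conditioning input.
    """
    ii, jj = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    pix = np.stack([ii, jj, np.ones_like(ii)], axis=-1)  # (H, W, 3) homogeneous
    return pix @ pose.T  # (H, W, 3) physical coordinates
```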
- Published
- 2017
- Full Text
- View/download PDF
44. Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks
- Author
-
Stephen P. Pereira, Brian R. Davidson, Eli Gibson, Yipeng Hu, Francesco Giganti, Ester Bonmati, Dean C. Barratt, Steve Bandula, Matthew J. Clarkson, and Kurinchi Selvan Gurusamy
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Stomach ,Abdominal ct ,Image registration ,Multi organ ,030218 nuclear medicine & medical imaging ,Endoscopy ,03 medical and health sciences ,0302 clinical medicine ,medicine.anatomical_structure ,medicine ,Segmentation ,Radiology ,Esophagus ,Pancreas ,business ,030217 neurology & neurosurgery - Abstract
Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because robust interpatient registration of abdominal images is necessary for existing multi-atlas- and statistical-shape-model-based segmentations, but remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared to a joint-label-fusion-based segmentation based on Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores 66 vs 37), stomach (83 vs 72) and esophagus (73 vs 54) and marginally less accurate segmentation for the liver (92 vs 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
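Dilated convolutions, as used in the units above, grow the receptive field rapidly with depth while keeping the kernel size fixed, which is what lets such networks capture multi-organ context without pooling away resolution. A small sketch of the receptive-field arithmetic for stride-1 layers (our own illustration, not from the paper):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer with kernel size k and dilation d widens the receptive
    field by (k - 1) * d pixels.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

For example, three 3x3 layers with dilations 1, 2 and 4 see a 15-pixel-wide context, versus 7 pixels without dilation.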
- Published
- 2017
45. Image-directed, tissue-preserving focal therapy of prostate cancer: a feasibility study of a novel deformable magnetic resonance-ultrasound (MR-US) registration system
- Author
-
Mark Emberton, Alex Kirkham, Clare Allen, Hashim U. Ahmed, Louise Dickinson, Dean C. Barratt, and Yipeng Hu
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Urology ,Ultrasound ,030232 urology & nephrology ,Image registration ,Registration system ,Magnetic resonance imaging ,medicine.disease ,Focused ultrasound ,030218 nuclear medicine & medical imaging ,3. Good health ,Focal therapy ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,Medicine ,Radiographic Image Enhancement ,Radiology ,business - Abstract
To evaluate the feasibility of using computer-assisted, deformable image registration software to enable three-dimensional (3D), multi-parametric (mp) magnetic resonance imaging (MRI)-derived information on tumour location and extent, to inform the planning and conduct of focal high-intensity focused ultrasound (HIFU) therapy.
- Published
- 2013
46. A hybrid patient-specific biomechanical model based image registration method for the motion estimation of lungs
- Author
-
Liangxiu Han, Hua Dong, Lianghao Han, Jamie R. McClelland, Dean C. Barratt, and David J. Hawkes
- Subjects
Computer science ,Image registration ,Health Informatics ,Poisson distribution ,Displacement (vector) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,symbols.namesake ,Motion ,0302 clinical medicine ,Motion estimation ,Humans ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Four-Dimensional Computed Tomography ,Lung ,Landmark ,Radiological and Ultrasound Technology ,business.industry ,Computer Graphics and Computer-Aided Design ,Finite element method ,Euclidean distance ,030220 oncology & carcinogenesis ,Displacement field ,symbols ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithms - Abstract
This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved a comparable accuracy as several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison on the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well on estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson’s ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm in optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated.
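The TRE as defined here is simply the per-landmark Euclidean distance between corresponding points after registration. A minimal sketch (our own function, not the authors' code):

```python
import numpy as np

def target_registration_error(fixed_landmarks, warped_landmarks) -> np.ndarray:
    """Per-landmark Euclidean distances between corresponding point pairs.

    Both inputs: (N, 3) arrays of landmark positions; the reported TRE is
    usually the mean (or distribution) of the returned distances.
    """
    diff = np.asarray(fixed_landmarks) - np.asarray(warped_landmarks)
    return np.linalg.norm(diff, axis=1)
```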
- Published
- 2016
47. 2D-3D Registration Accuracy Estimation for Optimised Planning of Image-Guided Pancreatobiliary Interventions
- Author
-
Steven Bandula, Yipeng Hu, John H. Hipwell, Ester Bonmati, Stephen P. Pereira, Dean C. Barratt, David J. Hawkes, and Eli Gibson
- Subjects
Estimation ,3d registration ,business.industry ,Orientation (computer vision) ,Computer science ,Monte Carlo method ,030218 nuclear medicine & medical imaging ,Image (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,Computer vision ,In patient ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Simulation - Abstract
We describe a fast analytical method to estimate landmark-based 2D-3D registration accuracy to aid the planning of pancreatobiliary interventions in which ERCP images are combined with information from diagnostic 3D MR or CT images. The method analytically estimates a target registration error (TRE), accounting for errors in the manual selection of both 2D and 3D landmarks, that agrees with Monte Carlo simulation to within 4.5 ± 3.6% (mean ± SD). We also show how to analytically estimate a planning uncertainty incorporating uncertainty in patient positioning, and use it to support ERCP-guided procedure planning by selecting the optimal patient position and X-ray C-arm orientation that minimise the expected TRE. Simulated and derived planning uncertainties agreed to within 17.9 ± 9.7% when the root-mean-square error was less than 50°. We demonstrate the feasibility of this approach on clinical data from two patients.
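The Monte Carlo baseline that such an analytical estimate is checked against can be sketched as: perturb the picked landmarks with Gaussian noise, re-fit a registration, and average the resulting displacement at a target point. This is our own simplified 3D/3D rigid illustration (the paper's actual setting is 2D-3D with ERCP projection geometry):

```python
import numpy as np

def fit_rigid(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def monte_carlo_tre(landmarks: np.ndarray, target: np.ndarray,
                    sigma: float, trials: int = 2000, rng=None) -> float:
    """Expected TRE at `target` under i.i.d. Gaussian landmark-picking error."""
    if rng is None:
        rng = np.random.default_rng(0)
    errors = []
    for _ in range(trials):
        noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
        R, t = fit_rigid(noisy, landmarks)  # registration from noisy picks
        errors.append(np.linalg.norm(R @ target + t - target))
    return float(np.mean(errors))
```

With zero picking noise the recovered transform is the identity and the TRE is zero; increasing sigma shows how landmark selection error propagates to targets away from the landmark centroid.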
- Published
- 2016
48. MR to ultrasound registration for image-guided prostate interventions
- Author
-
Zeike A. Taylor, Mark Emberton, David J. Hawkes, Dean C. Barratt, Hashim U. Ahmed, Clare Allen, and Yipeng Hu
- Subjects
Male ,Image registration ,Health Informatics ,urologic and male genital diseases ,Sensitivity and Specificity ,Pattern Recognition, Automated ,Image Interpretation, Computer-Assisted ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Ultrasonography ,Prostatectomy ,Radiological and Ultrasound Technology ,medicine.diagnostic_test ,business.industry ,Ultrasound ,Prostatic Neoplasms ,Reproducibility of Results ,Statistical model ,Magnetic resonance imaging ,Image Enhancement ,Magnetic Resonance Imaging ,Computer Graphics and Computer-Aided Design ,Finite element method ,Surgery, Computer-Assisted ,Feature (computer vision) ,Subtraction Technique ,Displacement field ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Normal ,Algorithms - Abstract
A deformable registration method is described that enables automatic alignment of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland. The method employs a novel "model-to-image" registration approach in which a deformable model of the gland surface, derived from an MR image, is registered automatically to a TRUS volume by maximising the likelihood of a particular model shape given a voxel-intensity-based feature that represents an estimate of surface normal vectors at the boundary of the gland. The deformation of the surface model is constrained by a patient-specific statistical model of gland deformation, which is trained using data provided by biomechanical simulations. Each simulation predicts the motion of a volumetric finite element mesh due to the random placement of a TRUS probe in the rectum. The use of biomechanical modelling in this way also allows a dense displacement field to be calculated within the prostate, which is then used to non-rigidly warp the MR image to match the TRUS image. Using data acquired from eight patients, and anatomical landmarks to quantify the registration accuracy, the median final RMS target registration error after performing 100 MR-TRUS registrations for each patient was 2.40 mm.
- Published
- 2012
49. A biopsy simulation study to assess the accuracy of several transrectal ultrasonography (TRUS)-biopsy strategies compared with template prostate mapping biopsies in patients who have undergone radical prostatectomy
- Author
-
Yipeng Hu, Emilie Lecornet, Alex Freeman, Hashim U. Ahmed, Arnauld Villers, Pierre Nevoux, David J. Hawkes, Timothy J. Carter, Winston E. Barzell, Mark Emberton, Dean C. Barratt, and Nimalan Arumainayagam
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,Receiver operating characteristic ,Prostatectomy ,business.industry ,Urology ,medicine.medical_treatment ,medicine.disease ,Surgery ,Prostate cancer ,medicine.anatomical_structure ,Prostate ,Trus biopsy ,Biopsy ,medicine ,Transrectal ultrasonography ,Sampling (medicine) ,business ,Nuclear medicine - Abstract
Study Type – Diagnostic (validating cohort). Level of Evidence: 1b.
What's known on the subject? and What does the study add? Transrectal ultrasonography (TRUS)-guided biopsies can miss prostate cancer and misclassify risk in a diagnostic setting; the exact extent to which they do so in a repeat biopsy strategy in men with low–intermediate risk prostate cancer is unknown. A simulation study of different biopsy strategies showed that repeat 12-core TRUS biopsy performs poorly. Adding anterior sampling improves on this, but the highest accuracy is achieved using transperineal template prostate mapping with a 5 mm sampling frame.
OBJECTIVE • To determine the effectiveness of two sampling strategies, repeat transrectal ultrasonography (TRUS) biopsy and transperineal template prostate mapping (TPM), in detecting and excluding lesions of ≥0.2 mL or ≥0.5 mL, using computer simulation on reconstructed three-dimensional (3D) computer models of radical whole-mount specimens.
PATIENTS AND METHODS • Computer simulation on reconstructed 3D computer models of radical whole-mount specimens was used to evaluate the performance characteristics of repeat TRUS biopsy and TPM in detecting and excluding lesions of ≥0.2 mL or ≥0.5 mL. • In all, 107 consecutive cases were analysed (1999–2001), with simulations repeated 500 times for each biopsy strategy. • TPM and five different TRUS-biopsy strategies were simulated; the latter involved standard 12-core sampling and incorporated variable amounts of error, as well as the addition of anterior cores. • Sensitivity, specificity, and negative and positive predictive values for detection of lesions with a volume of ≥0.2 mL or ≥0.5 mL were calculated.
RESULTS • The mean (SD) age and PSA concentration were 61 (6.4) years and 8.5 (5.9) ng/mL, respectively. In all, 53% (57/107) had low–intermediate risk disease. • In all, 665 foci were reconstructed; there were 149 foci ≥0.2 mL and 97 ≥0.5 mL in the full cohort, and 68 ≥0.2 mL and 43 ≥0.5 mL in the low–intermediate risk group. • Overall, TPM accuracy (area under the receiver operating characteristic curve, AUC) was ≈0.90, compared with an AUC of 0.70–0.80 for TRUS biopsy. • At best, TRUS biopsy missed 30–40% of lesions of ≥0.2 mL and ≥0.5 mL, whilst TPM missed 5% of such lesions.
CONCLUSION • TPM under simulation conditions appears the most effective re-classification strategy, although augmented TRUS-biopsy techniques are better than standard TRUS biopsy.
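The sensitivity/specificity/predictive-value bookkeeping behind such a biopsy simulation reduces to counting simulated detections against a lesion-significance threshold. An illustrative sketch (names and interface are ours, not the study's software):

```python
import numpy as np

def detection_stats(lesion_detected, lesion_significant) -> dict:
    """Diagnostic performance of a simulated biopsy strategy per lesion.

    lesion_detected:    boolean array, the strategy sampled the lesion.
    lesion_significant: boolean array, lesion volume meets the threshold
                        (e.g. >= 0.2 mL or >= 0.5 mL).
    """
    d = np.asarray(lesion_detected, dtype=bool)
    s = np.asarray(lesion_significant, dtype=bool)
    tp = np.sum(d & s); fn = np.sum(~d & s)
    fp = np.sum(d & ~s); tn = np.sum(~d & ~s)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```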
- Published
- 2012
50. Modelling Prostate Motion for Data Fusion During Image-Guided Interventions
- Author
-
Clare Allen, Hashim U. Ahmed, Mark Emberton, Timothy J. Carter, Dean C. Barratt, David J. Hawkes, and Yipeng Hu
- Subjects
Male ,Finite Element Analysis ,Image registration ,Image processing ,Sensitivity and Specificity ,Motion ,Imaging, Three-Dimensional ,Robustness (computer science) ,Image Interpretation, Computer-Assisted ,Image Processing, Computer-Assisted ,Humans ,Medicine ,Computer Simulation ,Computer vision ,Electrical and Electronic Engineering ,Ultrasonography, Interventional ,Ultrasound, High-Intensity Focused, Transrectal ,Image fusion ,Models, Statistical ,Radiological and Ultrasound Technology ,business.industry ,Prostate ,Prostatic Neoplasms ,Statistical model ,Image segmentation ,Image Enhancement ,Sensor fusion ,Computer Science Applications ,Subtraction Technique ,Artificial intelligence ,Deformation (engineering) ,business ,Software - Abstract
There is growing clinical demand for image registration techniques that allow multimodal data fusion for accurate targeting of needle biopsy and ablative prostate cancer treatments. However, during procedures where transrectal ultrasound (TRUS) guidance is used, substantial gland deformation can occur due to TRUS probe pressure. In this paper, the ability of a statistical shape/motion model, trained using finite element simulations, to predict and compensate for this source of motion is investigated. Three-dimensional ultrasound images acquired on five patient prostates, before and after TRUS-probe-induced deformation, were registered using a nonrigid, surface-based method, and the accuracy of different deformation models compared. Registration using a statistical motion model was found to outperform alternative elastic deformation methods in terms of accuracy and robustness, and required substantially fewer target surface points to achieve a successful registration. The mean final target registration error (based on anatomical landmarks) using this method was 1.8 mm. We conclude that a statistical model of prostate deformation provides an accurate, rapid and robust means of predicting prostate deformation from sparse surface data, and is therefore well-suited to a number of interventional applications where there is a need for deformation compensation.
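A statistical motion model of the kind described, trained on finite-element simulations of probe-induced deformation, is commonly built by PCA on flattened node displacements. A hypothetical sketch (this is generic PCA, not the authors' implementation; the scaling convention is ours):

```python
import numpy as np

def train_motion_model(displacements: np.ndarray, n_modes: int):
    """PCA motion model from finite-element displacement samples.

    displacements: (n_sims, n_points * 3) array of flattened node
    displacements, one row per simulated TRUS-probe placement.
    Returns (mean, modes): a new deformation is mean + modes @ weights,
    with modes scaled so unit weights span one standard deviation.
    """
    mean = displacements.mean(axis=0)
    centred = displacements - mean
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    scale = s[:n_modes] / np.sqrt(len(displacements) - 1)
    return mean, vt[:n_modes].T * scale
```

Fitting such a model to sparse surface points then amounts to solving for a low-dimensional weight vector rather than a free-form deformation, which is why it needs far fewer target points than elastic methods.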
- Published
- 2011