6 results for "Grimwood, Alexander"
Search Results
2. In Vivo Validation of Elekta's Clarity Autoscan for Ultrasound-based Intrafraction Motion Estimation of the Prostate During Radiation Therapy
- Author
- Grimwood, Alexander, McNair, Helen A., O'Shea, Tuathan P., Gilroy, Stephen, Thomas, Karen, Bamber, Jeffrey C., Tree, Alison C., and Harris, Emma J.
- Published
- 2018
- Full Text
- View/download PDF
3. Voice-Assisted Image Labeling for Endoscopic Ultrasound Classification Using Neural Networks.
- Author
- Bonmati, Ester, Hu, Yipeng, Grimwood, Alexander, Johnson, Gavin J., Goodchild, George, Keane, Margaret G., Gurusamy, Kurinchi, Davidson, Brian, Clarkson, Matthew J., Pereira, Stephen P., and Barratt, Dean C.
- Subjects
- IMAGE analysis, ULTRASONIC imaging, DEEP learning, CONVOLUTIONAL neural networks, THREE-dimensional imaging, ENDOSCOPIC ultrasonography, SONICATION, FETAL ultrasonic imaging
- Abstract
Ultrasound imaging is a commonly used technology for visualising patient anatomy in real-time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training in novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, the use of deep learning methods requires a large amount of data in order to provide accurate results. Labelling large ultrasound datasets is a challenging task because labels are retrospectively assigned to 2D images without the 3D spatial context that is available in vivo or that would be inferred while visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification, and eliminate the burden of manually labelling large EUS datasets necessary for deep learning applications. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the two-branch idea follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
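The abstract above pairs an image encoder with a voice encoder whose features are fused for classification. Below is a minimal PyTorch sketch of that general idea; the layer sizes, the spectrogram input for the voice branch, and fusion by simple concatenation are illustrative assumptions, not the architecture reported by Bonmati et al.

```python
import torch
import torch.nn as nn

class TwoBranchEUSClassifier(nn.Module):
    """Hypothetical two-branch (image + voice) classifier sketch."""
    def __init__(self, num_labels: int = 5):  # the abstract reports 5 labels
        super().__init__()
        # Image branch: small CNN over a single-channel EUS frame.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # Voice branch: CNN over a spectrogram of the spoken comment.
        self.voice_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # Joint head: concatenated features -> anatomical landmark label.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, num_labels))

    def forward(self, image, spectrogram):
        fused = torch.cat(
            [self.image_branch(image), self.voice_branch(spectrogram)], dim=1)
        return self.head(fused)  # logits over image labels

# Example: one EUS frame plus the spectrogram of the clinician's comment.
model = TwoBranchEUSClassifier()
logits = model(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 64, 64))
```

Joining the branches only at the head keeps each modality's encoder independent, so either branch could be pretrained or swapped without touching the other.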
4. Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification
- Author
- Grimwood, Alexander, McNair, Helen, Hu, Yipeng, Bonmati, Ester, Barratt, Dean, and Harris, Emma
- Subjects
- FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, FOS: Physical sciences, Medical Physics (physics.med-ph), Physics - Medical Physics
- Abstract
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up. Probe placement and ultrasound image interpretation are manual tasks contingent upon operator skill, leading to interoperator uncertainties that degrade radiotherapy precision. We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data. Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time. The first set identifies relevant prostate anatomy visible in the field of view using the classes: outside prostate, prostate periphery, prostate centre. The second set recommends a probe angular adjustment to achieve alignment between the probe and prostate centre with the classes: move left, move right, stop. The algorithm was trained and tested on 9,743 clinical images from 61 treatment sessions across 32 patients. We evaluated classification accuracy against class labels derived from three experienced observers at 2/3 and 3/3 agreement thresholds. For images with unanimous consensus between observers, anatomical classification accuracy was 97.2% and probe adjustment accuracy was 94.9%. The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from angle labels with full observer consensus, comparable to the 2.8° (2.6°) mean interobserver range. We propose such an algorithm could assist radiotherapy practitioners with limited experience of ultrasound image interpretation by providing effective real-time feedback during patient set-up. (Accepted to MICCAI 2020; an illustrative sketch of the multi-input multi-task design follows this record.)
- Published
- 2020
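As a rough illustration of the multi-input multi-task idea in this entry, the sketch below encodes an ultrasound frame sequence with a CNN, summarises it with a recurrent layer, concatenates the tracked probe pose, and emits the two sets of class predictions the abstract names. The GRU, feature sizes and the 6-DoF pose vector are assumptions for illustration; the paper's exact network is not reproduced here.

```python
import torch
import torch.nn as nn

class ProbeGuidanceNet(nn.Module):
    """Hypothetical multi-input multi-task sketch: images + probe pose."""
    def __init__(self):
        super().__init__()
        # Per-frame CNN encoder for the ultrasound images.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B*T, 32)
        )
        # Recurrent layer over the encoded frame sequence.
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        # Two task heads on [sequence summary, 6-DoF probe pose].
        self.anatomy_head = nn.Linear(64 + 6, 3)     # outside / periphery / centre
        self.adjustment_head = nn.Linear(64 + 6, 3)  # move left / move right / stop

    def forward(self, frames, pose):
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)  # final hidden state summarises the clip
        joint = torch.cat([h[-1], pose], dim=1)
        return self.anatomy_head(joint), self.adjustment_head(joint)

# Example: an 8-frame clip plus the optically tracked probe pose.
net = ProbeGuidanceNet()
anatomy_logits, adjust_logits = net(
    torch.randn(2, 8, 1, 96, 96), torch.randn(2, 6))
```

Sharing one sequence summary between the two heads is what makes this multi-task: both predictions are trained jointly from the same fused image-and-pose representation.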
5. Improving 3D ultrasound prostate localisation in radiotherapy through increased automation of interfraction matching.
- Author
- Grimwood, Alexander, Rivaz, Hassan, Zhou, Hang, McNair, Helen A., Jakubowski, Klaudiusz, Bamber, Jeffrey C., Tree, Alison C., and Harris, Emma J.
- Subjects
- PROSTATE, IMAGE registration, AUTOMATION, THREE-dimensional imaging, RADIOTHERAPY
- Abstract
• Automated matching improves accuracy and precision of ultrasound guided radiotherapy.
• Image registration software lowers setup errors and sonography training requirements.
• Spatial regularisation improves registration algorithm performance.
Daily image guidance is standard care for prostate radiotherapy. Innovations which improve the accuracy and efficiency of ultrasound guidance are needed, particularly with respect to reducing interobserver variation. This study explores automation tools for this purpose, demonstrated on the Elekta Clarity Autoscan®. The study was conducted as part of the Clarity-Pro trial (NCT02388308). Ultrasound scan volumes were collected from 32 patients. Prostate matches were performed using two proposed workflows and the results compared with Clarity's proprietary software. Gold standard matches derived from manually localised landmarks provided a reference. The two workflows incorporated a custom 3D image registration algorithm, which was benchmarked against a third-party application (Elastix). Significant reductions in match errors were reported from both workflows compared to the standard protocol. Median (IQR) absolute errors in the left–right, anteroposterior and craniocaudal axes were lowest for the Manually Initiated workflow: 0.7 (1.0) mm, 0.7 (0.9) mm, 0.6 (0.9) mm, compared to 1.0 (1.7) mm, 0.9 (1.4) mm, 0.9 (1.2) mm for Clarity. Median interobserver variation was ≪0.01 mm in all axes for both workflows, compared to 2.2 mm, 1.7 mm, 1.5 mm for Clarity in the left–right, anteroposterior and craniocaudal axes. Mean matching time was also reduced to 43 s from 152 s for Clarity. Inexperienced users of the proposed workflows attained better match precision than experienced users on Clarity. Automated image registration with effective input and verification steps should increase the efficacy of interfraction ultrasound guidance compared to the current commercially available tools. [ABSTRACT FROM AUTHOR] (An illustrative registration sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
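The workflows in this entry centre on automated 3D-to-3D prostate matching between a reference scan and a daily scan. As a hedged sketch of that registration step (not the authors' custom algorithm, nor Elastix), the snippet below uses SimpleITK to estimate a translation-only interfraction match; the correlation metric and optimizer settings are illustrative choices.

```python
import SimpleITK as sitk

def match_prostate(reference_path: str, daily_path: str):
    """Rigidly register a daily 3D ultrasound volume to the reference scan.

    Returns the translation parameters, which with millimetre voxel spacing
    correspond to shifts along the three patient axes.
    """
    fixed = sitk.ReadImage(reference_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(daily_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsCorrelation()  # same-modality US-to-US similarity
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    # Translation-only transform: interfraction prostate motion is reported
    # as left-right, anteroposterior and craniocaudal shifts.
    reg.SetInitialTransform(sitk.TranslationTransform(3), inPlace=False)

    transform = reg.Execute(fixed, moving)
    return transform.GetParameters()  # (LR, AP, CC) displacement
```

In a workflow like the study's "Manually Initiated" one, an operator would supply a rough starting alignment before a call such as this refines it automatically.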
6. Real-time intrafraction motion monitoring in external beam radiotherapy.
- Author
- Bertholet J, Knopf A, Eiben B, McClelland J, Grimwood A, Harris E, Menten M, Poulsen P, Nguyen DT, Keall P, and Oelfke U
- Subjects
- Humans, Motion, Magnetic Resonance Imaging methods, Neoplasms radiotherapy, Proton Therapy methods, Radiotherapy Planning, Computer-Assisted methods
- Abstract
Radiotherapy (RT) aims to deliver a spatially conformal dose of radiation to tumours while maximizing the dose sparing to healthy tissues. However, the internal patient anatomy is constantly moving due to respiratory, cardiac, gastrointestinal and urinary activity. The long-term goal of the RT community to 'see what we treat, as we treat' and to act on this information instantaneously has resulted in rapid technological innovation. Specialized treatment machines, such as robotic or gimbal-steered linear accelerators (linacs) with in-room imaging suites, have been developed specifically for real-time treatment adaptation. Additional equipment, such as stereoscopic kilovoltage (kV) imaging, ultrasound transducers and electromagnetic transponders, has been developed for intrafraction motion monitoring on conventional linacs. Magnetic resonance imaging (MRI) has been integrated with cobalt treatment units and more recently with linacs. In addition to hardware innovation, software development has played a substantial role in the development of motion monitoring methods based on respiratory motion surrogates and planar kV or megavoltage (MV) imaging that is available on standard-equipped linacs. In this paper, we review and compare the different intrafraction motion monitoring methods proposed in the literature and demonstrated in real-time on clinical data, as well as their possible future developments. We then discuss general considerations on validation and quality assurance for clinical implementation. Besides photon RT, particle therapy is increasingly used to treat moving targets. However, transferring motion monitoring technologies from linacs to particle beam lines presents substantial challenges. Lessons learned from the implementation of real-time intrafraction monitoring for photon RT will be used as a basis to discuss the implementation of these methods for particle RT. (An illustrative surrogate-model sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
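Among the software approaches this review surveys are motion-monitoring methods based on respiratory motion surrogates. As a purely illustrative sketch (not taken from the paper), the snippet below fits the simplest such model: a per-axis linear map from an external surrogate signal to internal target positions observed at sparse imaging time points, which can then be evaluated in real time from the continuously measured surrogate alone.

```python
import numpy as np

def fit_linear_surrogate_model(surrogate: np.ndarray, target: np.ndarray):
    """Least-squares fit of target = a * surrogate + b per target axis."""
    A = np.stack([surrogate, np.ones_like(surrogate)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs  # row 0: slopes, row 1: intercepts (one column per axis)

# Calibration: surrogate amplitude at the few moments the target was imaged
# (hypothetical numbers; e.g. external marker height vs target position in mm).
surrogate_samples = np.array([0.1, 0.5, 0.9, 0.4])
target_samples = np.array([[0.2, 1.1], [1.0, 4.9], [1.8, 9.2], [0.9, 4.1]])
coeffs = fit_linear_surrogate_model(surrogate_samples, target_samples)

# Real-time estimation from the surrogate alone, between imaging events.
live_surrogate = 0.7
estimated_target = coeffs[0] * live_surrogate + coeffs[1]  # mm along each axis
```

Clinical systems use far richer correlation models and update them as new images arrive, but the calibrate-then-estimate structure shown here is the common core.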