92 results for "John E. Gillam"
Search Results
2. Prediction Models of Functional Outcomes for Individuals in the Clinical High-Risk State for Psychosis or with Recent-Onset Depression: A Multimodal, Multisite Machine Learning Analysis
- Author
-
Lana Kambeitz-Ilankovic, Raimo K. R. Salokangas, Dominic B. Dwyer, Nikolaos Koutsouleris, Dirk Beque, Stefan Borgwardt, Christos Pantelis, Anita Riecher-Rössler, André Schmidt, Paolo Brambilla, Stephen J. Wood, Eva Meisenzahl, John E. Gillam, Katharine Chisholm, Jarmo Hietala, Anne Ruef, Marco Paolini, Stephan Ruhrmann, Joseph Kambeitz, Frauke Schultze-Lutter, Marlene Rosen, Maximilian F. Reiser, Rachel Upthegrove, Peter Falkai, and Theresa Haidl
- Subjects
Adult, Male, Psychosis, Neuroimaging, Neuropsychological Tests, Machine Learning, Young Adult, Humans, Generalizability theory, Gray Matter, Precision Medicine, Default mode network, Depressive Disorder, Depression, Case-Control Studies, Prognosis, Psychiatry and Mental health, Psychotic Disorders, Anxiety, Female, Artificial intelligence, Social Adjustment
- Abstract
Importance Social and occupational impairments contribute to the burden of psychosis and depression. There is a need for risk stratification tools to inform personalized functional-disability preventive strategies for individuals in at-risk and early phases of these illnesses. Objective To determine whether predictors associated with social and role functioning can be identified in patients in clinical high-risk (CHR) states for psychosis or with recent-onset depression (ROD) using clinical, imaging-based, and combined machine learning; assess the geographic, transdiagnostic, and prognostic generalizability of machine learning and compare it with human prognostication; and explore sequential prognosis encompassing clinical and combined machine learning. Design, Setting, and Participants This multisite naturalistic study followed up patients in CHR states, with ROD, and with recent-onset psychosis, and healthy control participants for 18 months in 7 academic early-recognition services in 5 European countries. Participants were recruited between February 2014 and May 2016, and data were analyzed from April 2017 to January 2018. Main Outcomes and Measures Performance and generalizability of prognostic models. Results A total of 116 individuals in CHR states (mean [SD] age, 24.0 [5.1] years; 58 [50.0%] female) and 120 patients with ROD (mean [SD] age, 26.1 [6.1] years; 65 [54.2%] female) were followed up for a mean (SD) of 329 (142) days. Machine learning predicted the 1-year social-functioning outcomes with a balanced accuracy of 76.9% for patients in CHR states and 66.2% for patients with ROD using clinical baseline data. Balanced accuracy in models using structural neuroimaging was 76.2% in patients in CHR states and 65.0% in patients with ROD, and in combined models, it was 82.7% for CHR states and 70.3% for ROD. Lower functioning before study entry was a transdiagnostic predictor.
Medial prefrontal and temporo-parieto-occipital gray matter volume (GMV) reductions and cerebellar and dorsolateral prefrontal GMV increments had predictive value in the CHR group; reduced mediotemporal and increased prefrontal-perisylvian GMV had predictive value in patients with ROD. Poor prognoses were associated with increased risk of psychotic, depressive, and anxiety disorders at follow-up in patients in the CHR state but not in those with ROD. Machine learning outperformed expert prognostication. Adding neuroimaging machine learning to clinical machine learning provided a 1.9-fold increase of prognostic certainty in uncertain cases of patients in CHR states, and a 10.5-fold increase of prognostic certainty for patients with ROD. Conclusions and Relevance Precision medicine tools could augment effective therapeutic strategies aiming at the prevention of social functioning impairments in patients in CHR states or with ROD.
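The balanced accuracies quoted above are means of per-class recalls (sensitivity and specificity in the two-class case), which makes the metric robust to class imbalance. A minimal sketch with synthetic labels (not study data):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; for two classes this is the
    average of sensitivity and specificity."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for cls in np.unique(y_true):
        mask = y_true == cls
        recalls.append(np.mean(y_pred[mask] == cls))
    return float(np.mean(recalls))

# Toy example: 4/5 poor-outcome and 3/5 good-outcome cases correct
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
print(balanced_accuracy(y_true, y_pred))  # 0.7
```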
- Published
- 2018
3. Direct Estimation of Voxel-Wise Neurotransmitter Response Maps From Dynamic PET Data
- Author
-
Roger Fulton, William J. Ryder, Steven R. Meikle, Georgios I. Angelis, and John E. Gillam
- Subjects
Male, Accuracy and precision, Iterative reconstruction, Standard deviation, Rats, Sprague-Dawley, Voxel, Expectation-maximization algorithm, Image Processing, Computer-Assisted, Animals, Computer Simulation, Parametric statistics, Neurotransmitter Agents, Radiological and Ultrasound Technology, Estimation theory, Phantoms, Imaging, Brain, Reconstruction algorithm, Rats, Raclopride, Positron-Emission Tomography, Radiopharmaceuticals, Algorithms
- Abstract
Computational methods, such as the linear parametric neurotransmitter PET (lp-ntPET) method, have been developed to characterize the transient changes in radiotracer kinetics in the target tissue during endogenous neurotransmitter release. In this paper, we describe and evaluate a parametric reconstruction algorithm that uses an expectation maximization framework, along with the lp-ntPET model, to estimate the endogenous neurotransmitter response to stimuli directly from the measured PET data. Computer simulations showed that the proposed direct reconstruction method offers improved accuracy and precision for the estimated timing parameters of the neurotransmitter response at the voxel level ( ${t}_{d}=1\pm 2 $ min, for activation onset bias and standard deviation) compared with conventional post reconstruction modeling ( ${t}_{d}=4\pm 7 $ min). In addition, we applied the proposed direct parameter estimation methodology to a [11C]raclopride displacement study of an awake rat and generated parametric maps illustrating the magnitude of ligand displacement from striatum. Although the estimated parametric maps of activation magnitude obtained from both direct and post reconstruction methodologies suffered from false positive activations, the proposed direct reconstruction framework offered more reliable parametric maps when the activation onset parameter was constrained.
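The lp-ntPET model represents the transient neurotransmitter response with gamma-variate-like basis functions parameterised by onset time t_d, peak time t_p, and sharpness alpha. A sketch of one common parameterisation (exact conventions vary between implementations):

```python
import numpy as np

def lp_ntpet_basis(t, t_d, t_p, alpha):
    """Gamma-variate-like activation basis as used in lp-ntPET:
    zero before onset t_d, rising to a peak of 1 at t_p, then
    decaying; alpha controls the sharpness of the response."""
    t = np.asarray(t, dtype=float)
    h = np.zeros_like(t)
    rising = t >= t_d
    u = (t[rising] - t_d) / (t_p - t_d)
    h[rising] = u**alpha * np.exp(alpha * (1.0 - u))
    return h

t = np.linspace(0, 60, 241)                        # minutes
h = lp_ntpet_basis(t, t_d=20, t_p=30, alpha=1.0)
print(h.max())                                      # peaks at 1.0 at t = t_p
```

In the direct approach described above, the coefficients of such basis functions are estimated inside the reconstruction rather than fitted to already-reconstructed time-activity curves.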
- Published
- 2018
4. Denoising non-steady state dynamic PET data using a feed-forward neural network
- Author
-
Georgios I. Angelis, Steven R. Meikle, John E. Gillam, and Oliver K. Fuller
- Subjects
Statistical noise, Computer science, Noise reduction, Signal-To-Noise Ratio, Imaging phantom, Image Processing, Computer-Assisted, Animals, Humans, Radiology, Nuclear Medicine and imaging, Radiological and Ultrasound Technology, Artificial neural network, Phantoms, Imaging, Reproducibility of Results, Pattern recognition, Filter (signal processing), Noise, Positron-Emission Tomography, Feedforward neural network, Neural Networks, Computer, Artificial intelligence
- Abstract
The quality of reconstructed dynamic PET images, as well as the statistical reliability of the estimated pharmacokinetic parameters, is often compromised by high levels of statistical noise, particularly at the voxel level. Many denoising strategies have been proposed, both in the temporal and spatial domain, which substantially improve the signal-to-noise ratio of the reconstructed dynamic images. However, although most filtering approaches are fairly successful in reducing the spatio-temporal inter-voxel variability, they may also average out or completely eradicate the critically important temporal signature of a transient neurotransmitter activation response that may be present in a non-steady state dynamic PET study. In this work, we explore an approach towards temporal denoising of non-steady state dynamic PET images using an artificial neural network, which was trained to identify the temporal profile of a time-activity curve while preserving any potential activation response. We evaluated the performance of a feed-forward perceptron neural network to improve the signal-to-noise ratio of dynamic [11C]raclopride activation studies and compared it with the widely used highly constrained back projection (HYPR) filter. Results on both simulated GATE (Geant4 Application for Tomographic Emission) data of a realistic rat brain phantom and experimental data from a freely moving animal study showed that the proposed neural network can efficiently improve the noise characteristics of dynamic data in the temporal domain and can lead to a more reliable estimation of the voxel-wise activation response in the target region. In addition, improvements in signal-to-noise ratio achieved by denoising the dynamic data using the proposed neural network led to improved accuracy and precision of the estimated model parameters of the lp-ntPET model, compared to the HYPR filter.
The performance of the proposed denoising approach strongly depends on the amount of noise in the dynamic PET data, with higher noise leading to substantially higher variability in the estimated parameters of the activation response. Overall, the feed-forward network performed similarly to the HYPR filter in terms of spatial denoising, but led to notable improvements in temporal denoising, which in turn improved the estimation of the activation parameters.
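The idea of training a feed-forward network to map noisy time-activity curves toward their underlying temporal profile can be illustrated with a toy numpy sketch (a deliberately simplified stand-in, not the architecture or training data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 32                                            # time frames
t = np.linspace(0, 1, T)
clean = np.exp(-3 * t) * (1 - np.exp(-20 * t))    # toy time-activity curve

def make_batch(n, sigma=0.1):
    noisy = clean + sigma * rng.standard_normal((n, T))
    return noisy, np.tile(clean, (n, 1))

# One-hidden-layer feed-forward network trained by plain gradient descent
H = 16
W1 = 0.1 * rng.standard_normal((T, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal((H, T)); b2 = np.zeros(T)
lr = 0.05
for step in range(500):
    x, y = make_batch(128)
    hid = np.tanh(x @ W1 + b1)
    out = hid @ W2 + b2
    err = out - y                                 # gradient of 0.5*MSE
    gW2 = hid.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - hid**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

x_test, _ = make_batch(256)
denoised = np.tanh(x_test @ W1 + b1) @ W2 + b2
mse_noisy = np.mean((x_test - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_denoised < mse_noisy)                   # denoising reduces error
```

The paper's network additionally has to preserve transient activation responses, which is why it is trained on curves both with and without activations rather than on a single fixed shape as here.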
- Published
- 2021
5. List-mode image reconstruction for positron emission tomography using tetrahedral voxels
- Author
-
Georgios I. Angelis, Steven R. Meikle, and John E. Gillam
- Subjects
Movement, Point cloud, Iterative reconstruction, Imaging, Three-Dimensional, Voxel, Image Processing, Computer-Assisted, Animals, Radiology, Nuclear Medicine and imaging, Computer vision, Mathematics, Motion compensation, Radiological and Ultrasound Technology, Phantoms, Imaging, Rats, Positron emission tomography, Tetrahedron, Tomography, Algorithms
- Abstract
Image space decompositions based on tetrahedral voxels are interesting candidates for use in emission tomography. Tetrahedral voxels provide many of the advantages of point clouds with irregular spacing, such as being intrinsically multi-resolution, yet they also serve as a volumetric partition of the image space and so are comparable to more standard cubic voxels. Additionally, non-rigid displacement fields can be applied to the tetrahedral mesh in a straightforward manner. So far, studies incorporating tetrahedral decomposition of the image space have concentrated on pre-calculated, node-based system matrix elements, which reduces the flexibility of the tetrahedral approach and the capacity to accurately define regions of interest. Here, a list-mode, on-the-fly calculation of the system matrix elements is described using a tetrahedral decomposition of the image space into volumetric elements (voxels). The algorithm is demonstrated in the context of awake animal PET, which may require both rigid and non-rigid motion compensation, as well as quantification within small regions of the brain. This approach allows accurate, event-based motion compensation, including non-rigid deformations.
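The geometric core of an on-the-fly, line-integral system-matrix element for a tetrahedral voxel is the chord length of the event's line of response through the tetrahedron. A sketch (my own illustration, not the paper's implementation) computes it by clipping the ray's parameter interval against the four face half-spaces:

```python
import numpy as np

def chord_length(verts, origin, direction):
    """Length of the intersection of a line with a tetrahedron,
    found by clipping the line's parameter interval against the
    four inward-facing half-spaces of the tetrahedron's faces."""
    verts = np.asarray(verts, float)
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_lo, t_hi = -np.inf, np.inf
    for i in range(4):
        face = np.delete(np.arange(4), i)
        a = verts[face[0]]
        n = np.cross(verts[face[1]] - a, verts[face[2]] - a)
        if np.dot(n, verts[i] - a) < 0:      # orient the normal toward
            n = -n                           # the omitted vertex (inward)
        nd, no = np.dot(n, d), np.dot(n, o - a)
        if abs(nd) < 1e-12:
            if no < 0:
                return 0.0                   # parallel and outside
        elif nd > 0:
            t_lo = max(t_lo, -no / nd)
        else:
            t_hi = min(t_hi, -no / nd)
    return max(0.0, t_hi - t_lo) * np.linalg.norm(d)

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(chord_length(tet, (-1, 0.1, 0.1), (1, 0, 0)))  # 0.8
```

Applying a (possibly non-rigid) displacement field to the vertices before this computation moves the voxels with the animal, which is what makes event-based motion compensation straightforward in this representation.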
- Published
- 2016
6. Monte-Carlo simulations and image reconstruction for novel imaging scenarios in emission tomography
- Author
-
John E. Gillam and Magdalena Rafecas
- Subjects
Positron emission tomography, Nuclear and High Energy Physics, Single photon emission tomography, Monte Carlo method, Iterative reconstruction, Data acquisition, Medical physics, Instrumentation, Physics, Monte-Carlo simulations, PET, SPECT, Image reconstruction, Tomography
- Abstract
Emission imaging incorporates both the development of dedicated devices for data acquisition and algorithms for recovering images from the acquired data. Emission tomography is an indirect approach to imaging. The effect of device modification on the final image can be understood through both the way in which data are gathered, using simulation, and the way in which the image is formed from that data, i.e. image reconstruction. When developing novel devices, systems and imaging tasks, accurate simulation and image reconstruction allow performance to be estimated, and in some cases optimized, using computational methods before or during the process of physical construction. However, there is a vast range of approaches, algorithms and pre-existing computational tools that can be exploited, and the choices made will affect the accuracy of the in silico results and the quality of the reconstructed images. On the one hand, should important physical effects be neglected in either the simulation or reconstruction steps, specific enhancements provided by novel devices may not be represented in the results. On the other hand, over-modeling of device characteristics in either step leads to large computational overheads that can confound timely results. Here, a range of simulation methodologies and toolkits are discussed, as well as reconstruction algorithms that may be employed in emission imaging. The relative advantages and disadvantages of a range of options are highlighted using specific examples from current research scenarios.
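At the heart of every photon-tracking Monte Carlo toolkit is inverse-CDF sampling of the photon's free path from the exponential attenuation law. A minimal sketch (the attenuation coefficient for water at 511 keV is approximate):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.096                      # linear attenuation coefficient of water
                                # at 511 keV, cm^-1 (approximate value)
u = rng.random(200_000)
paths = -np.log(u) / mu         # inverse-CDF sample of the free path
print(paths.mean())             # ~ 1/mu, i.e. roughly 10.4 cm
```

Full toolkits such as GATE layer interaction physics, geometry navigation and detector response on top of this elementary step, which is where the accuracy-versus-runtime trade-offs discussed above arise.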
- Published
- 2016
- Full Text
- View/download PDF
7. An investigation of 68Ga positron range correction through de-blurring: A simulation study
- Author
-
Rukiah A L, John E. Gillam, Peter L. Kench, and Steven R. Meikle
- Subjects
Materials science, Image quality, Reconstruction algorithm, Imaging phantom, Positron, Kernel (image processing), Positron emission tomography, Image noise, Image resolution, Biomedical engineering
- Abstract
Image quality of 68Ga Positron Emission Tomography (PET) studies is impacted by its relatively high positron range compared with 18F, which limits spatial resolution. This simulation study describes a post-reconstruction 2-dimensional (2D) positron range correction (post-PRC) for 68Ga using the Richardson-Lucy (RL) de-blurring method. The proposed method is based on 2D de-blurring kernels derived in bone, soft-tissue and lung media; the kernels were applied to the simulated image of the NEMA Image Quality phantom geometry with a target-to-background ratio of 8:1. The post-PRC method can recover the spatial resolution loss by approximately 1%, 9% and 56% in bone, water and lung, respectively. However, the proposed de-blurring method increases image noise, particularly in the lung medium. This convenient post-processing approach does not require raw data or modification to the reconstruction algorithm, allowing it to be applied to the reconstructed volumes of any PET scanner.
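The Richardson-Lucy update multiplies the current estimate by the correlation of the kernel with the ratio of the measured image to the re-blurred estimate. A 1D sketch with a stand-in Gaussian kernel (the paper uses 2D, medium-dependent positron-range kernels):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Richardson-Lucy de-blurring: iteratively multiply the estimate
    by the back-correlated ratio of data to re-blurred estimate."""
    psf = psf / psf.sum()
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf[::-1], mode="same")
    return est

truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)   # stand-in Gaussian
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
recovered = richardson_lucy(blurred, psf)
print(recovered[20] > blurred[20])   # point source sharpened toward truth
```

As the abstract notes, the same iteration amplifies noise on real data, so the number of iterations (or a regularised variant) must be chosen with care.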
- Published
- 2018
8. Evaluation of resistive-plate-chamber-based TOF-PET applied to in-beam particle therapy monitoring
- Author
-
Samuel España, F. Diblen, Fine Fiedler, John E. Gillam, Magdalena Rafecas, P. Solevi, D. Watts, H. Rohling, Stefaan Vandenberghe, and I. Torres-Espallardo
- Subjects
Scanner, Technology and Engineering, Materials science, Particle therapy, Range deviation, Positron, Optics, Radiation Monitoring, Proton Therapy, In-beam, Radiology, Nuclear Medicine and imaging, Sensitivity (control systems), Partial-ring, Radiological and Ultrasound Technology, TOF, Detector, Time of flight, PET, Positron-Emission Tomography, RPC, TERA
- Abstract
Particle therapy is a highly conformal radiotherapy technique which reduces the dose deposited to the surrounding normal tissues. In order to fully exploit its advantages, treatment monitoring is necessary to minimize uncertainties related to the dose delivery. Up to now, the only clinically feasible technique for the monitoring of therapeutic irradiation with particle beams is Positron Emission Tomography (PET). In this work we have compared a Resistive Plate Chamber (RPC)-based PET scanner with a scintillation-crystal-based PET scanner for this application. In general, the main advantages of the RPC-PET system are its excellent timing resolution, low cost, and the possibility of building large area systems. We simulated a partial-ring scanner based on an RPC prototype under construction within the Fondazione per Adroterapia Oncologica (TERA). For comparison with the crystal-based PET scanner we have chosen the geometry of a commercially available PET scanner, the Philips Gemini TF. The coincidence time resolution used in the simulations takes into account the current achievable values as well as expected improvements of both technologies. Several scenarios (including patient data) have been simulated to evaluate the performance of different scanners. Initial results have shown that the low sensitivity of the RPC hampers its application to hadron-beam monitoring, which has an intrinsically low positron yield compared to diagnostic PET. In addition, for in-beam PET there is a further data loss due to the partial ring configuration. In order to improve the performance of the RPC-based scanner, an improved version of the RPC detector (modifying the thickness of the gas and glass layers), providing a larger sensitivity, has been simulated and compared with an axially extended version of the crystal-based device. The improved version of the RPC shows better performance than the prototype, but the extended version of the crystal-based PET outperforms all other options.
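The appeal of RPC timing is easy to quantify: a coincidence timing resolution Δt localises the annihilation along the LOR to a segment of width Δx = c·Δt/2. A sketch (the quoted timing values are illustrative, not taken from the paper):

```python
C = 29.98   # speed of light, cm/ns

def tof_position_fwhm(coincidence_time_fwhm_ps):
    """Localisation FWHM along the LOR from the coincidence
    timing resolution: delta_x = c * delta_t / 2."""
    dt_ns = coincidence_time_fwhm_ps * 1e-3
    return C * dt_ns / 2.0

print(tof_position_fwhm(100))   # 100 ps  -> ~1.5 cm segment on the LOR
print(tof_position_fwhm(600))   # 600 ps  -> ~9 cm (crystal TOF-PET scale)
```

This is why excellent timing alone cannot rescue the RPC design here: the simulations show that sensitivity, not localisation, is the limiting factor for the low positron yields of hadron-beam monitoring.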
- Published
- 2015
9. Noise evaluation of Compton camera imaging for proton therapy
- Author
-
Paola Sala, Gabriela Llosa, John E. Gillam, Josep F. Oliver, I. Torres-Espallardo, Carlos Lacasta, F. Cerutti, Arnaud Ferrari, Pablo G. Ortega, P. Solevi, and Magdalena Rafecas
- Subjects
Diagnostic Imaging, Photon, Image quality, Compton telescope, Image processing, Iterative reconstruction, Signal-To-Noise Ratio, Imaging phantom, Optics, Image Processing, Computer-Assisted, Proton Therapy, Humans, Computer Simulation, Gamma Cameras, Radiology, Nuclear Medicine and imaging, Probability, Neutrons, Physics, Photons, Particle therapy, Radiological and Ultrasound Technology, Phantoms, Imaging, Compton scattering, Monte Carlo Method, Algorithms
- Abstract
Compton cameras have emerged as an alternative technique for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is generally not absorbed. However, the second option is less efficient. This motivates spectral reconstructions, where the incoming γ energy is treated as a variable in the reconstruction inverse problem. Along with prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, leading to an increase in random coincidences, i.e. events that combine measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented.
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The particle range is then estimated from the reconstructed image obtained from two- and three-event algorithms based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam, which affects the rate of particles entering the detector, is included in the simulations.
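For a two-interaction event, the half-opening angle of the Compton cone follows directly from the deposited energies, under the stated assumption that the photon is fully absorbed in the second interaction (the assumption that often fails at PT energies, as noted above). A sketch with illustrative numbers:

```python
import math

M_E_C2 = 0.511  # electron rest energy, MeV

def compton_cone_angle(e_deposited, e_total):
    """Half-opening angle (radians) of the Compton cone for a
    two-interaction event, assuming full absorption in the second
    detector so that e_total = scatter deposit + absorbed energy."""
    e_scattered = e_total - e_deposited
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
    return math.acos(cos_theta)

# A 2 MeV prompt gamma depositing 0.5 MeV in the scatterer:
print(math.degrees(compton_cone_angle(0.5, 2.0)))  # ~23.8 degrees
```

When full absorption cannot be assumed, the incoming energy becomes an unknown, which is exactly what the spectral reconstruction mentioned in the abstract resolves.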
- Published
- 2015
10. Rigid motion correction of dual opposed planar projections in single photon imaging
- Author
-
Peter L. Kench, William J. Ryder, Georgios I. Angelis, Frederic Boisson, John E. Gillam, Andre Kyme, Roger Fulton, and Steven R. Meikle
- Subjects
Planar Imaging, Planar projection, Movement, Geometry, Iterative reconstruction, Motion estimation, Image Processing, Computer-Assisted, Radiology, Nuclear Medicine and imaging, Motion correction, Projection (set theory), Mathematics, Tomography, Emission-Computed, Single-Photon, Radiological and Ultrasound Technology, Phantoms, Imaging, Rotation around a fixed axis, Reconstruction algorithm, Linear motion, Awake animal imaging, Artifacts, Monte Carlo Method, Algorithms
- Abstract
Awake and/or freely moving small animal single photon emission imaging allows the continuous study of molecules exhibiting slow kinetics without the need to restrain or anaesthetise the animals. Estimating motion-free projections in freely moving small animal planar imaging can be considered a limited-angle tomography problem, except that we wish to estimate the 2D planar projections rather than the 3D volume, and the angular sampling in all three axes depends on the rotational motion of the animal. In this study, we hypothesise that the motion-corrected planar projections estimated by reconstructing an estimate of the 3D volume using an iterative motion-compensating reconstruction algorithm, and integrating it along the projection path, will closely match the true, motionless planar distribution regardless of the object motion. We tested this hypothesis for the case of rigid motion using Monte-Carlo simulations and experimental phantom data based on a dual opposed detector system, where object motion was modelled with 6 degrees of freedom. In addition, we investigated the quantitative accuracy of the regional activity extracted from the geometric mean of opposing motion-corrected planar projections. Results showed that it is feasible to estimate qualitatively accurate motion-corrected projections for a wide range of motions around all three axes. Errors in the geometric mean estimates of regional activity were relatively small and within 10% of expected true values. In addition, quantitative regional errors were dependent on the observed motion, as well as on the surrounding activity of overlapping organs. We conclude that both qualitatively and quantitatively accurate motion-free projections of the tracer distribution in a rigidly moving object can be estimated from dual opposed detectors using a correction approach within an iterative reconstruction framework, and we expect this approach can be extended to the case of non-rigid motion.
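The geometric mean of opposed projections is quantitatively attractive because attenuation cancels: for a source at depth d in a slab of thickness D, the anterior and posterior counts each depend on d, but their geometric mean depends only on D. A sketch (the attenuation coefficient is illustrative):

```python
import math

MU = 0.15   # effective linear attenuation coefficient, cm^-1 (illustrative)

def geometric_mean_counts(a0, depth, thickness):
    """Counts seen by two opposed detectors for a source at `depth`
    inside an attenuating slab of `thickness`; the geometric mean
    sqrt(anterior * posterior) is independent of the source depth."""
    anterior = a0 * math.exp(-MU * depth)
    posterior = a0 * math.exp(-MU * (thickness - depth))
    return math.sqrt(anterior * posterior)

# The same activity at two different depths gives the same geometric mean:
print(geometric_mean_counts(1000, 1.0, 4.0))  # ~740.8
print(geometric_mean_counts(1000, 3.0, 4.0))  # ~740.8
```

The residual regional errors reported above come from effects this idealisation ignores, such as overlapping organs contributing activity along the same projection path.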
- Published
- 2017
11. Motion compensation using origin ensembles in awake small animal positron emission tomography
- Author
-
John E. Gillam, Georgios I. Angelis, Steven R. Meikle, and Andre Kyme
- Subjects
Computer science, Movement, Posterior probability, Voxel, Image Processing, Computer-Assisted, Animals, Radiology, Nuclear Medicine and imaging, Computer vision, Tissue Distribution, Motion compensation, Tomographic reconstruction, Radiological and Ultrasound Technology, Phantoms, Imaging, Pattern recognition, Covariance, Models, Theoretical, Positron-Emission Tomography, Artificial intelligence, Radiopharmaceuticals, Algorithms
- Abstract
In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density, allowing uncertainty estimates to be allocated to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies, for which head motion, unique to each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in the pose measurements used for compensation can degrade the data and hence quantitative outcomes. In this investigation, motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm and an efficient approach to computation was developed. The approach was validated against maximum-likelihood expectation-maximisation and tested using simulated data. The resultant algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest, providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating statistical uncertainty of voxel and regional estimates. While in this investigation rigid motion was considered in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.
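The origin ensembles idea can be sketched with a toy Metropolis sampler: each event's origin hops between the voxels its LOR intersects, and the collection of post-burn-in samples gives a posterior over count densities. This is a heavily simplified sketch (uniform sensitivity, uniform within-LOR probabilities, and the commonly used (n_j+1)/n_i acceptance ratio); the paper additionally folds motion and detector-resolution models into the candidate probabilities:

```python
import numpy as np

rng = np.random.default_rng(2)

def origin_ensembles(candidates, n_voxels, n_sweeps=2000, burn_in=500):
    """Toy origin-ensembles sampler: propose moving one event's origin
    to another candidate voxel on its LOR and accept with the
    Metropolis ratio (n_j + 1) / n_i over voxel occupation counts."""
    origin = np.array([c[0] for c in candidates])
    counts = np.bincount(origin, minlength=n_voxels)
    samples = []
    for sweep in range(n_sweeps):
        for e in range(len(candidates)):
            i = origin[e]
            j = candidates[e][rng.integers(len(candidates[e]))]
            if j == i:
                continue
            if rng.random() < (counts[j] + 1) / counts[i]:
                counts[i] -= 1; counts[j] += 1; origin[e] = j
        if sweep >= burn_in:
            samples.append(counts.copy())
    return np.array(samples)

# 3-voxel toy problem: most events could have originated in voxel 1
cands = [[0, 1]] * 10 + [[1, 2]] * 10 + [[1]] * 5
samples = origin_ensembles(cands, 3)
mean, sd = samples.mean(0), samples.std(0)  # estimate and its uncertainty
print(mean.sum())                            # 25.0 (counts are conserved)
```

The per-voxel standard deviations `sd`, and the covariance across samples, are exactly the kind of single-data-set uncertainty and inter-region correlation information the abstract describes.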
- Published
- 2017
12. A low energy bound atomic electron Compton scattering model for Geant4
- Author
-
David M. Paganin, John E. Gillam, Jeremy M. C. Brown, and Matthew Richard Dimmock
- Subjects
Elastic scattering, Physics, Nuclear and High Energy Physics, Photon, Scattering, Monte Carlo method, Compton scattering, Electron, Inelastic scattering, Two-body problem, Computational physics, Atomic physics, Instrumentation
- Abstract
A two-body fully relativistic three-dimensional scattering framework has been utilised to develop an alternative Compton scattering computational model to those adapted from Ribberfors' work for Monte Carlo modelling of Compton scattering. Using a theoretical foundation that ensures the conservation of energy and momentum in the relativistic impulse approximation, this new model, the Monash University Compton scattering model, develops energy and directional algorithms for both the scattered photon and ejected Compton electron from first principles. The Monash University Compton scattering model was developed to address the limitation of the Compton electron directionality algorithms of other computational models adapted from Ribberfors' work. Here the development of the Monash University Compton scattering model, including its implementation in a Geant4 low energy electromagnetic physics class, G4LowEPComptonModel, is outlined. Assessment of the performance of G4LowEPComptonModel was undertaken in two steps: (1) comparison with respect to the two standard Compton scattering classes of Geant4 version 9.5, G4LivermoreComptonModel and G4PenelopeComptonModel, and (2) experimental comparison with respect to Compton electron kinetic energy spectra obtained from the Compton scattering of 662 keV photons off the K-shell of gold. Both studies illustrate that the Monash University Compton scattering model, and in turn G4LowEPComptonModel, is a viable replacement for the majority of computational models that have been adapted from Ribberfors' work. It was also shown that the Monash University Compton scattering model is able to reproduce the triply differential cross-section Compton electron kinetic energy spectra of 662 keV photons scattering off the K-shell of gold to within experimental uncertainty.
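The baseline that any Compton model must reduce to is the free-electron Compton relation, which fixes the scattered photon energy (and hence the electron kinetic energy) from the scattering angle; electron binding, the focus of the model above, smears this relation. A sketch of the free-electron limit:

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def scattered_photon_energy(e_in_kev, theta_rad):
    """Free-electron Compton relation for the scattered photon energy."""
    return e_in_kev / (1.0 + (e_in_kev / M_E_C2) * (1.0 - math.cos(theta_rad)))

e_scat = scattered_photon_energy(662.0, math.pi / 2)  # 662 keV at 90 degrees
print(e_scat)            # ~288.4 keV scattered photon
print(662.0 - e_scat)    # ~373.6 keV electron kinetic energy
```

In the relativistic impulse approximation, the bound electron's pre-collision momentum broadens these single-valued energies into the measured spectra, which is what the K-shell gold comparison tests.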
- Published
- 2014
13. Study of a high-resolution PET system using a Silicon detector probe
- Author
-
John E. Gillam, K Brzeziński, Magdalena Rafecas, and Josep F. Oliver
- Subjects
Point spread function, Silicon, Materials science, Radiological and Ultrasound Technology, Phantoms, Imaging, Image quality, Detector, Field of view, Iterative reconstruction, Imaging phantom, Optics, Positron-Emission Tomography, Image Processing, Computer-Assisted, Radiology, Nuclear Medicine and imaging, Nuclear medicine, Pixelization, Image resolution, Algorithms
- Abstract
A high-resolution silicon detector probe, in coincidence with a conventional PET scanner, is expected to provide images of higher quality than those achievable using the scanner alone. Spatial resolution should improve due to the finer pixelization of the probe detector, while increased sensitivity in the probe vicinity is expected to decrease noise. A PET-probe prototype is being developed utilizing this principle. The system includes a probe consisting of ten layers of silicon detectors, each a 80 × 52 array of 1 × 1 × 1 mm³ pixels, to be operated in coincidence with a modern clinical PET scanner. Detailed simulation studies of this system have been performed to assess the effect of the additional probe information on the quality of the reconstructed images. A grid of point sources was simulated to study the contribution of the probe to the system resolution at different locations over the field of view (FOV). A resolution phantom was used to demonstrate the effect on image resolution for two probe positions. A homogeneous source distribution with hot and cold regions was used to demonstrate that the localized improvement in resolution does not come at the expense of the overall quality of the image. Since the improvement is constrained to an area close to the probe, breast imaging is proposed as a potential application for the novel geometry. In this sense, a simplified breast phantom, adjacent to heart and torso compartments, was simulated and the effect of the probe on lesion detectability, through measurements of the local contrast recovery coefficient-to-noise ratio (CNR), was observed. The list-mode ML-EM algorithm was used for image reconstruction in all cases. As expected, the point spread function of the PET-probe system was found to be non-isotropic and vary with position, offering improvement in specific regions. Increase in resolution, of factors of up to 2, was observed in the region close to the probe.
Images of the resolution phantom showed visible improvement in resolution when including the probe in the simulations. The image quality study demonstrated that contrast and spill-over ratio in other areas of the FOV were not sacrificed for this enhancement. The CNR study performed on the breast phantom indicates increased lesion detectability provided by the probe.
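All reconstructions in the abstract above use list-mode ML-EM. As a minimal sketch of that update rule (not the authors' implementation; the event rows and sensitivity image are toy stand-ins for a real system model):

```python
import numpy as np

def list_mode_mlem(event_rows, sensitivity, n_voxels, n_iters=20):
    """List-mode ML-EM. event_rows[i] holds the system-matrix row p_i
    (length n_voxels) for detected event i; sensitivity[j] is the total
    detection probability of voxel j over all possible LoRs."""
    x = np.ones(n_voxels)
    for _ in range(n_iters):
        back = np.zeros(n_voxels)
        for p in event_rows:
            fp = p @ x                  # forward-project current estimate
            if fp > 0:
                back += p / fp          # back-project the unit event count
        x *= back / np.maximum(sensitivity, 1e-12)
    return x
```

In a probe-augmented system, coincidences involving the probe simply carry narrower rows p_i, which is one way to picture how the finer pixelization sharpens the estimate locally.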
- Published
- 2014
14. Sensitivity recovery for the AX-PET prototype using inter-crystal scattering events
- Author
-
Matthieu Heller, C. Casella, Magdalena Rafecas, Christian Joram, P. Solevi, John E. Gillam, and Josep F. Oliver
- Subjects
Photons ,Radiological and Ultrasound Technology ,Phantoms, Imaging ,Computer science ,Scattering ,Iterative reconstruction ,Sensitivity and Specificity ,Imaging phantom ,Signal-to-noise ratio ,Positron-Emission Tomography ,Image Processing, Computer-Assisted ,Scattering, Radiation ,Radiology, Nuclear Medicine and imaging ,Tomography ,Sensitivity (control systems) ,Algorithm ,Algorithms ,Simulation - Abstract
The development of novel detection devices and systems such as the AX-positron emission tomography (PET) demonstrator often introduces or increases the measurement of atypical coincidence events such as inter-crystal scattering (ICS). In more standard systems, ICS events often go undetected and the small measured fraction may be ignored. As the measured quantity of such events in the data increases, so too does the importance of considering them during image reconstruction. Generally, treatment of ICS events attempts to determine which of the possible candidate lines of response (LoRs) correctly describes the annihilation photon trajectory. However, methods of assessment often have low success rates or are computationally demanding. In this investigation alternative approaches are considered. Experimental data were taken using the AX-PET prototype and a NEMA phantom. Three methods of ICS treatment were assessed, each of which considered all possible candidate LoRs during image reconstruction. Maximum-likelihood expectation-maximization was used in conjunction with both standard (line-like) and novel (V-like, in this investigation) detection responses modeled within the system matrix. The investigation assumed that no information other than interaction locations was available to distinguish between candidates, yet all of the methods assessed provided means by which such information could be included. In all cases it was shown that the signal-to-noise ratio is increased using ICS events. However, only one method, the V-like model, which fully modeled the ICS response in the system matrix, provided enhancement in all figures of merit assessed in this investigation. Finally, the optimal method of ICS incorporation was demonstrated using data from two small animals measured using the AX-PET demonstrator.
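One way to keep all candidate LoRs of an ICS event, in the spirit of the V-like model above, is to fold the candidates into a single combined detection-response row of the system matrix. A minimal sketch (hypothetical helper; uniform weights stand in for any prior information that could distinguish candidates):

```python
import numpy as np

def ics_event_row(candidate_rows, weights=None):
    """Build one detection-response row for an ICS event by combining all
    candidate LoR rows rather than selecting a single candidate; the
    combined row can then be used in ML-EM like any other event row."""
    candidate_rows = np.asarray(candidate_rows, dtype=float)
    if weights is None:
        weights = np.full(len(candidate_rows), 1.0 / len(candidate_rows))
    return np.asarray(weights) @ candidate_rows
```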
- Published
- 2014
15. Laplacian Erosion: An Image Deblurring Technique for Multi-Plane Gamma-Cameras
- Author
-
Matthew Richard Dimmock, John E. Gillam, David M. Paganin, and Jeremy M. C. Brown
- Subjects
Physics ,Nuclear and High Energy Physics ,Deblurring ,Planar Imaging ,Image quality ,business.industry ,Monte Carlo method ,Detector ,Astrophysics::Instrumentation and Methods for Astrophysics ,Imaging phantom ,Optics ,Nuclear Energy and Engineering ,Computer Science::Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,business ,Laplace operator ,Image restoration - Abstract
Laplacian Erosion, an image deblurring technique for multi-plane Gamma-cameras, has been developed and tested for planar imaging using a GEANT4 Monte Carlo model of the Pixelated Emission Detector for RadioisOtopes (PEDRO) as a test platform. A contrast phantom and a Derenzo-like phantom, both composed of 125I, were employed to investigate how the detection-plane offset and pinhole geometry affect the performance of Laplacian Erosion. Three different pinhole geometries were tested. It was found that, for the test system, the performance of Laplacian Erosion was inversely proportional to the detection-plane offset and directly proportional to the pinhole diameter. All tested pinhole geometries saw a reduction in the level of image blurring associated with the pinhole geometry. However, the reduction in image blurring came at the cost of signal-to-noise ratio in the image. The application of Laplacian Erosion was shown to reduce the level of image blurring associated with pinhole geometry and improve recovered image quality in multi-plane Gamma-cameras for the targeted radiotracer 125I.
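The abstract does not spell out the Laplacian Erosion algorithm itself. As a generic illustration of Laplacian-based deblurring of the kind alluded to (explicitly not the published method), blurred edges can be steepened by subtracting a scaled discrete Laplacian:

```python
import numpy as np

def laplacian_sharpen(img, strength=0.5):
    """Subtract a scaled discrete (periodic-boundary) Laplacian to
    steepen blurred edges. Illustrative only: generic Laplacian
    deblurring, not the published Laplacian Erosion algorithm."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return img - strength * lap
```

The noise cost noted in the abstract is visible in this generic form too: the Laplacian amplifies pixel-to-pixel fluctuations along with edges.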
- Published
- 2013
16. PET Reconstruction From Truncated Projections Using Total-Variation Regularization for Hadron Therapy Monitoring
- Author
-
I. Torres-Espallardo, John E. Gillam, Magdalena Rafecas, and Jorge Cabello
- Subjects
Physics ,Nuclear and High Energy Physics ,PET-CT ,Nuclear Energy and Engineering ,Image quality ,Dose profile ,Dosimetry ,Bragg peak ,Iterative reconstruction ,Electrical and Electronic Engineering ,Total variation denoising ,Image resolution ,Biomedical engineering - Abstract
Hadron therapy exploits the properties of ion beams to treat tumors by maximizing the dose released to the target and sparing healthy tissue. With hadron beams, the dose distribution shows a relatively low entrance dose which rises sharply at the end of the range, providing the characteristic Bragg peak that drops quickly thereafter. Knowing where the delivered dose profile ends, i.e. the location of the Bragg peak, is of critical importance in order not to damage surrounding healthy tissue and to avoid underdosage of the target. During hadron therapy, short-lived β+-emitters are produced along the beam path, their distribution being correlated with the delivered dose. Following positron annihilation, two photons are emitted, which can be detected using a positron emission tomography (PET) scanner. The low yield of emitters, their short half-life, and the wash-out from the target region make the use of PET, even only a few minutes after hadron irradiation, a challenging application. In-beam PET represents a potential candidate to estimate the distribution of β+-emitters during or immediately after irradiation, at the cost of truncation effects and degraded image quality due to the partial rings required of the PET scanner. Time-of-flight (ToF) information can potentially be used to compensate for truncation effects and to enhance image contrast. However, the highly demanding timing performance required in ToF-PET makes this option costly. Alternatively, the use of maximum-a-posteriori expectation-maximization (MAP-EM), including total variation (TV) in the cost function, produces images with low noise while preserving spatial resolution. In this paper, we compare data reconstructed with maximum-likelihood expectation-maximization (ML-EM) and MAP-EM using TV as prior, and the impact of including ToF information, from data acquired with a complete and a partial-ring PET scanner, of simulated hadron beams interacting with a polymethyl methacrylate (PMMA) target.
The results show that, in the absence of ToF information, MAP-EM produces images with lower noise and closer agreement with the simulated β+ distributions than ML-EM with ToF information on the order of 200-600 ps. The investigation is extended to the combination of MAP-EM and ToF information to study the limit of performance using both approaches.
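A TV prior is commonly folded into the EM update with a one-step-late scheme, in which the gradient of a smoothed TV penalty, evaluated at the current estimate, enters the ML-EM denominator. A self-contained 1D sketch under that assumption (the paper's exact scheme may differ):

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of a smoothed total-variation penalty for a 1D image."""
    d = np.diff(x)
    t = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= t
    g[1:] += t
    return g

def map_em_tv(A, y, beta=0.1, n_iters=50):
    """One-step-late MAP-EM with a TV prior: beta controls the strength
    of the penalty; beta = 0 reduces to plain ML-EM."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens + beta * tv_gradient(x), 1e-12)
    return x
```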
- Published
- 2013
17. The AX-PET experiment: A demonstrator for an axial Positron Emission Tomograph
- Author
-
P. Weilhammer, C. Casella, Matthieu Heller, E. Nappi, Viviana Fanti, Uygar Tuna, John E. Gillam, A. Rudge, C. Joram, Thomas Schneider, Günther Dissertori, Steinar Stapnes, Werner Lustermann, Felicitas Pauss, Magdalena Rafecas, D. Schinzel, Jacques Séguinot, Josep F. Oliver, E. Chesi, E. Bolle, R. De Leo, Ulla Ruotsalainen, and P. Solevi
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.medical_specialty ,business.industry ,Detector ,Photodetector ,Image processing ,Iterative reconstruction ,Particle detector ,Lyso ,Semiconductor detector ,Optics ,Silicon photomultiplier ,medicine ,Medical physics ,business ,Instrumentation - Abstract
AX-PET is a novel geometrical concept for a high resolution and high sensitivity PET scanner, based on an axial arrangement of long scintillating crystals in the tomograph, yielding a parallax-free PET detector. Two identical AX-PET modules – consisting of matrices of LYSO crystals interleaved with WLS strips – have been built. They form the AX-PET Demonstrator, which has been extensively characterized and successfully used for the reconstruction of images of several phantoms. In this paper we report on the current status of the project, with emphasis on the most relevant results achieved both in terms of detector characterization and image reconstruction. We also discuss recent preliminary results obtained with the digital SiPM from Philips (dSiPM), which is currently being tested as a possible alternative photodetector for AX-PET. With their very good intrinsic time resolution, dSiPMs could add Time-of-Flight capability to the AX-PET concept.
- Published
- 2013
18. A Monte-Carlo based model of the AX-PET demonstrator and its experimental validation
- Author
-
Josep F. Oliver, P. Weilhammer, Matthieu Heller, Uygar Tuna, Steinar Stapnes, A. Rudge, Thomas Schneider, Viviana Fanti, Marta Lai, Ulla Ruotsalainen, E. Bolle, Jacques Séguinot, John E. Gillam, P. Solevi, Magdalena Rafecas, Werner Lustermann, C. Joram, R. De Leo, E. Chesi, Felicitas Pauss, C. Casella, E. Nappi, Günther Dissertori, and D. Schinzel
- Subjects
Photon ,Radiological and Ultrasound Technology ,Physics::Instrumentation and Detectors ,010308 nuclear & particles physics ,Computer science ,Detector ,Monte Carlo method ,Wavelength shifter ,01 natural sciences ,030218 nuclear medicine & medical imaging ,Computational science ,03 medical and health sciences ,0302 clinical medicine ,Positron-Emission Tomography ,0103 physical sciences ,Radiology, Nuclear Medicine and imaging ,Sensitivity (control systems) ,Parallax ,Monte Carlo Method ,Image resolution ,Simulation - Abstract
AX-PET is a novel PET detector based on axially oriented crystals and orthogonal wavelength shifter (WLS) strips, both individually read out by silicon photo-multipliers. Its design decouples sensitivity and spatial resolution by reducing the parallax error, thanks to the layered arrangement of the crystals. Additionally, the granularity of AX-PET enhances the capability to track photons within the detector, yielding a large fraction of inter-crystal scatter events. These events, if properly processed, can be included in the reconstruction stage, further increasing the sensitivity. Its unique features require dedicated Monte-Carlo simulations, enabling the development of the device, the interpretation of data and the development of reconstruction codes. At the same time, the non-conventional design of AX-PET poses several challenges to the simulation and modeling tasks, mostly related to the light transport and distribution within the crystals and WLS strips, as well as the electronics readout. In this work we present a hybrid simulation tool based on an analytical model and a Monte-Carlo based description of the AX-PET demonstrator. It was extensively validated against experimental data, providing excellent agreement.
- Published
- 2013
19. First Compton telescope prototype based on continuous LaBr3-SiPM detectors
- Author
-
L. Raux, Gabriela Llosa, Carlos Lacasta, S. Callier, Jorge Cabello, Carles Solaz, John E. Gillam, M. Trovato, V. Stankova, John Barrio, C. De La Taille, and Magdalena Rafecas
- Subjects
Nuclear and High Energy Physics ,medicine.medical_specialty ,Compton telescope ,Integrated circuit ,Scintillator ,01 natural sciences ,7. Clean energy ,Lyso ,030218 nuclear medicine & medical imaging ,law.invention ,03 medical and health sciences ,0302 clinical medicine ,Silicon photomultiplier ,Optics ,law ,0103 physical sciences ,medicine ,Medical physics ,Instrumentation ,Image resolution ,Physics ,010308 nuclear & particles physics ,business.industry ,Detector ,Compton scattering ,business - Abstract
A first prototype of a Compton camera based on continuous scintillator crystals coupled to silicon photomultiplier (SiPM) arrays has been successfully developed and operated. The prototype is made of two detector planes. The first detector is made of a continuous 16×18×5 mm³ LaBr3 crystal coupled to a 16-element SiPM array. The elements have a size of 3×3 mm² in a 4.5×4.05 mm² pitch. The second detector, selected by availability, consists of a continuous 16×18×5 mm³ LYSO crystal coupled to a similar SiPM array. The SPIROC1 ASIC is employed in the readout electronics. Data have been taken with a 22Na source placed at different positions and images have been reconstructed with the simulated one-pass list-mode (SOPL) algorithm. Detector development for the construction of a second prototype with three detector planes is underway. LaBr3 crystals of 32×36 mm² size and 5/10 mm thickness have been acquired and tested with a PMT. The resolution obtained is 3.5% FWHM at 511 keV. Each crystal will be coupled to four MPPC arrays. Different options are being tested for the prototype readout.
- Published
- 2013
20. AX-PET: A novel PET concept with G-APD readout
- Author
-
Thomas Schneider, P. Weilhammer, John E. Gillam, Günther Dissertori, Felicitas Pauss, Josep F. Oliver, E. Bolle, Jacques Séguinot, D. Schinzel, Matthieu Heller, P. Solevi, Werner Lustermann, A. Rudge, Viviana Fanti, C. Casella, E. Nappi, Magdalena Rafecas, Uygar Tuna, R. De Leo, C. Joram, U. Ruotsalainen, Steinar Stapnes, and E. Chesi
- Subjects
Physics ,Nuclear and High Energy Physics ,Photon ,Physics::Instrumentation and Detectors ,business.industry ,Physics::Medical Physics ,Detector ,Compton scattering ,Field of view ,Lyso ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Optics ,Silicon photomultiplier ,030220 oncology & carcinogenesis ,Tomography ,business ,Instrumentation ,Image resolution - Abstract
The AX-PET collaboration has developed a novel concept for high resolution PET imaging to overcome some of the performance limitations of classical PET cameras, in particular the compromise between spatial resolution and sensitivity introduced by the parallax error. The detector consists of an arrangement of long LYSO scintillating crystals axially oriented around the field of view, together with arrays of wavelength shifter (WLS) strips orthogonal to the crystals. This matrix allows a precise 3D measurement of the photon interaction point. This is valid both for photoelectric absorption at 511 keV and for Compton scattering down to deposited energies of about 100 keV. Crystals and WLS strips are individually read out using Geiger-mode Avalanche Photo Diodes (G-APDs). The sensitivity of such a detector can be adjusted by changing the number of layers, and the resolution is defined by the crystal and strip dimensions. Two AX-PET modules were built and fully characterized in dedicated test set-ups at CERN, with point-like 22Na sources. Their performance in terms of energy resolution (Renergy ≈ 11.8% FWHM at 511 keV) and spatial resolution (σaxial ≈ 0.65 mm) was assessed, both individually and for the two modules in coincidence. Test campaigns at ETH Zurich and at the company AAA allowed the tomographic reconstruction of more complex phantoms, validating the 3D reconstruction algorithms. The concept of the AX-PET modules is presented together with some characterization results. We describe a count rate model which allows the planning of tomographic scans to be optimized.
- Published
- 2012
21. Direct regional quantification and uncertainty estimation using origin ensembles
- Author
-
Georgios I. Angelis, Steven R. Meikle, and John E. Gillam
- Subjects
Computer science ,business.industry ,Posterior probability ,Sampling (statistics) ,Pattern recognition ,Iterative reconstruction ,Image segmentation ,Covariance ,computer.software_genre ,Bayesian inference ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Voxel ,030220 oncology & carcinogenesis ,Segmentation ,Computer vision ,Artificial intelligence ,business ,computer - Abstract
Regional quantification is often considered the end-point of many studies in Positron Emission Tomography (PET). However, in neurological small animal imaging, features can be sufficiently small that device resolution and voxel granularity inhibit the definition of segmentation boundaries that closely conform to organs or regions of interest. Even if well defined, estimation of uncertainty over small regions is problematic, particularly given that in some cases, such as awake animal imaging, studies are unique and non-repeatable. The Origin Ensembles algorithm is an approach to image reconstruction based on Bayesian inference which utilises Monte-Carlo driven sampling to determine the posterior distribution of the emission counts given the measured data. The algorithm explores the full posterior and so can be used to estimate measures of uncertainty in quantitative parameters drawn from the data. However, being inference based, Origin Ensembles is sensitive to the dimension of the parameter space over which estimates are made. In this investigation the dimension of the image space is reduced using a regional segmentation based on an initial MLEM reconstruction of the detected data. Origin Ensembles is used to reconstruct data directly to the regions of interest over which quantification is desired, providing estimation of the full posterior distribution for each region. Origin Ensembles sampling allows the uncertainty in quantitative values to be determined both in terms of parameter variance and the covariance between recovered regional intensities. Direct regional reconstruction was employed in this study using simulated data and was shown to enhance the precision and accuracy of recovered values.
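A much-simplified sketch of direct regional sampling in the spirit of Origin Ensembles (illustrative only; the published acceptance rule differs): each event's origin is resampled among its candidate regions, and the post-burn-in region counts yield a posterior mean and spread per region.

```python
import numpy as np

def origin_ensembles_regions(cand_regions, cand_probs, n_regions,
                             n_sweeps=200, burn_in=50, seed=0):
    """Gibbs-style resampling of event origins over regions: event i may
    originate in any region of cand_regions[i], weighted by its detection
    probability cand_probs[i] times the current occupancy (a simplified
    stand-in for the published acceptance rule)."""
    rng = np.random.default_rng(seed)
    assign = np.array([r[int(np.argmax(p))]
                       for r, p in zip(cand_regions, cand_probs)])
    counts = np.bincount(assign, minlength=n_regions).astype(float)
    samples = []
    for sweep in range(n_sweeps):
        for i in range(len(assign)):
            r = np.asarray(cand_regions[i])
            counts[assign[i]] -= 1.0        # remove event i from its region
            w = np.asarray(cand_probs[i]) * (counts[r] + 1.0)
            assign[i] = r[rng.choice(len(r), p=w / w.sum())]
            counts[assign[i]] += 1.0        # reassign and restore the count
        if sweep >= burn_in:
            samples.append(counts.copy())
    samples = np.array(samples)
    return samples.mean(axis=0), samples.std(axis=0)
```

The per-region standard deviation returned here is the kind of direct uncertainty estimate the abstract describes, with no repeat acquisitions required.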
- Published
- 2016
22. Modelling the motion dependent point spread function in motion corrected small animal PET imaging
- Author
-
Roger Fulton, Steven R. Meikle, John E. Gillam, Georgios I. Angelis, and Andre Kyme
- Subjects
Physics ,Point spread function ,010308 nuclear & particles physics ,business.industry ,Reconstruction algorithm ,Iterative reconstruction ,01 natural sciences ,Motion (physics) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Motion estimation ,0103 physical sciences ,Computer vision ,Artificial intelligence ,Deconvolution ,business ,Parallax ,Image resolution - Abstract
Motion corrected images from awake and freely moving animals often exhibit reduced resolution when compared to their stationary counterparts. This could be attributed to the combination of brief periods of fast animal motion and insufficient motion sampling speed. In this paper we hypothesise that we can measure the motion dependent point spread function of a given study and mitigate the motion blurring artefacts in the reconstructed images, in a similar way that a measured system response point spread function can improve resolution degraded by geometric effects (e.g. parallax errors). We investigated this hypothesis on a set of experimentally measured phantom data, which underwent a series of distinctively different motion patterns, ranging from slow to fast. Preliminary results showed that motion corrected images have reduced resolution compared to the stationary image and noticeable motion blurring artefacts, particularly for fast speed/acceleration settings. In addition, images deconvolved after reconstruction with the measured motion dependent PSF appear sharper than their unprocessed counterparts, yet without completely eliminating the motion blurring artefacts. Work is in progress to refine the methodology by decomposing the geometric and motion components of the PSF, as well as by including the deconvolution within the reconstruction algorithm.
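Post-reconstruction deconvolution with a measured PSF, as described above, is often done with Richardson-Lucy iterations. A minimal 1D sketch with circular boundaries (an assumption for illustration, not the authors' exact pipeline):

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, n_iters=30):
    """Richardson-Lucy deconvolution in 1D with circular boundaries:
    iteratively multiply the estimate by the back-projected ratio of the
    measured profile to the re-blurred estimate."""
    n = len(blurred)
    h = np.zeros(n)
    h[:len(psf)] = np.asarray(psf, dtype=float) / np.sum(psf)
    H = np.fft.fft(h)
    x = np.full(n, float(np.mean(blurred)))
    for _ in range(n_iters):
        reblurred = np.real(np.fft.ifft(np.fft.fft(x) * H))
        ratio = np.asarray(blurred) / np.maximum(reblurred, 1e-12)
        # conj(H) applies the adjoint (correlation) of the blur
        x *= np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(H)))
    return x
```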
- Published
- 2016
23. Experimental evaluation of the resolution improvement provided by a silicon PET probe
- Author
-
Milan Grkovski, Magdalena Rafecas, S. Smith, Andrej Studen, K. Brzeziński, H. Kagan, Josep F. Oliver, John E. Gillam, Neal H. Clinthorne, Carlos Lacasta, Gabriela Llosa, and Research unit Medical Physics
- Subjects
Materials science ,Image quality ,PET PET/CT ,Medical-image reconstruction methods and algorithms ,Imaging phantom ,Article ,030218 nuclear medicine & medical imaging ,law.invention ,03 medical and health sciences ,0302 clinical medicine ,Optics ,Region of interest ,law ,computer-aided software ,Instrumentation ,Image resolution ,Gamma camera ,coronary CT angiography (CTA) ,Mathematical Physics ,Pixel ,business.industry ,Resolution (electron density) ,Detector ,030220 oncology & carcinogenesis ,SPECT ,business - Abstract
A high-resolution PET system, which incorporates a silicon detector probe into a conventional PET scanner, has been proposed to obtain increased image quality in a limited region of interest. Detailed simulation studies have previously shown that the additional probe information improves the spatial resolution of the reconstructed image and increases lesion detectability, with no cost to other image quality measures. The current study expands on the previous work by using a laboratory prototype of the silicon PET-probe system to examine the resolution improvement in an experimental setting. Two different versions of the probe prototype were assessed, both consisting of a back-to-back pair of 1-mm thick silicon pad detectors, one arranged as a 32 × 16 array of 1.4 mm × 1.4 mm pixels and the other as a 40 × 26 array of 1.0 mm × 1.0 mm pixels. Each detector was read out by a set of VATAGP7 ASICs and a custom-designed data acquisition board which allowed trigger and data interfacing with the PET scanner, itself consisting of BGO block detectors segmented into 8 × 6 arrays of 6 mm × 12 mm × 30 mm crystals. Limited-angle probe data were acquired from a group of Na-22 point-like sources in order to observe the maximum resolution achievable using the probe system. Data from a Derenzo-like resolution phantom were acquired, then scaled to obtain similar statistical quality to that of previous simulation studies. In this case, images were reconstructed using measurements of the PET ring alone and with the inclusion of the probe data. Images of the Na-22 source demonstrated a resolution of 1.5 mm FWHM in the probe data, the PET ring resolution being approximately 6 mm. Profiles taken through the image of the Derenzo-like phantom showed a clear increase in spatial resolution.
Improvements in peak-to-valley ratios of 50% and 38%, in the 4.8 mm and 4.0 mm phantom features respectively, were observed, while previously unresolvable 3.2 mm features were resolved by the addition of the probe. These results support the possibility of improving the image resolution of a clinical PET scanner using the silicon PET-probe.
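The peak-to-valley figure of merit quoted above can be computed from a line profile through the phantom features. A small sketch (the peak and valley sample indices are supplied by the analyst):

```python
import numpy as np

def peak_to_valley(profile, peak_idx, valley_idx):
    """Peak-to-valley ratio of a line profile: mean intensity at the
    peak samples divided by mean intensity at the valley samples."""
    profile = np.asarray(profile, dtype=float)
    return profile[list(peak_idx)].mean() / profile[list(valley_idx)].mean()
```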
- Published
- 2016
24. Polarisation-based coincidence event discrimination: an in silico study towards a feasible scheme for Compton-PET
- Author
-
Aimee L. McNamara, John E. Gillam, Zdenka Kuncic, and M Toghyani
- Subjects
Photon ,Image quality ,Monte Carlo method ,Signal-To-Noise Ratio ,01 natural sciences ,Coincidence ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Optics ,0103 physical sciences ,Scattering, Radiation ,Radiology, Nuclear Medicine and imaging ,Computer Simulation ,Physics ,Photons ,Annihilation ,Tomographic reconstruction ,Radiological and Ultrasound Technology ,010308 nuclear & particles physics ,business.industry ,Phantoms, Imaging ,Models, Theoretical ,Polarization (waves) ,Positron-Emission Tomography ,business ,Coincidence detection in neurobiology - Abstract
Current positron emission tomography (PET) systems use temporally localised coincidence events discriminated by energy and time-of-flight information. The two annihilation photons are in an entangled polarisation state and, in principle, additional information from the polarisation correlation of photon pairs could be used to improve the accuracy of coincidence classification. In a previous study, we demonstrated that in principle, the polarisation correlation information could be transferred to an angular correlation in the distribution of scattered photon pairs in a planar Compton camera system. In the present study, we model a source-phantom-detector system using Geant4 and we develop a coincidence classification scheme that exploits the angular correlation of scattered annihilation quanta to improve the accuracy of coincidence detection. We find a [Formula: see text] image quality improvement in terms of the peak signal-to-noise ratio when scattered coincidence events are discriminated solely by their angular correlation, thus demonstrating the feasibility of this novel classification scheme. By integrating scatter events (both single-single and single-only) with unscattered coincidence events discriminated using conventional methods, our results suggest that Compton-PET may be a promising candidate for optimal emission tomographic imaging.
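A coincidence cut based on the angular correlation described above might, for illustration, accept scattered pairs whose scatter-plane azimuthal difference lies near 90 degrees, where the entangled-pair correlation peaks. The window width here is a hypothetical parameter, not a value from the study:

```python
import numpy as np

def polarisation_accept(dphi, window=np.pi / 6):
    """Accept a scattered coincidence if the azimuthal angle between the
    two Compton scatter planes lies within `window` of 90 degrees, where
    the correlation of the entangled pair peaks."""
    dphi = np.mod(dphi, np.pi)
    return np.abs(dphi - np.pi / 2) <= window
```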
- Published
- 2016
25. An OpenCL Implementation of Pinhole Image Reconstruction
- Author
-
Dmitri A. Nikulin, John E. Gillam, Chuong Nguyen, and Matthew Richard Dimmock
- Subjects
Nuclear and High Energy Physics ,Computer science ,business.industry ,Detector ,Graphics processing unit ,Probability density function ,Iterative reconstruction ,Computational science ,Software ,Nuclear Energy and Engineering ,Stack (abstract data type) ,Probability distribution ,Pinhole (optics) ,Electrical and Electronic Engineering ,business - Abstract
A C++/OpenCL software platform for emission image reconstruction of data from pinhole cameras has been developed. The software incorporates a new, accurate but computationally costly, probability distribution function for operating on list-mode data from detector stacks. The platform architecture is more general than previous works, supporting advanced models such as arbitrary probability distributions, collimation geometries and detector stack geometries. The software was implemented such that all performance-critical operations occur on OpenCL devices, generally GPUs. Its performance was tested on several commodity CPU and GPU devices.
- Published
- 2012
26. Towards Optimal Collimator Design for the PEDRO Hybrid Imaging System
- Author
-
Jeremy M. C. Brown, David V. Martin, Dmitri A. Nikulin, Matthew Richard Dimmock, Chuong Nguyen, and John E. Gillam
- Subjects
Physics ,Nuclear and High Energy Physics ,business.industry ,Monte Carlo method ,Detector ,Compton scattering ,Collimator ,law.invention ,Optics ,Nuclear Energy and Engineering ,Sampling (signal processing) ,law ,Position (vector) ,Electrical and Electronic Engineering ,Photonics ,business ,Image resolution - Abstract
The Pixelated Emission Detector for RadiOisotopes (PEDRO) is a hybrid imaging system designed for the measurement of single photon emission from small animal models. The proof-of-principle device consists of a Compton camera situated behind a mechanical collimator and is intended to provide optimal detection characteristics over a broad spectral range, from 30 to 511 keV. An automated routine has been developed for the optimization of large-area slits in the outer regions of a collimator whose central region is allocated for pinholes. The optimization was tested with a GEANT4 model of the experimental prototype. The data were blurred with the expected position and energy resolution parameters and a Bayesian interaction ordering algorithm was applied. Images were reconstructed using cone back-projection. The results show that the optimization technique allows the large-area slits both to fully sample and to extend the primary field of view (FoV) determined by the pinholes. The slits were found to provide truncation of the back-projected cones of response and also an increase in the success rate of the interaction ordering algorithm. These factors resulted in an increase in the contrast and signal-to-noise ratio of the reconstructed image estimates. Of the two configurations tested, the cylindrical geometry outperformed the square geometry, primarily because of a decrease in artifacts. This was due to isotropic modulation of the cone surfaces, which can be achieved with a circular shape. The cylindrical geometry also provided increased sampling of the FoV due to more optimal positioning of the slits. The use of the cylindrical collimator and the application of the transmission function in the reconstruction were found to improve the resolution of the system by a factor of 20 compared to the uncollimated Compton camera. Although this system is designed for small animal imaging, the technique can be applied to any application of single photon imaging.
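Cone back-projection, used for the image estimates above, scores each image point by whether its direction from the scatter vertex matches the Compton cone. A minimal sketch (the tolerance parameter is hypothetical, standing in for angular-resolution blurring):

```python
import numpy as np

def cone_backproject(points, apex, axis, cos_theta, tol=0.02):
    """Mark the image points lying on a Compton cone of response: a point
    is kept if the cosine of the angle between (point - apex) and the
    cone axis matches the scatter-angle cosine within `tol`."""
    d = points - apex
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    axis = axis / np.linalg.norm(axis)
    return np.abs(d @ axis - cos_theta) <= tol
```

Accumulating these masks over many events builds the back-projected image; the collimator's transmission function can then be applied as a multiplicative weight.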
- Published
- 2011
27. Image-based modelling of residual blurring in motion corrected small animal PET imaging using motion dependent point spread functions
- Author
-
Andre Kyme, Steven R. Meikle, Roger Fulton, John E. Gillam, and Georgios I. Angelis
- Subjects
business.industry ,Computer science ,Motion (geometry) ,Pet imaging ,Residual ,030218 nuclear medicine & medical imaging ,Point spread ,03 medical and health sciences ,0302 clinical medicine ,030220 oncology & carcinogenesis ,Small animal ,Computer vision ,Artificial intelligence ,business ,General Nursing ,Image based
- Published
- 2018
28. F240. MULTI-MODAL PREDICTION OF GLOBAL FUNCTION FROM NEUROCOGNITIVE AND NEUROIMAGING MEASURES: OUTCOMES FROM THE PRONIA STUDY
- Author
-
John E. Gillam, Dominic B. Dwyer, Nikolaos Koutsouleris, Stephen J. Wood, and Anne Ruef
- Subjects
Data stream ,Modality (human–computer interaction) ,Poster Session II ,business.industry ,Data stream mining ,Computer science ,Probabilistic logic ,Linear classifier ,Machine learning ,computer.software_genre ,Outcome (probability) ,Support vector machine ,Psychiatry and Mental health ,Abstracts ,Artificial intelligence ,business ,computer ,Neurocognitive - Abstract
Background In order to extract the most powerful predictive models from data collected within the PRONIA study, diverse information sources must be combined. PRONIA aims to combine information from a range of study sites across Europe as well as from a diverse range of information sources. For each subject, neurocognitive, neuroimaging and clinically observed data have been collected that are intended to provide the basis for the development of predictive models for use in individualised diagnosis and prediction. However, it is as yet unclear which elements (or combination) of the measured data provide optimal predictive capacity and which features will generalize best. Methods In order to combine data from a diverse range of sources a number of approaches may be considered. While it is initially attractive to concatenate the features gathered from each modality, this approach is problematic in two ways. Not only do the appropriate pre-processing steps differ between modalities, but the high dimensionality of imaging data (in comparison to neurocognitive measures) may alter the way each modality contributes to the decision function during learning. Instead, we investigate simpler learning approaches in an initial step that produces a single outcome for each modality considered. In a second step these outcomes are combined to generate a final estimate of the target class. In this investigation neurocognitive and neuroimaging data, collected as part of the PRONIA study, were considered as features for prediction of clinically observed global function, measured at the same time-point. Each neurocognitive test, applied as part of the PRONIA battery, was considered as an independent modality, as were each of a range of MRI-based neuroimaging measures (from structural, functional and diffusion imaging).
Support Vector Classification (SVC) was conducted for each modality, with the target class defined as a score of 65 or less on the Global Assessment of Function. Both linear classification and the use of radial basis functions were explored within the initial modality-independent learning phase as well as during modality fusion in the second learning phase. Repeated, nested cross-validation was employed in both stages in order to ensure robust estimates of generalisation. Results Because each modality is reduced to a single measure in the first stage, each can contribute on an equal basis to the predictive outcome in the second while allowing inter-modality interaction. While SVC models do not naturally provide probabilistic outcomes, the distance of each point to the separating hyperplane can be scaled to represent the relative class probabilities. Predictions obtained at the first stage not only provide the input for the second phase of learning, but also a means to assess each modality for predictive accuracy. Correlations between the predictions from each modality indicate which combinations of data may contribute constructively to the final outcome, while learning approaches within the second phase can also be used to identify the most useful predictors. Discussion The two-stage learning framework provides a useful approach to learning that allows assessment of each separate data stream as well as the fused-prediction outcome. The contribution of each data stream to the final prediction may be explored, while interactions between data streams can also be contextualised. However, more subtle interactions between data, particularly at the initial input stage, may be difficult to observe, and so the extension of this approach to more structured data fusion is considered.
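A minimal sketch of the two-stage scheme described above, with a toy centroid classifier standing in for SVC and a logistic squashing standing in for Platt-style scaling of hyperplane distances (all data and function names are hypothetical, not the PRONIA implementation):

```python
import math

def centroid_score(train_X, train_y, x):
    """Stage 1 base learner for one modality: signed distance to the
    midpoint between class centroids, squashed to a pseudo-probability
    (a stand-in for a scaled SVC decision value)."""
    pos = [v for v, y in zip(train_X, train_y) if y == 1]
    neg = [v for v, y in zip(train_X, train_y) if y == 0]
    mu_p, mu_n = sum(pos) / len(pos), sum(neg) / len(neg)
    d = (x - (mu_p + mu_n) / 2) * (1 if mu_p > mu_n else -1)
    return 1 / (1 + math.exp(-d))          # relative class probability

def two_stage_predict(modalities, y, test_point):
    """modalities: per-modality 1-D feature lists (one value per subject);
    test_point: one value per modality for the new subject."""
    # Stage 1: reduce each modality to a single score
    scores = [centroid_score(X, y, t) for X, t in zip(modalities, test_point)]
    # Stage 2: fuse the scores on an equal basis, so that no modality
    # dominates through its raw dimensionality
    return 1 if sum(scores) / len(scores) > 0.5 else 0
```

In practice each stage would be an SVC trained under repeated, nested cross-validation; the sketch only illustrates how per-modality outcomes are reduced and then fused.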
- Published
- 2018
29. A Pixelated Emission Detector for RadiOisotopes (PEDRO)
- Author
-
Chris Hall, Robert A. Lewis, John E. Gillam, Jeremy M. C. Brown, Matthew Richard Dimmock, and Toby Ean Beveridge
- Subjects
Physics ,Nuclear and High Energy Physics ,Range (particle radiation) ,medicine.diagnostic_test ,business.industry ,Astrophysics::High Energy Astrophysical Phenomena ,Resolution (electron density) ,Detector ,Gauge (firearms) ,Collimated light ,Optics ,medicine ,Tomography ,business ,Instrumentation ,Sensitivity (electronics) ,Emission computed tomography - Abstract
The Pixelated Emission Detector for RadiOisotopes (PEDRO) is a hybrid imager designed for the measurement of single photon emission from small animals. The proof-of-principle device currently under development consists of a Compton camera situated behind a mechanical modulator. The combination of mechanical and electronic (hybrid) collimation should provide optimal detection characteristics over a broad spectral range (30 keV ≤ Eγ ≤ 511 keV), through a reduction in the sensitivity-resolution trade-off inherent in conventional mechanically collimated configurations. This paper presents GEANT4 simulation results from the PEDRO geometry operated only as a Compton camera, in order to gauge its advantage when used in concert with mechanical collimation, regardless of the collimation pattern. The optimization of multiple detector spacing and resolution parameters is performed utilizing the Median Distance of Closest Approach (MDCA) and has been shown to result in an optimum distance, beyond which only a loss in sensitivity occurs.
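The MDCA figure-of-merit can be sketched as follows, assuming each event is reduced to a backprojected cone (apex, unit axis, Compton half-angle) and the point-source position is known; this is an illustrative geometric approximation, not the authors' implementation:

```python
import math

def dca_to_cone(apex, axis, half_angle, point):
    """Approximate distance of closest approach from `point` to the
    surface of a Compton cone: r * sin(alpha - theta), where alpha is
    the angle between the apex-to-point vector and the cone axis."""
    v = [p - a for p, a in zip(point, apex)]
    r = math.sqrt(sum(c * c for c in v))
    cos_alpha = sum(c * u for c, u in zip(v, axis)) / r
    alpha = math.acos(max(-1.0, min(1.0, cos_alpha)))
    return r * abs(math.sin(alpha - half_angle))

def mdca(cones, source):
    """Median Distance of Closest Approach of backprojected cones to a
    known point source -- a resolution figure-of-merit."""
    d = sorted(dca_to_cone(a, u, t, source) for a, u, t in cones)
    n = len(d)
    return d[n // 2] if n % 2 else 0.5 * (d[n // 2 - 1] + d[n // 2])
```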
- Published
- 2009
30. Optimisation of a dual head semiconductor Compton camera using Geant4
- Author
-
I.H. Lazarus, D. C. Oxley, A. J. Boston, D.P. Scraggs, J.R. Cresswell, P. J. Nolan, H. C. Boston, Toby Ean Beveridge, A. N. Grint, L. J. Harkness, John E. Gillam, and Reynold J. Cooper
- Subjects
Physics ,Nuclear and High Energy Physics ,Photon ,medicine.diagnostic_test ,business.industry ,Astrophysics::High Energy Astrophysical Phenomena ,Physics::Medical Physics ,Detector ,Iterative reconstruction ,Single-photon emission computed tomography ,Collimated light ,Semiconductor detector ,Planar ,Optics ,medicine ,business ,Electronic Collimation ,Instrumentation - Abstract
Conventional medical gamma-ray camera systems utilise mechanical collimation to provide information on the position of an incident gamma-ray photon. Systems that use electronic collimation with Compton image reconstruction techniques have the potential to offer huge improvements in sensitivity. Position sensitive high purity germanium (HPGe) detector systems are being evaluated as part of a single photon emission computed tomography (SPECT) Compton camera system. Data have been acquired from the orthogonally segmented planar SmartPET detectors, operated in Compton camera mode. The minimum gamma-ray energy which can be imaged by the current system in Compton camera configuration is 244 keV, due to the 20 mm thickness of the first scatter detector, which causes large gamma-ray absorption. A simulation package for the optimisation of a new semiconductor Compton camera has been developed using the Geant4 toolkit. This paper shows results of preliminary analysis of the validated Geant4 simulation at a typical SPECT gamma-ray energy of 141 keV.
- Published
- 2009
31. K-edge subtraction using an energy-resolving position-sensitive detector
- Author
-
Chris Hall, John E. Gillam, Toby Ean Beveridge, Daniel John Kitcher, Stewart Michael Midgley, and Robert A. Lewis
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.medical_specialty ,medicine.diagnostic_test ,Pixel ,business.industry ,Detector ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Subtraction ,Image subtraction ,Digital subtraction angiography ,Feature (computer vision) ,Histogram ,medicine ,Medical physics ,Computer vision ,Artificial intelligence ,business ,Instrumentation ,Energy (signal processing) - Abstract
Digital Subtraction Angiography is an important technique used to image arterial blood flow using an introduced contrast agent. K-edge subtraction uses contrast agents with a K-edge feature to acquire the same information but requires knowledge of the X-ray energy, and so is usually conducted using synchrotron radiation. However, given a detector that measures position and energy – rather than the currently used integrating devices – it is possible to use white-beam radiation to conduct dynamic K-edge subtraction studies. This study demonstrates an approach to this imaging possibility using analysis of the spectral histogram in each image pixel. K-edge subtraction can be conducted more efficiently using a broad-spectrum X-ray source by processing groups of data on either side of the K-edge. Using a simulated data model, the achievable quality of this new technique is explored under a number of assumed spectral resolutions. The effects on the final image of different methods of image subtraction and pre-processing are also explored.
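The per-pixel spectral-histogram analysis can be sketched as below, assuming an iodine contrast agent and idealised energy binning (the edge energy, window width and sign convention are illustrative, not taken from the paper):

```python
import math

IODINE_K_EDGE_KEV = 33.2   # contrast-agent K-edge (iodine, for illustration)

def kedge_signal(spectrum, energies, window=3.0):
    """spectrum: one pixel's counts histogram; energies: bin centres (keV).
    Sum counts in bands just below and just above the K-edge and take the
    log-ratio; the step in attenuation at the edge makes this sensitive to
    the projected contrast-agent thickness."""
    below = sum(c for c, e in zip(spectrum, energies)
                if IODINE_K_EDGE_KEV - window <= e < IODINE_K_EDGE_KEV)
    above = sum(c for c, e in zip(spectrum, energies)
                if IODINE_K_EDGE_KEV <= e < IODINE_K_EDGE_KEV + window)
    return math.log(below / above)   # > 0 where the agent absorbs above the edge
```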
- Published
- 2009
32. Orthogonal strip HPGe planar SmartPET detectors in Compton configuration
- Author
-
D.P. Scraggs, A. N. Grint, A.R. Mather, J. Cresswell, Robert A. Lewis, John E. Gillam, G. Turk, I.H. Lazarus, Toby Ean Beveridge, Chris Hall, Andrew Berry, Reynold J. Cooper, A. J. Boston, H. C. Boston, and P. J. Nolan
- Subjects
Physics ,Nuclear and High Energy Physics ,business.industry ,Detector ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,chemistry.chemical_element ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Germanium ,law.invention ,Semiconductor detector ,Planar ,Optics ,chemistry ,law ,Medical imaging ,business ,Instrumentation ,Throughput (business) ,Energy (signal processing) ,Gamma camera - Abstract
The evolution of Germanium detector technology over the last decade has led to the possibility that such detectors can be employed in medical and security imaging. The excellent energy resolution coupled with the good position information that Germanium affords removes the necessity for the mechanical collimators that would be required in a conventional gamma camera system. By removing this constraint, the overall dose to the patient can be reduced or the throughput of the system can be increased. An additional benefit of excellent energy resolution is that tight gates can be placed on energies from either a multi-lined gamma source or from multi-nuclide sources, increasing the number of sources that can be used in medical imaging. In terms of security imaging, segmented Germanium gives directionality and excellent spectroscopic information.
- Published
- 2007
33. SmartPET: Applying HPGe and pulse shape analysis to small-animal PET
- Author
-
A.R. Mather, Andrew Berry, H. C. Boston, Chris Hall, J. Cresswell, Reynold J. Cooper, G. Turk, A. N. Grint, P. J. Nolan, John E. Gillam, Toby Ean Beveridge, D.P. Scraggs, I.H. Lazarus, Robert A. Lewis, and A. J. Boston
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Resolution (electron density) ,Detector ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Single-photon emission computed tomography ,Semiconductor detector ,Optics ,Positron emission tomography ,medicine ,Medical physics ,Sensitivity (control systems) ,business ,Instrumentation ,Image resolution ,Energy (signal processing) - Abstract
The SmartPET project is the development of a prototype small-animal imaging system based on the use of Hyperpure Germanium (HPGe) detectors. The use of digital electronics and application of Pulse Shape Analysis (PSA) techniques provides fine spatial resolution, while the excellent intrinsic energy resolution of HPGe detectors makes the system ideal for multi-nuclide imaging. As a result, the SmartPET system has the potential to function as a dual-modality imager, operating as a dual-head Positron Emission Tomography (PET) camera or in a Compton camera configuration for Single Photon Emission Computed Tomography (SPECT) imaging. In this paper, we discuss how the use of simple PSA techniques greatly improves the position sensitivity of the detector, yielding improved spatial resolution in reconstructed images. The PSA methods presented have been validated by comparison with data from high-precision scanning of the detectors. Results from this analysis are presented along with initial images from the SmartPET system, which demonstrate the impact of these techniques on PET images.
- Published
- 2007
34. Characterisation of the SmartPET planar Germanium detectors
- Author
-
P. J. Nolan, J. Cresswell, A.R. Mather, John E. Gillam, Toby Ean Beveridge, Andrew Berry, I.H. Lazarus, H. C. Boston, D.P. Scraggs, Reynold J. Cooper, G. Turk, Chris Hall, A. N. Grint, Robert A. Lewis, and A. J. Boston
- Subjects
Physics ,Nuclear and High Energy Physics ,Photon ,business.industry ,Detector ,chemistry.chemical_element ,Germanium ,computer.file_format ,Collimated light ,Semiconductor detector ,Planar ,Optics ,chemistry ,Raster graphics ,business ,Instrumentation ,computer ,Image resolution - Abstract
Small Animal Reconstruction PET (SmartPET) is a project funded by the UK Medical Research Council (MRC) to demonstrate proof of principle that Germanium can be utilised in Positron Emission Tomography (PET). The SmartPET demonstrator consists of two orthogonal strip High Purity Germanium (HPGe) planar detectors manufactured by ORTEC. The aim of the project is to produce images of an internal source with sub-mm³ spatial resolution. Before this can be achieved, the detectors have to be fully characterised to understand the response at any given location to a γ-ray interaction. This has been achieved by probing the two detectors at a number of specified points with collimated sources of various energies and strengths. A 1 mm diameter collimated beam of photons was raster scanned in 1 mm steps across the detector. Digital pulse shape data were recorded from all the detector channels and the performance of the detector for energy and position determination has been assessed. Data will be presented for the first SmartPET detector.
- Published
- 2007
35. Modelling orthogonal strip HPGe detector systems
- Author
-
T.E. Beveridge, John E. Gillam, Robert A. Lewis, Paul R Smith, Chris Hall, A. Berry, Gregory Ian Potter, and P. J. Nolan
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.medical_specialty ,Photon ,Physics::Instrumentation and Detectors ,business.industry ,Detector ,Semiconductor detector ,Planar ,Optics ,Position (vector) ,medicine ,Medical physics ,business ,Instrumentation ,Event (particle physics) ,Image resolution ,Energy (signal processing) - Abstract
The development of orthogonal planar strip high purity germanium (HPGe) detectors offers the advantages of good energy resolution and three-dimensional spatial resolution of photon interactions. The use of such devices for Positron Emission Tomography (PET) is being investigated by the SmartPET collaboration. This paper presents initial results for algorithms developed to analyse the detector signals, which recover the interaction position and energy, and the effects of strip geometry on these quantities. The aim is to develop a tool to aid design decisions, provide estimates of uncertainty, and resolve event position to better than 1 mm.
- Published
- 2007
36. Multiple occupancy considerations for the SmartPET imaging system
- Author
-
H. C. Boston, A.R. Mather, Chris Hall, A. J. Boston, P. J. Nolan, Reynold J. Cooper, Toby Ean Beveridge, Robert A. Lewis, and John E. Gillam
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.diagnostic_test ,Compton imaging ,Monte Carlo method ,Detector ,Computational physics ,Nuclear magnetic resonance ,Quality (physics) ,Planar ,Position (vector) ,Positron emission tomography ,medicine ,Instrumentation ,Energy (signal processing) - Abstract
The SmartPET collaboration is investigating the use of two planar high-purity germanium double-sided strip detectors as a Compton imaging positron emission tomography system. Monte Carlo simulations suggest that a large proportion of interactions within the detectors will occur within a small spatial volume, introducing significant ambiguities within the position and energy measurements made by the detectors. Under certain circumstances, the ambiguities will result in multiple interactions being detected as a single interaction. This investigation studied the effect of such multiple occupancies and found that approximately 45% of 511 keV events include multiple interactions within a 5×5×20 mm³ volume. Nevertheless, the effect on the quality of the final data remains quite acceptable.
- Published
- 2007
37. Effect of position resolution on LoR discrimination for a dual-head Compton camera
- Author
-
Robert A. Lewis, H. C. Boston, Reynold J. Cooper, Chris Hall, A.R. Mather, A. J. Boston, John E. Gillam, P. J. Nolan, and Toby Ean Beveridge
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.diagnostic_test ,business.industry ,Resolution (electron density) ,Process (computing) ,Collimated light ,Interaction information ,Optics ,Data acquisition ,Positron emission tomography ,medicine ,Information source (mathematics) ,business ,Electronic Collimation ,Instrumentation - Abstract
With recent advances in germanium semiconductor detection technology, a positron emission tomography system comprising two opposing HPGe detectors is under development. This type of detection offers not only improvement to some aspects of PET, but also the ability to record single-photon information in the detection process. This information can be used in stand-alone imaging, and also as an additional information source in the PET process. Discrimination based on this single-photon information was proposed; however, the effectiveness of this discrimination depends on the resolution of the single-photon information. Simulations of the detection system, in which the positional resolution of the interaction information is variable, were conducted. The single-photon information has then been used in the PET imaging process and its effect on image improvement shown. Much like mechanical collimation, electronic collimation may be used to remove false LoRs from an image, at the expense of efficiency. Unlike mechanical collimation, however, this trade-off may be adjusted dynamically after data acquisition.
- Published
- 2007
38. Position sensitivity of the first SmartPET HPGe detector
- Author
-
J. Simpson, A. J. Boston, J.R. Cresswell, Reynold J. Cooper, Robert A. Lewis, G. Turk, H. C. Boston, A.R. Mather, John E. Gillam, Toby Ean Beveridge, Chris Hall, P. J. Nolan, Andrew Berry, and I.H. Lazarus
- Subjects
Physics ,Nuclear and High Energy Physics ,medicine.medical_specialty ,medicine.diagnostic_test ,Physics::Instrumentation and Detectors ,business.industry ,Detector response function ,Physics::Medical Physics ,Detector ,equipment and supplies ,Optics ,Positron emission tomography ,Position (vector) ,Calibration ,medicine ,Pulse shape analysis ,Medical physics ,Sensitivity (control systems) ,business ,Hpge detector ,Instrumentation - Abstract
In this paper we discuss the Smart Positron Emission Tomography (PET) imaging system being developed by the University of Liverpool in conjunction with CCLRC Daresbury Laboratory. We describe the motivation for the development of a semiconductor-based PET system and the advantages it will offer over current tomographs. Details of the detectors and associated electronics are discussed and results of high precision scans are presented. Analysis of this scan data has facilitated full characterization of the detector response function and calibration of the three-dimensional position sensitivity. This work presents the analysis of the depth sensitivity of the detector.
- Published
- 2007
39. Motion compensation and pose measurement uncertainty in awake small animal positron emission tomography using stochastic origin ensembles
- Author
-
Andre Kyme, Steven R. Meikle, Roger Fulton, John E. Gillam, and Georgios I. Angelis
- Subjects
Physics ,Scanner ,Motion compensation ,Sampling (signal processing) ,business.industry ,Posterior probability ,Range (statistics) ,Measurement uncertainty ,Computer vision ,Iterative reconstruction ,Artificial intelligence ,Tracking (particle physics) ,business - Abstract
In order to remove the influence of anaesthetic agents on neurological function and to expand the range of imaging tasks available, it is preferable to image small animals in an awake state. Accurate activity estimates in awake small animal Positron Emission Tomography rely on the measurement of animal head pose over the time-course of the scan. Pose measurements are then incorporated into the emission data during image reconstruction, compensating for animal motion. Uncertainty in pose measurement can impact reconstructed image quality by effectively degrading scanner resolution, and hence that of the reconstructed image. In small animal imaging, regions of interest can be small in comparison with image-space voxelisation, so that the precision of estimates taken from a single reconstructed image can be difficult to gauge. Stochastic Origin Ensembles provides a means of estimating a more complete statistical description of the emission data than other methods of image reconstruction. In this investigation, rigid motion compensation is incorporated into the Stochastic Origin Ensembles algorithm and explored using simulated data. Realistic motion is modelled within a GATE simulation and measurement uncertainty is incorporated into the pose data using both simulated perturbations and experimental trials with a motion tracking system. Sampling from the posterior distribution is conducted using the Stochastic Origin Ensembles algorithm and compared to image reconstruction using the Maximum Likelihood-Expectation Maximisation algorithm. Using Stochastic Origin Ensembles, both regional and single-voxel parameters were investigated. The impact of varying levels of pose measurement uncertainty on image-space parameters was demonstrated and assessed.
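Incorporating pose measurements into the emission data amounts to mapping each measured LOR back into the animal's reference frame. A minimal 2-D sketch of that rigid compensation step, with the pose modelled as rotation-then-translation (function names hypothetical):

```python
import math

def invert_pose(theta, tx, ty):
    """Inverse of a 2-D rigid pose p' = R(theta) p + t:
    p = R(-theta) p' - R(-theta) t."""
    c, s = math.cos(theta), math.sin(theta)
    return -theta, -(c * tx + s * ty), -(-s * tx + c * ty)

def apply_pose(theta, tx, ty, p):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def compensate_lor(pose, end_a, end_b):
    """Map a measured LOR (two detector endpoints) into the animal's
    reference frame using the head pose recorded at event time."""
    inv = invert_pose(*pose)
    return apply_pose(*inv, end_a), apply_pose(*inv, end_b)
```

A full implementation would use 3-D rotations (e.g. from an optical tracker) and interpolate poses to event timestamps; the geometry of the correction is the same.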
- Published
- 2015
40. Direct estimation of neurotransmitter response in awake and freely moving animals
- Author
-
Georgios I. Angelis, Roger Fulton, William J. Ryder, John E. Gillam, Steven R. Meikle, and Andre Kyme
- Subjects
Accuracy and precision ,Tomographic reconstruction ,Voxel ,Computer science ,Iterative reconstruction ,computer.software_genre ,Biological system ,computer ,Simulation ,Imaging phantom ,Displacement (vector) ,Data modeling ,Parametric statistics - Abstract
The temporal characterisation of endogenous neurotransmitter release during a cognitive task or drug intervention is an important capability for studying the role of neurotransmitters in normal and aberrant brain function, including disease. Advanced kinetic models, such as the linear parametric neurotransmitter PET (lp-ntPET) model, have been developed to appropriately model the transient changes in the model parameters, such as the radiotracer efflux from the target tissue, during endogenous neurotransmitter release. Incorporation of the kinetic model within the tomographic reconstruction algorithm may lead to improved parameter estimates, both in terms of precision and accuracy, compared with the conventional two-step post-reconstruction approach. In this study, we evaluate a direct reconstruction approach that uses an expectation maximisation framework to transform the 4D spatiotemporal maximum likelihood problem into an image-based weighted least squares problem. This framework allows the use of well established kinetic models, such as the lp-ntPET model, to estimate the endogenous neurotransmitter response directly from the dynamic PET data. Dynamic GATE simulations using a realistic digital rat brain phantom showed that the proposed direct reconstruction method can provide higher temporal accuracy and precision for the estimated neurotransmitter response at the voxel level, compared with conventional post-reconstruction modelling. In addition, we applied this methodology to a [11C]raclopride displacement study in an awake and freely moving rat and generated voxel-wise parametric maps illustrating ligand displacement from the striatum.
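The image-based weighted least squares step can be sketched generically: at each voxel, the time-activity curve is fitted as a linear combination of kinetic basis functions (the specific lp-ntPET basis is not reproduced here; the solver below is a plain normal-equations sketch):

```python
def gauss_solve(a, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(m[r][k]))
        m[k], m[piv] = m[piv], m[k]
        for r in range(k + 1, n):
            f = m[r][k] / m[k][k]
            for c in range(k, n + 1):
                m[r][c] -= f * m[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (m[k][n] - sum(m[k][c] * x[c] for c in range(k + 1, n))) / m[k][k]
    return x

def wls_fit(basis, y, w):
    """Weighted least squares for a linear kinetic model y ~ B theta.
    basis: rows of B (one per time frame); w: per-frame weights.
    Solves the normal equations (B^T W B) theta = B^T W y."""
    p, n = len(basis[0]), len(y)
    bwb = [[sum(w[t] * basis[t][i] * basis[t][j] for t in range(n))
            for j in range(p)] for i in range(p)]
    bwy = [sum(w[t] * basis[t][i] * y[t] for t in range(n)) for i in range(p)]
    return gauss_solve(bwb, bwy)
```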
- Published
- 2015
41. Identifying markers of pathology in SAXS data of malignant tissues of the brain
- Author
-
Karen Kit Wan Siu, John E. Gillam, Geoffrey I. Webb, Toby Ean Beveridge, Robert A. Lewis, Chris Hall, Shane M Butler, Elisabeth Schultke, S J Wilkinson, K Mannan, Sarah Jayne Pearson, G McLoughlin, Andrew H. Kaye, and A R Round
- Subjects
Physics ,Nuclear and High Energy Physics ,Pathology ,medicine.medical_specialty ,Small-angle X-ray scattering ,fungi ,Human brain ,Malignancy ,medicine.disease ,Nuclear magnetic resonance ,medicine.anatomical_structure ,medicine ,Multivariate statistical ,Instrumentation - Abstract
Conventional neuropathological analysis for brain malignancies is heavily reliant on the observation of morphological abnormalities in thin, stained sections of tissue. Small Angle X-ray Scattering (SAXS) data provide an alternative means of distinguishing pathology by examining the ultra-structural (nanometre length scale) characteristics of tissue. To evaluate the diagnostic potential of SAXS for brain tumors, data were collected from normal, malignant and benign tissues of the human brain at station 2.1 of the Daresbury Laboratory Synchrotron Radiation Source and subjected to data mining and multivariate statistical analysis. The results suggest that SAXS data may be an effective classifier of malignancy.
- Published
- 2005
42. An improvement to the diffraction-enhanced imaging method that permits imaging of dynamic systems
- Author
-
John E. Gillam, Marcus J. Kitchen, Karen Kit Wan Siu, Konstantin Mikhailovitch Pavlov, Robert A. Lewis, Kentarou Uesugi, and Naoto Yagi
- Subjects
Diffraction ,Physics ,Nuclear and High Energy Physics ,Spectrum analyzer ,Apparent absorption ,business.industry ,Synchrotron radiation ,X-ray optics ,Reflectivity ,Refraction ,Optics ,Medical imaging ,business ,Instrumentation - Abstract
We present an improvement to the diffraction-enhanced imaging (DEI) method that permits imaging of moving samples or other dynamic systems in real time. The method relies on the use of a thin Bragg analyzer crystal and simultaneous acquisition of the multiple images necessary for the DEI reconstruction of the apparent absorption and refraction images. These images are conventionally acquired at multiple points on the reflectivity curve of an analyzer crystal which presents technical challenges and precludes imaging of moving subjects. We have demonstrated the potential of the technique by taking DEI “movies” of an artificially moving mouse leg joint, acquired at the Biomedical Imaging Centre at SPring-8, Japan.
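For reference, the two simultaneously acquired images can be combined per pixel using the standard Chapman-style DEI equations, given the analyser reflectivities and rocking-curve slopes at the two working points (values below are illustrative):

```python
def dei_reconstruct(i_lo, i_hi, r_lo, r_hi, dr_lo, dr_hi):
    """Chapman-style DEI: recover the apparent-absorption intensity I_R
    and the refraction angle dtheta from two images taken on opposite
    slopes of the rocking curve. Per pixel:
        I_lo = I_R * (r_lo + dr_lo * dtheta)
        I_hi = I_R * (r_hi + dr_hi * dtheta)
    solved as a 2x2 linear system."""
    det = i_lo * dr_hi - i_hi * dr_lo
    i_r = det / (r_lo * dr_hi - r_hi * dr_lo)
    dtheta = (i_hi * r_lo - i_lo * r_hi) / det
    return i_r, dtheta
```

With the thin Bragg analyser described above, both working points are recorded in one exposure, so this solve can be applied frame by frame to a moving subject.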
- Published
- 2005
43. Unification of analyser-based and propagation-based X-ray phase-contrast imaging
- Author
-
Kentarou Uesugi, Marcus J. Kitchen, Timur E. Gureyev, John E. Gillam, Robert A. Lewis, Naoto Yagi, David M. Paganin, Karen Kit Wan Siu, Michael J. Morgan, Yakov Nesterets, and Konstantin Mikhailovitch Pavlov
- Subjects
Physics ,Nuclear and High Energy Physics ,business.industry ,Analyser ,Synchrotron radiation ,law.invention ,Optics ,law ,X-Ray Phase-Contrast Imaging ,Imaging technique ,business ,Phase retrieval ,Instrumentation ,Monochromator - Abstract
We have developed a new imaging technique that unifies analyser-based and propagation-based X-ray phase-contrast imaging. A novel theoretical approach, utilising the theory of linear shift-invariant systems, was employed to solve the phase-retrieval problem for the case of large Fresnel/Takagi numbers. In October 2003, we performed some preliminary experiments at BL20B2 at SPring-8 to illustrate this new imaging technique.
- Published
- 2005
44. Increasing PET scanner resolution using a Silicon detector probe
- Author
-
Josep F. Oliver, Magdalena Rafecas, Karol Brzezinski, and John E. Gillam
- Subjects
Optics ,Materials science ,Oncology ,Radiology Nuclear Medicine and imaging ,business.industry ,Pet scanner ,Resolution (electron density) ,Silicon detector ,Radiology, Nuclear Medicine and imaging ,Hematology ,business - Published
- 2016
45. Towards optimal imaging with PET: an in silico feasibility study
- Author
-
Kinwah Wu, Zdenka Kuncic, John E. Gillam, M Toghyani, and Aimee L. McNamara
- Subjects
Physics ,Photons ,Discriminator ,Photon ,Models, Statistical ,Radiological and Ultrasound Technology ,business.industry ,Image quality ,Monte Carlo method ,Detector ,Coincidence ,Optics ,Positron-Emission Tomography ,Annihilation radiation ,Image Interpretation, Computer-Assisted ,Feasibility Studies ,Radiology, Nuclear Medicine and imaging ,Computer Simulation ,business ,Monte Carlo Method ,Energy (signal processing) - Abstract
The efficacy of Positron Emission Tomography (PET) imaging relies fundamentally on the ability of the system to accurately identify true coincidence events. With existing systems, this is currently accomplished with an energy acceptance criterion followed by correction techniques to remove suspected false coincidence events. These corrections generally result in signal and contrast loss and thus limit the PET system's ability to achieve optimum image quality. A key property of annihilation radiation is that the photons are polarised with respect to each other. This polarisation correlation offers a potentially powerful discriminator, independent of energy, to accurately identify true events. In this proof-of-concept study, we investigate how photon polarisation information can be exploited in PET imaging by developing a method to discriminate true coincidences using the polarisation correlation of annihilation pairs. We implement this method using a Geant4 PET simulation of a GE Advance/Discovery LS system and demonstrate the potential advantages of the polarisation coincidence selection method over a standard energy criterion method. Current PET ring detectors are not capable of exploiting the polarisation correlation of the photon pairs. Compton PET systems, however, are promising candidates for this application. We demonstrate the feasibility of a two-component Compton camera system in identifying true coincidences with Monte Carlo simulations. Our study demonstrates the potential of improving signal gain using polarisation, particularly at high photon emission rates. We also demonstrate the ability of the Compton camera to exploit this polarisation correlation in PET.
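A sketch of a polarisation-based discriminator: correlated annihilation photons preferentially Compton-scatter in perpendicular azimuthal planes, so coincidences can be accepted when the two scattering-plane azimuths measured by a Compton camera are close to 90° apart (the window width is illustrative, not the paper's criterion):

```python
def accept_by_polarisation(phi1, phi2, window_deg=30.0):
    """Accept a coincidence when the azimuthal Compton-scattering planes
    of the two photons (phi in degrees) are close to perpendicular --
    the signature of a polarisation-correlated annihilation pair."""
    dphi = abs(phi1 - phi2) % 180.0        # scattering planes are axial: period 180
    return abs(dphi - 90.0) <= window_deg
```

Random coincidences have a flat distribution in the azimuthal difference, so such a cut trades some true-event efficiency for a rate-independent suppression of randoms.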
- Published
- 2014
46. Spatially variant resolution modelling using redistributed lines-of-response and the image space reconstruction algorithm
- Author
-
Johan Nuyts, Matthew Bickell, Roger Fulton, and John E. Gillam
- Subjects
business.industry ,Computation ,Detector ,Reconstruction algorithm ,Computer vision ,Probability density function ,Iterative reconstruction ,Artificial intelligence ,business ,Image resolution ,Imaging phantom ,Convolution ,Mathematics - Abstract
A spatially variant resolution modelling technique for PET image reconstruction is presented which models the physical processes of the measurement during the iterative reconstruction. This is achieved by redistributing the line-of-response endpoints according to derived probability density functions describing the detector response function and photon acollinearity. When applying this technique it is shown that, to avoid mathematical inconsistencies and reconstruction artefacts, MLEM cannot be used for the reconstruction. The ISRA algorithm, after being adapted to a list-mode based implementation, is used instead, since its structure is well suited to this application. The redistribution technique is shown to produce superior resolution recovery in off-centre phantom reconstructions compared with the standard stationary image-space Gaussian convolution approach, while requiring only approximately 35% more computation time.
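For context, one ISRA iteration has the multiplicative form x ← x · (AᵀY)/(AᵀAx). A dense toy version (the list-mode adaptation and the endpoint-redistribution step are omitted):

```python
def isra_update(x, a, y):
    """One ISRA iteration: x <- x * (A^T y) / (A^T A x).
    A non-negative least-squares fixed point; unlike MLEM it tolerates
    a modified (e.g. redistributed) system model. a: m x n matrix as
    nested lists, y: measured data (length m), x: current image (length n)."""
    m, n = len(a), len(x)
    ax = [sum(a[i][j] * x[j] for j in range(n)) for i in range(m)]   # forward project
    num = [sum(a[i][j] * y[i] for i in range(m)) for j in range(n)]  # A^T y
    den = [sum(a[i][j] * ax[i] for i in range(m)) for j in range(n)] # A^T A x
    return [xj * nj / dj if dj > 0 else 0.0
            for xj, nj, dj in zip(x, num, den)]
```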
- Published
- 2014
47. Image reconstruction using tetrahedral voxels: A list mode implementation for awake animal imaging
- Author
-
John E. Gillam, Georgios I. Angelis, Steven R. Meikle, and William J. Ryder
- Subjects
Motion compensation ,business.industry ,Computer science ,Motion (geometry) ,Iterative reconstruction ,computer.software_genre ,Compensation (engineering) ,Sampling (signal processing) ,Voxel ,Tetrahedron ,Computer vision ,Artificial intelligence ,Tomography ,business ,computer - Abstract
Reliable interpretation of results from pre-clinical Emission Tomography studies is often hampered by the requirement that the animal be anaesthetised, affecting certain neurotransmission systems and cerebral blood flow. Animal tracking and motion compensation techniques have been exploited to account for the rigid motion associated with head movement, allowing brain imaging in small animals. However, rigid motion cannot be assumed for extra-cranial activity. Tetrahedral mesh approaches have been applied in cardiac reconstruction to account for non-rigid motion in clinical environments. In this investigation, a list-mode approach to calculation of the elements of the system matrix using a tetrahedral image space is developed and evaluated by applying rigid motion compensation to both simulated and experimental data. Experimental data for which only rigid motion was applied were used to demonstrate the application of variable voxel size over the image space. A simple small animal model comprising rigid (head) and non-rigid (body) motion was developed and the resulting voxelised sources were simulated using GATE. Motion compensation based on rigid transforms allowed targeted voxel sampling and demonstrates the impact of the non-rigid motion on activity estimates. The tetrahedral mesh should allow future investigations to extend correction to include non-rigid estimates of small animal motion.
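Computing system-matrix elements over a tetrahedral image space ultimately requires deciding which tetrahedron a sampled point along an LOR falls in. A standard signed-volume sketch of that test (not the authors' list-mode implementation):

```python
def signed_volume(a, b, c, d):
    """Six times the signed volume of tetrahedron (a, b, c, d)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

def point_in_tet(p, tet):
    """True if p lies inside the tetrahedron: the four sub-tetrahedra
    formed by replacing one vertex with p must all share the sign of
    the parent volume (zero counts as on-a-face)."""
    a, b, c, d = tet
    ref = signed_volume(a, b, c, d)
    vols = [signed_volume(p, b, c, d), signed_volume(a, p, c, d),
            signed_volume(a, b, p, d), signed_volume(a, b, c, p)]
    return all(v * ref >= 0 for v in vols)
```

Because mesh elements can be refined where rigid pose estimates are reliable (the head) and coarsened elsewhere, the same test supports the variable voxel-size sampling described above.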
- Published
- 2014
48. Efficient time-weighted sensitivity image calculation for motion compensated list mode reconstruction
- Author
-
Steven R. Meikle, Andre Kyme, John E. Gillam, William J. Ryder, Roger Fulton, and Georgios I. Angelis
- Subjects
Scanner ,Sampling (signal processing) ,business.industry ,Computer science ,Attenuation ,Computer vision ,Sensitivity (control systems) ,Artificial intelligence ,Noise (video) ,Iterative reconstruction ,business ,Imaging phantom ,Image (mathematics) - Abstract
Accurate motion-compensated image reconstruction of freely moving small animals requires exact calculation of the time-weighted sensitivity correction factors. Back-projection of all possible lines of response (LORs) for every recorded pose is a computationally intensive task requiring impractically long reconstruction times. In this work we investigated an approach to accelerate this task by randomly sampling the LORs and the poses used to calculate the time-averaged sensitivity image. Two phantom datasets, acquired on the microPET Focus220 scanner, were used to quantify the errors introduced in the randomly sampled sensitivity images and propagated to the final reconstructed images. In addition, the qualitative performance of the proposed methodology was assessed by reconstructing a freely moving rat acquisition. Results showed that randomisation can severely amplify noise in the reconstructed images, especially when few LORs are sampled. However, such errors can be suppressed by post-filtering the randomised sensitivity images prior to reconstruction (e.g. with a 2 mm FWHM filter). This approach can substantially reduce the computational time involved in estimating the time-averaged sensitivity image for motion-compensated image reconstruction.
- Published
- 2014
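The random-sampling scheme above can be illustrated in one dimension: back-project only a random fraction of the possible LORs, reweight to keep the estimate unbiased, then post-filter to suppress the sampling noise. A sketch with hypothetical names and a moving-average filter standing in for the Gaussian post-filter:

```python
import random

def sampled_sensitivity(n_bins, n_lors, sample_frac, seed=0):
    """Monte-Carlo estimate of a (1-D, uniform) sensitivity profile by
    back-projecting only a random fraction of the possible LORs."""
    rng = random.Random(seed)
    sens = [0.0] * n_bins
    n_sampled = int(n_lors * sample_frac)
    for _ in range(n_sampled):
        b = rng.randrange(n_bins)       # bin intersected by this LOR
        sens[b] += 1.0 / sample_frac    # reweight for the subsampling
    return sens

def smooth(profile, half_width=2):
    """Moving-average filter: a crude 1-D stand-in for post-filtering
    the randomised sensitivity image (e.g. 2 mm FWHM Gaussian)."""
    n = len(profile)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

sens = smooth(sampled_sensitivity(n_bins=64, n_lors=100000, sample_frac=0.1))
```

The trade-off reported in the paper is visible here: smaller `sample_frac` means fewer back-projections but a noisier `sens`, which the post-filter then suppresses.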
49. Feasibility of motion-corrected planar projection imaging of single photon emitters: A phantom study
- Author
-
Andrew G. Weisenberger, S. Lee, William J. Ryder, Georgios I. Angelis, Steven R. Meikle, J. E. McKisson, John E. Gillam, Peter L. Kench, Andre Kyme, and John McKisson
- Subjects
Physics ,Tomographic reconstruction ,Planar projection ,business.industry ,Physics::Medical Physics ,Iterative reconstruction ,Imaging phantom ,Planar ,Optics ,Match moving ,Projection (mathematics) ,Computer vision ,Tomography ,Artificial intelligence ,business - Abstract
The kinetics of single photon emitting macromolecules (e.g. antibodies) are typically slower than those of small molecules, necessitating long or repeat acquisitions. We previously proposed the use of motion tracking and limited-angle tomographic reconstruction to characterise tracer kinetics over extended periods in awake rodents. In this study, we explored this approach by imaging a contrast phantom, moved with 6 degrees of freedom, using a prototype preclinical SPECT scanner with parallel-hole collimation and a fixed detector located at 0 and 90 degrees. The position of the phantom was tracked and data were acquired in list mode. Each event was motion-corrected and reconstructed using LM-MLEM. Planar projections were created by summing the reconstructed volume along the x-axis. Line profiles of the contrast phantom were compared for a planar reference projection and for projections generated from the motion-free, motion-corrupted and motion-corrected reconstructions. Projections created from the motion-corrected reconstruction agreed well with the planar reference projection of the stationary object and exhibited similar contrast. Whilst this initial study was limited to rigid motion, it demonstrates the feasibility of motion-corrected planar projections of a moving object.
- Published
- 2014
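The projection step described above — collapsing the motion-corrected reconstructed volume into a planar image by summing along the x-axis — can be sketched as follows (hypothetical names; plain nested lists stand in for a real volume array):

```python
def planar_projection(volume):
    """Collapse a reconstructed volume (nested lists indexed [x][y][z])
    into a planar projection by summing along the x-axis."""
    ny = len(volume[0])
    nz = len(volume[0][0])
    proj = [[0.0] * nz for _ in range(ny)]
    for x_slice in volume:               # one y-z slice per x index
        for j in range(ny):
            for k in range(nz):
                proj[j][k] += x_slice[j][k]
    return proj

# a 2x2x2 volume of ones: every projection bin sums two voxels
vol = [[[1.0, 1.0], [1.0, 1.0]] for _ in range(2)]
proj = planar_projection(vol)
```

The resulting `proj` is the synthetic planar view that the study compares against the true planar reference projection of the stationary phantom.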
50. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
- Author
-
William J. Ryder, John E. Gillam, Georgios I. Angelis, Steven R. Meikle, Roger Fulton, and Rezaul Bashar
- Subjects
POSIX Threads ,Coprocessor ,Workstation ,Computer science ,law ,Symmetric multiprocessing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,x86 ,Thread (computing) ,Parallel computing ,Iterative reconstruction ,Xeon Phi ,law.invention - Abstract
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor, with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximise runtime speeds. Comparing the purchase price of a Linux workstation with a Xeon Phi co-processor card against that of a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
- Published
- 2014
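The multi-threaded structure underlying such a port is essentially map/reduce over the event stream: each thread back-projects a private chunk of list-mode events, and the partial images are summed at the end. A Python sketch of that pattern (the actual port was C++ with pthreads and OpenMP; all names here are hypothetical, and back-projection is reduced to a toy one-bin deposit per event):

```python
from concurrent.futures import ThreadPoolExecutor

def backproject_chunk(events, n_bins):
    """Back-project one chunk of list-mode events into a private image
    (toy model: each event deposits one count in a single image bin)."""
    image = [0.0] * n_bins
    for bin_index in events:
        image[bin_index] += 1.0
    return image

def parallel_reconstruct(events, n_bins, n_workers=4):
    """Split the event stream across worker threads, back-project each
    chunk into a private image, then reduce the partial images -- the
    same map/reduce pattern an OpenMP port distributes across cores."""
    chunk = max(1, (len(events) + n_workers - 1) // n_workers)
    chunks = [events[i:i + chunk] for i in range(0, len(events), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(backproject_chunk, chunks,
                                 [n_bins] * len(chunks)))
    return [sum(col) for col in zip(*partials)]
```

Keeping a private image per thread avoids contended writes to a shared array; the paper's observation that host-to-card data transfer was rate-limiting corresponds here to the cost of shipping `chunks` to the workers and the partial images back.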