29 results for "Johannes Totz"
Search Results
2. Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation.
- Author
Jose Caballero, Christian Ledig, Andrew P. Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi
- Published
- 2017
- Full Text
- View/download PDF
3. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.
- Author
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi
- Published
- 2017
- Full Text
- View/download PDF
4. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network.
- Author
Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang
- Published
- 2016
- Full Text
- View/download PDF
5. Database-Based Estimation of Liver Deformation under Pneumoperitoneum for Surgical Image-Guidance and Simulation.
- Author
Stian Flage Johnsen, Stephen A. Thompson, Matthew J. Clarkson, Marc Modat, Yi Song, Johannes Totz, Kurinchi Gurusamy, Brian R. Davidson, Zeike A. Taylor, David J. Hawkes, and Sébastien Ourselin
- Published
- 2015
- Full Text
- View/download PDF
6. Fast Semi-dense Surface Reconstruction from Stereoscopic Video in Laparoscopic Surgery.
- Author
Johannes Totz, Stephen A. Thompson, Danail Stoyanov, Kurinchi Gurusamy, Brian R. Davidson, David J. Hawkes, and Matthew J. Clarkson
- Published
- 2014
- Full Text
- View/download PDF
7. Locally rigid, vessel-based registration for laparoscopic liver surgery.
- Author
Yi Song, Johannes Totz, Steve Thompson, Stian Flage Johnsen, Dean C. Barratt, Crispin Schneider, Kurinchi Gurusamy, Brian R. Davidson, Sébastien Ourselin, David J. Hawkes, and Matthew J. Clarkson
- Published
- 2015
- Full Text
- View/download PDF
8. The NifTK software platform for image-guided interventions: platform overview and NiftyLink messaging.
- Author
Matthew J. Clarkson, Gergely Zombori, Steve Thompson, Johannes Totz, Yi Song, Miklos Espak, Stian Flage Johnsen, David J. Hawkes, and Sébastien Ourselin
- Published
- 2015
- Full Text
- View/download PDF
9. A novel low-friction manipulator for bimanual joint-level robot control and active constraints.
- Author
George P. Mylonas, Johannes Totz, Valentina Vitiello, Christopher J. Payne, and Guang-Zhong Yang
- Published
- 2012
- Full Text
- View/download PDF
10. Dense Surface Reconstruction for Enhanced Navigation in MIS.
- Author
Johannes Totz, Peter Mountney, Danail Stoyanov, and Guang-Zhong Yang
- Published
- 2011
- Full Text
- View/download PDF
11. Visual Search Behaviour and Analysis of Augmented Visualisation for Minimally Invasive Surgery.
- Author
Kenko Fujii, Johannes Totz, and Guang-Zhong Yang
- Published
- 2011
- Full Text
- View/download PDF
12. Frame Interpolation with Multi-Scale Deep Loss Functions and Generative Adversarial Networks.
- Author
Joost R. van Amersfoort, Wenzhe Shi, Alejandro Acosta, Francisco Massa, Johannes Totz, Zehan Wang, and Jose Caballero
- Published
- 2017
13. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.
- Author
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi
- Published
- 2016
14. Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation.
- Author
Jose Caballero, Christian Ledig, Andrew P. Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi
- Published
- 2016
15. Enhanced visualisation for minimally invasive surgery.
- Author
Johannes Totz, Kenko Fujii, Peter Mountney, and Guang-Zhong Yang
- Published
- 2012
- Full Text
- View/download PDF
16. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study
- Author
Matthew J. Clarkson, David J. Hawkes, Crispin Schneider, Sébastien Ourselin, Johannes Totz, M. H. Sodergren, Kurinchi Selvan Gurusamy, Moustafa Allam, Danail Stoyanov, Yi Song, Adrien E. Desjardins, Brian R. Davidson, Dean C. Barratt, and Stephen A. Thompson
- Subjects
Adult; Male; medicine.medical_specialty; Computer-assisted surgery; RESECTION; medicine.medical_treatment; 03 medical and health sciences; Patient safety; 0302 clinical medicine; Semi-automatic registration; Clinical endpoint; Image-guided surgery; Humans; Medicine; Medical physics; Aged; Aged, 80 and over; Science & Technology; Augmented Reality; GUIDANCE; business.industry; Orientation (computer vision); Iterative closest point; 1103 Clinical Sciences; Usability; Stereoscopic surface reconstruction; Middle Aged; New Technology; 3. Good health; Liver; Surgery, Computer-Assisted; 030220 oncology & carcinogenesis; Feasibility Studies; Female; 030211 gastroenterology & hepatology; Surgery; Augmented reality; LEARNING-CURVE; business; Life Sciences & Biomedicine; Laparoscopic liver surgery; SYSTEM
- Abstract
Background: The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration.
Methods: Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was defined as successful registration as determined by the operating surgeon. Secondary endpoints were system usability as assessed by a surgeon questionnaire and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes.
Results: The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound.
Conclusion: The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
- Published
- 2020
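The abstract above reports stereoscopic surface reconstruction followed by iterative closest point (ICP) matching for semi-automatic registration. The following minimal Python sketch (an editorial illustration using NumPy and SciPy, not the SmartLiver implementation) shows the basic rigid point-to-point ICP loop such a pipeline builds on; the surface reconstruction and liver segmentation steps are out of scope here.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source, target, iterations=50):
    """Rigidly align `source` (N x 3) to `target` (M x 3); return moved points and RMS error."""
    tree = cKDTree(target)
    moved = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)                 # closest target point per source point
        R, t = best_rigid_transform(moved, target[idx])
        moved = moved @ R.T + t
    dists, _ = tree.query(moved)
    return moved, float(np.sqrt(np.mean(dists ** 2)))
```

The RMS value returned here is the kind of residual surface distance that a registration accuracy figure (as quoted in the abstract) summarises.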
17. Accuracy validation of an image guided laparoscopy system for liver resection.
- Author
Stephen A. Thompson, Johannes Totz, Yi Song, Stian Flage Johnsen, Danail Stoyanov, Sébastien Ourselin, Kurinchi Gurusamy, Crispin Schneider, Brian R. Davidson, David J. Hawkes, and Matthew J. Clarkson
- Published
- 2015
- Full Text
- View/download PDF
18. Analysis of OpenCL Support for Mobile GPUs on Android
- Author
Johannes Totz, Carlos Merino, and Alejandro Acosta
- Subjects
Software portability; Software; business.industry; Computer science; Deep learning; Embedded system; Video processing; Artificial intelligence; Feature phone; Android (operating system); business; Mobile device; Efficient energy use
- Abstract
The capabilities of mobile devices, like smartphones and tablets, are increasing every year. As each system-on-a-chip (SoC) generation provides better performance while also being more energy efficient than its predecessors, running computationally intensive tasks on the device becomes feasible. This enables advanced image filtering, video processing and machine learning applications based on deep learning. The dominant platform for mobile devices is Android, running on a hugely diverse set of devices, from low-end feature phones to high-end set-top boxes. OpenCL allows one to harness that compute power with its portable cross-platform API and language. However, while OpenCL code is source-portable, the achievable performance is not. The differing characteristics of SoCs mean that the application developer needs to take advantage of SoC-specific capabilities to achieve maximum performance. This paper presents an analysis of OpenCL adoption across Android devices that have the Twitter Android app installed. The analysis shows that most of the sampled Android devices support OpenCL, but there are differences in OpenCL version support, architectures and memory models. This analysis enables software developers to make an informed decision about which OpenCL implementations to target in order to provide performance portability across different hardware manufacturers and support the largest possible number of devices.
- Published
- 2018
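The study above surveys OpenCL support (versions, architectures, memory models) across Android devices. As a rough illustration of the kind of capability query involved, here is a desktop-side Python sketch using the pyopencl bindings; the paper's survey ran inside an Android app, so the field selection and structure here are assumptions rather than the authors' tooling.

```python
# Sketch only: enumerate OpenCL platforms/devices and collect properties of the kind
# the analysis cares about (version, memory characteristics, compute resources).
import pyopencl as cl

def survey_opencl_devices():
    records = []
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            records.append({
                "platform": platform.name,
                "device": dev.name,
                "opencl_version": dev.version,                      # e.g. "OpenCL 2.0 ..."
                "compute_units": dev.max_compute_units,
                "global_mem_mb": dev.global_mem_size // (1024 * 1024),
                "unified_host_memory": bool(dev.host_unified_memory),
            })
    return records

if __name__ == "__main__":
    for record in survey_opencl_devices():
        print(record)
```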
19. Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation
- Author
Zehan Wang, Andrew Peter Aitken, Jose Caballero, Johannes Totz, Wenzhe Shi, Alejandro Acosta, and Christian Ledig
- Subjects
FOS: Computer and information sciences; Image fusion; Motion compensation; Artificial neural network; Computer science; business.industry; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; 020206 networking & telecommunications; 02 engineering and technology; Iterative reconstruction; Convolutional neural network; Convolution; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; Computer vision; Artificial intelligence; business; Image resolution
- Abstract
Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
- Published
- 2017
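For readers wanting a concrete picture of the "early fusion" spatio-temporal variant described in the abstract above, here is a minimal PyTorch sketch (an illustration, not the authors' network or training code): consecutive low-resolution frames are stacked along the channel axis, features are extracted in LR space, and a sub-pixel convolution (pixel shuffle) produces the high-resolution centre frame. The motion-compensation module is omitted.

```python
import torch
import torch.nn as nn

class EarlyFusionVSR(nn.Module):
    def __init__(self, num_frames=3, upscale=4, channels=1, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_frames * channels, features, 5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            # Predict C * r^2 channels in LR space, then rearrange them into an
            # r-times larger image with the sub-pixel convolution (pixel shuffle).
            nn.Conv2d(features, channels * upscale ** 2, 3, padding=1),
            nn.PixelShuffle(upscale),
        )

    def forward(self, frames):
        # frames: (batch, num_frames, channels, H, W) -> fuse frames along the channel axis
        b, n, c, h, w = frames.shape
        return self.body(frames.reshape(b, n * c, h, w))

# Example: three 32x32 luminance frames -> one 128x128 frame
out = EarlyFusionVSR()(torch.randn(2, 3, 1, 32, 32))
print(out.shape)  # torch.Size([2, 1, 128, 128])
```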
20. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- Author
Johannes Totz, Zehan Wang, Ferenc Huszar, Alejandro Acosta, Christian Ledig, Andrew Peter Aitken, Lucas Theis, Wenzhe Shi, Andrew Cunningham, Jose Caballero, and Alykhan Tejani
- Subjects
FOS: Computer and information sciences; Similarity (geometry); Computer science; Computer Vision and Pattern Recognition (cs.CV); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computer Vision and Pattern Recognition; Bioengineering; Machine Learning (stat.ML); 02 engineering and technology; Iterative reconstruction; 4603 Computer Vision and Multimedia Computation; Convolutional neural network; 46 Information and Computing Sciences; Image texture; Statistics - Machine Learning; 4611 Machine Learning; 0202 electrical engineering, electronic engineering, information engineering; Image resolution; 40 Engineering; Network architecture; Pixel; business.industry; 020206 networking & telecommunications; Pattern recognition; 4607 Graphics, Augmented Reality and Games; Image translation; 020201 artificial intelligence & image processing; Artificial intelligence; business; 4006 Communications Engineering
- Abstract
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
- Published
- 2016
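The perceptual loss described above combines a VGG-feature content loss with an adversarial loss. A minimal PyTorch sketch of that combination follows; it assumes torchvision's pretrained VGG19 as the feature extractor and a hypothetical discriminator producing logits for the super-resolved images, and the layer cut-off and weighting are illustrative rather than the paper's exact settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen VGG19 feature extractor used for the content loss (layer choice is illustrative).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def srgan_generator_loss(sr, hr, disc_logits_on_sr, adv_weight=1e-3):
    """Perceptual loss = VGG-feature content loss + weighted adversarial loss."""
    content = mse(vgg(sr), vgg(hr))                      # distance in feature space, not pixel space
    adversarial = bce(disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
    return content + adv_weight * adversarial
```

The adversarial term rewards outputs the discriminator labels as real, which is what pushes the generator towards the natural-image manifold described in the abstract.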
21. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network
- Author
Andrew Peter Aitken, Rob Bishop, Johannes Totz, Zehan Wang, Wenzhe Shi, Daniel Rueckert, Jose Caballero, and Ferenc Huszar
- Subjects
FOS: Computer and information sciences; Computer science; Computer Vision and Pattern Recognition (cs.CV); Feature extraction; Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Machine Learning (stat.ML); 02 engineering and technology; Iterative reconstruction; Convolutional neural network; Convolution; Statistics - Machine Learning; 0202 electrical engineering, electronic engineering, information engineering; Computer vision; Image resolution; Pixel; Artificial neural network; business.industry; 020207 software engineering; Filter (signal processing); Feature (computer vision); Bicubic interpolation; 020201 artificial intelligence & image processing; Artificial intelligence; business; Interpolation
- Abstract
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
- Published
- 2016
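The "efficient sub-pixel convolution layer" in the abstract above boils down to a periodic shuffling of channels into space. The short sketch below (my own PyTorch illustration, not the authors' code) implements that rearrangement explicitly and checks it against the built-in operator; it is how the final C·r² low-resolution feature maps become the C-channel high-resolution output.

```python
import torch
import torch.nn.functional as F

def periodic_shuffle(x: torch.Tensor, r: int) -> torch.Tensor:
    """Rearrange a (B, C*r^2, H, W) tensor into (B, C, H*r, W*r)."""
    b, cr2, h, w = x.shape
    c = cr2 // (r * r)
    x = x.reshape(b, c, r, r, h, w)        # split the channel axis into (C, r, r)
    x = x.permute(0, 1, 4, 2, 5, 3)        # interleave: (B, C, H, r, W, r)
    return x.reshape(b, c, h * r, w * r)

x = torch.randn(2, 9, 16, 16)              # C = 1, r = 3  ->  1 * 3^2 = 9 channels
assert torch.allclose(periodic_shuffle(x, 3), F.pixel_shuffle(x, 3))
print(periodic_shuffle(x, 3).shape)        # torch.Size([2, 1, 48, 48])
```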
22. Database-Based Estimation of Liver Deformation under Pneumoperitoneum for Surgical Image-Guidance and Simulation
- Author
Johannes Totz, Brian R. Davidson, Zeike A. Taylor, Stian Flage Johnsen, Stephen A. Thompson, David J. Hawkes, Kurinchi Selvan Gurusamy, Matthew J. Clarkson, Sébastien Ourselin, Yi Song, and Marc Modat
- Subjects
Liver surgery; Insufflation; medicine.medical_specialty; Computer science; Deformation (meteorology); medicine.disease; body regions; medicine.anatomical_structure; Pneumoperitoneum; medicine; Abdomen; Radiology; Image guidance; Simulation
- Abstract
The insufflation of the abdomen in laparoscopic liver surgery leads to significant deformation of the liver. Estimating the shape and position of the liver after insufflation has many important applications, such as providing an initial guess for the surface-based registration algorithms used in image guidance, and enabling realistic patient-specific surgical simulation.
- Published
- 2015
23. PTH-103 Evaluation of a novel system for image guided laparoscopic liver surgery in an animal model and first clinical experience
- Author
Brian R. Davidson, Sébastien Ourselin, Matthew J. Clarkson, Crispin Schneider, Stephen A. Thompson, Johannes Totz, Yi Song, David J. Hawkes, Stian Flage Johnsen, Kurinchi Selvan Gurusamy, and Danail Stoyanov
- Subjects
Liver surgery; Medical education; Pathology; medicine.medical_specialty; Organ movement; business.industry; Gastroenterology; Animal model; Liver lesion; Liver anatomy; Blood loss; General partnership; medicine; business; health care economics and organizations; Large animal
- Abstract
Introduction: Compared to open surgery, laparoscopic liver resection (LLR) of cancer benefits patients by reducing pain, length of stay and morbidity. However, LLR is often more challenging than open surgery due to the difficulty of identifying and dividing major vascular and bile duct branches. Some of these challenges may be resolved by using image guidance systems (IGS) to overlay a 3D model of the liver structure onto the liver seen at laparoscopy. Current IGS technologies rely on manual landmark definition or ultrasound for co-registration (alignment of the 3D model and the in-vivo liver) and do not take organ movement and deformation into account. Image guidance for LLR using cone-beam CT has also been attempted. Our group has developed an IGS that automatically registers a liver model derived from pre-operative CT to the in-vivo liver surface using computer vision techniques. Laparoscope position in relation to the liver is determined by optical tracking. Results from a porcine study and its first application in a patient are presented here.
Method: Laparoscopic microwave ablation was used to create identifiable liver "lesions" and a CT was obtained in Landrace pigs under general anaesthesia (GA). One week later, laparoscopic left hepatectomy was performed under GA using our system. A 46-year-old female who presented with an indeterminate liver lesion at the junction of segments 5/6 underwent a hepatic wedge resection under image guidance. Data on system and surgical performance were collected in both studies.
Results: Experiments were conducted in 5 animals, with a successful image overlay achieved in 3 cases. Failure of overlay was attributed to distorted liver anatomy secondary to adhesions formed around regions of ablated liver. Initial registration of the overlaid 3D model was accomplished in less than 10 min. The setup and calibration of equipment for the clinical case took 20 min. Initial registration required 3 min and, as in the animal study, did not require repetition. Image overlay was successfully achieved and the operation carried out using an ultrasonic scalpel with a total procedure time of 190 min. Estimated blood loss was
Conclusion: The IGS presented here has been evaluated in both a large animal model and subsequently in a clinical scenario. The system appears to be feasible for clinical use and has benefits in regards to uninterrupted surgical workflow, lack of radiation exposure and automatic compensation for organ motion.
Disclosure of interest: C. Schneider, S. Thompson, J. Totz, Y. Song, D. Stoyanov, K. Gurusamy, D. Hawkes, M. Clarkson and B. Davidson received grant/research support from the Health Innovation Challenge Fund (HICF-T4-317), a parallel funding partnership between the Wellcome Trust and the Department of Health; this publication presents independent research commissioned by that fund, and the views expressed are those of the authors and not necessarily those of the Wellcome Trust or the Department of Health. S. Johnsen and S. Ourselin received grant/research support from the Intelligent Imaging Programme Grant (ref: EP/H046410/1).
- Published
- 2015
24. 18. A pilot study evaluating the overlay display method for image guidance in laparoscopic liver surgery
- Author
Kurinchi Selvan Gurusamy, Johannes Totz, Matthew J. Clarkson, David J. Hawkes, Danail Stoyanov, Brian R. Davidson, Stephen A. Thompson, Yi Song, Crispin Schneider, and Sébastien Ourselin
- Subjects
Liver surgery; medicine.medical_specialty; Oncology; business.industry; medicine; Surgery; General Medicine; Radiology; Overlay; business; Image guidance
- Published
- 2016
25. Preliminary results from a clinical study evaluating a novel image guidance system for laparoscopic liver surgery
- Author
Danail Stoyanov, Johannes Totz, Crispin Schneider, David J. Hawkes, Yi Song, Adrien E. Desjardins, Stephen A. Thompson, Kurinchi Selvan Gurusamy, Matthew J. Clarkson, and Brian R. Davidson
- Subjects
Clinical study; Liver surgery; medicine.medical_specialty; Hepatology; business.industry; General surgery; Gastroenterology; Medicine; Radiology; business; Image guidance
- Published
- 2016
26. Visual Search Behaviour and Analysis of Augmented Visualisation for Minimally Invasive Surgery
- Author
Johannes Totz, Guang-Zhong Yang, and Kenko Fujii
- Subjects
Visual search; Computer science; business.industry; Region of interest; Orientation (computer vision); Trajectory; Eye movement; Eye tracking; Computer vision; Artificial intelligence; business; Sensory cue; Visualization
- Abstract
Disorientation has been one of the key issues hampering the adoption of natural orifice translumenal endoscopic surgery (NOTES). A new Dynamic View Expansion (DVE) technique was recently introduced as a method to increase the field-of-view, as well as to provide temporal visual cues that encode the camera motion trajectory. This paper presents a systematic analysis of visual search behaviour during the use of DVE for NOTES navigation. The study compares spatial orientation and latency with and without the new DVE technique with motion trajectory encoding. Eye tracking data was recorded and modelled using Markov chains to characterise the visual search behaviour, where a new region of interest (ROI) definition was used to determine the states in the transition graphs. The resulting state transition graphs formed from the participants' eye movements showed a marked difference in visual search behaviour, with increased cross-referencing between older (greyer) and more recently observed (less grey) regions of the expanded view. The results demonstrate the advantages of using motion trajectory encoding for DVE.
- Published
- 2012
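The study above models fixation sequences over regions of interest (ROIs) as a Markov chain. A tiny Python/NumPy sketch of that modelling step follows (illustrative only; the ROI labels and data are made up): count transitions between consecutively fixated ROIs and normalise each row into transition probabilities.

```python
import numpy as np

def transition_matrix(fixations, num_rois):
    """First-order Markov transition probabilities between fixated ROIs."""
    counts = np.zeros((num_rois, num_rois))
    for a, b in zip(fixations[:-1], fixations[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no outgoing transitions stay all-zero instead of dividing by zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical ROI indices: 0 = live view, 1 = recently faded region, 2 = older grey region
print(transition_matrix([0, 1, 0, 2, 2, 1, 0], num_rois=3))
```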
27. Dense surface reconstruction for enhanced navigation in MIS
- Author
Johannes Totz, Peter Mountney, Danail Stoyanov, and Guang-Zhong Yang
- Subjects
Imaging, Three-Dimensional; Surgery, Computer-Assisted; Phantoms, Imaging; Surface Properties; Biomedical Engineering; Image Processing, Computer-Assisted; Humans; Minimally Invasive Surgical Procedures; Reproducibility of Results; Laparoscopy; Robotics
- Abstract
Recent introduction of dynamic view expansion has led to the development of computer vision methods for minimally invasive surgery to artificially expand the intra-operative field-of-view of the laparoscope. This provides improved awareness of the surrounding anatomical structures and minimises the effect of disorientation during surgical navigation. It permits the augmentation of live laparoscope images with information from previously captured views. Current approaches, however, can only represent the tissue geometry as planar surfaces or sparse 3D models, thus introducing noticeable visual artefacts in the final rendering results. This paper proposes a high-fidelity tissue geometry mapping by combining a sparse SLAM map with semi-dense surface reconstruction. The method is validated on phantom data with known ground truth, as well as in-vivo data captured during a robotic assisted MIS procedure. The derived results have shown that the method is able to effectively increase the coverage of the expanded surgical view without compromising mapping accuracy.
- Published
- 2011
28. Enhanced visualisation for minimally invasive surgery
- Author
Guang-Zhong Yang, Johannes Totz, Kenko Fujii, and Peter Mountney
- Subjects
Computer science; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Biomedical Engineering; Video Recording; Health Informatics; Field of view; Imaging, Three-Dimensional; Image Interpretation, Computer-Assisted; Specular highlight; Humans; Radiology, Nuclear Medicine and imaging; Computer vision; Spatial contextual awareness; business.industry; Orientation (computer vision); Phantoms, Imaging; Reproducibility of Results; Endoscopy; General Medicine; Computer Graphics and Computer-Aided Design; Computer Science Applications; Visualization; Surgery, Computer-Assisted; Feature (computer vision); Peripheral vision; Surgery; Computer Vision and Pattern Recognition; Artificial intelligence; business; Algorithms; Texture synthesis
- Abstract
Endoscopes used in minimally invasive surgery provide a limited field of view, thus requiring a high degree of spatial awareness and orientation. Attempts at expanding this small, restricted view with previously observed imagery have been made by researchers and are generally known as image mosaicing or dynamic view expansion. For minimally invasive endoscopy, SLAM-based methods have been shown to have potential value but have yet to address effective visualisation techniques.
The live endoscopic video feed is expanded with previously observed footage. To this end, a method that highlights the difference between the actual camera image and historic data observed earlier is proposed. Old video data is faded out to grey scale to mimic human peripheral vision. Specular highlights are removed with the help of texture synthesis to avoid distracting visual cues. The method is further evaluated on in vivo and phantom sequences by a detailed user study to examine the ability of the user in discerning temporal motion trajectories while visualising the expanded field of view, a feature that is of practical value for enhancing spatial awareness and orientation.
The difference between historic data and live video is integrated effectively. The use of a single texture domain generated by planar parameterisation is demonstrated for view expansion. Specular highlights can be removed through texture synthesis without introducing noticeable artefacts. The implicit encoding of the motion trajectory of the endoscopic camera visualised by the proposed method facilitates both global awareness and temporal evolution of the scene.
Dynamic view expansion provides more context for navigation and orientation by establishing reference points beyond the camera's field of view. Effective integration of visual cues is paramount for concise visualisation.
- Published
- 2011
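One concrete piece of the visualisation described above is fading older footage towards grey scale to mimic peripheral vision before compositing it around the live view. A small NumPy sketch of that blending step follows (an illustration under my own assumptions about the age parameter, not the authors' pipeline).

```python
import numpy as np

def fade_to_grey(rgb, age_frames, max_age=30.0):
    """Blend an RGB patch (H, W, 3, values in [0, 1]) towards its luminance as it ages."""
    luminance = rgb @ np.array([0.299, 0.587, 0.114])        # ITU-R BT.601 grey-scale weights
    alpha = float(np.clip(age_frames / max_age, 0.0, 1.0))   # 0 = just observed, 1 = old
    return (1.0 - alpha) * rgb + alpha * luminance[..., None]

patch = np.random.rand(8, 8, 3)
print(fade_to_grey(patch, age_frames=45).shape)              # (8, 8, 3), fully grey here
```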
29. Dense Surface Reconstruction for Enhanced Navigation in MIS
- Author
Guang-Zhong Yang, Johannes Totz, Danail Stoyanov, and Peter Mountney
- Subjects
Ground truth; Computer science; business.industry; Invasive surgery; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer vision; Artificial intelligence; business; Surface reconstruction; ComputingMethodologies_COMPUTERGRAPHICS; Rendering (computer graphics)
- Abstract
Recent introduction of dynamic view expansion has led to the development of computer vision methods for minimally invasive surgery to artificially expand the intra-operative field-of-view of the laparoscope. This provides improved awareness of the surrounding anatomical structures and minimises the effect of disorientation during surgical navigation. It permits the augmentation of live laparoscope images with information from previously captured views. Current approaches, however, can only represent the tissue geometry as planar surfaces or sparse 3D models, thus introducing noticeable visual artefacts in the final rendering results. This paper proposes a high-fidelity tissue geometry mapping by combining a sparse SLAM map with semi-dense surface reconstruction. The method is validated on phantom data with known ground truth, as well as in-vivo data captured during a robotic assisted MIS procedure. The derived results have shown that the method is able to effectively increase the coverage of the expanded surgical view without compromising mapping accuracy.
- Published
- 2011