41 results for "Mathis-Ullrich F"
Search Results
2. A learning robot for cognitive camera control in minimally invasive surgery
- Author
-
Wagner, Martin, Bihlmaier, Andreas, Kenngott, Hannes Götz, Mietkowski, Patrick, Scheikl, Paul Maria, Bodenstedt, Sebastian, Schiepe-Tiska, Anja, Vetter, Josephin, Nickel, Felix, Speidel, S., Wörn, H., Mathis-Ullrich, F., and Müller-Stich, B. P.
- Published
- 2021
- Full Text
- View/download PDF
3. Robotik im Operationssaal – (Ko‑)Operieren mit Kollege Roboter [Robotics in the operating room – (co‑)operating with colleague robot]
- Author
-
Mathis-Ullrich, F. and Scheikl, P. M.
- Published
- 2021
- Full Text
- View/download PDF
4. A Self-Assembling Extendable Tendon-Driven Continuum Robot with Variable Length
- Author
-
Fischer, N., Becher, M., Höltge, L., and Mathis-Ullrich, F.
- Published
- 2023
- Full Text
- View/download PDF
5. A Laryngoscope with Shape Memory Actuation
- Author
-
Fischer Nikola, Ho Patty, Marzi Christian, Schuler Patrick, and Mathis-Ullrich Franziska
- Subjects
- endotracheal intubation, ETI, smart materials, SMA, nickel-titanium, Medicine
- Abstract
Endotracheal intubation is a medical procedure to secure a patient's airway, which can be challenging in emergency situations due to an individual's anatomy. Intubation blades can benefit from an adjustable design based on shape memory alloys. We present a novel blade design with two degrees of freedom that continuously transforms a straight blade into a curved one. Our demonstrators reached mean angular deflections of up to 16° without exceeding outer blade temperatures of 40 °C. Mean forces of almost 24 N could be applied. This approach has the potential to reduce the complexity of intubation procedures and increase the success rate, especially in time-critical emergency situations.
- Published
- 2024
- Full Text
- View/download PDF
6. Interactive Surgical Liver Phantom for Cholecystectomy Training
- Author
-
Schüßler Alexander, Younis Rayan, Paik Jamie, Wagner Martin, Mathis-Ullrich Franziska, and Kunz Christian
- Subjects
- surgical phantom, cholecystectomy, robot-assisted surgery, surgical training, force modeling, Medicine
- Abstract
Training and prototype development in robot-assisted surgery require appropriate and safe environments for the execution of surgical procedures. Current dry-lab laparoscopy phantoms often lack the ability to mimic complex, interactive surgical tasks. This work presents an interactive surgical phantom for cholecystectomy. The phantom enables the removal of the gallbladder during cholecystectomy by allowing manipulation of and cutting interactions with the synthetic tissue. The force-displacement behavior of the gallbladder is modelled based on retraction demonstrations. The force model is compared to that of ex-vivo porcine gallbladders and evaluated on its ability to estimate retraction forces.
- Published
- 2024
- Full Text
- View/download PDF
7. Optimising speech recognition using LLMs: an application in the surgical domain
- Author
-
Matasyoh Nevin M., Zeineldin Ramy A., and Mathis-Ullrich Franziska
- Subjects
- error correction, zero-shot prompting, few-shot prompting, large language model, automatic speech recognition, Medicine
- Abstract
Automatic speech recognition (ASR), powered by deep learning techniques, is crucial for enhancing human-computer interaction. However, its full potential remains unrealized in diverse real-world environments, where challenges such as dialects, accents, and domain-specific jargon persist, particularly in fields like surgery. Here, we investigate the potential of large language models (LLMs) as error correction modules for ASR. We leverage Whisper-medium or ASR-LibriSpeech for speech recognition, and GPT-3.5 or GPT-4 for error correction. We employ various prompting methods, from zero-shot to few-shot with leading questions and sample medical terms, to correct wrong transcriptions. Results, measured by word error rate (WER), reveal Whisper's superior transcription accuracy over ASR-LibriSpeech, with a WER of 11.93% compared to 32.09%. GPT-3.5, with the few-shot prompting method with medical terms, further enhances performance, achieving WER reductions of 64.29% and 37.83% for Whisper and ASR-LibriSpeech, respectively. Additionally, Whisper exhibits faster execution speed. Substituting GPT-3.5 with GPT-4 further enhances transcription accuracy. Despite some challenges, our approach demonstrates the potential of leveraging domain-specific knowledge through LLM prompting for accurate transcription, particularly in sophisticated domains like surgery.
- Published
- 2024
- Full Text
- View/download PDF
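The word error rate (WER) reported in the abstract above is the word-level edit distance between a reference and a hypothesis transcript, normalized by the reference length. A minimal sketch, with the function name and example figures chosen for illustration (the 64.29% relative reduction reported for Whisper with GPT-3.5 would take a WER of 11.93% down to roughly 4.26%):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution plus one deletion against a 4-word reference -> WER 0.5
print(wer("clip the cystic duct", "clip a cystic"))  # → 0.5
```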
8. Vibrational Feedback for a Teleoperated Continuum Robot with Non-contact Endoscope Localization
- Author
-
Fischer Jonas, Andreas Daniel, Beckerle Philipp, Mathis-Ullrich Franziska, and Marzi Christian
- Subjects
- vibrotactile feedback, continuum robot, capacitive sensor, minimally invasive, surgical instruments, Medicine
- Abstract
Limited or absent haptic feedback is reported as a factor hindering the continued adoption of surgical robots. This article presents a proof of concept for vibrotactile feedback integrated into a continuum robot to explore whether such feedback improves spatial perception in surgical settings. The robot is equipped with a capacitive sensor for non-contact endoscope localization, enabling spatial awareness of the robot's tool center point (TCP) within the surgical environment. The data from the sensor are processed and transmitted to a bracelet worn by the user, which generates vibrotactile feedback. The bracelet contains four vibration motors providing tactile cues for navigation and positioning of the robot's TCP. All subsystems are integrated into a unified system to deliver vibrotactile feedback to the user. When the user maneuvers the TCP of the robot near an object, they receive vibrotactile feedback via the bracelet: the intensity of vibration increases as the TCP approaches the object, and the direction of the obstacle is mapped onto the bracelet. Initial functional tests were performed and confirm the functionality of the proposed system.
- Published
- 2024
- Full Text
- View/download PDF
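The feedback scheme described in the abstract above, with vibration intensity growing as the TCP nears an object and the obstacle direction mapped to one of four bracelet motors, could be sketched as follows. The sensing range, linear intensity law, motor layout, and function names are assumptions for illustration, not the paper's implementation:

```python
MOTORS = {0: "dorsal", 1: "radial", 2: "ventral", 3: "ulnar"}  # assumed 4-motor layout

def vibration_command(distance_mm: float, direction_deg: float,
                      max_range_mm: float = 20.0) -> tuple[str, float]:
    """Map sensed obstacle distance and direction to a motor and an intensity in [0, 1].

    Intensity rises linearly as the TCP approaches the object; beyond the
    assumed sensing range the bracelet stays silent.
    """
    intensity = max(0.0, 1.0 - distance_mm / max_range_mm)
    # Quantize the obstacle direction to the nearest of the four motors (90° sectors).
    motor_idx = round((direction_deg % 360) / 90) % 4
    return MOTORS[motor_idx], intensity

print(vibration_command(5.0, 95.0))  # → ('radial', 0.75): near obstacle on the radial side
```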
9. Statistical Shape Models for Grasp Point Determination in Laparoscopic Surgeries
- Author
-
Kunz Christian, Kraus Maria, Younis Rayan, Wagner Martin, and Mathis-Ullrich Franziska
- Subjects
- grasp point determination, statistical shape models, cholecystectomy, robot-assisted surgery, Medicine
- Abstract
Robotic assistance systems are used increasingly in the operating room, with the goal of supporting surgeons and automating parts of a procedure. Laparoscopic cholecystectomy is one of the most common procedures in Germany. We aim to automate the assistant's grasp task in this procedure. To achieve this goal, the grasp points on the gallbladder first need to be determined. In this work, we therefore present statistical shape model fitting to the gallbladder for grasp point determination. Gallbladder and liver point clouds are utilized as inputs. A registration algorithm is used to fit the shape model to the gallbladder mesh. The process is evaluated on three different datasets, achieving successful grasp point identification of 90% for artificially created gallbladders, 100% for our silicone phantom model, and 90% for ex-vivo organs.
- Published
- 2024
- Full Text
- View/download PDF
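The registration step mentioned in the abstract above hinges on rigidly aligning corresponding points of the shape model and the gallbladder point cloud. A minimal illustrative stand-in is the Kabsch algorithm for optimal rotation and translation; the paper's actual registration pipeline is not specified here and may differ:

```python
import numpy as np

def kabsch_align(source: np.ndarray, target: np.ndarray):
    """Find rotation R and translation t minimizing the point-wise error
    ||R @ source_i + t - target_i|| for corresponding (N, 3) point sets."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: recover a known rotation about the z-axis plus a translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch_align(pts, moved)
print(np.allclose(R_est, R_true))  # → True
```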
10. Robotik im Operationssaal – (Ko‑)Operieren mit Kollege Roboter [Robotics in the operating room – (co‑)operating with colleague robot]
- Author
-
Mathis-Ullrich, F. and Scheikl, P. M.
- Published
- 2020
- Full Text
- View/download PDF
11. Segmentation of the mouse skull for MRI guided transcranial focused ultrasound therapy planning
- Author
-
Linte, Cristian A., Siewerdsen, Jeffrey H., Hopp, T., Springer, L., Gross, C., Grudzenski-Theis, S., Mathis-Ullrich, F., and Ruiter, N.V.
- Published
- 2022
- Full Text
- View/download PDF
12. Annotation-efficient learning of surgical instrument activity in neurosurgery
- Author
-
Philipp Markus, Alperovich Anna, Lisogorov Alexander, Gutt-Will Marielena, Mathis Andrea, Saur Stefan, Raabe Andreas, and Mathis-Ullrich Franziska
- Subjects
- annotation-efficient learning, neurosurgery, instrument localization, medical deep learning, Medicine
- Abstract
Machine learning-based solutions rely heavily on the quality and quantity of the training data. In the medical domain, the main challenge is to acquire rich and diverse annotated datasets for training. We propose to decrease the annotation effort and further diversify the dataset by introducing an annotation-efficient learning workflow. Instead of costly pixel-level annotation, we require only image-level labels, as the remainder is covered by simulation. Thus, we obtain a large-scale dataset with realistic images and accurate ground-truth annotations. We use this dataset for the instrument activity localization task together with a student-teacher approach. We demonstrate the benefits of our workflow compared to state-of-the-art methods in instrument localization that are trained only on clinical datasets fully annotated by human experts.
- Published
- 2022
- Full Text
- View/download PDF
13. Robotic Sensorized Gastroendoscopy with Wireless Single-Hand Control
- Author
-
García Gabriela, Fischer Nikola, Marzi Christian, and Mathis-Ullrich Franziska
- Subjects
- robotic gastroendoscopy, teleoperated, hand-held control, sensorized, Medicine
- Abstract
The manipulation of flexible endoscopes is a procedure that requires great dexterity, as it demands the synchronized use of both hands in parallel. Imprecise handling during gastroendoscopy could harm the digestive tract. Our solution allows the physician to use only one hand to wirelessly control the forward, backward, and tip-bending motion. The proposed system provides endoscopic vision and tactile impact-force sensing at the tip to detect the force applied to tissue and thus avoid damage. We experimentally evaluate the handling of the robotic system in open space and inside a medical phantom. The results revealed a training effect, with less time required for task completion and a reduction of the average impact force after only 5 runs. The proposed system was successfully controlled using one hand and, together with the force information, could enhance the physician's experience during endoscopy. Future work will address axial control and an intensive user study with clinical experts.
- Published
- 2022
- Full Text
- View/download PDF
14. Augmented Reality-based Robot Control for Laparoscopic Surgery
- Author
-
Kunz Christian, Maierhofer Pascal, Gyenes Balázs, Franke Nikolai, Younis Rayan, Müller-Stich Beat-Peter, Wagner Martin, and Mathis-Ullrich Franziska
- Subjects
- computer-assisted surgery, augmented reality, cognitive surgical robotics, robot control, Medicine
- Abstract
Minimally invasive surgery is the standard for many abdominal interventions, with increasing use of telemanipulated robots. As collaborative robots enter the field of medical interventions, their intuitive control needs to be addressed. Augmented reality can support a surgeon by representing the surgical scene in a natural way. In this work, an augmented reality-based robot control for laparoscopic cholecystectomy is presented. A user can interact with the virtual scene to clip the cystic duct and artery as well as to manipulate the deformable gallbladder. An evaluation was performed based on the SurgTLX and the system usability scale.
- Published
- 2022
- Full Text
- View/download PDF
15. Quality-dependent Deep Learning for Safe Autonomous Guidewire Navigation
- Author
-
Ritter Jacqueline, Karstensen Lennart, Langejürgen Jens, Hatzl Johannes, Mathis-Ullrich Franziska, and Uhl Christian
- Subjects
- deep reinforcement learning, safety, guidewire navigation, autonomous, machine learning, Medicine
- Abstract
Cardiovascular diseases are the main cause of death worldwide. State-of-the-art treatment often includes navigating endovascular instruments through the vasculature. Automation of the procedure has received much attention lately and may increase treatment quality and unburden clinicians. However, to ensure the patient's safety, the endovascular device needs to be steered carefully through the body. In this work, we present a collection of medical criteria that are considered by physicians during an intervention and that can be evaluated automatically, enabling a highly objective assessment. Additionally, we trained an autonomous controller with deep reinforcement learning to gently navigate within a 2D simulation of an aortic arch. Among other improvements, the controller reduced the maximum and mean contact forces applied to the vessel walls by 43% and 29%, respectively.
- Published
- 2022
- Full Text
- View/download PDF
16. Deep automatic segmentation of brain tumours in interventional ultrasound data
- Author
-
Zeineldin Ramy A., Pollok Alex, Mangliers Tim, Karar Mohamed E., Mathis-Ullrich Franziska, and Burgert Oliver
- Subjects
- brain tumour, deep learning, image-guided neurosurgery, iUS, segmentation, Medicine
- Abstract
Intraoperative imaging can assist neurosurgeons in delineating brain tumours and other surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts, which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, for automatic and accurate segmentation of brain tumours in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that using a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data at an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
- Published
- 2022
- Full Text
- View/download PDF
17. Biocompatible Soft Material Actuator for Compliant Medical Robots
- Author
-
Marzi Christian, Fischer Nikola, and Mathis-Ullrich Franziska
- Subjects
- actuator, soft robot, light actuation, smart materials, biocompatible, Medicine
- Abstract
Robots built from material-based actuators offer high potential for small-scale robots with abilities hardly achievable by classical methods such as electric motors. Besides excellent scaling to minimally invasive systems, the omission of metallic components allows such robots to be applied in imaging modalities such as MRI or CT. To improve accessibility in this field of research, a facile method for the fabrication of such soft actuators was developed. It comprises only two materials, graphene oxide and silicone elastomer, and does not require specialized equipment. The resulting actuator is biocompatible and controllable by light-mediated heat. The bending motion can be controlled by the intensity of the applied infrared light, and the actuator was experimentally shown to move five times its own weight, thus providing the capabilities required of a medical soft robotic actuator.
- Published
- 2021
- Full Text
- View/download PDF
18. Facile Testbed for Robotic X-ray based Endovascular Interventions
- Author
-
Karstensen Lennart, Pätz Torben, Mathis-Ullrich Franziska, and Stallkamp Jan
- Subjects
- endovascular, surgical robotics, testbed, X-ray, catheter, Medicine
- Abstract
Endovascular surgical robotics requires a facile and realistic testbed to validate control algorithms. This work compares methods to manufacture such a testbed. Utilizing animal tissue, a mannequin, and a low-voltage flow pump, it is possible to perform catheter-based interventions with X-ray feedback for less than €300. The aim of this paper is to lower the entry hurdle for validating endovascular surgical robots by providing a method to build a facile, low-budget testbed.
- Published
- 2021
- Full Text
- View/download PDF
19. Flexible Facile Tactile Sensor for Smart Vessel Phantoms
- Author
-
Fischer Nikola, Scheikl Paul, Marzi Christian, Galindo-Blanco Barbara, Kisilenko Anna, Müller-Stich Beat P., Wagner Martin, and Mathis-Ullrich Franziska
- Subjects
- force sensing, smart phantom, tactile sensor, Medicine
- Abstract
Smart medical phantoms for the training and evaluation of endovascular procedures ought to measure impact forces on the vessel walls worth protecting, to provide feedback to clinicians and articulated soft robots. Recent commercial smart phantoms are expensive, usually not customizable to different applications, and lack accessibility for integrated development. This work investigates piezoresistive films as highly integrable flexible sensors to be used in arbitrary soft vessel phantom anatomies over large surfaces and curved shapes, providing quantitative measurement in the force range up to 1 N with 0.1 N resolution. First results show promising performance at the point of calibration and in a 5 mm range around it, with an absolute measuring error of 28 mN, a standard deviation of ±10 mN, and response times
- Published
- 2021
- Full Text
- View/download PDF
20. Synthetic data generation for optical flow evaluation in the neurosurgical domain
- Author
-
Philipp Markus, Bacher Neal, Nienhaus Jonas, Hauptmann Lars, Lang Laura, Alperovich Anna, Gutt-Will Marielena, Mathis Andrea, Saur Stefan, Raabe Andreas, and Mathis-Ullrich Franziska
- Subjects
- neurosurgery, surgical microscope, optical flow, evaluation, Medicine
- Abstract
Towards computer-assisted neurosurgery, scene understanding algorithms for microscope video data are required. Previous work utilizes optical flow to extract spatiotemporal context from neurosurgical video sequences. However, to select an appropriate optical flow method, we need to analyze which algorithm yields the highest accuracy in the neurosurgical domain. Currently, no benchmark datasets are available for neurosurgery. In our work, we present an approach to generate synthetic data for optical flow evaluation in the neurosurgical domain. We simulate image sequences and thereby take into account domain-specific visual conditions such as surgical instrument motion. Then, we evaluate two optical flow algorithms, Farneback and PWC-Net, on our synthetic data. Qualitative and quantitative assessments confirm that our data can be used to evaluate optical flow for the neurosurgical domain. Future work will concentrate on extending the method by modeling additional effects in neurosurgery, such as elastic background motion.
- Published
- 2021
- Full Text
- View/download PDF
21. Slicer-DeepSeg: Open-Source Deep Learning Toolkit for Brain Tumour Segmentation
- Author
-
Zeineldin Ramy A., Weimann Pauline, Karar Mohamed E., Mathis-Ullrich Franziska, and Burgert Oliver
- Subjects
- 3D Slicer, brain tumour segmentation, deep learning, image-guided neurosurgery, MRI, Medicine
- Abstract
Purpose: Computerized medical image processing assists neurosurgeons in localizing tumours precisely and plays a key role in recent image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies to aid clinical brain research. Methods: Our toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple data input/output formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods. Results: The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other 3D Slicer cloud-based approaches. Conclusions: Slicer-DeepSeg allows the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. Open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg.
- Published
- 2021
- Full Text
- View/download PDF
22. Infrared marker tracking with the HoloLens for neurosurgical interventions
- Author
-
Kunz Christian, Maurer Paulina, Kees Fabian, Henrich Pit, Marzi Christian, Hlaváč Michal, Schneider Max, and Mathis-Ullrich Franziska
- Subjects
- augmented reality, computer-assisted surgery, IR marker, navigation, tracking, Medicine
- Abstract
Patient tracking is an essential part of a surgical augmented reality system for correct hologram-to-patient registration. Augmented reality can support a surgeon with visual assistance to navigate more precisely during neurosurgical interventions. In this work, a system for patient tracking based on infrared markers is proposed. These markers are widely used in medical applications and meet special medical requirements such as sterilizability. A tracking accuracy of 0.76 mm is achieved when using the near-field reflectivity and depth sensor of the HoloLens. On the HoloLens, a performance of 55–60 fps is reached, which ensures sufficiently stable placement of the holograms in the operating room.
- Published
- 2020
- Full Text
- View/download PDF
23. Autonomous guidewire navigation in a two dimensional vascular phantom
- Author
-
Karstensen Lennart, Behr Tobias, Pusch Tim Philipp, Mathis-Ullrich Franziska, and Stallkamp Jan
- Subjects
- autonomous, catheter navigation, deep reinforcement learning, machine learning, neural network controller, Medicine
- Abstract
The treatment of cerebro- and cardiovascular diseases requires complex and challenging navigation of a catheter. Previous attempts to automate catheter navigation lack generalizability. Deep reinforcement learning methods show promising results and may be the key to automating catheter navigation through the tortuous vascular tree. This work investigates deep reinforcement learning for guidewire manipulation in a complex, rigid 2D vascular model. The neural network, trained by Deep Deterministic Policy Gradients with Hindsight Experience Replay, performs well on the low-level control task; however, the high-level control for path planning must be improved further.
- Published
- 2020
- Full Text
- View/download PDF
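Hindsight Experience Replay, used for training in the entry above, turns failed navigation episodes into useful data by relabeling their goals with positions that were actually reached. A schematic sketch of the "final" relabeling strategy; the data structures and sparse reward are simplified assumptions, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple
    action: int
    next_state: tuple
    goal: tuple
    reward: float

def reward_fn(achieved: tuple, goal: tuple) -> float:
    """Sparse reward: 0 on reaching the goal position, -1 otherwise."""
    return 0.0 if achieved == goal else -1.0

def her_relabel(episode: list) -> list:
    """'final' strategy: pretend the position actually reached was the goal."""
    achieved_goal = episode[-1].next_state
    return [
        Transition(t.state, t.action, t.next_state, achieved_goal,
                   reward_fn(t.next_state, achieved_goal))
        for t in episode
    ]

# A failed 3-step episode toward goal (5, 5): every reward is -1 ...
episode = [
    Transition((0, 0), 1, (1, 0), (5, 5), -1.0),
    Transition((1, 0), 1, (2, 0), (5, 5), -1.0),
    Transition((2, 0), 0, (2, 1), (5, 5), -1.0),
]
# ... but relabeled with the reached position (2, 1), the last step succeeds.
print(her_relabel(episode)[-1].reward)  # → 0.0
```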
24. Deep learning for semantic segmentation of organs and tissues in laparoscopic surgery
- Author
-
Scheikl Paul Maria, Laschewski Stefan, Kisilenko Anna, Davitashvili Tornike, Müller Benjamin, Capek Manuela, Müller-Stich Beat P., Wagner Martin, and Mathis-Ullrich Franziska
- Subjects
- computer-assisted surgery, endoscopy, minimally invasive interventions, surgical data science, Medicine
- Abstract
Semantic segmentation of organs and tissue types is an important sub-problem in image-based scene understanding for laparoscopic surgery and is a prerequisite for context-aware assistance and cognitive robotics. Deep learning (DL) approaches are prominently applied to the segmentation and tracking of laparoscopic instruments. This work compares different combinations of neural networks, loss functions, and training strategies in their application to semantic segmentation of different organs and tissue types in human laparoscopic images, in order to investigate their applicability as components in cognitive systems. TernausNet-11 trained with the Soft-Jaccard loss and a pretrained, trainable encoder performs best with regard to segmentation quality (78.31% mean Intersection over Union [IoU]) and inference time (28.07 ms) on a single GTX 1070 GPU.
- Published
- 2020
- Full Text
- View/download PDF
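The mean Intersection over Union reported above is a standard segmentation metric: per class, the overlap of the predicted and ground-truth masks divided by their union, then averaged over classes. A minimal sketch with a toy 3×3 label map (the label encoding and function names are illustrative, not the paper's evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two boolean masks: |pred ∩ truth| / |pred ∪ truth|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def mean_iou(pred_labels: np.ndarray, truth_labels: np.ndarray, n_classes: int) -> float:
    """Average the per-class IoU over all organ/tissue classes."""
    return float(np.mean([iou(pred_labels == c, truth_labels == c)
                          for c in range(n_classes)]))

pred = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
truth = np.array([[0, 1, 1], [1, 1, 2], [2, 2, 2]])
print(mean_iou(pred, truth, 3))  # → 0.75 (per-class IoUs: 0.5, 0.75, 1.0)
```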
25. Towards automated correction of brain shift using deep deformable magnetic resonance imaging-intraoperative ultrasound (MRI-iUS) registration
- Author
-
Zeineldin Ramy A., Karar Mohamed E., Coburger Jan, Wirtz Christian R., Mathis-Ullrich Franziska, and Burgert Oliver
- Subjects
- biomedical image processing, brain shift, deep learning, image-guided neurosurgery, MRI-iUS, Medicine
- Abstract
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data to intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on the architecture of 3D convolutional neural networks, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the REtroSpective Evaluation of Cerebral Tumors (RESECT) dataset. This study showed that our proposed method outperforms the registration methods of previous studies, with an average mean squared error (MSE) of 85. Moreover, the method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
- Published
- 2020
- Full Text
- View/download PDF
26. Segmentation of the mouse skull for MRI guided transcranial focused ultrasound therapy planning.
- Author
-
Hopp, T., Springer, L., Gross, C., Grudzenski-Theis, S., Mathis-Ullrich, F., and Ruiter, N.V.
- Published
- 2022
- Full Text
- View/download PDF
27. Intraoperative adaptive eye model based on instrument-integrated OCT for robot-assisted vitreoretinal surgery.
- Author
-
Briel M, Haide L, Hess M, Schimmelpfennig J, Matten P, Peter R, Hillenbrand M, Tagliabue E, and Mathis-Ullrich F
- Abstract
Purpose: Pars plana vitrectomy (PPV) is the most common surgical procedure performed by retinal specialists, highlighting the need for model-based assistance and automation in surgical treatment. An intraoperative retinal model provides precise anatomical information relative to the surgical instrument, enhancing surgical precision and safety. Methods: This work focuses on the intraoperative parametrization of retinal shape using 1D instrument-integrated optical coherence tomography distance measurements combined with a surgical robot. Our approach accommodates variability in eye geometries by transitioning from an initial spherical model to an ellipsoidal representation, improving accuracy as more data are collected through sensor motion. Results: We demonstrate that ellipsoid fitting outperforms sphere fitting for regular eye shapes, achieving a mean absolute error of less than 40 μm in simulation and below 200 μm on 3D-printed models and ex-vivo porcine eyes. The model reliably transitions from a spherical to an ellipsoidal representation across all six tested eye shapes when specific criteria are satisfied. Conclusion: The adaptive eye model developed in this work meets the accuracy requirements for clinical application in PPV within the central retina. Additionally, the global model effectively extrapolates beyond the scanned area to encompass the retinal periphery. This capability enhances PPV procedures, particularly through virtual boundary assistance and improved surgical navigation, ultimately contributing to safer surgical outcomes.
- Published
- 2025
- Full Text
- View/download PDF
28. Correction to: Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.
- Author
-
Peter R, Moreira S, Tagliabue E, Hillenbrand M, Nunes RG, and Mathis-Ullrich F
- Published
- 2024
- Full Text
- View/download PDF
29. Interactive Surgical Training in Neuroendoscopy: Real-Time Anatomical Feature Localization Using Natural Language Expressions.
- Author
-
Matasyoh NM, Schmidt R, Zeineldin RA, Spetzger U, and Mathis-Ullrich F
- Subjects
- Humans, Ventriculostomy methods, Ventriculostomy education, Image Processing, Computer-Assisted methods, Neuroendoscopy methods, Neuroendoscopy education, Natural Language Processing
- Abstract
Objective: This study addresses challenges in surgical education, particularly in neuroendoscopy, where the demand for an optimized workflow conflicts with the need for trainees' active participation in surgeries. To overcome these challenges, we propose a framework that accurately identifies anatomical structures within images guided by language descriptions, facilitating authentic and interactive learning experiences in neuroendoscopy. Methods: Utilizing the encoder-decoder architecture of a conventional transformer, our framework processes multimodal inputs (images and language descriptions) to identify and localize features in neuroendoscopic images. We curate a dataset from recorded endoscopic third ventriculostomy (ETV) procedures for training and evaluation. Using evaluation metrics including "R@n", "IoU=θ", "mIoU", and top-1 accuracy, we systematically benchmark our framework against state-of-the-art methodologies. Results: The framework demonstrates excellent generalization, surpassing the compared methods with 93.67% accuracy and 76.08% mIoU on unseen data. It also exhibits better computational speed than the other methods. Qualitative results affirm the framework's effectiveness in the precise localization of referred anatomical features within neuroendoscopic images. Conclusion: The framework's adeptness at localizing anatomical features using language descriptions positions it as a valuable tool for integration into future interactive clinical learning systems, enhancing surgical training in neuroendoscopy. Significance: The exemplary performance reinforces the framework's potential to enhance surgical education, leading to improved skills and outcomes for trainees in neuroendoscopy.
- Published
- 2024
- Full Text
- View/download PDF
30. Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.
- Author
-
Peter R, Moreira S, Tagliabue E, Hillenbrand M, Nunes RG, and Mathis-Ullrich F
- Abstract
Purpose: This work presents a novel platform for stereo reconstruction in anterior-segment ophthalmic surgery to enable enhanced scene understanding, especially depth perception, for advanced computer-assisted eye surgery by effectively addressing the lack of texture and corneal distortion artifacts in the surgical scene. Methods: The proposed platform uses a two-step approach: generating a sparse 3D point cloud from microscopic images, then deriving a dense 3D representation by fitting surfaces onto the point cloud while considering geometrical priors of the eye anatomy. We incorporate a pre-processing step to rectify distortion artifacts induced by the cornea's high refractive power, achieved by aligning a 3D phenotypical cornea geometry model to the images and computing a distortion map using ray tracing. Results: The accuracy of the 3D reconstruction is evaluated on stereo microscopic images of ex-vivo porcine eyes, rigid phantom eyes, and synthetic photo-realistic images. The results demonstrate the potential of the proposed platform to enhance scene understanding via an accurate 3D representation of the eye and to enable the estimation of instrument-to-layer distances in porcine eyes with a mean average error of 190 μm, comparable to the scale of a surgeon's hand tremor. Conclusion: This work marks a significant advancement in stereo reconstruction for ophthalmic surgery by addressing corneal distortions, a previously often overlooked aspect in such surgical scenarios. This could improve surgical outcomes by allowing for intra-operative computer assistance, e.g., in the form of virtual distance sensors.
- Published
- 2024
- Full Text
- View/download PDF
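The corneal distortion correction described in the abstract above rests on tracing viewing rays through the cornea's refractive surface. As a minimal, hedged sketch of that single step (not the authors' implementation; the `refract` helper and the index values are illustrative), Snell's law in vector form looks like this:

```python
import numpy as np

def refract(d, n, eta1, eta2):
    """Refract unit direction d at a surface with unit normal n
    (pointing toward the incoming ray) from medium eta1 into eta2,
    using the vector form of Snell's law. Returns the refracted unit
    direction, or None on total internal reflection."""
    cos_i = -np.dot(d, n)
    r = eta1 / eta2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n

# Example: a ray hitting the cornea at 20 degrees incidence,
# passing from air (n ~ 1.0) into corneal tissue (n ~ 1.376)
d = np.array([0.0, -np.sin(np.radians(20)), -np.cos(np.radians(20))])
n = np.array([0.0, 0.0, 1.0])  # surface normal at the hit point
t = refract(d, n, 1.0, 1.376)
# The refracted ray bends toward the normal, which is what distorts
# the view of the anterior chamber through the microscope.
```

Accumulating such per-ray direction changes over the corneal surface yields a distortion map in the spirit of the pre-processing step the abstract describes.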
31. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery.
- Author
-
Zeineldin RA, Karar ME, Burgert O, and Mathis-Ullrich F
- Subjects
- Humans, Neuronavigation methods, Neurosurgical Procedures methods, Ultrasonography, Magnetic Resonance Imaging methods, Brain Neoplasms diagnostic imaging, Brain Neoplasms surgery, Brain Neoplasms pathology, Surgery, Computer-Assisted methods
- Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes., (© 2024. The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.)
- Published
- 2024
- Full Text
- View/download PDF
32. Explainable hybrid vision transformers and convolutional network for multimodal glioma segmentation in brain MRI.
- Author
-
Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, and Mathis-Ullrich F
- Subjects
- Humans, Reproducibility of Results, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods, Brain diagnostic imaging, Brain pathology, Glioma diagnostic imaging, Glioma pathology, Brain Neoplasms diagnostic imaging
- Abstract
Accurate localization of gliomas, the most common malignant primary brain cancer, and of their different sub-regions from multimodal magnetic resonance imaging (MRI) volumes is highly important for interventional procedures. Recently, deep learning models have been applied widely to assist automatic lesion segmentation tasks for neurosurgical interventions. However, these models are often complex and represented as "black box" models, which limits their applicability in clinical practice. This article introduces new hybrid vision Transformers and convolutional neural networks for accurate and robust glioma segmentation in brain MRI scans. Our proposed method, TransXAI, provides surgeon-understandable heatmaps to make the neural networks transparent. TransXAI employs a post-hoc explanation technique that provides visual interpretation after the brain tumor localization is made, without any network architecture modifications or accuracy tradeoffs. Our experimental findings showed that TransXAI achieves competitive performance in extracting both local and global contexts, in addition to generating explainable saliency maps to help understand the prediction of the deep network. Further, visualization maps are obtained to trace the flow of information in the internal layers of the encoder-decoder network and to understand the contribution of MRI modalities to the final prediction. The explainability process could provide medical professionals with additional information about the tumor segmentation results and therefore aid in understanding how the deep learning model processes MRI data successfully. Thus, it fosters physicians' trust in such deep learning systems towards applying them clinically. To facilitate TransXAI model development and results reproducibility, we will share the source code and the pre-trained models after acceptance at https://github.com/razeineldin/TransXAI ., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
33. Design criteria for AI-based IT systems.
- Author
-
Lemke HU and Mathis-Ullrich F
- Subjects
- Humans, Algorithms, Clinical Decision-Making, Artificial Intelligence, Radiology
- Abstract
Purpose: This editorial relates to a panel discussion during the CARS 2023 congress that addressed the question of how AI-based IT systems should be designed to record and (transparently) display a reproducible path of clinical decision making. Even though the software engineering approach suggested for this endeavor is of a generic nature, it is assumed that the listed design criteria are also applicable to IT system development in the domains of radiology and surgery., Methods: An example of a possible design approach is outlined by illustrating how to move from data, information, knowledge and models to wisdom-based decision making in the context of a conceptual GPT system design. In all these design steps, the essential requirements for system quality, information quality, and service quality may be realized by following the design cycle as suggested by A.R. Hevner, appropriately applied to AI-based IT systems design., Results: It can be observed that certain state-of-the-art AI algorithms and systems, such as large language models or generative pre-trained transformers (GPTs), are becoming increasingly complex and, therefore, need to be rigorously examined to render them transparent and comprehensible in their usage for all stakeholders involved in health care. Further critical questions that need to be addressed are outlined and complemented with the suggestion that a possible design framework for a stakeholder-specific AI system could be a (modest) GPT based on a small language model., Discussion: A fundamental question for the future remains whether society wants a quasi-wisdom-oriented healthcare system, based on data-driven intelligence with AI, or a human-curated wisdom based on model-driven intelligence (with and without AI). Special CARS workshops and think tanks are planned to address this challenging question and possible new directions for assisting selected medical disciplines, e.g., radiology and surgery., (© 2024. CARS.)
- Published
- 2024
- Full Text
- View/download PDF
34. Augmented Reality-Assisted versus Freehand Ventriculostomy in a Head Model.
- Author
-
Schneider M, Kunz C, Wirtz CR, Mathis-Ullrich F, Pala A, and Hlavac M
- Abstract
Background: Ventriculostomy (VST) is a frequent neurosurgical procedure. Freehand catheter placement represents current standard practice. However, multiple attempts are often required. We present augmented reality (AR) headset-guided VST with in-house developed head models. We conducted a proof-of-concept study in which we tested AR-guided as well as freehand VST. Repeated AR punctures were conducted to investigate whether a learning curve could be derived., Methods: Five custom-made 3D-printed head models, each holding an anatomically different ventricular system, were filled with agarose gel. Eleven surgeons placed two AR-guided as well as two freehand ventricular drains per head. A subgroup of four surgeons each performed a total of three series of AR-guided punctures to test for a learning curve. A Microsoft HoloLens served as the hardware platform. The marker-based tracking did not require rigid head fixation. Catheter tip position was evaluated in computed tomography scans., Results: Marker tracking, image segmentation, and holographic display worked satisfactorily. In freehand VST, a success rate of 72.7% was achieved, which was higher than under AR guidance (68.2%; difference not statistically significant). Repeated AR-guided punctures increased the success rate from 65 to 95%. We assume a steep learning curve, as repeated AR-guided punctures led to an increase in successful attempts. Overall user experience showed positive feedback., Conclusions: We achieved promising results that encourage continued development and technical improvement. However, several more developmental steps have to be taken before application in humans can be considered. In the future, AR headset-based holograms have the potential to serve as a compact navigational aid inside and outside the operating room., Competing Interests: None declared., (Thieme. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
35. A sensorized modular training platform to reduce vascular damage in endovascular surgery.
- Author
-
Fischer N, Marzi C, Meisenbacher K, Kisilenko A, Davitashvili T, Wagner M, and Mathis-Ullrich F
- Subjects
- Humans, Catheterization, Catheters, Aorta, Abdominal, Clinical Competence, Education, Medical, Endovascular Procedures
- Abstract
Purpose: Endovascular interventions require intense practice to develop sufficient dexterity in catheter handling within the human body. Therefore, we present a modular training platform, featuring 3D-printed vessel phantoms with patient-specific anatomy and integrated piezoresistive impact force sensing of instrument interaction at clinically relevant locations for feedback-based skill training to detect and reduce damage to the delicate vascular wall., Methods: The platform was fabricated and then evaluated in a user study by medical and non-medical users. The users had to navigate a set of guidewire and catheter through a course of three modules, including an aneurysmatic abdominal aorta, while impact force and completion time were recorded. Finally, a questionnaire was administered., Results: The platform allowed more than 100 runs to be performed, in which it proved capable of distinguishing between users of different experience levels. Medical experts in the fields of vascular and visceral surgery performed strongly on the platform. It could be shown that medical students improved runtime and impact over five runs. The platform was well received and rated as promising for medical education, despite the experience of higher friction compared to real human vessels., Conclusion: We investigated an authentic patient-specific training platform with integrated sensor-based feedback functionality for individual skill training in endovascular surgery. The presented method for phantom manufacturing is easily applicable to arbitrary patient-individual imaging data. Further work shall address the implementation of smaller vessel branches, as well as real-time feedback and camera imaging for a further improved training experience., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
36. Recurrent neural networks for generalization towards the vessel geometry in autonomous endovascular guidewire navigation in the aortic arch.
- Author
-
Karstensen L, Ritter J, Hatzl J, Ernst F, Langejürgen J, Uhl C, and Mathis-Ullrich F
- Subjects
- Humans, Aorta, Thoracic diagnostic imaging, Aorta, Thoracic surgery, Stents, Neural Networks, Computer, Blood Vessel Prosthesis, Treatment Outcome, Retrospective Studies, Prosthesis Design, Aortic Aneurysm, Thoracic surgery, Blood Vessel Prosthesis Implantation, Endovascular Procedures methods
- Abstract
Purpose: Endovascular intervention is the state-of-the-art treatment for common cardiovascular diseases, such as heart attack and stroke. Automation of the procedure may improve the working conditions of physicians and provide high-quality care to patients in remote areas, posing a major impact on overall treatment quality. However, this requires adaptation to individual patient anatomies, which currently poses an unsolved challenge., Methods: This work investigates an endovascular guidewire controller architecture based on recurrent neural networks. The controller is evaluated in-silico on its ability to adapt to new vessel geometries when navigating through the aortic arch. The controller's generalization capabilities are examined by reducing the number of variations seen during training. For this purpose, an endovascular simulation environment is introduced, which allows guidewire navigation in a parametrizable aortic arch., Results: The recurrent controller achieves a higher navigation success rate of 75.0% after 29,200 interventions compared to 71.6% after 156,800 interventions for a feedforward controller. Furthermore, the recurrent controller generalizes to previously unseen aortic arches and is robust towards size changes of the aortic arch. Training on 2048 aortic arch geometries gives the same results as training with full variation when evaluated on 1000 different geometries. For interpolation, a gap of 30% of the scaling range can be navigated successfully, and for extrapolation, an additional 10% of the scaling range., Conclusion: Adaptation to new vessel geometries is essential in the navigation of endovascular instruments. Therefore, intrinsic generalization to new vessel geometries represents an essential step towards autonomous endovascular robotics., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
37. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark.
- Author
-
Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, and Bodenstedt S
- Subjects
- Humans, Workflow, Algorithms, Machine Learning, Artificial Intelligence, Benchmarking
- Abstract
Purpose: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill., Methods: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment., Results: F1-scores between 23.9% and 67.7% were achieved for phase recognition (n = 9 teams), and between 38.5% and 63.8% for instrument presence detection (n = 8 teams), but only between 21.8% and 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team)., Conclusion: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work.
In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery., Competing Interests: Declaration of Competing Interest M. Wagner, B.-P. Müller-Stich, S. Speidel and S. Bodenstedt worked with medical device manufacturer KARL STORZ SE & Co. KG in the projects “InnOPlan”, funded by the German Federal Ministry of Economic Affairs and Energy (grant number BMWI 01MD15002E) and “Surgomics”, funded by the German Federal Ministry of Health (grant number BMG 2520DAT82D and BMG 2520DAT82A). Lars Mündermann is an employee of KARL STORZ SE & Co. KG. A. Reinke works with the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science. S. Kondo was an employee of Konica Minolta Inc. when this work was done. Wolfgang Reiter is an employee of Wintegral GmbH, a subsidiary of medical device manufacturer Richard Wolf GmbH. I. Twick, K. Kirtac, E. Hosgor, J. Lindström Bolmgren, M. Stenzel and B. von Siemens are employees of Caresyntax GmbH. Felix Nickel received travel support for conference participation as well as equipment provided for laparoscopic surgery courses by KARL STORZ SE & Co. KG, Johnson & Johnson, Intuitive Surgical, Cambridge Medical Robotics, and Medtronic. The other authors have no conflicts of interest., (Copyright © 2023. Published by Elsevier B.V.)
- Published
- 2023
- Full Text
- View/download PDF
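The benchmark results above are reported as F1-scores, the harmonic mean of precision and recall. As a trivial illustration of the metric (this helper is not part of the HeiChole benchmark code):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from raw detection counts: the harmonic mean of
    precision tp/(tp+fp) and recall tp/(tp+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A phase detector with 8 true positives, 2 false positives and
# 2 missed frames has precision = recall = 0.8, hence F1 = 0.8.
score = f1_score(8, 2, 2)
```

Because F1 ignores true negatives, it suits the imbalanced frame-level labels typical of surgical workflow analysis better than plain accuracy would.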
38. Learning-based autonomous vascular guidewire navigation without human demonstration in the venous system of a porcine liver.
- Author
-
Karstensen L, Ritter J, Hatzl J, Pätz T, Langejürgen J, Uhl C, and Mathis-Ullrich F
- Subjects
- Animals, Computer Simulation, Humans, Liver diagnostic imaging, Liver surgery, Swine, Machine Learning, Neural Networks, Computer
- Abstract
Purpose: The navigation of endovascular guidewires is a dexterous task in which physicians and patients can benefit from automation. Machine learning-based controllers are promising to help master this task. However, human-generated training data are scarce and resource-intensive to generate. We investigate whether a neural network-based controller trained without human-generated data can learn human-like behaviors., Methods: We trained and evaluated a neural network-based controller via deep reinforcement learning in a finite element simulation to navigate the venous system of a porcine liver without human-generated data. The behavior is compared to manual expert navigation, and real-world transferability is evaluated., Results: The controller achieves a success rate of 100% in simulation. The controller applies a wiggling behavior, in which the guidewire tip is continuously rotated alternately clockwise and counterclockwise, just as the human expert does. In the ex vivo porcine liver, the success rate drops to 30%, because either the wrong branch is probed or the guidewire becomes entangled., Conclusion: In this work, we prove that a learning-based controller is capable of learning human-like guidewire navigation behavior without human-generated data, thereby mitigating the need to produce resource-intensive human-generated training data. Limitations are the restriction to one vessel geometry, the neglected safety of navigation, and the reduced transferability to the real world., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
39. Explainability of deep neural networks for MRI analysis of brain tumors.
- Author
-
Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, and Mathis-Ullrich F
- Subjects
- Humans, Image Processing, Computer-Assisted methods, Magnetic Resonance Imaging methods, Neural Networks, Computer, Artificial Intelligence, Brain Neoplasms diagnostic imaging, Brain Neoplasms pathology
- Abstract
Purpose: Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal obstacle to applying these methods in clinical practice., Methods: In this study, we propose the NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent., Results: NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using the magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN., Conclusion: Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI ., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
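In their simplest form, the visualization maps produced by frameworks such as NeuroXAI are gradient saliency maps: the magnitude of the model output's sensitivity to each input pixel. The toy sketch below illustrates only this principle with a stand-in linear "model" and finite differences; it is not the NeuroXAI implementation, which backpropagates through real CNNs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained classifier: a fixed linear score over an
# 8x8 "image". This only demonstrates the saliency idea.
W = rng.normal(size=(8, 8))

def score(img):
    return float(np.sum(W * img))

def vanilla_saliency(img, eps=1e-4):
    """Per-pixel |d score / d pixel| via central finite differences."""
    sal = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            up, down = img.copy(), img.copy()
            up[i, j] += eps
            down[i, j] -= eps
            sal[i, j] = abs(score(up) - score(down)) / (2 * eps)
    return sal

img = rng.normal(size=(8, 8))
sal = vanilla_saliency(img)
# For a linear score, the saliency map equals |W| exactly: the pixels
# the "model" weights most strongly light up brightest.
```

Real XAI toolkits replace the finite differences with automatic differentiation and add refinements such as smoothing or class-activation weighting, but the interpretation of the resulting heatmap is the same.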
40. Continuous Feature-Based Tracking of the Inner Ear for Robot-Assisted Microsurgery.
- Author
-
Marzi C, Prinzen T, Haag J, Klenzner T, and Mathis-Ullrich F
- Abstract
Robotic systems for surgery of the inner ear must enable highly precise movement in relation to the patient. To allow for suitable collaboration between surgeon and robot, these systems should not interrupt the surgical workflow and should integrate well into existing processes. As the surgical microscope is a standard tool, present in almost every microsurgical intervention, and is in close proximity to the situs, it is predestined to be extended by assistive robotic systems, for instance, a microscope-mounted laser for ablation. As both patient and microscope are subject to movement during surgery, a well-integrated robotic system must be able to comply with these movements. To solve the problem of on-line registration of an assistance system to the situs, the standard of care often utilizes marker-based technologies, which require markers to be rigidly attached to the patient. This not only requires time for preparation but also increases the invasiveness of the procedure, and the tracking system's line of sight must not be obstructed. This work aims to utilize the existing imaging system for detection of relative movements between the surgical microscope and the patient. The resulting data allow for maintaining registration. No artificial markers or landmarks are required; instead, an approach for feature-based tracking with respect to the surgical environment in otology is presented. The images for tracking are obtained from a two-dimensional RGB stream of a surgical microscope. Due to the bony structure of the surgical site, the recorded cochleostomy scene moves nearly rigidly. The goal of the tracking algorithm is to estimate motion only from the given image stream. After preprocessing, features are detected in two consecutive images and their affine transformation is computed by a random sample consensus (RANSAC) algorithm.
The proposed method can provide movement feedback with up to 93.2 μm precision without the need for any additional hardware in the operating room or attachment of fiducials to the situs. In long-term tracking, an accumulating error occurs., Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (Copyright © 2021 Marzi, Prinzen, Haag, Klenzner and Mathis-Ullrich.)
- Published
- 2021
- Full Text
- View/download PDF
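The core estimation step described in the abstract above — fitting an affine transform to feature matches from two consecutive frames with RANSAC — can be sketched in plain NumPy. This is an illustrative re-implementation under assumed conventions, not the authors' code (which additionally performs preprocessing and feature detection on the microscope stream):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine A (2x3) mapping src -> dst,
    for point arrays of shape (n, 2) with n >= 3."""
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    return A.T                                     # (2, 3)

def apply_affine(A, pts):
    return pts @ A[:, :2].T + A[:, 2]

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """Robustly estimate an affine transform despite feature
    mismatches: repeatedly fit on 3 random correspondences, keep
    the model with the most inliers, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_mask = None
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(A, src) - dst, axis=1)
        mask = err < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return fit_affine(src[best_mask], dst[best_mask]), best_mask

# Noise-free demo: a small rotation plus translation, with 5 of 40
# feature matches corrupted to mimic mismatched features.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(40, 2))
A_true = np.array([[np.cos(0.1), -np.sin(0.1),  5.0],
                   [np.sin(0.1),  np.cos(0.1), -3.0]])
dst = apply_affine(A_true, src)
dst[:5] += rng.uniform(20, 40, size=(5, 2))  # gross outliers
A_est, inliers = ransac_affine(src, dst)
```

Integrating the estimated frame-to-frame transforms over time gives the relative microscope-to-situs motion; the accumulating error noted in the results above is the drift inherent to such integration.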
41. Augmented reality-assisted ventriculostomy.
- Author
-
Schneider M, Kunz C, Pal'a A, Wirtz CR, Mathis-Ullrich F, and Hlaváč M
- Subjects
- Drainage, Humans, Neurosurgical Procedures, Ventriculostomy, Augmented Reality
- Abstract
Objective: Placement of a ventricular drain is one of the most common neurosurgical procedures. However, a higher rate of successful placements than with the freehand procedure is desirable. The authors' objective was to develop a compact navigational augmented reality (AR)-based tool that supports the surgeon during the operation without requiring rigid patient head fixation., Methods: Segmentation and tracking algorithms were developed. A commercially available Microsoft HoloLens AR headset in conjunction with Vuforia marker-based tracking was used to provide guidance for ventriculostomy in a custom-made 3D-printed head model. Eleven surgeons conducted a series of tests to place a total of 110 external ventricular drains under holographic guidance. The HoloLens was the sole active component; no rigid head fixation was necessary. CT was used to obtain puncture results and quantify success rates as well as precision of the suggested setup., Results: In the proposed setup, the system worked reliably and performed well. The reported application showed an overall ventriculostomy success rate of 68.2%. The offset from the reference trajectory as displayed in the hologram was 5.2 ± 2.6 mm (mean ± standard deviation). A subgroup conducted a second series of punctures in which results and precision improved significantly. For most participants it was their first encounter with AR headset technology, and the overall feedback was positive., Conclusions: To the authors' knowledge, this is the first report on marker-based, AR-guided ventriculostomy. The results from this first application are encouraging. The authors would expect good acceptance of this compact navigation device in a supposed clinical implementation and assume a steep learning curve in the application of this technique. To achieve this translation, further development of the marker system and implementation of the new hardware generation are planned.
Further testing to address visuospatial issues is needed prior to application in humans.
- Published
- 2021
- Full Text
- View/download PDF