1. Assisted phase and step annotation for surgical videos
- Authors
Nicolas Martin, Martin Ragot, Gurvan Lecuyer, Laurent Launay, Pierre Jannin
- Affiliations
Institut de Recherche Technologique b-com (IRT b-com); Laboratoire Traitement du Signal et de l'Image (LTSI), Université de Rennes 1 (UR1); Université de Rennes; Institut National de la Santé et de la Recherche Médicale (INSERM)
- Subjects
Computer science, Biomedical engineering, Health informatics, Cataract extraction, Cholecystectomy, Convolutional neural network, Artificial neural network, Deep learning, Workflow, Annotation, Assisted annotation, Surgical workflow, Phase recognition, Step recognition, User assistance, Surgery, Computer vision and pattern recognition, Natural language processing, Algorithms, [SDV.IB] Life Sciences [q-bio]/Bioengineering
- Abstract
Purpose - Annotation of surgical videos is a time-consuming task that requires domain-specific knowledge. In this paper, we present and evaluate a deep learning-based method that pre-annotates the phases and steps of surgical videos and assists the user during the annotation process.
Methods - We propose a classification function that automatically detects errors and enforces temporal coherence in the predictions made by a convolutional neural network. We first trained three different neural network architectures to assess the method on two surgical procedures: cholecystectomy and cataract surgery. The proposed method was then implemented in an annotation software tool to test its ability to assist surgical video annotation. To validate our approach, we conducted a user study in which participants annotated the phases and steps of a cataract surgery video; their annotations and completion times were recorded.
Results - Participants who used the assistance system were 7% more accurate in step annotation and 10 min faster than participants who used the manual system. The questionnaire results showed that the assistance system neither disturbed the participants nor complicated the task.
Conclusion - Annotation is a difficult and time-consuming task that is essential for training deep learning algorithms. In this publication, we propose a method to assist the annotation of surgical workflows and validate it through a user study. The proposed assistance system significantly reduced annotation time and improved annotation accuracy.
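The abstract does not specify the exact form of the classification function. As a rough illustration only, the sketch below shows one common way to enforce temporal coherence on per-frame CNN predictions and to flag likely errors for the annotator to review: a sliding-window majority vote over the frame-wise argmax labels, with frames whose raw prediction disagrees with the smoothed label marked as suspect. The function name, window size, and input layout are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual classification function is not
# described in this abstract. Assumed input: per-frame class probabilities
# (e.g. phase or step softmax scores) produced by a CNN for one video.
import numpy as np

def smooth_and_flag(frame_probs: np.ndarray, window: int = 31):
    """Smooth per-frame CNN predictions with a temporal majority vote and
    flag frames whose raw prediction disagrees with the smoothed label.

    frame_probs : array of shape (n_frames, n_classes), softmax scores.
    window      : odd sliding-window length in frames (assumed value).
    Returns (smoothed_labels, flagged_frame_indices).
    """
    raw = frame_probs.argmax(axis=1)              # frame-wise argmax labels
    half = window // 2
    smoothed = np.empty_like(raw)
    for i in range(len(raw)):
        lo, hi = max(0, i - half), min(len(raw), i + half + 1)
        votes = np.bincount(raw[lo:hi], minlength=frame_probs.shape[1])
        smoothed[i] = votes.argmax()              # majority label in the window
    flagged = np.flatnonzero(raw != smoothed)     # candidate prediction errors
    return smoothed, flagged
```

In an assisted-annotation workflow of the kind described above, the smoothed labels would serve as the pre-annotation shown to the user, and the flagged frames would indicate segments worth manual verification.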
- Published
- 2020