
A Learning Approach for Suture Thread Detection With Feature Enhancement and Segmentation for 3-D Shape Reconstruction.

Authors :
Lu, Bo
Yu, X. B.
Lai, J. W.
Huang, K. C.
Chan, Keith C. C.
Chu, Henry K.
Source :
IEEE Transactions on Automation Science & Engineering; Apr2020, Vol. 17 Issue 2, p858-870, 13p
Publication Year :
2020

Abstract

A vision-based system presents one of the most reliable methods for achieving automated robot-assisted manipulation in surgical knot tying. However, challenges in suture thread detection and automated suture thread grasping significantly hinder the realization of fully automated surgical knot tying. In this article, we propose a novel algorithm for computing the 3-D coordinates of a suture thread in knot tying. After proper training with our data set, we built a deep-learning model that accurately locates the suture's tip. By applying a Hessian-based filter with multiscale parameters, environmental noise can be eliminated while the suture thread information is preserved. A multistencils fast marching method was then employed to segment the suture thread, and a precise stereo-matching algorithm was implemented to compute the 3-D coordinates of the thread. Experiments on the precision of the deep-learning model, the robustness of the 2-D segmentation approach, and the overall accuracy of the 3-D coordinate computation of the suture thread were conducted in various scenarios, and the results quantitatively validate the feasibility and reliability of the entire scheme for automated 3-D shape reconstruction.

Note to Practitioners—This article was motivated by the challenges of suture thread detection and 3-D coordinate evaluation in a calibrated stereovision system. To precisely detect a suture thread that has no distinctive feature in an image, additional information, such as the two ends of the suture thread or its total length, is usually required. This article suggests a new method that utilizes a deep-learning model to automate the tip detection process, eliminating the need for a manual click in the initial stage. After feature enhancement with image filters, a multistencils fast marching method was incorporated to compute the arrival time from the detected tip to other points on the suture contour. By finding the point that takes the maximal time to reach within a closed contour, the other end of the suture thread can be identified, thereby allowing suture threads of any length to be segmented from an image. A precise stereo-matching method was then proposed to generate matched key points of the suture thread on the image pair, enabling the reconstruction of its 3-D coordinates. The accuracy and robustness of the entire suture detection scheme were validated through experiments with different backgrounds and suture lengths. The proposed scheme offers a new solution for detecting curvilinear objects and computing their 3-D coordinates, which shows potential for realizing automated suture grasping with robot manipulators. [ABSTRACT FROM AUTHOR]
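As a rough illustration of the 2-D stage described in the abstract (not the authors' implementation), the sketch below combines a Hessian-based multiscale ridge filter with a fast-marching arrival-time map seeded at the detected tip, and returns the on-thread point reached last as the far end of the suture. The tip coordinate, scale range, ridge threshold, and the use of scikit-image's sato filter and scikit-fmm's standard (single-stencil) fast marching are assumptions for illustration; the paper itself uses a multistencils fast marching method.

```python
# Minimal sketch, assuming an RGB input image, a known tip pixel, and the stand-in
# libraries named above; not the authors' implementation.
import numpy as np
import skfmm                            # pip install scikit-fmm
from skimage import io, color, filters  # pip install scikit-image

def far_end_from_tip(image_path, tip_rc, sigmas=(1, 2, 3), ridge_thresh=0.05):
    """Return the (row, col) of the suture point with maximal arrival time from the tip."""
    gray = color.rgb2gray(io.imread(image_path))

    # 1) Hessian-based multiscale ridge enhancement: suppresses background clutter
    #    while keeping thin curvilinear structures such as the suture thread.
    #    black_ridges=True assumes a dark thread on a brighter background.
    ridge = filters.sato(gray, sigmas=sigmas, black_ridges=True)
    ridge = (ridge - ridge.min()) / (np.ptp(ridge) + 1e-12)  # normalise to [0, 1]

    # 2) Fast marching from the detected tip: the front travels fast along the
    #    enhanced thread and slowly elsewhere, so arrival time approximates
    #    geodesic distance along the suture.
    phi = np.ones_like(ridge)
    phi[tip_rc] = -1                    # place the zero level set at the tip
    speed = ridge + 1e-3                # keep the speed strictly positive
    arrival = skfmm.travel_time(phi, speed)

    # 3) The far end of the suture is the on-thread pixel reached last.
    on_thread = ridge > ridge_thresh
    arrival = np.where(on_thread, arrival, -np.inf)
    return np.unravel_index(np.argmax(arrival), arrival.shape)
```

In the full pipeline described above, the tip coordinate would come from the trained deep-learning detector, and the segmented 2-D thread in each view would then be fed to the stereo-matching step to recover the 3-D coordinates.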

Details

Language :
English
ISSN :
1545-5955
Volume :
17
Issue :
2
Database :
Complementary Index
Journal :
IEEE Transactions on Automation Science & Engineering
Publication Type :
Academic Journal
Accession number :
142667007
Full Text :
https://doi.org/10.1109/TASE.2019.2950005