
Exploring Intra- and Inter-Video Relation for Surgical Semantic Scene Segmentation

Authors:
Jin, Yueming
Yu, Yang
Chen, Cheng
Zhao, Zixu
Heng, Pheng-Ann
Stoyanov, Danail
Publication Year:
2022

Abstract

Automatic surgical scene segmentation is fundamental for facilitating cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which only make use of local context. In this paper, we propose a novel framework, STswinCL, that explores complementary intra- and inter-video relations to boost segmentation performance by progressively capturing the global context. We first develop a hierarchical Transformer to capture intra-video relations, incorporating richer spatial and temporal cues from neighboring pixels and previous frames. A joint space-time window shift scheme is proposed to efficiently aggregate these two cues into each pixel embedding. We then explore inter-video relations via pixel-to-pixel contrastive learning, which effectively structures the global embedding space. A multi-source contrastive training objective is developed to group pixel embeddings across videos under ground-truth guidance, which is crucial for learning the global property of the whole dataset. We extensively validate our approach on two public surgical video benchmarks: the EndoVis18 Challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/YuemingJin/STswinCL.

Comment: Accepted at IEEE TMI

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2203.15251
Document Type:
Working Paper