CFP-PSPNet: a lightweight unmanned vessel water segmentation algorithm.
- Author
- Yang, Xuecun; Song, Yijing; He, Lintao; Xue, Hang; Dong, Zhonghua; Zhang, Qingyun
- Abstract
Accurate water segmentation is a prerequisite for unmanned vessels to navigate safely and perform other operations. To address the low utilization of water-image features and the low accuracy of contour-edge segmentation in complex inland-river scenarios, this paper proposes a lightweight water segmentation algorithm, the channel feature pyramid-pyramid scene parsing network (CFP-PSPNet), which achieves efficient and accurate water segmentation in complex scenes. First, a cross transformation-channel feature pyramid (CT-CFP) is proposed, which fuses feature information across layers to improve the utilization of the original features and thereby raise water segmentation accuracy. Second, a parallel semantic segmentation network, CFP-PSPNet, is designed; it extracts image information through a dual pyramid consisting of the pyramid pooling module (PPM) and CT-CFP, mitigating the loss of detail and edge information and further improving accuracy. Finally, MobileNetV2 augmented with efficient channel attention (ECA) is used as the feature extraction network, which reduces the network's parameter count and computation without degrading segmentation accuracy, yielding a lightweight design. Experiments on the open-source USVInland dataset show that our CFP-PSPNet algorithm significantly reduces the number of parameters, increases detection speed by 81 FPS, and reaches a mean intersection over union (MIoU) of 97.71% and an accuracy of 98.75%, which are 1.41% and 0.74% higher, respectively, than those of the original network, outperforming other classical semantic segmentation algorithms.
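For orientation, the sketch below is a minimal PyTorch illustration of the two standard building blocks the abstract names: an ECA attention block and a PSPNet-style pyramid pooling module (PPM). It is not the authors' code (the record includes none); the channel counts, 1-D kernel size, and pooling bins are illustrative assumptions, and the paper's own CT-CFP module is specific to this work and not reproduced here.

```python
# Hedged sketch of ECA + PPM; hyperparameters are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECA(nn.Module):
    """Efficient channel attention: reweight channels via a cheap 1-D conv."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))         # 1-D conv over channels -> (B, 1, C)
        w = torch.sigmoid(y).view(x.size(0), x.size(1), 1, 1)
        return x * w                          # rescale each channel

class PPM(nn.Module):
    """PSPNet pyramid pooling: pool at several scales, project, upsample, concat."""
    def __init__(self, in_ch: int, out_ch: int, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
                          nn.ReLU(inplace=True))
            for b in bins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                     align_corners=False)
                       for stage in self.stages]
        return torch.cat(feats, dim=1)        # (B, in_ch + len(bins)*out_ch, H, W)

feat = torch.randn(1, 320, 32, 32)            # e.g. a MobileNetV2 feature map
out = PPM(320, 64)(ECA()(feat))               # -> (1, 320 + 4*64, 32, 32)
```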
- Published
- 2025