
Fast Self-Attention Deep Detection Network Based on Weakly Differentiated Plant Nematodes

Authors :
Ningyuan Xu
Jiayan Zhuang
Tianyi Mao
Jiangjian Xiao
Jianfeng Gu
Liu Yangming
Yi Zhu
Publication Year :
2021
Publisher :
Research Square Platform LLC, 2021.

Abstract

Background: High-precision, high-speed detection and classification of weakly differentiated targets has long been a difficult problem in image vision. This paper takes as an example the detection of the phytopathogenic nematode Bursaphelenchus xylophilus, which is small in size and shows very weak inter-species differences.

Methods: Our work addresses the current problems of weakly differentiated target detection as follows:
a. To replace the complex network labelling and training process based on expert empirical knowledge, we propose a lightweight Self-Attention network. Experiments show that the key feature identification areas of plant nematodes found by our Self-Attention network agree well with the empirical knowledge of customs experts, and that the feature areas found by our method yield higher detection accuracy than those derived from expert knowledge.
b. To reduce the computational cost of feeding the entire image into the network, we use low-resolution images to quickly obtain the coordinates of key features, and then extract high-resolution feature-area information at those coordinates.
c. We adopt an adaptive-weight, multi-feature joint detection method based on the brightness of the heatmap to further improve detection accuracy (an illustrative sketch of this coarse-to-fine pipeline is given below).
d. We construct a more complete high-resolution training dataset involving 24 species of Bursaphelenchus xylophilus and other common hybrid species, with the total amount of data exceeding 10,000 samples.

Results: The algorithm proposed in this paper replaces the tedious, extensive manual labelling in the training process, reduces the average training time of the model by more than 50%, reduces the testing time of a single sample by about 27%, reduces the model storage size by 65%, improves the detection accuracy of the ImageNet pre-trained model by 12.6%, and improves the detection accuracy of the model without ImageNet pre-training by more than 48%.

Conclusions: Overall, the proposed method achieves better results in terms of model complexity, training annotation effort, testing time, and accuracy.
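
The coarse-to-fine, heatmap-guided strategy described in points b and c can be illustrated with a short sketch. The code below is not the authors' implementation: the module and function names (AttentionHeatmap, peak_coordinates, crop_high_res), the crop size, the image resolutions, and the simple convolutional attention head are all illustrative assumptions, with a plain spatial-attention score standing in for the paper's lightweight Self-Attention network.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionHeatmap(nn.Module):
    """Toy spatial-attention head: produces a single-channel heatmap whose
    brightness marks candidate key-feature locations on a low-resolution
    input. Stands in for the paper's lightweight Self-Attention network."""

    def __init__(self, in_ch=3, hidden=16):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1)
        self.score = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, x_low):
        h = F.relu(self.embed(x_low))
        return torch.sigmoid(self.score(h))      # (B, 1, H, W), values in [0, 1]


def peak_coordinates(heatmap):
    """(row, col) of the brightest heatmap pixel for each image in the batch."""
    b, _, h, w = heatmap.shape
    flat = heatmap.view(b, -1).argmax(dim=1)
    return torch.stack((flat // w, flat % w), dim=1)  # (B, 2) in low-res coords


def crop_high_res(x_high, coords, scale, size=64):
    """Crop a fixed-size window from the full-resolution image around the
    location found on the low-resolution heatmap."""
    crops = []
    for img, (r, c) in zip(x_high, coords):
        r, c = int(r) * scale, int(c) * scale
        r0 = max(0, min(r - size // 2, img.shape[1] - size))
        c0 = max(0, min(c - size // 2, img.shape[2] - size))
        crops.append(img[:, r0:r0 + size, c0:c0 + size])
    return torch.stack(crops)


if __name__ == "__main__":
    x_high = torch.rand(2, 3, 512, 512)               # full-resolution inputs
    x_low = F.interpolate(x_high, size=(64, 64),      # cheap coarse pass
                          mode="bilinear", align_corners=False)
    attn = AttentionHeatmap()
    heat = attn(x_low)
    coords = peak_coordinates(heat)
    regions = crop_high_res(x_high, coords, scale=512 // 64)
    # Adaptive weights: brighter heatmap peaks contribute more to a joint
    # score when several feature regions are fused (point c of the Methods).
    weights = heat.view(2, -1).max(dim=1).values
    print(regions.shape, weights.shape)

The design intent of such a pipeline is that the expensive high-resolution processing is limited to small windows selected by the cheap low-resolution attention pass, which is consistent with the training- and testing-time reductions reported in the Results.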

Details

Database :
OpenAIRE
Accession number :
edsair.doi...........756011ba3bfae36a52b405f42d8358a7
Full Text :
https://doi.org/10.21203/rs.3.rs-671493/v1