S2CPL: A novel method of harvest evaluation and subsoil 3D cutting-point location for selective harvesting of green asparagus.
- Source :
- Computers & Electronics in Agriculture, Oct 2024, Vol. 225.
- Publication Year :
- 2024
Abstract
- • The S2CPL model can segment and extract growth features of green asparagus in the field.
• Lightweight convolution and an attention mechanism are introduced to improve YOLOv8.
• A 3D morphology extraction method is proposed for length-diameter calculation.
• Harvest evaluation and subsoil 3D cutting-point location are achieved for selective harvesting.
• Camera pose optimization is carried out during recognition tests.
Robotic selective harvesting is an ideal way to mimic manual harvesting of green asparagus. However, the harvesting robot has difficulty evaluating harvest readiness because the long stem tilts and bends, and in determining the precise location of the subsoil cutting-point so that bacteria cannot damage the cut surface. This paper proposes the S2CPL model to address harvest evaluation and 3D localization of the subsoil cutting-point for selective harvesting of green asparagus under field conditions. First, an RGB-D sensor was used to acquire images and depth information of green asparagus. Second, YOLOv8 was improved by introducing lightweight convolution and attention mechanisms into the feature fusion module to enhance segmentation accuracy. Third, a 3D morphology extraction method was proposed to calculate the length and diameter of green asparagus by fusing the image mask with depth information. Finally, harvest evaluation and subsoil 3D cutting-point location were achieved for robotic selective harvesting. In addition, the RGB-D sensor posture was optimized. Test results showed that the Intersection over Union (IoU) of green asparagus segmentation with S2CPL reaches 98.0 %, outperforming YOLOv5 + uNet, YOLOv7 + uNet and YOLOv8-tiny by 5.60 %, 4.59 % and 1.34 %, respectively. The average detection time per image was only 2.0 ms, and GFLOPS improved by 23.90 %, 88.49 % and 7.63 % compared with the other models. The relative errors of length and diameter were less than 2.98 % and 2.15 %, respectively. The accuracy of locating the subsoil cutting-point exceeds 99.0 %, and the horizontal and depth positioning errors of the cutting-points were less than 6.0 mm and 7.4 mm. The proposed model remains robust under partial occlusion and motion blur and is suitable for hardware with limited computing power, meeting the needs of robotic selective harvesting. [ABSTRACT FROM AUTHOR]
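The abstract describes fusing the segmentation mask with aligned depth data to recover each spear's length and diameter, then judging harvest readiness and extending the spear axis below ground to a subsoil cutting-point. The sketch below illustrates that kind of mask-and-depth pipeline only; it is not the authors' S2CPL implementation, and the function names, pinhole intrinsics (fx, fy, cx, cy), harvest thresholds and subsoil depth are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's implementation) of mask + depth fusion:
# back-project masked pixels to 3D, estimate spear length/diameter, apply a
# simple harvest rule, and extend the spear axis below the visible base to a
# hypothetical subsoil cutting-point. Thresholds and intrinsics are assumed.
import numpy as np


def spear_geometry(mask, depth, fx, fy, cx, cy):
    """Estimate length (m), diameter (m) and end points of one spear.

    mask  : (H, W) bool segmentation mask for a single spear
    depth : (H, W) float depth map in metres, pixel-aligned with the mask
    """
    v, u = np.nonzero(mask)                 # pixel rows/cols inside the mask
    z = depth[v, u]
    valid = z > 0                           # discard missing depth readings
    u, v, z = u[valid], v[valid], z[valid]

    # Back-project to camera coordinates with a pinhole model.
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)

    # Principal axis of the point cloud approximates the spear's long axis.
    centre = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centre, full_matrices=False)
    axis = vt[0]                            # unit vector along the spear
    along = (pts - centre) @ axis
    length = along.max() - along.min()

    # Diameter: spread perpendicular to the axis (robust percentile estimate).
    radial = np.linalg.norm((pts - centre) - np.outer(along, axis), axis=1)
    diameter = 2.0 * np.percentile(radial, 90)

    tip = pts[np.argmin(pts[:, 1])]         # topmost point (camera y points down)
    base = pts[np.argmax(pts[:, 1])]        # lowest visible point, near the soil
    return length, diameter, tip, base, axis


def harvest_decision(length, diameter, min_len=0.24, min_dia=0.008):
    """Toy harvest rule (assumed thresholds): long and thick enough to cut."""
    return length >= min_len and diameter >= min_dia


def subsoil_cutting_point(base, axis, subsoil_depth=0.02):
    """Extend the spear axis past the visible base by an assumed subsoil depth."""
    down = axis if axis[1] > 0 else -axis   # orient the axis toward the ground
    return base + down * subsoil_depth
```

In this sketch the harvest decision is a plain length/diameter threshold and the cutting-point is obtained by extrapolating the fitted spear axis a fixed distance below the visible base; the paper's own evaluation and localization criteria may differ.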
- Subjects :
- *IMAGE fusion
*ASPARAGUS
*SUBSOILS
*SOIL ripping
*BIONICS
Details
- Language :
- English
- ISSN :
- 0168-1699
- Volume :
- 225
- Database :
- Academic Search Index
- Journal :
- Computers & Electronics in Agriculture
- Publication Type :
- Academic Journal
- Accession number :
- 179396101
- Full Text :
- https://doi.org/10.1016/j.compag.2024.109316