1. Field-based multispecies weed and crop detection using ground robots and advanced YOLO models: A data and model-centric approach
- Authors
Sunil G C, Arjun Upadhyay, Yu Zhang, Kirk Howatt, Thomas Peters, Michael Ostlie, William Aderholdt, and Xin Sun
- Subjects
Data-centric, Deep learning, Model-centric, Object detection, Weed detection, YOLO, Agriculture (General), S1-972, Agricultural industries, HD9000-9495
- Abstract
The implementation of a machine-vision system for real-time precision weed management is a crucial step towards the development of smart spraying robotic vehicles. An intelligent machine-vision system built on deep learning object detection models has the potential to accurately detect weeds and crops in images. Both data-centric and model-centric approaches to deep learning model development offer advantages, depending on environmental and non-environmental factors. To assess weed detection performance under real-field conditions for eight crop species, YOLOv8, YOLOv9, and customized YOLOv9 deep learning models were trained and evaluated using RGB images collected at four locations (including Casselton, Fargo, and Carrington) over a two-year period in the Great Plains region of the U.S. The experiment encompassed eight crop species (dry bean, canola, corn, field pea, flax, lentil, soybean, and sugar beet) and five weed species (horseweed, kochia, redroot pigweed, common ragweed, and waterhemp). Six YOLOv8 and eight YOLOv9 model variants were trained on annotated weed and crop images from the four sites, plus a combined dataset pooling all four sites, yielding five datasets in total. The weed detection performance of the YOLOv8 and YOLOv9 models was assessed using the mean average precision (mAP50) metric across the five datasets, eight crop species, and five weed species. The weed and crop detection evaluation yielded a high overall mAP50 of 86.2 %, and mAP50 values for individual weed and crop species ranged from 80.8 % to 98 %. The results showed that model performance varied with model type (model-centric) and with location-dependent environment, data size, data quality, and object size in the image (data-centric). Nevertheless, the customized lightweight YOLOv9 model has the potential to play a significant role in building a real-time machine-vision-based precision weed management system.
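The abstract does not disclose the authors' training configuration, so the sketch below is only a rough illustration of how comparable YOLOv8/YOLOv9 variants could be trained and scored by mAP50 using the Ultralytics package. The dataset file name (weeds.yaml), the listed weight files, and all hyperparameters are placeholders, not the paper's actual setup.

```python
# Minimal sketch (not the authors' code): train a few YOLO variants on an
# annotated weed/crop dataset and compare their mAP50 scores with Ultralytics.
from ultralytics import YOLO

# Placeholder dataset config listing train/val image paths and the
# 13 class names (8 crops + 5 weeds); paths and names are assumptions.
DATA = "weeds.yaml"

# Example variant names; the paper trained six YOLOv8 and eight YOLOv9
# variants whose exact identifiers are not given in the abstract.
VARIANTS = ["yolov8n.pt", "yolov8s.pt", "yolov9c.pt"]

for weights in VARIANTS:
    model = YOLO(weights)                          # load pretrained weights
    model.train(data=DATA, epochs=100, imgsz=640)  # placeholder hyperparameters
    metrics = model.val(data=DATA, split="val")    # evaluate on the validation split
    print(f"{weights}: overall mAP50 = {metrics.box.map50:.3f}")
    # Per-class AP at IoU 0.5, e.g. to compare individual weed and crop species
    for idx, ap in zip(metrics.box.ap_class_index, metrics.box.ap50):
        print(f"  {metrics.names[idx]}: AP50 = {ap:.3f}")
```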
- Published
2024