Objective
In recent years, the incidence and fatality rates of shrimp diseases have risen steadily, causing substantial losses in shrimp aquaculture. These diseases are marked by rapid onset, high infectivity, complex control requirements, and high mortality. With the continued growth of factory-scale shrimp farming, traditional manual inspection can no longer keep pace with production requirements, so an automated method for identifying shrimp diseases is urgently needed. The main goal of this research is to develop a cost-effective, computer-vision-based inspection method that balances cost efficiency and detection accuracy. An improved YOLOv8 (You Only Look Once) network combined with multiple features was employed to detect shrimp diseases.

Methods
To address interference from surface foam, the improved YOLOv8 network was applied to detect and extract shrimp at the water surface as the primary subjects of each image. This target-detection approach accurately recognizes objects of interest in the image, determining their category and location, and yields extraction results superior to threshold segmentation. Considering the limited computing power of platforms in practical production settings, the network was optimized by reducing parameters and computation, thereby improving detection speed and deployment efficiency. In addition, the Farneback optical flow method and the gray-level co-occurrence matrix (GLCM) were employed to capture the movement and image texture features of shrimp video clips. A dataset was built from these extracted feature parameters, and a support vector machine (SVM) classifier was trained to classify the multi-feature parameters of video clips, enabling detection of shrimp health status.

Results and Discussions
The improved YOLOv8 effectively enhanced detection accuracy without increasing the number of parameters or FLOPs. According to the ablation experiment, replacing the backbone with the lightweight FasterNet backbone significantly reduces parameters and computation, albeit at the cost of accuracy. After integrating efficient multi-scale attention (EMA) into the neck, however, mAP0.5 increased by 0.3% relative to YOLOv8s, while mAP0.95 decreased by only 2.1%; at the same time, the parameter count decreased by 45% and FLOPs by 42%. The improved YOLOv8 ranks second only to YOLOv7 in mAP0.5 and mAP0.95, trailing by 0.4% and 0.6%, respectively, while its parameter count and FLOPs are far lower than those of YOLOv7 and comparable to those of YOLOv5. Although YOLOv7-Tiny and YOLOv8-VanillaNet have fewer parameters and FLOPs, their accuracy lags behind the improved YOLOv8: their mAP0.5 and mAP0.95 are 22.4%, 36.2%, 2.3%, and 4.7% lower than those of the improved YOLOv8, respectively. An SVM trained on the full multi-feature dataset achieved an accuracy of 97.625%. As test samples, 150 normal clips and 150 diseased clips were randomly selected; on these 300 samples the classifier achieved a detection accuracy of 89%.
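As a rough illustration of this classification step (not the authors' exact implementation), an SVM can be trained on per-clip feature vectors with scikit-learn as sketched below; the feature vectors, their dimensionality, and the kernel settings are placeholder assumptions, and extraction of the optical flow and GLCM features themselves is sketched at the end of this abstract.

```python
# Minimal sketch: train an SVM on per-clip motion/texture features and
# evaluate it on a held-out set. Each clip is assumed to yield one
# fixed-length vector of optical-flow and GLCM statistics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_clips, n_features) feature matrix, y: 0 = healthy clip, 1 = diseased clip.
# Random placeholder data stands in for the real extracted features here.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 12))
y = rng.integers(0, 2, size=800)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# RBF-kernel SVM with feature standardization; kernel and C are illustrative choices.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```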
This result indicates that the combination of features extracted with the Farneback optical flow method and the GLCM effectively captures the differences in movement speed and direction between infected and healthy shrimp. Most errors stem from diseased clips being misrecognized as normal clips, accounting for 88.2% of the total error. These errors fall into three main types: 1) floating foam obstructs the water surface, so that only a small number of shrimp are extracted from the image; 2) changes in water movement: nano aeration tubes were used for oxygenation, generating spray on the water surface that affected shrimp movement; 3) video quality: when the video resolution is low, the difference in optical flow between diseased and normal shrimp becomes relatively small. It is therefore advisable to adjust the collection area to the actual production environment and to improve video quality.

Conclusions
The multiple features introduced in this study effectively capture shrimp movement and can be employed for disease detection. The improved YOLOv8 is well suited to platforms with limited computational resources and is feasible to deploy in actual production settings. However, the experiment was conducted in a factory-farming environment, which limits the applicability of the method to other farming environments. Overall, the method requires only consumer-grade cameras for image acquisition, places low demands on the detection platform, and can provide a theoretical basis and methodological support for future applications of aquatic disease detection.
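For concreteness, the per-clip feature extraction described in Methods could look roughly like the sketch below. It uses the stock Ultralytics YOLOv8 API, OpenCV's Farneback optical flow, and scikit-image's GLCM functions; the detector weights, flow parameters, quantization levels, and the exact statistics kept as features are illustrative assumptions, not the released implementation.

```python
# Sketch of per-clip feature extraction: detect surface shrimp with a YOLOv8
# model, compute Farneback optical flow statistics inside the detected boxes,
# and add GLCM texture statistics of the grayscale frame.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # placeholder weights; the paper uses a lighter, improved model

def clip_features(video_path, max_frames=60):
    cap = cv2.VideoCapture(video_path)
    prev_gray, flow_mags, flow_angs, textures = None, [], [], []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Keep only pixels inside detected shrimp boxes to suppress foam/background.
        mask = np.zeros_like(gray)
        boxes = model(frame, verbose=False)[0].boxes.xyxy.cpu().numpy().astype(int)
        for x1, y1, x2, y2 in boxes:
            mask[y1:y2, x1:x2] = 1

        if prev_gray is not None and mask.any():
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            flow_mags.append(mag[mask == 1].mean())   # movement speed inside boxes
            flow_angs.append(ang[mask == 1].std())    # spread of movement direction

            # GLCM texture descriptors of the grayscale frame, quantized to 32 levels.
            quant = (gray // 8).astype(np.uint8)
            glcm = graycomatrix(quant, distances=[1], angles=[0, np.pi / 2],
                                levels=32, symmetric=True, normed=True)
            textures.append([graycoprops(glcm, p).mean()
                             for p in ("contrast", "energy", "homogeneity", "correlation")])
        prev_gray = gray
    cap.release()

    # One fixed-length vector per clip: motion statistics plus mean texture descriptors.
    return np.array([np.mean(flow_mags), np.std(flow_mags), np.mean(flow_angs),
                     *np.mean(textures, axis=0)])
```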