[Objective]Grape picking is a key stage of production, but it currently demands substantial manpower and material resources, making the process complex and slow. To enhance harvesting efficiency and achieve automated grape harvesting, an improved YOLOv8n object detection model named 3C-YOLOv8n was proposed, which integrates the RealSense D415 depth camera for grape recognition and localization.

[Methods]The proposed 3C-YOLOv8n incorporated a convolutional block attention module (CBAM) between the first C2f module and the third Conv module in the backbone network. Additionally, a channel attention (CA) module was added at the end of the backbone, resulting in a new 2C-C2f backbone architecture. This design enabled the model to sequentially infer attention maps along two independent dimensions (channel and spatial) and to refine features by considering inter-channel relationships and positional information, while keeping the network structure flexible and lightweight. Furthermore, the Content-Aware ReAssembly of FEatures (CARAFE) upsampling operator replaced the nearest-neighbor interpolation operator in the YOLOv8n neck network; CARAFE reconstructs features from neighboring pixels with instance-specific, content-aware kernels, which enlarged the receptive field and guided the reconstruction according to the input features while keeping the parameter count and computational cost low, thereby forming the 3C-YOLOv8n model. For localization, the pyrealsense2 library was used to obtain pixel position information in the target area from the Intel RealSense D415 camera. The depth camera captured the images, the detection model pinpointed the grapes, and the camera's depth sensor provided the three-dimensional point cloud of the grapes, from which the distance from a pixel to the camera was calculated and the three-dimensional coordinates of the center of the target's bounding box in the camera coordinate system were determined, thus achieving grape recognition and localization.
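The attention insertion can be illustrated with a minimal PyTorch sketch of a standard CBAM block (channel attention followed by spatial attention, inferred sequentially); the class names, reduction ratio, and kernel size below are common defaults for illustration, not the authors' code.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)          # (N, C, 1, 1) channel weights

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Pool across channels, then infer a single-channel spatial map.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    # Refine a feature map along the channel, then the spatial, dimension.
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)           # a backbone feature map
    print(CBAM(64)(feat).shape)                 # torch.Size([1, 64, 80, 80])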
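The upsampling replacement can be sketched the same way. Below is a compact CARAFE-style operator (a kernel-prediction step followed by content-aware reassembly of each output pixel's source neighborhood); the scale factor, kernel sizes, and compressed channel width are common defaults, not settings confirmed by the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CARAFEUpsample(nn.Module):
    def __init__(self, channels: int, scale: int = 2, k_up: int = 5, mid: int = 64):
        super().__init__()
        self.scale, self.k_up = scale, k_up
        self.compress = nn.Conv2d(channels, mid, 1)
        # Predict one k_up x k_up reassembly kernel per output pixel.
        self.encode = nn.Conv2d(mid, (scale * k_up) ** 2, 3, padding=1)

    def forward(self, x):
        n, c, h, w = x.shape
        kernels = F.pixel_shuffle(self.encode(self.compress(x)), self.scale)
        kernels = F.softmax(kernels, dim=1)     # (N, k_up^2, sH, sW), normalized
        # Gather each pixel's k_up x k_up neighborhood, then lift the patches
        # to the output resolution so every output pixel sees its neighborhood.
        patches = F.unfold(x, self.k_up, padding=self.k_up // 2)
        patches = patches.view(n, c * self.k_up ** 2, h, w)
        patches = F.interpolate(patches, scale_factor=self.scale, mode="nearest")
        patches = patches.view(n, c, self.k_up ** 2, h * self.scale, w * self.scale)
        # Weighted sum over the neighborhood: content-aware reassembly.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

if __name__ == "__main__":
    up = CARAFEUpsample(64)
    print(up(torch.randn(1, 64, 40, 40)).shape)   # torch.Size([1, 64, 80, 80])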
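The localization step maps a detected box to camera-frame coordinates. The following is a minimal pyrealsense2 sketch, assuming the detector has already produced a bounding box on the color image; the box coordinates below are placeholders, not real detections.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)   # align depth pixels to the color image

try:
    frames = align.process(pipeline.wait_for_frames())
    depth_frame = frames.get_depth_frame()

    x1, y1, x2, y2 = 300, 180, 380, 260          # hypothetical grape box
    u, v = (x1 + x2) // 2, (y1 + y2) // 2        # center pixel of the box

    dist_m = depth_frame.get_distance(u, v)      # pixel-to-camera distance (m)
    intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    # Back-project the pixel into camera coordinates (X, Y, Z in meters).
    xyz = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist_m)
    print(f"grape center at {xyz} m in the camera frame")
finally:
    pipeline.stop()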
[Results and Discussions]Comparative and ablation experiments were conducted. The 3C-YOLOv8n model achieved a mean average precision (mAP) of 94.3% at an intersection-over-union threshold of 0.5 (IoU=0.5), surpassing the YOLOv8n model by 1%. Precision (P) and recall (R) reached 91.6% and 86.4%, respectively, increases of 0.1% and 0.7%, and the F1-Score (the harmonic mean of P and R) improved by 0.4%, demonstrating that the improved network met the experimental precision and recall requirements. In terms of loss, the 3C-YOLOv8n algorithm performed best: its loss values dropped rapidly with minimal fluctuation and converged to the lowest final value, indicating that the improved algorithm reached convergence quickly and improved both accuracy and convergence speed. The ablation experiments showed that the original YOLOv8n model yielded a mAP of 93.3%; adding the CBAM or CA attention mechanism to the YOLOv8n backbone raised the mAP to 93.5% in each case, and adding the CARAFE upsampling operator to the neck network produced a 0.5% increase, to 93.8%. Combinations of the three improvement strategies yielded mAP increases of 0.3%, 0.7%, and 0.8%, respectively, over the YOLOv8n model. Overall, the 3C-YOLOv8n model demonstrated the best detection performance, achieving the highest mAP of 94.3%. The ablation results confirmed the positive impact of the proposed improvement strategies, and compared with other mainstream YOLO series algorithms, all evaluation metrics improved, with the lowest missed-detection and false-detection rates among the tested algorithms, underscoring the model's practical advantages in detection tasks.

[Conclusions]By effectively addressing the inefficiencies of manual labor, the 3C-YOLOv8n network model not only enhances the precision of grape recognition and localization but also significantly improves overall harvesting efficiency. Its superior performance on metrics such as precision, recall, mAP, and F1-Score, together with the lowest recorded loss values among the YOLO series algorithms, indicates a marked advance in model convergence and operational effectiveness. Furthermore, the model's high accuracy in grape target recognition not only lays the groundwork for automated harvesting systems but also enables complementary intelligent operations.
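As a quick arithmetic cross-check of the metrics reported above: the F1-Score is the harmonic mean of precision and recall, so the reported P=91.6% and R=86.4% imply an F1 of roughly 88.9% (the abstract states only the 0.4% improvement, not the absolute value).

p, r = 0.916, 0.864
f1 = 2 * p * r / (p + r)   # harmonic mean of precision and recall
print(f"F1 = {f1:.3f}")    # F1 = 0.889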