9 results for "Jin, Shichao"
Search Results
2. PhenoNet: A two-stage lightweight deep learning framework for real-time wheat phenophase classification.
- Author
Zhang, Ruinan, Jin, Shichao, Zhang, Yuanhao, Zang, Jingrong, Wang, Yu, Li, Qing, Sun, Zhuangzhuang, Wang, Xiao, Zhou, Qin, Cai, Jian, Xu, Shan, Su, Yanjun, Wu, Jin, and Jiang, Dong
- Subjects
- WHEAT, GRAPHICS processing units, DEEP learning, IMAGE recognition (Computer vision), REMOTE sensing, BACOPA monnieri, LEARNING strategies
- Abstract
The real-time monitoring of wheat phenology variations among different varieties and their adaptive responses to environmental conditions is essential for advancing breeding efforts and improving cultivation management. Many remote sensing efforts have been made to address the challenges of key phenophase detection. However, existing solutions are not accurate enough to discriminate adjacent phenophases with subtle organ changes, nor are they real-time; for example, vegetation index curve-based methods rely on data from the entire growth period, which is available only after the experiment has finished. Furthermore, improving the efficiency, scalability, and availability of phenological studies remains a key challenge. This study proposes a two-stage deep learning framework called PhenoNet for the accurate, efficient, and real-time classification of key wheat phenophases. PhenoNet comprises a lightweight encoder module (PhenoViT) and a long short-term memory (LSTM) module. The performance of PhenoNet was assessed using a well-labeled, multi-variety, and large-volume dataset (WheatPheno). The results show that PhenoNet achieved an overall accuracy (OA) of 0.945, a kappa coefficient (Kappa) of 0.928, and an F1-score (F1) of 0.941. Additionally, the network parameters (Params), the number of multiply-add operations (MAdds), and the graphics processing unit memory required for classification (Memory) were 0.889 million (M), 0.093 Giga (G), and 8.0 Megabytes (MB), respectively. PhenoNet outperformed eleven state-of-the-art deep learning networks, achieving an average improvement of 3.7% in OA, 5.1% in Kappa, and 4.1% in F1, while reducing average Params, MAdds, and Memory by 78.4%, 85.0%, and 75.1%, respectively. Feature visualization and ablation analysis showed that PhenoNet benefited mainly from its use of time-series information and lightweight modules. Furthermore, PhenoNet can be effectively transferred across years, achieving a high OA of 0.981 using a two-stage transfer learning strategy.
Furthermore, an extensible web platform integrating WheatPheno and PhenoNet has been developed (https://phenonet.org/), ensuring that the work in this study is accessible, interoperable, and reusable. [ABSTRACT FROM AUTHOR]
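The abstract reports OA, Kappa, and F1 without defining them. As a minimal, generic sketch (not PhenoNet's own evaluation code), the three metrics can be computed from true and predicted phenophase labels like this:

```python
from collections import Counter

def classification_metrics(y_true, y_pred):
    """Overall accuracy (OA), Cohen's kappa, and macro F1 for multi-class labels."""
    assert len(y_true) == len(y_pred) and y_true
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    oa = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Cohen's kappa: observed agreement corrected for chance agreement p_e
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts[c] for c in labels) / (n * n)
    kappa = (oa - p_e) / (1 - p_e) if p_e < 1 else 1.0
    # Macro F1: unweighted mean of per-class F1 scores
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = pred_counts[c] - tp
        fn = true_counts[c] - tp
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
    return oa, kappa, sum(f1s) / len(f1s)
```

The reported 0.945 / 0.928 / 0.941 would come from running exactly this kind of computation over the WheatPheno test labels.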
- Published
- 2024
- Full Text
- View/download PDF
3. A Novel Computational Framework for Precision Diagnosis and Subtype Discovery of Plant With Lesion.
- Author
Xia, Fei, Xie, Xiaojun, Wang, Zongqin, Jin, Shichao, Yan, Ke, and Ji, Zhiwei
- Subjects
- PLANT identification, COMPUTER vision, FOOD shortages, SYMPTOMS, DIAGNOSIS, PLANT diseases
- Abstract
Plants are often attacked by various pathogens during their growth, which may cause environmental pollution, food shortages, or economic losses in a certain area. Integrating high-throughput phenomics data with computer vision (CV) provides a great opportunity to diagnose plant diseases at an early stage and to uncover subtype or stage patterns in disease progression. In this study, we proposed a novel computational framework for plant disease identification and subtype discovery through a deep-embedding image-clustering strategy combining a Weighted Distance Metric with the t-distributed stochastic neighbor embedding algorithm (WDM-tSNE). To verify its effectiveness, we applied our method to four public image datasets. The results demonstrated that the newly developed tool is capable of identifying plant diseases and further uncovering the underlying subtypes associated with pathogenic resistance. In summary, the current framework provides strong clustering performance for root or leaf images of diseased plants with pronounced disease spots or symptoms. [ABSTRACT FROM AUTHOR]
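The abstract does not spell out the Weighted Distance Metric itself. Purely as an illustrative assumption, a per-dimension weighted Euclidean distance over deep-embedding features, the kind of pairwise metric a WDM-tSNE pipeline could feed into t-SNE, might look like:

```python
import math

def weighted_distance(x, y, w):
    """Weighted Euclidean distance between two embedding vectors.

    `w` holds per-dimension weights (e.g. learned feature importances).
    The exact weighting used by WDM-tSNE is a detail of the paper and is
    not reproduced here; this is a generic placeholder form.
    """
    assert len(x) == len(y) == len(w)
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))
```

With unit weights this reduces to ordinary Euclidean distance; non-uniform weights let discriminative embedding dimensions dominate the neighborhood structure that t-SNE preserves.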
- Published
- 2022
- Full Text
- View/download PDF
4. Separating the Structural Components of Maize for Field Phenotyping Using Terrestrial LiDAR Data and Deep Convolutional Neural Networks.
- Author
Jin, Shichao, Su, Yanjun, Gao, Shang, Wu, Fangfang, Ma, Qin, Xu, Kexin, Hu, Tianyu, Liu, Jin, Pang, Shuxin, Guan, Hongcan, Zhang, Jing, and Guo, Qinghua
- Subjects
- ARTIFICIAL neural networks, STRUCTURAL components, LIDAR, DEEP learning, CORN, PRECISION farming, CORN growth
- Abstract
Separating structural components is important but also challenging for plant phenotyping and precision agriculture. Light detection and ranging (LiDAR) technology can potentially overcome these difficulties by providing high-quality data. However, automatically classifying and segmenting the components of interest remains difficult. Deep learning can extract complex features, but it is mostly used with images. Here, we propose a voxel-based convolutional neural network (VCNN) for maize stem and leaf classification and segmentation. Maize plants at three different growth stages were scanned with a terrestrial LiDAR, and the voxelized LiDAR data were used as inputs. A total of 3000 individual plants (22,004 leaves and 3000 stems) were prepared for training through data augmentation, and 103 maize plants were used to evaluate the accuracy of classification and segmentation at both the instance and point levels. The VCNN was compared with traditional clustering methods (K-means and density-based spatial clustering of applications with noise), a geometry-based segmentation method, and state-of-the-art deep learning methods (PointNet and PointNet++). The results showed that: 1) at the instance level, the mean classification accuracy and mean segmentation F-score were 1.00 and 0.96, respectively; 2) at the point level, the mean classification accuracy and mean segmentation F-score were 0.91 and 0.89, respectively; 3) the VCNN outperformed the traditional clustering methods; and 4) the VCNN was on par with PointNet and PointNet++ in classification and performed best in segmentation. The proposed method demonstrates LiDAR's ability to separate structural components for crop phenotyping using deep learning, which can also be useful in other fields. [ABSTRACT FROM AUTHOR]
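Voxelization is the preprocessing step that turns raw LiDAR points into the grid a voxel-based CNN consumes. A simplified occupancy-only sketch (not the paper's exact pipeline, which may also store per-voxel features):

```python
def voxelize(points, voxel_size):
    """Map 3D points to their occupied voxel indices.

    Returns the set of occupied (i, j, k) cells for a given cell edge length.
    Real voxel-CNN pipelines typically also record per-voxel point counts or
    other features; occupancy alone is the minimal version.
    """
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied
```

The occupied cells can then be rasterized into a dense 0/1 tensor over the plant's bounding box and fed to 3D convolutions.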
- Published
- 2020
- Full Text
- View/download PDF
5. Loess Landslide Detection Using Object Detection Algorithms in Northwest China.
- Author
Ju, Yuanzhen, Xu, Qiang, Jin, Shichao, Li, Weile, Su, Yanjun, Dong, Xiujun, and Guo, Qinghua
- Subjects
- LANDSLIDES, OBJECT recognition (Computer vision), LOESS, AUTOMATIC identification, ALGORITHMS, EMERGENCY management, NATURAL disaster warning systems
- Abstract
Regional landslide identification is important for the risk management of landslide hazards. Traditional methods of regional landslide identification relied mainly on manual interpretation by human experts. Previous studies of automatic landslide recognition focused mainly on new landslides induced by rainfall or earthquakes, which are distinct from their surroundings, using deep learning image classification and semantic segmentation methods. However, research on the automatic recognition of old loess landslides, which are difficult to distinguish from their surroundings, is lacking. Therefore, this study uses deep learning object detection methods to identify old loess landslides in Google Earth images. First, a database of historical loess landslide samples was established for deep learning based on Google Earth images; a total of 6111 landslides were interpreted across three landslide areas in Gansu Province, China. Second, three object detection algorithms, the one-stage algorithms RetinaNet and YOLOv3 and the two-stage algorithm Mask R-CNN, were chosen for automatic landslide identification. Mask R-CNN achieved the greatest accuracy, with an AP of 18.9% and an F1-score of 55.31%. Among the three landslide areas, the order of identification accuracy from high to low was Site 1, Site 2, and Site 3, with F1-scores of 62.05%, 61.04%, and 50.88%, respectively, consistent with their relative recognition difficulty. The results prove that object detection methods can be employed for the automatic identification of loess landslides in Google Earth images. [ABSTRACT FROM AUTHOR]
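The AP and F1 figures above rest on matching detected boxes to ground-truth boxes by overlap. A minimal sketch of that standard intersection-over-union criterion (common detection-benchmark practice, not code from the study):

```python
def bbox_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection usually counts as a true positive when its IoU with a
    ground-truth box exceeds a threshold (0.5 is a common choice).
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

AP then summarizes precision over recall as the confidence threshold varies, with this IoU test deciding each match.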
- Published
- 2022
- Full Text
- View/download PDF
6. Stem–Leaf segmentation and phenotypic trait extraction of individual plant using a precise and efficient point cloud segmentation network.
- Author
Yan, Jingkun, Tan, Fei, Li, Chengkai, Jin, Shichao, Zhang, Chu, Gao, Pan, and Xu, Wei
- Subjects
- POINT cloud, MODERN architecture, DEEP learning, COMPUTER vision, PLANT breeding, COTTON, TOMATOES
- Abstract
• Proposing a framework for multi-plant phenotype extraction with minimal point clouds.
• A non-parametric strategy enhances the spatial representation capability of point clouds.
• Creating a labeled cotton point cloud dataset with multi-temporal and multi-species coverage.

Rapid and precise 3D organ segmentation is crucial for the automatic extraction of phenotypic traits, forming a fundamental prerequisite for intelligent plant breeding. Advances in deep learning have replaced labor-intensive manual measurements and traditional, parameter-sensitive computer vision methods for phenotypic trait extraction. However, current large network architectures not only require extensive point cloud data but also consume substantial computational resources, rendering them unsuitable for agricultural tasks with limited plant samples. Therefore, this study developed a lightweight 3D deep learning network (PEPNet) that achieves precise plant organ segmentation and stem-leaf phenotypic trait extraction. Its simple-but-effective network architecture and innovative modern operations, including a high-dimensional feature mapping strategy for preprocessing input points, a local feature extraction module based on an inverted residual bottleneck block, and a cost-free attention block for spatial feature fusion, effectively implement multi-scale hierarchies and adaptively reduce computational overheads. Experimental results on cotton stem-leaf segmentation demonstrated that PEPNet not only delivered approximately 2× faster inference speed (9.59 ms) and throughput (146.32 plants per second) but also achieved competitive segmentation performance compared to six other state-of-the-art deep learning networks, namely PointNet++, DGCNN, CurveNet, Point Cloud Transformer, PointMLP, and SPoTr, reaching 95.99%, 94.66%, 95.32%, and 91.31% in Precision, Recall, F1-score, and mIoU, respectively. In transferability experiments with tomato and soybean plants, PEPNet achieved almost all the best metrics and significantly outperformed the second-best model (CurveNet). Furthermore, an ablation study verified that the network strikes an optimal trade-off between efficiency and accuracy; any modification to its modules could disrupt this trade-off. This work could help reduce the computational resources and annotation costs of applying segmentation methods in high-throughput phenotyping tasks. [ABSTRACT FROM AUTHOR]
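The mIoU figure reported above can be illustrated with a minimal per-point implementation (a generic sketch of the standard metric, not PEPNet's evaluation code):

```python
def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union across classes for per-point labels.

    For each class, IoU = points labeled c in both / points labeled c in
    either; classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```

For a two-class stem/leaf task, mIoU is simply the average of the stem IoU and the leaf IoU over the test points.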
- Published
- 2024
- Full Text
- View/download PDF
7. PlantNet: A dual-function point cloud segmentation network for multiple plant species.
- Author
Li, Dawei, Shi, Guoliang, Li, Jinsheng, Chen, Yingliang, Zhang, Songyin, Xiang, Shiyu, and Jin, Shichao
- Subjects
- POINT cloud, PLANT species, DEEP learning, PLANT selection, DATA augmentation
- Abstract
• A well-labeled dataset for stem-leaf semantic and instance segmentation was constructed manually, containing 5460 point clouds produced with a self-designed data sampling and augmentation method, 3D Edge-Preserving Sampling (3DEPS).
• A point-based dual-function network for stem-leaf semantic and instance segmentation was proposed, achieving an average Precision of 92.49% in semantic segmentation and an average Mean Precision of 83.30% in instance segmentation across three different crop species.
• PlantNet outperforms state-of-the-art deep learning networks, including PointNet, PointNet++, SGPN, and ASIS, with an average improvement of 5.56% in Precision on semantic segmentation and 22.18% in Mean Precision on instance segmentation.

Accurate plant organ segmentation is crucial yet challenging for quantifying plant architecture and selecting plant ideotypes. The popularity of point cloud data and deep learning methods makes plant organ segmentation a feasible and cutting-edge research topic. However, current plant organ segmentation methods are specially designed for only one species or variety, and they rarely perform semantic segmentation (stems and leaves) and instance segmentation (individual leaves) simultaneously. This study introduces a dual-function deep learning neural network (PlantNet) that realizes semantic and instance segmentation of two dicotyledons and one monocotyledon from point clouds. The innovations of PlantNet include a 3D Edge-Preserving Sampling (3DEPS) strategy for preprocessing input points, a Local Feature Extraction Operation (LFEO) module based on dynamic graph convolutions, and a semantic-instance Feature Fusion Module (FFM). The semantic segmentation results for tobacco, tomato, and sorghum reached 92.49%, 92.04%, 92.13%, and 85.86% in average Precision, Recall, F1-score, and IoU, respectively; the instance segmentation results reached 83.30%, 74.08%, 78.62%, and 84.38% in mean precision (mPrec), mean recall (mRec), mean coverage (mCov), and mean weighted coverage (mWCov), respectively. PlantNet outperformed state-of-the-art deep learning networks, including PointNet, PointNet++, SGPN, and ASIS, with average improvements of 5.56%, 3.58%, 4.78%, and 6.74% in Precision, Recall, F1-score, and IoU on semantic segmentation, and average improvements of 22.18%, 16.37%, 14.13%, and 13.35% in mPrec, mRec, mCov, and mWCov on instance segmentation. In addition, the effectiveness of 3DEPS, the sub-modules, and the new loss function was verified separately by ablation analysis, in which removing any of them degraded segmentation performance by up to 2.0% on average quantitative measures. This study may contribute to the development of plant phenotype extraction, ideotype selection, and intelligent agriculture. [ABSTRACT FROM AUTHOR]
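Of the instance metrics above, mean coverage (mCov) is the least standard. A generic sketch treating each instance as a set of point ids (an illustrative simplification, not PlantNet's code):

```python
def mean_coverage(gt_instances, pred_instances):
    """mCov: for each ground-truth instance, take the best IoU against any
    predicted instance, then average over ground-truth instances.

    Instances are represented as sets of point ids; mWCov would additionally
    weight each term by the ground-truth instance's size.
    """
    def iou(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    covs = [max((iou(gt, pr) for pr in pred_instances), default=0.0)
            for gt in gt_instances]
    return sum(covs) / len(covs) if covs else 0.0
```

A perfect leaf-instance prediction yields mCov = 1.0; missed or badly split leaves pull the average down.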
- Published
- 2022
- Full Text
- View/download PDF
8. Nitrogen fertilization produces divergent effects on canopy structure between indica and japonica rice reflected in leaf to panicle ratio based on deep learning.
- Author
Yang, Zongfeng, Qi, Xiaotian, Dai, Yuan, Wang, Yu, Xiao, Feng, Ni, Jinlong, Jin, Shichao, Li, Ganghua, Ding, Yanfeng, Paul, Matthew J., and Liu, Zhenghui
- Subjects
- DEEP learning, LEAF area index, HYBRID rice, RICE, CROP science, CROP canopies, TRANSFORMER models
- Abstract
High-throughput plant phenomics enables precise quantification of structural information for complex crop canopies. Leaf to panicle ratio (LPR), defined in terms of light interception, is a physiological trait we formerly developed to clarify the light distribution pattern within the canopy of japonica rice. Here, using a deep learning neural network (Transformer Feature Pyramid Network), we propose a general method for LPR calculation for both japonica and indica rice, and test it in a study of canopy structure variation across nitrogen (N) fertilization modes. Field experiments over three years (2020–2022) with three nitrogen levels and two basal-to-topdressing ratios were conducted with two cultivars each of japonica and indica rice. Results showed contrasting dynamics of LPR between the two subspecies: ascending for indica rice but descending for japonica rice as grain filling progressed. Indica rice had larger temporal variation in LPR than japonica. N topdressing significantly increased the LPR of indica rice cultivars at the same N level, whereas the response of japonica depended on N level and genotype. Morphological measurements revealed that the differential response of LPR to N was associated with the height difference between the flag leaf and the panicle, panicle curvature, leaf area index, and panicle area index. Correlation analysis revealed that the relation between LPR and grain yield was significantly positive for indica rice but negative for japonica rice. Our findings suggest that LPR can effectively reflect the characteristics of canopy structure as affected by cultivar and fertilization mode, making it a valuable physiological indicator for crop science.
• LPR (leaf to panicle ratio) was used to compare the canopy structure of japonica and indica rice.
• The two subspecies showed contrasting canopy-structure dynamics.
• The LPR of indica rice was more sensitive to nitrogen topdressing.
• LPR and yield were positively related for indica but negatively for japonica. [ABSTRACT FROM AUTHOR]
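As a deliberately naive illustration of an LPR-style quantity (the paper defines LPR via light interception, which this pixel-count version does not capture), one could compute a leaf-to-panicle area ratio from a 2D semantic segmentation map of the kind such a segmentation network produces:

```python
def leaf_to_panicle_ratio(seg_map, leaf_label=1, panicle_label=2):
    """Naive LPR from a 2D label map: ratio of leaf pixels to panicle pixels.

    `leaf_label` and `panicle_label` are hypothetical class ids for this
    sketch; the study's actual LPR is defined via light interception, so
    this pixel-count ratio is only an illustrative simplification.
    """
    flat = [v for row in seg_map for v in row]
    leaf = flat.count(leaf_label)
    panicle = flat.count(panicle_label)
    if panicle == 0:
        raise ValueError("no panicle pixels in segmentation map")
    return leaf / panicle
```

Tracking this ratio frame-by-frame through grain filling would produce the kind of ascending/descending LPR trajectories contrasted in the abstract.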
- Published
- 2023
- Full Text
- View/download PDF
9. StomataTracker: Revealing circadian rhythms of wheat stomata with in-situ video and deep learning.
- Author
Sun, Zhuangzhuang, Wang, Xiao, Song, Yunlin, Li, Qing, Song, Jin, Cai, Jian, Zhou, Qin, Zhong, Yingxin, Jin, Shichao, and Jiang, Dong
- Subjects
- DEEP learning, STOMATA, GAS exchange in plants, CIRCADIAN rhythms, WHEAT breeding, WHEAT
- Abstract
• A new deep-learning-based individual stoma tracking pipeline was proposed.
• The circadian rhythm of stomatal opening was reported for the first time from video data.
• Smaller stomata not only responded faster but also had longer closure times at night.

Plant stomata are essential channels for gas exchange between plants and the environment. The infrared gas-exchange system has greatly accelerated studies of stomatal conductance (gs). Nevertheless, due to the lack of in-situ monitoring techniques, the behavior of stomata themselves remains poorly understood, especially under nocturnal environmental conditions. Here, a deep-learning-based stoma tracking pipeline (StomataTracker) is proposed to continuously monitor stomatal traits from unprecedentedly long-term, continuous, and non-destructive video data. Compared to a semi-automatic method (ImageJ), the open-source StomataTracker reduced the extraction time for stomatal traits, including stomatal area, perimeter, length, and width, from 207 s to 1.47 s. The adjusted R2 for the four stomatal traits ranged from 0.620 to 0.752. In addition, the rhythm of wheat stomatal opening in a completely dark environment was reported for the first time from long-term video data. The nocturnal closure time of stomata was negatively correlated with stomatal traits, with R ranging from −0.583 to −0.855. The heterogeneity of stomatal behavior also showed that smaller stomata follow a rhythm pattern of longer closure time at night. Overall, our study provides a novel perspective for stomatal research and is conducive to accelerating the application of stomatal circadian rhythms in wheat breeding. [ABSTRACT FROM AUTHOR]
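The four stomatal traits are geometrically related if each stoma is approximated by an ellipse with the pore's length and width as its axes (a common simplification; not StomataTracker's actual trait code, whose definitions may differ):

```python
import math

def stoma_traits(length, width):
    """Approximate stomatal area and perimeter from an ellipse fit.

    `length` and `width` are taken as the ellipse's major and minor axes.
    Area is exact for an ellipse; the perimeter uses Ramanujan's
    approximation, since the exact value is an elliptic integral.
    """
    a, b = length / 2.0, width / 2.0
    area = math.pi * a * b
    h = ((a - b) / (a + b)) ** 2
    perimeter = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    return area, perimeter
```

For a circular pore (length = width) this collapses to the familiar πr² and 2πr, a quick sanity check on any trait-extraction output.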
- Published
- 2023
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library