41 results for "Mei, Liye"
Search Results
2. Accurately matching serial whole slide images for tumor heterogeneity analysis
- Author
- Li, Xiaoxiao, Mei, Liye, Long, Mengping, Huang, Jin, Yang, Wei, Liu, Yan, Hao, Xin, Liu, Yiqiang, Shen, Hui, Hou, Jinxuan, Xu, Yu, Zhou, Fuling, Wang, Du, Wu, Jianghua, Hu, Taobo, and Lei, Cheng
- Published
- 2025
- Full Text
- View/download PDF
3. DAT-Net: Deep Aggregation Transformer Network for automatic nuclear segmentation
- Author
- Mei, Mengqing, Wei, Zimei, Hu, Bin, Wang, Mingwei, Mei, Liye, and Ye, Zhiwei
- Published
- 2024
- Full Text
- View/download PDF
4. GTMFuse: Group-attention transformer-driven multiscale dense feature-enhanced network for infrared and visible image fusion
- Author
- Mei, Liye, Hu, Xinglong, Ye, Zhaoyi, Tang, Linfeng, Wang, Ying, Li, Di, Liu, Yan, Hao, Xin, Lei, Cheng, Xu, Chuan, and Yang, Wei
- Published
- 2024
- Full Text
- View/download PDF
5. DSCA-Net: Double-stage Codec Attention Network for automatic nuclear segmentation
- Author
- Ye, Zhiwei, Hu, Bin, Sui, Haigang, Mei, Mengqing, Mei, Liye, and Zhou, Ran
- Published
- 2024
- Full Text
- View/download PDF
6. An orientation-free ring feature descriptor with stain-variability normalization for pathology image matching
- Author
- Li, Xiaoxiao, Long, Mengping, Huang, Jin, Wu, Jianghua, Shen, Hui, Zhou, Fuling, Hou, Jinxuan, Xu, Yu, Wang, Du, Mei, Liye, Liu, Yiqiang, Hu, Taobo, and Lei, Cheng
- Published
- 2023
- Full Text
- View/download PDF
7. UAVs and birds classification using robust coordinate attention synergy residual split-attention network based on micro-Doppler signature measurement by using L-band staring radar
- Author
- Dai, Ting, Mei, Liye, Zhang, Yue, Tian, Biao, Guo, Rui, Wang, Teng, Du, Shan, and Xu, Shiyou
- Published
- 2023
- Full Text
- View/download PDF
8. KRT13-expressing epithelial cell population predicts better response to chemotherapy and immunotherapy in bladder cancer: Comprehensive evidences based on BCa database
- Author
- Yu, Donghu, Chen, Chen, Sun, Le, Wu, Shaojie, Tang, Xiaoyu, Mei, Liye, Lei, Cheng, Wang, Du, Wang, Xinghuan, Cheng, Liang, and Li, Sheng
- Published
- 2023
- Full Text
- View/download PDF
9. FDADNet: Detection of Surface Defects in Wood-Based Panels Based on Frequency Domain Transformation and Adaptive Dynamic Downsampling.
- Author
- Li, Hongli, Yi, Zhiqi, Wang, Zhibin, Wang, Ying, Ge, Liang, Cao, Wei, Mei, Liye, Yang, Wei, and Sun, Qin
- Abstract
The detection of surface defects on wood-based panels plays a crucial role in product quality control. However, due to the complex background and low contrast of defects in wood-based panel images, features extracted by traditional deep learning methods based on spatial domain processing often contain noise and blurred boundaries, which severely affects detection performance. To address these issues, we have proposed a wood-based panel surface defect detection method based on frequency domain transformation and adaptive dynamic downsampling (FDADNet). Specifically, we designed a Multi-axis Frequency Domain Weighted Information Representation Module (MFDW), which effectively decoupled the indistinguishable low-contrast defects from the background in the transform domain. Gaussian filtering was then employed to eliminate noise and blur between the defects and the background. Additionally, to tackle the issue of scale differences in defects that led to difficulties in accurate capture, we designed an Adaptive Dynamic Convolution (ADConv) module for downsampling. This method flexibly compressed and enhanced features, effectively improving the differentiation of the features of objects of varying scales in the transform space, and ultimately achieved effective defect detection. To compensate for the lack of data, we constructed a dataset of wood-based panel surface defects, WBP-DET. The experimental results showed that the proposed FDADNet effectively improved the detection performance of wood-based panel surface defects in complex scenarios, achieving a solid balance between efficiency and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
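The FDADNet abstract above decouples low-contrast defects from the background in the transform domain and suppresses noise with Gaussian filtering. A minimal numpy sketch of that general idea follows; it is not the paper's learnable, multi-axis MFDW module, and the Gaussian width is an illustrative assumption.

```python
import numpy as np

def frequency_domain_weight(feat: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Re-weight a 2D feature map (H, W) in the Fourier domain."""
    H, W = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat))             # centre the spectrum
    yy, xx = np.mgrid[-H // 2:H - H // 2, -W // 2:W - W // 2]
    gauss = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))  # Gaussian low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * gauss)))  # damp high-frequency noise

feat = np.random.rand(64, 64).astype(np.float32)
print(frequency_domain_weight(feat).shape)             # (64, 64)
```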
10. Dual-Stream Feature Collaboration Perception Network for Salient Object Detection in Remote Sensing Images.
- Author
- Li, Hongli, Chen, Xuhui, Mei, Liye, and Yang, Wei
- Subjects
- OBJECT recognition (Computer vision), REMOTE sensing, ARTIFICIAL intelligence, FEATURE extraction, TRANSFORMER models
- Abstract
As a core technology of artificial intelligence, salient object detection (SOD) is an important approach to improving the analysis efficiency of remote sensing images by intelligently identifying key areas in images. However, existing methods that rely on a single strategy, convolution or Transformer, exhibit certain limitations in complex remote sensing scenarios. Therefore, we developed a Dual-Stream Feature Collaboration Perception Network (DCPNet) to enable the collaborative work and feature complementation of Transformer and CNN. First, we adopted a dual-branch feature extractor with strong local bias and long-range dependence characteristics to perform multi-scale feature extraction from remote sensing images. Then, we presented a Multi-path Complementary-aware Interaction Module (MCIM) to refine and fuse the feature representations of salient targets from the global and local branches, achieving fine-grained fusion and interactive alignment of dual-branch features. Finally, we proposed a Feature Weighting Balance Module (FWBM) to balance global and local features, preventing the model from overemphasizing global information at the expense of local details or from inadequately mining global cues due to excessive focus on local information. Extensive experiments on the EORSSD and ORSSD datasets demonstrated that DCPNet outperformed 19 current state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Global Semantic-Sense Aggregation Network for Salient Object Detection in Remote Sensing Images.
- Author
- Li, Hongli, Chen, Xuhui, Yang, Wei, Huang, Jian, Sun, Kaimin, Wang, Ying, Huang, Andong, and Mei, Liye
- Subjects
- REMOTE sensing, ENTROPY (Information theory), DECISION making
- Abstract
Salient object detection (SOD) aims to accurately identify significant geographical objects in remote sensing images (RSI), providing reliable support and guidance for extensive geographical information analyses and decisions. However, SOD in RSI faces numerous challenges, including shadow interference, inter-class feature confusion, as well as unclear target edge contours. Therefore, we designed an effective Global Semantic-aware Aggregation Network (GSANet) to aggregate salient information in RSI. GSANet computes the information entropy of different regions, prioritizing areas with high information entropy as potential target regions, thereby achieving precise localization and semantic understanding of salient objects in remote sensing imagery. Specifically, we proposed a Semantic Detail Embedding Module (SDEM), which explores the potential connections among multi-level features, adaptively fusing shallow texture details with deep semantic features, efficiently aggregating the information entropy of salient regions, and enhancing the information content of salient targets. Additionally, we proposed a Semantic Perception Fusion Module (SPFM) to analyze mapping relationships between contextual information and local details, enhancing the perceptual capability for salient objects while suppressing irrelevant information entropy, thereby addressing the semantic dilution issue of salient objects during the up-sampling process. The experimental results on two publicly available datasets, ORSSD and EORSSD, demonstrated the outstanding performance of our method, which achieved 93.91% Sα, 98.36% Eξ, and 89.37% Fβ on the EORSSD dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
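The GSANet abstract above prioritises regions of high information entropy as candidate salient areas. A small sketch of per-patch Shannon entropy scoring; the patch size and histogram bin count are assumptions for illustration.

```python
import numpy as np

def patch_entropy(img: np.ndarray, patch: int = 32, bins: int = 32) -> np.ndarray:
    """img: 2D grayscale array in [0, 1]. Returns one entropy score per patch."""
    H, W = img.shape
    scores = np.zeros((H // patch, W // patch))
    for i in range(H // patch):
        for j in range(W // patch):
            block = img[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            scores[i, j] = -(p * np.log2(p)).sum()  # Shannon entropy, in bits
    return scores

img = np.random.rand(256, 256)
print(patch_entropy(img).shape)  # (8, 8) grid; high values flag candidate regions
```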
12. Dense Contour-Imbalance Aware framework for Colon Gland Instance Segmentation
- Author
- Mei, Liye, Guo, Xiaopeng, Huang, Xin, Weng, Yueyun, Liu, Sheng, and Lei, Cheng
- Published
- 2020
- Full Text
- View/download PDF
13. SCFNet: Lightweight Steel Defect Detection Network Based on Spatial Channel Reorganization and Weighted Jump Fusion.
- Author
- Li, Hongli, Yi, Zhiqi, Mei, Liye, Duan, Jia, Sun, Kaimin, Li, Mengcheng, Yang, Wei, and Wang, Ying
- Subjects
- LIGHTWEIGHT steel, OBJECT recognition (Computer vision), FEATURE extraction, DATA augmentation, SMALL business, AMBIGUITY
- Abstract
The goal of steel defect detection is to enhance the recognition accuracy and accelerate the detection speed with fewer parameters. However, challenges arise in steel sample detection due to issues such as feature ambiguity, low contrast, and similarity among inter-class features. Moreover, limited computing capability makes it difficult for small and medium-sized enterprises to deploy and utilize networks effectively. Therefore, we propose a novel lightweight steel detection network (SCFNet), which is based on spatial channel reconstruction and deep feature fusion. The network adopts a lightweight and efficient feature extraction module (LEM) for multi-scale feature extraction, enhancing the capability to extract blurry features. Simultaneously, we adopt spatial and channel reconstruction convolution (ScConv) to reconstruct the spatial and channel features of the feature maps, enhancing the spatial localization and semantic representation of defects. Additionally, we adopt the Weighted Bidirectional Feature Pyramid Network (BiFPN) for defect feature fusion, thereby enhancing the capability of the model in detecting low-contrast defects. Finally, we discuss the impact of different data augmentation methods on the model accuracy. Extensive experiments are conducted on the NEU-DET dataset, resulting in a final model achieving an mAP of 81.2%. Remarkably, this model only required 2.01 M parameters and 5.9 GFLOPs of computation. Compared to state-of-the-art object detection algorithms, our approach achieves a higher detection accuracy while requiring fewer computational resources, effectively balancing the model size and detection accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
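The SCFNet record above adopts the Weighted Bidirectional Feature Pyramid Network (BiFPN) for defect feature fusion. For reference, a minimal sketch of BiFPN-style fast normalised fusion; tensor shapes are illustrative.

```python
import torch

def weighted_fusion(feats, w, eps=1e-4):
    """BiFPN fast normalised fusion: sum(w_i * f_i) / (sum(w_i) + eps)."""
    w = torch.relu(w)  # keep the learnable per-input weights non-negative
    return sum(wi * f for wi, f in zip(w, feats)) / (w.sum() + eps)

f1, f2 = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
w = torch.nn.Parameter(torch.ones(2))
print(weighted_fusion([f1, f2], w).shape)  # torch.Size([1, 64, 32, 32])
```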
14. Progressive matching method of aerial-ground remote sensing image via multi-scale context feature coding.
- Author
- Xu, Chuan, Xu, Junjie, Huang, Tao, Zhang, Huan, Mei, Liye, Zhang, Xia, Duan, Yu, and Yang, Wei
- Subjects
- STANDARD deviations, REMOTE sensing, LINEAR network coding, IMAGE registration, SMART cities
- Abstract
The fine 3D model is essential spatial information for the construction of a smart city. UAV aerial images, with their large-scale scene perception ability, are a common data source for 3D modelling of cities at present. However, in some complex urban areas, a single aerial image can hardly capture the 3D scene information because of problems such as inaccurate edges, holes, and blurred building facade textures caused by changes in perspective and area occlusion. Therefore, how to resolve perspective changes and area occlusion in aerial images quickly and efficiently has become an important problem. The ground image can serve as an important supplement to address the missing bottom and area occlusion in oblique photography modelling. Thus, this article proposes a progressive matching method via a multi-scale context feature coding network to achieve robust matching of aerial-ground remote sensing images, which provides better technical support for urban modelling. The main idea consists of three parts: (1) a multi-scale context feature coding network is designed to extract features from aerial-ground images efficiently; (2) a block-based matching strategy is proposed to pay more attention to local features of the aerial-ground images; (3) a progressive matching method is applied in the block matching stage to obtain more accurate features. We used eight sets of typical data, such as aerial images captured by the drone DJI-MAVIC2 and ground images captured by handheld devices, as experimental objects, and compared our method with algorithms such as SIFT, D2-net, DFM and SuperGlue. Experimental results show that our proposed aerial-ground image matching method performs well: the average NCM improves 2.1–8.2 times, the average rate of correct matching increases by 26 percentage points, and the average root mean square error is only 1.48 pixels. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
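The record above reports NCM (number of correct matches) and root mean square error in pixels. A small sketch of how such matching metrics are commonly computed from putative correspondences; the 3-pixel threshold and synthetic points are assumptions, not the paper's exact protocol.

```python
import numpy as np

def match_metrics(pts_a, pts_b, thr=3.0):
    """pts_a, pts_b: (N, 2) matched keypoints mapped into a common frame."""
    err = np.linalg.norm(pts_a - pts_b, axis=1)
    correct = err < thr                       # a match is correct within thr pixels
    ncm = int(correct.sum())
    rmse = float(np.sqrt((err[correct] ** 2).mean())) if ncm else float("inf")
    return ncm, rmse

a = np.random.rand(100, 2) * 512
b = a + np.random.normal(0, 1.0, a.shape)     # noisy correspondences
print(match_metrics(a, b))
```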
15. Cross-Attention-Guided Feature Alignment Network for Road Crack Detection.
- Author
- Xu, Chuan, Zhang, Qi, Mei, Liye, Chang, Xiufeng, Ye, Zhaoyi, Wang, Junjian, Ye, Lang, and Yang, Wei
- Subjects
- FEATURE extraction, URBAN planning, CITY traffic, ROAD safety measures
- Abstract
Road crack detection is one of the important issues in the field of traffic safety and urban planning. Currently, road damage varies in type and scale, and often has different sizes and depths, making the detection task more challenging. To address this problem, we propose a Cross-Attention-guided Feature Alignment Network (CAFANet) for extracting and integrating multi-scale features of road damage. Firstly, we use a dual-branch visual encoder model with the same structure but different patch sizes (one large patch and one small patch) to extract multi-level damage features. We utilize a Cross-Layer Interaction (CLI) module to establish interaction between the corresponding layers of the two branches, combining their unique feature extraction capability and contextual understanding. Secondly, we employ a Feature Alignment Block (FAB) to align the features from different levels or branches in terms of semantics and spatial aspects, which significantly improves the CAFANet's perception of the damage regions, reduces background interference, and achieves more precise detection and segmentation of damage. Finally, we adopt multi-layer convolutional segmentation heads to obtain high-resolution feature maps. To validate the effectiveness of our approach, we conduct experiments on the public CRACK500 dataset and compare it with other mainstream methods. Experimental results demonstrate that CAFANet achieves excellent performance in road crack detection tasks, which exhibits significant improvements in terms of F1 score and accuracy, with an F1 score of 73.22% and an accuracy of 96.78%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Optical time‐stretch imaging flow cytometry in the compressed domain.
- Author
- Lin, Siyuan, Li, Rubing, Weng, Yueyun, Mei, Liye, Wei, Chao, Song, Congkuan, Wei, Shubin, Yao, Yifan, Ruan, Xiaolan, Zhou, Fuling, Geng, Qing, Wang, Du, and Lei, Cheng
- Abstract
Imaging flow cytometry based on optical time‐stretch (OTS) imaging combined with a microfluidic chip attracts much attention in the large‐scale single‐cell analysis due to its high throughput, high precision, and label‐free operation. Compressive sensing has been integrated into OTS imaging to relieve the pressure on the sampling and transmission of massive data. However, image decompression brings an extra overhead of computing power to the system, but does not generate additional information. In this work, we propose and demonstrate OTS imaging flow cytometry in the compressed domain. Specifically, we constructed a machine‐learning network to analyze the cells without decompressing the images. The results show that our system enables high‐quality imaging and high‐accurate cell classification with an accuracy of over 99% at a compression ratio of 10%. This work provides a viable solution to the big data problem in OTS imaging flow cytometry, boosting its application in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
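The record above classifies cells from compressive measurements directly, skipping image reconstruction. A toy sketch of that pipeline at a 10% compression ratio; the Gaussian sensing matrix and logistic-regression classifier are stand-ins for the paper's optical front end and machine-learning network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pixels, ratio = 1024, 0.10
Phi = rng.standard_normal((int(n_pixels * ratio), n_pixels)) / np.sqrt(n_pixels)

X = rng.standard_normal((200, n_pixels))      # stand-in flattened cell images
y = (X[:, :16].mean(axis=1) > 0).astype(int)  # synthetic two-class labels
Y = X @ Phi.T                                 # 10% compressed measurements

clf = LogisticRegression(max_iter=1000).fit(Y[:150], y[:150])
print("held-out accuracy on compressed data:", clf.score(Y[150:], y[150:]))
```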
17. DFA-Net: Multi-Scale Dense Feature-Aware Network via Integrated Attention for Unmanned Aerial Vehicle Infrared and Visible Image Fusion.
- Author
- Shen, Sen, Li, Di, Mei, Liye, Xu, Chuan, Ye, Zhaoyi, Zhang, Qi, Hong, Bo, Yang, Wei, and Wang, Ying
- Published
- 2023
- Full Text
- View/download PDF
18. Cell damage evaluation by intelligent imaging flow cytometry.
- Author
- Yao, Yifan, He, Li, Mei, Liye, Weng, Yueyun, Huang, Jin, Wei, Shubin, Li, Rubing, Tian, Sheng, Liu, Pan, Ruan, Xiaolan, Wang, Du, Zhou, Fuling, and Lei, Cheng
- Abstract
Essential thrombocythemia (ET) is an uncommon condition in which the body produces too many platelets. This can cause blood clots anywhere in the body and results in various symptoms and even strokes or heart attacks. Removing excessive platelets using acoustofluidic methods receives extensive attention due to their high efficiency and high yield, yet the damage to the remaining cells, such as erythrocytes and leukocytes, has not been evaluated. Existing cell damage evaluation methods usually require cell staining, which is time-consuming and labor-intensive. In this paper, we investigate cell damage by optical time-stretch (OTS) imaging flow cytometry with high throughput and in a label-free manner. Specifically, we first image the erythrocytes and leukocytes sorted by an acoustofluidic sorting chip with different acoustic wave powers and flow speeds using OTS imaging flow cytometry at a flow speed of up to 1 m/s. Then, we employ machine learning algorithms to extract biophysical phenotypic features from the cellular images, as well as to cluster and identify images. The results show that both the errors of the biophysical phenotypic features and the proportion of abnormal cells are within 10% in the undamaged cell groups, while the errors are much greater than 10% in the damaged cell groups, indicating that acoustofluidic sorting causes little damage to the cells within the appropriate acoustic power, agreeing well with clinical assays. Our method provides a novel approach for high-throughput and label-free cell damage evaluation in scientific research and clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. All-Optical Fourier-Domain-Compressed Time-Stretch Imaging with Low-Pass Filtering.
- Author
- Li, Rubing, Weng, Yueyun, Lin, Siyuan, Wei, Chao, Mei, Liye, Wei, Shubin, Yao, Yifan, Zhou, Fuling, Wang, Du, Goda, Keisuke, and Lei, Cheng
- Published
- 2023
- Full Text
- View/download PDF
20. Building Change Detection in Remote Sensing Imagery with Focal Self-Attention and Multi-Level Feature Fusion.
- Author
- Shen, Peiquan, Mei, Liye, Ye, Zhaoyi, Wang, Ying, Zhang, Qi, Hong, Bo, Yin, Xiliang, and Yang, Wei
- Subjects
- URBAN growth, URBAN planning, ENVIRONMENTAL monitoring, LAND management, PROBLEM solving, INTELLIGENT buildings, THEMATIC mapper satellite, REMOTE sensing, FUSION reactors
- Abstract
Accurate and intelligent building change detection greatly contributes to effective urban development, optimized resource management, and informed decision-making in domains such as urban planning, land management, and environmental monitoring. Existing methodologies face challenges in effectively integrating local and global features for accurate building change detection. To address these challenges, we propose a novel method that uses focal self-attention to process the feature vector of input images, which uses a "focusing" mechanism to guide the calculation of the self-attention mechanism. By focusing more on critical areas when processing image features in different regions, focal self-attention can better handle both local and global information, and is more flexible and adaptive than other methods, improving detection accuracy. In addition, our multi-level feature fusion module groups the features and then constructs a hierarchical residual structure to fuse the grouped features. On the LEVIR-CD and WHU-CD datasets, our proposed method achieved F1-scores of 91.62% and 89.45%, respectively. Compared with existing methods, ours performed better on building change detection tasks. Our method therefore provides a framework for solving problems related to building change detection, with some reference value and guiding significance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
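The abstract above guides self-attention with a "focusing" mechanism that weighs critical areas while retaining global context. A heavily reduced single-head sketch of that idea: queries attend to fine local tokens plus pooled (coarse) global tokens. The pooling factor and sizes are illustrative, not the paper's module.

```python
import torch
import torch.nn.functional as F

def focal_attention(x, pool=4):
    """x: (N, C) token sequence; attends to fine tokens and pooled global tokens."""
    coarse = F.avg_pool1d(x.t().unsqueeze(0), pool).squeeze(0).t()  # (N/pool, C)
    kv = torch.cat([x, coarse], dim=0)        # fine + coarse granularities
    attn = torch.softmax(x @ kv.t() / x.shape[1] ** 0.5, dim=-1)
    return attn @ kv

tokens = torch.rand(64, 32)                   # 64 tokens, 32 channels
print(focal_attention(tokens).shape)          # torch.Size([64, 32])
```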
21. Progressive Context-Aware Aggregation Network Combining Multi-Scale and Multi-Level Dense Reconstruction for Building Change Detection.
- Author
- Xu, Chuan, Ye, Zhaoyi, Mei, Liye, Yang, Wei, Hou, Yingying, Shen, Sen, Ouyang, Wei, and Ye, Zhiwei
- Subjects
- BUILDING repair, DEEP learning, INDIVIDUAL differences, PROGRESSIVE collapse, REMOTE sensing
- Abstract
Building change detection (BCD) using high-resolution remote sensing images aims to identify change areas during different time periods, which is a significant research focus in urbanization. Deep learning methods are capable of yielding impressive BCD results by correctly extracting change features. However, due to the heterogeneous appearance and large individual differences of buildings, mainstream methods cannot further extract and reconstruct hierarchical and rich feature information. To overcome this problem, we propose a progressive context-aware aggregation network combining multi-scale and multi-level dense reconstruction to identify detailed texture-rich building change information. We design the progressive context-aware aggregation module with a Siamese structure to capture both local and global features. Specifically, we first use deep convolution to obtain superficial local change information of buildings, and then utilize self-attention to further extract global features with high-level semantics based on the local features progressively, which ensures capability of the context awareness of our feature representations. Furthermore, our multi-scale and multi-level dense reconstruction module groups extracted feature information according to pre- and post-temporal sequences. By using multi-level dense reconstruction, the following groups are able to directly learn feature information from the previous groups, enhancing the network's robustness to pseudo changes. The proposed method outperforms eight state-of-the-art methods on four common BCD datasets, including LEVIR-CD, SYSU-CD, WHU-CD, and S2Looking-CD, both in terms of visual comparison and objective evaluation metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Typing of acute leukemia by intelligent optical time-stretch imaging flow cytometry on a chip.
- Author
- Weng, Yueyun, Shen, Hui, Mei, Liye, Liu, Li, Yao, Yifan, Li, Rubing, Wei, Shubin, Yan, Ruopeng, Ruan, Xiaolan, Wang, Du, Wei, Yongchang, Deng, Yunjie, Zhou, Yuqi, Xiao, Tinghui, Goda, Keisuke, Liu, Sheng, Zhou, Fuling, and Lei, Cheng
- Subjects
- ACUTE leukemia, CONVOLUTIONAL neural networks, FLOW cytometry, ACUTE myeloid leukemia, OPTICAL images, BONE marrow
- Abstract
Acute leukemia (AL) is one of the top life-threatening diseases. Accurate typing of AL can significantly improve its prognosis. However, conventional methods for AL typing often require cell staining, which is time-consuming and labor-intensive. Furthermore, their performance is highly limited by the specificity and availability of fluorescent labels, which can hardly meet the requirements of AL typing in clinical settings. Here, we demonstrate AL typing by intelligent optical time-stretch (OTS) imaging flow cytometry on a microfluidic chip. Specifically, we employ OTS microscopy to capture the images of cells in clinical bone marrow samples with a spatial resolution of 780 nm at a high flow speed of 1 m/s in a label-free manner. Then, to show the clinical utility of our method for which the features of clinical samples are diverse, we design and construct a deep convolutional neural network (CNN) to analyze the cellular images and determine the AL type of each sample. We measure 30 clinical samples composed of 7 acute lymphoblastic leukemia (ALL) samples, 17 acute myelogenous leukemia (AML) samples, and 6 samples from healthy donors, resulting in a total of 227,620 images acquired. Results show that our method can distinguish ALL and AML with an accuracy of 95.03%, which, to the best of our knowledge, is a record in label-free AL typing. In addition to AL typing, we believe that the high throughput, high accuracy, and label-free operation of our method make it a potential solution for cell analysis in scientific research and clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Dense Multiscale Feature Learning Transformer Embedding Cross-Shaped Attention for Road Damage Detection.
- Author
- Xu, Chuan, Zhang, Qi, Mei, Liye, Shen, Sen, Ye, Zhaoyi, Li, Di, Yang, Wei, and Zhou, Xiangyang
- Subjects
- FEATURE extraction, POTHOLES (Roads), LEARNING modules, ROAD maintenance, MACHINE learning
- Abstract
Road damage detection is essential to the maintenance and management of roads. The morphological road damage contains a large number of multi-scale features, which means that existing road damage detection algorithms are unable to effectively distinguish and fuse multiple features. In this paper, we propose a dense multiscale feature learning Transformer embedding cross-shaped attention for road damage detection (DMTC) network, which can segment the damage information in road images and improve the effectiveness of road damage detection. Our DMTC makes three contributions. Firstly, we adopt a cross-shaped attention mechanism to expand the perceptual field of feature extraction, and its global attention effectively improves the feature description of the network. Secondly, we use the dense multi-scale feature learning module to integrate local information at different scales, so that we are able to overcome the difficulty of detecting multiscale targets. Finally, we utilize a multi-layer convolutional segmentation head to generalize the previous feature learning and get a final detection result. Experimental results show that our DMTC network could segment pavement pothole patterns more accurately and effectively than other methods, achieving an F1 score of 79.39% as well as an OA score of 99.83% on the cracks-and-potholes-in-road-images-dataset (CPRID). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Multiparameter Investigation of Diamond Plates with Optical Time-Stretch Quantitative Phase Imaging.
- Author
- Weng, Yueyun, Wu, Gai, Li, Rubing, Mei, Liye, Wei, Shubin, Yao, Yifan, Li, Zhongxing, Wang, Du, Liu, Sheng, and Lei, Cheng
- Published
- 2023
- Full Text
- View/download PDF
25. BM-Net: CNN-Based MobileNet-V3 and Bilinear Structure for Breast Cancer Detection in Whole Slide Images.
- Author
- Huang, Jin, Mei, Liye, Long, Mengping, Liu, Yiqiang, Sun, Wei, Li, Xiaoxiao, Shen, Hui, Zhou, Fuling, Ruan, Xiaolan, Wang, Du, Wang, Shu, Hu, Taobo, and Lei, Cheng
- Subjects
- EARLY detection of cancer, BREAST cancer, DATA augmentation, CARCINOMA in situ, CANCER diagnosis
- Abstract
Breast cancer is one of the most common types of cancer and is the leading cause of cancer-related death. Diagnosis of breast cancer is based on the evaluation of pathology slides. In the era of digital pathology, these slides can be converted into digital whole slide images (WSIs) for further analysis. However, due to their sheer size, diagnosis from digital WSIs is time-consuming and challenging. In this study, we present a lightweight architecture that consists of a bilinear structure and a MobileNet-V3 network, bilinear MobileNet-V3 (BM-Net), to analyze breast cancer WSIs. We utilized the WSI dataset from the ICIAR2018 Grand Challenge on Breast Cancer Histology Images (BACH) competition, which contains four classes: normal, benign, in situ carcinoma, and invasive carcinoma. We adopted data augmentation techniques to increase diversity and utilized focal loss to remove class imbalance. We achieved high performance, with 0.88 accuracy in patch classification and an average 0.71 score, which surpassed state-of-the-art models. Our BM-Net shows great potential in detecting cancer in WSIs and is a promising clinical tool. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
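The BM-Net record above pairs MobileNet-V3 with a bilinear structure. A reduced sketch of classic bilinear pooling (outer products of two feature streams, signed square root, L2 normalisation); the backbone is omitted and the shapes are illustrative.

```python
import torch

def bilinear_pool(fa, fb):
    """fa, fb: (B, C, H, W) feature maps from two streams (or the same one)."""
    B, C, H, W = fa.shape
    fa, fb = fa.reshape(B, C, H * W), fb.reshape(B, C, H * W)
    x = torch.bmm(fa, fb.transpose(1, 2)) / (H * W)       # (B, C, C) outer products
    x = x.reshape(B, C * C)
    x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)  # signed square root
    return torch.nn.functional.normalize(x, dim=1)        # L2 normalisation

f = torch.rand(2, 64, 7, 7)
print(bilinear_pool(f, f).shape)  # torch.Size([2, 4096])
```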
26. SAR image water extraction using the attention U-net and multi-scale level set method: flood monitoring in South China in 2020 as a test case.
- Author
- Xu, Chuan, Zhang, Shanshan, Zhao, Bofei, Liu, Chang, Sui, Haigang, Yang, Wei, and Mei, Liye
- Subjects
- CONVOLUTIONAL neural networks, WATER use, IMAGE segmentation, LEVEL set methods, FLOODS, DEEP learning, BODIES of water
- Abstract
The level set method has been extensively used for image segmentation, which is a key technology of water extraction. However, one of the problems of the level set method is how to find appropriate initial surface parameters, which affect the accuracy and speed of level set evolution. Recently, semantic segmentation based on deep learning has opened up exciting research possibilities, and Convolutional Neural Networks (CNNs) have shown a strong feature representation capability. Therefore, in this paper, a CNN is used to obtain the initial SAR image segmentation map, providing deep a priori information for the zero-level set curve; this map only needs to describe the general outline of the water body, rather than its accurate edges. Compared with the traditional circular and rectangular zero-level set initialization methods, this method converges to the edge of the water body faster and more precisely, does not fall into local minima, and obtains accurate segmentation results. The effectiveness of the proposed method is demonstrated by the experimental results of flood disaster monitoring in South China in 2020. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
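The abstract above seeds the zero-level set with a CNN segmentation map instead of circles or rectangles. A small sketch of that initialisation: a (stubbed) water-probability map becomes a signed-distance surface whose zero crossing traces the rough water outline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def init_level_set(prob_map: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """Signed distance: negative inside the predicted water body, positive outside."""
    mask = prob_map > thr
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

prob = np.zeros((128, 128))
prob[40:90, 30:100] = 0.9           # stand-in for the CNN's probability output
phi0 = init_level_set(prob)
print(phi0.min() < 0 < phi0.max())  # True: a valid initial surface for evolution
```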
27. Adversarial Multiscale Feature Learning Framework for Overlapping Chromosome Segmentation.
- Author
- Mei, Liye, Yu, Yalan, Shen, Hui, Weng, Yueyun, Liu, Yan, Wang, Du, Liu, Sheng, Zhou, Fuling, and Lei, Cheng
- Subjects
- GENERATIVE adversarial networks, CHROMOSOMES, CHROMOSOME analysis, KARYOTYPES, IMAGE analysis
- Abstract
Chromosome karyotype analysis is of great clinical importance in the diagnosis and treatment of diseases. Since manual analysis is highly time and effort consuming, computer-assisted automatic chromosome karyotype analysis based on images is routinely used to improve the efficiency and accuracy of the analysis. However, the strip-shaped chromosomes easily overlap each other when imaged, significantly affecting the accuracy of the subsequent analysis and hindering the development of chromosome analysis instruments. In this paper, we present an adversarial, multiscale feature learning framework to improve the accuracy and adaptability of overlapping chromosome segmentation. We first adopt the nested U-shaped network with dense skip connections as the generator to explore the optimal representation of the chromosome images by exploiting multiscale features. Then we use the conditional generative adversarial network (cGAN) to generate images similar to the original ones; the training stability of the network is enhanced by applying the least-square GAN objective. Finally, we replace the common cross-entropy loss with the advanced Lovász-Softmax loss to improve the model's optimization and accelerate the model's convergence. Comparing with the established algorithms, the performance of our framework is proven superior by using public datasets in eight evaluation criteria, showing its great potential in overlapping chromosome segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
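The abstract above stabilises training with the least-square GAN objective. For reference, a minimal sketch of the two LSGAN losses with the usual 0/1 targets.

```python
import torch

def lsgan_d_loss(d_real, d_fake):
    """Discriminator: push real scores toward 1 and fake scores toward 0."""
    return 0.5 * ((d_real - 1) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake):
    """Generator: push scores of generated samples toward 1."""
    return 0.5 * ((d_fake - 1) ** 2).mean()

d_real, d_fake = torch.rand(8, 1), torch.rand(8, 1)
print(lsgan_d_loss(d_real, d_fake).item(), lsgan_g_loss(d_fake).item())
```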
28. Cross spatio-temporal attention network for change detection.
- Author
- Yi, Zhigang, Yu, Haonan, Ye, Zhaoyi, Huang, Shengyu, Wang, Ying, Mei, Liye, and Xu, Chuan
- Published
- 2024
- Full Text
- View/download PDF
29. Multi‐focus image fusion with Siamese self‐attention network.
- Author
- Guo, Xiaopeng, Meng, Lingyu, Mei, Liye, Weng, Yueyun, and Tong, Hengqing
- Abstract
Recently, convolutional neural networks (CNNs) have achieved impressive progress in multi-focus image fusion (MFF). However, they often fail to capture sufficiently discriminative features due to the local receptive field limitations of the convolutional operator, restricting the performance of most current CNN-based methods. To address this issue, by leveraging the self-attention (SA) mechanism, the authors propose a Siamese SA network (SSAN) for MFF. Specifically, two kinds of SA modules, position SA (PSA) and channel SA (CSA), are utilised to model the long-range dependencies across focused and defocused regions in the multi-focus image, alleviating the local receptive field limitations of convolution operators in CNNs. To find a better feature representation of the input image for MFF, the captured features obtained by PSA and CSA are further merged through a learnable 1 × 1 convolution operator. The whole pipeline is in a Siamese network fashion to reduce complexity. After training, the authors' SSAN accomplishes the fusion task well with no post-processing. Experiments demonstrate that their approach outperforms other current state-of-the-art methods, not only in subjective visual perception but also in quantitative assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
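The abstract above models long-range dependencies with position self-attention (PSA). A reduced single-head sketch of position-wise attention over a feature map; the 1 × 1 projections and channel reduction are illustrative choices, not the authors' exact module.

```python
import torch
import torch.nn as nn

class PositionSA(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.q, self.k = nn.Conv2d(c, c // 8, 1), nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)

    def forward(self, x):
        B, C, H, W = x.shape
        q = self.q(x).reshape(B, -1, H * W).transpose(1, 2)  # (B, HW, C/8)
        k = self.k(x).reshape(B, -1, H * W)                  # (B, C/8, HW)
        v = self.v(x).reshape(B, C, H * W)                   # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)  # every position attends to all others
        out = v @ attn.transpose(1, 2)
        return out.reshape(B, C, H, W) + x   # residual connection

print(PositionSA(64)(torch.rand(1, 64, 16, 16)).shape)
```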
30. Semantic Segmentation of Colon Gland with Conditional Generative Adversarial Network.
- Author
- Mei, Liye, Guo, Xiaopeng, and Cheng, Chaowei
- Published
- 2019
- Full Text
- View/download PDF
31. Learning Geometric Invariance Features and Discrimination Representation for Image Classification via Spatial Transform Network and XGBoost Modeling.
- Author
- Mei, Liye, Guo, Xiaopeng, and Yin, Wang
- Published
- 2018
- Full Text
- View/download PDF
32. Learning to Fuse Multi-Focus Image via Convolutional Network Modeling.
- Author
- Guo, Xiaopeng, Mei, Liye, and Nie, Rencan
- Published
- 2018
- Full Text
- View/download PDF
33. FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network.
- Author
- Guo, Xiaopeng, Nie, Rencan, Cao, Jinde, Zhou, Dongming, Mei, Liye, and He, Kangjian
- Abstract
We study the problem of multi-focus image fusion, where the key challenge is detecting the focused regions accurately among multiple partially focused source images. Inspired by the success of the conditional generative adversarial network (cGAN) on image-to-image tasks, we propose a novel FuseGAN to fulfill the images-to-image mapping for multi-focus image fusion. To satisfy the requirement of dual input to one output, the encoder of the generator in FuseGAN is designed as a Siamese network. The least square GAN objective is employed to enhance the training stability of FuseGAN, resulting in an accurate confidence map for focus region detection. Also, we exploit the convolutional conditional random fields technique on the confidence map to reach a refined final decision map for better focus region detection. Moreover, due to the lack of a large-scale standard dataset, we synthesize a sufficiently large multi-focus image dataset based on a public natural image dataset, PASCAL VOC 2012, where we utilize a normalized disk point spread function to simulate the defocus and separate the background and foreground in the synthesis for each image. We conduct extensive experiments on two public datasets to verify the effectiveness of the proposed method. Results demonstrate that the proposed method presents accurate decision maps for focus regions in multi-focus images, such that the fused images are superior to those of 11 recent state-of-the-art algorithms, not only in visual perception but also in quantitative analysis in terms of five metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
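The FuseGAN abstract above synthesises multi-focus training data by blurring with a normalised disk point spread function and splitting foreground from background. A compact sketch of that synthesis step; the mask and disk radius are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def disk_psf(radius: int) -> np.ndarray:
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = (x**2 + y**2 <= radius**2).astype(float)
    return psf / psf.sum()                       # normalise to preserve brightness

img = np.random.rand(128, 128)
mask = np.zeros_like(img); mask[:, 64:] = 1      # stand-in foreground mask
blurred = fftconvolve(img, disk_psf(5), mode="same")
multi_focus = img * mask + blurred * (1 - mask)  # right half sharp, left half defocused
print(multi_focus.shape)
```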
34. CellSAM: Advancing Pathologic Image Cell Segmentation via Asymmetric Large-Scale Vision Model Feature Distillation Aggregation Network.
- Author
- Ma X, Huang J, Long M, Li X, Ye Z, Hu W, Yalikun Y, Wang D, Hu T, Mei L, and Lei C
- Abstract
Segment anything model (SAM) has attracted extensive interest as a potent large-scale image segmentation model, with prior efforts adapting it for use in medical imaging. However, the precise segmentation of cell nucleus instances remains a formidable challenge in computational pathology, given substantial morphological variations and the dense clustering of nuclei with unclear boundaries. This study presents an innovative cell segmentation algorithm named CellSAM, which has the potential to improve the effectiveness and precision of disease identification and therapy planning. As a variant of SAM, CellSAM integrates dual-image encoders and employs techniques such as knowledge distillation and mask fusion. This innovative model exhibits promising capabilities in capturing intricate cell structures and ensuring adaptability in resource-constrained scenarios. The experimental results indicate that this structure effectively enhances the quality and precision of cell segmentation. Remarkably, CellSAM demonstrates outstanding results even with minimal training data. In the evaluation of particular cell segmentation tasks, extensive comparative analyses show that CellSAM outperforms both general fundamental models and state-of-the-art (SOTA) task-specific models. Comprehensive evaluation metrics yield scores of 0.884, 0.876, and 0.768 for mean accuracy, recall, and precision, respectively. Extensive experiments show that CellSAM excels in capturing subtle details and complex structures and is capable of segmenting cells in images accurately. Additionally, CellSAM demonstrates excellent performance on clinical data, indicating its potential for robust applications in treatment planning and disease diagnosis, thereby further improving the efficiency of computer-aided medicine. (© 2024 Wiley Periodicals LLC.)
- Published
- 2024
- Full Text
- View/download PDF
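The CellSAM record above mentions knowledge distillation between encoders. A tiny sketch of generic feature distillation: a lightweight student mimics a frozen teacher's features through a projection layer. All modules and sizes here are illustrative assumptions, not CellSAM's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Conv2d(3, 256, 3, padding=1).eval()  # stand-in frozen large encoder
student = nn.Conv2d(3, 64, 3, padding=1)          # lightweight student encoder
proj = nn.Conv2d(64, 256, 1)                      # align student channels to teacher

x = torch.rand(2, 3, 64, 64)
with torch.no_grad():
    t_feat = teacher(x)                           # teacher features, no gradient
loss = F.mse_loss(proj(student(x)), t_feat)       # distillation objective
loss.backward()
print(loss.item() >= 0)
```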
35. High-Accuracy and Lightweight Image Classification Network for Optimizing Lymphoblastic Leukemia Diagnosis.
- Author
- Mei L, Lian C, Han S, Jin S, He J, Dong L, Wang H, Shen H, Lei C, and Xiong B
- Abstract
Leukemia is a hematological malignancy that significantly impacts the human immune system. Early detection helps to effectively manage and treat cancer. Although deep learning techniques hold promise for the early detection of blood disorders, their effectiveness is often limited by the physical constraints of available datasets and deployed devices. For this investigation, we collect a high-quality dataset of 17,826 morphological bone marrow cell images from 85 patients with lymphoproliferative neoplasms. We employ a progressive shrinking approach, which integrates a comprehensive pruning technique across multiple dimensions, including width, depth, resolution, and kernel size, to train our lightweight model. The proposed model achieves rapid identification of acute lymphoblastic leukemia, chronic lymphocytic leukemia, and other bone marrow cell types with an accuracy of 92.51% and a throughput of 111 slides per second, while comprising only 6.4 million parameters. This model significantly contributes to leukemia diagnosis, particularly in the rapid and accurate identification of lymphatic system diseases, and provides potential opportunities to enhance the efficiency and accuracy of medical experts in the diagnosis and treatment of lymphocytic leukemia. (© 2024 Wiley Periodicals LLC.)
- Published
- 2024
- Full Text
- View/download PDF
36. MSGM: An Advanced Deep Multi-Size Guiding Matching Network for Whole Slide Histopathology Images Addressing Staining Variation and Low Visibility Challenges.
- Author
- Li X, Li Z, Hu T, Long M, Ma X, Huang J, Liu Y, Yalikun Y, Liu S, Wang D, Wu J, Mei L, and Lei C
- Subjects
- Humans, Algorithms, Staining and Labeling methods, Neoplasms diagnostic imaging, Neoplasms pathology, Neural Networks, Computer, Histocytochemistry methods, Image Processing, Computer-Assisted methods, Image Interpretation, Computer-Assisted methods, Deep Learning
- Abstract
Matching whole slide histopathology images to provide comprehensive information on homologous tissues is beneficial for cancer diagnosis. However, the challenge arises with Giga-pixel whole slide images (WSIs) when aiming for high-accuracy matching. Learning-based methods are difficult to generalize well to large-size WSIs, necessitating the integration of traditional matching methods to enhance accuracy as the size increases. In this paper, we propose a multi-size guiding matching method applicable to high-accuracy requirements. Specifically, we learn multiscale textures to train deep descriptors, called TDescNet, which trains 64 × 64 × 256 and 256 × 256 × 128 convolution layers as C64 and C256 descriptors to overcome staining variation and low visibility challenges. Furthermore, we develop the 3D-ring descriptor using sparse keypoints to support the description of large-size WSIs. Finally, we employ the C64, C256, and 3D-ring descriptors to progressively guide refined local matching, utilizing geometric consistency to identify correct matching results. Experiments show that when matching WSIs of size 4096 × 4096 pixels, our average matching error is 123.48 μm and the success rate is 93.02% in 43 cases. Notably, our method achieves an average improvement of 65.52 μm in matching accuracy compared to recent state-of-the-art methods, with enhancements ranging from 36.27 μm to 131.66 μm. Therefore, we achieve high-fidelity whole-slide image matching and overcome staining variation and low visibility challenges, enabling assistance in comprehensive cancer diagnosis through matched WSIs.
- Published
- 2024
- Full Text
- View/download PDF
37. Adversarial training collaborating hybrid convolution-transformer network for automatic identification of reactive lymphocytes in peripheral blood.
- Author
- Mei L, Peng H, Luo P, Jin S, Shen H, He J, Yang W, Ye Z, Sui H, Mei M, Lei C, and Xiong B
- Abstract
Reactive lymphocytes may indicate diseases such as viral infections. Identifying these abnormal lymphocytes is crucial for disease diagnosis. Currently, reactive lymphocytes are mainly identified manually by pathological experts with microscopes and morphological knowledge, which is time-consuming and laborious. Some studies have used convolutional neural networks (CNNs) to identify peripheral blood leukocytes, but the small receptive field of such models is a limitation. Our model introduces a transformer on top of a CNN, expanding the receptive field of the model and enabling it to extract global features more efficiently. We also enhance the generalization ability of the model through virtual adversarial training (VAT) without changing the parameters of the model. Finally, our model achieves an overall accuracy of 93.66% on the test set, and the accuracy for reactive lymphocytes reaches 88.03%. This work takes another step toward the efficient identification of reactive lymphocytes. Competing Interests: The authors declare no conflicts of interest. (© 2024 Optica Publishing Group.)
- Published
- 2024
- Full Text
- View/download PDF
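The abstract above regularises the classifier with virtual adversarial training (VAT). A condensed sketch of the standard VAT loss with a single power-iteration step; the ξ and ε values and the toy model are illustrative.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)             # current predictive distribution
    d = torch.randn_like(x)
    d = xi * d / d.norm(p=2)                       # tiny random probe direction
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * grad / (grad.norm(p=2) + 1e-12)  # virtual adversarial direction
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 3))
x = torch.rand(4, 8, 8)
print(vat_loss(model, x).item() >= 0)
```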
38. Monitoring response to neoadjuvant therapy for breast cancer in all treatment phases using an ultrasound deep learning model.
- Author
- Zhang J, Deng J, Huang J, Mei L, Liao N, Yao F, Lei C, Sun S, and Zhang Y
- Abstract
Purpose: The aim of this study was to investigate the value of a deep learning model (DLM) based on breast tumor ultrasound image segmentation in predicting pathological response to neoadjuvant chemotherapy (NAC) in breast cancer. Methods: The dataset contains a total of 1393 ultrasound images of 913 patients from Renmin Hospital of Wuhan University, of which 956 ultrasound images of 856 patients were used as the training set, and 437 ultrasound images of 57 patients who underwent NAC were used as the test set. A U-Net-based end-to-end DLM was developed for automatic tumor segmentation and area calculation. The predictive abilities of the DLM, a manual segmentation model (MSM), and two traditional ultrasound measurement methods (longest axis model [LAM] and dual-axis model [DAM]) for pathological complete response (pCR) were compared using changes in tumor size ratios to develop receiver operating characteristic curves. Results: The average intersection over union value of the DLM was 0.856. The early-stage ultrasound-predicted area under curve (AUC) values for pCR were not significantly different from those of the intermediate and late stages (p < 0.05). The AUCs for MSM, DLM, LAM and DAM were 0.840, 0.756, 0.778 and 0.796, respectively. There was no significant difference in the AUC values of the predictive ability of the four models. Conclusion: Ultrasonography was predictive of pCR in the early stages of NAC. The DLM has a predictive value for pCR similar to that of conventional ultrasound, with the added benefit of effectively improving workflow. Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2024 Zhang, Deng, Huang, Mei, Liao, Yao, Lei, Sun and Zhang.)
- Published
- 2024
- Full Text
- View/download PDF
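The study above compares models by converting changes in tumour size ratio into pCR predictions and summarising them with ROC curves. A toy sketch of that evaluation; the shrinkage distributions below are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
pcr = rng.integers(0, 2, size=57)       # 1 = pathological complete response
# assume pCR cases shrink more, so their post/pre area ratio is lower on average
ratio = np.where(pcr == 1, rng.normal(0.3, 0.15, 57), rng.normal(0.7, 0.15, 57))
score = 1.0 - ratio                     # larger shrinkage -> higher pCR score
print("AUC:", round(roc_auc_score(pcr, score), 3))
```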
39. High-throughput and high-accuracy diagnosis of multiple myeloma with multi-object detection.
- Author
- Mei L, Shen H, Yu Y, Weng Y, Li X, Zahid KR, Huang J, Wang D, Liu S, Zhou F, and Lei C
- Abstract
Multiple myeloma (MM) is a type of blood cancer in which plasma cells abnormally multiply and crowd out regular blood cells in the bones. Automated analysis of bone marrow smear examination is considered promising to improve the performance and reduce the labor cost in MM diagnosis. Established methods mainly aim at identifying monoclonal plasma cells (monoclonal PCs) via binary classification; however, monoclonal PCs are not the only basis in MM diagnosis. To address this drawback, in this work we construct, for the first time, a multi-object detection model for MM diagnosis. The experimental results show that our model can handle images at a throughput of 80 slides/s and identify six lineages of bone marrow cells with an average accuracy of 90.8%. This work makes a step further toward fully automatic and high-efficiency MM diagnosis. Competing Interests: The authors declare no conflicts of interest. (© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.)
- Published
- 2022
- Full Text
- View/download PDF
40. Analysis of signal detection configurations in optical time-stretch imaging.
- Author
- Weng Y, Mei L, Wu G, Chen S, Zhan B, Goda K, Liu S, and Lei C
- Abstract
Optical time-stretch (OTS) imaging is effective for observing ultra-fast dynamic events in real time by virtue of its capability of acquiring images with high spatial resolution at high speed. Different implementations of OTS imaging employ different signal detection configurations, i.e. fiber-coupled and free-space detection schemes. In this research, we quantitatively analyze and compare the two detection configurations of OTS imaging in terms of sensitivity and image quality with the USAF-1951 resolution chart and diamond films, respectively, providing valuable guidance for the system design of OTS imaging in diverse fields.
- Published
- 2020
- Full Text
- View/download PDF
41. Temporally interleaved optical time-stretch imaging.
- Author
- Weng Y, Wu G, Mei L, Wang Q, Goda K, Liu S, and Lei C
- Abstract
Optical time-stretch imaging has shown potential in diverse fields for its capability of acquiring images at high speed and high resolution. However, its wide application is hindered by the stringent requirement on the instrumentation hardware caused by the high-speed serial data stream. Here we demonstrate temporally interleaved optical time-stretch imaging that lowers the requirement without sacrificing the frame rate or spatial resolution by interleaving the high-speed data stream into multiple channels in the time domain. Its performance is validated with both a United States Air Force (USAF)-1951 resolution chart and a single-crystal diamond film. We achieve a 101 Mfps 1D scanning rate and 3 µm spatial resolution with only a 2.5 GS/s sampling rate by using a two-channel-interleaved system.
- Published
- 2020
- Full Text
- View/download PDF
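The record above halves the per-channel digitiser requirement by splitting the serial stream across two channels in the time domain. A tiny sketch of two-channel temporal interleaving and lossless software recombination.

```python
import numpy as np

signal = np.sin(np.linspace(0, 40 * np.pi, 2000))  # stand-in serial data stream
ch0, ch1 = signal[0::2], signal[1::2]              # each channel samples at half rate

recombined = np.empty_like(signal)
recombined[0::2], recombined[1::2] = ch0, ch1      # interleave back in time
print(np.allclose(recombined, signal))             # True: stream fully recovered
```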