40 results for "Sun, Yaoqi"
Search Results
2. Tyrosine hydroxylase inhibits HCC progression by downregulating TGFβ/Smad signaling
- Author
- Liu, Guoqian, Li, Mengwei, Zeng, Zimei, Fan, Qi, Ren, Xinxin, Wang, Zhexin, Sun, Yaoqi, He, Yulin, Sun, Lunquan, Deng, Yuezhen, Liu, Shupeng, Zhong, Chenxi, and Gao, Jie
- Published
- 2024
- Full Text
- View/download PDF
3. Dynamic interactive refinement network for camouflaged object detection
- Author
- Sun, Yaoqi, Ma, Lidong, Shou, Peiyao, Wen, Hongfa, Gao, YuHan, Liu, Yixiu, Yan, Chenggang, and Yin, Haibing
- Published
- 2024
- Full Text
- View/download PDF
4. GFNet: gated fusion network for video saliency prediction
- Author
- Wu, Songhe, Zhou, Xiaofei, Sun, Yaoqi, Gao, Yuhan, Zhu, Zunjie, Zhang, Jiyong, and Yan, Chenggang
- Published
- 2023
- Full Text
- View/download PDF
5. Enhanced local distribution learning for real image super-resolution
- Author
- Sun, Yaoqi, Chen, Quan, Xu, Wen, Huang, Aiai, Yan, Chenggang, and Zheng, Bolun
- Published
- 2024
- Full Text
- View/download PDF
6. Pixantrone as a novel MCM2 inhibitor for ovarian cancer treatment
- Author
- Chen, Qingshan, Sun, Yaoqi, Li, Hao, Liu, Shupeng, Zhang, Hai, Cheng, Zhongping, and Wang, Yu
- Published
- 2024
- Full Text
- View/download PDF
7. Multiple-environment Self-adaptive Network for aerial-view geo-localization
- Author
- Wang, Tingyu, Zheng, Zhedong, Sun, Yaoqi, Yan, Chenggang, Yang, Yi, and Chua, Tat-Seng
- Published
- 2024
- Full Text
- View/download PDF
8. Depth-guided deep filtering network for efficient single image bokeh rendering
- Author
- Chen, Quan, Zheng, Bolun, Zhou, Xiaofei, Huang, Aiai, Sun, Yaoqi, Chen, Chuqiao, Yan, Chenggang, and Yuan, Shanxin
- Published
- 2023
- Full Text
- View/download PDF
9. ADNet: Anti-noise dual-branch network for road defect detection
- Author
- Wan, Bin, Zhou, Xiaofei, Sun, Yaoqi, Wang, Tingyu, Lv, Chengtao, Wang, Shuai, Yin, Haibing, and Yan, Chenggang
- Published
- 2024
- Full Text
- View/download PDF
10. TMNet: Triple-modal interaction encoder and multi-scale fusion decoder network for V-D-T salient object detection
- Author
- Wan, Bin, Lv, Chengtao, Zhou, Xiaofei, Sun, Yaoqi, Zhu, Zunjie, Wang, Hongkui, and Yan, Chenggang
- Published
- 2024
- Full Text
- View/download PDF
11. SMINet: Semantics-aware multi-level feature interaction network for surface defect detection
- Author
- Wan, Bin, Zhou, Xiaofei, Sun, Yaoqi, Zhu, Zunjie, Yin, Haibing, Hu, Ji, Zhang, Jiyong, and Yan, Chenggang
- Published
- 2023
- Full Text
- View/download PDF
12. CANet: Context-aware Aggregation Network for Salient Object Detection of Surface Defects
- Author
- Wan, Bin, Zhou, Xiaofei, Zhu, Bin, Xiao, Mang, Sun, Yaoqi, Zheng, Bolun, Zhang, Jiyong, and Yan, Chenggang
- Published
- 2023
- Full Text
- View/download PDF
13. MCM2 in human cancer: functions, mechanisms, and clinical significance
- Author
- Sun, Yaoqi, Cheng, Zhongping, and Liu, Shupeng
- Published
- 2022
- Full Text
- View/download PDF
14. miR-133a targets YES1 to reduce cisplatin resistance in ovarian cancer by regulating cell autophagy
- Author
- Zhou, Yang, Wang, Chunyan, Ding, Jinye, Chen, Yingying, Sun, Yaoqi, and Cheng, Zhongping
- Published
- 2022
- Full Text
- View/download PDF
15. DeMambaNet: Deformable Convolution and Mamba Integration Network for High-Precision Segmentation of Ambiguously Defined Dental Radicular Boundaries.
- Author
- Zou, Binfeng, Huang, Xingru, Jiang, Yitao, Jin, Kai, and Sun, Yaoqi
- Subjects
- X-ray imaging, DENTAL pulp, IMAGE segmentation, DISEASE progression, DENTIN
- Abstract
The incorporation of automatic segmentation methodologies into dental X-ray images refined the paradigms of clinical diagnostics and therapeutic planning by facilitating meticulous, pixel-level articulation of both dental structures and proximate tissues. This underpins the pillars of early pathological detection and meticulous disease progression monitoring. Nonetheless, conventional segmentation frameworks often encounter significant setbacks attributable to the intrinsic limitations of X-ray imaging, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin. To surmount these impediments, we propose the Deformable Convolution and Mamba Integration Network, an innovative 2D dental X-ray image segmentation architecture, which amalgamates a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder. Collectively, these components bolster the management of multi-scale global features, fortify the stability of feature representation, and refine the amalgamation of feature vectors. A comparative assessment against 14 baselines underscores its efficacy, registering a 0.95% enhancement in the Dice Coefficient and a diminution of the 95th percentile Hausdorff Distance to 7.494. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Deep fusion based video saliency detection
- Author
- Wen, Hongfa, Zhou, Xiaofei, Sun, Yaoqi, Zhang, Jiyong, and Yan, Chenggang
- Published
- 2019
- Full Text
- View/download PDF
17. Image classification base on PCA of multi-view deep representation
- Author
- Sun, Yaoqi, Li, Liang, Zheng, Liang, Hu, Ji, Li, Wenchao, Jiang, Yatong, and Yan, Chenggang
- Published
- 2019
- Full Text
- View/download PDF
18. Multi-Channel Hypergraph Collaborative Filtering with Attribute Inference.
- Author
- Jiang, Yutong, Gao, Yuhan, Sun, Yaoqi, Wang, Shuai, and Yan, Chenggang
- Subjects
- HYPERGRAPHS, DATA modeling
- Abstract
In the field of collaborative filtering, attribute information is often integrated to improve recommendations. However, challenges remain unaddressed. Firstly, existing data modeling methods often fall short of appropriately handling attribute information. Secondly, attribute data are often sparse and can potentially impact recommendation performance due to the challenge of incomplete correspondence between the attribute information and the recommendations. To tackle these challenges, we propose a hypergraph collaborative filtering with attribute inference (HCFA) framework, which segregates attribute and user behavior information into distinct channels and leverages hypergraphs to capture high-order correlations among vertices, offering a more natural approach to modeling. Furthermore, we introduce behavior-based attribute confidence (BAC) for assessing the reliability of inferred attributes concerning the corresponding behaviors and update the most credible portions to enhance recommendation quality. Extensive experiments conducted on three public benchmarks demonstrate the superiority of our model. It consistently outperforms other state-of-the-art approaches, with ablation experiments further confirming the effectiveness of our proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Segmentation of Low-Light Optical Coherence Tomography Angiography Images under the Constraints of Vascular Network Topology.
- Author
- Li, Zhi, Huang, Gaopeng, Zou, Binfeng, Chen, Wenhao, Zhang, Tianyun, Xu, Zhaoyang, Cai, Kunyan, Wang, Tingyu, Sun, Yaoqi, Wang, Yaqi, Jin, Kai, and Huang, Xingru
- Subjects
- OPTICAL coherence tomography, ANGIOGRAPHY, CARDIOVASCULAR system, IMAGE segmentation, RETINAL vein occlusion, VEIN diseases
- Abstract
Optical coherence tomography angiography (OCTA) offers critical insights into the retinal vascular system, yet its full potential is hindered by challenges in precise image segmentation. Current methodologies struggle with imaging artifacts and clarity issues, particularly under low-light conditions and when using various high-speed CMOS sensors. These challenges are particularly pronounced when diagnosing and classifying diseases such as branch vein occlusion (BVO). To address these issues, we have developed a novel network based on topological structure generation, which transitions from superficial to deep retinal layers to enhance OCTA segmentation accuracy. Our approach not only demonstrates improved performance through qualitative visual comparisons and quantitative metric analyses but also effectively mitigates artifacts caused by low-light OCTA, resulting in reduced noise and enhanced clarity of the images. Furthermore, our system introduces a structured methodology for classifying BVO diseases, bridging a critical gap in this field. The primary aim of these advancements is to elevate the quality of OCTA images and bolster the reliability of their segmentation. Initial evaluations suggest that our method holds promise for establishing robust, fine-grained standards in OCTA vascular segmentation and analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Lightweight Cross-Modal Information Mutual Reinforcement Network for RGB-T Salient Object Detection.
- Author
- Lv, Chengtao, Wan, Bin, Zhou, Xiaofei, Sun, Yaoqi, Zhang, Jiyong, and Yan, Chenggang
- Subjects
- COST
- Abstract
RGB-T salient object detection (SOD) has made significant progress in recent years. However, most existing works are based on heavy models, which are not applicable to mobile devices. Additionally, there is still room for improvement in the design of cross-modal feature fusion and cross-level feature fusion. To address these issues, we propose a lightweight cross-modal information mutual reinforcement network for RGB-T SOD. Our network consists of a lightweight encoder, the cross-modal information mutual reinforcement (CMIMR) module, and the semantic-information-guided fusion (SIGF) module. To reduce the computational cost and the number of parameters, we employ the lightweight module in both the encoder and decoder. Furthermore, to fuse the complementary information between two-modal features, we design the CMIMR module to enhance the two-modal features. This module effectively refines the two-modal features by absorbing previous-level semantic information and inter-modal complementary information. In addition, to fuse the cross-level feature and detect multiscale salient objects, we design the SIGF module, which effectively suppresses the background noisy information in low-level features and extracts multiscale information. We conduct extensive experiments on three RGB-T datasets, and our method achieves competitive performance compared to the other 15 state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A High-Throughput Processor for GDN-Based Deep Learning Image Compression.
- Author
- Shao, Hu, Liu, Bingtao, Li, Zongpeng, Yan, Chenggang, Sun, Yaoqi, and Wang, Tingyu
- Subjects
- IMAGE compression, FIELD programmable gate arrays, DEEP learning, BIT rate
- Abstract
Deep learning-based image compression techniques can take advantage of the autoencoder's benefits to achieve greater compression quality at the same bit rate as traditional image compression, which is more in line with user desires. Designing a high-performance processor that can increase the inference speed and efficiency of the deep learning image compression (DIC) network is important to make this technology more extensively employed in mobile devices. To the best of our knowledge, there is no dedicated processor that can accelerate DIC with low power consumption, and general-purpose network accelerators based on field programmable gate arrays (FPGA) cannot directly process compressed networks, so we propose a processor suitable for DIC in this paper. First, we analyze the image compression algorithm and quantize the data of the network into 16-bit fixed points using a dynamic hierarchical quantization. Then, we design an operation module, which is the core computational part for processing. It is composed of convolution, sampling, and normalization units, which pipeline the inference calculation for each layer of the network. To achieve high-throughput inference computing, the processing elements group (PEG) array with local buffers is developed for convolutional computation. Based on the common components in encoding and decoding, the sampling and normalization units are compatible with codec computation and utilized for image compression with time-sharing multiplexing. According to the control signal, the operation module could change the order of data flow through the three units so that they perform encoding and decoding operations, respectively. Based on these design methods and schemes, DIC is deployed into the Xilinx Zynq ZCU104 development board to achieve high-throughput image compression at 6 different bit rates. The experimental results show that the processor can run at 200 MHz and achieve 283.4 GOPS for the 16-bit fixed-point DIC network. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. CAE-Net: Cross-Modal Attention Enhancement Network for RGB-T Salient Object Detection.
- Author
- Lv, Chengtao, Wan, Bin, Zhou, Xiaofei, Sun, Yaoqi, Hu, Ji, Zhang, Jiyong, and Yan, Chenggang
- Subjects
- INFRARED imaging, THERMOGRAPHY, OBJECT recognition (Computer vision), PROBLEM solving
- Abstract
RGB salient object detection (SOD) performs poorly in low-contrast and complex background scenes. Fortunately, the thermal infrared image can capture the heat distribution of scenes as complementary information to the RGB image, so the RGB-T SOD has recently attracted more and more attention. Many researchers have committed to accelerating the development of RGB-T SOD, but some problems still remain to be solved. For example, the defective sample and interfering information contained in the RGB or thermal image hinder the model from learning proper saliency features, meanwhile the low-level features with noisy information result in incomplete salient objects or false positive detection. To solve these problems, we design a cross-modal attention enhancement network (CAE-Net). First, we concretely design a cross-modal fusion (CMF) module to fuse cross-modal features, where the cross-attention unit (CAU) is employed to enhance the two modal features, and channel attention is used to dynamically weigh and fuse the two modal features. Then, we design the joint-modality decoder (JMD) to fuse cross-level features, where the low-level features are purified by higher level features, and multi-scale features are sufficiently integrated. Besides, we add two single-modality decoder (SMD) branches to preserve more modality-specific information. Finally, we employ a multi-stream fusion (MSF) module to fuse three decoders' features. Comprehensive experiments are conducted on three RGB-T datasets, and the results show that our CAE-Net is comparable to the other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Comprehensive Analysis Reveals Distinct Immunological and Prognostic Characteristics of CD276/B7-H3 in Pan-Cancer.
- Author
- Ding, Jinye, Sun, Yaoqi, Sulaiman, Zubaidan, Li, Caixia, Cheng, Zhongping, and Liu, Shupeng
- Subjects
- GENE expression, PROGNOSIS, MULTIPLE tumors, SURVIVAL rate, SQUAMOUS cell carcinoma
- Abstract
Background: CD276 (also known as B7-H3), a newly discovered immunoregulatory protein that belongs to the B7 family, is a significant and attractive target for cancer immunotherapy. Existing evidence demonstrates its pivotal role in the tumorigenesis of some cancers. However, there still lacks a systematic and comprehensive pan-cancer analysis of the role of CD276 in tumor immunology and prognosis. Methods: We explored and validated the mRNA and protein expression levels of CD276 in multiple tumors through public databases and clinical tissue specimens. Univariate Cox regression analysis and Kaplan–Meier analysis were applied to assess the prognostic value of CD276. The correlation between CD276 expression and clinical characteristics and immunological features in diverse tumors was also explored. GSEA was performed to illuminate the biological function and involved pathways of CD276. Moreover, the CellMiner database was used to interpret the relationship between CD276 and multiple chemotherapeutic agents. A CCK-8 assay was performed to validate the biological function of CD276 in vitro. Results: In general, CD276 was differentially expressed between most tumor tissues and their corresponding normal tissues. Higher expression levels of CD276 were associated with poorer survival outcomes in most tumor cohorts from TCGA. There was a close correlation between CD276 expression and clinical features, the infiltration levels of specific immune cells, immune subtypes, TMB, MSI, MMR, recognized immunoregulatory genes, and drug sensitivity across diverse human cancers. The scRNA-seq data analysis further revealed that CD276 was mainly expressed on tumor-infiltrating macrophages. Additionally, in vitro experiments showed that knockdown of CD276 inhibited the proliferation of ovarian cancer (OV) and cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) cell lines. Conclusion: CD276 is a potent biomarker for predicting the prognosis and immunological features in some tumors, and it may play a critical role in the tumor immune microenvironment (TIME) through macrophage-associated signaling. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Identification of an Autophagy-Related Signature for Prognosis and Immunotherapy Response Prediction in Ovarian Cancer.
- Author
- Ding, Jinye, Wang, Chunyan, Sun, Yaoqi, Guo, Jing, Liu, Shupeng, and Cheng, Zhongping
- Subjects
- OVARIAN cancer, INDUCED ovulation, IMMUNE checkpoint inhibitors, DISEASE risk factors, CELL communication, PROGNOSIS, PROPORTIONAL hazards models, PROGRESSION-free survival
- Abstract
Background: Ovarian cancer (OC) is one of the most malignant tumors in the female reproductive system, with a poor prognosis. Various responses to treatments including chemotherapy and immunotherapy are observed among patients due to their individual characteristics. Applicable prognostic markers could make it easier to refine risk stratification for OC patients. Autophagy is closely implicated in the occurrence and development of tumors, including OC. Whether autophagy-related genes can be used as prognostic markers for OC patients remains unclear. Methods: The gene transcriptome data of 374 OC patients were downloaded from The Cancer Genome Atlas (TCGA) database. The correlation between the autophagy levels and outcomes of OC patients was identified through the single sample gene set enrichment analysis (ssGSEA). Recognized molecular markers of autophagy in different clinical specimens were detected by immunohistochemistry (IHC) assay. The gene set enrichment analysis (GSEA), ESTIMATE, and CIBERSORT analysis were applied to explore the correlation of autophagy with the tumor immune microenvironment (TIME). Single-cell RNA-sequencing (scRNA-seq) data from seven OC patients were included for characterizing cell-cell interaction patterns of autophagy-high or low tumor cells. Machine learning, Stepwise Cox regression and LASSO-Cox analysis were used to screen autophagy hub genes, which were used to establish an autophagy-related signature for prognosis evaluation. Four tumor immunotherapy cohorts were obtained from the GEO (Gene Expression Omnibus) database and the literature for autophagy risk score validation. Results: The autophagy levels were closely related to the prognosis of the OC patients. Additionally, the autophagy levels were correlated with TIME status including immune score and immune-cell infiltration. The scRNA-seq analysis found that tumor cells with high or low autophagy levels had different interactions with immune cells, especially macrophages.
Eight autophagy-hub genes (ZFYVE1, AMBRA1, LAMP2, TRAF6, PDPK1, ATG2B, DAPK1 and TP53INP2) were screened for an autophagy-related signature. According to this signature, higher risk score was correlated with poor prognosis and better immunotherapy response in the OC patients. Conclusions: The autophagy-related signature is applicable to predict the prognosis and immune checkpoint inhibitors (ICIs) therapy efficiency in OC patients. It is possible to identify OC patients who will respond to ICIs therapy and have a favorable prognosis, although more verification is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. AutoDeconJ: a GPU-accelerated ImageJ plugin for 3D light-field deconvolution with optimal iteration numbers predicting.
- Author
- Su, Changqing, Gao, Yuhan, Zhou, You, Sun, Yaoqi, Yan, Chenggang, Yin, Haibing, and Xiong, Bo
- Subjects
- DEEP learning, INTERNET servers, THREE-dimensional imaging, MICROSCOPY, FORECASTING
- Abstract
Motivation Light-field microscopy (LFM) is a compact solution to high-speed 3D fluorescence imaging. Usually, we need to do 3D deconvolution to the captured raw data. Although there are deep neural network methods that can accelerate the reconstruction process, the model is not universally applicable for all system parameters. Here, we develop AutoDeconJ, a GPU-accelerated ImageJ plugin for 4.4× faster and more accurate deconvolution of LFM data. We further propose an image quality metric for the deconvolution process, aiding in automatically determining the optimal number of iterations with higher reconstruction accuracy and fewer artifacts. Results Our proposed method outperforms state-of-the-art light-field deconvolution methods in reconstruction time and optimal iteration numbers prediction capability. It shows better universality of different light-field point spread function (PSF) parameters than the deep learning method. The fast, accurate and general reconstruction performance for different PSF parameters suggests its potential for mass 3D reconstruction of LFM data. Availability and implementation The codes, the documentation and example data are available on an open source at: https://github.com/Onetism/AutoDeconJ.git. Supplementary information Supplementary data are available at Bioinformatics online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Learning Frequency Domain Priors for Image Demoireing.
- Author
- Zheng, Bolun, Yuan, Shanxin, Yan, Chenggang, Tian, Xiang, Zhang, Jiyong, Sun, Yaoqi, Liu, Lin, Leonardis, Ales, and Slabaugh, Gregory
- Subjects
- PIXELS, IMAGE reconstruction, CONVOLUTIONAL neural networks, BANDPASS filters, COLOR removal in water purification, IMAGE color analysis
- Abstract
Image demoireing is a multi-faceted image restoration task involving both moire pattern removal and color restoration. In this paper, we propose a general degradation model to describe an image contaminated by moire patterns, and propose a novel multi-scale bandpass convolutional neural network (MBCNN) for single image demoireing. For moire pattern removal, we propose multi-block-size learnable bandpass filters (M-LBFs), based on a block-wise frequency domain transform, to learn the frequency domain priors of moire patterns. We also introduce a new loss function named Dilated Advanced Sobel loss (D-ASL) to better sense the frequency information. For color restoration, we propose a two-step tone mapping strategy, which first applies a global tone mapping to correct for a global color shift, and then performs local fine tuning of the color per pixel. To determine the most appropriate frequency domain transform, we investigate several transforms including DCT, DFT, DWT, learnable non-linear transform and learnable orthogonal transform. We finally adopt the DCT. Our basic model won the AIM2019 demoireing challenge. Experimental results on three public datasets show that our method outperforms state-of-the-art methods by a large margin. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Evolution of ICTs-empowered-identification: A general re-ranking method for person re-identification
- Author
- Zhu, Bin, Xu, Tongkun, Zheng, Bolun, Zhang, Quan, Sun, Yaoqi, Liu, Anan, Mao, Zhendong, and Yan, Chenggang
- Published
- 2021
- Full Text
- View/download PDF
28. Predicting Protein–Protein Interactions Based on Ensemble Learning-Based Model from Protein Sequence.
- Author
- Zhan, Xinke, Xiao, Mang, You, Zhuhong, Yan, Chenggang, Guo, Jianxin, Wang, Liping, Sun, Yaoqi, and Shang, Bingwan
- Subjects
- AMINO acid sequence, PROTEIN models, PROTEIN-protein interactions, SUPPORT vector machines, FEATURE extraction, HELICOBACTER pylori
- Abstract
Simple Summary: Because most traditional high-throughput experiments are tedious and laborious in identifying potential protein–protein interactions, we propose a novel computational method that can identify unknown protein–protein interactions efficiently, and we hope this method can provide a helpful idea and tool for proteomics research. Protein–protein interactions (PPIs) play an essential role in many biological cellular functions. However, it is still tedious and time-consuming to identify protein–protein interactions through traditional experimental methods. For this reason, it is imperative and necessary to develop a computational method for predicting PPIs efficiently. This paper explores a novel computational method for detecting PPIs from protein sequences, an approach that mainly adopts Locality Preserving Projections (LPP) for feature extraction and Rotation Forest (RF) as the classifier. Specifically, we first employ the Position Specific Scoring Matrix (PSSM), which retains the evolutionary information of a protein sequence and represents it efficiently. Then, the LPP descriptor is applied to extract feature vectors from the PSSM. The feature vectors are fed into the RF to obtain the final results. The proposed method is applied to two datasets: Yeast and H. pylori, and obtained an average accuracy of 92.81% and 92.56%, respectively. We also compare it with K nearest neighbors (KNN) and support vector machine (SVM) to better evaluate the performance of the proposed method. In summary, all experimental results indicate that the proposed approach is stable and robust for predicting PPIs and promising to be a useful tool for proteomics research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Identification of Biomarkers for Predicting Ovarian Reserve of Primordial Follicle via Transcriptomic Analysis.
- Author
- Liu, Li, Liu, Biting, Li, Ke, Wang, Chunyan, Xie, Yan, Luo, Ning, Wang, Lian, Sun, Yaoqi, Huang, Wei, Cheng, Zhongping, and Liu, Shupeng
- Subjects
- FERTILITY preservation, INDUCED ovulation, BIOMARKERS, TRANSCRIPTOMES, REPRODUCTIVE technology, OVARIAN follicle, GENE expression, OVARIES
- Abstract
Ovarian reserve (OR) is mainly determined by the number of primordial follicles in the ovary and is continuously depleted until ovarian senescence. With the development of assisted reproductive technology such as ovarian tissue cryopreservation and autotransplantation, growing demand has arisen for objective assessment of OR at the histological level. However, no specific biomarkers of OR can be used effectively in clinic nowadays. Herein, bulk RNA-seq datasets of the murine ovary with dynamic changes in biological ovarian age (BOA) and single-cell RNA-seq datasets of follicles at different stages of folliculogenesis were obtained from the GEO database to identify a gene signature correlated to the primordial follicle pool. The correlations between gene signature expression and OR were also validated in several comparative OR models. The results showed that genes including Lhx8, Nobox, Sohlh1, Tbpl2, Stk31, and Padi6 were highly correlated to the OR of the primordial follicle pool, suggesting that these genes might be used as biomarkers for predicting OR at the histological level. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Bidirectional difference locating and semantic consistency reasoning for change captioning.
- Author
- Sun, Yaoqi, Li, Liang, Yao, Tingting, Lu, Tongyv, Zheng, Bolun, Yan, Chenggang, Zhang, Hua, Bao, Yongjun, Ding, Guiguang, and Slabaugh, Gregory
- Subjects
- AMBIGUITY, SEMANTICS, BINARY codes, TASKS
- Abstract
Change captioning is an emerging task to describe the changes between a pair of images. The difficulty in this task is to discover the differences between the two images. Recently, some methods have been proposed to address this problem. However, they all employ unidirectional difference localization to identify the changes. This can lead to ambiguity about the nature of the changes. Instead, we propose a framework with bidirectional difference localization and semantic consistency reasoning to describe the image changes. First, we locate the changes in the two images by capturing bidirectional differences. Then we design a decoder with spatial‐channel attention to generate the change caption. Finally, we introduce semantic consistency reasoning to constrain our bidirectional difference localization module and spatial‐channel attention module. Extensive experiments on three public data sets show that the performance of our proposed model outperforms the state‐of‐the‐art change captioning models by a large margin. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Involvement of Cancer Stem Cells in Chemoresistant Relapse of Epithelial Ovarian Cancer Identified by Transcriptome Analysis.
- Author
- Sun, Yaoqi, Yao, Lin, Wang, Chunyan, Xiong, Bing, Guo, Jing, Wang, Lian, Zhu, Jihui, Cheng, Zhongping, and Liu, Shupeng
- Subjects
- OVARIAN epithelial cancer, CANCER stem cells, DISEASE relapse, TRANSCRIPTOMES, DRUG resistance in cancer cells
- Abstract
Epithelial ovarian cancer (EOC) is the most lethal gynecological malignancy. Despite the initial resection and chemotherapeutic treatment, relapse is common, which leads to poor survival rates in patients. A primary cause of recurrence is the persistence of ovarian cancer stem cells (OCSCs) with high tumorigenicity and chemoresistance. To achieve a better therapeutic response in EOC relapse, the mechanisms underlying acquired chemoresistance associated with relapse-initiating OCSCs need to be studied. Transcriptomes of both chemosensitive primary and chemoresistant relapse EOC samples were obtained from ICGC OV-AU dataset for differential expression analysis. The upregulated genes were further studied using KEGG and GO analysis. Significantly increased expression of eighteen CSC-related genes was found in chemoresistant relapse EOC groups. Upregulation of the expression in four hub genes including WNT3A, SMAD3, KLF4, and PAX6 was verified in chemoresistant relapse samples via immunohistochemistry staining, which confirmed the existence and enrichment of OCSCs in chemoresistant relapse EOC. KEGG and GO enrichment analysis in microarray expression datasets of isolated OCSCs indicated that quiescent state, increased ability of drug efflux, and enhanced response to DNA damage may have caused the chemoresistance in relapse EOC patients. These findings demonstrated a correlation between OCSCs and acquired chemoresistance and illustrated potential underlying mechanisms of OCSC-initiated relapse in EOC patients. Meanwhile, the differentially expressed genes in OCSCs may serve as novel preventive or therapeutic targets against EOC recurrence in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Each Part Matters: Local Patterns Facilitate Cross-View Geo-Localization.
- Author
-
Wang, Tingyu, Zheng, Zhedong, Yan, Chenggang, Zhang, Jiyong, Sun, Yaoqi, Zheng, Bolun, and Yang, Yi
- Subjects
SCALABILITY, DEEP learning, FEATURE extraction, TASK analysis, IMAGE retrieval - Abstract
Cross-view geo-localization is to spot images of the same geographic target from different platforms, e.g., drone-view cameras and satellites. It is challenging due to the large visual appearance changes caused by extreme viewpoint variations. Existing methods usually concentrate on mining fine-grained features of the geographic target in the image center but underestimate the contextual information in neighboring areas. In this work, we argue that neighboring areas can be leveraged as auxiliary information, enriching discriminative clues for geo-localization. Specifically, we introduce a simple and effective deep neural network, the Local Pattern Network (LPN), to take advantage of contextual information in an end-to-end manner. Without using extra part estimators, LPN adopts a square-ring feature partition strategy, which allocates attention according to the distance to the image center. It eases part matching and enables part-wise representation learning. Owing to the square-ring partition design, the proposed LPN has good scalability to rotation variations and achieves competitive results on three prevailing benchmarks, i.e., University-1652, CVUSA and CVACT. Besides, we show that the proposed LPN can be easily embedded into other frameworks to further boost performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
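The square-ring partition described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, `num_rings`, the Chebyshev-distance binning, and the average-pooling choice are our assumptions.

```python
import numpy as np

def square_ring_partition(feat, num_rings=4):
    """Split a (C, H, W) feature map into concentric square rings around
    the image centre (Chebyshev distance) and average-pool each ring."""
    c, h, w = feat.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised Chebyshev distance to the centre, in [0, 1)
    dist = np.maximum(np.abs(ys - (h - 1) / 2) / (h / 2),
                      np.abs(xs - (w - 1) / 2) / (w / 2))
    ring = np.minimum((dist * num_rings).astype(int), num_rings - 1)
    # one pooled C-dimensional descriptor per ring -> (num_rings, C)
    return np.stack([feat[:, ring == r].mean(axis=1)
                     for r in range(num_rings)])
```

Because each ring is rotation-symmetric around the centre, the pooled descriptors change little when the input is rotated, which is consistent with the scalability to rotation variations claimed above.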
33. Cross‐modal semantic correlation learning by Bi‐CNN network.
- Author
-
Wang, Chaoyi, Li, Liang, Yan, Chenggang, Wang, Zhan, Sun, Yaoqi, and Zhang, Jiyong
- Subjects
SEMANTIC computing, MACHINE learning, INFORMATION retrieval, DATA extraction, DATA analysis - Abstract
Cross-modal retrieval can retrieve images through a text query and vice versa, and has attracted extensive attention in recent years. Most existing cross-modal retrieval methods aim to find a common subspace and maximize the correlation between different modalities. To generate specific representations consistent with cross-modal tasks, this paper proposes a novel cross-modal retrieval framework that integrates feature learning and latent space embedding. In detail, we propose a deep CNN and a shallow CNN to extract features from the samples: the deep CNN extracts representations of images, while the shallow CNN uses multi-dimensional kernels to extract multi-level semantic representations of text. Meanwhile, we enhance the semantic manifold by constructing cross-modal ranking and within-modal discriminant losses to improve the division of the semantic representation. Moreover, the most representative samples are selected by an online sampling strategy, so that the approach can be applied to large-scale data. This approach not only increases the discriminative ability among different categories but also maximizes the correlation between different modalities. Experiments on three real-world datasets show that the proposed method is superior to popular methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Leveraging Multiple Implicit Feedback for Personalized Recommendation with Neural Network.
- Author
-
Wen, Hongfa, Liu, Xin, Yan, Chenggang, Jiang, Linhua, Sun, Yaoqi, Zhang, Jiyong, and Yin, Haibing
- Published
- 2019
- Full Text
- View/download PDF
35. Deeper feature integration network for salient object detection of strip steel surface defects.
- Author
-
Wan, Bin, Zhou, Xiaofei, Zheng, Bolun, Sun, Yaoqi, Zhang, Jiyong, and Yan, Chenggang
- Subjects
STEEL strip, SURFACE defects, OBJECT recognition (Computer vision), DEEP learning, FEATURE extraction, COMPUTER vision - Abstract
With the development of productivity, higher demands are placed on the quality of steel. In recent years, artificial intelligence, especially deep-learning-based computer vision, has attracted great attention and can be used to detect steel surface defects such as scratches, patches, and rust spots. However, due to the complexity of the strip steel surface, accurately and effectively detecting defect regions remains a challenge for existing defect detection methods. Therefore, we propose a unique saliency model, a deeper feature integration network, to highlight the defect regions on the strip steel surface. To be specific, after each encoder stage, we introduce the multiscale global feature extraction module to elevate the multiscale deep features from the encoder. Meanwhile, we deploy the deeper feature extraction module, which contains a bidirectional feature extraction unit, to mine effective representations for defects. In particular, the forward branch is equipped with a channel-space weighted module and the backward branch with a split attention module. After that, the features from the two branches are progressively integrated by the decoder, yielding final high-quality saliency maps that give a good depiction of the strip steel surface defects. Extensive experiments on the public dataset show that our model performs better than 15 state-of-the-art methods, which proves the effectiveness and superiority of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. Hard parameter sharing for compressing dense-connection-based image restoration network.
- Author
-
Tian, Xiang, Zheng, Bolun, Li, Shengyu, Yan, Chenggang, Zhang, Jiyong, Sun, Yaoqi, Shen, Tao, and Xiao, Mang
- Subjects
IMAGE reconstruction, CONVOLUTIONAL neural networks, IMAGE denoising, COMPUTER vision - Abstract
The dense connection is a powerful technique for building wider and deeper convolutional neural networks (CNNs) for several computer vision tasks. Despite its excellent performance, it consumes numerous parameters and produces a large model weight file. We studied the distribution of convolution layers and propose a hard parameter sharing approach, the convolution pool (CP), for compressing dense-connection-based image restoration CNN models. CP reallocates parameters to specific convolution layers so that some can be shared across different layers. We design a set of dense-connection-based baselines for three typical image restoration tasks, image denoising, super-resolution, and JPEG deblocking, to validate the proposed method. Moreover, we comprehensively analyze the potential problems introduced by CP, including group convolution, dilated convolution, and modeling efficiency. Experimental results demonstrate that the proposed method efficiently achieves an impressive compression rate with negligible performance reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
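The hard-parameter-sharing idea above can be illustrated with a toy single-channel stack that draws its kernels from a shared pool. The naive convolution, pool size, and sharing schedule below are our assumptions for illustration, not the paper's actual CP allocation strategy.

```python
import numpy as np

def conv3x3(x, w):
    """Naive single-channel 'same' 3x3 convolution, for illustration only."""
    out = np.zeros_like(x)
    xp = np.pad(x, 1)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

rng = np.random.default_rng(0)
pool = [rng.standard_normal((3, 3)) * 0.1 for _ in range(2)]  # shared kernels
schedule = [0, 1, 0, 1]   # 4-layer stack: layers 0/2 and 1/3 share weights
x = rng.standard_normal((8, 8))
for idx in schedule:
    x = np.maximum(conv3x3(x, pool[idx]), 0)   # conv + ReLU
shared = sum(w.size for w in pool)             # 18 parameters with sharing
unshared = len(schedule) * 9                   # 36 without sharing
```

Here four layers of computation are backed by only two weight tensors, halving the stored parameters; the paper's CP applies the same principle at the scale of dense-connection blocks.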
37. STS: Spatial–Temporal–Semantic Personalized Location Recommendation.
- Author
-
Li, Wenchao, Liu, Xin, Yan, Chenggang, Ding, Guiguang, Sun, Yaoqi, and Zhang, Jiyong
- Subjects
GAUSSIAN processes, SOCIAL networks, SEMANTICS, FORECASTING - Abstract
The rapidly growing location-based social network (LBSN) has become a promising platform for studying users' mobility patterns. Many online applications can be built based on such studies, among which, recommending locations is of particular interest. Previous studies have shown the importance of spatial and temporal influences on location recommendation; however, most existing approaches build a universal spatial–temporal model for all users despite the fact that users always demonstrate heterogeneous check-in behavior patterns. In order to realize truly personalized location recommendations, we propose a Gaussian process based model for each user to systematically and non-linearly combine temporal and spatial information to predict the user's displacement from their currently checked-in location to the next one. The locations whose distances to the user's current checked-in location are the closest to the predicted displacement are recommended. We also propose an enhancement to take into account category information of locations for semantic-aware recommendation. A unified recommendation framework called spatial–temporal–semantic (STS) is introduced to combine displacement prediction and the semantic-aware enhancement to provide final top-N recommendation. Extensive experiments over real datasets show that the proposed STS framework significantly outperforms the state-of-the-art location recommendation models in terms of precision and mean reciprocal rank (MRR). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
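The displacement-prediction step above can be sketched as ordinary Gaussian-process regression. This is a deliberately simplified 1-D toy (time in, displacement out) with an RBF kernel; the STS model combines richer spatial and temporal inputs, and all names and hyperparameters here are our assumptions.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between two 1-D sample vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_predict(t_train, d_train, t_query, noise=1e-6):
    """GP regression mean: predict displacement d at query times t_query
    from a user's observed (time, displacement) pairs."""
    K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
    return rbf(t_query, t_train) @ np.linalg.solve(K, d_train)
```

Given the predicted displacement, the recommender would then rank candidate locations by how closely their distance from the user's current check-in matches it.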
38. Progressive Decision Boundary Shifting for Unsupervised Domain Adaptation.
- Author
-
Li L, Lu T, Sun Y, Gao Y, Yan C, Hu Z, and Huang Q
- Abstract
Unsupervised domain adaptation (UDA) is attracting increasing attention for boosting task-specific generalization on the target domain. It focuses on addressing the domain shift between the labeled source domain and the unlabeled target domain. Recent bi-classifier-based UDA models perform category-level alignment to reduce domain shift, while self-training is used to improve the discriminability of target instances. However, the error accumulation problem of instances with high semantic uncertainty may cause discriminability degradation and category-level misalignment. To solve this issue, we design the progressive decision boundary shifting algorithm, where stable category information of target instances is explored for learning a discriminative structure on the target domain. Specifically, we first model the semantic uncertainty of instances by progressively shifting category decision boundaries. Then, we introduce uncertainty decoupling in a contrastive manner, where discriminative information is learned from the source domain for instances with low semantic uncertainty. Furthermore, we maximize the predictive entropy of instances with high semantic uncertainty to reduce their prediction confidence. Extensive experiments on three popular datasets show that our model outperforms current state-of-the-art (SOTA) UDA methods.
- Published
- 2024
- Full Text
- View/download PDF
39. Quality-Aware Selective Fusion Network for V-D-T Salient Object Detection.
- Author
-
Bao L, Zhou X, Lu X, Sun Y, Yin H, Hu Z, Zhang J, and Yan C
- Abstract
Depth images and thermal images contain spatial geometry and surface temperature information, which can complement the RGB modality. However, the quality of depth and thermal images is often unreliable in challenging scenarios, which degrades the performance of two-modal salient object detection (SOD). Meanwhile, some researchers have turned to the triple-modal SOD task, visible-depth-thermal (VDT) SOD, attempting to exploit the complementarity of the RGB, depth, and thermal images. However, existing triple-modal SOD methods fail to perceive the quality of depth maps and thermal images, which leads to performance degradation on scenes with low-quality depth and thermal images. Therefore, in this paper, we propose a quality-aware selective fusion network (QSF-Net) for VDT salient object detection, which contains three subnets: the initial feature extraction subnet, the quality-aware region selection subnet, and the region-guided selective fusion subnet. Firstly, besides extracting features, the initial feature extraction subnet generates a preliminary prediction map from each modality via a shrinkage pyramid architecture equipped with the multi-scale fusion (MSF) module. Then, we design the weakly supervised quality-aware region selection subnet to generate the quality-aware maps. Concretely, we first identify high-quality and low-quality regions using the preliminary predictions; these regions constitute the pseudo label used to train this subnet. Finally, the region-guided selective fusion subnet purifies the initial features under the guidance of the quality-aware maps, then fuses the triple-modal features and refines the edge details of the prediction maps through the intra-modality and inter-modality attention (IIA) module and the edge refinement (ER) module, respectively. Extensive experiments on the VDT-2048 dataset show that our saliency model consistently outperforms 13 state-of-the-art methods by a large margin. Our code and results are available at https://github.com/Lx-Bao/QSFNet.
- Published
- 2024
- Full Text
- View/download PDF
40. Dynamic Selective Network for RGB-D Salient Object Detection.
- Author
-
Wen H, Yan C, Zhou X, Cong R, Sun Y, Zheng B, Zhang J, Bao Y, and Ding G
- Subjects
- Algorithms, Semantics
- Abstract
RGB-D saliency detection has received increasing attention in recent years. Many efforts have been devoted to this area, most of which try to integrate the multi-modal information, i.e., RGB images and depth maps, via various fusion strategies. However, some of them ignore the inherent difference between the two modalities, which leads to performance degradation when handling challenging scenes. Therefore, in this paper, we propose a novel RGB-D saliency model, the Dynamic Selective Network (DSNet), to perform salient object detection (SOD) in RGB-D images by taking full advantage of the complementarity between the two modalities. Specifically, we first deploy a cross-modal global context module (CGCM) to acquire high-level semantic information, which can be used to roughly locate salient objects. Then, we design a dynamic selective module (DSM) to dynamically mine the cross-modal complementary information between RGB images and depth maps, and to further optimize the multi-level and multi-scale information by executing gated and pooling-based selection, respectively. Moreover, we conduct boundary refinement to obtain high-quality saliency maps with clear boundary details. Extensive experiments on eight public RGB-D datasets show that the proposed DSNet achieves competitive and excellent performance against 17 current state-of-the-art RGB-D SOD models.
- Published
- 2021
- Full Text
- View/download PDF
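The gated selection idea recurring in the entries above (DSM here, and the quality-aware fusion in entry 39) can be caricatured as a confidence gate on the auxiliary branch. The gate form below is our simplification for illustration, not the module from either paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_depth_fusion(rgb_feat, depth_feat, w_gate=1.0):
    """Fuse (C, H, W) RGB and depth features, scaling the depth branch by a
    per-channel confidence gate so unreliable depth contributes less."""
    gate = sigmoid(w_gate * depth_feat.mean(axis=(1, 2), keepdims=True))
    return rgb_feat + gate * depth_feat
```

The point of such gating is that when the depth signal is weak or noisy, the learned gate can suppress it rather than letting a low-quality modality corrupt the fused representation.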