534 results for "self-supervised"
Search Results
2. LISO: Lidar-Only Self-supervised 3D Object Detection
- Author
-
Baur, Stefan Andreas, Moosmann, Frank, Geiger, Andreas, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Optimizing Delay Estimation in Breast RUCT Reconstruction Using Self-supervised Blind Segment Network
- Author
-
He, Lei, Liu, Zhaohui, Zhang, Qiude, Zhou, Liang, Cai, Yuxin, Yuan, Jing, Ding, Mingyue, Yuchi, Ming, Qiu, Wu, Bhattarai, Binod, editor, Ali, Sharib, editor, Rau, Anita, editor, Caramalau, Razvan, editor, Nguyen, Anh, editor, Gyawali, Prashnna, editor, Namburete, Ana, editor, and Stoyanov, Danail, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Edge-Net: A Self-supervised Medical Image Segmentation Model Based on Edge Attention
- Author
-
Wang, Miao, Zheng, Zechen, Fan, Chao, Wang, Congqian, He, Xuelei, He, Xiaowei, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Fetal Ultrasound Video Representation Learning Using Contrastive Rubik’s Cube Recovery
- Author
-
Zhang, Kangning, Jiao, Jianbo, Noble, J. Alison, Gomez, Alberto, editor, Khanal, Bishesh, editor, King, Andrew, editor, and Namburete, Ana, editor
- Published
- 2025
- Full Text
- View/download PDF
6. Image-Conditioned Diffusion Models for Medical Anomaly Detection
- Author
-
Baugh, Matthew, Reynaud, Hadrien, Marimont, Sergio Naval, Cechnicka, Sarah, Müller, Johanna P., Tarroni, Giacomo, Kainz, Bernhard, Sudre, Carole H., editor, Mehta, Raghav, editor, Ouyang, Cheng, editor, Qin, Chen, editor, Rakic, Marianne, editor, and Wells, William M., editor
- Published
- 2025
- Full Text
- View/download PDF
7. SeFlow: A Self-supervised Scene Flow Method in Autonomous Driving
- Author
-
Zhang, Qingwen, Yang, Yi, Li, Peizheng, Andersson, Olov, Jensfelt, Patric, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
8. IE-CycleGAN: improved cycle consistent adversarial network for unpaired PET image enhancement.
- Author
-
Cui, Jianan, Luo, Yi, Chen, Donghe, Shi, Kuangyu, Su, Xinhui, and Liu, Huafeng
- Abstract
Purpose: Technological advances in instruments have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners without the need for paired low- and high-quality PET images. Methods: In this paper, we proposed an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The proposed method is based on CycleGAN, and the correlation coefficient loss and patient-specific prior loss were added to constrain the structure of the generated images. Furthermore, we defined a normalX-to-advanced training strategy to enhance the generalization ability of the network. The proposed method was validated on unpaired uEXPLORER datasets and Biograph Vision local hospital datasets. Results: For the uEXPLORER dataset, the proposed method achieved better results than non-local mean filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), which are comparable to Unet (supervised) and CycleGAN (supervised). For the Biograph Vision local hospital datasets, the proposed method achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, the proposed method showed higher contrast, SUVmax, and TBR than Unet (supervised) and CycleGAN (supervised) when applied to images from different scanners. Conclusion: The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than the Unet (supervised) and CycleGAN (supervised) when implemented on local hospital datasets, which demonstrates its excellent generalization ability. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
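The correlation coefficient loss that this abstract adds to CycleGAN can be illustrated with a minimal sketch. The exact formulation in IE-CycleGAN is not given in the abstract; this is a generic Pearson-based version in NumPy, and the test image is a synthetic stand-in:

```python
import numpy as np

def correlation_loss(generated: np.ndarray, reference: np.ndarray) -> float:
    """1 - Pearson correlation between two images (0 = perfectly correlated)."""
    g = generated.ravel().astype(float)
    r = reference.ravel().astype(float)
    g -= g.mean()
    r -= r.mean()
    denom = np.sqrt((g ** 2).sum() * (r ** 2).sum())
    if denom == 0.0:
        return 1.0  # a constant image carries no structure to correlate
    return 1.0 - float((g * r).sum() / denom)

# An affine remapping of intensities is perfectly correlated, so the loss is ~0;
# minimizing this term pushes the generator to preserve image structure even
# when absolute intensities change between scanners.
img = np.random.default_rng(0).random((8, 8))
low_quality = 2.0 * img + 1.0   # stand-in for a scanner-dependent intensity shift
```

Because Pearson correlation is invariant to affine intensity changes, such a term constrains structure without forcing the generator to match absolute voxel values.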
9. FRAPPE: fast rank approximation with explainable features for tensors.
- Author
-
Shiao, William and Papalexakis, Evangelos E.
- Subjects
-
REGRESSION analysis, HEURISTIC, GENERALIZATION
- Abstract
Tensor decompositions have proven to be effective in analyzing the structure of multidimensional data. However, most of these methods require a key parameter: the number of desired components. In the case of the CANDECOMP/PARAFAC decomposition (CPD), the ideal value for the number of components is known as the canonical rank and greatly affects the quality of the decomposition results. Existing methods use heuristics or Bayesian methods to estimate this value by repeatedly calculating the CPD, making them extremely computationally expensive. In this work, we propose FRAPPE, the first method to estimate the canonical rank of a tensor without having to compute the CPD. This method is the result of two key ideas. First, it is much cheaper to generate synthetic data with known rank compared to computing the CPD. Second, we can greatly improve the generalization ability and speed of our model by generating synthetic data that matches a given input tensor in terms of size and sparsity. We can then train a specialized single-use regression model on a synthetic set of tensors engineered to match a given input tensor and use that to estimate the canonical rank of the tensor—all without computing the expensive CPD. FRAPPE is over 24× faster than the best-performing baseline, and exhibits a 10% improvement in MAPE on a synthetic dataset. It also performs as well as or better than the baselines on real-world datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
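FRAPPE's two key ideas (synthetic tensors of known rank are cheap to generate, and a model trained on them can estimate rank without ever computing the CPD) can be sketched in a few lines. The feature choice here (normalized singular values of the mode-0 unfolding) and the 1-nearest-neighbour "regressor" are illustrative stand-ins, not the paper's actual features or model:

```python
import numpy as np

def random_cp_tensor(shape, rank, rng):
    """Synthesize a 3-way tensor of known CP rank as a sum of rank-1 terms."""
    a, b, c = (rng.standard_normal((dim, rank)) for dim in shape)
    return np.einsum('ir,jr,kr->ijk', a, b, c)

def spectrum_feature(t):
    """Cheap feature vector: normalized singular values of the mode-0 unfolding."""
    s = np.linalg.svd(t.reshape(t.shape[0], -1), compute_uv=False)
    return s / s[0]

rng = np.random.default_rng(42)
# Synthetic training set: features of tensors whose rank is known by construction.
train = [(spectrum_feature(random_cp_tensor((6, 6, 6), r, rng)), r)
         for r in range(1, 5) for _ in range(20)]

def predict_rank(t):
    """Toy stand-in for FRAPPE's learned regressor: 1-nearest-neighbour lookup."""
    f = spectrum_feature(t)
    return min(train, key=lambda pair: np.linalg.norm(pair[0] - f))[1]

query = random_cp_tensor((6, 6, 6), 3, rng)
```

The expensive step that FRAPPE avoids, repeatedly fitting CP decompositions at candidate ranks, never appears: only SVDs of unfoldings and a lookup over cheap synthetic data.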
10. Self-supervised learning framework application for medical image analysis: a review and summary.
- Author
-
Zeng, Xiangrui, Abdullah, Nibras, and Sumari, Putra
- Subjects
-
IMAGE analysis, IMAGE segmentation, IMAGE recognition (Computer vision), COMPUTER vision, DIAGNOSTIC imaging
- Abstract
Manual annotation of medical image datasets is labor-intensive and prone to biases. Moreover, the rate at which image data accumulates significantly outpaces the speed of manual annotation, posing a challenge to the advancement of machine learning, particularly in the realm of supervised learning. Self-supervised learning is an emerging field that capitalizes on unlabeled data for training, thereby circumventing the need for extensive manual labeling. This learning paradigm generates synthetic pseudo-labels through pretext tasks, compelling the network to acquire image representations in a pseudo-supervised manner and subsequently fine-tuning with a limited set of annotated data to achieve enhanced performance. This review begins with an overview of prevalent types and advancements in self-supervised learning, followed by an exhaustive and systematic examination of methodologies within the medical imaging domain from 2018 to September 2024. The review encompasses a range of medical image modalities, including CT, MRI, X-ray, Histology, and Ultrasound. It addresses specific tasks, such as Classification, Localization, Segmentation, Reduction of False Positives, Improvement of Model Performance, and Enhancement of Image Quality. The analysis reveals a descending order in the volume of related studies, with CT and MRI leading the list, followed by X-ray, Histology, and Ultrasound. Except for CT and MRI, there is a greater prevalence of studies focusing on contrastive learning methods over generative learning approaches. The performance of MRI/Ultrasound classification and all image types segmentation still has room for further exploration. Generally, this review can provide conceptual guidance for medical professionals to combine self-supervised learning with their research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
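The pretext-task idea this review describes, generating synthetic pseudo-labels from unlabeled data, can be made concrete with one classic example: rotation prediction, where the pseudo-label is the transformation applied to the image. A minimal sketch (the 4×4 "images" are placeholders, and rotation prediction is one illustrative pretext task, not the review's own method):

```python
import numpy as np

def rotation_pretext_batch(images, rng):
    """Create (rotated image, pseudo-label) pairs from unlabeled images.

    Each image is rotated by 0/90/180/270 degrees; the rotation index is a
    free pseudo-label that a network is then trained to predict, forcing it
    to learn image representations without any manual annotation.
    """
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))   # pseudo-label: number of 90-degree turns
        xs.append(np.rot90(img, k))
        ys.append(k)
    return np.stack(xs), np.array(ys)

unlabeled = [np.arange(16.0).reshape(4, 4) for _ in range(8)]
batch, labels = rotation_pretext_batch(unlabeled, np.random.default_rng(0))
```

After pretraining on such pairs, the encoder is fine-tuned on the limited annotated set, which is the workflow the review surveys across CT, MRI, X-ray, Histology, and Ultrasound.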
11. Monocular Depth and Ego-motion Estimation with Scale Based on Superpixel and Normal Constraints.
- Author
-
Lu, Junxin, Gao, Yongbin, Chen, Jieyu, Hwang, Jeng-Neng, Fujita, Hamido, and Fang, Zhijun
- Subjects
-
AUGMENTED reality, MONOCULARS, VIRTUAL reality, AUTONOMOUS vehicles, AMBIGUITY
- Abstract
Three-dimensional perception in intelligent virtual and augmented reality (VR/AR) and autonomous vehicles (AV) applications is critical and attracting significant attention. The self-supervised monocular depth and ego-motion estimation serves as a more intelligent learning approach that provides the required scene depth and location for 3D perception. However, the existing self-supervised learning methods suffer from scale ambiguity, boundary blur, and imbalanced depth distribution, limiting the practical applications of VR/AR and AV. In this article, we propose a new self-supervised learning framework based on superpixel and normal constraints to address these problems. Specifically, we formulate a novel 3D edge structure consistency loss to alleviate the boundary blur of depth estimation. To address the scale ambiguity of estimated depth and ego-motion, we propose a novel surface normal network for efficient camera height estimation. The surface normal network is composed of a deep fusion module and a full-scale hierarchical feature aggregation module. Meanwhile, to realize the global smoothing and boundary discriminability of the predicted normal map, we introduce a novel fusion loss which is based on the consistency constraints of the normal in edge domains and superpixel regions. Experiments are conducted on several benchmarks, and the results illustrate that the proposed approach outperforms the state-of-the-art methods in depth, ego-motion, and surface normal estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Self-supervised CondenseNet for feature learning to increase the accuracy in image classification.
- Author
-
Darvish-Motevali, Mahmoud, Sohrabi, Mohammad Karim, and Roshdi, Israfil
- Subjects
-
ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, IMAGE recognition (Computer vision), ARTIFICIAL intelligence, COMPUTER science, DEEP learning
- Abstract
Deep learning methods are leveraged in various computer science and artificial intelligence areas, including image classification. Convolutional neural network (CNN) is one of the most widely used deep neural networks for which, several highly effective architectures for image classification have been presented. In this paper, an improved version of the recently introduced CondenseNet is provided as a new network architecture. On the other hand, due to the necessity of reducing the dependence on labeled data in the training process of neural networks, a self-supervised learning method is also proposed for labeling unlabeled images. The results of the experiments show the proper performance of the proposed self-supervised CondenseNet method compared to the basic version of CondenseNet. The experiments are conducted on CIFAR_10 and CIFAR-100 datasets and show better accuracy of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Self-supervised single-view 3D point cloud reconstruction through GAN inversion.
- Author
-
Li, Ying, Guo, HaoYu, and Sheng, Huankun
- Subjects
-
GENERATIVE adversarial networks, POINT cloud, UNIFORMITY, SILHOUETTES
- Abstract
Recent single-view reconstruction methods have sought to reconstruct 3D point clouds from images and corresponding silhouette collections alone. However, merely utilizing input images as supervision without any auxiliary methods amplifies the matching ambiguity. To address this issue, we propose a self-supervised 3D point cloud reconstruction method based on generative adversarial network (GAN) inversion. Three novel components are introduced to solve the intrinsic challenges of cross-dimensional inversion. First, we develop a uniform loss to enhance the uniformity of the point clouds generated by the GAN. Second, we devise a coarse-to-fine differentiable point cloud renderer to facilitate accurate projections. Third, we design a pseudo ground-truth pose predictor that can estimate the precise viewpoints of the input images. Experimental results on both synthetic datasets and real-world datasets demonstrate that our approach outperforms existing state-of-the-art 2D supervised reconstruction methods and is comparable to 3D supervised approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
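The abstract's uniform loss for GAN-generated point clouds is not specified there. A common proxy for point-cloud uniformity, shown here as an assumption rather than the authors' actual loss, penalizes the variance of nearest-neighbour distances:

```python
import numpy as np

def uniformity_loss(points: np.ndarray) -> float:
    """Variance of nearest-neighbour distances: lower = more evenly spread."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)     # ignore each point's distance to itself
    return float(d.min(axis=1).var())

# A regular grid is perfectly uniform; a cloud with a dense clump is not.
grid = np.stack(np.meshgrid(np.arange(4.0), np.arange(4.0)), axis=-1).reshape(-1, 2)
clumped = np.concatenate([grid * 0.01, grid + 10.0])
```

Minimizing such a term discourages the generator from collapsing points into clusters, which would otherwise project to plausible silhouettes while covering the surface poorly.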
14. ANAGL: A Noise-Resistant and Anti-Sparse Graph Learning for Micro-Video Recommendation.
- Author
-
Ma, Jingwei, Bian, Kangkang, Xu, Yang, and Zhu, Lei
- Subjects
-
RECOMMENDER systems, NOISE, INSTITUTIONAL repositories
- Abstract
In recent years, graph convolutional networks (GCNs) have seen widespread utilization within micro-video recommendation systems, facilitating the understanding of user preferences through interactions with micro-videos. Despite the commendable performance exhibited by GCN-based methodologies, several persistent issues demand further scrutiny. Primarily, most user-micro-video interactions involve implicit behaviors, such as clicks or abstentions, which may inadvertently capture irrelevant micro-video content, thereby introducing significant noise (false touches, low watch-ratio, low ratings) into users' histories. Consequently, this noise undermines the efficacy of micro-video recommendations. Moreover, the abundance of micro-videos has resulted in fewer interactions between users and micro-video content. To tackle these challenges, we propose a noise-resistant and anti-sparse graph learning framework for micro-video recommendation. Initially, we construct a denoiser that leverages implicit multi-attribute information (e.g., watch-ratio, timestamp, ratings, and so on) to filter noisy data from user interaction histories. This process yields high-fidelity micro-video information, enabling a more precise modeling of users' feature preferences. Subsequently, we employ a multi-view reconstruction approach and utilize cross-view self-supervised learning to gain insights into user and micro-video features. This strategic approach effectively mitigates the issue of data sparsity. Extensive experiments conducted on two publicly available micro-video recommendation datasets validate the effectiveness of our proposed method. For in-depth details and access to the code, please refer to our repository at "https://github.com/kbk12/ANAGL.git." [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
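The denoiser this abstract describes filters implicit interactions using multi-attribute signals such as watch-ratio and ratings. A toy sketch with hypothetical field names and thresholds (the paper's actual denoiser and cutoffs are not specified in the abstract):

```python
def denoise_interactions(history, min_watch_ratio=0.3, min_rating=2.0):
    """Filter a user's implicit history with multi-attribute thresholds.

    Interactions that look like false touches (tiny watch-ratio) or that the
    user rated poorly are treated as noise and dropped before graph learning.
    Field names and threshold values here are illustrative assumptions.
    """
    return [h for h in history
            if h["watch_ratio"] >= min_watch_ratio and h["rating"] >= min_rating]

history = [
    {"video": "v1", "watch_ratio": 0.92, "rating": 5.0},  # genuine interest
    {"video": "v2", "watch_ratio": 0.02, "rating": 4.0},  # likely false touch
    {"video": "v3", "watch_ratio": 0.80, "rating": 1.0},  # watched but disliked
]
clean = denoise_interactions(history)
```

Only high-fidelity interactions survive the filter, giving the downstream graph convolutions a cleaner view of the user's actual preferences.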
15. Self-supervised learning framework application for medical image analysis: a review and summary
- Author
-
Xiangrui Zeng, Nibras Abdullah, and Putra Sumari
- Subjects
-
Self-supervised, Medical image, Computer vision, CNN, Transformer, Medical technology, R855-855.5
- Abstract
Abstract Manual annotation of medical image datasets is labor-intensive and prone to biases. Moreover, the rate at which image data accumulates significantly outpaces the speed of manual annotation, posing a challenge to the advancement of machine learning, particularly in the realm of supervised learning. Self-supervised learning is an emerging field that capitalizes on unlabeled data for training, thereby circumventing the need for extensive manual labeling. This learning paradigm generates synthetic pseudo-labels through pretext tasks, compelling the network to acquire image representations in a pseudo-supervised manner and subsequently fine-tuning with a limited set of annotated data to achieve enhanced performance. This review begins with an overview of prevalent types and advancements in self-supervised learning, followed by an exhaustive and systematic examination of methodologies within the medical imaging domain from 2018 to September 2024. The review encompasses a range of medical image modalities, including CT, MRI, X-ray, Histology, and Ultrasound. It addresses specific tasks, such as Classification, Localization, Segmentation, Reduction of False Positives, Improvement of Model Performance, and Enhancement of Image Quality. The analysis reveals a descending order in the volume of related studies, with CT and MRI leading the list, followed by X-ray, Histology, and Ultrasound. Except for CT and MRI, there is a greater prevalence of studies focusing on contrastive learning methods over generative learning approaches. The performance of MRI/Ultrasound classification and all image types segmentation still has room for further exploration. Generally, this review can provide conceptual guidance for medical professionals to combine self-supervised learning with their research.
- Published
- 2024
- Full Text
- View/download PDF
16. Self-Supervised and Supervised Image Enhancement Networks with Time-Shift Module.
- Author
-
Tuncal, Kubra, Sekeroglu, Boran, and Abiyev, Rahib
- Subjects
-
IMAGE intensifiers, HUMAN beings
- Abstract
Enhancing image quality provides more interpretability for both human beings and machines. Traditional image enhancement techniques work well for specific uses, but they struggle with images taken in extreme conditions, such as varied distortions, noise, and contrast deformations. Deep-learning-based methods produce superior quality in enhancing images since they are capable of learning the spatial characteristics within the images. However, deeper models increase the computational costs and require additional modules for particular problems. In this paper, we propose self-supervised and supervised image enhancement models based on the time-shift image enhancement method (TS-IEM). We embedded the TS-IEM into a four-layer CNN model and reconstructed the reference images for the self-supervised model. The reconstructed images are also used in the supervised model as an additional layer to improve the learning process and obtain better-quality images. Comprehensive experiments and qualitative and quantitative analysis are performed using three benchmark datasets of different application domains. The results showed that the self-supervised model could provide reasonable results for the datasets without reference images. On the other hand, the supervised model outperformed the state-of-the-art methods in quantitative analysis by producing well-enhanced images for different tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Seismic Blind Deconvolution Based on Self-Supervised Machine Learning.
- Author
-
Yin, Xia, Xu, Wenhao, Yang, Zhifang, and Wu, Bangyu
- Subjects
-
DECONVOLUTION (Mathematics), CONVOLUTIONAL neural networks, SUPERVISED learning, ELECTRONIC data processing, MACHINE learning
- Abstract
Seismic deconvolution is a useful tool in seismic data processing. Classical non-machine learning deconvolution methods usually apply quite a few constraints to both wavelet inversion and reflectivity inversion. Supervised machine learning deconvolution methods often require appropriate training labels. The existing self-supervised machine learning deconvolution methods need a given wavelet, which is a non-blind process. To overcome these issues, we propose a blind deconvolution method based on self-supervised machine learning. This method first estimates an initial zero-phase wavelet by smoothing the amplitude spectrum of averaged seismic data. Then, the loss function of self-supervised machine learning is taken as the error between the observed seismic data and the reconstructed seismic data that come from the convolution of phase-rotated wavelet and reflectivity generated by the network. We utilize a residual neural network with long skip connections as the reflectivity inversion network and a fully connected convolutional neural network as the wavelet phase inversion network. Numerical experiments on synthetic data and field data show that the proposed method can obtain reflectivity inversion results with higher resolution than the existing self-supervised machine learning method without given wavelet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
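The self-supervised loss described in this abstract, the error between observed seismic data and the convolution of an estimated wavelet with an estimated reflectivity, can be sketched directly. The Gaussian wavelet and spike reflectivity below are synthetic stand-ins for the two networks' outputs:

```python
import numpy as np

def reconstruction_loss(observed, wavelet, reflectivity):
    """Self-supervised objective: MSE between the observed trace and the
    convolution of the (estimated) wavelet with the (estimated) reflectivity.
    No labels are needed; the observed data itself is the training target."""
    modeled = np.convolve(reflectivity, wavelet, mode='same')
    return float(np.mean((observed - modeled) ** 2))

# Synthetic stand-ins: a zero-phase Gaussian wavelet and a sparse
# spike-train reflectivity series.
wavelet = np.exp(-0.5 * np.linspace(-2.0, 2.0, 21) ** 2)
reflectivity = np.zeros(128)
reflectivity[[20, 60, 100]] = [1.0, -0.7, 0.5]
trace = np.convolve(reflectivity, wavelet, mode='same')
```

Because both factors of the convolution are estimated, the method is blind: unlike earlier self-supervised deconvolution, no wavelet has to be supplied in advance.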
18. Mapless mobile robot navigation at the edge using self-supervised cognitive map learners.
- Author
-
Polykretis, Ioannis, Danielescu, Andreea, Goerke, Nils, and Wang, Siao
- Subjects
-
ARTIFICIAL neural networks, COGNITIVE maps (Psychology), SUPERVISED learning, DEEP reinforcement learning, MOBILE robots, REINFORCEMENT learning, RANDOM walks
- Abstract
Navigation of mobile agents in unknown, unmapped environments is a critical task for achieving general autonomy. Recent advancements in combining Reinforcement Learning with Deep Neural Networks have shown promising results in addressing this challenge. However, the inherent complexity of these approaches, characterized by multi-layer networks and intricate reward objectives, limits their autonomy, increases memory footprint, and complicates adaptation to energy-efficient edge hardware. To overcome these challenges, we propose a brain-inspired method that employs a shallow architecture trained by a local learning rule for self-supervised navigation in uncharted environments. Our approach achieves performance comparable to a state-of-the-art Deep Q Network (DQN) method with respect to goal-reaching accuracy and path length, with a similar (slightly lower) number of parameters, operations, and training iterations. Notably, our self-supervised approach combines novelty-based and random walks to alleviate the need for objective reward definition and enhance agent autonomy. At the same time, the shallow architecture and local learning rule do not call for error backpropagation, decreasing the memory overhead and enabling implementation on edge neuromorphic processors. These results contribute to the potential of embodied neuromorphic agents utilizing minimal resources while effectively handling variability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. An Occluded Object Detection Method Based on Self-supervised Masked Image Modeling.
- Author
-
冯欣 and 胡成杭
- Subjects
-
OBJECT recognition (Computer vision), TRANSFORMER models, IMAGE reconstruction, DETECTORS, SELF
- Abstract
Copyright of Journal of Chongqing University of Technology (Natural Science) is the property of Chongqing University of Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission.
- Published
- 2024
- Full Text
- View/download PDF
20. Double-target self-supervised clustering with multi-feature fusion for medical question texts.
- Author
-
Xifeng Shen, Yuanyuan Sun, Chunxia Zhang, Cheng Yang, Yi Qin, Weining Zhang, Jiale Nan, Meiling Che, and Dongping Gao
- Subjects
-
SEMANTICS, DEEP learning, VOCABULARY
- Abstract
Background. To make the question text represent more information and construct an end-to-end text clustering model, we propose a double-target self-supervised clustering with multi-feature fusion (MF-DSC) for texts which describe questions related to the medical field. Since medical question-and-answer data are unstructured texts and characterized by short characters and irregular language use, the features extracted by a single model cannot fully characterize the text content. Methods. Firstly, word weights were obtained based on term frequency, and word vectors were generated according to lexical semantic information. Then we fused term frequency and lexical semantics to obtain weighted word vectors, which were used as input to the model for deep learning. Meanwhile, a self-attention mechanism was introduced to calculate the weight of each word in the question text, i.e., the interactions between words. To learn fused cross-document topic features and build an end-to-end text clustering model, two target functions, L_cluster and L_topic, were constructed and integrated into a unified clustering framework, which also helped to learn a friendly representation that facilitates text clustering. After that, we conducted comparison experiments with five other models to verify the effectiveness of MF-DSC. Results. The MF-DSC outperformed other models in normalized mutual information (NMI), adjusted Rand index (ARI), average clustering accuracy (ACC), and F1 with 0.4346, 0.4934, 0.8649, and 0.5737, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
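The weighted word vectors in this abstract fuse term frequency with lexical semantics. A minimal sketch of that fusion as a tf-weighted average, using a hypothetical two-word embedding table (the paper's exact weighting scheme may differ):

```python
import numpy as np

def weighted_doc_vector(tokens, embeddings):
    """Fuse term frequency with lexical semantics: a tf-weighted mean embedding."""
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    vec = np.zeros_like(next(iter(embeddings.values())), dtype=float)
    for t, c in counts.items():
        vec += (c / len(tokens)) * embeddings[t]   # tf weight times word vector
    return vec

# Hypothetical 2-d embeddings for two medical-question terms.
emb = {"fever": np.array([1.0, 0.0]), "cough": np.array([0.0, 1.0])}
doc_vec = weighted_doc_vector(["fever", "fever", "cough"], emb)
```

The resulting vector leans toward the more frequent term, so short, irregular question texts still get a representation shaped by both frequency and meaning before the clustering objectives are applied.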
21. A hybrid self-supervised model predicting life satisfaction in South Korea
- Author
-
Hung Viet Nguyen and Haewon Byeon
- Subjects
-
explainable AI, hybrid model, life satisfaction, self-supervised, TabNet, Public aspects of medicine, RA1-1270
- Abstract
Objective: Life satisfaction pertains to an individual’s subjective evaluation of their life quality, grounded in their personal criteria. It stands as a crucial cognitive aspect of subjective wellbeing, offering a reliable gauge of a person’s comprehensive wellbeing status. In this research, our objective is to develop a hybrid self-supervised model tailored for predicting individuals’ life satisfaction in South Korea. Methods: We employed the Busan Metropolitan City Social Survey Data in 2021, a comprehensive dataset compiled by the Big Data Statistics Division of Busan Metropolitan City. After preprocessing, our analysis focused on a total of 32,390 individuals with 51 variables. We developed the self-supervised pre-training TabNet model as a key component of this study. In addition, we integrated the proposed model with the Local Interpretable Model-agnostic Explanation (LIME) technique to enhance the ease and intuitiveness of interpreting local model behavior. Results: The performance of our advanced model surpassed conventional tree-based ML models, registering an AUC of 0.7778 for the training set and 0.7757 for the test set. Furthermore, our integrated model simplifies and clarifies the interpretation of local model actions, effectively navigating past the intricate nuances of TabNet’s standard explanatory mechanisms. Conclusion: Our proposed model offers a transparent understanding of AI decisions, making it a valuable tool for professionals in the social sciences and psychology, even if they lack expertise in data analytics.
- Published
- 2024
- Full Text
- View/download PDF
22. A Novel Tracking Framework for Devices in X-ray Leveraging Supplementary Cue-Driven Self-supervised Features
- Author
-
Islam, Saahil, Murthy, Venkatesh N., Neumann, Dominik, Cimen, Serkan, Sharma, Puneet, Maier, Andreas, Comaniciu, Dorin, Ghesu, Florin C., Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
23. Domain Adaptation of Echocardiography Segmentation Via Reinforcement Learning
- Author
-
Judge, Arnaud, Judge, Thierry, Duchateau, Nicolas, Sandler, Roman A., Sokol, Joseph Z., Bernard, Olivier, Jodoin, Pierre-Marc, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
24. Dysphonia Diagnosis Using Self-supervised Speech Models in Mono and Cross-Lingual Settings
- Author
-
Aziz, Dosti, Sztahó, Dávid, Nöth, Elmar, editor, Horák, Aleš, editor, and Sojka, Petr, editor
- Published
- 2024
- Full Text
- View/download PDF
25. Investigating the EEG Embedding by Visualization
- Author
-
Wen, Yongcheng, Mo, Jiawei, Hu, Wenxin, Liang, Feng, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Leung, Victor C.M., editor, Li, Hezhang, editor, Hu, Xiping, editor, and Ning, Zhaolong, editor
- Published
- 2024
- Full Text
- View/download PDF
26. Learning Paradigms and Modelling Methodologies for Digital Twins in Process Industry
- Author
-
Mayr, Michael, Chasparis, Georgios C., Küng, Josef, Wrembel, Robert, editor, Chiusano, Silvia, editor, Kotsis, Gabriele, editor, Tjoa, A Min, editor, and Khalil, Ismail, editor
- Published
- 2024
- Full Text
- View/download PDF
27. MonoRetNet: A Self-supervised Model for Monocular Depth Estimation with Bidirectional Half-Duplex Retention
- Author
-
Fan, Dengxin, Liu, Songyan, Huang, De-Shuang, editor, Pan, Yijie, editor, and Zhang, Qinhu, editor
- Published
- 2024
- Full Text
- View/download PDF
28. SEGCN: Structural Enhancement Graph Clustering Network
- Author
-
Chen, Yuwen, Yan, Xuefeng, Cui, Peng, Gong, Lina, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Song, Xiangyu, editor, Feng, Ruyi, editor, Chen, Yunliang, editor, Li, Jianxin, editor, and Min, Geyong, editor
- Published
- 2024
- Full Text
- View/download PDF
29. Self-supervised Graph Neural Network Based Community Search over Heterogeneous Information Networks
- Author
-
Wei, Jinyang, Zhou, Lihua, Wang, Lizhen, Chen, Hongmei, Xiao, Qing, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Meng, Xiaofeng, editor, Zhang, Xueying, editor, Guo, Danhuai, editor, Hu, Di, editor, Zheng, Bolong, editor, and Zhang, Chunju, editor
- Published
- 2024
- Full Text
- View/download PDF
30. Self-Supervised Representation Learning for Multivariate Time Series of Power Grid with Self-Distillation Augmentation
- Author
-
Ye, Ligang, Jia, Hongyi, Xia, Weishang, Liu, Tianqi, Yang, Yiyong, Ma, Huimin, Han, Zhaogang, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Yang, Qingxin, editor, Li, Zewen, editor, and Luo, An, editor
- Published
- 2024
- Full Text
- View/download PDF
31. MMT: Transformer for Multi-modal Multi-label Self-supervised Learning
- Author
-
Wang, Jiahe, Li, Jia, Liu, Xingrui, Gao, Xizhan, Niu, Sijie, Dong, Jiwen, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, You, Peng, editor, Liu, Shuaiqi, editor, and Wang, Jun, editor
- Published
- 2024
- Full Text
- View/download PDF
32. VANet: A New Network for Multi-modal Self-supervised Learning from Video and Audio
- Author
-
Liu, Xingrui, Zhang, Chen, Feng, Zeming, Dong, Jiwen, Niu, Sijie, Gao, Xizhan, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, You, Peng, editor, Liu, Shuaiqi, editor, and Wang, Jun, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Iterative Noisy-Target Approach: Speech Enhancement Without Clean Speech
- Author
-
Zhang, Yifan, Jiang, Wenbin, Zhuo, Qing, Yu, Kai, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Jia, Jia, editor, Ling, Zhenhua, editor, Chen, Xie, editor, Li, Ya, editor, and Zhang, Zixing, editor
- Published
- 2024
- Full Text
- View/download PDF
34. Unsupervised Traditional Chinese Herb Mention Normalization via Robustness-Promotion Oriented Self-supervised Training
- Author
-
Li, Wei, Yang, Zheng, Shao, Yanqiu, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Fang, Lu, editor, Pei, Jian, editor, Zhai, Guangtao, editor, and Wang, Ruiping, editor
- Published
- 2024
- Full Text
- View/download PDF
35. Self-supervised Deep-Learning Segmentation of Corneal Endothelium Specular Microscopy Images
- Author
-
Sanchez, Sergio, Mendoza, Kevin, Quintero, Fernando, Prada, Angelica M., Tello, Alejandro, Galvis, Virgilio, Romero, Lenny A., Marrugo, Andres G., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Orjuela-Cañón, Alvaro David, editor, Lopez, Jesus A, editor, and Arias-Londoño, Julián David, editor
- Published
- 2024
- Full Text
- View/download PDF
36. Learnable Color Image Zero-Watermarking Based on Feature Comparison
- Author
-
Wang, Baowei, Dai, Changyu, Wu, Yufeng, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Luo, Biao, editor, Cheng, Long, editor, Wu, Zheng-Guang, editor, Li, Hongyi, editor, and Li, Chaojie, editor
- Published
- 2024
- Full Text
- View/download PDF
37. MFSFFuse: Multi-receptive Field Feature Extraction for Infrared and Visible Image Fusion Using Self-supervised Learning
- Author
-
Gao, Xueyan, Liu, Shiguang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Luo, Biao, editor, Cheng, Long, editor, Wu, Zheng-Guang, editor, Li, Hongyi, editor, and Li, Chaojie, editor
- Published
- 2024
- Full Text
- View/download PDF
38. MRI recovery with self-calibrated denoisers without fully-sampled data
- Author
-
Shafique, Muhammad, Liu, Sizhuo, Schniter, Philip, and Ahmad, Rizwan
- Published
- 2024
- Full Text
- View/download PDF
39. A self-supervised missing trace interpolation framework for seismic data reconstruction
- Author
-
Li, Ming, Yan, Xue-song, and Hu, Cheng-yu
- Published
- 2024
- Full Text
- View/download PDF
40. An ensemble of self-supervised teachers for minimal student model with auto-tuned hyperparameters via improved Bayesian optimization
- Author
-
Kishore, Jaydeep and Mukherjee, Snehasis
- Published
- 2024
- Full Text
- View/download PDF
41. PIFiA: self-supervised approach for protein functional annotation from single-cell imaging data
- Author
-
Anastasia Razdaibiedina, Alexander Brechalov, Helena Friesen, Mojca Mattiazzi Usaj, Myra Paz David Masinas, Harsha Garadi Suresh, Kyle Wang, Charles Boone, Jimmy Ba, and Brenda Andrews
- Subjects
Self-supervised, Machine Learning, Single-cell, Imaging, Protein, Biology (General), QH301-705.5, Medicine (General), R5-920 - Abstract
Fluorescence microscopy data describe protein localization patterns at single-cell resolution and have the potential to reveal whole-proteome functional information with remarkable precision. Yet, extracting biologically meaningful representations from cell micrographs remains a major challenge. Existing approaches often fail to learn robust and noise-invariant features or rely on supervised labels for accurate annotations. We developed PIFiA (Protein Image-based Functional Annotation), a self-supervised approach for protein functional annotation from single-cell imaging data. We imaged the global yeast ORF-GFP collection and applied PIFiA to generate protein feature profiles from single-cell images of fluorescently tagged proteins. We show that PIFiA outperforms existing approaches for molecular representation learning and describe a range of downstream analysis tasks to explore the information content of the feature profiles. Specifically, we cluster extracted features into a hierarchy of functional organization, study cell population heterogeneity, and develop techniques to distinguish multi-localizing proteins and identify functional modules. Finally, we confirm new PIFiA predictions using a colocalization assay, suggesting previously unappreciated biological roles for several proteins. Paired with a fully interactive website (https://thecellvision.org/pifia/), PIFiA is a resource for the quantitative analysis of protein organization within the cell.
- Published
- 2024
- Full Text
- View/download PDF
42. Integrating chromatin conformation information in a self-supervised learning model improves metagenome binning
- Author
-
Ho, Harrison, Chovatia, Mansi, Egan, Rob, He, Guifen, Yoshinaga, Yuko, Liachko, Ivan, O’Malley, Ronan, and Wang, Zhong
- Subjects
Biological Sciences, Bioinformatics and Computational Biology, Human Genome, Genetics, Generic health relevance, Chromatin, Metagenome, Algorithms, Benchmarking, Supervised Machine Learning, Metagenome binning, Self-supervised, Hi-C, Medical and Health Sciences - Abstract
Metagenome binning is a key step, downstream of metagenome assembly, to group scaffolds by their genome of origin. Although accurate binning has been achieved on datasets containing multiple samples from the same community, the completeness of binning is often low in datasets with a small number of samples due to a lack of robust species co-abundance information. In this study, we exploited the chromatin conformation information obtained from Hi-C sequencing and developed a new reference-independent algorithm, Metagenome Binning with Abundance and Tetra-nucleotide frequencies-Long Range (metaBAT-LR), to improve the binning completeness of these datasets. This self-supervised algorithm builds a model from a set of high-quality genome bins to predict scaffold pairs that are likely to be derived from the same genome. Then, it applies these predictions to merge incomplete genome bins, as well as recruit unbinned scaffolds. We validated metaBAT-LR's ability to bin-merge and recruit scaffolds on both synthetic and real-world metagenome datasets of varying complexity. Benchmarking against similar software tools suggests that metaBAT-LR uncovers unique bins that were missed by all other methods. MetaBAT-LR is open-source and is available at https://bitbucket.org/project-metabat/metabat-lr.
- Published
- 2023
43. Self-Supervised Tabular Data Anomaly Detection Method Based on Knowledge Enhancement.
- Author
-
GAO Xiaoyu, ZHAO Xiaoyong, and WANG Lei
- Subjects
SUPERVISED learning, RANDOM noise theory, KNOWLEDGE transfer, ELECTRONIC data processing, SEMANTICS - Abstract
Traditional supervised anomaly detection methods have advanced rapidly. To reduce the dependence on labels, self-supervised pre-training methods have been widely studied, and these studies show that embedding additional intrinsic semantic knowledge is crucial for tabular learning. To mine the rich knowledge in tabular data, a self-supervised tabular data anomaly detection method based on knowledge enhancement (STKE) is proposed, with the following improvements. The data processing module integrates domain knowledge (semantics) and statistical knowledge into feature construction. At the same time, self-supervised pre-training (parameter learning) provides contextual knowledge priors, enabling rich information transfer across tabular data. A masking mechanism is applied to the original data: masked features are reconstructed from the relevant non-masked features, and original values are predicted under additive Gaussian noise in the latent space, so the model can recover the original feature information even from noisy inputs. A hybrid attention mechanism effectively extracts associations between data features. Experimental results on six datasets show superior performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
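The mask-and-reconstruct pretext task described in the STKE abstract above can be sketched roughly as follows. This is an illustrative toy in NumPy, not the authors' implementation; the function name `corrupt` and the toy table are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(X, mask_ratio=0.3, noise_std=0.1):
    """Hide a random subset of cells and add Gaussian noise.

    A reconstruction model would be trained to recover the original X
    from this corrupted table, learning masked cells from unmasked ones.
    """
    mask = rng.random(X.shape) < mask_ratio                       # cells to hide
    X_corrupt = np.where(mask, 0.0, X)                            # zero out masked cells
    X_corrupt = X_corrupt + rng.normal(0.0, noise_std, X.shape)   # additive Gaussian noise
    return X_corrupt, mask

X = rng.normal(size=(8, 5))          # toy table: 8 rows, 5 features
X_corrupt, mask = corrupt(X)
# during pre-training, the reconstruction loss is computed on the masked cells only
masked_mse = float(np.mean((X[mask] - X_corrupt[mask]) ** 2))
```

In a real pipeline the zero fill and noise would feed an encoder whose hidden representation is trained to predict the uncorrupted values.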
44. Masked self-supervised ECG representation learning via multiview information bottleneck.
- Author
-
Yang, Shunxiang, Lian, Cheng, Zeng, Zhigang, Xu, Bingrong, Su, Yixin, and Xue, Chenyang
- Subjects
*ELECTROCARDIOGRAPHY, *SUPERVISED learning, *DATA augmentation, *SELF-adaptive software - Abstract
In recent years, self-supervised models have been widely used for electrocardiogram (ECG) representation learning. However, most of these models use contrastive learning, which depends strongly on data augmentation. In this paper, we propose a masked self-supervised learning model based on the multiview information bottleneck principle. Our method masks ECG signal instances in the time and frequency domains at a high ratio and then uses an autoencoder to reconstruct the original input. Both the intra-view relations within each view and the inter-view relations between the two views are exploited in ECG representation learning. Furthermore, we use the multiview information bottleneck principle to remove redundant information in the time and frequency domains, so that the representations of both views contain more task-relevant information. Our model is pre-trained on three large ECG datasets at once and fine-tuned on each classification task. Experimental results show that our model outperforms not only state-of-the-art self-supervised models but also supervised models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
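The high-ratio time- and frequency-domain masking described in the abstract above can be illustrated on a 1-D signal. This is a toy sketch under assumed parameters (span length, mask ratios, a sine stand-in for an ECG trace), not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_time(x, mask_ratio=0.6, span=25):
    """Zero out random contiguous spans covering roughly mask_ratio of the signal."""
    x = x.copy()
    for _ in range(int(mask_ratio * len(x) / span)):
        start = int(rng.integers(0, len(x) - span))
        x[start:start + span] = 0.0
    return x

def mask_freq(x, mask_ratio=0.6):
    """Zero out a random fraction of frequency bins, then return to the time domain."""
    spec = np.fft.rfft(x)
    spec[rng.random(len(spec)) < mask_ratio] = 0.0
    return np.fft.irfft(spec, n=len(x))

ecg = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))  # stand-in for one ECG instance
time_view = mask_time(ecg)    # view 1: time-domain masking
freq_view = mask_freq(ecg)    # view 2: frequency-domain masking
# an autoencoder would then reconstruct `ecg` from each heavily masked view
```

The two masked views play the roles of the paper's time- and frequency-domain views; the information-bottleneck term on top of reconstruction is beyond this sketch.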
45. Self-Supervised Joint Learning for pCLE Image Denoising †.
- Author
-
Yang, Kun, Zhang, Haojie, Qiu, Yufei, Zhai, Tong, and Zhang, Zhiguo
- Subjects
*IMAGE denoising, *LASER endoscopy, *FLUORESCENCE microscopy, *DEEP learning - Abstract
Probe-based confocal laser endoscopy (pCLE) has emerged as a powerful tool for disease diagnosis, yet it faces challenges such as the formation of hexagonal patterns in images due to the inherent characteristics of fiber bundles. Recent advancements in deep learning offer promise in image denoising, but the acquisition of clean-noisy image pairs for training networks across all potential scenarios can be prohibitively costly. Few studies have explored training denoising networks on such pairs. Here, we propose an innovative self-supervised denoising method. Our approach integrates noise prediction networks, image quality assessment networks, and denoising networks in a collaborative, jointly trained manner. Compared to prior self-supervised denoising methods, our approach yields superior results on pCLE images and fluorescence microscopy images. In summary, our novel self-supervised denoising technique enhances image quality in pCLE diagnosis by leveraging the synergy of noise prediction, image quality assessment, and denoising networks, surpassing previous methods on both pCLE and fluorescence microscopy images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. PIFiA: self-supervised approach for protein functional annotation from single-cell imaging data.
- Author
-
Razdaibiedina, Anastasia, Brechalov, Alexander, Friesen, Helena, Mattiazzi Usaj, Mojca, Masinas, Myra Paz David, Garadi Suresh, Harsha, Wang, Kyle, Boone, Charles, Ba, Jimmy, and Andrews, Brenda
- Subjects
*SUPERVISED learning, *TASK analysis, *PROTEIN-protein interactions, *FLUORESCENCE microscopy, *PROTEINS, *PROTEIN analysis - Abstract
Fluorescence microscopy data describe protein localization patterns at single-cell resolution and have the potential to reveal whole-proteome functional information with remarkable precision. Yet, extracting biologically meaningful representations from cell micrographs remains a major challenge. Existing approaches often fail to learn robust and noise-invariant features or rely on supervised labels for accurate annotations. We developed PIFiA (Protein Image-based Functional Annotation), a self-supervised approach for protein functional annotation from single-cell imaging data. We imaged the global yeast ORF-GFP collection and applied PIFiA to generate protein feature profiles from single-cell images of fluorescently tagged proteins. We show that PIFiA outperforms existing approaches for molecular representation learning and describe a range of downstream analysis tasks to explore the information content of the feature profiles. Specifically, we cluster extracted features into a hierarchy of functional organization, study cell population heterogeneity, and develop techniques to distinguish multi-localizing proteins and identify functional modules. Finally, we confirm new PIFiA predictions using a colocalization assay, suggesting previously unappreciated biological roles for several proteins. Paired with a fully interactive website (https://thecellvision.org/pifia/), PIFiA is a resource for the quantitative analysis of protein organization within the cell. Synopsis: PIFiA is a self-supervised deep-learning approach for protein functional annotation from single-cell images. It generates feature profiles from images of the yeast ORF-GFP collection that can be used in downstream analyses. PIFiA features identify new functional groups of proteins within organelles and proteins with heterogeneous localizations. PIFiA features successfully predict protein–protein interactions and members of protein complexes. 
PIFiA outperforms previous methods on four different standards of protein function. Images and analysis are available at thecellvision.org/pifia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. A Dense Passage Retrieval Model Based on Query Semantic Characteristics.
- Author
-
赵铁柱, 林伦凯, and 杨秋鸿
- Abstract
Addressing the issues of low negative sampling efficiency and tendency towards overfitting in existing dense passage retrieval (DPR) models, this paper proposed a DPR model based on query semantic characteristics (Q-DPR). Firstly, it introduced a negative sampling method based on neighbor queries. This method constructed high-quality negative samples rapidly by retrieving neighboring queries, thereby reducing training costs. Secondly, to mitigate overfitting, it proposed a query self-supervised method based on contrastive learning. This method alleviated overfitting to training labels by establishing a self-supervised contrastive loss among queries, thereby enhancing retrieval accuracy. Q-DPR performed exceptionally well on the large-scale MSMARCO dataset for open-domain question answering, achieving a mean reciprocal rank of 0.348 and a recall rate of 0.975. Experimental results demonstrate that this model successfully reduces training overhead while also improving retrieval performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
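Mean reciprocal rank (MRR), the metric quoted for Q-DPR above, averages the reciprocal rank of the first relevant passage over all queries. A minimal, framework-free illustration with toy document IDs (not the MSMARCO evaluation script):

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR: average of 1 / (rank of first relevant item); a query with no
    relevant item in its ranking contributes 0."""
    total = 0.0
    for ranking, rel in zip(ranked_lists, relevant):
        for rank, doc in enumerate(ranking, start=1):
            if doc in rel:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# two toy queries: first relevant hit at rank 2 and at rank 1 -> (0.5 + 1.0) / 2
score = mean_reciprocal_rank([["d3", "d1"], ["d2", "d5"]], [{"d1"}, {"d2"}])
```

Here `score` is 0.75; a reported MRR of 0.348 means the first relevant passage sits, on average, around rank 3.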
48. A credible traffic prediction method based on self-supervised causal discovery.
- Author
-
Wang, Dan, Liu, Yingjie, and Song, Bin
- Abstract
Next-generation wireless networks aim to support low-latency, high-speed data transmission services by incorporating artificial intelligence (AI) technologies. To fulfill this promise, AI-based network traffic prediction is essential for pre-allocating resources such as bandwidth and computing power, which helps reduce network congestion and improve users' quality of service (QoS). Most studies predict future traffic by exploiting deep learning and reinforcement learning to mine spatio-temporally correlated variables. Nevertheless, predictions obtained from spatio-temporal correlations alone cannot reflect real traffic changes. This prevents the true predictive variables from being inferred, making such prediction algorithms perform poorly. Inspired by causal science, we propose a novel network traffic prediction method based on self-supervised spatio-temporal causal discovery (SSTCD). We first introduce the Granger causal discovery algorithm to build a causal graph among prediction variables and obtain the spatio-temporal causality in the observed data, which reflects the real causes of traffic changes. Next, a graph neural network (GNN) incorporates this causality for traffic prediction. Furthermore, we propose a self-supervised method to implement causal discovery, addressing the lack of ground-truth causal graphs in the observed data. Experimental results demonstrate the effectiveness of the SSTCD method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
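The Granger step described in the abstract above asks whether one series' past improves prediction of another. A minimal NumPy sketch of that idea, using a crude residual-reduction threshold in place of a proper F-test (an illustration on synthetic data, not the SSTCD algorithm):

```python
import numpy as np

def rss(A, b):
    """Residual sum of squares of the least-squares fit b ≈ A @ w."""
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ w
    return float(r @ r)

def granger_causes(x, y, lag=2, drop=0.9):
    """x 'Granger-causes' y if adding x's past lags to an autoregression
    of y shrinks the residual error below `drop` times the baseline."""
    n = len(y)
    target = y[lag:]
    def lags(v):  # columns v[t-1], ..., v[t-lag] for t = lag .. n-1
        return np.column_stack([v[lag - k: n - k] for k in range(1, lag + 1)])
    restricted = rss(lags(y), target)                   # y's own past only
    full = rss(np.hstack([lags(y), lags(x)]), target)   # plus x's past
    return full < drop * restricted

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.roll(x, 1) + 0.1 * rng.normal(size=300)  # y follows x with a one-step lag
# a causal graph built from such pairwise tests would get the directed
# edge x -> y but not y -> x
```

Running every ordered pair of traffic variables through such a test yields the adjacency matrix that the GNN then consumes.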
49. Pairwise-Pixel Self-Supervised and Superpixel-Guided Prototype Contrastive Loss for Weakly Supervised Semantic Segmentation.
- Author
-
Xie, Lu, Li, Weigang, and Zhao, Yuntao
- Abstract
Semantic segmentation plays an important role in many fields because of its ability to classify each pixel efficiently and accurately, but it relies on a large amount of manual annotation. In many cases, annotations are scarce and expensive, as in medical image segmentation. To address this problem, researchers have increasingly focused in recent years on building efficient deep learning algorithms from rough label information, weakly supervised semantic segmentation being one such approach. Currently, most weakly supervised semantic segmentation methods rely on prototype learning to obtain correlations between pixels; when images of different categories are similar or indistinguishable, the extracted prototypes are not representative enough to guide model training. Inspired by metric learning, we construct pixel-level pairwise samples and propose a new self-supervised contrastive loss based on them, which makes full use of class activation maps to reduce intra-class differences and increase inter-class differences; we also propose a novel prototype loss using a superpixel-guided clustering method to mine valuable information in the image, gathering similar feature vectors to obtain prototypes more accurately. Comparative experiments are carried out on PASCAL VOC 2012 and MS COCO 2014; the segmentation mIoU reaches 69.5% on the PASCAL VOC 2012 test set and 40.6% on the MS COCO 2014 test set. The experimental results demonstrate that our method achieves new state-of-the-art performance, verifying its effectiveness and feasibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
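The pixel-pair idea in the abstract above — pull together features of pixels sharing a pseudo-label, push apart the rest — can be sketched as a small supervised-contrastive-style loss. This is a toy on random features with hypothetical class-activation-map pseudo-labels, not the paper's loss:

```python
import numpy as np

def pixel_pair_contrastive_loss(feats, pseudo_labels, temp=0.5):
    """Toy contrastive loss over pixel pairs: for each pixel, maximize the
    softmax mass placed on pixels that share its pseudo-label."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # unit-normalize
    sim = feats @ feats.T / temp                                  # cosine similarities
    n = len(pseudo_labels)
    loss, count = 0.0, 0
    for i in range(n):
        same = (pseudo_labels == pseudo_labels[i]) & (np.arange(n) != i)
        if not same.any():
            continue
        logits = np.delete(sim[i], i)           # similarities to all other pixels
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        loss += -np.log(probs[np.delete(same, i)].sum())  # mass on positives
        count += 1
    return loss / count

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))               # 6 sampled pixels, 8-dim features
pseudo_labels = np.array([0, 0, 0, 1, 1, 1])  # hypothetical CAM pseudo-labels
loss = pixel_pair_contrastive_loss(feats, pseudo_labels)
```

Minimizing such a loss drives intra-class pixel features together and inter-class features apart, which is the stated goal of the paper's pairwise-pixel term.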
50. ACP-DRL: an anticancer peptides recognition method based on deep representation learning.
- Author
-
Xiaofang Xu, Chaoran Li, Xinpu Yuan, Qiangjian Zhang, Yi Liu, Yunping Zhu, and Tao Chen
- Subjects
DEEP learning, LANGUAGE models, PEPTIDES, INHIBITION of cellular proliferation, PROTEIN models - Abstract
Cancer, a significant global public health issue, resulted in about 10 million deaths in 2022. Anticancer peptides (ACPs), a category of bioactive peptides, have become a focal point in clinical cancer research due to their potential to inhibit tumor cell proliferation with minimal side effects. However, recognizing ACPs through wet-lab experiments remains inefficient and costly. Our work proposes a recognition method for ACPs named ACP-DRL, based on deep representation learning, to address these challenges. ACP-DRL marks an initial exploration of integrating protein language models into ACP recognition, employing in-domain further pre-training to enhance deep representation learning. It also employs bidirectional long short-term memory networks to extract amino acid features from sequences. Consequently, ACP-DRL removes constraints on sequence length and the dependence on manual features, showing remarkable competitiveness in comparison with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF