Search Results (37,138 results)
2. Table of Contents.
- Subjects
ARTIFICIAL neural networks, REINFORCEMENT learning, REMAINING useful life, GENERATIVE adversarial networks, TIME delay systems
- Published
- 2022
- Full Text
- View/download PDF
3. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects.
- Author
Li, Zewen, Liu, Fan, Yang, Wenjie, Peng, Shouheng, and Zhou, Jun
- Subjects
CONVOLUTIONAL neural networks, DEEP learning, ARTIFICIAL neural networks, NATURAL language processing, COMPUTER vision
- Abstract
A convolutional neural network (CNN) is one of the most significant networks in the deep learning field. Since CNNs have made impressive achievements in many areas, including but not limited to computer vision and natural language processing, they have attracted much attention from both industry and academia in the past few years. The existing reviews mainly focus on CNN applications in different scenarios without considering CNNs from a general perspective, and some novel ideas proposed recently are not covered. In this review, we aim to provide some novel ideas and prospects in this fast-growing field. Moreover, we cover not only 2-D convolution but also 1-D and multidimensional convolutions. First, this review introduces the history of CNNs. Second, we provide an overview of various convolutions. Third, some classic and advanced CNN models are introduced, with emphasis on the key innovations that let them reach state-of-the-art results. Fourth, through experimental analysis, we draw some conclusions and provide several rules of thumb for function and hyperparameter selection. Fifth, the applications of 1-D, 2-D, and multidimensional convolution are covered. Finally, some open issues and promising directions for CNNs are discussed as guidelines for future work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
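A minimal sketch of the convolution dimensionalities the survey covers (assuming PyTorch; the layer sizes are illustrative only): 1-D convolution slides along sequences, 2-D along image height and width, and 3-D along volumes or video.

```python
import torch
import torch.nn as nn

x1 = torch.randn(8, 4, 128)          # (batch, channels, length): e.g., audio or sensor sequences
x2 = torch.randn(8, 3, 32, 32)       # (batch, channels, height, width): e.g., images
x3 = torch.randn(8, 1, 16, 32, 32)   # (batch, channels, depth, height, width): e.g., video

conv1d = nn.Conv1d(4, 16, kernel_size=3, padding=1)
conv2d = nn.Conv2d(3, 16, kernel_size=3, padding=1)
conv3d = nn.Conv3d(1, 16, kernel_size=3, padding=1)

print(conv1d(x1).shape)   # torch.Size([8, 16, 128])
print(conv2d(x2).shape)   # torch.Size([8, 16, 32, 32])
print(conv3d(x3).shape)   # torch.Size([8, 16, 16, 32, 32])
```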
4. A Survey of Modulation Classification Using Deep Learning: Signal Representation and Data Preprocessing.
- Author
Peng, Shengliang, Sun, Shujun, and Yao, Yu-Dong
- Subjects
DEEP learning, ARTIFICIAL neural networks, CLASSIFICATION algorithms, TELECOMMUNICATION systems, FEATURE extraction, PHASE shift keying
- Abstract
Modulation classification is one of the key tasks for communication systems monitoring, management, and control, addressing technical issues including spectrum awareness, adaptive transmissions, and interference avoidance. Recently, deep learning (DL)-based modulation classification has attracted significant attention due to its superiority in feature extraction and classification accuracy. In DL-based modulation classification, one major challenge is to preprocess a received signal and represent it in a proper format before feeding the signal into deep neural networks. This article provides a comprehensive survey of state-of-the-art DL-based modulation classification algorithms, especially the techniques of signal representation and data preprocessing utilized in these algorithms. Since a received signal can be represented by features, images, sequences, or a combination of them, existing DL-based modulation classification algorithms can be categorized into four groups and are reviewed accordingly in this article. Furthermore, the advantages as well as disadvantages of each signal representation method are summarized and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
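A hedged sketch of the signal-representation families this survey categorizes, applied to a synthetic QPSK signal (the noise level, bin counts, and choice of expert features are illustrative assumptions, not the survey's):

```python
import numpy as np

# Hypothetical received QPSK baseband signal: complex IQ samples plus noise.
rng = np.random.default_rng(0)
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1024) / np.sqrt(2)
iq = symbols + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))

# Sequence representation: a 2 x N real array (I and Q channels) for 1-D CNNs/RNNs.
seq = np.stack([iq.real, iq.imag])                     # shape (2, 1024)

# Image representation: a 2-D constellation histogram for 2-D CNNs.
img, _, _ = np.histogram2d(iq.real, iq.imag, bins=64, range=[[-2, 2], [-2, 2]])

# Feature representation: simple hand-crafted statistics (illustrative only).
feats = np.array([np.mean(np.abs(iq) ** 2),            # average power
                  np.abs(np.mean(iq ** 4))])           # fourth-order moment

# A combination representation would feed several of the above together.
```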
5. On Information Plane Analyses of Neural Network Classifiers—A Review.
- Author
Geiger, Bernhard C.
- Subjects
ARTIFICIAL neural networks, INFORMATION theory, ELECTRONIC data processing
- Abstract
We review the current literature concerned with information plane (IP) analyses of neural network (NN) classifiers. While the underlying information bottleneck theory and the claim that information-theoretic compression is causally linked to generalization are plausible, the empirical evidence is mixed, with findings both supporting and conflicting with the claim. We review this evidence together with a detailed analysis of how the respective information quantities were estimated. Our survey suggests that compression visualized in IPs is not necessarily information-theoretic but is rather often compatible with geometric compression of the latent representations. This insight gives the IP a renewed justification. Aside from this, we shed light on the problem of estimating mutual information in deterministic NNs and its consequences. Specifically, we argue that, even in feedforward NNs, the data processing inequality need not hold for estimates of mutual information. Similarly, while a fitting phase, in which the mutual information between the latent representation and the target increases, is necessary (but not sufficient) for good classification performance, such a fitting phase need not be visible in the IP, depending on the specifics of mutual information estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
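Because the review's conclusions hinge on how mutual information is estimated, a minimal binning (plug-in) estimator is sketched below; this is one common estimator whose bin choice can change what the IP shows, not any of the specific estimators the review analyzes:

```python
import numpy as np

def mi_binned(x, y, bins=30):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram
    of two scalar samples; the result depends strongly on `bins`."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
print(mi_binned(x, x + 0.5 * rng.standard_normal(10_000)))  # correlated pair
```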
6. Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs.
- Author
Chen, Junyang, Gong, Zhiguo, Wang, Wei, Wang, Cong, Xu, Zhenghua, Lv, Jianming, Li, Xueliang, Wu, Kaishun, and Liu, Weiwen
- Subjects
RANDOM graphs, DATA mining, VIRTUAL networks, DATA visualization, MACHINE learning
- Abstract
Network representation learning (NRL) has far-reaching effects on data mining research, showing its importance in many real-world applications. NRL, also known as network embedding, aims at preserving graph structures in a low-dimensional space. These learned representations can be used for subsequent machine learning tasks, such as vertex classification, link prediction, and data visualization. Recently, graph convolutional network (GCN)-based models, e.g., GraphSAGE, have drawn a lot of attention for their success in inductive NRL. When conducting unsupervised learning on large-scale graphs, some of these models employ negative sampling (NS) for optimization, which encourages a target vertex to be close to its neighbors while being far from its negative samples. However, NS draws negative vertices through a random pattern or based on the degrees of vertices. Thus, the generated samples could be either highly relevant or completely unrelated to the target vertex. Moreover, as training proceeds, the gradient of the NS objective, calculated from the inner product of an unrelated negative sample and the target vertex, may become zero, which leads to learning inferior representations. To address these problems, we propose an adversarial training method tailored for unsupervised inductive NRL on large networks. To efficiently keep track of high-quality negative samples, we design a caching scheme with sampling and updating strategies that explores vertex proximity widely while keeping training costs in check. Besides, the proposed method adapts to various existing GCN-based models without significantly complicating their optimization process. Extensive experiments show that our proposed method can achieve better performance compared with the state-of-the-art models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
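The vanishing-gradient problem with unrelated negative samples that this entry addresses can be seen directly from the negative-sampling objective; a small sketch of the generic skip-gram-style NS term (not the paper's exact formulation):

```python
import numpy as np

def ns_grad_wrt_target(target, negative):
    """Gradient of the NS loss term -log(sigmoid(-<t, n>)) w.r.t. the
    target embedding t. For an unrelated negative sample the inner
    product <t, n> is strongly negative, sigmoid(<t, n>) ~ 0, and the
    gradient vanishes -- that sample stops contributing to training."""
    score = 1.0 / (1.0 + np.exp(-target @ negative))   # sigmoid(<t, n>)
    return score * negative

t = np.array([1.0, 0.0, 0.0])
print(ns_grad_wrt_target(t, np.array([0.9, 0.1, 0.0])))    # relevant: useful gradient
print(ns_grad_wrt_target(t, np.array([-9.0, 0.0, 0.0])))   # unrelated: ~zero gradient
```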
7. Filter Sketch for Network Pruning.
- Author
Lin, Mingbao, Cao, Liujuan, Li, Shaojie, Ye, Qixiang, Tian, Yonghong, Liu, Jianzhuang, Tian, Qi, and Ji, Rongrong
- Subjects
COST control, COVARIANCE matrices, RECOMMENDER systems, INFORMATION filtering
- Abstract
We propose a novel network pruning approach that preserves the information of pretrained network weights (filters). Information-preserving network pruning is formulated as a matrix sketch problem, which is efficiently solved by the off-the-shelf frequent directions method. Our approach, referred to as FilterSketch, encodes the second-order information of pretrained weights, which enables the representation capacity of pruned networks to be recovered with a simple fine-tuning procedure. FilterSketch requires neither training from scratch nor data-driven iterative optimization, leading to a several-orders-of-magnitude reduction in the time cost of pruning optimization. Experiments on CIFAR-10 show that FilterSketch reduces 63.3% of floating-point operations (FLOPs) and prunes 59.9% of network parameters with negligible accuracy cost for ResNet-110. On ILSVRC-2012, it reduces 45.5% of FLOPs and removes 43.0% of parameters with only a 0.69% accuracy drop for ResNet-50. Our code and pruned models can be found at https://github.com/lmbxmu/FilterSketch. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
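The core primitive here, the frequent-directions matrix sketch, is standard enough to sketch: the generic algorithm applied to a flattened filter matrix, assuming the sketch size k is smaller than the flattened filter dimension. This is the textbook algorithm, not the authors' implementation:

```python
import numpy as np

def frequent_directions(A, k):
    """Compute a k x d sketch B of an n x d matrix A such that B.T @ B
    approximates A.T @ A (the second-order information of the filters)."""
    _, d = A.shape
    B = np.zeros((k, d))
    for row in A:
        empty = np.flatnonzero(~B.any(axis=1))
        if empty.size == 0:                      # sketch full: shrink via SVD
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt                  # at least one row becomes zero
            empty = np.flatnonzero(~B.any(axis=1))
        B[empty[0]] = row
    return B

filters = np.random.randn(64, 3 * 3 * 16)        # 64 filters, flattened
B = frequent_directions(filters, k=16)           # 16-row second-order sketch
```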
8. Neuromorphic Context-Dependent Learning Framework With Fault-Tolerant Spike Routing.
- Author
Yang, Shuangming, Wang, Jiang, Deng, Bin, Azghadi, Mostafa Rahimi, and Linares-Barranco, Bernabe
- Subjects
ARTIFICIAL neural networks
- Abstract
Neuromorphic computing is a promising technology that realizes computation based on event-based spiking neural networks (SNNs). However, fault-tolerant on-chip learning remains a challenge in neuromorphic systems. This study presents the first scalable neuromorphic fault-tolerant context-dependent learning (FCL) hardware framework. We show how this system can learn associations between stimulation and response in two context-dependent learning tasks from experimental neuroscience, despite possible faults in the hardware nodes. Furthermore, we demonstrate how our novel fault-tolerant neuromorphic spike routing scheme can successfully route around multiple faulty nodes and can enhance the maximum throughput of the neuromorphic network by 0.9%–16.1% in comparison with previous studies. By utilizing the real-time computational capabilities and multiple-fault-tolerant property of the proposed system, the neuronal mechanisms underlying the spiking activities of neuromorphic networks can be readily explored. In addition, the proposed system can be applied in real-time learning and decision-making applications, brain–machine integration, and the investigation of brain cognition during learning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. Part-Based Semantic Transform for Few-Shot Semantic Segmentation.
- Author
Yang, Boyu, Wan, Fang, Liu, Chang, Li, Bohao, Ji, Xiangyang, and Ye, Qixiang
- Subjects
SEMANTICS, EXPECTATION-maximization algorithms, IMAGE segmentation, FEATURE extraction, SHOT peening
- Abstract
Few-shot semantic segmentation remains an open problem owing to the lack of an effective method for handling the semantic misalignment between objects. In this article, we propose the part-based semantic transform (PST), which aims to align object semantics in support images with those in query images by semantic decomposition-and-match. The semantic decomposition process is implemented with prototype mixture models (PMMs), which use an expectation–maximization (EM) algorithm to decompose object semantics into multiple prototypes corresponding to object parts. The semantic match between prototypes is performed with a min-cost flow module, which encourages correct correspondence while suppressing mismatches between object parts. With semantic decomposition-and-match, PST enhances the network's tolerance to variation in object appearance and/or pose and facilitates channelwise and spatial semantic activation of objects in query images. Extensive experiments on Pascal VOC and MS-COCO datasets show that PST significantly improves upon the state of the art. In particular, on MS-COCO, it improves the performance of five-shot semantic segmentation by up to 7.79% with a moderate cost in inference speed and model size. Code for PST is released at https://github.com/Yang-Bob/PST. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
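A toy version of the decomposition step this entry describes: soft EM over foreground features to obtain part prototypes. The cosine E-step, temperature, and initialization are assumptions standing in for the paper's prototype mixture models:

```python
import numpy as np

def part_prototypes(feats, n_parts=3, iters=10, tau=0.1, seed=0):
    """Decompose N x C foreground features into n_parts prototypes by
    alternating soft assignment (E-step) and weighted averaging (M-step)."""
    rng = np.random.default_rng(seed)
    protos = feats[rng.choice(len(feats), n_parts, replace=False)]
    for _ in range(iters):
        f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
        w = np.exp(f @ p.T / tau)                         # E-step: cosine responsibilities
        w /= w.sum(axis=1, keepdims=True)
        protos = (w.T @ feats) / w.sum(axis=0)[:, None]   # M-step: weighted means
    return protos

feats = np.abs(np.random.randn(200, 64))   # stand-in for masked CNN features
print(part_prototypes(feats).shape)        # (3, 64)
```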
10. Observer-Based Output Feedback Event-Triggered Adaptive Control for Linear Multiagent Systems Under Switching Topologies.
- Author
Zhang, Juan, Zhang, Huaguang, Zhang, Kun, and Cai, Yuliang
- Subjects
LINEAR control systems, ADAPTIVE control systems, MULTIAGENT systems, PSYCHOLOGICAL feedback, TOPOLOGY, LINEAR systems
- Abstract
The consensus problem of general linear multiagent systems (MASs) under switching topologies is studied in this article using an observer-based event-triggered control method. On the basis of the output information of agents, two kinds of novel event-triggered adaptive control schemes are designed to solve the leaderless and leader-follower consensus problems, neither of which needs to utilize the global information of the communication networks. Finally, two simulation examples show that the consensus error converges to zero and Zeno behavior is eliminated in the MASs. Compared with the existing output feedback control research, one of the significant advantages of our methods is that the controller protocols and triggering mechanisms do not rely on any global information, are independent of the network scale, and are fully distributed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
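The key mechanism here, event-triggered communication, is easy to illustrate on a single agent. The relative-error triggering rule below is a common template from this literature and an assumption, not the article's specific design (B = I is also assumed):

```python
import numpy as np

def event_triggered_run(x0, A, K, dt=0.01, T=5.0, sigma=0.05):
    """Simulate dx = A x + u with u = K x_hat, where x_hat is the last
    broadcast state; a new broadcast (event) fires only when the
    measurement error ||x - x_hat|| exceeds sigma * ||x_hat||."""
    x, x_hat, events = x0.copy(), x0.copy(), 0
    for _ in range(int(T / dt)):
        if np.linalg.norm(x - x_hat) > sigma * np.linalg.norm(x_hat):
            x_hat, events = x.copy(), events + 1   # event: transmit current state
        x = x + dt * (A @ x + K @ x_hat)           # Euler step, B = I assumed
    return x, events

A = np.array([[0.0, 1.0], [0.0, 0.0]])             # double integrator
K = np.array([[-1.0, 0.0], [-1.0, -2.0]])          # stabilizing gain (illustrative)
print(event_triggered_run(np.array([1.0, 0.5]), A, K))   # x near 0, few events
```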
11. Deep Neural Message Passing With Hierarchical Layer Aggregation and Neighbor Normalization.
- Author
Fan, Xiaolong, Gong, Maoguo, Tang, Zedong, and Wu, Yue
- Subjects
MESSAGE passing (Computer science), CERVICAL plexus, REPRESENTATIONS of graphs
- Abstract
As a unified framework for graph neural networks, the message passing-based neural network (MPNN) has attracted a lot of research interest and has been applied successfully in a number of domains in recent years. However, because of over-smoothing and vanishing gradients, deep MPNNs are still difficult to train. To alleviate these issues, we first introduce a deep hierarchical layer aggregation (DHLA) strategy, which utilizes block-based layer aggregation to aggregate representations from different layers and transfers the output of the previous block to the subsequent block, so that deeper MPNNs can be easily trained. Additionally, to stabilize the training process, we also develop a novel normalization strategy, neighbor normalization (NeighborNorm), which normalizes the neighbors of each node to further address the training issue in deep MPNNs. Our analysis reveals that NeighborNorm smooths the gradient of the loss function, i.e., adding NeighborNorm makes the optimization landscape much easier to navigate. Experimental results on two typical graph pattern-recognition tasks, node classification and graph classification, demonstrate the necessity and effectiveness of the proposed strategies for graph message-passing neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
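The abstract's one-line description of NeighborNorm leaves room for interpretation; the sketch below takes one plausible reading (standardize each neighbor's feature vector before mean aggregation) and should be treated as an assumption rather than the paper's exact operator:

```python
import numpy as np

def neighbor_norm_aggregate(X, adj, eps=1e-5):
    """For each node v, LayerNorm-style standardize every neighbor's
    feature vector, then mean-aggregate; X is (num_nodes, C) and adj
    maps node index -> list of neighbor indices."""
    out = np.zeros_like(X)
    for v, nbrs in adj.items():
        H = X[nbrs]                                # neighbor features
        Hn = (H - H.mean(axis=1, keepdims=True)) / (H.std(axis=1, keepdims=True) + eps)
        out[v] = Hn.mean(axis=0)                   # aggregate normalized neighbors
    return out

X = np.random.randn(4, 8)
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(neighbor_norm_aggregate(X, adj).shape)       # (4, 8)
```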
12. Elastic Net Nonparallel Hyperplane Support Vector Machine and Its Geometrical Rationality.
- Author
Qi, Kai and Yang, Hu
- Subjects
QUADRATIC programming, PETRI nets, SUPPORT vector machines, HYPERPLANES
- Abstract
The twin support vector machine (TWSVM), which constructs two nonparallel classifying hyperplanes, is widely applied in various fields. However, TWSVM solves two quadratic programming problems (QPPs) separately, so the final classifiers lack consistency and sufficient prediction accuracy. Moreover, because it considers only the 1-norm penalty for slack variables, TWSVM is not well defined from a geometrical viewpoint. In this article, we propose a novel elastic net nonparallel hyperplane support vector machine (ENNHSVM), which adopts an elastic net penalty for slack variables and constructs two nonparallel separating hyperplanes simultaneously. We further discuss the properties of ENNHSVM theoretically and derive an upper bound on the violation tolerance to better demonstrate the relative violations of training samples in the same class. In particular, we design a safe screening rule for ENNHSVM to speed up the calculations. We finally compare the performance of ENNHSVM on both synthetic and benchmark datasets with the Lagrangian SVM, the twin parametric-margin SVM, the elastic net SVM, the TWSVM, and the nonparallel hyperplane SVM. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
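For readers unfamiliar with an elastic net penalty on slack variables, a generic TWSVM-style subproblem with that penalty looks as follows (the notation is assumed for illustration, not copied from the paper): X_1 and X_2 stack the samples of the two classes, and the slack vector ξ is penalized by both its 1-norm and its squared 2-norm,

```latex
\min_{w_1,\, b_1,\, \xi}\;
  \tfrac{1}{2}\bigl\|X_1 w_1 + b_1\mathbf{1}\bigr\|_2^2
  + c_1\,\mathbf{1}^{\top}\xi
  + \tfrac{c_2}{2}\,\|\xi\|_2^2
\quad \text{s.t.}\quad
  -(X_2 w_1 + b_1\mathbf{1}) + \xi \ge \mathbf{1},\qquad \xi \ge \mathbf{0}.
```

Setting c_2 = 0 recovers the 1-norm-only penalty of the original TWSVM, which is the case the abstract says is not well defined geometrically.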
13. Quantum-Inspired Support Vector Machine.
- Author
Ding, Chen, Bao, Tian-Yi, and Huang, He-Liang
- Subjects
SUPPORT vector machines, SUPERVISED learning, BIOLOGICALLY inspired computing, MACHINE learning, BIG data, LEAST squares
- Abstract
The support vector machine (SVM) is a particularly powerful and flexible supervised learning model that analyzes data for both classification and regression, whose usual algorithmic complexity scales polynomially with the dimension of the data space and the number of data points. To tackle the big data challenge, a quantum SVM algorithm was proposed, which is claimed to achieve an exponential speedup for least squares SVM (LS-SVM). Here, inspired by the quantum SVM algorithm, we present a quantum-inspired classical algorithm for LS-SVM. In our approach, an improved fast sampling technique, namely indirect sampling, is proposed for sampling the kernel matrix and classifying. We first consider the LS-SVM with a linear kernel and then discuss the generalization of our method to nonlinear kernels. Theoretical analysis shows that our algorithm can perform classification with arbitrary success probability in runtime logarithmic in both the dimension of the data space and the number of data points, for low-rank, low-condition-number, and high-dimensional data matrices, matching the runtime of the quantum SVM. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
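The classical primitive behind quantum-inspired algorithms of this kind is length-squared (norm-proportional) sampling; a generic sketch is given below, which is not the paper's specific indirect-sampling routine:

```python
import numpy as np

def length_squared_sample(A, r, seed=0):
    """Sample r rows of A with probability ||A_i||^2 / ||A||_F^2 and
    rescale so that E[S.T @ S] = A.T @ A; such samples let one work
    with a small matrix S instead of the full data matrix."""
    rng = np.random.default_rng(seed)
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(len(A), size=r, p=p)
    S = A[idx] / np.sqrt(r * p[idx])[:, None]
    return S, idx

A = np.random.randn(5000, 20) @ np.random.randn(20, 300)   # low-rank data matrix
S, _ = length_squared_sample(A, r=200)
print(np.linalg.norm(S.T @ S - A.T @ A) / np.linalg.norm(A.T @ A))  # small relative error
```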
14. Easy2Hard: Learning to Solve the Intractables From a Synthetic Dataset for Structure-Preserving Image Smoothing.
- Author
Feng, Yidan, Deng, Sen, Yan, Xuefeng, Yang, Xin, Wei, Mingqiang, and Liu, Ligang
- Subjects
COMPUTER vision, COMPUTER graphics, DEEP learning, PRIOR learning, ARTIFICIAL neural networks, TASK analysis
- Abstract
Image smoothing is a prerequisite for many computer vision and graphics applications. In this article, we raise an intriguing question: can a dataset that semantically describes meaningful structures and unimportant details facilitate a deep learning model in smoothing complex natural images? To answer it, we generate ground-truth labels from easy samples by candidate generation and a screening test, and synthesize hard samples for structure-preserving smoothing by blending intricate and multifarious details with the labels. To take full advantage of this dataset, we present a joint edge detection and structure-preserving image smoothing neural network (JESS-Net). Moreover, we propose the distinctive total variation loss as prior knowledge to narrow the gap between synthetic and real data. Experiments on different datasets and real images show clear improvements of our method over the state of the art in terms of both image cleanness and structure-preserving ability. Code and dataset are available at https://github.com/YidFeng/Easy2Hard. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
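The "distinctive total variation loss" mentioned in the abstract builds on the standard TV prior; the baseline (anisotropic) form is sketched below as a reference point — this is the generic loss, not the paper's variant:

```python
import torch

def tv_loss(img):
    """Anisotropic total variation of an image batch (B, C, H, W):
    the mean absolute difference between adjacent pixels, penalizing
    high-frequency detail while keeping large structures."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

print(tv_loss(torch.rand(2, 3, 64, 64)))   # noisy image -> large TV
print(tv_loss(torch.ones(2, 3, 64, 64)))   # flat image -> zero TV
```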
15. Hyperspectral Image Super-Resolution via Deep Spatiospectral Attention Convolutional Neural Networks.
- Author
Hu, Jin-Fan, Huang, Ting-Zhu, Deng, Liang-Jian, Jiang, Tai-Xiang, Vivone, Gemine, and Chanussot, Jocelyn
- Subjects
CONVOLUTIONAL neural networks, HIGH resolution imaging, DEEP learning, MULTISPECTRAL imaging, SPATIAL resolution, ERROR functions
- Abstract
Hyperspectral images (HSIs) are of crucial importance for better understanding features across a large number of spectral channels. Restricted by the underlying imaging mechanism, the spatial resolution of HSIs is often limited. To alleviate this issue, in this work, we propose a simple and efficient deep convolutional neural network architecture to fuse a low-resolution HSI (LR-HSI) and a high-resolution multispectral image (HR-MSI), yielding a high-resolution HSI (HR-HSI). The network is designed to preserve both spatial and spectral information thanks to a new architecture based on: 1) the use of the LR-HSI at the HR-MSI's scale to obtain an output with satisfactory spectral preservation and 2) the application of attention and pixelShuffle modules to extract information, aiming to output high-quality spatial details. Finally, a plain mean squared error loss function is used to measure the performance during training. Extensive experiments demonstrate that the proposed network architecture achieves the best performance (both qualitatively and quantitatively) compared with recent state-of-the-art HSI super-resolution approaches. Moreover, the proposed approach offers other significant advantages, such as better network generalization ability, a limited computational burden, and robustness with respect to the number of training samples. The source code and pretrained models are available at https://liangjiandeng.github.io/Projects_Res/HSRnet_2021tnnls.html. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
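Of the two modules named in the abstract, pixelShuffle is the standard sub-pixel upsampling operator; a minimal head in its spirit is sketched below (band count, scale, and layer sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

r = 4                                        # upscaling factor (assumed)
bands = 31                                   # number of HSI bands (assumed)
upsample_head = nn.Sequential(
    nn.Conv2d(bands, bands * r * r, kernel_size=3, padding=1),
    nn.PixelShuffle(r),                      # (B, C*r^2, H, W) -> (B, C, H*r, W*r)
)
lr_hsi = torch.randn(2, bands, 16, 16)       # low-resolution HSI
print(upsample_head(lr_hsi).shape)           # torch.Size([2, 31, 64, 64])
```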
16. Consistent Meta-Regularization for Better Meta-Knowledge in Few-Shot Learning.
- Author
Tian, Pinzhuo, Li, Wenbin, and Gao, Yang
- Subjects
MACHINE learning, DEEP learning, TECHNOLOGICAL innovations
- Abstract
Recently, meta-learning has provided a powerful paradigm for dealing with the few-shot learning problem. However, existing meta-learning approaches ignore the prior fact that good meta-knowledge should alleviate the data inconsistency between training and test data, caused by the extremely limited data, in each few-shot learning task. Moreover, properly utilizing this prior understanding of meta-knowledge can guide the design of an efficient method to improve the meta-learning model. Under this circumstance, we consider the data inconsistency from the distribution perspective, making it convenient to bring in the prior fact, and propose a new consistent meta-regularization (Con-MetaReg) to help the meta-learning model learn how to reduce the data-distribution discrepancy between the training and test data. In this way, the ability of meta-knowledge to keep the training and test data consistent is enhanced, and the performance of the meta-learning model can be further improved. Extensive analyses and experiments demonstrate that our method can indeed improve the performance of different meta-learning models in few-shot regression, classification, and fine-grained classification. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
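Since Con-MetaReg penalizes the distribution discrepancy between each task's training (support) and test (query) data, a simple moment-matching stand-in conveys the idea; the exact discrepancy measure used by the paper is not specified in the abstract, so the one below is an assumption:

```python
import torch

def distribution_gap(support_feats, query_feats):
    """Penalize the gap between first and second moments of support and
    query feature batches (each of shape (N, D)); adding
    lam * distribution_gap(...) to the task loss gives a Con-MetaReg-style
    regularizer in spirit."""
    mean_gap = (support_feats.mean(0) - query_feats.mean(0)).pow(2).sum()
    var_gap = (support_feats.var(0) - query_feats.var(0)).pow(2).sum()
    return mean_gap + var_gap

s, q = torch.randn(25, 64), torch.randn(75, 64)
task_loss = torch.tensor(1.0)                        # placeholder task loss
total = task_loss + 0.1 * distribution_gap(s, q)     # lam = 0.1 (assumed)
```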
17. On a Finitely Activated Terminal RNN Approach to Time-Variant Problem Solving.
- Author
Sun, Mingxuan, Zhang, Yu, Wu, Yuxin, and He, Xiongxiong
- Subjects
RECURRENT neural networks, PROBLEM solving, QUADRATIC programming
- Abstract
This article concerns terminal recurrent neural network (RNN) models for time-variant computing, featuring finite-valued activation functions (AFs) and finite-time convergence of error variables. Terminal RNNs are models that admit terminal attractors, and the dynamics of each neuron retains finite-time convergence. By theoretically examining asymptotically convergent RNNs, we point out a possible imperfection in solving time-variant problems, for which finite-time-convergent models are most desirable. The existing AFs are summarized, and it is found that there is a lack of AFs that take only finitely many values. A finitely valued terminal RNN, among others, is considered, which involves only basic algebraic operations and taking roots. The proposed terminal RNN model is used to solve the time-variant problems undertaken, including time-variant quadratic programming and motion planning of redundant manipulators. Numerical results are presented to demonstrate the effectiveness of the proposed neural network, whose convergence rate is comparable with that of the existing power-rate RNN. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
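The contrast the abstract draws between asymptotic and finite-time convergence is captured by the classic terminal-attractor error dynamics (the generic power-rate form the article compares against, not its new finitely valued design):

```latex
\dot{e}(t) = -\gamma\,\operatorname{sgn}\!\bigl(e(t)\bigr)\,\bigl|e(t)\bigr|^{\rho},
\qquad \gamma > 0,\; 0 < \rho < 1,
```

which drives the error to exactly zero in the finite settling time t* = |e(0)|^{1-ρ} / (γ(1-ρ)), whereas a linear law ė = -γe only converges asymptotically.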
18. Generative Dual-Adversarial Network With Spectral Fidelity and Spatial Enhancement for Hyperspectral Pansharpening.
- Author
Dong, Wenqian, Hou, Shaoxiong, Xiao, Song, Qu, Jiahui, Du, Qian, and Li, Yunsong
- Subjects
GENERATIVE adversarial networks, DUAL-task paradigm, SPATIAL resolution, FRACTIONS
- Abstract
Hyperspectral (HS) pansharpening is of great importance in improving the spatial resolution of HS images for remote sensing tasks. An HS image comprises abundant spectral content, whereas a panchromatic (PAN) image provides spatial information. HS pansharpening makes it possible to produce a pansharpened image with both high spatial and high spectral resolution. This article develops a specific pansharpening framework based on a generative dual-adversarial network (called PS-GDANet). Specifically, the pansharpening problem is formulated as a dual task that can be solved by a generative adversarial network (GAN) with two discriminators. The spatial discriminator forces the intensity component of the pansharpened image to be as consistent as possible with the PAN image, and the spectral discriminator helps to preserve the spectral information of the original HS image. Instead of designing a deep network, PS-GDANet extends GANs to two discriminators and provides a high-resolution pansharpened image in a fraction of the iterations. The experimental results demonstrate that PS-GDANet outperforms several widely accepted state-of-the-art pansharpening methods in terms of qualitative and quantitative assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
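A skeleton of the dual-discriminator generator objective this entry describes (all modules, the intensity computation, and the loss weightings are placeholders, not PS-GDANet's actual networks):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def generator_loss(d_spatial, d_spectral, fused, hs_up, alpha=0.1, beta=0.1):
    """fused: pansharpened HS output; hs_up: upsampled original HS image.
    During discriminator updates (not shown), d_spatial sees the PAN image
    and d_spectral sees the original HS image as their real samples."""
    intensity = fused.mean(dim=1, keepdim=True)        # crude intensity component
    logit_spat = d_spatial(intensity)                  # spatial consistency branch
    logit_spec = d_spectral(fused)                     # spectral fidelity branch
    adv_spat = bce(logit_spat, torch.ones_like(logit_spat))
    adv_spec = bce(logit_spec, torch.ones_like(logit_spec))
    recon = nn.functional.l1_loss(fused, hs_up)        # placeholder fidelity term
    return recon + alpha * adv_spat + beta * adv_spec
```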
19. Network Pruning Using Adaptive Exemplar Filters.
- Author
Lin, Mingbao, Ji, Rongrong, Li, Shaojie, Wang, Yan, Wu, Yongjian, Huang, Feiyue, and Ye, Qixiang
- Subjects
ADAPTIVE filters, MESSAGE passing (Computer science), COMMUNITIES, COMPUTER architecture
- Abstract
Popular network pruning algorithms reduce redundant information by optimizing hand-crafted models, which may cause suboptimal performance and long filter-selection times. We innovatively introduce adaptive exemplar filters to simplify the algorithm design, resulting in an automatic and efficient pruning approach called EPruner. Inspired by the face recognition community, we use the message-passing algorithm affinity propagation on the weight matrices to obtain an adaptive number of exemplars, which then act as the preserved filters. EPruner breaks the dependence on training data in determining the “important” filters and allows a CPU implementation in seconds, an order of magnitude faster than GPU-based SOTAs. Moreover, we show that the weights of exemplars provide a better initialization for fine-tuning. On VGGNet-16, EPruner achieves a 76.34% FLOPs reduction by removing 88.80% of parameters, with a 0.06% accuracy improvement on CIFAR-10. On ResNet-152, EPruner achieves a 65.12% FLOPs reduction by removing 64.18% of parameters, with only a 0.71% top-5 accuracy loss on ILSVRC-2012. Our code is available at https://github.com/lmbxmu/EPruner. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
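Because affinity propagation is available off the shelf, the exemplar-selection step this entry describes can be sketched directly (sklearn's default similarity and preference are assumptions; the paper's choices may differ):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def exemplar_filter_indices(W):
    """W: conv weights of shape (out_channels, in_channels, kh, kw).
    Run affinity propagation on the flattened filters; the exemplars'
    indices are the filters to preserve -- their number is adaptive,
    not hand-crafted."""
    flat = W.reshape(W.shape[0], -1)
    ap = AffinityPropagation(random_state=0).fit(flat)
    return ap.cluster_centers_indices_

W = np.random.randn(64, 16, 3, 3)
keep = exemplar_filter_indices(W)
print(f"preserve {len(keep)} of {W.shape[0]} filters")
```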
20. Temporal Network Embedding for Link Prediction via VAE Joint Attention Mechanism.
- Author
Jiao, Pengfei, Guo, Xuan, Jing, Xin, He, Dongxiao, Wu, Huaming, Pan, Shirui, Gong, Maoguo, and Wang, Wenjun
- Subjects
TIME-varying networks, RECURRENT neural networks, ELECTRIC network topology
- Abstract
Network representation learning, or embedding, aims to project a network into a low-dimensional space that can then be used for different network tasks. Temporal networks are an important type of network whose topological structure changes over time. Compared with methods for static networks, temporal network embedding (TNE) methods face three challenges: 1) describing the temporal dependence across network snapshots; 2) making node embeddings in the latent space reflect changes in the network topology; and 3) avoiding redundant computation via parameter inheritance on a series of snapshots. To overcome these problems, we propose a novel TNE method based on the variational autoencoder (VAE) framework, named TVAE, which captures the evolution of temporal networks for link prediction. It not only generates low-dimensional embedding vectors for nodes but also preserves the dynamic nonlinear features of temporal networks. Through the combination of a self-attention mechanism and recurrent neural networks, TVAE can update node representations and keep the temporal dependence of vectors over time. We utilize parameter inheritance to keep each new embedding close to the previous one, rather than explicitly using regularization, and thus the method is effective for large-scale networks. We evaluate our model and several baselines on synthetic datasets and real-world networks. The experimental results demonstrate that TVAE has superior performance and lower time cost compared with the baselines. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
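A minimal skeleton of the encoder combination this entry describes, with a GRU carrying temporal dependence across snapshot embeddings and a variational head on top (dimensions are illustrative and the self-attention block is omitted for brevity; this is not the TVAE architecture itself):

```python
import torch
import torch.nn as nn

class TinyTemporalVAEEncoder(nn.Module):
    def __init__(self, in_dim=32, hidden=64, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)   # temporal dependence
        self.mu = nn.Linear(hidden, z_dim)                     # variational mean
        self.logvar = nn.Linear(hidden, z_dim)                 # variational log-variance

    def forward(self, snapshots):            # snapshots: (num_nodes, T, in_dim)
        h, _ = self.rnn(snapshots)
        mu, logvar = self.mu(h[:, -1]), self.logvar(h[:, -1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return z, mu, logvar

enc = TinyTemporalVAEEncoder()
z, mu, logvar = enc(torch.randn(100, 5, 32))   # 100 nodes over 5 snapshots
print(z.shape)                                  # torch.Size([100, 16])
```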