34 results for "Adjacency matrix"
Search Results
2. DAGCRN: Graph convolutional recurrent network for traffic forecasting with dynamic adjacency matrix
- Author
-
Zheng Shi, Yingjun Zhang, Jingping Wang, Jiahu Qin, Xiaoqian Liu, Hui Yin, and Hua Huang
- Subjects
Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2023
3. Spatial-temporal graph convolution network model with traffic fundamental diagram information informed for network traffic flow prediction.
- Author
-
Liu, Zhao, Ding, Fan, Dai, Yunqi, Li, Linchao, Chen, Tianyi, and Tan, Huachun
- Subjects
- *
COMPUTER network traffic , *TRAFFIC flow , *STANDARD deviations , *TIME-varying networks , *INFORMATION networks , *TRAFFIC congestion , *MATHEMATICAL convolutions , *FLOQUET theory - Abstract
• Informed fundamental diagram knowledge into a newly proposed data-driven model. • Applied and evaluated the proposed PI-SGTGCN model on a real-world dataset. • Investigated the physical interpretation of the dynamic graph adjacency matrix. Accurate and fine-grained traffic state prediction has always been an important research field. For long-term traffic flow prediction, the high-dimensional and coupled traffic feature evolution patterns are deeply recessive, posing challenges in effectively characterizing and modeling them. This paper proposes a novel spatial–temporal graph convolution network model informed by traffic Fundamental Diagram (FD) information. The model decouples the high-dimensional spatiotemporal relationships in the transportation network and fully leverages the physical evolution laws of traffic states. First, a Graph Convolutional Network (GCN) with a spatial attention mechanism is proposed to capture spatial relations of the road network; the mechanism better represents the spatial dynamics of the graph adjacency matrix in the GCN. Second, the study injects prior physical knowledge into the graph adjacency matrix by embedding characteristics of FDs from historical traffic data on the diagonal of the matrix, so that the propagation pattern of traffic states in the road network can be considered. Third, to further capture the temporal dependence of the road network, a Gated Recurrent Unit (GRU) structure and a Transformer encoding structure are employed to locally and globally reform traffic state time sequences. Finally, experiments on a revised traffic dataset demonstrate that the proposed method consistently outperforms other baselines in Mean Absolute Error and Root Mean Square Error across all cases. Moreover, it achieves the best Mean Absolute Percentage Error in the 30- and 60-minute prediction tasks.
This study shows a novel way to embed traffic physical laws in data-driven state prediction models, and the reliability of the proposed method in long-term prediction offers valuable support for improving traffic management and alleviating traffic congestion. [ABSTRACT FROM AUTHOR]
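The diagonal-embedding step the abstract describes can be sketched in a few lines. This is a hedged illustration only: the function name, the particular FD statistic (free-flow speed over critical density), and the scale parameter are assumptions for demonstration, not the paper's actual formulation.

```python
import numpy as np

def inject_fd_diagonal(adj, free_flow_speed, critical_density, scale=1.0):
    """Illustrative sketch: place a per-node Fundamental-Diagram statistic
    (here, free-flow speed normalised by critical density) on the diagonal
    of the graph adjacency matrix, leaving off-diagonal links untouched."""
    adj = np.asarray(adj, dtype=float).copy()
    fd_feature = scale * np.asarray(free_flow_speed) / np.asarray(critical_density)
    np.fill_diagonal(adj, fd_feature)
    return adj

# Tiny 3-node road graph with hypothetical FD parameters per sensor.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_fd = inject_fd_diagonal(A, free_flow_speed=[60, 50, 55],
                          critical_density=[30, 25, 27.5])
```

The off-diagonal entries still encode road-network connectivity; only the diagonal now carries the physics-informed prior.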
- Published
- 2024
- Full Text
- View/download PDF
4. Learning deep neural networks for node classification
- Author
-
Dechang Pi and Bentian Li
- Subjects
0209 industrial biotechnology ,Contextual image classification ,Artificial neural network ,business.industry ,Computer science ,Deep learning ,General Engineering ,Pattern recognition ,02 engineering and technology ,Pointwise mutual information ,Computer Science Applications ,Support vector machine ,020901 industrial engineering & automation ,Artificial Intelligence ,Softmax function ,0202 electrical engineering, electronic engineering, information engineering ,Embedding ,020201 artificial intelligence & image processing ,Artificial intelligence ,Adjacency matrix ,business ,Classifier (UML) - Abstract
Deep Neural Networks (DNNs) have made great leaps in image classification and speech recognition in recent years. However, employing a DNN for node classification, such as in social networks, remains a non-trivial problem. Moreover, current advanced methods for node classification usually take two steps: first, the embedding vector of each node is obtained through network embedding, and then a classifier such as an SVM is leveraged to do the task. Clearly, this may yield only a suboptimal solution. To settle these issues, a novel deep neural network method for node classification named DNNNC is proposed in the framework of deep learning. Specifically, we first obtain the positive pointwise mutual information (PPMI) matrix from the given adjacency matrix. Then, the data is fed to a deep neural network composed of deep stacked sparse autoencoders and a softmax layer, which learns node representations encoding the rich nonlinear structural and semantic information and can be well trained for node classification under the DNN framework. Extensive experiments conducted on real-world network datasets for the node classification task show that the proposed DNNNC model outperforms state-of-the-art methods.
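The PPMI step mentioned in the abstract is a standard construction: treat the adjacency matrix as a co-occurrence table and keep only the positive pointwise mutual information values. A minimal sketch (the paper may first smooth the adjacency with random walks, which is omitted here):

```python
import numpy as np

def ppmi_from_adjacency(A, eps=1e-12):
    """Compute the positive pointwise mutual information matrix:
    PPMI_ij = max(0, log(p(i,j) / (p(i) p(j)))), with probabilities
    estimated from the (co-occurrence-like) adjacency matrix."""
    A = np.asarray(A, dtype=float)
    total = A.sum()
    row = A.sum(axis=1, keepdims=True) / total   # p(i)
    col = A.sum(axis=0, keepdims=True) / total   # p(j)
    joint = A / total                            # p(i, j)
    pmi = np.log((joint + eps) / (row * col + eps))
    return np.maximum(pmi, 0.0)                  # clip negatives to zero

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
P = ppmi_from_adjacency(A)
```

The resulting matrix is dense and non-negative, which suits it as input to the stacked sparse autoencoders described above.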
- Published
- 2019
5. A new single-chromosome evolutionary algorithm for community detection in complex networks by combining content and structural information
- Author
-
Yasser Jafari, Elmira Pourabbasi, Vahid Majidnezhad, and Saeid Taghavi Afshord
- Subjects
Computer science ,Rank (computer programming) ,General Engineering ,Evolutionary algorithm ,Sorting ,Complex network ,computer.software_genre ,Computer Science Applications ,Similarity (network science) ,Chromosome (genetic algorithm) ,Artificial Intelligence ,Adjacency matrix ,Data mining ,Virtual network ,computer - Abstract
Community detection is an important step in perceiving network structure and performance for complex network analysis. The rapid growth of network data in recent years has piqued the interest of many researchers in community detection. The majority of community detection methods consider only the network structure. Nonetheless, real-world network nodes may have characteristics that can be useful for community detection. This study proposed a novel single-chromosome evolutionary algorithm with a distinctive architecture modification operator for community detection in complex networks using a combination of structural and content information. To this end, a novel virtual network was created by taking into account the structure and content of nodes, and communities were discovered for this network by optimizing the objective function (using the combinatorial adjacency matrix instead of the structural adjacency matrix) in a series of steps. The nodes in this network were the same as the nodes in the main network; however, the links were developed based on similarities between nodes and their structural neighborhood. The proposed algorithm also included a method for sorting new nodes in order to determine the analysis order of nodes along with the local improvement of the solution, as well as a new criterion, CS, for measuring the content similarity of nodes. The proposed algorithm was evaluated on real networks and compared to various state-of-the-art and widely used methods. The Friedman rank algorithm was then used to rank the proposed algorithm and the existing methods using six real networks. According to the NMI criterion used in the Friedman rank test, the rank of the proposed algorithm improved by 96.8762%, 70.2693%, 26.0005%, 23.5294%, 46.5109%, and 23.5294% compared respectively with ASCD-ARC, BTLSC, Adapt-SA, PSB-PG, RSECD, and NEMBP, all of which have been proposed in recent years.
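The "combinatorial adjacency matrix" idea (structure plus content) can be sketched as a convex blend of the structural adjacency with a content-similarity matrix. This is an assumption-laden illustration: cosine similarity stands in for the paper's CS criterion, whose exact form is not given here, and `alpha` is an illustrative mixing weight.

```python
import numpy as np

def combined_adjacency(A_struct, node_content, alpha=0.5):
    """Blend structural adjacency with a cosine content-similarity matrix
    (a stand-in for the paper's CS criterion) to form the adjacency of the
    virtual network used for community detection."""
    X = np.asarray(node_content, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                      # guard against zero vectors
    Xn = X / norms
    content_sim = Xn @ Xn.T
    np.fill_diagonal(content_sim, 0.0)           # no self-links
    return alpha * np.asarray(A_struct, float) + (1 - alpha) * content_sim

A = np.array([[0, 1], [1, 0]], float)
X = [[1.0, 0.0], [1.0, 0.0]]                     # identical node attributes
A_comb = combined_adjacency(A, X, alpha=0.5)
```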
- Published
- 2021
6. On fast enumeration of maximal cliques in large graphs
- Author
-
Bowen Xiong, Yi Zhou, Yan Jin, Yangming Zhou, and Kun He
- Subjects
Theoretical computer science ,Artificial Intelligence ,Computer science ,General Engineering ,Benchmark (computing) ,Enumeration ,Adjacency list ,Graph theory ,Adjacency matrix ,Clique (graph theory) ,Upper and lower bounds ,Degeneracy (graph theory) ,Computer Science Applications - Abstract
Maximal Clique Enumeration (MCE) is a fundamental and challenging problem in graph theory and various network applications. Numerous algorithms have been proposed in the past decades, however, only a few of them focus on improving the practical efficiency in large graphs. To this end, we propose an efficient algorithm called FACEN based on the Bron–Kerbosch framework. To optimize the memory and time consumption, we apply a hybrid data structure with adjacency list and partial adjacency matrix, and introduce a dynamic pivot selection rule based on the degeneracy order. FACEN is evaluated on a total of 64 benchmark instances from various sources. Computational results indicate that the proposed algorithm is highly competitive with the current leading MCE methods. In particular, our algorithm is able to enumerate all maximal cliques on the tested real-world social networks with millions of vertices and edges. For very large graphs, we provide an additional experiment for solving the MCE variant with lower bound, and investigate the benefits of FACEN.
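For readers unfamiliar with the Bron–Kerbosch framework that FACEN builds on, here is the textbook pivoting variant. This sketch deliberately omits the paper's contributions (degeneracy ordering, the hybrid adjacency-list/partial-matrix storage, and the dynamic pivot rule); it only shows the baseline recursion.

```python
def bron_kerbosch_pivot(R, P, X, adj, cliques):
    """Classic Bron–Kerbosch with pivoting.
    R: current clique, P: candidate vertices, X: already-processed vertices,
    adj: dict mapping each vertex to its neighbor set."""
    if not P and not X:
        cliques.append(set(R))       # R is a maximal clique
        return
    # Pivot: the vertex covering the most candidates, to prune branches.
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch_pivot(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# Adjacency as neighbor sets: a triangle 0-1-2 plus the edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch_pivot(set(), set(adj), set(), adj, cliques)
```

On this toy graph the recursion reports exactly the two maximal cliques {0, 1, 2} and {2, 3}.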
- Published
- 2022
7. Node classification using kernel propagation in graph neural networks
- Author
-
Conrad S. Tucker and Sakthi Kumar Arul Prakash
- Subjects
0209 industrial biotechnology ,Computer science ,Node (networking) ,General Engineering ,Scale (descriptive set theory) ,02 engineering and technology ,computer.software_genre ,Random walk ,Graph ,Computer Science Applications ,Kernel (linear algebra) ,020901 industrial engineering & automation ,Artificial Intelligence ,Kernel (statistics) ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Benchmark (computing) ,Leverage (statistics) ,020201 artificial intelligence & image processing ,Isomorphism ,Data mining ,Adjacency matrix ,computer - Abstract
In this work, we introduce a kernel propagation method that enables graph neural networks (GNNs) to leverage higher-order network structural information without increasing the complexity of the networks. Recent studies have introduced GNNs that include higher-order neighborhood features containing global network information by propagating node features with a higher-order feature propagation rule. Though these GNNs have been shown to improve node classification performance, they fail to include local connectivity information. Alternatively, GNNs concatenate increasing orders of the adjacency matrix in deeper layers in order to include higher-order structural information. In addition to global network information, GNNs also make use of node features, which are network- and node-dependent features that serve to distinguish structurally isomorphic sub-structures within graphs. However, such node features may not always be available or, depending on the network, may lead to deteriorating classification performance. Hence, to resolve these limitations, we propose a kernel propagation method that introduces a pre-processing step for GNNs to leverage higher-order structural features. The higher-order structural features are computed using a weighted random walk matrix, which is node independent, while using the first-order spectral propagation rule, which explicitly considers local connectivity. Through our benchmark experiments, we find that the computed higher-order structural features are capable of replacing node-dependent features on the node classification task, with performance on par with state-of-the-art approaches. Further, we also find that including both node features and higher-order structural features increases the performance of GNNs on the large-scale benchmark networks considered in this study.
Our results show that considering local and global structural information as input to GNNs leads to an improvement in node classification performance in the absence/presence of node features.
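The two ingredients named in the abstract can be sketched concretely: node-independent higher-order features from a weighted random-walk matrix, then propagation with the familiar first-order spectral rule D^(-1/2)(A + I)D^(-1/2). The decay weighting over walk lengths is an assumption for illustration; the paper's exact weighting may differ.

```python
import numpy as np

def structural_features(A, k=3, decay=0.5):
    """Node-independent higher-order structural features: a decay-weighted
    sum of random-walk transition-matrix powers up to length k."""
    A = np.asarray(A, float)
    P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix
    feat = np.zeros_like(P)
    Pk = np.eye(len(A))
    for i in range(1, k + 1):
        Pk = Pk @ P
        feat += (decay ** i) * Pk
    return feat

def first_order_propagate(A, H):
    """First-order spectral rule: D^{-1/2} (A + I) D^{-1/2} H."""
    A_hat = np.asarray(A, float) + np.eye(len(A))
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H

A = np.array([[0, 1], [1, 0]], float)      # a single edge
H = structural_features(A)                 # pre-processing step
Z = first_order_propagate(A, H)            # local propagation
```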
- Published
- 2021
8. An improved algorithm for support vector clustering based on maximum entropy principle and kernel matrix
- Author
-
Chonghui Guo and Fang Li
- Subjects
Mathematical optimization ,Principle of maximum entropy ,Improved algorithm ,General Engineering ,Support vector clustering ,Computer Science Applications ,symbols.namesake ,Constraint algorithm ,Data point ,Artificial Intelligence ,Lagrange multiplier ,symbols ,Entropy (information theory) ,Adjacency matrix ,Mathematics - Abstract
The support vector clustering (SVC) algorithm consists of two main phases: SVC training and cluster assignment. The former requires calculating Lagrange multipliers and the latter requires calculating adjacency matrix, which may cause a high computational burden for cluster analysis. To overcome these difficulties, in this paper, we present an improved SVC algorithm. In SVC training phase, an entropy-based algorithm for the problem of calculating Lagrange multipliers is proposed by means of Lagrangian duality and the Jaynes' maximum entropy principle, which evidently reduces the time of calculating Lagrange multipliers. In cluster assignment phase, the kernel matrix is used to preliminarily classify the data points before calculating adjacency matrix, which effectively reduces the computing scale of adjacency matrix. As a result, a lot of computational savings can be achieved in the improved algorithm by exploiting the special structure in SVC problem. Validity and performance of the proposed algorithm are demonstrated by numerical experiments.
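The cluster-assignment shortcut described above, preliminary grouping from the kernel matrix before the expensive adjacency test, can be sketched as follows. This is a speculative illustration: thresholding an RBF kernel and taking connected components is one plausible reading of "preliminarily classify the data points"; the paper's actual rule may differ, and `gamma`/`threshold` are illustrative.

```python
import numpy as np

def kernel_pregroup(X, gamma=1.0, threshold=0.5):
    """Threshold the RBF kernel matrix and propagate labels over the resulting
    graph, so the costly SVC adjacency test need only run within a group."""
    X = np.asarray(X, float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                # RBF kernel matrix
    connected = K >= threshold
    # Simple label propagation over the thresholded kernel graph.
    labels = list(range(len(X)))
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            for j in range(len(X)):
                if connected[i, j] and labels[j] < labels[i]:
                    labels[i] = labels[j]
                    changed = True
    return labels

X = [[0.0], [0.1], [5.0], [5.1]]           # two well-separated pairs
groups = kernel_pregroup(X, gamma=1.0, threshold=0.5)
```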
- Published
- 2011
9. Recovery schemes of Hop Count Matrix via topology inference and applications in range-free localization.
- Author
-
Tu, Qiang, Zhao, Yingying, and Liu, Xingcheng
- Subjects
- *
TOPOLOGY , *DECISION trees , *INTERNET of things , *MATRICES (Mathematics) - Abstract
Hop Count Matrix (HCM) contains rich connectivity information, which is very important for various Internet of Things (IoT) applications, especially for obtaining the locations of sensor nodes. However, some items of HCMs may be missing due to attacks by malicious nodes or unexpected termination of flooding operations. To solve this problem, two methods, called HCMR-AM and HCMR-DT, are proposed to recover the missing items. In HCMR-AM, the collected partial hop counts are employed to construct Adjacency Matrix (AM), and then the constructed AM is used to obtain the complete HCM. In HCMR-DT, the recovery of HCM is transformed into a classification problem, where multi-dimensional features are used for joint prediction to achieve more accurate recovery performance. Extensive experimental results demonstrate that compared to the original SVT and HCMR-NBC, our proposed algorithms have significant improvement in recovery performance and execution efficiency. In addition, the complete HCM is used for node localization, and experimental results show that the HCM recovered by the proposed methods can achieve the same localization performance as the HCM without missing value when the observation ratio of HCM is greater than 30%, which cannot be achieved by other recovery algorithms. • A hop count matrix recovery scheme based on adjacency matrix is proposed. • A hop count matrix recovery method using Decision Tree is developed. • The complete hop count matrices are suitable for range-free localization. [ABSTRACT FROM AUTHOR]
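The HCMR-AM idea, recovering the complete HCM from an adjacency matrix built out of the observed hop counts, can be sketched directly: entries equal to 1 identify adjacent nodes, and breadth-first search over that adjacency rebuilds every hop count. The function name and the `None`-for-missing convention are illustrative, not from the paper.

```python
from collections import deque

def recover_hcm(partial_hcm):
    """Rebuild a complete Hop Count Matrix: hop counts of 1 define the
    adjacency matrix; BFS shortest paths then fill in every entry."""
    n = len(partial_hcm)
    adj = [[j for j in range(n)
            if partial_hcm[i][j] == 1 or partial_hcm[j][i] == 1]
           for i in range(n)]
    full = [[0] * n for _ in range(n)]
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:                           # BFS from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            full[src][v] = d
    return full

# A path 0-1-2-3 with the (0, 3) hop count missing (None).
partial = [[0, 1, 2, None],
           [1, 0, 1, 2],
           [2, 1, 0, 1],
           [None, 2, 1, 0]]
hcm = recover_hcm(partial)
```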
- Published
- 2022
- Full Text
- View/download PDF
10. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization
- Author
-
Xin Gao, Jim Jing-Yan Wang, Jianhua Z. Huang, and Yijun Sun
- Subjects
0209 industrial biotechnology ,Graph kernel ,Optimization problem ,Feature vector ,Feature selection ,02 engineering and technology ,Non-negative matrix factorization ,Nonnegative matrix factorization ,020901 industrial engineering & automation ,Nearest neighbor graph ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Adjacency matrix ,Engineering(all) ,Mathematics ,Multi-kernel learning ,business.industry ,General Engineering ,Data representation ,Pattern recognition ,Computer Science Applications ,Graph regularization ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
• Graph has been used to regularize nonnegative matrix factorization (NMF). • However, noisy features and nonlinearly distributed data affect the graph construction. • We propose to integrate feature selection and multi-kernel learning to address this problem. • Novel algorithms are developed to learn feature/kernel weights and NMF parameters. Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is kernel embedding; however, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning to graph-regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn a nearest neighbor graph that is adaptive to the selected features and the learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF, and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
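As background for this abstract, the fixed-graph GNMF baseline it extends can be sketched with the standard multiplicative updates for ||X - UVᵀ||² + λ·Tr(VᵀLV), where L = D - W is the graph Laplacian. This is the baseline only; the adaptive-graph variants proposed in the paper are not reproduced here, and the hyperparameters are illustrative.

```python
import numpy as np

def gnmf(X, W, k=2, lam=0.1, iters=200, seed=0):
    """Multiplicative updates for graph-regularized NMF with a fixed
    adjacency W:  U <- U * (XV)/(UV'V),
                  V <- V * (X'U + lam*WV)/(VU'U + lam*DV)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))
    eps = 1e-10                      # avoid division by zero
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V

rng = np.random.default_rng(1)
X = np.abs(rng.random((6, 5)))            # nonnegative data matrix
W = np.ones((5, 5)) - np.eye(5)           # toy fully connected sample graph
U, V = gnmf(X, W)
err = np.linalg.norm(X - U @ V.T)
```

The multiplicative form keeps U and V nonnegative throughout, which is the defining constraint of NMF.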
- Published
- 2015
- Full Text
- View/download PDF
12. A cognitive map simulation approach to adjusting the design factors of the electronic commerce web sites
- Author
-
Lee, Kun Chang and Lee, Sangjae
- Subjects
- *
ELECTRONIC commerce , *GEOGRAPHICAL perception , *COMPUTER simulation - Abstract
Electronic commerce (EC) has been widely studied in academic as well as practical fields. In particular, many special topics regarding EC, such as B2C and B2B, have been investigated in the literature. However, there are far fewer studies of the EC sites themselves, and only a few address how to adjust the design factors of EC sites. The main objective of this study is to fill this research void by employing two techniques: (1) the cognitive map and (2) linear structural relationship (LISREL) modeling. The cognitive map was used to operationalize the causal relationships among design factors of the EC sites, and to investigate by simulation the optimal strategy for adjusting the design factors. LISREL was performed to validate the proposed research model, where the original Technology Acceptance Model (TAM) [Davis MIS Q. 13 (1989) 319] is adopted as a basic framework for providing causal relationships. Usable questionnaires were collected from 114 respondents who proved to be qualified for this study. They were trained to surf two typical EC sites appropriately and tested before answering the questionnaires. Respondents who completed the questionnaires successfully were given a book coupon worth $5. After the LISREL experiments, the proposed research model was tested, and an adjacency matrix was induced for use in the cognitive map simulation. With the adjacency matrix and 15 hypothetical market situations, the cognitive map simulations were successfully performed, showing that the proposed two techniques can be used to adjust the design factors of the EC sites under consideration in line with changes in customers' tastes and market situations.
One of the noticeable practical advantages of this study is that decision makers can identify the most relevant design factors and thereby allocate limited resources to them reasonably by performing the cognitive map simulation in advance, before adjusting the design of the EC sites in actuality.
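A cognitive-map simulation of the kind described is typically an iteration of the concept-state vector through the induced adjacency matrix with a squashing function. The sketch below uses the common sigmoid-squashed form; the three "design factors", their causal weights, and the sigmoid choice are all illustrative assumptions, not values from the study.

```python
import numpy as np

def simulate_cognitive_map(A, state, steps=20):
    """Fuzzy-cognitive-map style iteration: the concept-state vector is
    repeatedly pushed through the adjacency matrix of causal weights and
    squashed with a sigmoid until activations settle."""
    state = np.asarray(state, float)
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(state @ A)))
    return state

# Hypothetical 3-factor map: factor 0 promotes factor 1, which promotes 2.
A = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.7],
              [0.0, 0.0, 0.0]])
final = simulate_cognitive_map(A, [1.0, 0.0, 0.0])
```

Activating one design factor and reading off the settled state shows which downstream factors it drives, which is the "what-if" use of the simulation described above.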
- Published
- 2003
- Full Text
- View/download PDF
13. LEISN: A long explicit–implicit spatio-temporal network for traffic flow forecasting.
- Author
-
Lai, Qiang and Chen, Peng
- Subjects
- *
TRAFFIC flow , *TRAFFIC estimation , *RECURRENT neural networks - Abstract
Recent studies have shown that it is necessary to further improve the prediction accuracy of complex and dynamic traffic flow from two angles: temporal dependence and spatial dependence. Although many spatio-temporal networks are extracting features from both temporal and spatial dependencies, there are still two problems that need to be addressed. Firstly, existing methods often focus on modeling local spatial dependence relationship between adjacent nodes, but neglect non-local spatial dependence relationship between distant nodes. Secondly, while recent work has explored various methods such as convolutional and recurrent neural networks for modeling temporal dependence, they may not fully capture the long-term temporal dependence in traffic data. Therefore, addressing the challenges of long-term dependency, explicit spatial dependency, and implicit spatial dependency, this paper proposes a Long-term Explicit–Implicit Spatio-Temporal Network (LEISN) for traffic flow prediction. A Long-term Dependency Module has been designed to store hidden states generated from multiple previous time steps, facilitating the transmission of long-term features. Based on this, two graph convolution-based spatial feature extraction branches are designed to extract explicit spatial features based on the adjacency matrix generated by spatial topology and implicit spatial features based on the implicit adjacency matrix generated by trend similarity, respectively. Then all spatio-temporal features are fused to produce the next state. Additionally, a new encoder framework, LENSI-ED, was proposed based on LEISN. Comparative experiments are conducted on four datasets, and the results showed that our model has advantages over existing methods. The proposed model addresses the issues of local and non-local spatial dependence relationships and long-term temporal dependence in traffic flow forecasting. • A long explicit–implicit spatio-temporal network for traffic flow prediction is proposed. 
• A new encoder framework named LENSI-ED is proposed based on LEISN. • Comparative experiments are conducted to show the advantages of the neural network. [ABSTRACT FROM AUTHOR]
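The implicit adjacency described in this abstract, links between sensors whose flow histories follow similar trends, can be sketched simply. Pearson correlation stands in here for the paper's trend-similarity measure, which may differ, and the threshold is illustrative.

```python
import numpy as np

def implicit_adjacency(flows, threshold=0.9):
    """Connect two sensors when their historical flow series are strongly
    trend-similar (Pearson correlation >= threshold); the spatial-topology
    adjacency would be built separately from the road map."""
    C = np.corrcoef(np.asarray(flows, float))
    A = (C >= threshold).astype(float)
    np.fill_diagonal(A, 0.0)          # no self-links
    return A

flows = [[1, 2, 3, 4, 5],      # sensor 0
         [2, 4, 6, 8, 10],     # sensor 1: same trend as sensor 0
         [5, 4, 3, 2, 1]]      # sensor 2: opposite trend
A_impl = implicit_adjacency(flows)
```

Sensors 0 and 1 become implicitly adjacent even if they are far apart on the road network, which is exactly the non-local dependence the explicit topology graph misses.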
- Published
- 2024
- Full Text
- View/download PDF
14. Knowledge-base graph recovery using sparse matrix techniques
- Author
-
Nicky L. Sizemore
- Subjects
Theoretical computer science ,Computer science ,business.industry ,General Engineering ,Sparse PCA ,Pattern recognition ,Sparse approximation ,Directed acyclic graph ,Graph ,Computer Science Applications ,Knowledge base ,Artificial Intelligence ,Graph (abstract data type) ,Adjacency matrix ,Artificial intelligence ,business ,Sparse matrix - Abstract
This article describes use of a sparse matrix representation of the directed acyclic graph (DAG) representing a knowledge base to perform static analysis of the knowledge-base structure. Knowledge-base preparation and subsequent analysis and results obtained are also discussed.
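One static analysis of the kind described, checking what each node of the knowledge-base DAG can reach, is cheap when the graph is stored sparsely (an edge list or adjacency lists rather than a dense matrix). The sketch below is illustrative; the article's specific analyses are not reproduced.

```python
def transitive_reach(n, edges):
    """For each node of a DAG stored as a sparse edge list, count the nodes
    reachable from it by depth-first search (a simple static check on the
    knowledge-base structure)."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)

    def reach(u):
        seen, stack = set(), [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    return {u: len(reach(u)) for u in range(n)}

# Tiny rule graph: 0 -> 1 -> 3, and 0 -> 2.
counts = transitive_reach(4, [(0, 1), (1, 3), (0, 2)])
```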
- Published
- 1994
15. Graph neural network-based bearing fault diagnosis using Granger causality test.
- Author
-
Zhang, Zhewen and Wu, Lifeng
- Subjects
- *
FAULT diagnosis , *GRANGER causality test , *CONVOLUTIONAL neural networks , *SUPPORT vector machines , *RANDOM forest algorithms , *DEEP learning , *SPECTRUM analysis - Abstract
Detecting bearing faults helps ensure the healthy operation of machinery and prevents serious accidents. However, fault diagnosis methods based on deep learning rely on the correlation between the extracted vibration signal features. They do not consider the causal relationship between fault, noise, and vibration waveform changes, resulting in lower bearing fault diagnosis accuracy under realistic working conditions. This study proposes a graph neural network (GNN) method based on the Granger causality test for bearing fault detection, called GCT-GNN, to address this issue. The proposed method first performs feature transformation on the original signals to extract time-domain and frequency-domain features, forming a feature matrix. Then, spectral analysis is conducted on both faulty signals with noise and healthy signals to calculate the lag order between them. Subsequently, the Granger causality test is employed to quantify the impact of faults and noise on signal changes, and the quantified results are used to calculate weights, constructing an adjacency matrix. The completed adjacency matrix and feature matrix are input into the GNN for feature mapping, finally classifying the bearing fault data. In this paper, five models, Deep Residual Shrinkage Network (DRSN), GNN, Support Vector Machines (SVM), Random Forests (RF), and Convolutional Neural Network (CNN), were selected, and comparative experiments were carried out on two public data sets. The results show that, compared with the other models, GCT-GNN has better anti-noise ability and performs better across the various fault diagnoses. • The effect of noise on vibration signal is quantified by the Granger causality test. • The lag order is calculated using a spectrum analysis method. • A method of calculating weights by causality is proposed to construct the causality graph. • A graph neural network is constructed using causal graphs. [ABSTRACT FROM AUTHOR]
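The Granger causality test at the core of this method compares two autoregressive fits: predicting a signal from its own lags alone (restricted) versus also using lags of a candidate cause (unrestricted). A minimal numpy sketch of the resulting F statistic follows; the toy data and lag order are illustrative, and the paper's lag-selection via spectral analysis is omitted.

```python
import numpy as np

def granger_f(x, y, lag=2):
    """Bivariate Granger F statistic: does adding lags of x significantly
    reduce the residual sum of squares when predicting y?"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y) - lag
    Y = y[lag:]
    own = np.column_stack([y[lag - i - 1:len(y) - i - 1] for i in range(lag)])
    cross = np.column_stack([x[lag - i - 1:len(x) - i - 1] for i in range(lag)])
    ones = np.ones((n, 1))
    Xr = np.hstack([ones, own])            # restricted: y's own lags only
    Xu = np.hstack([ones, own, cross])     # unrestricted: plus lags of x
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    dof = n - Xu.shape[1]
    return ((rss_r - rss_u) / lag) / (rss_u / dof)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(2, 200):                    # y is driven by lagged x
    y[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()
F = granger_f(x, y, lag=2)
```

A large F (relative to the F distribution's critical value) is what would be turned into an edge weight in the causality-based adjacency matrix described above.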
- Published
- 2024
- Full Text
- View/download PDF
16. Dynamic multi-graph neural network for traffic flow prediction incorporating traffic accidents.
- Author
-
Ye, Yaqin, Xiao, Yue, Zhou, Yuxuan, Li, Shengwen, Zang, Yuanfei, and Zhang, Yixuan
- Subjects
- *
TRAFFIC flow , *TRAFFIC accidents , *MULTIGRAPH , *TRAFFIC estimation - Abstract
Traffic flow forecasting is the foundation of intelligent transportation development and an important task in realizing intelligent transportation services. This task is challenging due to the complex spatiotemporal dependencies between road nodes and some other external factors. Most existing GCN-based methods usually use a single and fixed adjacency matrix to characterize the global spatiotemporal relationship of road networks, which limits the expressiveness of the model in different scenarios and ignores the dynamic nature of node relationships that change over time. In addition, sudden traffic accidents may also cause fluctuations in traffic flow in the short term, which may affect the accuracy of the model prediction. To address the above problems, this paper proposes a dynamic multi-graph neural network (DMGNN) incorporating traffic accidents for multi-step traffic flow prediction. First, to provide richer prior knowledge for the model, we construct multiple graphs to represent various contextual dependencies among nodes. Second, we designed a dynamic graph adjustment module to update the adjacency matrix used in each training step. Finally, we build a deep learning framework based on GAT and Bi-LSTM to focus on local fluctuations caused by traffic incidents and to extract sophisticated spatiotemporal correlations between data. We conducted extensive experiments on two real traffic datasets to evaluate the model, and the ablation experiments verified the effectiveness of each module. On the standard public dataset PEMSD3, compared to the optimal baseline model, our model improves the RMSE, MAE, and MAPE of the multi-step prediction by about 21%, 21%, and 22%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. HyGate-GCN: Hybrid-Gate-Based Graph Convolutional Networks with dynamical ratings estimation for personalized POI recommendation.
- Author
-
Anjiri, Simon Nandwa, Ding, Derui, and Song, Yan
- Abstract
The presence of user-generated ratings has dramatically facilitated the development of recommendation systems to aid users in discovering relevant and personalized points of interest (POI). It is worth mentioning that users' choices and preferences are not static but rather dynamic, reflecting the ever-changing nature of human experiences and influences. Furthermore, the utilization of social influence and the geographical proximity of users is still insufficient to capture the homophily effect within networks. In this paper, a Hybrid Gate-based Graph Convolutional Network (HyGate-GCN) combining feature-vector embedding and interaction is proposed for personalized recommendations, in which a modified gated-GCN adequately employs the behavior of users' check-ins, the temporal properties of users' decisions, the social properties of users, and the user/POI profile information. Specifically, a novel POI graph reflecting geographical proximity is first established to describe the behavior of users' check-ins and, at the same time, an improved overlap ratio over POIs is employed to effectively describe the temporal properties of users' decisions. Then, an attention mechanism is developed to encode the feature vectors of both users and POIs, with the objective of assigning higher importance to features that are deemed relevant. Furthermore, a temporal Kalman filter that dynamically estimates ratings is developed to exploit information about the evolving preferences of users over time. Finally, a modified gated-GCN model with merging and refining gates is constructed to effectively capture the homophily phenomenon in both the trust network graphs and the spatial adjacency matrix graphs of users and POIs, respectively. Experimental results provide evidence of the effectiveness of our approach in improving accuracy and personalization. • A POI graph is established to describe the behavior of users' check-ins.
• An improved overlap ratio reflects the temporal properties of users' decisions. • A Kalman filter estimates ratings to acquire the evolution of users' preferences. • An improved gated-GCN model is proposed to acquire the homophily phenomenon. • A HyGate-GCN model is developed via feature interaction and the improved Gate-GCN. [ABSTRACT FROM AUTHOR]
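The dynamic-rating idea, a Kalman filter tracking a user's evolving preference from noisy ratings, can be sketched in its simplest scalar form. This is a hedged illustration only: the paper's filter operates inside a larger model, and the variances `q` (preference drift) and `r` (rating noise) here are invented for demonstration.

```python
def kalman_rating(observed, q=0.01, r=0.5):
    """Scalar Kalman filter over a stream of ratings: `q` is the assumed
    drift variance of the user's true preference, `r` the observation
    noise variance. Returns the filtered estimate after each rating."""
    est, p = observed[0], 1.0        # initial estimate and its variance
    history = [est]
    for z in observed[1:]:
        p += q                       # predict: preference may have drifted
        k = p / (p + r)              # Kalman gain
        est += k * (z - est)         # update with the new rating
        p *= (1 - k)                 # shrink the estimate's variance
        history.append(est)
    return history

# A user whose true taste shifts from around 2 to around 5 midway through.
ratings = [2, 2, 3, 2, 5, 5, 4, 5, 5, 5]
smoothed = kalman_rating(ratings)
```

The filtered trajectory follows the taste shift while damping single-rating noise, which is the behavior the abstract attributes to its temporal Kalman filter.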
- Published
- 2024
- Full Text
- View/download PDF
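The temporal Kalman filter for evolving ratings described in the abstract above can be pictured as a minimal scalar predict/update loop. This is a generic sketch, not the paper's implementation; the noise parameters `q` and `r` are illustrative assumptions.

```python
# Minimal scalar Kalman filter tracking a user's evolving rating signal.
# q (process noise) and r (observation noise) are hypothetical values.
def kalman_ratings(observed, q=0.01, r=0.5):
    x, p = observed[0], 1.0          # initial state estimate and its variance
    estimates = [x]
    for z in observed[1:]:
        p = p + q                    # predict: preference may drift over time
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the newly observed rating
        p = (1 - k) * p
        estimates.append(x)
    return estimates

smoothed = kalman_ratings([4.0, 3.5, 5.0, 4.5, 2.0])
```

The filter trades off trust between the running estimate and each new observation, so a single outlying rating moves the estimate only partway toward it.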
18. Heterogeneous Views and Spatial Structure Enhancement for triple error detection.
- Author
-
Xue, Xinyue, Zhang, Chunxia, Wang, Yizhou, Song, Haipei, Xue, Xiaojun, and Niu, Zhendong
- Subjects
- *
KNOWLEDGE graphs , *DATA distribution , *SEMANTICS , *ENCODING - Abstract
Knowledge graph error detection aims to identify erroneous triples in knowledge graphs that are inconsistent with objective facts in the real world. In practice, the quality of knowledge graphs is an indispensable foundation for widespread and accurate knowledge application services such as intelligent retrieval and human-machine dialogue. Technically, existing knowledge graph error detection methods face two problems: few available negative samples of triples, and an uneven data distribution. That distribution is caused by the large disparity in the number of head and tail entities belonging to the same relationship and the disparity in the number of triples with different relationships. To alleviate these problems, this paper proposes an approach based on Heterogeneous Views and Spatial Structure Enhancement (HVSSE) in a contrastive learning framework for the triple error detection task. Specifically, the heterogeneous views are constructed to include four kinds of triple views, i.e., positive and negative triple views based on head or tail entity co-occurrence. Moreover, a Graph-Spatial-Transformer with explicit spatial structure encoding is designed to fully capture the contextual information of triple nodes. Thereby, driven by the contrastive learning framework, our HVSSE model can not only learn more discriminative embeddings of triples, but also capture the local structure of triples and the global contextual information of knowledge graphs. Experimental results on five public datasets indicate that our proposed approach is superior to state-of-the-art methods, demonstrating its effectiveness in triple error detection. • Heterogeneous views with spatial structure enhancement are proposed. • Positive and negative triple views on head/tail entity co-occurrence are constructed. • A Graph-Spatial-Transformer encoder is designed to capture spatial global semantics. • Bidirectional degree centrality and the adjacency matrix are introduced.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
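The contrastive objective driving the HVSSE framework above can be pictured as the standard InfoNCE loss over positive and negative views. This is a generic sketch, not the paper's formulation; the temperature `tau` and the toy embeddings are illustrative assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor embedding against one positive and k negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # small when the positive dominates

anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negatives = [np.array([-1.0, 0.2]), np.array([0.0, 1.0])]
loss = info_nce(anchor, positive, negatives)
```

Minimizing this loss pulls embeddings of positive views together and pushes negative views apart, which is what yields the more discriminative triple embeddings the abstract mentions.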
19. Adaptive graph neural network for traffic flow prediction considering time variation.
- Author
-
Chen, Fenghao, Sun, Xiaoyong, Wang, Yuchen, Xu, Zhiyi, and Ma, Weifeng
- Subjects
- *
GRAPH neural networks , *RECURRENT neural networks , *DEEP learning , *TRAFFIC flow , *LEARNING modules - Abstract
Traffic prediction has drawn considerable attention due to its potential to optimize the operational efficiency of road networks. Existing methods commonly combine graph neural networks (GNNs) and recurrent neural networks (RNNs) to model spatio-temporal correlations. However, these models still face challenges, including an inability to capture time-varying spatial correlations, inadequate consideration of spatio-temporal heterogeneity, and inefficient iterative operations. To address these challenges, in this paper we propose a novel framework for traffic prediction, named the time-based adaptive graph neural network (TAGNN). First, a novel graph learning module is developed to generate time-based adaptive graph dependency matrices, which capture hidden spatial correlations at different time steps. Second, two embedding matrices are proposed to help the model capture spatio-temporal heterogeneity by attaching essential external features. Third, a temporal convolution module is proposed to capture temporal correlations by stacking grouped convolutions. The receptive field expands exponentially with each additional layer, reducing parameters and improving prediction efficiency. Extensive experimental results demonstrate that our model adequately extracts the spatio-temporal correlations of nodes while ensuring prediction efficiency. • A time-based adaptive adjacency matrix capturing time-varying spatial correlations. • A novel temporal convolutional module with superior prediction efficiency. • Adding external features to capture the spatio-temporal heterogeneity of traffic flows. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
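A common way to realize a learned adaptive adjacency matrix of the kind TAGNN describes is to derive it from trainable node embeddings. The sketch below follows that general recipe (ReLU scores plus a row-wise softmax); it is an assumption for illustration, not the authors' exact module, and the embeddings are randomly initialized rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 5, 8

# Source/target node embeddings; in a real model these would be trainable.
E1 = rng.standard_normal((num_nodes, dim))
E2 = rng.standard_normal((num_nodes, dim))

def adaptive_adjacency(E1, E2):
    """Row-normalized adaptive adjacency derived from node embeddings."""
    scores = np.maximum(E1 @ E2.T, 0.0)            # ReLU keeps weights non-negative
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)    # softmax per row

A = adaptive_adjacency(E1, E2)
```

Making the embeddings time-dependent (e.g. by adding a per-step time embedding before the product) is one way to obtain the different dependency matrices per time step that the abstract refers to.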
20. A scheme for high level data classification using random walk and network measures.
- Author
-
Cupertino, Thiago Henrique, Guimarães Carneiro, Murillo, Zheng, Qiusheng, Zhang, Junbao, and Zhao, Liang
- Subjects
- *
RANDOM walks , *DATA structures , *HEURISTIC programming , *COMPUTER simulation , *INFORMATION storage & retrieval systems - Abstract
Supervised classification techniques are known to exploit physical information of the analysed data, such as similarity, distribution and other low-level features. Despite the relevance of such features, recent works have shown that a wider variety of patterns can be detected by combining low-level and high-level features. In this paper, a supervised classification technique is proposed which applies limiting probabilities from random walk theory over underlying networks constructed from input labeled data. The appealing feature of the proposed approach is that the adjacency matrix carries both physical and structural information about the data. Structural information is given by features extracted from network connections. The class of a given unlabeled sample is estimated by a heuristic called ease of access, which is measured by the random walk process over the adjacency matrix. This makes the technique quite general, as one can place distinct data measures of interest in the connection matrix of the underlying data network to guide the random walker. Specifically, we show examples of combining low- and high-level features in the proposed classification scheme. Simulation results using artificial and real data sets suggest that the proposed technique is not only competitive with current and established classification techniques, but can also reveal intrinsic structural patterns formed by the input data. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
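The limiting probabilities that the ease-of-access heuristic relies on can be computed, for a connected undirected graph, by power iteration on the row-stochastic transition matrix. This is a generic sketch of that standard computation, not the paper's full scheme.

```python
import numpy as np

# Adjacency matrix of a small undirected graph; in the paper's setting the
# weights could encode any data measure of interest (similarity, distance, ...).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

pi = np.full(len(A), 1.0 / len(A))     # start from the uniform distribution
for _ in range(200):                   # power iteration toward the limit
    pi = pi @ P

# For an unweighted undirected graph the limiting probability of a node
# is proportional to its degree, so well-connected nodes are "easier to access".
```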
21. Transport causality knowledge-guided GCN for propagated delay prediction in airport delay propagation networks.
- Author
-
Sun, Mengyuan, Tian, Yong, Wang, Xunuo, Huang, Xiao, Li, Qianqian, Li, Zhixiong, and Li, Jiangchen
- Subjects
- *
COMMERCIAL aeronautics , *FLIGHT delays & cancellations (Airlines) , *AERONAUTICAL safety measures , *AIRPORTS , *FORECASTING - Abstract
Flight delays pose a worldwide challenge that significantly affects the safety and efficiency of air transportation systems. However, propagated delay prediction, as well as its causality within airport delay propagation networks, has not yet accounted for some crucial issues regarding spatiotemporal dependence and propagation relationships. Thus, this study proposes a transport causality knowledge-guided extended graph convolutional network (GCN) framework to tackle these issues. In particular, a causality knowledge-guided airport delay propagation network (ADPN) is developed using the second modified transfer entropy (SMTE) principle. Furthermore, a causality-embedded adjacency matrix is utilized by an extended GCN for propagated delay prediction. Comprehensive validations and results indicate that the proposed method benefits significantly from the causality knowledge, improving prediction performance by up to 15.51%. Thus, transport causality is significant and efficient for understanding propagated delay features and airport delay propagation network characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
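The causality weights behind an adjacency matrix like the ADPN's are derived from transfer entropy. The sketch below computes plain first-order transfer entropy for binary sequences by counting; the paper's SMTE is a modified variant, so this is only the basic building block, and the synthetic series are illustrative.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plain first-order transfer entropy TE(x -> y), in bits, for discrete series."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))      # (y_{t+1}, y_t, x_t) counts
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]           # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                     # y is x delayed by one step: x drives y
z = rng.integers(0, 2, 5000)          # an independent series drives nothing
```

TE(x→y) is close to 1 bit here because y's next value is fully determined by x's present, while TE(z→y) stays near 0; thresholding such values is one way to decide which edges enter a causality-embedded adjacency matrix.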
22. Link prediction and its optimization based on low-rank representation of network structures.
- Author
-
Chai, Lang, Tu, Lilan, Yu, Xinyi, Wang, Xianjia, and Chen, Juan
- Subjects
- *
MATRIX norms , *FORECASTING - Abstract
Currently, among link prediction studies based on low-rank representation, almost none of the literature has considered how to select an appropriate base-matrix or the impact of the structural characteristics of the reconstructed network on link prediction. Therefore, in this paper, we use the adjacency matrix of the fully connected network (FCN) as the base-matrix for low-rank representation, so that any local structure of the observed networks can be represented by interactions of FCN structures. To explore the properties of link prediction, the nuclear norm of the adjacency matrix of the reconstructed network is taken as a penalty term in the newly proposed low-rank representation objective function. Based on the optimal interaction coefficients obtained by solving the novel objective function, we design a novel link prediction algorithm (the LRNP algorithm) and its optimized variant (the OLRNP algorithm). Experimental results on real and synthetic networks lead to several conclusions. (1) The LRNP algorithm has good convergence properties. When its parameters are changed, the change in prediction performance does not exceed 9.48%. LRNP also performs well on sparse networks. (2) Compared with baseline link prediction algorithms, LRNP shows excellent performance, with its AUC and Precision increasing by 14.35% and 14.89%, respectively. (3) The OLRNP algorithm outperforms the LRNP algorithm, with its AUC and Precision rising by up to 7.50% and 6.79%, respectively. The data and codes are publicly available at https://github.com/pinglanchu/LRNP-OLRNP. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
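Nuclear-norm penalties of the kind used in the LRNP objective are typically handled with singular-value soft-thresholding, the proximal operator of the nuclear norm. The sketch below shows that standard building block on its own, independently of the paper's full objective.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the prox of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)       # shrink every singular value toward zero
    return (U * s) @ Vt

M = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0])  # rank-1 test matrix
low_rank = svt(M, tau=0.5)
```

Shrinking singular values drives small ones to exactly zero, which is why iterating this operator inside an optimization loop yields low-rank reconstructed adjacency matrices.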
23. Bipartite mixed membership distribution-free model. A novel model for community detection in overlapping bipartite weighted networks.
- Author
-
Qing, Huan and Wang, Jingli
- Subjects
- *
BIPARTITE graphs - Abstract
Modeling and estimating mixed memberships for overlapping unipartite unweighted networks has been well studied in recent years. However, to our knowledge, there is no model for a more general case: overlapping bipartite weighted networks. To close this gap, we introduce a novel model, the Bipartite Mixed Membership Distribution-Free (BiMMDF) model. Our model allows the adjacency matrix to follow any distribution as long as its expectation has a block structure related to node membership. In particular, BiMMDF can model overlapping bipartite signed networks, and it is an extension of many previous models, including the popular mixed membership stochastic blockmodels. An efficient algorithm with a theoretical guarantee of consistent estimation is applied to fit BiMMDF. We then obtain the separation conditions of BiMMDF for different distributions. Furthermore, we also consider missing edges for sparse networks. The advantage of BiMMDF is demonstrated on extensive synthetic networks and eight real-world networks. • We propose a novel model, BiMMDF, for overlapping bipartite weighted networks. • We use an algorithm with a theoretical guarantee of consistency to fit BiMMDF. • Separation conditions of BiMMDF for different distributions are analyzed. • We also consider missing edges for sparse networks. • We conduct substantial experiments to demonstrate the advantage of BiMMDF. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
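The core BiMMDF assumption, that the adjacency matrix's expectation has a block structure tied to mixed membership, can be pictured as Ω = Π_r B Π_cᵀ with an arbitrary edge distribution whose mean is Ω. The membership and block values below are toy numbers, and Poisson sampling is used purely for illustration (the model itself is distribution-free).

```python
import numpy as np

rng = np.random.default_rng(0)

# Row-node and column-node mixed memberships (rows sum to 1) and the block matrix B.
Pi_r = np.array([[1.0, 0.0],
                 [0.7, 0.3],
                 [0.0, 1.0]])
Pi_c = np.array([[1.0, 0.0],
                 [0.2, 0.8]])
B = np.array([[3.0, 0.5],
              [0.5, 2.0]])

Omega = Pi_r @ B @ Pi_c.T            # expected bipartite weighted adjacency
A = rng.poisson(Omega)               # any distribution with mean Omega is allowed
```

Fitting the model goes in the other direction: given an observed A, recover Π_r, Π_c and B from a low-rank decomposition of A's expectation.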
24. Learning deep neural networks for node classification.
- Author
-
Li, Bentian and Pi, Dechang
- Subjects
- *
DEEP learning , *FOLKSONOMIES , *SPEECH perception , *CLASSIFICATION , *SOCIAL networks - Abstract
• Propose a novel deep neural network method for node classification. • The model overcomes the existing problem of obtaining only suboptimal solutions. • Superior results demonstrate the effectiveness of the proposed approach. Deep Neural Networks (DNNs) have made great leaps in image classification and speech recognition in recent years. However, employing DNNs for node classification, such as in social networks, remains a non-trivial problem. Moreover, current advanced methods for node classification usually take two steps: first, the embedding vector of each node is obtained through network embedding, and then a classifier such as an SVM is leveraged to perform the task. Clearly, this may yield only a suboptimal solution to the problem. To address these issues, a novel deep neural network method for node classification, named DNNNC, is proposed within a deep learning framework. Specifically, we first compute the positive pointwise mutual information (PPMI) matrix from the given adjacency matrix. Then, the data is fed to a deep neural network composed of deep stacked sparse autoencoders and a softmax layer, which learns node representations that encode the rich nonlinear structural and semantic information and can be well trained for node classification under the DNN framework. Extensive experiments conducted on real-world network datasets for the node classification task show that the proposed DNNNC model outperforms state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
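The PPMI matrix that DNNNC derives from the adjacency matrix follows the standard positive pointwise mutual information construction, sketched below on a tiny symmetric adjacency matrix.

```python
import numpy as np

def ppmi(A):
    """Positive pointwise mutual information of a co-occurrence/adjacency matrix."""
    total = A.sum()
    row = A.sum(axis=1, keepdims=True) / total
    col = A.sum(axis=0, keepdims=True) / total
    joint = A / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0       # zero out log(0) entries for absent pairs
    return np.maximum(pmi, 0.0)        # keep only positive associations

A = np.array([[0, 2, 1],
              [2, 0, 0],
              [1, 0, 0]], dtype=float)
M = ppmi(A)
```

PPMI upweights pairs that co-occur more often than their marginals predict, which is why it exposes richer structure than the raw adjacency matrix before the autoencoder sees it.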
25. Spatio-temporal graph mixformer for traffic forecasting.
- Author
-
Lablack, Mourad and Shen, Yanming
- Subjects
- *
TRAFFIC estimation , *INTELLIGENT transportation systems , *COUNTING - Abstract
Traffic forecasting is of great importance for intelligent transportation systems (ITS). Because of the intricacy of traffic behavior and the non-Euclidean nature of traffic data, it is challenging to give an accurate traffic prediction. Although previous studies have considered the relationships between different nodes, the majority have relied on a static representation and failed to capture the dynamic node interactions over time. Additionally, prior studies employed RNN-based models to capture the temporal dependency. While RNNs are a popular choice for forecasting problems, they tend to be memory-hungry and slow to train. Furthermore, recent studies have started utilizing similarity algorithms to better express the influence of one node on another. However, to our knowledge, none has explored the contribution of node i's past to the future state of node j. In this paper, we propose a Spatio-Temporal Graph Mixformer (STGM) network, a highly optimized model with a low memory footprint. We address the aforementioned limits by utilizing a novel attention mechanism to capture the correlation between temporal and spatial dependencies. Specifically, we use convolution layers with variable fields of view for each head to capture long- and short-term temporal dependencies. Additionally, we train an estimator model that expresses the contribution of a node to the desired prediction. The estimate is fed alongside a distance matrix to the attention mechanism. Meanwhile, we use a gated mechanism and a mixer layer to further select and incorporate the different perspectives. Extensive experiments show that the proposed model enjoys a performance gain compared to the baselines while maintaining the lowest parameter count. • A transformer-based architecture for traffic forecasting. • An adaptive adjacency matrix generation based on attention and similarity learning. • Temporal convolution with variable fields of view for a lower parameter count. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
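Feeding an estimated contribution matrix alongside a distance matrix into attention, as STGM does, amounts to biasing the attention logits before the softmax. The additive-bias form below is an assumption for illustration, not the paper's exact mechanism.

```python
import numpy as np

def biased_attention(Q, K, V, bias):
    """Scaled dot-product attention with an additive spatial bias on the logits."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + bias       # bias: e.g. -distance, or an estimate
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)      # softmax over keys
    return w @ V

rng = np.random.default_rng(0)
n, d = 4, 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
dist = rng.uniform(0, 1, (n, n))
out = biased_attention(Q, K, V, bias=-dist)    # nearer nodes receive larger weights
```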
26. A new single-chromosome evolutionary algorithm for community detection in complex networks by combining content and structural information.
- Author
-
Pourabbasi, Elmira, Majidnezhad, Vahid, Taghavi Afshord, Saeid, and Jafari, Yasser
- Subjects
- *
EVOLUTIONARY algorithms , *ALGORITHMS , *NETWORK performance , *VIRTUAL networks , *COMMUNITIES - Abstract
• Community detection through a combination of content and structural information. • A single-chromosome evolutionary algorithm with an architecture modification operator. • A new criterion, named CS, for measuring the content similarity of nodes. Community detection is an important step in understanding network structure and performance for complex network analysis. The rapid growth of network data in recent years has piqued the interest of many researchers in community detection. The majority of community detection methods consider only the network structure. Nonetheless, real-world network nodes may have characteristics that can be useful for community detection. This study proposes a novel single-chromosome evolutionary algorithm with a distinctive architecture modification operator for community detection in complex networks, using a combination of structural and content information. To this end, a novel virtual network is created by taking into account the structure and content of nodes, and communities are discovered for this network by optimizing the objective function (using the combinatorial adjacency matrix instead of the structural adjacency matrix) in a series of steps. The nodes in this network are the same as those in the main network; however, the links are formed based on similarities between nodes and their structural neighborhoods. The proposed algorithm also includes a method for sorting nodes to determine their analysis order, along with local improvement of solutions, as well as a new criterion, CS, for measuring the content similarity of nodes. The proposed algorithm was evaluated on real networks and compared to various state-of-the-art and widely used methods. The Friedman rank algorithm was then used to rank the proposed algorithm and the existing methods on six real networks.
According to the NMI criterion used in the Friedman rank test, the rank of the proposed algorithm improved by 96.8762%, 70.2693%, 26.0005%, 23.5294%, 46.5109%, and 23.5294% compared, respectively, with ASCD-ARC, BTLSC, Adapt-SA, PSB-PG, RSECD, and NEMBP, all of which have been proposed in recent years. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
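A combinatorial adjacency matrix of the kind described above can be formed by blending the structural adjacency with a content-similarity matrix. This is a generic sketch: the paper's CS criterion is its own similarity measure, replaced here with cosine similarity, and the mixing weight `alpha` is an illustrative assumption.

```python
import numpy as np

def combinatorial_adjacency(A, X, alpha=0.5):
    """Blend structural adjacency A with content similarity of node features X.

    alpha is a hypothetical mixing weight, not a value from the paper.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    S = (X / norms) @ (X / norms).T    # cosine similarity of node content
    np.fill_diagonal(S, 0.0)           # no self-links
    return alpha * A + (1 - alpha) * S

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
A_comb = combinatorial_adjacency(A, X)
```

Optimizing a community objective over `A_comb` instead of `A` lets content-similar but structurally distant nodes attract each other, which is the point of the virtual network.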
27. Urban short-term traffic speed prediction with complicated information fusion on accidents.
- Author
-
Xu, Xing, Hu, Xianqi, Zhao, Yun, Lü, Xiaoshu, and Aapaoja, Aki
- Subjects
- *
TRAFFIC speed , *CITY traffic , *TRAFFIC estimation , *TRAFFIC flow , *FORECASTING , *TRAFFIC accidents - Abstract
Optimizing the traffic flow prediction system is crucial for developing intelligent transportation, since it increases the road network's capacity. The system's overall prediction accuracy is increased by taking into account the relationship between the temporal and spatial properties of the road network and the various external elements affecting the traffic situation. The traffic state, which remains a largely unexplored area, is affected by the complicated interaction between accident information and the spatio-temporal properties of the route. This paper proposes an Accident Information Graph Fusion Attention Convolutional Network (AI-GFACN). First, a highly correlated global road network is created using a global spatial feature point-edge swapping method and a D–D algorithm fusing Dijkstra and Depth-First Search, which resolves the difficulty of capturing, from the spatial features of accident sections, the diffusion effects on nearby and farther sections. After the data is incorporated, the spatio-temporal features of accident information are combined and embedded in the road network. In addition, an attention mechanism is introduced, effectively addressing the difficulty of capturing the spatio-temporal features of accident information within the road network. By integrating and categorizing the regionally distributed and temporally sustained congestion effects of various categories of accidents, with reference to previous research on accident information, this paper enhances the semantic expressiveness of accident information within the road network. Ablation experiments confirm the effectiveness and robustness of the proposed method, which is applied to a dataset of Hangzhou's West Lake District (including accident information) and increases short-term traffic speed prediction accuracy by 0.2% overall. • Embedding accident information into the adjacency matrix with a new method.
• Accident information affects the spatio-temporal characteristics of road segments. • Incorporate accident information into road segments by spatio-temporal embedding. • Apply a spatio-temporal attention mechanism to assign weights adaptively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
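One ingredient of the D–D construction above, Dijkstra's shortest-path search over a weighted road graph, can be sketched as follows; the toy road graph is an illustrative assumption.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                   # stale heap entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road graph: segment -> [(neighbour segment, travel cost), ...]
roads = {0: [(1, 2.0), (2, 5.0)],
         1: [(2, 1.0), (3, 4.0)],
         2: [(3, 1.0)],
         3: []}
dist = dijkstra(roads, 0)
```

In a setup like AI-GFACN's, such distances help relate an accident section to the nearby and farther sections its congestion can diffuse to.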
28. Topology-regularized universal vector autoregression for traffic forecasting in large urban areas.
- Author
-
Schimbinschi, Florin, Moreira-Matias, Luis, Nguyen, Vinh Xuan, and Bailey, James
- Subjects
- *
AUTONOMOUS vehicles , *TRAFFIC estimation , *CITIES & towns , *GENERALIZATION , *COMPUTER algorithms , *VECTOR autoregression model - Abstract
Autonomous vehicles are soon to become ubiquitous in large urban areas, encompassing cities, suburbs and vast highway networks. In turn, this will bring new challenges to existing traffic management expert systems. Concurrently, urban development is causing growth, thus changing network structures. As such, a new generation of adaptive algorithms is needed: ones that learn in real time, capture the multivariate nonlinear spatio-temporal dependencies, and are easily adaptable to new data (e.g. weather or crowdsourced data) and changes in network structure, without having to retrain and/or redeploy the entire system. We propose learning Topology-Regularized Universal Vector Autoregression (TRU-VAR) and exemplify deployment with state-of-the-art function approximators. Our expert system produces reliable forecasts in large urban areas and is best described as scalable, versatile and accurate. By introducing constraints via a topology-designed adjacency matrix (TDAM), we simultaneously reduce computational complexity while improving accuracy by capturing the non-linear spatio-temporal dependencies between timeseries. The strength of our method also resides in its redundancy through modularity and its adaptability via the TDAM, which can be altered even while the system is deployed. The large-scale network-wide empirical evaluations on two qualitatively and quantitatively different datasets show that our method scales well and can be trained efficiently with low generalization error. We also provide a broad review of the literature, illustrate the complex dependencies at intersections, and discuss the issues of data broadcast by road network sensors. The lowest prediction error was observed for TRU-VAR, which outperforms ARIMA in all cases and the equivalent univariate predictors in almost all cases for both datasets.
We conclude that forecasting accuracy is heavily influenced by the TDAM, which should be tailored specifically for each dataset and network type. Further improvements are possible based on including additional data in the model, such as readings from different metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
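The TDAM constraint can be pictured as masking the coefficient matrix of a first-order VAR so that each sensor regresses only on its topological neighbours. This is a simplified linear sketch; TRU-VAR itself pairs the mask with arbitrary function approximators, and the toy TDAM below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Topology-designed adjacency matrix (with self-loops): who may influence whom.
TDAM = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 1]], dtype=float)

W = rng.standard_normal((n, n))        # unconstrained VAR(1) coefficients
W_masked = W * TDAM                    # zero coefficients between non-adjacent sensors

x_t = rng.standard_normal(n)           # current readings of the n sensors
x_next = W_masked @ x_t                # one-step forecast under the topology
```

Masking shrinks the effective parameter count from n² toward the number of road-network edges, which is where the stated reduction in computational complexity comes from.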
29. Depressioner: Facial dynamic representation for automatic depression level prediction.
- Author
-
Niu, Mingyue, He, Lang, Li, Ya, and Liu, Bin
- Subjects
- *
MENTAL depression , *MEDICAL screening , *PROBLEM solving , *FORECASTING - Abstract
Physiological studies have shown that facial changes can serve as a biomarker for analyzing the severity of depression. Therefore, this study proposes a Depressioner model to predict depression level by examining facial changes. Our method mainly addresses two problems in previous works: (1) each channel in the tensor produced by a convolution layer can be regarded as a depression-related pattern extraction result, yet previous works rarely explore the relationships among channels, which limits their ability to integrate the advantages of the various channels; (2) average (or max) pooling is often used to vectorize the tensor, which is not conducive to capturing depression cues from tensors with a temporal attribute. To this end, this study designs two novel blocks, namely the Graph Convolution Embedding (GCE) block and the Multi-Scale Vectorization (MSV) block. The GCE block treats each channel as a node in a graph and constructs the corresponding adjacency matrix. Furthermore, the GCE block adopts the graph convolution operation to examine the relationships among channels, taking advantage of each channel and highlighting useful elements. The MSV block combines dilated convolution and an attention mechanism to process each channel and extract multi-scale representations of depression cues along the temporal dimension. Moreover, it aggregates these representations into the vectorization result of the tensor along the channel dimension. Experimental results on the AVEC 2013 (RMSE = 7.49, MAE = 6.12) and AVEC 2014 (RMSE = 7.56, MAE = 6.01) depression databases illustrate the effectiveness of our method, which may promote the auxiliary diagnosis of depression screening in the future. Meanwhile, these results also show that the proposed Depressioner model can capture the differences in facial changes among individuals with different depression levels. • A Graph Convolution Embedding (GCE) block is for channel relationship extraction.
• A Multi-Scale Vectorization (MSV) block is used to vectorize the temporal tensor. • A Depressioner model is developed to predict the individual depression level. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Sequential inter-hop graph convolution neural network (SIhGCN) for skeleton-based human action recognition.
- Author
-
Setiawan, Feri, Yahya, Bernardo Nugroho, Chun, Seok-Ju, and Lee, Seok-Lyong
- Subjects
- *
CONVOLUTIONAL neural networks , *HUMAN behavior , *LAPLACIAN matrices , *HUMAN skeleton , *SKELETON - Abstract
• A graph convolution model for skeleton-based action recognition is proposed. • The normalized Laplacian matrix is utilized to encode the graph information. • An attention-based feature aggregation is proposed to extract the salient features. • The proposed method achieves better results than the baseline models. Skeleton-based human action recognition has attracted a lot of attention due to its capability and potential to provide more information than just a sequence of RGB images. The use of Graph Convolutional Neural Networks (GCNs) has become popular since they can model the human skeleton very well. However, existing GCN architectures ignore the different levels of importance of each hop during feature aggregation and use only the final hop's information for further calculation, resulting in considerable information loss. Besides, they use the standard Laplacian or adjacency matrix to encode the properties of a graph into a set of vectors, which has limitations in terms of graph invariants. In this work, we propose a Sequential Inter-hop Graph Convolution Neural Network (SIhGCN) which can capture salient graph information from every single hop rather than the final hop only; our work utilizes the normalized Laplacian matrix, which provides a better representation since it relates well to graph invariants. The proposed method is validated on two large datasets, NTU-RGB+D and Kinetics, to demonstrate its superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
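The normalized Laplacian SIhGCN uses to encode the skeleton graph is the standard construction L = I - D^{-1/2} A D^{-1/2}, sketched here on a tiny joint chain.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian: I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]

# A 3-joint chain (e.g. shoulder-elbow-wrist) as a toy skeleton graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
```

Its eigenvalues always lie in [0, 2] regardless of node degrees, which is the normalization property that makes it a stable graph encoding across skeletons.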
32. On fast enumeration of maximal cliques in large graphs.
- Author
-
Jin, Yan, Xiong, Bowen, He, Kun, Zhou, Yangming, and Zhou, Yi
- Subjects
- *
SOCIAL networks , *ALGORITHMS , *DATA structures , *GRAPH theory - Abstract
Maximal Clique Enumeration (MCE) is a fundamental and challenging problem in graph theory and various network applications. Numerous algorithms have been proposed in the past decades; however, only a few of them focus on improving practical efficiency on large graphs. To this end, we propose an efficient algorithm called FACEN based on the Bron–Kerbosch framework. To optimize memory and time consumption, we apply a hybrid data structure with an adjacency list and a partial adjacency matrix, and introduce a dynamic pivot selection rule based on the degeneracy order. FACEN is evaluated on a total of 64 benchmark instances from various sources. Computational results indicate that the proposed algorithm is highly competitive with the current leading MCE methods. In particular, our algorithm is able to enumerate all maximal cliques on the tested real-world social networks with millions of vertices and edges. For very large graphs, we provide an additional experiment on solving the MCE variant with a lower bound and investigate the benefits of FACEN. • Enumerating maximal cliques in large graphs is computationally challenging. • An exact method for fast maximal clique enumeration is presented. • We introduce a new dynamic branching heuristic based on the degeneracy order. • The method is highly competitive with the current leading methods. • The method is efficient on real-world social network benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
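FACEN builds on the Bron–Kerbosch framework. A minimal Python sketch of that framework with the standard pivot rule follows; the paper's hybrid adjacency-list/partial-matrix storage and degeneracy-based pivot selection are not reproduced here.

```python
def bron_kerbosch_pivot(R, P, X, adj, out):
    """Enumerate maximal cliques extending R, with candidates P, excluded X.

    adj maps each vertex to the set of its neighbors.
    """
    if not P and not X:
        out.append(set(R))  # R cannot be extended: it is a maximal clique
        return
    # Pivot: the vertex whose neighborhood covers most candidates,
    # so the loop below only branches on P minus that neighborhood.
    u = max(P | X, key=lambda v: len(P & adj[v]))
    for v in list(P - adj[u]):
        bron_kerbosch_pivot(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)  # v is now processed: exclude it from later branches
        X.add(v)

def maximal_cliques(adj):
    out = []
    bron_kerbosch_pivot(set(), set(adj), set(), adj, out)
    return out
```

On a triangle {0, 1, 2} with a pendant edge 2–3, this yields the two maximal cliques {0, 1, 2} and {2, 3}.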
33. Node classification using kernel propagation in graph neural networks.
- Author
-
Arul Prakash, Sakthi Kumar and Tucker, Conrad S.
- Subjects
- *
RANDOM matrices , *INFORMATION networks , *CLASSIFICATION , *RANDOM walks , *TASK performance - Abstract
• Spectral kernel propagation layer for differentiating local/global connectivity.
• Multiplicative attention mechanism that improves the stability of learning.
• Node classification without the use of additional node attributes/features.
In this work, we introduce a kernel propagation method that enables graph neural networks (GNNs) to leverage higher-order network structural information without increasing the complexity of the networks. Recent studies have introduced GNNs that include higher-order neighborhood features containing global network information by propagating node features with a higher-order feature propagation rule. Though these GNNs have been shown to improve node classification performance, they fail to include local connectivity information. Alternatively, GNNs also concatenate increasing orders of the adjacency matrix in deeper layers in order to include higher-order structural information. In addition to global network information, GNNs also make use of node features, which are network- and node-dependent features that serve to distinguish structurally isomorphic sub-structures within graphs. However, such node features may not always be available or, depending on the network, may degrade classification performance. Hence, to resolve these limitations, we propose a kernel propagation method that introduces a pre-processing step allowing GNNs to leverage higher-order structural features. The higher-order structural features are computed using a weighted random-walk matrix, which is node-independent, while the first-order spectral propagation rule explicitly accounts for local connectivity. Through our benchmark experiments, we find that the computed higher-order structural features can replace node-dependent features on the node classification task, with performance on par with state-of-the-art approaches.
Further, we find that including both node features and higher-order structural features increases the performance of GNNs on the large-scale benchmark networks considered in this study. Our results show that providing local and global structural information as input to GNNs leads to an improvement in node classification performance, in both the absence and presence of node features, without loss of performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
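One way to read the abstract's node-independent structural features is as stacked, decaying powers of the row-normalized random-walk matrix. The sketch below is an assumption in that spirit: the decay weighting, number of hops `K`, and function name are illustrative, not the authors' formulation.

```python
import numpy as np

def random_walk_features(A, K=3, decay=0.5):
    """Stack decaying powers of the random-walk matrix as node features.

    A : (n, n) dense adjacency matrix.
    Returns an (n, K*n) feature matrix built purely from graph structure,
    so no node attributes are required.
    """
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1)           # row-normalized random-walk matrix
    feats, Pk = [], np.eye(A.shape[0])
    for k in range(1, K + 1):
        Pk = Pk @ P                      # k-hop transition probabilities
        feats.append((decay ** k) * Pk)  # down-weight longer walks
    return np.concatenate(feats, axis=1)
```

Because the features depend only on `A`, structurally similar nodes receive similar representations, which is what lets them stand in for missing node attributes.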
34. A cognitive map simulation approach to adjusting the design factors of the electronic commerce web sites
- Author
-
Kun Chang Lee and Sangjae Lee
- Subjects
Operationalization ,Cognitive map ,Operations research ,Artificial Intelligence ,Computer science ,General Engineering ,Technology acceptance model ,Computer Science Applications ,Research model - Abstract
Electronic commerce (EC) has been widely studied in both academic and practical fields. In particular, many specific EC topics, such as B2C and B2B, have been investigated in the literature. However, far fewer studies address the EC sites themselves, and only a few examine how to adjust the design factors of EC sites. The main objective of this study is to fill this research void by employing two techniques: (1) the cognitive map and (2) linear structural relationship (LISREL) modeling. The cognitive map was used to operationalize the causal relationships among design factors of the EC sites and to run simulations for finding the optimal strategy for adjusting the design factors. LISREL was used to validate the proposed research model, in which the original Technology Acceptance Model (TAM) [Davis MIS Q. 13 (1989) 319] is adopted as the basic framework for providing causal relationships. Usable questionnaires were collected from 114 respondents who were shown to be qualified for this study. They were trained to browse two typical EC sites appropriately and tested before answering the questionnaires. Respondents who completed the questionnaires successfully were given a book coupon worth $5. After the LISREL experiments, the proposed research model was tested and an adjacency matrix was induced for use in the cognitive map simulation. With the adjacency matrix and 15 hypothetical market situations, the cognitive map simulations were performed successfully, showing that the two proposed techniques can be used to adjust the design factors of the EC sites under consideration in line with changes in customers' tastes and market situations.
A notable practical advantage of this study is that decision makers can identify the most relevant design factors, and thus allocate limited resources to them sensibly, by running the cognitive map simulation in advance of actually adjusting the design of the EC sites.
- Published
- 2003
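A cognitive map simulation of the kind described above can be sketched as iterating concept activations under a weight (adjacency) matrix until they settle. The sigmoid squashing function, convergence tolerance, and weights below are placeholder assumptions, not the matrix induced in the study.

```python
import numpy as np

def simulate_cognitive_map(W, state, steps=50, tol=1e-6):
    """Iterate state(t+1) = sigmoid(W.T @ state(t)) toward a fixed point.

    W     : (n, n) weight matrix; W[i, j] is the causal influence of
            concept i on concept j (the role the induced adjacency
            matrix plays in the study).
    state : (n,) initial activation of the n design-factor concepts.
    """
    for _ in range(steps):
        new = 1.0 / (1.0 + np.exp(-(W.T @ state)))  # squash to (0, 1)
        if np.max(np.abs(new - state)) < tol:
            break  # activations have converged
        state = new
    return state
```

Running this under different initial states is one way to emulate hypothetical market situations: each scenario's converged activations indicate which design factors matter most.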