20 results
Search Results
2. Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks.
- Author
-
Garaev, Roman, Rasheed, Bader, and Khan, Adil Mehmood
- Subjects
-
ARTIFICIAL neural networks, PERTURBATION theory, SCIENTIFIC community
- Abstract
Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper aims to challenge the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks previously underestimated by the research community. Such conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural network's latent representations, (4) an analysis of networks' decision boundaries and (5) the use of the equivalence of the L2 and L∞ perturbation norms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
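The bounded L∞-norm attacks discussed in this abstract can be illustrated with a one-step, FGSM-style perturbation: each input coordinate moves by at most ε in the direction that increases the loss. This is a minimal sketch, not the authors' code; the input, weights, and ε values below are toy assumptions.

```python
import math

def fgsm_perturb(x, grad, eps):
    """One-step L-infinity attack (FGSM-style): shift each coordinate
    by eps in the sign direction of the loss gradient."""
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

def logistic_loss_grad(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x, for a linear
    model p = sigmoid(w . x) and label y in {0, 1}."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))
    return [(p - y) * wi for wi in w]

x = [0.5, -1.2, 0.3]   # clean input (toy values)
w = [1.0, -0.5, 2.0]   # fixed model weights (toy values)
grad = logistic_loss_grad(x, w, y=1)
x_adv = fgsm_perturb(x, grad, eps=0.1)

# Every coordinate moves by at most eps, so ||x_adv - x||_inf <= 0.1
print(max(abs(a - b) for a, b in zip(x_adv, x)))
```

The perturbation is invisible under the L∞ budget yet pushes every coordinate adversarially, which is exactly the attack family whose danger the paper argues has been underestimated.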
3. Predicting the Gap in the Day-Ahead and Real-Time Market Prices Leveraging Exogenous Weather Data.
- Author
-
Nizharadze, Nika, Farokhi Soofi, Arash, and Manshadi, Saeed
- Subjects
-
ARTIFICIAL neural networks, MARKET prices, INDEPENDENT system operators, MACHINE learning, MARKET pricing, WEATHER
- Abstract
Predicting the price gap between the day-ahead market (DAM) and the real-time market (RTM) plays a vital role in the convergence bidding mechanism of Independent System Operators (ISOs) in wholesale electricity markets. This paper presents a model to predict the values of the price gap between the DAM and RTM using statistical machine learning algorithms and deep neural networks. In this paper, we seek to answer these questions: What will be the impact of predicting the DAM and RTM price gap directly on the prediction performance of learning methods? How can exogenous weather data affect the price gap prediction? In this paper, several exogenous features are collected, and the impacts of these features are examined to capture the best relations between the features and the target variable. An ensemble learning algorithm, namely the Random Forest (RF), is used to select the most important features. A Long Short-Term Memory (LSTM) network is used to capture long-term dependencies in predicting direct gap values between the two markets. Moreover, the advantages of directly predicting the gap price rather than subtracting the price predictions of the DAM and RTM are shown. The presented results are based on the California Independent System Operator (CAISO)'s electricity market data for two years. The results show that direct gap prediction using exogenous weather features decreases the error of learning methods by 46%. Therefore, the presented method mitigates the prediction error of the price gap between the DAM and RTM. Thus, the convergence bidders can increase their profit, and the ISOs can tune their mechanism accordingly. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
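The advantage of predicting the gap directly, rather than subtracting separate DAM and RTM forecasts, follows from error propagation: when two independent forecast errors are subtracted, their variances add. A toy Monte Carlo makes this visible; it assumes equal Gaussian per-model noise and is purely illustrative, not the paper's experiment.

```python
import random

random.seed(42)

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

n, noise = 10_000, 1.0
# Indirect: forecast DAM and RTM separately, then subtract.
# The two independent error terms add in variance.
indirect = [random.gauss(0, noise) - random.gauss(0, noise) for _ in range(n)]
# Direct: a single model forecasts the gap itself with the same
# per-model noise level.
direct = [random.gauss(0, noise) for _ in range(n)]

print(rmse(indirect))  # close to noise * sqrt(2)
print(rmse(direct))    # close to noise
```

The indirect route inflates the gap error by roughly a factor of √2 under these assumptions, which is one intuition for why the direct-prediction design in the paper performs better.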
4. A Literature Review on Some Trends in Artificial Neural Networks for Modeling and Simulation with Time Series.
- Author
-
Muñoz-Zavala, Angel E., Macías-Díaz, Jorge E., Alba-Cuéllar, Daniel, and Guerrero-Díaz-de-León, José A.
- Subjects
-
RECURRENT neural networks, ARTIFICIAL neural networks, LITERATURE reviews, TIME series analysis, FEEDFORWARD neural networks, SELF-organizing maps, RADIAL basis functions
- Abstract
This paper reviews the application of artificial neural network (ANN) models to time series prediction tasks. We begin by briefly introducing some basic concepts and terms related to time series analysis, and by outlining some of the most popular ANN architectures considered in the literature for time series forecasting purposes: feedforward neural networks, radial basis function networks, recurrent neural networks, and self-organizing maps. We analyze the strengths and weaknesses of these architectures in the context of time series modeling. We then summarize some recent time series ANN modeling applications found in the literature, focusing mainly on the previously outlined architectures. In our opinion, these summarized techniques constitute a representative sample of the research and development efforts made in this field. We aim to provide the general reader with a good perspective on how ANNs have been employed for time series modeling and forecasting tasks. Finally, we comment on possible new research directions in this area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Deep Learning Stranded Neural Network Model for the Detection of Sensory Triggered Events.
- Author
-
Kontogiannis, Sotirios, Gkamas, Theodosios, and Pikridas, Christos
- Subjects
-
DEEP learning, ARTIFICIAL neural networks, MACHINE learning, FACTORIES, RECURRENT neural networks, DISTRIBUTED sensors
- Abstract
Maintenance processes are of high importance for industrial plants. They have to be performed regularly and uninterruptedly. To assist maintenance personnel, industrial sensors monitored by distributed control systems observe and collect several machinery parameters in the cloud. Then, machine learning algorithms try to match patterns and classify abnormal behaviors. This paper presents a new deep learning model called stranded-NN. This model uses a set of NN models of variable layer depths depending on the input. This way, the proposed model can classify different types of emergencies occurring in different time intervals: real-time, close-to-real-time, or periodic. The proposed stranded-NN model has been compared against existing fixed-depth MLPs and LSTM networks used by the industry. Experimentation has shown that the stranded-NN model outperforms fixed-depth MLPs by 15–21% in accuracy for real-time events and by at least 10–14% for close-to-real-time events. Regarding LSTMs of the same memory depth as the NN strand input, the stranded NN presents similar results in terms of accuracy for a specific number of strands. Nevertheless, the stranded-NN model's ability to maintain multiple trained strands makes it a superior and more flexible classification and prediction solution than its LSTM counterpart, as well as being faster at training and classification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Development and Implementation of an ANN Based Flow Law for Numerical Simulations of Thermo-Mechanical Processes at High Temperatures in FEM Software.
- Author
-
Pantalé, Olivier
- Subjects
-
HIGH temperatures, COMPUTER simulation, ARTIFICIAL neural networks, FINITE element method, MACHINE learning
- Abstract
Numerical methods based on finite element (FE) have proven their efficiency for many years in the thermomechanical simulation of forming processes. Nevertheless, the application of these methods to new materials requires the identification and implementation of constitutive and flow laws within FE codes, which sometimes pose problems, particularly because of the strongly non-linear character of the behavior of these materials. Computational techniques based on machine learning and artificial neural networks are becoming more and more important in the development of these models and help the FE codes to integrate more complex behavior. In this paper, we present the development, implementation and use of an artificial neural network (ANN) based flow law for a GrC15 alloy under high temperature thermomechanical loading. The flow law modeling by ANN shows a significant superiority in terms of model prediction quality compared to classical approaches based on the widely used Johnson–Cook or Arrhenius models. Once the ANN parameters have been identified on the basis of experiments, the implementation of this flow law in a finite element code shows promising results in terms of solution quality and respect of the material behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
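For reference, the classical Johnson–Cook flow law that the ANN model is benchmarked against has a simple closed form: a strain-hardening term, a strain-rate term, and a thermal-softening term multiplied together. The parameter values below are illustrative placeholders, not the fitted GrC15 coefficients.

```python
import math

def johnson_cook(strain, strain_rate, T,
                 A=250.0, B=680.0, n=0.3, C=0.02, m=1.1,
                 strain_rate_ref=1.0, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot_ref)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room).
    Parameter values here are toy placeholders, not fitted data."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / strain_rate_ref))
            * (1.0 - T_star ** m))

# Flow stress rises with strain (hardening) and falls with temperature
print(johnson_cook(0.1, 10.0, 900.0))
```

The ANN flow law in the paper replaces this fixed analytical form with a learned mapping from (strain, strain rate, temperature) to flow stress, which is where its extra flexibility comes from.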
7. A Review of Deep Learning Algorithms and Their Applications in Healthcare.
- Author
-
Abdel-Jaber, Hussein, Devassy, Disha, Al Salam, Azhar, Hidaytallah, Lamya, and EL-Amir, Malak
- Subjects
-
DEEP learning, MACHINE learning, SUPERVISED learning, ARTIFICIAL neural networks, NATURAL language processing, GENERATIVE adversarial networks
- Abstract
Deep learning is a type of machine learning that uses artificial neural networks, inspired by the human brain, to recognize patterns and make decisions from data. It uses machine learning methods such as supervised, semi-supervised, or unsupervised learning strategies to learn automatically in deep architectures and has gained much popularity due to its superior ability to learn from huge amounts of data. It was found that deep learning approaches can be used for big data analysis successfully. Applications include virtual assistants such as Alexa and Siri, facial recognition, personalization, natural language processing, autonomous cars, automatic handwriting generation, news aggregation, the colorization of black and white images, the addition of sound to silent films, pixel restoration, and deep dreaming. As a review, this paper aims to categorically cover several widely used deep learning algorithms along with their architectures and their practical applications: backpropagation, autoencoders, variational autoencoders, restricted Boltzmann machines, deep belief networks, convolutional neural networks, recurrent neural networks, generative adversarial networks, capsnets, transformer, embeddings from language models, bidirectional encoder representations from transformers, and attention in natural language processing. In addition, challenges of deep learning are also presented in this paper, such as AutoML-Zero, neural architecture search, evolutionary deep learning, and others. The pros and cons of these algorithms and their applications in healthcare are explored, alongside the future direction of this domain. This paper presents a review and a checkpoint to systemize the popular algorithms and to encourage further innovation regarding their applications.
For new researchers in the field of deep learning, this review can help them to obtain many details about the advantages, disadvantages, applications, and working mechanisms of a number of deep learning algorithms. In addition, we introduce detailed information on how to apply several deep learning algorithms in healthcare, such as in relation to the COVID-19 pandemic. By presenting many challenges of deep learning in one section, we hope to increase awareness of these challenges, and how they can be dealt with. This could also motivate researchers to find solutions for these challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. An Improved Brain-Inspired Emotional Learning Algorithm for Fast Classification.
- Author
-
Mei, Ying, Tan, Guanzheng, and Liu, Zhentao
- Subjects
-
MACHINE learning, ARTIFICIAL intelligence, CLASSIFICATION algorithms, ARTIFICIAL neural networks, MATHEMATICAL complex analysis, GENETIC algorithms
- Abstract
Classification is an important task of machine intelligence in the field of information. The artificial neural network (ANN) is widely used for classification. However, the traditional ANN shows slow training speed, and it is hard to meet the real-time requirement for large-scale applications. In this paper, an improved brain-inspired emotional learning (BEL) algorithm is proposed for fast classification. The BEL algorithm was put forward to mimic the high speed of the emotional learning mechanism in the mammalian brain, which has the superior features of fast learning and low computational complexity. To improve the accuracy of BEL in classification, the genetic algorithm (GA) is adopted for optimally tuning the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The combinational algorithm, named GA-BEL, has been tested on eight University of California at Irvine (UCI) datasets and two well-known databases (Japanese Female Facial Expression, Cohn-Kanade). The comparisons of experiments indicate that the proposed GA-BEL is more accurate than the original BEL algorithm, and it is much faster than traditional ANN algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
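The GA tuning step described above can be sketched generically: a population of candidate weight vectors evolves by selection, one-point crossover, and Gaussian mutation until it converges on a good setting. This is a stand-in for tuning the amygdala and orbitofrontal weights of BEL; the target vector and GA settings below are toy assumptions.

```python
import random

random.seed(0)
TARGET = [0.8, -0.3, 0.5]   # weights the GA should recover (toy values)

def fitness(w):
    # Negative squared distance to the target: higher is better.
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def evolve(pop_size=30, gens=60, mut=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, 3)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g + random.gauss(0, mut) for g in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the elites survive unmutated, the best fitness never decreases across generations, which is the property that lets the GA reliably refine the BEL weights.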
9. COVID-19 Outbreak Prediction with Machine Learning.
- Author
-
Ardabili, Sina F., Mosavi, Amir, Ghamisi, Pedram, Ferdinand, Filip, Varkonyi-Koczy, Annamaria R., Reuter, Uwe, Rabczuk, Timon, and Atkinson, Peter M.
- Subjects
-
COVID-19 pandemic, MACHINE learning, FORECASTING, SOFT computing, COVID-19
- Abstract
Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and these models are popular in the media. Due to a high level of uncertainty and lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to susceptible–infected–recovered (SIR) and susceptible–exposed–infectious–removed (SEIR) models. Among a wide range of machine learning models investigated, two models showed promising results (i.e., multi-layered perceptron, MLP; and adaptive network-based fuzzy inference system, ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and variation in its behavior across nations, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. This paper further suggests that a genuine novelty in outbreak prediction can be realized by integrating machine learning and SEIR models. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
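The SIR baseline that the paper compares machine learning against can be integrated in a few lines with forward Euler. The transmission and recovery rates below are arbitrary illustrative values, not fitted COVID-19 parameters.

```python
def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR compartments:
    S' = -beta*S*I/N,  I' = beta*S*I/N - gamma*I,  R' = gamma*I."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # S -> I flow over one step
        new_rec = gamma * i * dt          # I -> R flow over one step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return s, i, r, peak

# Toy outbreak: R0 = beta/gamma = 3 in a population of 1000
s, i, r, peak = sir(beta=0.3, gamma=0.1, s0=990.0, i0=10.0, r0=0.0, days=200)
print(round(s + i + r), round(peak))
```

Note that the population total S + I + R is conserved exactly at every step, since each flow moves mass from one compartment to another; this is a useful sanity check on any SIR implementation.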
10. Exploring an Ensemble of Methods that Combines Fuzzy Cognitive Maps and Neural Networks in Solving the Time Series Prediction Problem of Gas Consumption in Greece.
- Author
-
Papageorgiou, Konstantinos I., Poczeta, Katarzyna, Papageorgiou, Elpiniki, Gerogiannis, Vassilis C., and Stamoulis, George
- Subjects
-
TIME series analysis, REINFORCEMENT learning, ARTIFICIAL neural networks, NATURAL gas, SHORT-term memory, LOAD forecasting (Electric power systems), FUZZY clustering technique
- Abstract
This paper introduces a new ensemble learning approach, based on evolutionary fuzzy cognitive maps (FCMs), artificial neural networks (ANNs), and their hybrid structure (FCM-ANN), for time series prediction. The main aim of time series forecasting is to obtain reasonably accurate forecasts of future data from analyzing records of data. In the paper, we proposed an ensemble-based forecast combination methodology as an alternative approach to forecasting methods for time series prediction. The ensemble learning technique combines various learning algorithms, including SOGA (structure optimization genetic algorithm)-based FCMs, RCGA (real coded genetic algorithm)-based FCMs, efficient and adaptive ANN architectures, and a hybrid structure of FCM-ANN, recently proposed for time series forecasting. All ensemble algorithms execute according to the one-step prediction regime. The particular forecast combination approach was specifically selected due to the advanced features of each ensemble component, and the findings of this work evinced the effectiveness of this approach, in terms of prediction accuracy, when compared against other well-known, independent forecasting approaches, such as ANNs or FCMs, and the long short-term memory (LSTM) algorithm as well. The suggested ensemble learning approach was applied to three distribution points that compose the natural gas grid of a Greek region. For the evaluation of the proposed approach, a real time-series dataset for natural gas prediction was used. We also provide a detailed discussion on the performance of the individual predictors, the ensemble predictors, and their combination through two well-known ensemble methods (the average and the error-based) that are characterized in the literature as particularly accurate and effective.
The prediction results showed the efficacy of the proposed ensemble learning approach, and the comparative analysis demonstrated enough evidence that the approach could be used effectively to conduct forecasting based on multivariate time series. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
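The two combination methods named in the abstract (the average and the error-based) reduce to a few lines each: the error-based variant weights each predictor by the inverse of its historical error, so more accurate models contribute more. The forecasts and error figures below are toy values, not the paper's natural gas data.

```python
def average_combine(preds):
    """Plain average of the component forecasts at each time step."""
    return [sum(p) / len(p) for p in zip(*preds)]

def error_based_combine(preds, errors):
    """Weight each model by the inverse of its past error; the
    weights are normalized to sum to 1."""
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    w = [v / total for v in inv]
    return [sum(wi * pi for wi, pi in zip(w, p)) for p in zip(*preds)]

# Toy forecasts from three models over four time steps
preds = [[10.0, 11.0, 12.0, 13.0],
         [ 9.0, 10.5, 12.5, 12.0],
         [11.0, 12.0, 11.0, 14.0]]
past_errors = [0.5, 1.0, 2.0]   # e.g. historical MAE per model (toy)

print(average_combine(preds))
print(error_based_combine(preds, past_errors))
```

With these error figures the first model receives weight 4/7, the second 2/7, and the third 1/7, pulling the combined forecast toward the historically best predictor.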
11. A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning.
- Author
-
Ahmed, Ahmed Abdelmoamen and Agunsoye, Gbenga
- Subjects
-
INTERNET traffic, MACHINE learning, ARTIFICIAL neural networks, RANDOM forest algorithms, UPLOADING of data
- Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased traffic analysis complexity. Traditionally, network administrators rely on recognizing well-known static ports for classifying the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it a challenging task for network administrators to identify online applications using traditional port-based approaches. One way to classify modern network traffic is to use machine learning (ML) to distinguish between the different traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), for classifying the 53 most popular online applications, including Amazon, YouTube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A web-based user-friendly interface is developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
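Flow-based classification on attributes such as packet count, packet size, and inter-arrival time can be illustrated with the simplest of the three models, KNN. The feature values and labels below are hypothetical, not drawn from the NetScrapper dataset.

```python
from collections import Counter

def knn_classify(flows, labels, query, k=3):
    """Plain k-nearest-neighbours vote over numeric flow features
    (here: packet count, mean packet size, mean inter-arrival ms)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(flows)), key=lambda i: dist(flows[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical labelled flows: [packets, mean_size_bytes, mean_gap_ms]
flows = [[900, 1200.0, 5.0], [950, 1150.0, 6.0], [880, 1250.0, 4.5],  # video
         [40, 300.0, 80.0], [55, 280.0, 90.0], [35, 320.0, 75.0]]     # chat
labels = ["video", "video", "video", "chat", "chat", "chat"]

print(knn_classify(flows, labels, query=[920, 1180.0, 5.5]))  # → video
```

In practice features with different scales should be normalized before computing distances; that detail is omitted here for brevity.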
12. A New Cascade-Correlation Growing Deep Learning Neural Network Algorithm.
- Author
-
Mohamed, Soha Abd El-Moamen, Mohamed, Marghany Hassan, Farghally, Mohammed F., and Radac, Mircea-Bogdan
- Subjects
-
FEEDFORWARD neural networks, DEEP learning, ARTIFICIAL neural networks, PROBLEM solving, MACHINE learning, ALGORITHMS
- Abstract
In this paper, a proposed algorithm that dynamically changes the neural network structure is presented. The structure is changed based on some features of the cascade correlation algorithm. Cascade correlation is an important supervised learning algorithm and architecture that builds an artificial neural network incrementally to solve the problem at hand. This process optimizes the architecture of the network, which is intended to accelerate the learning process and produce better generalization performance. Many researchers have to date proposed several growing algorithms to optimize feedforward neural network architectures. The proposed algorithm has been tested on various medical data sets. The results show that the proposed algorithm yields better accuracy and flexibility. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. An Ensemble Extreme Learning Machine for Data Stream Classification.
- Author
-
Yang, Rui, Xu, Shuliang, and Feng, Lin
- Subjects
-
MACHINE learning, ARTIFICIAL intelligence, MACHINE theory, NEURAL computers, ARTIFICIAL neural networks
- Abstract
Extreme learning machine (ELM) is a single hidden layer feedforward neural network (SLFN). Because ELM has a fast classification speed, it is widely applied in data stream classification tasks. In this paper, a new ensemble extreme learning machine is presented. Different from traditional ELM methods, a concept drift detection method is embedded; it uses an online sequence learning strategy to handle gradual concept drift and updates the classifier to deal with abrupt concept drift, so both gradual and abrupt concept drift can be detected. The experimental results showed that the new ELM algorithm not only improves the accuracy of classification results, but also adapts to new concepts in a short time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
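The core ELM recipe behind this abstract (random, untrained hidden-layer weights plus a closed-form least-squares solve for the output weights) fits in a short sketch. The regression task (fitting y = x² on [0, 1]), hidden-layer size, and ridge term below are illustrative choices, not the paper's setup.

```python
import math
import random

random.seed(2)

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, hidden=10, ridge=1e-6):
    # The defining ELM trick: the hidden layer is random and never trained.
    wh = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(hidden)]
    H = [[math.tanh(a * x + b) for a, b in wh] for x in xs]
    # Output weights via ridge least squares: (H^T H + rI) beta = H^T y
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(xs)))
            + (ridge if i == j else 0.0) for j in range(hidden)]
           for i in range(hidden)]
    Hty = [sum(H[k][i] * ys[k] for k in range(len(xs))) for i in range(hidden)]
    beta = gauss_solve(HtH, Hty)
    return lambda x: sum(bi * math.tanh(a * x + b) for bi, (a, b) in zip(beta, wh))

xs = [i / 19 for i in range(20)]
ys = [x * x for x in xs]
model = elm_fit(xs, ys)
print(max(abs(model(x) - y) for x, y in zip(xs, ys)))
```

Because training reduces to a single linear solve instead of iterative backpropagation, ELM is fast, which is exactly the property that makes it attractive for data stream classification.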
14. Evolution of SOMs' Structure and Learning Algorithm: From Visualization of High-Dimensional Data to Clustering of Complex Data.
- Author
-
Gorzałczany, Marian B. and Rudziński, Filip
- Subjects
-
MACHINE learning, DATA visualization, SELF-organizing maps, ARTIFICIAL neural networks
- Abstract
In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs), are discussed. They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks by providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
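The conventional SOM update that all of these variants build on is compact: the best-matching unit and its grid neighbours move toward each presented sample, with a decaying learning rate. The 1-D scalar setup and parameters below are illustrative toy choices.

```python
import random

random.seed(1)

def train_som(data, n_units=5, epochs=50, lr=0.5, radius=1):
    """Tiny 1-D self-organizing map on scalar data: the winner and its
    grid neighbours are pulled toward each presented sample."""
    w = [random.random() for _ in range(n_units)]
    for epoch in range(epochs):
        a = lr * (1 - epoch / epochs)          # decaying learning rate
        for x in data:
            win = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                if abs(i - win) <= radius:     # grid-neighbourhood update
                    w[i] += a * (x - w[i])
    return sorted(w)

data = [0.05, 0.1, 0.5, 0.55, 0.9, 0.95]       # three loose clusters
som = train_som(data)
print(som)
```

After training, the unit weights spread out to cover the clusters in the data, which is the property that the growing variants (GSOM, GNG, GeSOM) then exploit by adding or removing units dynamically.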
15. Citywide Cellular Traffic Prediction Based on a Hybrid Spatiotemporal Network.
- Author
-
Zhang, Dehai, Liu, Linan, Xie, Cheng, Yang, Bing, and Liu, Qing
- Subjects
-
ARTIFICIAL neural networks, INTELLIGENT networks, 5G networks, MACHINE learning, DEEP learning
- Abstract
With the arrival of 5G networks, cellular networks are moving in the direction of diversified, broadband, integrated, and intelligent networks. At the same time, the popularity of various smart terminals has led to an explosive growth in cellular traffic. Accurate network traffic prediction has become an important part of cellular network intelligence. In this context, this paper proposes a deep learning method for space-time modeling and prediction of cellular network communication traffic. First, we analyze the temporal and spatial characteristics of cellular network traffic from Telecom Italia. On this basis, we propose a hybrid spatiotemporal network (HSTNet), which is a deep learning method that uses convolutional neural networks to capture the spatiotemporal characteristics of communication traffic. This work adds deformable convolution to the convolution model to improve predictive performance. The time attribute is introduced as auxiliary information. An attention mechanism based on historical data for weight adjustment is proposed to improve the robustness of the module. We use the dataset of Telecom Italia to evaluate the performance of the proposed model. Experimental results show that compared with the existing statistical methods and machine learning algorithms, HSTNet significantly improved the prediction accuracy based on MAE and RMSE. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Idea of Using Blockchain Technique for Choosing the Best Configuration of Weights in Neural Networks.
- Author
-
Winnicka, Alicja and Kęsik, Karolina
- Subjects
-
ARTIFICIAL neural networks, MACHINE learning
- Abstract
The blockchain technique is becoming more and more popular due to its advantages, such as stability and its dispersed nature. Another important field is machine learning, which is increasingly used in practice. Unfortunately, training or retraining artificial neural networks is very time-consuming and requires high computing power. In this paper, we propose using a technique based on blockchain activity paradigms to train neural networks. This type of activity is important because it enables a search for initial weights in the network that lead to faster training through quicker gradient descent. We performed tests with much heavier calculations to indicate that such an approach is feasible. However, this type of solution can also be used for less demanding calculations, i.e., only a few iterations of training to find a better configuration of initial weights. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. Learning an Efficient Convolution Neural Network for Pansharpening.
- Author
-
Guo, Yecai, Ye, Fei, and Gong, Hao
- Subjects
-
ARTIFICIAL neural networks, MACHINE learning, SATELLITE-based remote sensing, MULTISPECTRAL imaging, PERFORMANCE evaluation, COMPUTATIONAL complexity
- Abstract
Pansharpening is a domain-specific task of satellite imagery processing, which aims at fusing a multispectral image with a corresponding panchromatic one to enhance the spatial resolution of the multispectral image. Most existing traditional methods fuse multispectral and panchromatic images in linear manners, which greatly restricts the fusion accuracy. In this paper, we propose a highly efficient inference network to cope with pansharpening, which breaks the linear limitation of traditional methods. In the network, we adopt a dilated multilevel block coupled with a skip connection to perform local and overall compensation. By using the dilated multilevel block, the proposed model can make full use of the extracted features and enlarge the receptive field without introducing extra computational burden. Experiment results reveal that our network induces competitive or even superior pansharpening performance compared with deeper models. As our network is shallow and trained with several techniques to prevent overfitting, our model is robust to the inconsistencies across different satellites. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
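The receptive-field effect of the dilated block is easiest to see in one dimension: spacing the kernel taps `dilation` samples apart widens the input coverage without adding weights. This is a minimal sketch of the mechanism, using a toy difference filter rather than learned weights.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1-D convolution whose kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field
    without introducing extra weights."""
    span = (len(kernel) - 1) * dilation
    return [sum(kernel[j] * signal[i + j * dilation]
                for j in range(len(kernel)))
            for i in range(len(signal) - span)]

signal = [1, 2, 3, 4, 5, 6]
kernel = [1, 0, -1]   # toy difference filter

# Same 3 weights, but the receptive field grows from 3 to 5 samples
print(dilated_conv1d(signal, kernel, dilation=1))  # → [-2, -2, -2, -2]
print(dilated_conv1d(signal, kernel, dilation=2))  # → [-4, -4]
```

In the paper's 2-D setting the same idea lets a shallow network see a wide spatial context of the panchromatic image at no extra parameter cost.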
18. Edge-Nodes Representation Neural Machine for Link Prediction.
- Author
-
Xu, Guangluan, Wang, Xiaoke, Wang, Yang, Lin, Daoyu, Sun, Xian, and Fu, Kun
- Subjects
-
ARTIFICIAL neural networks, MACHINE learning, REPRESENTATION theory, PREDICTION models, EXISTENCE theorems, COMPUTER algorithms
- Abstract
Link prediction is the task of predicting whether there is a link between two nodes in a network. Traditional link prediction methods that assume handcrafted features (such as common neighbors) as the link's formation mechanism are not universal. Other popular methods tend to learn the link's representation, but they cannot represent the link fully. In this paper, we propose the Edge-Nodes Representation Neural Machine (ENRNM), a novel method which can learn abundant topological features from the network as the link's representation to promote the formation of the link. The ENRNM learns the link's formation mechanism by combining the representation of the edge with the representations of the nodes on the two sides of the edge as the link's full representation. To predict the link's existence, we train a fully connected neural network which can learn meaningful and abundant patterns. We prove that the features of the edge and the two nodes have the same importance in the link's formation. Comprehensive experiments are conducted on eight networks; the experiment results demonstrate that ENRNM not only outperforms many state-of-the-art link prediction methods but also performs very well on diverse networks with different structures and characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
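The handcrafted baseline the abstract contrasts with, the common-neighbours score, is worth seeing concretely: two nodes that share many neighbours are predicted likely to link. The tiny undirected graph below is hypothetical.

```python
def common_neighbors(adj, u, v):
    """Classic handcrafted link-prediction score: the number of
    neighbours shared by nodes u and v."""
    return len(adj[u] & adj[v])

# Toy undirected graph as adjacency sets (hypothetical)
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

# b and d are not linked, but share two neighbours (a and c),
# so this baseline scores the pair as a likely future link.
print(common_neighbors(adj, "b", "d"))  # → 2
```

Methods like ENRNM replace this single fixed heuristic with learned edge and node representations, which is why they transfer better across networks with different structures.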
19. A Machine Learning View on Momentum and Reversal Trading.
- Author
-
Li, Zhixi and Tam, Vincent
- Subjects
-
MACHINE learning, MOMENTUM (Mechanics), STOCK exchanges, SUPPORT vector machines, ARTIFICIAL neural networks
- Abstract
Momentum and reversal effects are important phenomena in stock markets. In academia, relevant studies have been conducted for years. Researchers have attempted to analyze these phenomena using statistical methods and to give some plausible explanations. However, those explanations are sometimes unconvincing. Furthermore, it is very difficult to transfer the findings of these studies to real-world investment trading strategies due to the lack of predictive ability. This paper represents the first attempt to adopt machine learning techniques for investigating the momentum and reversal effects occurring in any stock market. In the study, various machine learning techniques, including the Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron Neural Network (MLP), and Long Short-Term Memory Neural Network (LSTM) were explored and compared carefully. Several models built on these machine learning approaches were used to predict the momentum or reversal effect on the stock market of mainland China, thus allowing investors to build corresponding trading strategies. The experimental results demonstrated that these machine learning approaches, especially the SVM, are beneficial for capturing the relevant momentum and reversal effects, and possibly building profitable trading strategies. Moreover, we propose the corresponding trading strategies in terms of market states to acquire the best investment returns. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
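A minimal sketch of the signal these models try to exploit: rank assets by trailing return, then buy past winners (momentum) or past losers (reversal). The ticker names, prices, and lookback window below are invented for illustration.

```python
def momentum_signal(prices, lookback=3):
    """Toy cross-sectional signal: rank assets by trailing return over
    `lookback` periods; momentum buys the winner, reversal the loser."""
    ret = {k: p[-1] / p[-1 - lookback] - 1 for k, p in prices.items()}
    ranked = sorted(ret, key=ret.get, reverse=True)
    return {"momentum_buy": ranked[0], "reversal_buy": ranked[-1]}

# Hypothetical price histories (oldest to newest)
prices = {
    "stock_A": [10.0, 10.5, 11.0, 12.0],   # steadily rising
    "stock_B": [20.0, 19.0, 18.5, 17.0],   # steadily falling
    "stock_C": [15.0, 15.2, 15.1, 15.3],   # roughly flat
}
print(momentum_signal(prices))
```

The paper's contribution is to let classifiers such as the SVM decide, from market features, when the momentum regime or the reversal regime is more likely to hold, rather than committing to one ranking rule.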
20. A Fire Detection Algorithm Based on Tchebichef Moment Invariants and PSO-SVM.
- Author
-
Bian, Yongming, Yang, Meng, Fan, Xuying, and Liu, Yuchao
- Subjects
-
PARTICLE swarm optimization, DETECTORS, ALGORITHMS, ARTIFICIAL neural networks, MACHINE learning
- Abstract
Automatic fire detection, which can detect fire and raise an alarm early, is expected to help reduce the loss of life and property as much as possible. Due to its advantages over traditional methods, image processing technology has been applied gradually in fire detection. In this paper, a novel algorithm is proposed to achieve fire image detection, combining Tchebichef (sometimes referred to as Chebyshev) moment invariants (TMIs) and particle swarm optimization-support vector machine (PSO-SVM). According to the correlation between geometric moments and Tchebichef moments, the translation, rotation, and scaling (TRS) invariants of Tchebichef moments are obtained first. Then, the TMIs of candidate images are calculated to construct feature vectors. To gain the best detection performance, a PSO-SVM model is proposed, where the kernel parameter and penalty factor of the support vector machine (SVM) are optimized by particle swarm optimization (PSO). Then, the PSO-SVM model is utilized to identify the fire images. Compared with algorithms based on Hu moment invariants (HMIs) and Zernike moment invariants (ZMIs), the experimental results show that the proposed algorithm can improve the detection accuracy, achieving the highest detection rate of 98.18%. Moreover, it still exhibits the best performance even if the size of the training sample set is small and the images are transformed by TRS. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
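The PSO tuning loop for the SVM's penalty factor and kernel parameter can be sketched with a surrogate error surface standing in for actual cross-validation (to keep the example self-contained): particles search the (C, gamma) plane for the minimum. The surrogate function, its optimum, and the swarm settings below are all assumptions for illustration.

```python
import random

random.seed(3)

def surrogate_error(c, g):
    """Stand-in for SVM cross-validation error over (penalty C,
    kernel gamma); the minimum is placed at C=1.0, gamma=0.5."""
    return (c - 1.0) ** 2 + (g - 0.5) ** 2

def pso(f, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm: each particle blends inertia, pull
    toward its personal best, and pull toward the global best."""
    pos = [[random.uniform(0, 4), random.uniform(0, 4)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: f(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(*pos[i]) < f(*pbest[i]):
                pbest[i] = pos[i][:]
                if f(*pos[i]) < f(*gbest):
                    gbest = pos[i][:]
    return gbest

c_opt, g_opt = pso(surrogate_error)
print(round(c_opt, 2), round(g_opt, 2))
```

In the actual algorithm, `surrogate_error` would be replaced by the cross-validated error of an SVM trained with the candidate (C, gamma) on the TMI feature vectors, so each fitness evaluation is far more expensive but the swarm logic is unchanged.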