36 results
Search Results
2. Investigation of the Global Fear Associated with COVID-19 Using Subjectivity Analysis and Deep Learning.
- Author
-
Thakur, Nirmalya, Patel, Kesha A., Poon, Audrey, Shah, Rishika, Azizi, Nazif, and Han, Changhee
- Subjects
COVID-19 ,DEEP learning ,AGE groups ,SUBJECTIVITY ,DATA analysis - Abstract
The work presented in this paper makes multiple scientific contributions related to the investigation of the global fear associated with COVID-19 by performing a comprehensive analysis of a dataset comprising survey responses of participants from 40 countries. First, the results of subjectivity analysis performed using TextBlob showed that in the responses where participants indicated their biggest concern related to COVID-19, the average subjectivity of the age group of 41–50 decreased from April 2020 to June 2020, the average subjectivity of the age group of 71–80 drastically increased from May 2020, and the age group of 11–20 indicated the least level of subjectivity between June 2020 and August 2020. Second, subjectivity analysis also revealed the percentage of highly opinionated, neutral opinionated, and least opinionated responses per age group, where the analyzed age groups were 11–20, 21–30, 31–40, 41–50, 51–60, 61–70, 71–80, and 81–90. For instance, the percentages of highly opinionated, neutral opinionated, and least opinionated responses by the age group of 11–20 were 17.92%, 16.24%, and 65.84%, respectively. Third, data analysis of responses from different age groups showed that the highest percentage of responses indicating that they were very worried about COVID-19 came from individuals in the age group of 21–30. Fourth, data analysis of the survey responses also revealed that, in the context of taking precautions to prevent contracting COVID-19, the percentage of individuals in the age group of 31–40 taking precautions was higher than the percentages of individuals from the age groups of 41–50, 51–60, 61–70, 71–80, and 81–90. Fifth, a deep learning model was developed to detect whether the survey respondents were seeing or planning to see a psychologist or psychiatrist for any mental health issues related to COVID-19. The design of the deep learning model comprised 8 neurons for the input layer with the ReLU activation function, the ReLU activation function for all the hidden layers with 12 neurons each, and the sigmoid activation function for the output layer with 1 neuron. The model utilized the responses to multiple questions in the context of fear and preparedness related to COVID-19 from the dataset and achieved an accuracy of 91.62% after 500 epochs. Finally, two comparative studies with prior works in this field are presented to highlight the novelty and scientific contributions of this research work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
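The layer configuration given in the abstract of result 2 maps onto a plain dense network. Below is a minimal sketch using the Keras API; the number of hidden layers is not stated in the abstract (two are shown), and the data is a synthetic placeholder for the encoded survey responses, so this is an illustration rather than the authors' implementation.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 10                      # hypothetical count of encoded survey answers
X = np.random.rand(500, n_features)  # placeholder data, not the real dataset
y = np.random.randint(0, 2, 500)     # 1 = seeing/planning to see a psychologist

model = keras.Sequential([
    layers.Dense(8, activation="relu", input_shape=(n_features,)),  # input layer: 8 neurons, ReLU
    layers.Dense(12, activation="relu"),                            # hidden layers: 12 neurons each, ReLU
    layers.Dense(12, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                          # output layer: 1 neuron, sigmoid
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)                              # 500 epochs, as in the abstract
```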
3. Distributed model for customer churn prediction using convolutional neural network.
- Author
-
Tariq, Muhammad Usman, Babar, Muhammad, Poulin, Marc, and Khattak, Akmal Saeed
- Subjects
CONVOLUTIONAL neural networks ,CONSUMERS ,DEEP learning ,MACHINE learning ,ARTIFICIAL intelligence - Abstract
Purpose: The purpose of the proposed model is to assist e-businesses in predicting churned users using machine learning. This paper aims to monitor customer behavior and to support decision-making accordingly. Design/methodology/approach: The proposed model uses a 2-D convolutional neural network (CNN; a deep learning technique). The proposed model is a layered architecture that comprises two different phases: a data load and preprocessing layer and a 2-D CNN layer. In addition, the Apache Spark parallel and distributed framework is used to process the data in a parallel environment. Training data is taken from the Telco Customer Churn dataset on Kaggle. Findings: The proposed model is accurate and has an accuracy score of 0.963 out of 1. In addition, the training and validation loss is extremely low, at 0.004. The confusion matrix results show that the true-positive rate is 95% and the true-negative rate is 94%, while the false-negative rate is only 5% and the false-positive rate is only 6%, which is effective. Originality/value: This paper provides an inclusive description of the preprocessing required for the CNN model. The data set is addressed more carefully for successful customer churn prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
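As an illustration of the 2-D CNN described in result 3, here is a minimal Keras sketch. The abstract does not specify layer sizes or how the tabular Telco features are arranged into a 2-D grid, so a 4x5 reshape of 20 placeholder features is assumed here, and the Spark data load and preprocessing layer is out of scope.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 20).reshape(-1, 4, 5, 1)  # tabular features as a 2-D grid
y = np.random.randint(0, 2, 1000)                  # 1 = churned customer

model = keras.Sequential([
    layers.Conv2D(16, kernel_size=2, activation="relu", input_shape=(4, 5, 1)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),         # churn probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
```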
4. Review of Smart Meter Data Analytics: Applications, Methodologies, and Challenges.
- Author
-
Wang, Yi, Chen, Qixin, Hong, Tao, and Kang, Chongqing
- Abstract
The widespread popularity of smart meters enables an immense amount of fine-grained electricity consumption data to be collected. Meanwhile, the deregulation of the power industry, particularly on the delivery side, has continuously been moving forward worldwide. How to employ massive smart meter data to promote and enhance the efficiency and sustainability of the power grid is a pressing issue. To date, substantial work has been conducted on smart meter data analytics. To provide a comprehensive overview of the current research and to identify challenges for future research, this paper conducts an application-oriented review of smart meter data analytics. Following the three stages of analytics, namely descriptive, predictive, and prescriptive analytics, we identify the key application areas as load analysis, load forecasting, and load management. We also review the techniques and methodologies adopted or developed to address each application. In addition, we discuss some research trends, such as big data issues, novel machine learning technologies, new business models, the transition of energy systems, and data privacy and security. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
5. Research on Model Selection Based on Telecommunications Fraud Feature Recognition.
- Author
-
Yang Li, Wei Du, and Fang Cai
- Subjects
FRAUD ,TELECOMMUNICATION ,DEEP learning ,ARTIFICIAL intelligence ,MACHINE learning ,FRAUD in science ,LOGICAL prediction - Abstract
At present, there is a high incidence of telecommunications fraud worldwide, and the overall anti-fraud situation remains severe. With the continuous evolution of cutting-edge technologies such as big data and artificial intelligence, new solutions are available to mine the characteristics of telecommunications fraud and improve the accuracy and coverage of anti-fraud model identification. This paper uses desensitized signaling data, voice message details, app traffic data, and billing accounting data to analyze the characteristics and preferences of telecommunications fraud users. Through an experimental comparison of machine learning (ML) and deep learning (DL) model classification, this paper explores the factors that improve the accuracy of model classification, and finally verifies and expounds the feasibility of prediction model selection through model training and testing. [ABSTRACT FROM AUTHOR]
- Published
- 2022
6. Big data classification using deep learning and apache spark architecture.
- Author
-
Brahmane, Anilkumar V. and Krishna, B. Chaitanya
- Subjects
BIG data ,DEEP learning ,MACHINE learning ,CLASSIFICATION ,MATHEMATICAL optimization ,BIOGRAPHY (Literary form) - Abstract
The volume of big data is rising day by day, such that existing software tools face difficulty in managing huge datasets. Moreover, the rate of imbalanced data in huge datasets is a key constraint for the research industry. Accordingly, this paper proposes a novel method for handling big data using the Spark framework. The proposed method goes through two phases for classifying the big data, namely feature selection and classification, which are performed in the initial nodes of the Spark architecture. The proposed optimization algorithm is named the Rider Chaotic Biogeography-based Optimization (RCBO) algorithm, which is the integration of the Rider Optimization Algorithm (ROA) and standard chaotic biogeography-based optimization (CBBO). The proposed RCBO deep-stacked auto-encoder using the Spark framework effectively handles big data to achieve effective classification. Here, the proposed RCBO is used for selecting suitable features from the massive dataset, and the deep-stacked auto-encoder uses RCBO for training in order to classify huge datasets. This research focuses on the problem of managing the big Covertype dataset from the UCI machine learning repository, which describes forest cover data used to predict the forest cover type from cartographic variables. The dataset is multivariate in nature with 263,361 web hits, 581,012 instances, and 54 attributes, and the task associated with the dataset is classification. The examination of the proposed RCBO deep-stacked auto-encoder-based Spark framework using the UCI machine learning datasets revealed that the proposed technique outperformed other strategies, achieving a maximal accuracy of 86.71%, dice coefficient of 92.7%, sensitivity of 75.2%, and specificity of 95.4%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
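For orientation, a deep stacked auto-encoder classifier of the kind named in result 6 can be sketched in Keras as below. The RCBO training scheme is not a standard library component, so ordinary gradient descent (Adam) stands in for it here; only the Covertype dimensions (54 attributes, 7 cover types) are taken from the abstract, and the data is a random placeholder.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(2000, 54)        # 54 cartographic attributes, as in Covertype
y = np.random.randint(0, 7, 2000)   # 7 forest cover types

# Encoder stack: progressively compress the selected features.
encoder = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(54,)),
    layers.Dense(16, activation="relu"),
])
# Classification head on top of the learned representation.
model = keras.Sequential([encoder, layers.Dense(7, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)
```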
7. Feature Extraction of Ancient Chinese Characters Based on Deep Convolution Neural Network and Big Data Analysis.
- Author
-
Zhang, Cheng and Liu, Xingjun
- Subjects
CONVOLUTIONAL neural networks ,DEEP learning ,CHINESE characters ,FEATURE extraction ,DATA analysis ,MACHINE learning ,BIG data ,FENG shui - Abstract
In recent years, deep learning has made good progress and has been applied to face recognition, video monitoring, image processing, and other fields. Against this big data background, the deep convolutional neural network has also received more and more attention. In order to extract ancient Chinese characters effectively, the paper discusses the structure model, pooling process, and network training of the deep convolutional neural network and compares the algorithm with traditional machine learning algorithms. The results show that the accuracy and recall rate for the Chinese characters on Ming Dynasty plaques reach peaks of 81.38% and 81.31%, respectively. When the number of training samples increases to 50, the recognition rate of MFA is 99.72%, which is much higher than other algorithms. This shows that the algorithm based on the deep convolutional neural network and big data analysis has excellent performance and can effectively identify Chinese characters under different dynasties, different sample sizes, and different interference factors, which can provide a powerful reference for the extraction of ancient Chinese characters. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Explainable artificial intelligence and machine learning: A reality rooted perspective.
- Author
-
Emmert‐Streib, Frank, Yli‐Harja, Olli, and Dehmer, Matthias
- Subjects
ARTIFICIAL intelligence ,MACHINE learning ,DEEP learning ,BIG data ,DATA analysis ,TECHNOLOGICAL progress - Abstract
As a consequence of technological progress, one is nowadays used to the availability of big data generated in nearly all fields of science. However, the analysis of such data poses vast challenges. One of these challenges relates to the explainability of methods from artificial intelligence (AI) or machine learning. Currently, many such methods are nontransparent with respect to their working mechanism and for this reason are called black box models, most notably deep learning methods. However, it has been realized that this constitutes severe problems for a number of fields, including the health sciences and criminal justice, and arguments have been brought forward in favor of an explainable AI (XAI). In this paper, we do not assume the usual perspective presenting XAI as it should be, but rather provide a discussion of what XAI can be. The difference is that we do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics. This article is categorized under: Fundamental Concepts of Data and Knowledge > Explainable AI; Algorithmic Development > Statistics; Technologies > Machine Learning. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. Deep Learning Based Approach for Bearing Fault Diagnosis.
- Author
-
Miao He and He, David
- Subjects
BEARINGS (Machinery) ,FAULT location (Engineering) ,DEEP learning ,DATA analysis ,ACOUSTIC emission ,ARTIFICIAL neural networks ,FOURIER transforms - Abstract
Bearings are among the most critical components in most electrical and power drives. Effective bearing fault diagnosis is important for keeping electrical and power drives safe and operating normally. In the age of the Internet of Things and Industry 4.0, massive real-time data are collected from bearing health monitoring systems. Mechanical big data have the characteristics of large volume, diversity, and high velocity. There are two major problems in using the existing methods for bearing fault diagnosis with big data. The features are manually extracted relying on much prior knowledge about signal processing techniques and diagnostic expertise, and the models used have shallow architectures, limiting their capability in fault diagnosis. Effectively mining features from big data and accurately identifying the bearing health conditions with new advanced methods have become new issues. This paper presents a deep learning-based approach for bearing fault diagnosis. The presented approach preprocesses sensor signals using the short-time Fourier transform (STFT). Based on a simple spectrum matrix obtained by STFT, an optimized deep learning structure, the large memory storage retrieval (LAMSTAR) neural network, is built to diagnose the bearing faults. Acoustic emission signals acquired from a bearing test rig are used to validate the presented method. The validation results show accurate classification performance on various bearing faults under different working conditions. The performance of the presented method is also compared with other effective bearing fault diagnosis methods reported in the literature. The comparison results show that the presented method gives much better diagnostic performance, even at relatively low rotating speeds. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
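The STFT preprocessing step named in result 9 can be reproduced with SciPy as in the minimal sketch below. The synthetic signal and window parameters are assumptions; the LAMSTAR network itself is not a standard library component and is not shown.

```python
import numpy as np
from scipy.signal import stft

fs = 20_000                                    # hypothetical sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic stand-in for an acoustic emission signal: tone plus noise.
signal = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(t.size)

f, times, Z = stft(signal, fs=fs, nperseg=256)  # short-time Fourier transform
spectrum_matrix = np.abs(Z)                     # magnitude spectrum fed to the network
print(spectrum_matrix.shape)                    # (frequency bins, time frames)
```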
10. College English Intelligent Writing Score System Based on Big Data Analysis and Deep Learning Algorithm.
- Author
-
Qin, Fei
- Subjects
MACHINE learning ,DEEP learning ,DATABASES ,DATA analysis ,BIG data ,ARTIFICIAL intelligence - Abstract
With the development of technologies such as big data analysis and deep learning, various industries have begun to integrate with them and continue to promote the development of the industry. This system is an intelligent writing scoring system for college English teaching that uses popular big data analysis and deep learning discrimination algorithms. From 2015 to 2022, the number of college students taking exams increased yearly, with a total increase of more than 50%. The system therefore proposes a text vector calculation method that can find matching samples in the text set after the text is weighted by a weight function, uses a deep learning discrimination algorithm to evaluate the matched text, and finally obtains the final score according to the content quality, semantic coherence, text readability, and other aspects of the text. Compared with traditional manual scoring, this technology is more convenient, quick, concise, and effective. This system is significant for improving the efficiency of teaching English writing in college. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
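The matching step described in result 10 resembles weighted text retrieval. The sketch below assumes TF-IDF as the weight function (the abstract does not name one) and uses hypothetical essays and reference scores.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical scored samples and a new essay to be matched against them.
scored_samples = ["an essay about daily life ...", "an essay about science ..."]
sample_scores = [78, 85]
new_essay = "a short essay about life and science ..."

vectorizer = TfidfVectorizer()                       # assumed weight function
sample_vectors = vectorizer.fit_transform(scored_samples)
essay_vector = vectorizer.transform([new_essay])

similarities = cosine_similarity(essay_vector, sample_vectors)[0]
best = similarities.argmax()                         # closest matching sample
print(f"closest sample: {best}, reference score: {sample_scores[best]}")
```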
11. Current Big Data Issues and Their Solutions via Deep Learning: An Overview.
- Author
-
Banka, Asif Ali and Mir, Roohie Naaz
- Subjects
BIG data ,DATA mining ,DATA analysis ,MACHINE learning ,INTERNET of things - Abstract
The advancements in modern-day computing and architectures focus on harnessing parallelism and achieving high-performance computing, resulting in the generation of massive amounts of data. The information produced needs to be represented and analyzed to address various challenges in technology and business domains. Radical expansion and integration of digital devices, networking, data storage, and computation systems are generating more data than ever. Data sets are massive and complex; hence traditional learning methods fail to rescue researchers and have in turn resulted in the adoption of machine learning techniques to provide possible solutions to mine the information hidden in unseen data. Interestingly, deep learning finds its place in big data applications. One of the major advantages of deep learning is that it is not human engineered. In this paper, we look at various machine learning algorithms that have already been applied to big data related problems and have shown promising results. We also look at deep learning as a rescue and solution to big data issues that are not efficiently addressed using traditional methods. Deep learning is finding its place in most applications where we come across the critical and dominating 5Vs of big data and is expected to perform better. [ABSTRACT FROM AUTHOR]
- Published
- 2018
12. Deep learning for big weather data analyzing and forecasting.
- Author
-
Al-Nabi, Delveen L. Abd and Ahmed, Shereen Sh.
- Subjects
DEEP learning ,BIG data ,WEATHER forecasting ,DATA analysis ,HUMIDITY - Abstract
Weather prediction is vital in daily life for risk mitigation and resource management, such as flood risk forecasting. Quantitative prediction of weather changes depends on different parameters such as rainfall time, temperature, barometric pressure, humidity, precipitation, solar radiation, and wind. Therefore, a highly accurate system or model to forecast the highly nonlinear changes happening in the climate is required. The focus of this research is the direct prediction of forecasts from weather-changing parameters; the forecasts are performed using data values recorded in a big dataset that collects the weather parameter data of the Canary Islands (Las Palmas, Tenerife, La Palma, Fuerteventura, La Gomera, Lanzarote, and Hierro). The forecasting system is built around a proposed deep learning approach (CNN). The research goal is prediction of the weather condition. The classification accuracy acquired for the climate condition using a CNN (ShuffleNet) structure is 98%, and the recall and precision results are 97.5% and 96.9%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Machine Learning Techniques to address classification issues in Reverse Engineering
- Author
-
Dekhtiar, Jonathan, Durupt, Alexandre, Kiritsis, Dimitris, Bricogne, Matthieu, Rowson, Harvey, Eynard, Benoit, Eynard, Benoit, editor, Nigrelli, Vincenzo, editor, Oliveri, Salvatore Massimo, editor, Peris-Fajarnes, Guillermo, editor, and Rizzuti, Sergio, editor
- Published
- 2017
- Full Text
- View/download PDF
14. Analyzing Allergy Data: SOSALL as an Example of Interdisciplinary Collaboration at the DAViS Center.
- Author
-
Rölke, Heiko and Schmid, Marco
- Abstract
The Center for Data Analysis, Visualization, and Simulation (DAViS) at the University of Applied Science of the Grisons and the Swiss Center of Allergy and Asthma Research supports research on applications in topics like machine learning, big data, visualization, and simulation. The paper illustrates complex data analysis of life science data in a common research project. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. Acoustic Diversity Classification Using Machine Learning Techniques: Towards Automated Marine Big Data Analysis.
- Author
-
Hachicha Belghith, Emna, Rioult, François, and Bouzidi, Medjber
- Subjects
BIG data ,DEEP learning ,MACHINE learning ,CONVOLUTIONAL neural networks ,K-nearest neighbor classification ,DATA analysis ,DATA mining - Abstract
During the last years, big data has become the new emerging trend that is increasingly attracting the attention of the R&D community in several fields (e.g., image processing, database engineering, data mining, artificial intelligence). Marine data is one of the fields accommodating this growth, hence the appearance of the marine big data paradigm, whose monitoring advocates the assessment of the human impact on marine data. Nonetheless, support for acoustic sound classification is missing in such an environment, taking into account the diversity of such data (i.e., sounds of living undersea species, sounds of human activities, and sounds of environmental effects). To overcome this issue, we propose in this paper an approach that efficiently allows acoustic diversity classification using machine learning techniques. The aim is to reach automated support for marine big data analysis. We have conducted a set of experiments using a real marine dataset in order to validate our approach and show its effectiveness and efficiency. To do so, three machine learning techniques are employed: (i) classic machine learning models (i.e., k-nearest neighbor and support vector machine), (ii) deep learning based on convolutional neural networks, and (iii) transfer learning based on the reuse of pretrained models. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
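The classic machine-learning branch of result 15 (k-nearest neighbor and support vector machine) can be illustrated with scikit-learn as below; the feature vectors and three-way labels are random placeholders for real marine-sound descriptors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(300, 40)        # placeholder acoustic feature vectors
y = np.random.randint(0, 3, 300)   # 0=species, 1=human activity, 2=environment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)                       # train each classic model
    print(name, "accuracy:", clf.score(X_te, y_te))
```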
16. Pattern Detection Model Using a Deep Learning Algorithm for Power Data Analysis in Abnormal Conditions.
- Author
-
Lee, Jeong-Hee, Kang, Jongseok, Shim, We, Chung, Hyun-Sang, and Sung, Tae-Eung
- Subjects
DEEP learning ,MACHINE learning ,CONVOLUTIONAL neural networks ,FAST Fourier transforms ,DATA analysis ,MANUFACTURING processes ,BIG data - Abstract
Building a pattern detection model using a deep learning algorithm for data collected from manufacturing sites is an effective way to perform decision-making and assess business feasibility for enterprises, by providing the results and implications of the pattern analysis of big data occurring at manufacturing sites. Identifying the threshold of an abnormal pattern requires collaboration between data analysts and manufacturing process experts, but this is practically difficult and time-consuming. This paper suggests how to derive the threshold setting of the abnormal pattern without manual labelling by process experts, and offers a prediction algorithm to predict the potential of future failures in advance by using a hybrid Convolutional Neural Network (CNN)–Long Short-Term Memory (LSTM) algorithm and the Fast Fourier Transform (FFT) technique. We found that it is easier to detect abnormal patterns that cannot be found in the existing time domain after preprocessing the data set through FFT. Our study shows that both train loss and test loss converged well to near zero, with the lowest loss rate compared to existing models such as LSTM. Our proposed model and method of preprocessing the data greatly help in understanding the abnormal patterns of unlabeled big data produced at the manufacturing site, and can be a strong foundation for detecting the threshold of the abnormal pattern of big data occurring at manufacturing sites. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
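A minimal sketch of the pipeline in result 16, FFT preprocessing followed by a hybrid CNN-LSTM, is shown below in Keras. All shapes and layer sizes are illustrative assumptions; the paper's exact architecture and threshold derivation are not reproduced.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

windows = np.random.rand(500, 256)              # placeholder power-signal windows
spectra = np.abs(np.fft.rfft(windows, axis=1))  # frequency-domain representation
X = spectra[..., np.newaxis]                    # (samples, 129, 1) for Conv1D
y = np.random.randint(0, 2, 500)                # 1 = abnormal pattern

model = keras.Sequential([
    layers.Conv1D(16, kernel_size=3, activation="relu",
                  input_shape=(X.shape[1], 1)), # local spectral features
    layers.MaxPooling1D(2),
    layers.LSTM(32),                            # temporal modeling over CNN features
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)
```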
17. Data Science Analysis Method Design via Big Data Technology and Attention Neural Network.
- Author
-
Ren, Yizhong
- Subjects
DEEP learning ,BIG data ,DATA science ,ARTIFICIAL neural networks ,DATA analysis ,TIME series analysis ,MACHINE learning - Abstract
Because of the rapid expansion of big data technology, time series data is on the rise. These time series data contain a lot of hidden information, and mining and evaluating this hidden information is very important in finance, medical care, and transportation. Time series data forecasting is a data science analysis application, yet present time series forecasting models do not completely account for the peculiarities of time series data. Traditional machine learning algorithms extract data features through artificially designed rules, while deep learning learns abstract representations of data through multiple processing layers. This not only saves the step of manually extracting features, but also greatly improves the generalization performance of the model. Therefore, this work utilizes big data technology to collect corresponding time series data and then uses deep learning to study the problem of time series data prediction. This work proposes a time series data prediction analysis network (TSDPANet). First, this work improves the traditional Inception module and proposes a feature extraction module suitable for 2D time series data, which addresses the inefficiency of applying 2D convolution to time series. Second, the notion of a feature attention method for time series features is proposed in this study, which focuses the neural network on the most effective features. The feature attention module is used to assign different weights to different features according to their importance, which can effectively enhance and weaken the features. Third, this work conducts multi-faceted experiments on the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
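A feature-attention block of the kind described in result 17 can be built generically in Keras: a softmax over learned per-feature scores yields weights that enhance or suppress input features before the rest of the network. This is a generic construction with illustrative sizes, not the TSDPANet code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 12
inputs = keras.Input(shape=(n_features,))
scores = layers.Dense(n_features, activation="softmax")(inputs)  # importance weights
weighted = layers.Multiply()([inputs, scores])                   # re-weighted features
hidden = layers.Dense(32, activation="relu")(weighted)
outputs = layers.Dense(1)(hidden)                                # prediction head
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, n_features)   # placeholder time-series feature vectors
y = np.random.rand(256)               # placeholder prediction target
model.fit(X, y, epochs=5, verbose=0)
```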
18. Predicting Student Academic Performance Using Machine Learning
- Author
-
Opeyemi Peter Ojajuni, Foluso Ayeni, Victor Mbarika, Olagunju Akodu, Timothy Ayo, Femi Ekanoye, Samson Adewole, and Sanjay Misra
- Subjects
Computer science ,Deep learning ,Big data ,Decision tree ,Machine learning ,Educational data mining ,Random forest ,Support vector machine ,Data analysis ,Artificial intelligence ,Gradient boosting - Abstract
The introduction of the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), and Big Data has paved the way for research focused on improving the student learning experience and helping to address challenges faced by the education system. Machine Learning technology analyzes data to recognize patterns and uses them to make predictions. This paper introduces an ML model that classifies and predicts student academic success by utilizing supervised ML algorithms like Random Forest, Support Vector Machines, Gradient Boosting, Decision Tree, Logistic Regression, Regression, Extreme Gradient Boosting (XGBoost), and Deep Learning. This paper aims to predict students' academic success based on historical data and identify the key factors that affect it. Thus, the proposed approach offers a solution to predict student academic performance efficiently and accurately by comparing several ML models to the Deep Learning model. Results show that Extreme Gradient Boosting (XGBoost) can predict student academic performance with an accuracy of 97.12%. Furthermore, results showed significant social and demographic features that affect student academic success. This study concludes that applying Machine Learning technology in the classroom will help educators identify gaps in student learning and enable early detection of underperforming students, thus empowering educators with informed decision-making.
- Published
- 2021
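The best-performing model in result 18, XGBoost, can be sketched with the xgboost scikit-learn API as below; the student features and pass/fail labels are random placeholders for the historical data, and the hyperparameters are illustrative.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 15)        # placeholder social/demographic features
y = np.random.randint(0, 2, 1000)   # 1 = academically successful

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
# Feature importances hint at the key factors affecting the prediction.
print("top feature importances:", model.feature_importances_[:5])
```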
19. A Survey of Unsupervised Generative Models for Exploratory Data Analysis and Representation Learning.
- Author
-
ABUKMEIL, MOHANAD, FERRARI, STEFANO, GENOVESE, ANGELO, PIURI, VINCENZO, and SCOTTI, FABIO
- Subjects
DEEP learning ,PROBABILISTIC generative models ,BIG data ,BLIND source separation ,DATA analysis ,DATA modeling ,MACHINE learning - Abstract
For more than a century, the methods for data representation and the exploration of the intrinsic structures of data have developed remarkably and consist of supervised and unsupervised methods. However, recent years have witnessed the flourishing of big data, where typical dataset dimensions are high and the data can come in messy, incomplete, unlabeled, or corrupted forms. Consequently, discovering the hidden structure buried inside such data becomes highly challenging. From this perspective, exploratory data analysis plays a substantial role in learning the hidden structures that encompass the significant features of the data in an ordered manner by extracting patterns and testing hypotheses to identify anomalies. Unsupervised generative learning models are a class of machine learning models characterized by their potential to reduce the dimensionality, discover the exploratory factors, and learn representations without any predefined labels; moreover, such models can generate the data from the reduced factors' domain. The beginner researchers can find in this survey the recent unsupervised generative learning models for the purpose of data exploration and learning representations; specifically, this article covers three families of methods based on their usage in the era of big data: blind source separation, manifold learning, and neural networks, from shallow to deep architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Big data analysis and artificial intelligence in epilepsy -- common data model analysis and machine learning-based seizure detection and forecasting.
- Author
-
Yoon Gi Chung, Yonghoon Jeon, Sooyoung Yoo, Hunmin Kim, and Hee Hwang
- Subjects
BIG data ,DATA analysis ,ARTIFICIAL intelligence ,DATA modeling ,EPILEPSY - Abstract
There has been significant interest in big data analysis and artificial intelligence (AI) in medicine. Ever-increasing medical data and advanced computing power have enabled the number of big data analyses and AI studies to increase rapidly. Here we briefly introduce epilepsy, big data, and AI and review big data analysis using a common data model. Studies in which AI has been actively applied, such as those of electroencephalography epileptiform discharge detection, seizure detection, and forecasting, will be reviewed. We will also provide practical suggestions for pediatricians to understand and interpret big data analysis and AI research and work together with technical expertise. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Machine Learning With Big Data: Challenges and Approaches
- Author
-
Miriam A. M. Capretz, Katarina Grolinger, Alexandra L'Heureux, and Hany F. ElYamany
- Subjects
Big Data ,Active learning (machine learning) ,Data analysis ,Machine learning ,Distributed computing ,Data analytics ,Data stream mining ,Online machine learning ,Deep learning ,Data science ,Statistical classification ,Computational learning theory ,Data preprocessing ,Big Data Vs - Abstract
The Big Data revolution promises to transform how we live, work, and think by enabling process optimization, empowering insight discovery, and improving decision making. The realization of this grand potential relies on the ability to extract value from such massive data through data analytics; machine learning is at its core because of its ability to learn from data and provide data-driven insights, decisions, and predictions. However, traditional machine learning approaches were developed in a different era, and thus are based upon multiple assumptions, such as the data set fitting entirely into memory, which unfortunately no longer holds true in this new context. These broken assumptions, together with the Big Data characteristics, are creating obstacles for the traditional techniques. Consequently, this paper compiles, summarizes, and organizes machine learning challenges with Big Data. In contrast to other research that discusses challenges, this work highlights the cause–effect relationship by organizing challenges according to the Big Data Vs or dimensions that instigated the issue: volume, velocity, variety, or veracity. Moreover, emerging machine learning approaches and techniques are discussed in terms of how they are capable of handling the various challenges with the ultimate objective of helping practitioners select appropriate solutions for their use cases. Finally, a matrix relating the challenges and approaches is presented. Through this process, this paper provides a perspective on the domain, identifies research gaps and opportunities, and provides a strong foundation and encouragement for further research in the field of machine learning with Big Data.
- Published
- 2017
22. An automated learning model for sentiment analysis and data classification of Twitter data using balanced CA-SVM.
- Author
-
Cyril, C Pretty Diana, Beulah, J Rene, Subramani, Neelakandan, Mohan, Prakash, Harshavardhan, A, and Sivabalaselvamani, D
- Subjects
SENTIMENT analysis ,MICROBLOGS ,FEATURE extraction ,DATA analysis ,SOCIAL media ,SUPPORT vector machines ,DEEP learning - Abstract
Modern society spends much of every day on social media, where users share many details with their friends. Information obtained from these interactions has been used in several applications. Sentiment analysis is one such application: applied to a Twitter dataset, it identifies the emotion of a user, on the basis of which different problems can be solved. First, the data from the Twitter database is preprocessed. In this step, tokenization, stemming, stop-word removal, and number removal are performed. The proposed automated learning CA-SVM-based sentiment analysis model reads the Twitter dataset, which is then processed to extract features that yield a set of terms. Using the terms, the tweets are clustered using TGS-K means clustering, which measures Euclidean distance according to different features like the semantic sentiment score (SSS), gazetteer and symbolic sentiment support (GSSS), and topical sentiment score (TSS). Further, the method classifies the tweets with a support vector machine (CA-SVM) according to the support value, which is measured based on the above measures. The attained results are validated utilizing a k-fold cross-validation methodology. Then, the classification is performed by utilizing the Balanced CA-SVM (Deep Learning Modified Neural Network). The results are evaluated and compared with existing works. The proposed model achieved 92.48% accuracy and a 92.05% sentiment score compared with the existing works. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
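The preprocessing steps named in result 22 (tokenization, stemming, stop-word removal, number removal) can be illustrated with NLTK; the sample tweet is hypothetical, and the download calls fetch the required resources on first run.

```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

tweet = "Watching 3 movies this weekend with friends!!!"
tokens = word_tokenize(tweet.lower())           # tokenization
tokens = [t for t in tokens if t.isalpha()]     # number/punctuation removal
stop = set(stopwords.words("english"))
tokens = [t for t in tokens if t not in stop]   # stop-word removal
stemmer = PorterStemmer()
terms = [stemmer.stem(t) for t in tokens]       # stemming yields the term set
print(terms)                                    # e.g. ['watch', 'movi', 'weekend', 'friend']
```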
23. A Scoping Review of Artificial Intelligence and Machine Learning in Bariatric and Metabolic Surgery: Current Status and Future Perspectives.
- Author
-
Pantelis, Athanasios G., Stravodimos, Georgios K., and Lapatsanis, Dimitris P.
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,BARIATRIC surgery ,DEEP learning ,DATA analysis - Abstract
Artificial intelligence (AI) is a revolution in data analysis with emerging roles in various specialties and with various applications. The objective of this scoping review was to retrieve current literature on the fields of AI that have been applied to metabolic bariatric surgery (MBS) and to investigate potential applications of AI as a decision-making tool of the bariatric surgeon. Initial search yielded 3260 studies published from January 2000 until March 2021. After screening, 49 unique articles were included in the final analysis. Studies were grouped into categories, and the frequency of appearing algorithms, dataset types, and metrics were documented. The heterogeneity of current studies showed that meticulous validation, strict reporting systems, and reliable benchmarking are mandatory for ensuring the clinical validity of future research. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. The basics of data, big data, and machine learning in clinical practice.
- Author
-
Soriano-Valdez, David, Pelaez-Ballestas, Ingris, Manrique de Lara, Amaranta, and Gastelum-Strozzi, Alfonso
- Subjects
MACHINE learning ,BIG data ,MEDICAL informatics ,DATA science - Abstract
Health informatics and biomedical computing have introduced the use of computer methods to analyze clinical information and provide tools to assist clinicians during the diagnosis and treatment of diverse clinical conditions. With the amount of information that can be obtained in the healthcare setting, new methods to acquire, organize, and analyze data are being developed each day, including new applications in the world of big data and machine learning. In this review, we first present the most basic concepts in data science, including the structural hierarchy of information and how it is managed. A section is dedicated to discussing topics relevant to the acquisition of data, importantly the availability and use of online resources such as survey software and cloud computing services. Along with digital datasets, these tools make it possible to create more diverse models and facilitate collaboration. Afterward, we describe concepts and techniques in machine learning used to process and analyze health data, especially those most widely applied in rheumatology. Overall, the objective of this review is to aid in the comprehension of how data science is used in health, with a special emphasis on its relevance to the field of rheumatology. It provides clinicians with basic tools on how to approach and understand new trends in health informatics analysis currently being used in rheumatology practice. If clinicians understand the potential use and limitations of health informatics, this will facilitate interdisciplinary conversations and continued projects relating to data, big data, and machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Big Data Analytics by CrowdLearning: Architecture and Mechanism Design.
- Author
-
Zhan, Yufeng, Li, Peng, Wang, Kun, Guo, Song, and Xia, Yuanqing
- Subjects
ARCHITECTURAL design ,BIG data ,DEEP learning ,ACQUISITION of data ,DATA analysis ,TASK analysis - Abstract
Crowdsensing has emerged as a powerful tool to collect IoT big data. Moving big data to the cloud for analysis is time consuming and carries the risk of data privacy leakage. An alternative is to leave the training data distributed on mobile devices and learn a shared model by aggregating locally computed updates. In this article, we propose a CrowdLearning system, which employs mobile users (MUs) for big data collection and deep learning training. We propose a game-based incentive mechanism to optimize the utilities of MUs and the accuracy of the training model by exploiting the various sensing and training capabilities of MUs. Experiments have been conducted to evaluate the performance of the proposed CrowdLearning system, and the results validate the effectiveness of the proposed mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
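The aggregation idea in result 25, combining locally computed updates into a shared model without moving raw data to the cloud, can be illustrated with FedAvg-style weighted averaging. This shows the principle only and does not model the paper's game-based incentive mechanism; all values are hypothetical.

```python
import numpy as np

# Hypothetical local parameter vectors from three mobile users (MUs),
# each trained on a different amount of local data.
local_weights = [np.array([0.2, 1.1]),
                 np.array([0.4, 0.9]),
                 np.array([0.3, 1.0])]
n_samples = np.array([100, 300, 200])   # local dataset sizes

# Shared model = average of local models, weighted by local data size.
shared = np.average(np.stack(local_weights), axis=0, weights=n_samples)
print(shared)   # aggregated parameters; raw data never leaves the devices
```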
26. Large scale data based audio scene classification.
- Author
-
Sophiya, E. and Jothilakshmi, S.
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,DATA analysis ,BIG data ,ARTIFICIAL neural networks - Abstract
Artificial intelligence and machine learning have been used by many research groups for processing large-scale data, known as big data. Machine learning techniques that handle large-scale, complex datasets are computationally expensive. The Apache Spark machine learning library, Spark MLlib, is becoming a popular platform for big data analysis and is used for many machine learning problems such as classification, regression, and clustering. In this work, Apache Spark and the advanced machine learning architecture of a deep Multilayer Perceptron (MLP) are proposed for audio scene classification. Log mel band features are used to represent the characteristics of the input audio scenes. The parameters of the DNN are set according to the DNN baseline of the DCASE 2017 challenge. The system is evaluated with the TUT dataset (2017) and the result is compared with the provided baseline. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
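The log mel band features named in result 26 can be extracted with librosa as below. The 40-band setting follows the common DCASE baseline configuration but is an assumption here, as is the synthetic audio clip.

```python
import numpy as np
import librosa

sr = 44_100
audio = np.random.randn(sr * 2).astype(np.float32)   # 2 s placeholder "scene"

mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=2048,
                                     hop_length=1024, n_mels=40)
log_mel = librosa.power_to_db(mel)                   # log mel band energies
print(log_mel.shape)                                 # (40 bands, time frames)
```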
27. Deep learning based big medical data analytic model for diabetes complication prediction
- Author
-
R. Shanmugalakshmi and K. Vidhya
- Subjects
Data collection ,Computer science ,Medical record ,Deep learning ,Big data ,Feature extraction ,Machine learning ,Deep belief network ,Health care ,Data analysis ,Artificial intelligence - Abstract
The revolution in digitization makes the health care sector a prime source of big data. The analysis of these data could be a great supporting source for deriving new insights, which increases care and awareness about health. Diabetes, together with its complications, has been recognized worldwide as a chief public health threat. Predicting diabetic complications is considered a highly effective technique for augmenting the survival rate of diabetic patients. While many studies currently use medical images and structured medical records, very limited efforts have been dedicated to applying Data Mining (DM) techniques to unstructured textual medical records, for instance, admission and discharge records. Many DM techniques have been generated for predicting diabetic complications, but in existing methods the classification and prediction accuracy is not high. This paper therefore proposes a model centered on Deep Learning (DL) for predicting complications of Type 2 Diabetes Mellitus. The proposed model follows data collection, pre-training, feature extraction, Deep Belief Network (DBN), validation, and classification steps for predicting diabetic complications. Finally, the performance of the proposed DL-based Big Medical Data Analytics model using the DBN and the prevailing techniques are contrasted with respect to precision, accuracy, and recall. The training and testing process delineates the prevalence of risk with an accuracy of 81.20%. This realistic prediction model will be very useful for effectively managing diabetes.
- Published
- 2020
28. Machine Learning Models for Secure Data Analytics: A taxonomy and threat model
- Author
-
Sudeep Tanwar, Sudhanshu Tyagi, Rajesh Gupta, and Neeraj Kumar
- Subjects
Computer networks and communications ,End user ,Computer science ,Deep learning ,Big data ,Vulnerability ,Machine learning ,SQL injection ,Threat model ,Data analysis ,Artificial intelligence - Abstract
In recent years, rapid technological advancements in smart devices and their usage in a wide range of applications have exponentially increased the data generated by these devices. Traditional data analytics techniques may not be able to handle this extreme volume of data, known as Big Data (BD). However, this exponential increase of data opens the door for different types of attackers to launch various attacks by exploiting various vulnerabilities (SQL injection, OS fingerprinting, malicious code execution, etc.) during data analytics. Motivated by the aforementioned discussion, in this paper we explore Machine Learning (ML) and Deep Learning (DL)-based models and techniques which are capable of identifying and mitigating both known and unknown attacks. ML and DL-based techniques have the capability to learn from traffic patterns using training and testing datasets in extensive network domains to make intelligent decisions concerning attack identification and mitigation. We also propose a DL and ML-based Secure Data Analytics (SDA) architecture to classify normal or attack input data. A detailed taxonomy of SDA is abstracted into a threat model. This threat model addresses various research challenges in SDA using multiple parameters such as efficiency, latency, accuracy, reliability, and attacks launched by attackers. Finally, a comparison of existing SDA proposals with respect to various parameters is presented, which allows end users to select one of the SDA proposals on its merits over the others.
- Published
- 2020
29. ASN-ASAS SYMPOSIUM: FUTURE OF DATA ANALYTICS IN NUTRITION: Mathematical modeling in ruminant nutrition: approaches and paradigms, extant models, and thoughts for upcoming predictive analytics
- Author
-
Luis O Tedeschi
- Subjects
Sociology of scientific knowledge ,Systems analysis ,Big data ,Virtual representation ,Terminology ,Machine learning ,Mathematical modeling and simulation ,Computer simulation ,Systems thinking ,Data science ,Deep learning ,Prediction ,Ruminants ,Artificial intelligence ,Paradigm shift ,Data analysis ,Animal nutritional physiological phenomena - Abstract
This paper outlines typical terminology for modeling and highlights key historical and forthcoming aspects of mathematical modeling. Mathematical models (MM) are mental conceptualizations, enclosed in a virtual domain, whose purpose is to translate real-life situations into mathematical formulations to describe existing patterns or forecast future behaviors in real-life situations. The appropriateness of the virtual representation of real-life situations through MM depends on the modeler’s ability to synthesize essential concepts and associate their interrelationships with measured data. The development of MM paralleled the evolution of digital computing. The scientific community has only slightly accepted and used MM, in part because scientists are trained in experimental research and not systems thinking. The scientific advancements in ruminant production have been tangible but incipient because we are still learning how to connect experimental research data and concepts through MM, a process that is still obscure to many scientists. Our inability to ask the right questions and to define the boundaries of our problem when developing models might have limited the breadth and depth of MM in agriculture. Artificial intelligence (AI) has been developed in tandem with the need to analyze big data using high-performance computing. However, the emergence of AI, a computational technology that is data-intensive and requires less systems thinking of how things are interrelated, may further reduce the interest in mechanistic, conceptual MM. Artificial intelligence might provide, however, a paradigm shift in MM, including nutrition modeling, by creating novel opportunities to understand the underlying mechanisms when integrating large amounts of quantifiable data. Associating AI with mechanistic models may eventually lead to the development of hybrid mechanistic machine-learning modeling. Modelers must learn how to integrate powerful data-driven tools and knowledge-driven approaches into functional models that are sustainable and resilient. The successful future of MM might rely on the development of redesigned models that can integrate existing technological advancements in data analytics to take advantage of accumulated scientific knowledge. However, the next evolution may require the creation of novel technologies for data gathering and analyses and the rethinking of innovative MM concepts rather than spending resources in collecting futile data or amending old technologies.
- Published
- 2019
30. Research on Drug Response Prediction Model Based on Big Data
- Author
-
Minzhu Xie and Guijin Li
- Subjects
Computer science ,Deep learning ,Big data ,Precision medicine ,Machine learning ,Medical research ,Data analysis ,Personalized medicine ,Artificial intelligence ,Personal genomics - Abstract
Personalized medicine, also known as precision medicine, refers to a medical model of providing the best treatment plan for a patient according to his or her personal genomic information. The research and practice of personalized medicine have become a hot topic in current medical research, and predicting the response of cell lines to specific drugs is one of its core problems. Using computer algorithms to predict the responses of cell lines to drugs based on huge amounts of existing omics information is currently one focus of bioinformatics, and a variety of predictive methods have been proposed. The paper introduces the baseline analysis data, surveys some classical prediction methods and models, and details the application of matrix decomposition, heterogeneous networks, and deep learning to drug response prediction. Finally, existing problems and future development trends and prospects are discussed.
- Published
- 2021
31. Predicting Covid-19 Trajectory Using Machine Learning
- Author
-
Ibrahim, Abdullahi Abdu and Alwaeli, Zainab Abbas Abdulhussein
- Subjects
Coronavirus disease 2019 (COVID-19) ,Computer science ,Deep learning ,Big data ,Early detection ,Machine learning ,Trajectory ,Data analysis ,Artificial intelligence - Abstract
The pandemic caused by COVID-19 in 2020 had a devastating effect on the economy and health of the world population, whose social implications for the next few years are still uncertain. Two types of standard tests are used to detect COVID-19: the viral test, which indicates whether the patient is infected, and the antibody test, which allows us to observe whether the patient has previously had an infection. These tests employ techniques such as reverse transcription polymerase chain reaction (RT-PCR), immunochromatographic lateral flow or rapid tests, and ELISA-type immunoassays. In this paper we have designed and implemented a system whose main purpose is to detect the rise of COVID-19 cases using disruptive technologies such as artificial intelligence and intelligent computing, manifested through machine learning and deep learning. Combined with data science, Big Data, and advanced data analytics, among others that present various research and development options, it can help the early detection of COVID-19 through the search for relevant characteristics that allow the scientific community to identify biochemical, molecular, and cellular factors that facilitate the early detection of the virus in its different states of infection, incubation, and propagation, and the treatments to be used.
- Published
- 2020
32. Machine Learning Algorithms for Food Intelligence: Towards a Method for More Accurate Predictions
- Author
-
Giannis Stoitsis, Nikos Manouselis, Panagis Katsivelis, Mihalis Papakonstantinou, Ioanna Polychronou, Ioannis N. Athanasiadis, Steven P. Frysinger, Gerald Schimak, and Willem Jan Knibbe
- Subjects
Computer science ,Big data ,Machine learning ,Experimentation method ,Application context ,Deep learning ,Price prediction ,Analytics ,Data analytics ,Benchmark (computing) ,Data analysis ,Artificial intelligence - Abstract
It is evident that machine learning algorithms are widely impacting industrial applications and platforms. Beyond typical research experimentation scenarios, there is a need for companies that wish to enhance their online data and analytics solutions to incorporate ways in which they can select, experiment, benchmark, parameterise and choose the version of a machine learning algorithm that seems to be most appropriate for their specific application context. In this paper, we describe such a need for a big data platform that supports food data analytics and intelligence. More specifically, we introduce Agroknow's big data platform and identify the need to extend it with a flexible and interactive experimentation environment where different machine learning algorithms can be tested using a variation of synthetic and real data. A typical usage scenario is described, based on our need to experiment with various machine learning algorithms to support price prediction for food products and ingredients. The initial requirements for an experimentation environment are also introduced.
- Published
- 2020
33. On Present Use of Machine Learning based Automation in Finance
- Author
-
Vibha Tripathi
- Subjects
Computer science ,Deep learning ,Big data ,Machine learning ,Automation ,Data analysis ,Artificial intelligence ,Algorithmic trading ,Transfer of learning ,Financial services ,Interpretability - Abstract
In this paper, we survey the currently known applications of Machine Learning based Data Analytics and automation in the finance industry. We look into the challenges involved in furthering this technology, particularly in employing more Deep Learning approaches proven successful for automation in other domains. We enumerate observations on some of the barriers faced by the industry in effectively adopting and accelerating the use of AI techniques, and finally propose further areas that we believe could benefit from the application of Machine Learning.
- Published
- 2019
34. Artificial Intelligence and Big Data in Public Health
- Author
-
Kurt K. Benke and Geza Benke
- Subjects
Big data ,Vision ,Precision medicine ,Algorithms ,Artificial intelligence ,Visualization ,Public health ,Deep learning ,Social change ,Data mining ,Predictive analytics ,Wearable AI ,Machine learning ,Data analysis ,Epidemiology - Abstract
Artificial intelligence and automation are topics dominating global discussions on the future of professional employment, societal change, and economic performance. In this paper, we describe fundamental concepts underlying AI and Big Data and their significance to public health. We highlight issues involved and describe the potential impacts and challenges to medical professionals and diagnosticians. The possible benefits of advanced data analytics and machine learning are described in the context of recently reported research. Problems are identified and discussed with respect to ethical issues and the future roles of professionals and specialists in the age of artificial intelligence.
- Published
- 2018
35. Applications of Deep Learning for Smart Water Networks
- Author
-
Zheng Yi Wu, Sudipta Pathak, and Mahmoud El-Maghraby
- Subjects
Artificial neural network ,Computer science ,Deep learning ,Big data ,Distribution management system ,Sensor fusion ,Machine learning ,Data analysis ,Carbon footprint ,Artificial intelligence ,Time series - Abstract
Deep Learning (DL) is the state-of-the-art paradigm of Artificial Neural Network (ANN) computing. It is a new breakthrough in machine learning and differs from conventional or shallow learning algorithms by emulating the six-layer human neocortex, which is unique to the human brain and contains billions of interconnected neurons. Unlike a canonical ANN, DL is capable of self-learning data features by mimicking the layer-by-layer self-learning functions of the human cortex and creating a data-driven model from the given dataset. This paper reports initial applications of deep learning for the simulation, optimization, and operational control of water distribution systems. It elaborates the development of an efficient deep learning framework with potential applications in facilitating data fusion, system simulation and predictive analysis, detection of abnormal events from recorded time series data (pressures, flows, consumptions, etc.), water usage prediction, construction of a meta-model as a surrogate to the physics-based models (hydraulic and water quality), and acceleration of the solution search for smart water distribution management, which aims at improving operational efficiency, reducing the carbon footprint, and exceeding customers' expectations.
- Published
- 2015
- Full Text
- View/download PDF
36. A deep learning approach to flight delay prediction
- Author
-
Simon I. Briceno, Sun Choi, Dimitri N. Mavris, and Young Jin Kim
- Subjects
Machine translation ,Deep learning ,Big data ,Online machine learning ,Air traffic control ,Machine learning ,Traffic flow ,Recurrent neural network ,Data analysis ,Artificial intelligence - Abstract
Deep learning has achieved significant improvements in various machine learning tasks including image recognition, speech recognition, and machine translation. Inspired by the huge success of the paradigm, there have been many attempts to apply deep learning algorithms to data analytics problems with big data, including traffic flow prediction. However, there has been no attempt to apply deep learning algorithms to the analysis of air traffic data. This paper investigates the effectiveness of deep learning models in air traffic delay prediction tasks. By combining multiple models based on the deep learning paradigm, an accurate and robust prediction model has been built which enables an elaborate analysis of the patterns in air traffic delays. In particular, Recurrent Neural Networks (RNNs) have shown great accuracy in modeling sequential data. Day-to-day sequences of the departure and arrival flight delays of an individual airport have been modeled with the Long Short-Term Memory RNN architecture. It has been shown that the accuracy of the RNN improves with deeper architectures. In this study, four different ways of building a deep RNN architecture are also discussed. Finally, the accuracy of the proposed prediction model was measured, analyzed, and compared with previous prediction methods. It shows the best accuracy compared with all other methods.
- Published
- 2016
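A deep (stacked) LSTM over day-to-day delay sequences, one way of deepening the RNN discussed in result 36, can be sketched in Keras as below; sequence length, layer sizes, and the synthetic delay data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

seq_len = 30                          # 30 past days of average delay
X = np.random.rand(800, seq_len, 1)   # placeholder day-to-day delay sequences
y = np.random.rand(800)               # next-day delay to predict

model = keras.Sequential([
    layers.LSTM(32, return_sequences=True, input_shape=(seq_len, 1)),
    layers.LSTM(32),                  # second stacked recurrent layer
    layers.Dense(1),                  # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)
```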