154 results
Search Results
2. Impact of TCP-SYN Flood Attack in Cloud
- Author
-
Md. Ruhul Islam, Dhruba Ningombam, and Anurag Sharma
- Subjects
Computer science, Denial-of-service attack, Cloud computing, SYN flood, Computer security
- Abstract
This paper presents an experimental study of one particular category of denial-of-service (DoS) attack in a cloud computing network, the TCP SYN flood attack, and its effect on resource availability and cloud service factors. The attack ties up the general services and resources of the cloud, denying proper service to genuine users: the cloud server's resources are drained, so further incoming requests go unanswered and legitimate users are denied access. The assumption here is that the data transmitted between client and server is multimedia data. The experiments examine the various parameters affected by the attack.
- Published
- 2021
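A minimal sketch, not taken from the entry above, of how the resource exhaustion it describes can be observed on a Linux host: counting half-open TCP connections, which accumulate during a SYN flood. It assumes the psutil package is available, and the alert threshold is purely illustrative.

```python
# Count half-open (SYN_RECV) TCP connections as a rough SYN-flood indicator.
# Illustrative only; the threshold below is arbitrary and host-dependent.
import time
import psutil

THRESHOLD = 200  # assumed alert level, tune per server

def half_open_count() -> int:
    conns = psutil.net_connections(kind="tcp")
    return sum(1 for c in conns if c.status == psutil.CONN_SYN_RECV)

while True:
    n = half_open_count()
    if n > THRESHOLD:
        print(f"possible SYN flood: {n} half-open connections")
    time.sleep(5)
```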
3. A Meteorological Public Opinion Method Research Base on Deep Random Forest
- Author
-
Sheng Yan, Feng Zhou, Xiaonan Hu, Lina Zhu, and Xia Fang
- Subjects
Computer science, Public opinion, Random forest, Word2vec, Data mining, Natural disaster, The Internet
- Abstract
In recent years, Weibo posts and news commentary on the Internet have grown massively, making it very difficult to analyze public opinion accurately from such data. This paper proposes a new classification method for meteorological public opinion based on a deep random forest, addressing the difficulty of public opinion analysis and the scarcity of data features in Weibo meteorological data. The method analyzes Weibo comment data on Sina Weibo's meteorological topics to judge the tendency of online meteorological public opinion when natural disasters occur. First, meteorological commentary data are collected; then word2vec is used to vectorize the comments; finally, the vectors are classified by the deep random forest algorithm, with the classifier continuously optimized until the training set reaches a set size threshold. The experimental results show that the proposed method judges the public opinion tendency of online meteorological comments better than traditional classification methods.
- Published
- 2021
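A compact sketch of the pipeline outlined in the entry above, with a two-level cascade of random forests standing in for the deep (gcForest-style) forest. The gensim and scikit-learn calls are standard, but the toy corpus, labels, layer count and forest sizes are assumptions, not the paper's settings.

```python
import numpy as np
from gensim.models import Word2Vec          # gensim >= 4 API (vector_size=...)
from sklearn.ensemble import RandomForestClassifier

# tokenized Weibo comments and sentiment labels (placeholders)
comments = [["大雨", "出行", "困难"], ["天气", "晴朗", "舒适"]]
labels = np.array([0, 1])

# 1) word2vec embedding; a comment vector is the mean of its word vectors
w2v = Word2Vec(sentences=comments, vector_size=100, window=5, min_count=1)
X = np.array([np.mean([w2v.wv[t] for t in c], axis=0) for c in comments])

# 2) simplified "deep forest": each level appends class probabilities
#    from the previous level's forest to the original feature vector
features, forests = X, []
for level in range(2):                       # cascade depth is an assumption
    rf = RandomForestClassifier(n_estimators=200).fit(features, labels)
    forests.append(rf)
    features = np.hstack([X, rf.predict_proba(X)])

# prediction with the last level (shown on training data for brevity)
pred = forests[-1].predict(np.hstack([X, forests[-2].predict_proba(X)]))
```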
4. Employee Attrition Prediction Using Machine Learning Algorithms
- Author
-
Lok Sundar Ganthi, Yaswanthi Nallapaneni, Deepalakshmi Perumalsamy, and Krishnakumar Mahalingam
- Subjects
Artificial neural network, Computer science, Decision tree, Machine learning, Random forest, Statistical classification, Attrition, Artificial intelligence, Precision and recall
- Abstract
In any corporation, if a significant number of employees leave their job with a short notice period, it may lead to a reduction in overall throughput which in turn will certainly have an impact on the turnover. Companies need to spend additional efforts in terms of time and cost to fill up the vacant position without any substantial loss to the ongoing business. To avoid these situations, we can use machine learning techniques to predict employees who are planning to leave the company with the help of some related data. One more way is to identify the features which inspire employees to leave their job. Refining such features in the company also will result in reducing the employee attrition rate of the company. In this paper, we attempted to predict employee attrition rate using the classification algorithms, namely Decision tree, Random forest, K-Nearest Neighbourhood, Neural Networks, extreme gradient boosting and Ada-Boosting. We also have applied regularization for every algorithm to find the precise parameters to predict the employee’s attrition rate considering the HR-data set from the Kaggle website which consists of 35 features including 34 independent and one dependent feature which is our attrition feature with Yes/No values in it. In this paper, we are going through different steps to finally obtain an accuracy of 88% with good precision and recall values.
- Published
- 2021
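A minimal scikit-learn version of the classifier comparison described in the entry above. The file name, preprocessing and hyperparameters are assumptions (the public Kaggle HR attrition data has an `Attrition` column with Yes/No values), and gradient boosting stands in for XGBoost.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("HR-Employee-Attrition.csv")          # assumed file name
y = (df["Attrition"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Attrition"]))     # one-hot encode categoricals

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=6),
    "random_forest": RandomForestClassifier(n_estimators=300),
    "knn": KNeighborsClassifier(n_neighbors=7),
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    "gradient_boosting": GradientBoostingClassifier(),   # stand-in for XGBoost
    "adaboost": AdaBoostClassifier(),
}
for name, clf in models.items():
    p = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, p), precision_score(y_te, p), recall_score(y_te, p))
```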
5. A Comprehensive Survey on Content-Based Image Retrieval Using Machine Learning
- Author
-
Milind V. Lande and Sonali Ridhorkar
- Subjects
Feature engineering, Computer science, Feature extraction, Machine learning, Content-based image retrieval, Convolutional neural network, Automatic image annotation, Artificial intelligence, Image retrieval
- Abstract
Retrieving the image a user needs from a computer vision system remains difficult. Over the last two decades, several studies have sought to improve the efficiency of automated image annotation, mainly concentrating on content-based image retrieval (CBIR), which attempts to identify images in a large dataset that are close to a query image. Various hybrid feature descriptors built on image cues such as color, texture and shape have been examined. For more than a decade, machine learning has been gaining traction as a viable alternative to hand-designed feature engineering, and feature extraction techniques and mathematical models are used to improve the performance and reduce the complexity of the retrieval process; the existing AlexNet convolutional neural network classifier, for instance, has been used to increase retrieval accuracy on the Corel and Wang datasets. Present state-of-the-art methods are described from different viewpoints for a deeper understanding of this development. This survey provides a succinct overview of recent developments in CBIR, hybrid feature extraction methods, and machine learning-based enhancements for content-based image retrieval.
- Published
- 2021
6. Machine Learning, Deep Learning and Image Processing for Healthcare: A Crux for Detection and Prediction of Disease
- Author
-
Charu Chhabra and Meghna Sharma
- Subjects
Decision support system, Computer science, Deep learning, Decision tree, Machine learning, Health informatics, Random forest, Support vector machine, Naive Bayes classifier, Statistical classification, Artificial intelligence
- Abstract
Machine learning has rapidly gained traction in a variety of fields, including science, healthcare, engineering, and biotechnology, in recent years owing to its effective functioning mechanism. Health care has always been a key priority for any government as the industry has made significant progress by using machine learning, artificial intelligence, and deep learning for disease prediction and diagnosis. The central aspect of this paper is to evaluate different machine learning algorithms and classification techniques in order to detect and predict different chronic diseases. The paper discusses supervised classification strategies for detecting diseases such as cancer, psychological disorders, and cardiac disorders, as well as various bioinformatics and biomedical research challenges. The comparison between classification techniques like support vector machines, logistic regression, decision trees, random forest, and Naive Bayes classifiers has been observed in numerous diseases. The algorithms that are specifically applied in the medical applications and in healthcare sector enabled the clinical experts and physicians to watchdog, diagnose, and monitor the disease effectively and perform appropriate measures in the shortest possible duration. Decision support systems benefit the physicians for effective and timely decision-making capabilities in case of chronic diseases. The paper reviews the implementation of machine learning and deep learning which has undoubtedly contributed toward health informatics, healthcare systems including bioinformatics. After discussing the techniques and comparison of numerous classification algorithms in several diseases, the ones which have efficiently produced an appropriate result in an early detection of diseases have been highlighted. In order to emphasize the issues that must be considered while implementing the methodologies and classification algorithms for an early illness detection system, several future directions are described.
- Published
- 2021
7. Stress Prediction Using Machine Learning and IoT
- Author
-
Soham Taneja, Bhawna Gupta, Preeti Nagrath, Paras Gupta, Vividha, and Drishti Agarwal
- Subjects
Heartbeat, Computer science, Stress, Stressor, Logit, Probit, Ordered logit, Machine learning, Random forest, Artificial intelligence
- Abstract
Stress is a mental condition that affects every aspect of life, leading to sleep deprivation and various other diseases, so it is necessary to analyse one's vitals to stay updated about one's mental health. This paper presents an effective method for detecting cognitive stress levels using data from a physical activity tracker device. The main goal of this system is to use sensor technology to detect stress with a machine learning approach. The impact of each stressor is assessed individually using ML models, followed by the construction of a neural network model and assessment using ordinal logistic regression models such as logit, probit and complementary log-log. The work uses heart rate as one of the features to recognise stress, and the Internet of Things (IoT) and machine learning (ML) to raise an alert when the person is in real danger: the patient's condition is predicted using machine learning, and an acute stress condition is relayed using IoT. Based on the heartbeat, a prediction of whether a person is under stress can be made. The paper presents a model that can predict stress levels entirely from electrocardiogram (ECG) data, which can be measured with consumer-grade heart monitors. The ECG's spectral power components, as well as time- and frequency-domain features of heart rate variability, are included in the model. The stress detector takes real-time data from the IoT device (sensor), applies a machine learning model to detect stress levels in an individual, and informs or alerts the individual about their stress condition. The system is tested and evaluated on collected data in a real-time environment with different machine learning models. Finally, a comprehensive comparative analysis of the applied models is presented, with the Random Forest classifier showing the highest accuracy. The novelty of this work lies in keeping the stress detection framework as unobtrusive to the user as possible.
- Published
- 2021
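A rough sketch of the feature side described in the entry above: time-domain heart-rate-variability measures computed from RR intervals and fed to a random-forest classifier. The interval data, the binary stress labels and the feature choice are placeholders, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hrv_features(rr_ms: np.ndarray) -> list:
    """Time-domain HRV features from RR intervals in milliseconds."""
    diff = np.diff(rr_ms)
    mean_hr = 60000.0 / np.mean(rr_ms)          # beats per minute
    sdnn = np.std(rr_ms, ddof=1)                # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))         # short-term variability
    pnn50 = np.mean(np.abs(diff) > 50) * 100    # % of successive diffs > 50 ms
    return [mean_hr, sdnn, rmssd, pnn50]

# one RR-interval window per recording session (placeholder data)
windows = [np.random.normal(800, 50, 300), np.random.normal(650, 20, 300)]
labels = [0, 1]                                  # 0 = relaxed, 1 = stressed (assumed)

X = np.array([hrv_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```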
8. Concept of Hybrid Models in Background Subtraction: A Review of Recent Trends
- Author
-
Saumya Maurya and Mahipal Singh Choudhry
- Subjects
Background subtraction, Computer science, Motion estimation, Dynamic mode decomposition, Descriptive statistics, Data mining
- Abstract
Background subtraction (BGS) is a widely used technique in the field of computer vision for non-stationary object identification and tracking, especially in video surveillance. Hybrid models are one of the many types of approaches that can be found in the BGS literature as a result of extensive ongoing studies. This paper provides a comprehensive analysis of some of the most recent hybrid models in the BGS literature. Hybrid models are created by combining two or more models, allowing them to benefit from each other's strengths while overcoming the weaknesses of the original models. In this paper, some of the recently developed hybrid models like Hierarchical Modeling and Alternating Optimization (HMAO), randomized dynamic mode decomposition (rDMD), Adaptive Motion Estimation and Sequential Outline Separation (AME + SOS), etc. are reviewed based on their algorithms, datasets, challenges, limitations, and advantages. Descriptive analysis is done using a tabular form of review for a clear and easy understanding in addition to the comparative analysis which is performed based on f-m values of the models for a video sequence from the very popular CDnet dataset. Concluding remarks point towards the future direction of research.
- Published
- 2021
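For readers new to the topic of the review above, a baseline (non-hybrid) background subtractor using OpenCV's MOG2 implementation; the hybrid models the review covers build on this kind of per-pixel model. The video path and morphology step are placeholders.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")                     # placeholder video file
bgs = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgs.apply(frame)                               # 255 = foreground, 127 = shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:                       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```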
9. Electric Motor Drive Using Single Input Fuzzy Logic Controller for Husk Extraction in Rice Mill Industry
- Author
-
A. Jagadeesh, K. Sireesha, and K. Deepa
- Subjects
Electric motor, Electronic speed control, Computer science, Control theory, Boost converter, PID controller, MATLAB, DC motor, Fuzzy logic
- Abstract
This paper deals with a Single Input Fuzzy Logic Controller (SFLC) deployed to control the speed of a DC motor used in rice mills for separating the husk layer surrounding the paddy, so that the rice grain retains its proper shape and size. The system draws power from a photovoltaic (PV) array through a boost converter to meet the DC motor's specifications. Conventional schemes such as the Proportional-Integral-Derivative (PID) controller can also be used for DC motor speed control, but a PID controller requires intricate mathematical models, whereas a Fuzzy Logic Controller (FLC) operates on rule-based knowledge. This paper illustrates the fuzzy control technique with a single input variable, with which the DC motor's speed can be tuned to a reasonable extent and settles smoothly. In summary, the paper demonstrates speed regulation with both controllers (SFLC and two-input FLC) designed for the DC motor, and the results are discussed using Simulink in MATLAB.
- Published
- 2021
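A toy illustration of the single-input idea from the entry above: the two usual inputs (error and change of error) are collapsed into one signed distance, passed through triangular membership functions and a one-dimensional rule table. The membership breakpoints, normalisation span and output gains are assumptions, not the paper's tuned values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b on support [a, c]."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def sflc_step(error, d_error, lam=0.5):
    # single input: signed distance combining error and its derivative
    d = (error + lam * d_error) / np.sqrt(1 + lam ** 2)
    d = np.clip(d / 100.0, -1.0, 1.0)                 # normalise (100 rpm span assumed)
    # memberships: negative, zero, positive
    mu = [tri(d, -2, -1, 0), tri(d, -1, 0, 1), tri(d, 0, 1, 2)]
    out = [-1.0, 0.0, 1.0]                            # rule consequents: duty-cycle change
    return sum(m * o for m, o in zip(mu, out)) / (sum(mu) + 1e-9)

# example: motor 80 rpm below the set point and still slowing down
print(sflc_step(error=80.0, d_error=5.0))
```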
10. Analysis of Web Application Firewalls, Challenges, and Research Opportunities
- Author
-
Subhash Pingale and Sanjay R. Sutar
- Subjects
Password, Computer science, Cross-site scripting, Intrusion detection system, Man-in-the-middle attack, Computer security, SQL injection, Web application, Malware, Application firewall
- Abstract
According to a survey from January 2020, almost 1,295,973,827 websites are hosted on the Internet, and about 72% of them are vulnerable to attacks such as SQL injection, cross-site scripting, brute forcing, phishing, password attacks, birthday attacks, malware, and man-in-the-middle attacks. Nowadays many attackers try to bypass security mechanisms with new techniques, so responding to new and unknown attacks with an effective solution is a challenging task. This paper examines the techniques, tools, and solutions used to detect such attacks, including intrusion detection systems (IDS), web application firewalls (WAF), and machine learning (ML) techniques, analysing traditional technologies and their drawbacks in order to produce more effective solutions. The paper compares different web application firewalls with respect to their policy control and proposes a new web application firewall, for which three approaches are discussed.
- Published
- 2021
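A tiny signature-based check of the kind a rule-driven WAF applies to incoming request parameters (the entry above also covers IDS and ML-based alternatives). The patterns are illustrative and far from complete, and a real WAF inspects many more request parts.

```python
import re

RULES = {
    "sql_injection": re.compile(r"(\bunion\b.+\bselect\b|\bor\b\s+1=1|--|\bdrop\b\s+\btable\b)", re.I),
    "xss": re.compile(r"(<script\b|onerror\s*=|javascript:)", re.I),
    "path_traversal": re.compile(r"(\.\./|\.\.\\)"),
}

def inspect(params: dict) -> list:
    """Return (parameter, rule) pairs that look malicious under the toy rules."""
    hits = []
    for name, value in params.items():
        for rule, pattern in RULES.items():
            if pattern.search(str(value)):
                hits.append((name, rule))
    return hits

print(inspect({"user": "admin' OR 1=1 --", "q": "<script>alert(1)</script>"}))
```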
11. A Novel Integrated Teaching, Learning and Practicing Mode for Navigation and Control Curriculum
- Author
-
Pengbo Wang, Xiaojun Yu, Jianxin Ren, and Zeming Fan
- Subjects
Multimedia, Computer science, Teaching method, Information technology, Curriculum
- Abstract
With the rapid advancements of various information technologies in recent years, many new forms of e-learning schemes have emerged. Due to the various issues with such e-learning schemes, however, the commonly adopted teaching method is still the traditional face-to-face classroom in practice. In such a teaching mode, the three key elements of all teaching modes, i.e., theory, experiment, and practicing, are separate and independent of each other, which largely degrades the learning efficiency. To address such a practical issue, based on the objectives of professional personnel training for navigation and control, this paper presents a curriculum teaching model that integrates the three learning key elements together. Specifically, this paper shows the architecture of the curriculum construction mode first, and then presents the composition and function of the student dimension, teacher platform dimension, laboratory server dimension and laboratory equipment dimension in the navigation and control course group, respectively. Finally, with the course of “principle of inertial sensor” as a case study, this paper introduces the teaching process of using the teaching mode, which integrates the three key teaching elements together in the course. The practical teaching results show that this proposed teaching mode could not only make the teaching process of engineering courses intuitive and easy to understand, but also enable students to participate in the theorical, experimental, practical teaching and learning process throughout the presented mode. This mode could largely enhance both students’ learning enthusiasm and the efficiency.
- Published
- 2021
12. Aero-Engine Performance Evaluation and Prediction Based on FDR Data (Flight Data Recorder)
- Author
-
Pan Wang, Kui Liu, and Zijie Tang
- Subjects
Computer science, Performance prediction, Data mining, Aero engine, Flight data
- Abstract
This paper studies aero-engine performance evaluation and prediction using flight data recorder (FDR) data in practice. It establishes a performance evaluation and prediction process based on multiple measured parameters, comprising three parts: sample construction, performance evaluation and performance prediction. The paper proposes a distance-based method to quantify aero-engine performance from multiple parameters during evaluation, and then builds the performance prediction model with an artificial neural network (ANN), which predicts aero-engine performance accurately.
- Published
- 2021
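A hedged sketch of the two stages described in the entry above: a distance from a healthy baseline condenses several monitored parameters into one performance index, and a small neural network predicts the next value of that index from a sliding window. The parameter names, window length and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# FDR-style parameters per flight: [EGT, fuel flow, N2, vibration] (placeholders)
flights = np.random.normal([600, 2400, 95, 1.2], [15, 60, 1.5, 0.1], size=(200, 4))

baseline = flights[:20].mean(axis=0)            # healthy reference condition
scale = flights[:20].std(axis=0)
index = np.linalg.norm((flights - baseline) / scale, axis=1)   # distance-based index

# predict index[t] from the previous 5 flights
W = 5
X = np.array([index[i - W:i] for i in range(W, len(index))])
y = index[W:]
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X, y)
print("next-flight index estimate:", model.predict(index[-W:].reshape(1, -1))[0])
```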
13. A Survey of Recent Abstract Summarization Techniques
- Author
-
Diyah Puspitaningrum
- Subjects
Computer science, Automatic summarization, Indonesian, Artificial intelligence, Natural language processing
- Abstract
This paper surveys several recent abstract summarization methods: T5, Pegasus, and ProphetNet. We implement the systems in two languages: English and Indonesian languages. We investigate the impact of pre-training models (one T5, three Pegasuses, three ProphetNets) on several Wikipedia datasets in English and Indonesian language and compare the results to the Wikipedia systems’ summaries. The T5-Large, the Pegasus-XSum, and the ProphetNet-CNNDM provide the best summarization. The most significant factors that influence ROUGE performance are coverage, density, and compression. The higher the scores, the better the summary. Other factors that influence the ROUGE scores are the pre-training goal, the dataset's characteristics, the dataset used for testing the pre-trained model, and the cross-lingual function. Several suggestions to improve this paper's limitation are: (1) assure that the dataset used for the pre-training model must be sufficiently large, contains adequate instances for handling cross-lingual purpose; (2) advanced process (fine-tuning) shall be reasonable. We recommend using the large dataset consisting of comprehensive coverage of topics from many languages before implementing advanced processes such as the train-infer-train procedure to the zero-shot translation in the training stage of the pre-training model.
- Published
- 2021
14. Designing an Intelligent Agents for E-Bookstore System Web-Based System
- Author
-
Bassant Mohamed Elbagoury, Mostafa Aref, and Waleed Hassanin
- Subjects
Computer science, Multi-agent system, Autonomous agent, Complex system, Intelligent decision support system, Intelligent agent, Web application, Software engineering
- Abstract
Artificial Intelligence (AI) emerged in computer science to solve complex problems involving cooperation, distribution, and communication, and it spans many research areas for developing complex systems. Multi-Agent Systems (MAS) are one approach that uses autonomous actors which act on their own, based on user requirements, to solve problems. Agent-Oriented Programming (AOP) is a programming paradigm developed for building such autonomous agents and multi-agent systems. An agent in a MAS acts autonomously, without human or system intervention, taking automated actions to reach its goal, and the Belief-Desire-Intention (BDI) architecture is used to define its behavior in Agent-Oriented Programming. This paper proposes an intelligent multi-agent system for an e-bookstore whose agents communicate with each other using Foundation for Intelligent Physical Agents (FIPA) standards. The paper also reports the search performance of this agent communication on an Amazon dataset and evaluates the quality of the e-bookstore system's intelligent agents.
- Published
- 2021
15. An Interview Transcriber Using Natural Language Processing
- Author
-
G. R. Deeba Lakshmi, Anshika Shukla, Rahul, and Jayavrinda Vrindavanam
- Subjects
Interview, Computer science, Word2vec, Artificial intelligence, Natural language processing
- Abstract
During the challenging times of COVID-19, major shifts in learning and interaction have been observed, especially in education and on platforms for wider interaction such as seminars, and the online medium has become the order of the day. In this context, this paper looks at a new challenge faced by institutions when carrying out interviews or interactions online or over the telephone: the recruiter might miss certain points when evaluating a candidate, the finer details of an interview may be lost in communication, and the interviewer may not be able to recall the details of a particular interviewee at a later point in time. These challenges ultimately reduce the effectiveness of an interview process. We are developing a platform that extracts the essential information delivered by the interviewee from the data captured during the process. The work uses Natural Language Processing (NLP) to extract key information from an interaction, identifying important features and key points with suitable algorithms built on language dependencies. The paper also addresses the outcome of the interview, i.e., whether the candidate was selected based on her/his answers and on how similar those answers are to the description the company provides of what it is looking for in an employee.
- Published
- 2021
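One plausible way to pull key points out of a transcribed answer, loosely following the dependency-and-feature idea in the entry above. It assumes spaCy and its `en_core_web_sm` model are installed, and the frequency-based ranking heuristic is an assumption.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # install with: python -m spacy download en_core_web_sm

def key_points(answer: str, top_k: int = 5):
    doc = nlp(answer)
    # candidate phrases: named entities plus non-stopword noun chunks
    candidates = [ent.text.lower() for ent in doc.ents]
    candidates += [chunk.text.lower() for chunk in doc.noun_chunks
                   if not chunk.root.is_stop]
    return Counter(candidates).most_common(top_k)

answer = ("I led a three-person team at Infosys, built REST services in Java "
          "and reduced deployment time by forty percent using Jenkins pipelines.")
print(key_points(answer))
```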
16. Analysis of Healthcare Industry Using Machine Learning Approach: A Case Study in Bengaluru Region
- Author
-
Poornima Taranath, S. Gowrishankar, and Sweta Das
- Subjects
Computer science, Big data, Machine learning, Health informatics, Health care, Healthcare industry, Artificial intelligence, Web scraping
- Abstract
The huge collection of data under the domain of health informatics has always been of crucial importance in giving insights into human health and its sundry causes. With technology rising day after day, this data can be visualized under different lights which are depicted in the following paper. Data analysis is the answer to the challenges of the healthcare industry because of the plasticity offered in implementing its techniques in various frameworks and technologies. A notion about Machine learning and its association with big data has also been discussed here. The Machine learning techniques have always made analysis better; with a similar analogy, the paper gives a glimpse of ameliorating the patient’s lives who are looking for healthcare facilities in the Bengaluru region.
- Published
- 2021
17. Management of IoT Devices Security Using Blockchain—A Review
- Author
-
Rachana Yogesh Patil, Hrishikesh Nikam, Omkar Loka, Gaurav Pattewar, and Nachiket Mahamuni
- Subjects
Consensus algorithm, Information privacy, Blockchain, Computer science, The Internet, Data loss, Computer security, Internet of Things
- Abstract
Internet of things (IoT) refers to a network where the devices included in that network are connected with each other through a common medium, in this case, the Internet, in order to share or exchange data with other devices in the network. The paper concentrates on Management of Security of these devices in the IoT network along with their maintenance, accessibility, etc. The main problems faced are data leakage, data alteration/modification, access to private data or important transactions, data loss, etc. This review paper refers to various ways to improve the existing IoT system with the use of different consensus algorithms and techniques. It also covers the security and data privacy of systems like smart homes and smart cities through modified blockchain systems.
- Published
- 2021
18. Language and Era Prediction of Digitized Indian Manuscripts Using Convolutional Neural Networks
- Author
-
Tejsvi Juj, Anukriti Garg, Laghima Tiwari, N. Jayanthi, and S. Indu
- Subjects
Computer science, Image processing, Convolutional neural network, Writing style, Indian language, Artificial intelligence, Natural language processing
- Abstract
With an increasing number of Indian manuscripts being digitized, the subject of their era prediction is readdressed to interpret the socio-economic fabric of different periods. This paper describes a novel approach to estimate the era of Indian manuscripts from their scanned images using convolutional neural networks (CNN). The method primarily uses image processing to harness visual features from small image patches and classifies them based on the difference in writing styles in terms of strokes and letter formation. We follow a two-step approach of language prediction followed by a separate era prediction model for each language to achieve optimal results. For this paper, we restrict consideration to six Indian language manuscripts written between the sixteenth and twentieth centuries. Conclusively, our model outperforms other well-known architectures and gave over 90% and 80% accuracy on the training and validation data, respectively.
- Published
- 2021
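A simplified Keras version of the first stage described in the entry above (language prediction from small scanned patches); the per-language era model would take the same shape with era classes as outputs. The patch size and layer sizes are assumptions; only the six-language output follows the description.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_LANGUAGES = 6
PATCH = 64                    # assumed patch size in pixels

language_cnn = keras.Sequential([
    layers.Input(shape=(PATCH, PATCH, 1)),          # grayscale manuscript patch
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_LANGUAGES, activation="softmax"),
])
language_cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
# language_cnn.fit(train_patches, train_language_ids, validation_split=0.1, epochs=20)
```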
19. Prediction and Classification of Cardiac Arrhythmia
- Author
-
Hema Raut, Aashuli Gupta, Arnob Banerjee, Kunal Lotlikar, and Disha Babaria
- Subjects
Computer science, Decision tree, Cardiac arrhythmia, Machine learning, Neural network classifier, Support vector machine, Artificial intelligence, User interface, MATLAB
- Abstract
With the advancement of cutting-edge medical technologies, many methods, including machine learning, have been applied to medical problems. Cardiac arrhythmia is a common condition that can be addressed with various machine learning approaches, and many methods have already been introduced for arrhythmia classification and abnormality detection. This paper introduces both supervised and unsupervised models, of which the supervised models produce good classification results. We also introduce a deep neural network classifier used to predict whether arrhythmia is present based on predefined values, and connect it to a user interface through which end users can check their arrhythmia level.
- Published
- 2021
20. IOT-Based Remote Patient Monitoring System Using Microservices Architecture
- Author
-
C. D. Prajwal, K. S. Shreyas, B. A. Sujatha Kumari, Manoj Kumar, and M. S. Skanda
- Subjects
Computer science, Remote patient monitoring, Microservices, Computer security, System requirements, Scalability, Web application, Medical prescription
- Abstract
Medical and healthcare monitoring systems have grown rapidly, not only in hospitals but in other health-related centers as well. A wrong prescription can lead to the death of a patient, and unnecessary medication is a burden on guardians or caretakers. The Internet of Things (IoT) allows a variety of devices to interact with each other to enable monitoring, storage, analysis and display. This paper explores developments in patient monitoring systems aimed at avoiding wrong and unnecessary prescriptions. Through this research we found that selecting the IoT protocol for the system design is important and depends on requirements such as the number of data-sending nodes (data producers), the network traffic hitting the server, scalability, network latency and real-time constraints. The paper proposes the design of a real-time patient monitoring system with a microservices architecture to send patients' data from the ICU to doctors and guardians. The proposed system also helps reduce unnecessary prescriptions and billing.
- Published
- 2021
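An illustrative data-producer node for the protocol-selection question raised in the entry above, using MQTT as one common lightweight choice (the paper weighs several protocols). The broker address, topic and payload fields are placeholders, and the constructor below assumes paho-mqtt 1.x (2.x requires a callback API version argument).

```python
import json, time, random
import paho.mqtt.client as mqtt

client = mqtt.Client()                         # paho-mqtt 1.x style constructor
client.connect("broker.example.local", 1883)   # placeholder broker address
client.loop_start()

while True:
    vitals = {
        "patient_id": "ICU-07",                # placeholder identifier
        "heart_rate": random.randint(60, 110),
        "spo2": random.randint(92, 100),
        "ts": time.time(),
    }
    client.publish("hospital/icu/vitals", json.dumps(vitals), qos=1)
    time.sleep(2)
```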
21. Security Magnification in Supply Chain Management Using Blockchain Technology
- Author
-
Bharat Bhushan, Lucky Katiyar, Abhishek Kumar, and Anushka
- Subjects
Supply chain management, Blockchain, Smart contract, Computer science, Supply chain, Data integrity, Data breach, Computer security
- Abstract
In supply chain management, data transparency is essential because it creates trust between retailers and customers, but data managed by centralized controllers faces several vulnerabilities and security threats such as data breaches and loss of confidentiality. Blockchain, a digital and distributed ledger, has become very popular in recent years due to its security, immutability, and data transparency. It addresses many of these challenges, for example keeping data secure with cryptographic algorithms, and serves as a decentralized ledger for recording, managing, storing, and transmitting data in a peer-to-peer network. This paper provides a brief survey of how blockchain strengthens security in supply chain operations and indicates the challenges encountered during integration. The work presents a descriptive study of past literature on blockchain for intensifying security in supply chain operations by examining the features the technology provides. Further, the paper offers insight into how blockchain is transforming business by providing safe and automated solutions, and highlights the motivation for using blockchain in supply chain management. It also investigates how leveraging blockchain can help overcome vulnerabilities and avoid fraudulent activities in the traditional supply chain. Finally, the paper highlights uses of blockchain-based business and enumerates related future research directions.
- Published
- 2021
22. Study of Micro-Strip Patch Antenna for Applications in Contact-less Door Bell Looking at the COVID-19 Pandemic Situation
- Author
-
Maitreyi Ray Kanjilal, Moumita Mukherjee, Arnima Das, and Arpita Santra
- Subjects
Patch antenna, Coronavirus disease 2019 (COVID-19), Computer science, Pandemic, Doorbell, Antenna (radio), Computer security
- Abstract
As technology advances, modern lifestyles advance with it. The doorbell plays an important role in home safety; it is one of the competent and reliable systems that needs to be developed for better safety at low cost, and many doorbell systems performing different operations already exist. This paper focuses on a touchless automatic doorbell system that rings the bell automatically when a visitor approaches the door. Given the spread of the COVID-19 pandemic, such a system is one of the safety measures that can be taken against the coronavirus, as people have become more careful about their everyday activities and their families. In 2020 the whole world was trapped in an unprecedented pandemic that took away our normal lifestyle, and research is ongoing both to control the situation and to find a new way of life. In this work, the authors aim to establish a contactless door alarm for household use. The motivation is that with a conventional door alarm every arriving person must touch the device, whereas replacing it with an antenna-based design solves the issue with a contactless alarm. The proposed antenna has been designed using the HFSS software.
- Published
- 2021
23. Smart Irrigation Monitoring System for Multipurpose Solutions
- Author
-
Nikhila M. Santhoshlal, Vykha Pradeep, Vipina Valsan, and Krishna Rajesh
- Subjects
Irrigation, Computer science, Compost, Agricultural engineering, Application software, Natural resource, Water conservation, Agriculture, Water content, Vermicompost
- Abstract
The last two decades of the Information Age have been characterized by widespread proliferation of the Internet of things (IoT) technology. Besides diverse applications in various consumer, industrial, agriculture, and health care, IoT has enabled solutions for better management of natural resources. This paper illustrates the productive use of the IoT concept to automate the irrigation of vermicompost, encompassing three prime domains—waste management, smart irrigation, and (mobile) app. The soil moisture content and temperature of the compost bed are critical factors that delimit the earthworms’ life expectancy. This paper includes an intelligent monitoring system for effective irrigation of the compost bed, at precise time intervals. The irrigation status and the compost bed's moisture content can be monitored ubiquitously through the Amrita Sparsham mobile application software, minimizing human intervention, facilitating water conservation. Thus, the multipurpose solution is the convergence of Amrita waste management through vermicompost and the automated smart irrigation system monitored using Amrita Sparsham mobile application.
- Published
- 2021
24. Deep Learning-Based Legal System Architecture for Africa: An Architectural Study
- Author
-
V. Lakshmi Narasimhan, Moemedi Lefoane, and L. Rajesh
- Subjects
Glossary, Computer science, Information processing, Data dictionary, Data science, Metadata, Information extraction, Knowledge base, Systems architecture
- Abstract
Legal information processing attracts attention from a number of organizations globally which includes research institutions; specific areas of interest ranges from representation of legal data, such as prior court cases, for countries which adopt common law system. These legal datasets typically include legislative acts or statutes, which vary from one country to another. Mining this legal data in order to extract useful information poses formidable challenges, which include coming up with ways for storing datasets, which are continuously generated and growing exponentially every year. Additional challenges include keeping up with amendments of the statutes and invalidating old statutes. This paper details a system architecture containing several key subsystems toward design and development of Legal Humanities for Africa, which is digital, query-able and easy to use and navigate by both lay user and experienced professionals. The Legal Humanities of Africa architecture has three modules, namely, knowledge base, knowledge engine and HCI module. The knowledge base handles the legal data dictionary, glossary and metadata in a domain-specific manner, while the knowledge engine handles processing data from the legal cases or statutes. Various approaches to computational linguistics, such as natural language processing or information extraction, have been used for natural language processing, including finding part of speech in text that contains most informative terms that are useful in computing similarity between prior cases to user queries. Submodules in knowledge base are also employed as needed to help optimize the process of extracting useful information to users usually in terms of relevant cases relating to specific legal matters at hand. Other techniques employed include several aspects of machine learning, such as unsupervised learning approaches to identify and cluster prior cases so that similar cases are clustered together, thus, making the process of matching user queries to prior cases easy. The HCI module provides user-specific (lay vs. experts), domain-specific and other anchor desks as required in a typical large application that can be commonly used by many types of users. A parametric evaluation of the performance of the Cloud-based Legal Architecture indicates that the system can enhance its use by both professionals and commoners; details are provided in this paper. It is hoped that the Legal Humanities of Africa architecture will become the benchmark architecture for Africa at large.
- Published
- 2021
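A minimal sketch of the query-to-prior-case matching step mentioned in the entry above, using TF-IDF cosine similarity. The case summaries are placeholders, and a production system would add the domain-specific NLP preprocessing the paper describes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_cases = [
    "Appeal against dismissal for breach of employment contract ...",
    "Dispute over customary land allocation and boundary encroachment ...",
    "Application for bail pending trial on charges of fraud ...",
]  # placeholder case summaries

vectorizer = TfidfVectorizer(stop_words="english")
case_matrix = vectorizer.fit_transform(prior_cases)

def similar_cases(query: str, top_k: int = 2):
    """Return (case index, similarity) pairs ranked by cosine similarity."""
    scores = cosine_similarity(vectorizer.transform([query]), case_matrix)[0]
    ranked = sorted(enumerate(scores), key=lambda s: s[1], reverse=True)
    return ranked[:top_k]

print(similar_cases("unfair dismissal and unpaid wages claim"))
```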
25. CL-GCN: Malware Familial Similarity Calculation Based on GCN and Topic Model
- Author
-
Yusen Wang, Shan Liao, Lei Zhang, Kai Liu, Yang Tan, and Liang Liu
- Subjects
Topic model, Theoretical computer science, Computer science, Graph embedding, Malware, Call graph
- Abstract
With the problem of the explosive growth of malware, many malicious samples are variants of the samples previously encountered. The explosive variant growth seems to be challenging the malware familial classification. This paper proposes a malware similarity calculation model based on Graph Convolutional Networks (GCN) and topic model. In this approach, we extract the function call graph and function instruction distribution of malware for processing. Firstly, the function call graph is processed by the model based on the GCN and attention mechanism to obtain graph embedding. Then, the function instruction distribution is transformed into the function topic distribution through the topic model, and the topic distribution of the malware is obtained through the pooling layer. Finally, this paper uses the fully connected layer and the neural tensor network to combine the graph embedding and topic distribution. The experimental results demonstrate that the proposed approach makes the distinction among families more obvious than earlier work.
- Published
- 2021
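A stripped-down sketch of the two ingredients named in the entry above: one graph-convolution propagation step over a function call graph, and an LDA topic distribution over per-function instruction counts, combined here by simple pooling and concatenation instead of the paper's attention and neural-tensor layers. All sizes and data are placeholders.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# toy call graph of 4 functions (adjacency) and random node features
A = np.array([[0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]], float)
X = np.random.rand(4, 8)

# one GCN layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = np.random.rand(8, 16) * 0.1
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)
graph_embedding = H.mean(axis=0)                      # mean pool over functions

# topic side: per-function instruction count vectors -> topic distributions
opcode_counts = np.random.randint(0, 20, size=(4, 30))
lda = LatentDirichletAllocation(n_components=5).fit(opcode_counts)
topic_distribution = lda.transform(opcode_counts).mean(axis=0)

combined = np.concatenate([graph_embedding, topic_distribution])
```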
26. VDNet: Vehicle Detection Network Using Computer Vision and Deep Learning Mechanism for Intelligent Vehicle System
- Author
-
Apoorva Ojha, Satya Prakash Sahu, and Deepak Kumar Dewangan
- Subjects
Computer science, Deep learning, Python (programming language), Convolutional neural network, Object detection, Region of interest, Minimum bounding box, Computer vision, Artificial intelligence
- Abstract
Computer vision using deep learning has revolutionized detection systems, and vehicle detection is no exception. Vehicle classification and detection matter for intelligent vehicle and transportation systems, which must make critical decisions based on this information, making it a prominent area of work. This paper presents a vehicle detection model based on a convolutional neural network that uses bounding box annotations to mark the region of interest. The model is tuned for best performance by evaluating different parameter configurations. The implementation uses Python and OpenCV and is trained on Google Colab's free GPU access. The paper gives an efficient stepwise explanation of the process flow for detecting cars in various real-life scenes. The proposed model is trained on a combination of two benchmark datasets and attains 94.66% accuracy and 95.13% precision.
- Published
- 2021
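A small Keras sketch of a single-object variant of the idea in the entry above: a shared convolutional backbone with one head for the car/no-car decision and one regressing the four bounding-box coordinates. The input size, losses and layer sizes are assumptions; the paper's actual network and training setup may differ.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)

class_out = layers.Dense(1, activation="sigmoid", name="is_vehicle")(x)
bbox_out = layers.Dense(4, activation="sigmoid", name="bbox")(x)   # normalised x, y, w, h

model = keras.Model(inputs, [class_out, bbox_out])
model.compile(optimizer="adam",
              loss={"is_vehicle": "binary_crossentropy", "bbox": "mse"},
              loss_weights={"is_vehicle": 1.0, "bbox": 5.0})
# model.fit(images, {"is_vehicle": labels, "bbox": boxes}, epochs=30)
```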
27. An Ensemble Approach for Modeling Process Behavior and Anomaly Detection
- Author
-
Gigi Joseph, Vineet Sharma, Ashok Kumar, C. S. Sajeesh, Vinod K. Boppanna, Ajay Chouhan, and Gopika Vinod
- Subjects
Ensemble forecasting, Computer science, Pattern recognition, Autoencoder, Tree (data structure), Malware, Anomaly detection, Artificial intelligence, False positive rate
- Abstract
In recent years, malware has become more sophisticated, and security solutions based on signatures or known behavior cannot detect unknown malware, whereas anomaly detection is an effective strategy against it. This paper presents a method to model the normal behavior of the processes that run on a computer and detect whether a new process instance conforms to that behavior. We present an unsupervised tree-based anomaly detector (UTAD) used in combination with an autoencoder to detect anomalous process behavior, and evaluate the model's ability to separate the behavior of genuine processes from one another. The ensemble model achieved good accuracy with a low false positive rate.
- Published
- 2021
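A hedged sketch of an ensemble of a tree-based detector and an autoencoder, in the spirit of the entry above (the paper's UTAD is unsupervised and tree-based, but is not necessarily Isolation Forest). The per-process feature matrix and the score-combination rule are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 20)          # per-process behaviour features (placeholder)

# tree-based detector (higher score = more anomalous)
iso = IsolationForest(n_estimators=200).fit(X)
tree_score = -iso.score_samples(X)

# autoencoder: reconstruction error as the second anomaly score
ae = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(20, activation="sigmoid"),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=20, batch_size=64, verbose=0)
ae_score = np.mean((X - ae.predict(X, verbose=0)) ** 2, axis=1)

# simple ensemble: average of min-max normalised scores, flag the top 1%
norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-9)
ensemble = (norm(tree_score) + norm(ae_score)) / 2
flagged = ensemble > np.quantile(ensemble, 0.99)
```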
28. Missing Data Imputation for Solar Radiation Using Generative Adversarial Networks
- Author
-
Priyanshi Khare, Rajesh Wadhvani, and Sanyam Shukla
- Subjects
Computer science, Solar energy, Missing data, Renewable energy, Data mining, Imputation (statistics), Time series, Gradient descent, Solar power
- Abstract
Solar power is one of the major renewable energy sources, and various applications depend on solar energy. These applications require continuous time-series radiation data, so missing values in a solar dataset can significantly affect the performance of solar energy systems. This paper investigates a machine learning-based data imputation method using generative adversarial networks (GANs). Inspired by the success of GANs in other domains, they are used here for data imputation, since incomplete data adversely affects any time-series study and hinders further progress. In this work the missing solar insolation values in the dataset are imputed using a GAN with mask and hint matrices, which ease the imputation process. The approach is demonstrated on real-world solar datasets from three different regions. The competence of the model is judged by using the newly imputed dataset to train different time-series forecasting models, and its performance is also compared against frequently used imputation methods.
- Published
- 2021
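A numpy sketch of the mask-and-hint bookkeeping mentioned in the entry above (following the GAIN formulation): the mask marks observed entries, missing values are noise-filled for the generator, and the hint reveals part of the mask to the discriminator. The hint rate, noise scale and missingness rate are assumptions, and the GAN itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                     # solar radiation features (placeholder)
X[rng.random(X.shape) < 0.2] = np.nan             # introduce 20% missingness

M = (~np.isnan(X)).astype(float)                  # mask: 1 = observed, 0 = missing
Z = rng.uniform(0, 0.01, X.shape)                 # noise for the missing slots
X_tilde = np.nan_to_num(X) * M + Z * (1 - M)      # generator input values

hint_rate = 0.9                                   # assumed, a typical choice
B = (rng.random(M.shape) < hint_rate).astype(float)
H = B * M + 0.5 * (1 - B)                         # hint matrix for the discriminator

generator_input = np.concatenate([X_tilde, M], axis=1)
```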
29. NLP-Based Tools for Decoding the Language of Life
- Author
-
Aparna Chauhan and Yasha Hasija
- Subjects
Emerging technologies, Computer science, Health informatics, Artificial intelligence, Turing, Natural language, Natural language processing
- Abstract
As the scientific know-how of the people around the world is expanding, the requirement of new technologies is also growing rapidly. This is evident by the number of papers being published and the new discoveries of scientists that are changing the definition of impossibility day by day. This paper explains one such technology which has made possible not only the recognition of natural language (i.e., human language) by computers but generation of speech and text which is natural language processing (NLP). When machine learning came into picture for assaying large amount data (statistical), deriving meaning from data became easy. Statistical prediction could be made for data containing millions of data points. However, analysis and prediction from textual data still remained a challenge. In 1950s, Alan Turing’s publication—Computing Machinery and Intelligence, introduced NLP in computational field which dealt with conversion of human language to machine-readable form and generated written or spoken output. NLP can be further be applied in bioinformatics for deducing the structure and function of a protein from its primary chain sequence or deriving end products of functional genes from their basal sequences as many researchers have found the sequences to be similar to human language calling it the ‘language of life’. Current studies are based upon using the rules of NLP in analyzing gene and protein sequences. This article is aimed at exploring the various applications of natural language processing in the field of bioinformatics and medical informatics.
- Published
- 2021
30. Smart Contracts and NFTs: Non-Fungible Tokens as a Core Component of Blockchain to Be Used as Collectibles
- Author
-
Kanisk, Shailender Kumar, and Akash Arora
- Subjects
Decentralized computing, Cryptocurrency, Blockchain, Smart contract, Scripting language, Computer science, Use case, Computer security, Security token
- Abstract
Non-fungible tokens are one of the most important future application domains for smart contracts. Ethereum is the pioneer of a blockchain-based decentralized computing platform that has ultimately standardized these types of tokens into a well-defined interface, now known as ERC-721. Blockchain-based cryptocurrencies have received extensive attention recently. Massive data has been stored on permissionless blockchains. This paper aims to analyze blockchain and cryptocurrencies’ technical underpinnings, specifically non-fungible tokens or “crypto-collectibles,” with the help of a blockchain-based image matching game. While outlining the theoretical implications and use cases of NFTs, this paper also gives a glimpse into their possible use in the domain of human user verification to prevent misuse of public data by automated scripts. This demonstrates the interaction of the ERC-721 token with the Ethereum-based decentralized application. Further, we aim to reach a definitive conclusion on the benefits and challenges of NFTs and thus reach a solution that would be beneficial to both researchers and practitioners.
- Published
- 2021
31. Performance of Optimization Algorithms in Attention-Based Deep Learning Model for Fake News Detection System
- Author
-
S. P. Ramya and R. Eswari
- Subjects
Optimization algorithm, Artificial neural network, Computer science, Deep learning, Machine learning, The Internet, Social media, Artificial intelligence, Fake news
- Abstract
Automatic fake news detection, categorizing news as either fake or real, is a complicated problem. Fake news rarely consists entirely of false information; it is usually mixed with a substantial portion of genuine information. Access to the Internet and the willingness to distribute content through social media make it quick and easy to propagate fake news worldwide, with dangerous and obnoxious impacts on society. Most current methods for fake news detection are based on deep learning, but these approaches have not shown remarkable improvement in identifying falsehoods, largely because of insufficient datasets. LIAR, an established benchmark dataset publicly available for fake news detection research, is therefore used in this paper. A CNN-based deep learning model with an attention mechanism is considered for the fake news detection system, and the performance of this model with seven different training optimization algorithms (SGD, Adam, Nadam, Adamax, etc.) is evaluated and compared. Performance evaluation is carried out in terms of accuracy, precision, recall, and F1-score on the LIAR dataset.
- Published
- 2021
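A compact way to run the optimizer comparison described in the entry above on any Keras text model. The toy Conv1D classifier and the tokenised LIAR-style inputs are placeholders, and no attention layer is included in this sketch.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.randint(0, 5000, size=(2000, 60))   # padded token ids (placeholder)
y = np.random.randint(0, 2, size=2000)            # fake (1) / real (0) labels (placeholder)

def build_model():
    return keras.Sequential([
        layers.Embedding(5000, 64),
        layers.Conv1D(128, 5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

results = {}
for opt in ["sgd", "adam", "nadam", "adamax", "rmsprop", "adagrad", "adadelta"]:
    model = build_model()                          # fresh weights per optimizer
    model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, validation_split=0.2, epochs=3, batch_size=64, verbose=0)
    results[opt] = max(hist.history["val_accuracy"])
print(results)
```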
32. Phishing Websites, Detection and Analysis: A Survey
- Author
-
Shreedevi Subrahmanya Bhat, Madhuri Kulkarni, Pushpalatha S. Nikkam, Leena I. Sakri, Priyanka Kamath, and Swati Kamat
- Subjects
Password, Computer science, Phishing detection, Computer security, Phishing, Information mining, Extreme learning machine
- Abstract
Phishing is the malicious use of electronic communications to deceive users. Phishing attacks attempt to obtain sensitive, confidential information such as usernames, passwords, credit card details, and network credentials. Phishing websites imitate original websites so that users believe they are using the genuine ones, and both individuals and organizations are at risk. Phishing attacks can be prevented by detecting such websites and helping users recognize them, and numerous techniques have been applied for this purpose: various machine learning methods, data mining procedures, neural networks, and other algorithms have been used to predict, classify, or identify phishing websites. This paper surveys recently proposed phishing detection techniques.
- Published
- 2021
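A sketch of the feature-based strand of phishing detection surveyed in the entry above: simple lexical URL features feeding a classifier. The feature set and the tiny labelled sample are illustrative, not a validated detector.

```python
import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url: str) -> list:
    host = url.split("//")[-1].split("/")[0]
    return [
        len(url),                                   # long URLs are suspicious
        url.count("-"), url.count("@"), url.count("."),
        int(bool(re.match(r"^\d{1,3}(\.\d{1,3}){3}$", host))),  # raw IP as host
        int(url.startswith("https")),
        sum(c.isdigit() for c in url),
    ]

urls = ["https://www.example.com/login",
        "http://192.168.2.10/secure-update/verify@account"]
labels = [0, 1]                                     # 0 = legitimate, 1 = phishing (toy)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(np.array([url_features(u) for u in urls]), labels)
```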
33. An IPS Approach to Secure V-RSU Communication from Blackhole and Wormhole Attacks in VANET
- Author
-
Arjun Rajput, Mahendra Ku. Jhariya, Gaurav Soni, and Kamlesh Chandravanshi
- Subjects
Vehicular ad hoc network, Network packet, Computer science, Swarm behaviour, Particle swarm optimization, Data exchange, Routing, Computer network
- Abstract
Vehicles, or nodes, in a Vehicular Ad hoc Network (VANET) forward traffic information to validate route information. Attacker vehicles send false route information and refuse to forward traffic status or data packets. A reliable security mechanism must recognize the abnormal behavior of such malicious (blackhole) and wormhole attackers. This paper proposes an intrusion detection and prevention (IPS) scheme to secure vehicle-to-RSU (V-RSU) communication against blackhole as well as wormhole attacks in VANET. The IPS algorithm runs at the RSU and recognizes the malicious actions of an attacking vehicle using a swarm optimization approach: particle swarm optimization (PSO) confirms the attacker's presence and delivers effective traffic information. In the proposed scheme, vehicles also receive traffic data from leading vehicles and forward traffic information to other vehicles, and the RSU monitors this exchange to identify malicious actions. The main task of the proposed security system is the effective management of vehicles in the presence of an intruder. Simulation results confirm that the proposed IPS scheme with PSO performs better in the presence of both attackers, and the performance of the previous IDS, the attack scenario, and the proposed IPS is measured through performance metrics.
- Published
- 2021
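A plain-numpy PSO loop of the kind that could tune detection thresholds at the RSU, related to the entry above. The two-dimensional search space (a drop-rate and a delay threshold) and the toy fitness function are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(th):
    """Toy cost standing in for false alarms + misses, th = [drop_rate, delay]."""
    drop, delay = th
    return (drop - 0.6) ** 2 + (delay - 0.3) ** 2   # pretend optimum at (0.6, 0.3)

N, DIM, ITER = 20, 2, 50
pos = rng.random((N, DIM))
vel = np.zeros((N, DIM))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITER):
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("tuned thresholds:", gbest)
```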
34. An Overview of 51% Attack Over Bitcoin Network
- Author
-
Prativa Rai, Sandeep Gurung, and Raja Siddharth Raju
- Subjects
Cryptocurrency, Computer science, Currency, Digital currency, Proof-of-work system, Computer security, Anonymity
- Abstract
Cryptocurrencies are a new paradigm in digital currency, attractive for various features but mostly because they are secure by design and provide anonymity. Bitcoin is the most prominent example of a cryptocurrency based on the proof-of-work (PoW) consensus mechanism. Bitcoin has begun to be accepted as an exchangeable currency in some parts of the world, which raises the question of what major challenges it still faces despite its security. The paper highlights the important attributes of the Bitcoin network, how Bitcoin mining is done, and the consensus protocol on which Bitcoin is based. It also gives insight into the impact of the 51% attack on the Bitcoin network and the countermeasures that can be applied to prevent such an attack.
- Published
- 2021
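A toy proof-of-work loop illustrating why hash power matters for the entry above: whoever can try nonces fastest finds valid blocks most often, which is exactly what a 51% majority exploits. Difficulty here is a leading-zero count on the hex digest, far below Bitcoin's real target, and the header string is a placeholder.

```python
import hashlib
import time

def mine(block_header: str, difficulty: int = 5):
    """Find a nonce whose double-SHA256 hash has `difficulty` leading hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        data = f"{block_header}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

start = time.time()
nonce, digest = mine("prev_hash|merkle_root|timestamp")
print(f"nonce={nonce} hash={digest} in {time.time() - start:.2f}s")
```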
35. Mobile Cloud-Based Framework for Health Monitoring with Real-Time Analysis Using Machine Learning Algorithms
- Author
-
Ambarish Dutta, Venktesh Kumar, Suman Mohanty, Utsav Kumar, Md. Ruhul Islam, and Ravi Anand
- Subjects
Artificial neural network, Mobile computing, Cloud computing, Machine learning, Mobile cloud computing, Computer data storage, Scalability, Artificial intelligence
- Abstract
Cloud computing in the medical sciences has made remarkable progress and has been a boon for medical firms, the motive being to provide health consultancy remotely and quickly, with an emphasis on proper diagnosis of the patient as and when required. In contrast to existing systems, which are at times prone to errors leading to deaths from faulty diagnosis and monitoring, a cloud-based system provides much more fluidity by giving patients quick assistance irrespective of their location. Cloud infrastructure has greater computational power and can analyze patient data remotely, helping the medical practitioner provide a diagnosis rapidly, while greater precision is achieved by deploying machine learning algorithms. To make the system accessible from anywhere, this paper proposes a mobile cloud computing-based architecture for health monitoring. Mobile cloud computing relies on cloud computing to deliver applications to monitoring devices, enabling real-time monitoring in which data can be fetched through mobile cloud applications, and it provides a platform for using high-end cloud infrastructure and its powerful computation to deploy forecasting models. Mobile cloud computing plays a vital role because it combines the advantages of cloud and mobile computing to provide healthcare assistance. The proposed architecture is scalable, as health institutions can increase or decrease data storage; reliable, as it implements MCC; and affordable, as it works on a subscription model.
- Published
- 2021
36. Data Mining Techniques in the Agricultural Sector
- Author
-
N. S. Rashmi and B. G. Mamatha Bai
- Subjects
DBSCAN ,Work (electrical) ,Agriculture ,business.industry ,Production (economics) ,Data mining ,Soil parameters ,Cluster analysis ,computer.software_genre ,business ,computer ,Variety (cybernetics) - Abstract
Data mining denotes discovering useful information from large volumes of data, with useful applications across many sectors. This work concentrates on Data Mining Techniques in the Agricultural sector (DMTA). Agriculture is a fundamental human need, and in a nation like India the economy is greatly affected by the agricultural sector, whose success or failure depends on weather conditions and soil parameters. Presently, farmers grow crops based on knowledge acquired from past generations, and because traditional farming techniques are practiced, crops are over- or under-produced relative to actual need. No scheme is in place to educate farmers, although a variety of new techniques is available to address such issues. This paper presents results obtained by analyzing trends from the past 10 years using the DMTA model to forecast the optimal parameters required for the highest production of ragi, groundnut, and paddy. The techniques used for the analysis are Bisecting K-Means, DBSCAN, OPTICS, Hierarchical Complete Linkage, and STING. All districts of Karnataka and various parameters of the individual crops are considered. (A clustering sketch follows this entry.)
- Published
- 2021
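A minimal scikit-learn sketch of two of the clustering techniques named in entry 36 (DBSCAN and OPTICS). The feature columns and data are invented stand-ins for district-wise crop/soil parameters; the paper's actual dataset is not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.preprocessing import StandardScaler

# Hypothetical rows of (rainfall_mm, soil_pH, yield_t_per_ha).
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal([600, 6.5, 2.0], [50, 0.2, 0.3], (40, 3)),
    rng.normal([900, 5.8, 3.1], [60, 0.2, 0.4], (40, 3)),
])
X_scaled = StandardScaler().fit_transform(X)

db_labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(X_scaled)
op_labels = OPTICS(min_samples=5).fit_predict(X_scaled)
print("DBSCAN clusters:", set(db_labels))   # -1 marks noise points
print("OPTICS clusters:", set(op_labels))
```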
37. Chinese Text Emotional Analysis Based on Bi-LSTM Model Fusing Emotional Features
- Author
-
Hao Li and Jian-cong Fan
- Subjects
Artificial neural network ,business.industry ,Computer science ,Deep learning ,Sentiment analysis ,Lexicon ,Machine learning ,computer.software_genre ,Construction method ,Learning methods ,Artificial intelligence ,business ,F1 score ,computer - Abstract
With the development of neural networks, neural network-based methods are widely used in sentiment analysis and can achieve better results than traditional machine learning methods. However, they ignore a large amount of existing emotional knowledge, such as emotional lexicons. To address this problem, this paper proposes a feature construction method based on an emotional lexicon and incorporates those features into a Bi-LSTM. The experimental results show that the improved Bi-LSTM obtains a higher F1 score. (A Bi-LSTM sketch follows this entry.)
- Published
- 2021
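A minimal Keras sketch of a Bi-LSTM whose sentence encoding is concatenated with a lexicon-derived feature vector, in the spirit of entry 37. The vocabulary size, sequence length, feature dimension, and fusion point are assumptions, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, SEQ_LEN, LEX_DIM = 20000, 60, 8   # assumed sizes, not from the paper

tokens = layers.Input(shape=(SEQ_LEN,), name="token_ids")
lexicon = layers.Input(shape=(LEX_DIM,), name="lexicon_features")  # e.g. counts of
                                                                   # positive/negative lexicon hits
x = layers.Embedding(VOCAB_SIZE, 128)(tokens)
x = layers.Bidirectional(layers.LSTM(64))(x)       # Bi-LSTM sentence encoding
x = layers.Concatenate()([x, lexicon])             # fuse lexicon-derived features
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)     # binary sentiment polarity

model = Model([tokens, lexicon], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```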
38. Cloud–Fog–Edge Computing Framework for Combating COVID-19 Pandemic
- Author
-
Shreya Ghosh and Anwesha Mukherjee
- Subjects
Cover (telecommunications) ,business.industry ,Computer science ,Latency (audio) ,Context (language use) ,Cloud computing ,Computer security ,computer.software_genre ,Bandwidth (computing) ,Enhanced Data Rates for GSM Evolution ,Architecture ,business ,computer ,Edge computing - Abstract
In the past few decades, Internet of Things (IoT)-based devices and applications have shown rapid growth in various sectors, including healthcare. The ability of low-cost connected sensors to cover large areas makes them a potent weapon in the fight against pandemics such as COVID-19. However, the huge amount of data generated by these sensors in a cloud architecture has led to challenges in network bandwidth usage, latency, computation cost, etc. In this paper, we propose a cloud–fog–edge-based healthcare model that can not only help in preliminary diagnosis but can also monitor patients while they are in quarantine or under home-based treatment. The fog architecture ensures that the model is suited to real-time scenarios while keeping bandwidth requirements low, and the edge architecture ensures that the application can collect and accumulate contextual information from varied sensors. The proposed framework yields encouraging results in taking decisions based on the COVID-19 context and assisting users effectively.
- Published
- 2021
39. Impact Evaluation of Deep Learning Models in the Context of Plant Disease Detection
- Author
-
Gyanesh Shrivastava and Punitha Kartikeyan
- Subjects
Computer science ,business.industry ,Deep learning ,media_common.quotation_subject ,fungi ,food and beverages ,Context (language use) ,Disease ,Machine learning ,computer.software_genre ,Field (computer science) ,Plant disease ,Agriculture ,Quality (business) ,Artificial intelligence ,Medical diagnosis ,business ,computer ,media_common - Abstract
Automatically and accurately identifying plant diseases is still a major challenge in agriculture. Early diagnosis helps to control disease and prevent the loss of agricultural produce. Many researchers have identified diseases of plant parts such as flowers, leaves, stems, and fruits. Deep Learning, a subset of Machine Learning, now plays a vital role in plant disease detection research. Modeled loosely on the neural structure of the human brain, deep multilayer networks provide higher accuracy for disease detection and can learn from large datasets to visualize symptoms and locate diseased regions in leaves, flowers, fruits, and stems, enabling quick and accurate diagnosis that protects the quantity and quality of agricultural produce. In this paper, an impact evaluation of various Deep Learning architectures employed for plant disease detection, viz. AlexNet, GoogLeNet, VGGNet, DenseNet, SqueezeNet, ResNet, and MobileNet, was carried out. DenseNet gave 99.75% and GoogLeNet 98.78% average accuracy. Due to their higher accuracy and greater efficiency, these models are considered better than the other models for plant disease detection. (A transfer-learning sketch follows this entry.)
- Published
- 2021
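A minimal Keras transfer-learning sketch using an ImageNet-pretrained DenseNet121 (the best-performing family in entry 39) with a new classification head. The class count, input size, and frozen-backbone setup are placeholders, not the paper's training configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 38   # placeholder label count; not taken from the paper

base = DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze pretrained convolutional features

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(base.input, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets to be supplied
```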
40. Generating Data for Real World Time Series Application with GRU-Based Conditional GAN
- Author
-
Priyanshi Khare, Rajesh Wadhvani, Manasi Gyanchandani, and Banalaxmi Brahma
- Subjects
Series (mathematics) ,Wilcoxon signed-rank test ,Artificial neural network ,Computer science ,Test data generation ,Component (UML) ,Nonparametric statistics ,Data mining ,Autoregressive integrated moving average ,Time series ,computer.software_genre ,computer - Abstract
Access to a sufficient amount of data has always challenged researchers seeking to realize their solutions productively. One promising remedy is the generation of data using generative adversarial networks (GANs). This paper focuses on generating realistic data for different time series applications using a GRU-based GAN with conditional input; the generated data can contribute to the formation of larger datasets. Because the time component plays a major role in forecasting across domains, it is crucial to target time series data specifically. The quality of the generated data is judged by using it to train prominent time series forecasting models and then testing them on real data; a linear regression model, an ARIMA model, and a GRU-based forecasting model are chosen for the experiment. The similarity between the actual and generated data is also demonstrated using the Wilcoxon signed-rank test, a nonparametric test suited to the datasets used here. The experiments are executed on three real-world datasets from different domains. (A conditional GRU generator sketch follows this entry.)
- Published
- 2021
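A minimal sketch of the two ingredients named in entry 40: a conditional GRU generator (discriminator and adversarial training loop omitted) and a Wilcoxon signed-rank comparison between real and generated values. Dimensions, architecture choices, and data are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from scipy.stats import wilcoxon

SEQ_LEN, NOISE_DIM, COND_DIM = 24, 16, 4   # assumed dimensions

# Conditional GRU generator: a noise sequence concatenated with a repeated
# condition vector is mapped to a synthetic univariate time series.
noise = layers.Input(shape=(SEQ_LEN, NOISE_DIM))
cond = layers.Input(shape=(COND_DIM,))
cond_seq = layers.RepeatVector(SEQ_LEN)(cond)
h = layers.Concatenate()([noise, cond_seq])
h = layers.GRU(64, return_sequences=True)(h)
fake_series = layers.TimeDistributed(layers.Dense(1))(h)
generator = Model([noise, cond], fake_series)
generator.summary()   # the discriminator and GAN training loop are not shown here

# Paired, nonparametric Wilcoxon signed-rank test between real and generated values
# (toy arrays stand in for the paper's datasets).
real = np.random.default_rng(0).normal(size=SEQ_LEN)
fake = real + np.random.default_rng(1).normal(scale=0.1, size=SEQ_LEN)
stat, p = wilcoxon(real, fake)
print(f"Wilcoxon statistic={stat:.3f}, p-value={p:.3f}")  # high p => no detected difference
```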
41. BlockSIoT: a Blockchain-Based Secure Data Sharing in SIoT
- Author
-
K. Suresh Kumar, J. Chandra Priya, R. N. Karthika, and P. Valarmathie
- Subjects
business.industry ,Smart objects ,Computer science ,Information sharing ,Cloud computing ,Computer security ,computer.software_genre ,Data sharing ,Elasticity (cloud computing) ,Software deployment ,business ,computer ,Edge computing ,Data transmission - Abstract
Advances in the Internet of Things (IoT) have led to the deployment of smart objects on online social networks, offering a different paradigm referred to as the Social Internet of Things (SIoT). These social networking Things evolve the intelligence to establish social links autonomously and explore one another. Nevertheless, the streaming data originating from billions of linked gadgets poses a challenge in providing elasticity and security to users. Cloud and edge computing provide seamless services for data elasticity for SIoT users, but security in the cloud remains a point of debate. Among the varied security and storage aspects, we concentrate on secure data transmission, storage mapping, and their role in service provision to improve information sharing. This paper focuses on offering secure and reliable transmission among the Things of social networks that can withstand single-point failure. We introduce a blockchain module on top of the SIoT network, yielding a novel framework referred to as BlockSIoT, to achieve integrity and to steer intercommunication and community interest between the Things in SIoT. (A toy blockchain sketch follows this entry.)
- Published
- 2021
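A toy hash-chained ledger illustrating the integrity property a blockchain layer gives to shared SIoT records, as motivated by entry 41. This is a generic sketch, not the BlockSIoT protocol itself; the device payloads are hypothetical.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "payload": payload, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute hashes and links; any tampering breaks the chain."""
    for i, blk in enumerate(chain):
        body = {k: v for k, v in blk.items() if k != "hash"}
        if blk["hash"] != block_hash(body):
            return False
        if i and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"device": "sensor-17", "reading": 21.4})  # hypothetical SIoT record
add_block(chain, {"device": "sensor-42", "reading": 19.8})
print(verify(chain))                       # True
chain[0]["payload"]["reading"] = 99.9
print(verify(chain))                       # False: tampering detected
```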
42. Machine Learning Techniques for Keystroke Dynamics
- Author
-
Devershi Pallavi Bhatt and Kirty Shekhawat
- Subjects
Password ,Authentication ,Biometrics ,business.industry ,Computer science ,Context (language use) ,Machine learning ,computer.software_genre ,Random forest ,Support vector machine ,Keystroke dynamics ,Artificial intelligence ,F1 score ,business ,computer - Abstract
Conventional security mechanisms such as token-based and knowledge-based authentication (passwords and PINs, for example) are losing importance in the present era of rapidly evolving cyber threats. Keystroke biometrics is a promising solution for ensuring cybersecurity in both standalone and connected systems. A subset of behavioral biometrics, it distinguishes users based on their typing patterns. The performance of a user authentication system using keystroke biometrics depends on the extracted features and the classification techniques. The objective of this paper is to compare three learning techniques, namely support vector machine, random forest, and logistic regression, in the context of keystroke biometrics. Time-based features are extracted from a publicly available dataset and analyzed with the above machine learning algorithms, and the performance of the algorithms is compared. Hyperparameter tuning and cross-validation are performed to further enhance performance. Experimental results demonstrate that random forest is the most effective, with an accuracy of 0.85 and an F1 score of 0.74; the accuracies obtained with the support vector machine and logistic regression are 0.76 and 0.63, respectively. (A scikit-learn sketch follows this entry.)
- Published
- 2021
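A minimal scikit-learn comparison of the three classifiers named in entry 42 on keystroke-timing features. The feature matrix below is a random placeholder for hold times and key-to-key latencies; a real study would load a public keystroke dataset instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder timing features for two users (100 samples each, 10 features).
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.12, 0.02, (100, 10)), rng.normal(0.18, 0.03, (100, 10))])
y = np.array([0] * 100 + [1] * 100)

models = {
    "svm": make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf")),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")   # 5-fold cross-validation
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```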
43. Development of Data Set for Automatic News Telecast System for Deaf Using ISL Videos
- Author
-
Lalit Goyal, Vishal Goyal, and Annu Rani
- Subjects
Translation system ,Computer science ,Scripting language ,Indian sign language ,Data set (IBM mainframe) ,Verb ,Sign language ,computer.software_genre ,computer ,Linguistics ,Word (computer architecture) ,Sign (mathematics) - Abstract
Deaf people use sign language to communicate with others. There are numerous sign languages owing to countries and their cultural variations, so every country has its own sign language to serve its hearing-impaired people. In this paper, we outline the data collection and research challenges for our system. The data is collected from the “DD news with hearing-impaired people” broadcasts on YouTube. To get the news data into script form, we used the Downsub site, which supports scripts in several languages; per our requirements, we downloaded the news scripts in English. The scripts were then converted into unique unigram words using an online wordlist tool. We analyzed each word one by one, eliminated unwanted words (articles, helping verbs, inflections, etc.), and finally the unique words were converted into Sign Language with the help of ISL experts. (A unigram-extraction sketch follows this entry.)
- Published
- 2021
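A minimal sketch of the unigram-extraction and filtering step described in entry 43. The stop-word list and sample text are illustrative only and do not reproduce the authors' wordlist tool or filtering rules.

```python
import re
from collections import Counter

# Tiny illustrative stop-word list standing in for the authors' removal of
# articles, helping verbs, and similar function words.
STOP_WORDS = {"a", "an", "the", "is", "are", "was", "were", "has", "have", "been", "will", "be"}

def unique_unigrams(script: str) -> list[str]:
    """Lowercase, tokenize, drop stop words, and return unique words by frequency."""
    tokens = re.findall(r"[a-z]+", script.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return [word for word, _ in counts.most_common()]

sample = "The cyclone warning has been issued. Relief camps are being set up in coastal areas."
print(unique_unigrams(sample))
# The remaining content words would then be mapped to ISL video clips by experts.
```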
44. Identification of Characters (Digits) Through Customized Convolutional Neural Network
- Author
-
Sagar Pande, Nikhil E. Karale, Aditya Khamparia, and Swati C. Tawalare
- Subjects
business.industry ,Computer science ,Feature extraction ,computer.software_genre ,Convolutional neural network ,Numerical digit ,Object detection ,language.human_language ,Identification (information) ,Handwriting ,Devanagari ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,language ,Urdu ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
Digitalization is having a growing impact across fields, and improved digitalization has brought new techniques to object detection. In this context, the proposed framework deals with character identification. This paper addresses the identification of characters and digits in the CHARS74K dataset, which contains handwritten, printed, and natural-image samples. It also surveys work on handwritten character recognition covering scripts of various languages such as Urdu, English, Devanagari, and Arabic. A customized CNN model is proposed for identifying the various forms of digits in the dataset, and precision and recall are reported for the individual digit forms. (A small CNN sketch follows this entry.)
- Published
- 2021
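A small Keras CNN for 10-class digit images, in the spirit of entry 44. The input size, layer widths, and depth are assumptions, not the authors' customized architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed 64x64 grayscale digit crops with 10 classes.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Per-class precision/recall can be obtained afterwards with
# sklearn.metrics.classification_report(y_true, y_pred).
```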
45. Early Prognosis of Acute Myocardial Infarction Using Machine Learning Techniques
- Author
-
Harsh Gunwant, Moolchand Sharma, Abhisht Joshi, and Vikas Chaudhary
- Subjects
Learning classifier system ,Cardiovascular Complication ,business.industry ,medicine.disease ,Machine learning ,computer.software_genre ,medicine ,Myocardial infarction complications ,Myocardial infarction ,Artificial intelligence ,Risk factor ,business ,Acute mi ,computer - Abstract
A cardiovascular complication such as a heart attack is among the most serious contemporary health issues. People diagnosed with acute myocardial infarction (AMI) have a considerably higher risk of dying in the first year after diagnosis, and myocardial infarction (MI) is being diagnosed in more and more parts of the globe. A CDC report states that someone dies roughly every 36 s from cardiovascular disease, with heart attacks accounting for about three out of four of those deaths. Accurate prediction of long-term adverse outcomes after an AMI can help decide the level of care delivered and can inform patient decision-making, and emerging approaches hold out the possibility of extracting new information from existing data. In this paper, we apply different machine learning algorithms to the myocardial infarction complications database; such algorithms can process an exponentially greater number of variables and identify the intricate correlations between risk factors and ultimate outcomes. The Ridge classifier and SVC achieve the best F1 score of 90.29% on test data, which would help in the early diagnosis of acute MI. These advancements will make risk factor identification far more valuable for myocardial infarction prediction, allowing medical experts to concentrate on the key variables selected by the machine learning model. (A classifier sketch follows this entry.)
- Published
- 2021
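A minimal scikit-learn comparison of the two best-performing classifiers named in entry 45, RidgeClassifier and SVC, evaluated by F1 score. The synthetic data below is a stand-in; the real MI complications database contains many more clinical variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in for the MI complications data.
X, y = make_classification(n_samples=1000, n_features=50, n_informative=12,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("ridge", make_pipeline(StandardScaler(), RidgeClassifier(alpha=1.0))),
                  ("svc", make_pipeline(StandardScaler(), SVC(C=1.0)))]:
    clf.fit(X_tr, y_tr)
    print(name, "F1 =", round(f1_score(y_te, clf.predict(X_te)), 3))
```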
46. Regulated Energy Harvesting Scheme for Self-Sustaining WSN in Precision Agriculture
- Author
-
Amit Kumar Bindal and Kunal Goel
- Subjects
Scheme (programming language) ,Computer science ,Energy current ,Energy consumption ,Reliability engineering ,ComputerApplications_MISCELLANEOUS ,Computer Science::Networking and Internet Architecture ,Precision agriculture ,Residual energy ,Throughput (business) ,Energy harvesting ,computer ,Energy (signal processing) ,computer.programming_language - Abstract
In a precision agriculture-based WSN, energy consumption may vary with different parameters (e.g., dynamic computational overload or sensor density variations), and traditional energy harvesting schemes do not consider these conditions during harvesting, which may reduce the overall lifespan of the network. To meet the WSN's current energy requirements, an energy harvesting scheme should regulate itself according to those requirements. In this paper, a regulated energy harvester is introduced to overcome the above constraint, and its performance is analyzed using various performance parameters (throughput, residual energy, harvested energy, etc.).
- Published
- 2021
47. Machine Learning Based Data Quality Model for COVID-19 Related Big Data
- Author
-
P. V. Kumar, K. Chandrasekaran, and A. Chandrashekar
- Subjects
2019-20 coronavirus outbreak ,Coronavirus disease 2019 (COVID-19) ,Artificial neural network ,business.industry ,Computer science ,media_common.quotation_subject ,Big data ,Machine learning ,computer.software_genre ,Autoencoder ,Data quality ,Quality (business) ,Artificial intelligence ,business ,License ,computer ,media_common - Abstract
Big Data is used in various areas of technology, and the quality of the data being used is essential: it needs to be accurate, reliable, and free of defects. The difficulty of improving the quality of big data can be overcome by leveraging computing resources and advanced techniques. In this paper, we propose a solution that combines a machine learning (ML) model with a data quality model to improve data quality. An autoencoder neural network that detects anomalies in the data serves as the machine learning model; the data quality model is then applied to ensure the data meets appropriate data quality characteristics. The results show that the quality of data can be improved efficiently and with little effort, which in turn helps researchers achieve better results. (An autoencoder sketch follows this entry.)
- Published
- 2021
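A minimal Keras autoencoder whose reconstruction error flags anomalous records, as in the anomaly-detection step of entry 47. The input width, layer sizes, threshold, and training data are assumptions for illustration, not the paper's model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES = 20   # assumed width of a numeric record; not taken from the paper

# Train on data presumed mostly clean; anomalies reconstruct poorly.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, N_FEATURES)).astype("float32")

autoencoder = models.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(8, activation="relu"),      # bottleneck
    layers.Dense(16, activation="relu"),
    layers.Dense(N_FEATURES),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=5, batch_size=64, verbose=0)

# Score new rows: large reconstruction error suggests a defective/anomalous record.
X_new = np.vstack([rng.normal(size=(5, N_FEATURES)),
                   rng.normal(loc=8.0, size=(5, N_FEATURES))]).astype("float32")
errors = np.mean((X_new - autoencoder.predict(X_new, verbose=0)) ** 2, axis=1)
threshold = np.percentile(errors, 90)        # illustrative cutoff
print("anomalous rows:", np.where(errors > threshold)[0])
```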
48. A Simplified Beginner’s Guidelines for Design and Fabrication of Prototype Electrical Vehicle
- Author
-
P. Ramesh Babu, S. Tanweer Ahamed, S. Vengatesh, P. Vigneshwar, V. Vijay, and R. Udaya Simha
- Subjects
business.product_category ,Fabrication ,Computer science ,Frame (networking) ,3d model ,computer.software_genre ,Automotive engineering ,Suspension (motorcycle) ,Design phase ,Mode (computer interface) ,Electric vehicle ,Computer Aided Design ,business ,computer - Abstract
The aim of this paper is to build a prototype electric vehicle out of structural materials, contributing to the development of a modern, safe, and environmentally sustainable mode of public mobility. The design objective is a lightweight, compact frame for a three-wheeled electric vehicle. The design phase entails creating a 3D model, building a working prototype, and refining the frame using CAD software and the material parameters. The electrical and mechanical study records a range of 125 km per charge, a vehicle weight of 180 kg, a top speed of 40 km/h, and greater than 80% efficiency for the BLDC hub motor.
- Published
- 2021
49. Corrupted Image Enhancement Through WaveNet: A Hybrid Approach
- Author
-
P. Aruna Priya and C. Vimala
- Subjects
Artificial neural network ,Computer science ,business.industry ,Noise reduction ,Noise spectral density ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Image (mathematics) ,symbols.namesake ,Wavelet ,Gaussian noise ,Computer Science::Computer Vision and Pattern Recognition ,Benchmark (computing) ,symbols ,Artificial intelligence ,MATLAB ,business ,computer ,computer.programming_language - Abstract
A denoising method for medical images based on a hybrid technique is presented in this paper. The hybrid technique combines wavelets and a neural network (WaveNet). The proposed algorithm is validated on a benchmark image and on medical images, both degraded by Gaussian noise at a variety of noise densities, and assessed visually. The performance of the denoised images is also analyzed for pure wavelet techniques and compared with the proposed technique. The proposed strategy is developed on the MATLAB platform, and the simulation results support the proposed work. (A wavelet-denoising sketch follows this entry.)
- Published
- 2021
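A minimal PyWavelets sketch of the wavelet-thresholding half of a hybrid scheme like entry 49 (the neural-network stage is omitted). The wavelet family, decomposition level, threshold rule, and test image are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(image: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Universal threshold estimated from the finest-scale diagonal details.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
noisy = clean + rng.normal(scale=0.1, size=clean.shape)      # additive Gaussian noise
restored = wavelet_denoise(noisy)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("restored MSE:", np.mean((restored - clean) ** 2))
```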
50. Comparison of PI Based and ANN Based Dynamic Voltage Restorer Controller for Voltage Sag Mitigation in Distribution System
- Author
-
N. Rathina Prabha and T. Jane Tracy
- Subjects
Distribution system ,Artificial neural network ,Control theory ,Computer science ,Voltage sag ,Power quality ,MATLAB ,computer ,Voltage ,computer.programming_language ,Power (physics) - Abstract
In recent years, one of the major concerns in the distribution system has been the quality of power on the consumer side. Of all power quality issues, voltage sag is the most frequent. The Dynamic Voltage Restorer (DVR) is one of the most effective ways of protecting sensitive loads from voltage sag/swell conditions. In this paper, the design and analysis of a DVR for mitigating voltage sag is carried out in MATLAB/SIMULINK, and the results of the conventional PI-based DVR controller are compared with those of an Artificial Neural Network (ANN) controller. (A PI-controller sketch follows this entry.)
- Published
- 2021
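A minimal discrete-time PI control loop illustrating the conventional half of the comparison in entry 50. The gains and the first-order toy "voltage" plant are invented for illustration; they are not the Simulink DVR model.

```python
# Discrete PI controller driving a first-order toy voltage plant back toward 1.0 p.u.
KP, KI, DT = 2.0, 8.0, 1e-3        # illustrative gains and time step

setpoint, measured, integral = 1.0, 0.7, 0.0   # a 0.3 p.u. sag at t = 0
for step in range(200):
    error = setpoint - measured
    integral += error * DT
    control = KP * error + KI * integral        # injected compensation voltage
    # Simple first-order plant response to the injected voltage.
    measured += DT * (control + setpoint - measured) * 50
    if step % 50 == 0:
        print(f"t={step * DT * 1000:.0f} ms  voltage={measured:.3f} p.u.")
```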