999 results
Search Results
2. Towards a Ubiquitous Child Emergency App – Ideas to Simplify and Ensure the Machine-2-Machine Communication
- Author
-
Martin Haag and Michael Schmucker
- Subjects
Service (business), Medical informatics, Multimedia, Computer science, Machine to machine, Rare events, mHealth, Wearable technology - Abstract
Medical emergencies involving children are rare events. Emergency physicians therefore have little experience with them, and outcomes are correspondingly poor. Assistance services to help in such emergencies are regularly requested. Because the use case is very complicated, a complex system consisting of multiple devices is necessary to provide the most efficient and effective service. This short paper presents prototypically tested ideas on how such a ubiquitous approach can be designed and how communication between devices can be simplified and ensured.
- Published
- 2021
3. Theory of Constructive Semigroups with Apartness – Foundations, Development and Practice.
- Author
-
Mitrović, Melanija, Hounkonnou, Mahouton Norbert, and Baroni, Marian Alexandru
- Subjects
COMPUTER science, MATHEMATICAL economics - Abstract
This paper has several purposes. Through a critical review, we present the results from already published papers on constructive semigroup theory, and we contribute to its further development by giving solutions to open problems. We also draw attention to its possible applications in other (constructive) mathematics disciplines, in computer science, the social sciences, economics, etc. Another important goal of this paper is to provide a clear, understandable picture of constructive semigroups with apartness in Bishop's style, both to (classical) algebraists and to those who apply algebraic knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
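For readers coming from classical algebra, the central notion this entry builds on can be stated compactly. These are the standard Bishop-style apartness axioms, not text quoted from the paper:

```latex
% A set with apartness $(S, =, \#)$: $\#$ is a binary relation satisfying
\begin{align*}
  &\neg\,(x \mathbin{\#} x)                                    && \text{(irreflexivity)}\\
  &x \mathbin{\#} y \;\Rightarrow\; y \mathbin{\#} x           && \text{(symmetry)}\\
  &x \mathbin{\#} z \;\Rightarrow\; x \mathbin{\#} y \,\vee\, y \mathbin{\#} z
                                                               && \text{(cotransitivity)}
\end{align*}
% A semigroup with apartness additionally requires the operation to be
% strongly extensional:  $xy \mathbin{\#} uv \Rightarrow x \mathbin{\#} u \vee y \mathbin{\#} v$.
```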
4. An improved clustering method using particle swarm optimization algorithm and mitochondrial fusion model (PSO-MFM).
- Author
-
Nasef, Mohammed M., El Kafrawy, Passent M., and Hashim, Amal
- Subjects
TIME complexity, MITOCHONDRIA, DISTRIBUTED computing, PARTICLE swarm optimization, COMPUTER science, PARALLEL programming, LOTKA-Volterra equations - Abstract
Computational models are foundational concepts in computer science; many of these models, such as P systems, are based on natural biological processes. P systems provide a broad framework for a variety of data mining concepts, including models of data clustering approaches. Data clustering is a widely used technique for analyzing data based on its structure. In this paper, the proposed model (PSO-MFM) combines the Particle Swarm Optimization (PSO) algorithm with a Mitochondrial Fusion Model to overcome some constraints of clustering techniques. The model solves the clustering problem with particle swarms governed by mutual dynamic rules. It can find the best cluster centers for a data set and improve clustering performance by exploiting the distributed parallel computing concept behind the mutual dynamic rules of the mitochondrial fusion model. The comparative results demonstrate that the proposed strategy outperforms competing models in clustering accuracy and stability, and is the most efficient in time complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
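The PSO half of the hybrid described above can be sketched as follows. This is an illustrative plain-PSO clustering sketch on hypothetical 1-D toy data, not the authors' PSO-MFM model, which adds the mitochondrial-fusion dynamic rules on top:

```python
# Plain PSO searching for k cluster centers (1-D toy data, illustrative only).
import random

def sse(centers, data):
    # Within-cluster sum of squared errors: each point joins its nearest center.
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def pso_cluster(data, k=2, particles=20, iters=60, seed=0):
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    pos = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(particles)]
    vel = [[0.0] * k for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda c: sse(c, data))[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(k):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if sse(pos[i], data) < sse(pbest[i], data):
                pbest[i] = pos[i][:]
                if sse(pbest[i], data) < sse(gbest, data):
                    gbest = pbest[i][:]
    return sorted(gbest)

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]   # two obvious clusters around 1 and 5
centers = pso_cluster(data)
```

The paper's contribution replaces the purely social/cognitive velocity update with mutual dynamic rules modeled on mitochondrial fusion; the skeleton above only shows the baseline PSO search that those rules refine.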
5. Special issue on electromagnetic fields in mechatronics, computer sciences, electrical and electronic engineering.
- Author
-
Barba, Paolo Di, Mognaschi, Maria Evelina, and Wiak, Sławomir
- Subjects
ELECTRICAL engineering, COMPUTER science, ELECTROMAGNETIC fields, MECHATRONICS, SCIENTIFIC computing, SYNCHRONOUS generators, PERMANENT magnet generators - Abstract
IJAEM Guest Editors: Paolo Di Barba (1), Maria Evelina Mognaschi (1), Sławomir Wiak (2); (1) University of Pavia, Pavia, Italy; (2) Łódź University of Technology, Łódź, Poland. Accordingly, in this special issue (SI) of the IJAEM, a collection of 14 papers, selected after a peer review procedure, covers the main subjects of interest for the community. The worldwide community active in the broad area of computational electromagnetics - which has grown substantially in the last decades - gathers academic and industrial researchers who use field-based numerical models with the manifold scope of designing new devices, predicting the behavior of prototypes, or interpreting measurements. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
6. Analyzing the generalizability of the network-based topic emergence identification method.
- Author
-
Alam, Mehwish, Buscaldi, Davide, Cochez, Michael, Osborne, Francesco, Recupero, Diego Reforgiato, Sack, Harald, Jung, Sukhwan, and Segev, Aviv
- Subjects
MACHINE learning, COMPUTER science, NEIGHBORHOODS, IDENTIFICATION - Abstract
Topic evolution helps the understanding of current research topics and their histories by automatically modeling and detecting the set of shared research fields in academic publications as topics. This paper provides a generalized analysis of the topic evolution method for predicting the emergence of new topics, which can operate on any dataset where topics are defined by the relationships of their past neighborhoods and extrapolated to future topics. Twenty sample topic networks were built with various fields-of-study keywords as seeds, covering domains such as business, materials, diseases, and computer science from the Microsoft Academic Graph dataset. A binary classifier was trained for each topic network using 15 structural features of emerging and existing topics and consistently achieved accuracy and F1 over 0.91 for all twenty datasets over the period 2000 to 2019. Feature selection showed that the models retained most of their performance with only one-third of the tested features. Incremental learning was tested within the same topic over time and between different topics, resulting in slight performance improvements in both cases. This indicates there is an underlying pattern to the neighbors of new topics common to research domains, likely beyond the sample topics used in the experiment. The results show that network-based new topic prediction can be applied to various research domains with different research patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
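The abstract above does not list its 15 structural features, so the following is only a toy sketch of the kind of neighborhood statistics such a classifier could consume (degree and common-neighbor count on a made-up topic graph):

```python
# Stand-in structural features over a toy topic network (hypothetical edges).
def degree(adj, node):
    # Number of neighbouring topics.
    return len(adj[node])

def common_neighbours(adj, a, b):
    # Topics shared by the neighbourhoods of a and b.
    return len(adj[a] & adj[b])

adj = {
    'deep learning':     {'neural network', 'gpu'},
    'neural network':    {'deep learning', 'gpu'},
    'gpu':               {'deep learning', 'neural network'},
    'quantum computing': {'gpu'},
}
d = degree(adj, 'deep learning')
cn = common_neighbours(adj, 'deep learning', 'neural network')
```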
7. A top-level model of case-based argumentation for explanation: Formalisation and experiments
- Author
-
Prakken, Henry and Ratsma, Rosa
- Subjects
Linguistics and Language, Explaining machine learning, Computer science, Computer Science Applications, Epistemology, Argumentation theory, Computational Mathematics, Case-based reasoning, Artificial Intelligence, Argumentation - Abstract
This paper proposes a formal top-level model of explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the approach is motivated by legal decision making, it also applies to other kinds of decision making, such as commercial decisions about loan applications or employee hiring, as long as the outcome is binary and the input conforms to this paper’s factor- or dimension format. The model is top-level in that it can be extended with more refined accounts of similarities and differences between cases. It is shown to overcome several limitations of similar argumentation-based explanation models, which only have binary features and do not represent the tendency of features towards particular outcomes. The results of the experimental evaluation studies indicate that the model may be feasible in practice, but that further development and experimentation is needed to confirm its usefulness as an explanation model. Main challenges here are selecting from a large number of possible explanations, reducing the number of features in the explanations and adding more meaningful information to them. It also remains to be investigated how suitable our approach is for explaining non-linear models.
- Published
- 2022
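The a fortiori step at the heart of factor-based case comparison can be sketched in a few lines. This is a simplification of the paper's top-level model (which also handles dimensions and the tendencies of features), and the factor names are hypothetical:

```python
# A precedent decided for some outcome applies a fortiori to a focus case if
# the focus case is at least as strong for that outcome: it contains all the
# precedent's outcome-favouring factors and no extra opposing factors.
def forces_same_outcome(focus, precedent, pro):
    """focus / precedent: sets of factor names; pro maps a factor to the side
    it favours ('win' or 'lose') relative to the precedent's outcome."""
    prec_pro  = {f for f in precedent if pro[f] == 'win'}
    prec_con  = {f for f in precedent if pro[f] == 'lose'}
    focus_pro = {f for f in focus if pro[f] == 'win'}
    focus_con = {f for f in focus if pro[f] == 'lose'}
    return prec_pro <= focus_pro and focus_con <= prec_con

# Hypothetical loan-application factors (the paper mentions loan decisions).
pro = {'reliable_income': 'win', 'collateral': 'win', 'prior_default': 'lose'}
precedent = {'reliable_income', 'prior_default'}               # decided: grant
focus = {'reliable_income', 'collateral', 'prior_default'}
applies = forces_same_outcome(focus, precedent, pro)
```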
8. Fuzzy based inference system with ensemble classification based intrusion detection system in MANET.
- Author
-
Arthi, A., Beno, A., Sharma, S., and Sangeetha, B.
- Subjects
AD hoc computer networks, FUZZY logic, DENIAL of service attacks, ROUTING algorithms, SUPPORT vector machines, COMPUTER science - Abstract
Mobile ad hoc networks (MANETs) have become one of the hottest research areas in computer science, including in military and civilian applications. Such applications face a variety of security threats, particularly in unattended environments. An intrusion detection system (IDS) must be in place to ensure the security and reliability of MANET services. The IDS must be compatible with the characteristics of MANETs and competent at discovering the largest number of potential security threats. In this work, a specialized dataset for MANETs is implemented to identify and classify three types of Denial of Service (DoS) attacks: blackhole, grayhole and flooding attacks. The work utilizes a cluster-based routing algorithm (CBRA) in the MANET. A simulation is run to gather data, which is then processed in Java to create eight attributes for the specialized dataset. A Mamdani fuzzy-based inference system (MFIS) is used to label the dataset. Furthermore, an ensemble classification technique is trained on the dataset to discover and classify the three types of attacks. The proposed ensemble has six base classifiers, namely C4.5, the Fuzzy Unordered Rule Induction Algorithm (FURIA), Multilayer Perceptron (MLP), Multinomial Logistic Regression (MLR), Naive Bayes (NB) and Support Vector Machine (SVM). The experimental results demonstrate that MFIS with the ensemble classification technique enhances security in MANETs by modeling the interactions between a malicious node and a number of legitimate nodes. This is suitable for future work on multilayer security problems in MANETs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
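The abstract does not say how the six base classifiers are combined; majority voting is one common ensemble rule and is assumed in the sketch below. The "classifiers" here are stand-in threshold functions, not trained models:

```python
# Majority-vote combination over stand-in base classifiers (assumed rule).
from collections import Counter

def ensemble_predict(classifiers, sample):
    # Each classifier maps a feature dict to a label; the mode wins.
    votes = Counter(clf(sample) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Hypothetical stand-ins for three of the six base classifiers.
c45   = lambda s: 'blackhole' if s['drop_rate'] > 0.9 else 'normal'
furia = lambda s: 'blackhole' if s['drop_rate'] > 0.8 else 'normal'
mlp   = lambda s: 'flooding' if s['pkt_rate'] > 100 else 'normal'

label = ensemble_predict([c45, furia, mlp], {'drop_rate': 0.95, 'pkt_rate': 10})
```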
9. SDG progress assessment; comparing apples with what?
- Author
-
Arman Bidarbakhtnia
- Subjects
Economics and Econometrics, Computer science, Statistics, Probability and Uncertainty, Management Information Systems - Abstract
This paper looks at a selected number of metrics developed by different international organizations for measuring progress towards the Sustainable Development Goals (SDGs) and aims to shed light on differences and highlight where harmonization is most necessary. It shows that inconsistency in results is more likely to be driven by different interpretations of concepts, not methodologies, emphasizing that this has to be a priority for any harmonization to succeed. The paper provides a set of principles for orchestrating SDG progress assessment efforts across international organizations.
- Published
- 2022
10. An extended evidential reasoning approach with confidence interval belief structure
- Author
-
Jing Wang and Liying Yu
- Subjects
Statistics and Probability, Computer science, Belief structure, General Engineering, Evidential reasoning approach, Confidence interval, Artificial Intelligence - Abstract
In Dempster-Shafer theory, the belief structure plays a key role: it provides a useful framework for representing information about uncertain variables. The Basic Probability Assignment (BPA) is its most important component, and it is difficult to determine due to the uncertainty of the information. Generally, there are two ways to obtain a BPA in evidential theory. One is the subjective judgment of an expert's experience, where the Interval Belief Structure (IBS) can handle the fuzziness and uncertainty of the expert's judgment. The other is an objective calculation by sampling existing data, in which the BPA is viewed as a point estimate. One contribution of this paper is therefore the development of definitions and theories of the Confidence Interval Belief Structure (CIBS) to describe BPAs in Dempster-Shafer theory; a CIBS can give a range of population parameter values and contains more information with which to handle the uncertainty and fuzziness of existing data. Building on the evidential reasoning rule for counter-intuitive behavior, another contribution of this paper is an extended evidential reasoning approach with CIBS for obtaining the combined belief degree. The main advantage of the proposed method is that it can be flexibly adjusted through appropriate errors and confidence levels. Finally, a case study on the sustainable operation of the Shanghai rail transit system verifies the feasibility and performance of the extended method.
- Published
- 2022
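The point-valued combination rule that this paper generalizes is Dempster's rule, which can be sketched directly. The sketch combines ordinary BPAs; it does not attempt the paper's confidence-interval-valued extension:

```python
# Dempster's rule of combination for two basic probability assignments.
from itertools import product

def combine(m1, m2):
    # Focal elements are frozensets; mass falling on the empty set is the
    # conflict K, and surviving masses are renormalised by 1 - K.
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

A, B = frozenset({'good'}), frozenset({'bad'})
theta = A | B                       # the frame of discernment
m1 = {A: 0.6, theta: 0.4}
m2 = {A: 0.5, B: 0.3, theta: 0.2}
m = combine(m1, m2)
```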
11. An improved low-complexity DenseUnet for high-accuracy iris segmentation network
- Author
-
Huafang Huang, Daqiang Zhang, Chang Sheng, Weibin Zhou, Tao Chen, Yang Wang, and Yangfeng Wang
- Subjects
Statistics and Probability, Computer science, Image processing and computer vision, General Engineering, Pattern recognition, Low complexity, Artificial Intelligence, Segmentation, Iris (anatomy) - Abstract
Iris segmentation is one of the most important steps in iris recognition. Current iris segmentation networks are based on convolutional neural networks (CNNs), but problems such as high complexity and insufficient accuracy remain. To solve these problems, an improved low-complexity DenseUnet based on U-net is proposed in this paper to obtain a high-accuracy iris segmentation network. The improvements are as follows: (1) a dense block module is designed that contains five convolutional layers, all of which are dilated convolutions aimed at enhancing feature extraction; (2) except for the last convolutional layer, the number of output feature maps of every convolutional layer is set to 64, which reduces the number of parameters without affecting segmentation accuracy; (3) the proposed solution has low complexity, making deployment on portable mobile devices possible. DenseUnet is evaluated on the IITD, CASIA V4.0 and UBIRIS V2.0 datasets. The experimental results show that the proposed iris segmentation network performs better than existing algorithms.
- Published
- 2022
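The abstract's point about dilated convolutions can be made concrete with receptive-field arithmetic. The dilation rates below are assumed for illustration; the abstract does not list the ones actually used:

```python
# Receptive field of a stack of stride-1 convolutions: each layer with kernel
# size k and dilation d adds (k - 1) * d pixels of context.
def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs, stride 1 throughout."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

plain   = receptive_field([(3, 1)] * 5)                         # 5 ordinary 3x3
dilated = receptive_field([(3, 1), (3, 2), (3, 4), (3, 8), (3, 16)])
```

With the same five layers (and the same parameter count), the dilated stack sees a far wider context, which is why dilation enlarges the receptive field "for free".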
12. Decoupling control of a bearingless switched reluctance motor with hybrid-rotor
- Author
-
Zeyuan Liu, Chen Mei, and Liang Zhi
- Subjects
Mechanics of Materials, Control theory, Computer science, Rotor (electric), Mechanical Engineering, Electrical and Electronic Engineering, Condensed Matter Physics, Decoupling (electronics), Switched reluctance motor, Electronic, Optical and Magnetic Materials - Abstract
In order to resolve the coupling between torque and suspension force in the traditional bearingless switched reluctance motor (BSRM), a bearingless switched reluctance motor with a hybrid rotor (HBSRM) is proposed in this paper. The HBSRM discussed here has a twelve-pole stator and an eight-pole hybrid rotor composed of a cylindrical rotor and a salient rotor. The magnetic pulling force between the cylindrical rotor and the stator is used to independently levitate the shaft, and that between the salient rotor and the stator is used to separately rotate the rotor. The HBSRM thus not only removes the restriction on the effective output region between torque and suspension force found in the traditional BSRM, but also facilitates decoupling algorithm design and simplifies the levitation control of this bearingless motor. First, the topology, operating mechanism and mathematical model of the proposed HBSRM are introduced. Then the no-load decoupling control and torque ripple of the traditional BSRM and the HBSRM are compared. Finally, the load decoupling control characteristics of the HBSRM are presented and verified by simulation analysis.
- Published
- 2022
13. Performance measurement of decision making units through interval efficiency with slacks-based measure: an application to tourist hotels in Taipei
- Author
-
Ruiyi Zhang, Qingxian An, and Yongchang Shen
- Subjects
Statistics and Probability, Measure (data warehouse), Artificial Intelligence, Computer science, Statistics, General Engineering, Performance measurement, Interval (mathematics) - Abstract
Data envelopment analysis (DEA) is widely used to evaluate the performance of a group of homogeneous decision making units (DMUs). To account for uncertainty, interval DEA has been introduced to handle a wider range of situations. In this paper, an interval efficiency method based on the slacks-based measure is proposed to solve uncertain problems in DEA. First, the maximum and minimum efficiency values of the evaluated DMU are calculated from the furthest and closest distances, respectively, from the evaluated DMU to the projection points on the Pareto-efficient frontier. Then, the AHP method is used for a full ranking of the DMUs. The paper uses the pairwise comparison relationship between each pair of DMUs to construct the interval multiplicative preference relations (IMPRs) matrix. If the matrix does not meet the consistency condition, a method to obtain consistent IMPRs is introduced. From the consistent judgment matrix, a full ranking of the DMUs is obtained. Finally, we apply our method to the performance evaluation of 12 tourist hotels in Taipei in 2019.
- Published
- 2022
14. An object detection network based on YOLOv4 and improved spatial attention mechanism
- Author
-
Long Yu, Liqiang Zhang, Shengwei Tian, Xinyu Zhang, and Zhixiong Chen
- Subjects
Statistics and Probability, Artificial Intelligence, Computer science, General Engineering, Computer vision, Mechanism (sociology), Object detection - Abstract
In recent years, research on object detection has intensified, and a large number of object detection results are applied in daily life. In this paper, we propose a more effective object detection neural network model, ENHANCE_YOLOV4. We studied the effects of several attention mechanisms on YOLOv4 and concluded that the spatial attention mechanism has the best effect. Therefore, building on previous studies, this paper introduces dilated convolution and one-by-one convolution into the spatial attention mechanism to expand the receptive field and combine channel information. Compared with CBAM and BAM, which combine spatial and channel attention, this improved spatial attention module reduces model parameters and improves detection capability. We built a new network model by embedding the improved spatial attention module in the appropriate place in YOLOv4. This paper shows that the detection accuracy of this network structure increases by 0.8% on the VOC dataset and by 7% on the COCO dataset, with only a small increase in computational cost.
- Published
- 2022
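A spatial attention gate of the kind the abstract describes pools across channels at each position, mixes the pooled maps, and gates the features with a sigmoid. The sketch below uses two scalar mixing weights in place of the paper's dilated-convolution design, so it is a structural illustration only:

```python
# Simplified spatial attention: per-position average and max over channels,
# mixed by two learned scalars (a stand-in for a 1x1/dilated convolution).
import math

def spatial_attention(fmap, w_avg=1.0, w_max=1.0, b=0.0):
    """fmap: list of channels, each a HxW list of lists. Returns (gated, gate)."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    gate = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [fmap[c][y][x] for c in range(C)]
            avg, mx = sum(vals) / C, max(vals)
            gate[y][x] = 1.0 / (1.0 + math.exp(-(w_avg * avg + w_max * mx + b)))
    gated = [[[fmap[c][y][x] * gate[y][x] for x in range(W)]
              for y in range(H)] for c in range(C)]
    return gated, gate

# One 2x2 map with 2 channels: activity only at position (0, 1).
fmap = [[[0.0, 4.0], [0.0, 0.0]], [[0.0, 2.0], [0.0, 0.0]]]
out, gate = spatial_attention(fmap)
```

The gate ends up close to 1 where channels agree on strong activations and near 0.5 (neutral) elsewhere, which is exactly the spatial re-weighting effect the module is meant to produce.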
15. Random Transformation of image brightness for adversarial attack
- Author
-
Hengjun Wang, Kaiyong Xu, Bo Yang, and Hengwei Zhang
- Subjects
Statistics and Probability, Brightness, Computer science, Transferability, Computer Vision and Pattern Recognition, General Engineering, Overfitting, Machine learning, Image (mathematics), Adversarial system, Transformation (function), Artificial Intelligence, Robustness (computer science), Deep neural networks - Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images, but make the model output inaccurate predictions. Before DNNs are deployed, adversarial attacks can thus be an important method to evaluate and select robust models in safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Based on image augmentation methods, this paper found that random transformation of image brightness can eliminate overfitting in the generation of adversarial examples and improve their transferability. In light of this phenomenon, this paper proposes an adversarial example generation method, which can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset have demonstrated the effectiveness of the aforementioned method. Whether on normally or adversarially trained networks, our method has a higher success rate for black-box attacks than other attack methods based on data augmentation. It is hoped that this method can help evaluate and improve the robustness of models.
- Published
- 2022
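The core idea — take an FGSM step on a randomly brightness-shifted copy of the input — can be sketched on a toy differentiable model. The logistic model, weights, and epsilon below are illustrative stand-ins, not the paper's ImageNet networks or its full iterative attack:

```python
# One FGSM step with a random brightness transformation (toy logistic model).
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_step(x, w, y_true, eps, rng):
    # Random brightness transformation: scale pixel intensities before the
    # gradient is computed, as in the paper's augmentation idea.
    scale = rng.uniform(0.8, 1.2)
    xt = [scale * xi for xi in x]
    # Cross-entropy gradient wrt the input for p = sigmoid(w . xt):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, xt)))
    grad = [(p - y_true) * wi * scale for wi in w]
    # FGSM: move each pixel by eps in the sign of the gradient.
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

rng = random.Random(0)
w = [2.0, -1.0, 0.5]            # stand-in model weights
x = [0.3, 0.2, 0.1]             # stand-in "image"
x_adv = fgsm_step(x, w, y_true=1.0, eps=0.1, rng=rng)
```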
16. CAMGAN: Combining attention mechanism generative adversarial networks for cartoon face style transfer
- Author
-
Shengwei Tian, Long Yu, and Tao Zhang
- Subjects
Statistics and Probability, Adversarial system, Artificial Intelligence, Computer science, Human-computer interaction, Image processing and computer vision, General Engineering, Face (sociological concept), Mechanism (sociology), Generative grammar, Style (sociolinguistics) - Abstract
In this paper, we present an approach for cartoonizing real-world close-up images of human faces. We use a generative adversarial network combined with an attention mechanism to convert real-world face pictures and cartoon-style images treated as unpaired data sets. Current image-to-image translation models can successfully transfer style and content, but some problems remain in the task of cartoonizing human faces: a human face has many details, image content easily loses those details after translation, and the quality of the generated images is defective. To deal with these problems, this paper proposes a new generative adversarial network combined with an attention mechanism. A channel attention mechanism is embedded between the up- and down-sampling layers of the generator network to convey the complete details of the underlying information without increasing the complexity of the model. Comparisons of the experimental results on three indicators (FID, PSNR and MSE) and of the model parameter sizes show that the proposed network avoids extra model complexity while achieving a good balance in the style and content conversion task.
- Published
- 2022
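A channel attention gate of the kind embedded in the generator can be sketched as a squeeze-style re-weighting: average each channel to one value, gate it with a sigmoid, and rescale the channel. The weights are illustrative and this is a simplification of whatever mechanism the paper actually uses:

```python
# Squeeze-style channel attention (illustrative weights).
import math

def channel_attention(fmap, w=1.0, b=0.0):
    """fmap: list of channels, each a flat list of activations."""
    out = []
    for ch in fmap:
        squeeze = sum(ch) / len(ch)                        # global average pool
        gate = 1.0 / (1.0 + math.exp(-(w * squeeze + b)))  # excitation
        out.append([v * gate for v in ch])                 # channel rescaling
    return out

fmap = [[4.0, 4.0], [0.0, 0.0]]    # an active channel and a silent one
out = channel_attention(fmap)
```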
17. Research Strategy in 4D Printing: Disruptive vs Incremental?
- Author
-
Jean-Claude André and Frédéric Demoly
- Subjects
Computer science, General Engineering, 4D printing, Manufacturing engineering - Abstract
The paper aims at presenting 4D printing as a research-intensive technology from a critical external perspective. It provides a comprehensive discussion of the possible future of this emerging domain and also highlights the weaknesses and strengths of applying a disruptive or an incremental research strategy. Most scientific research efforts in 4D printing contribute to developing the spectrum of possible changes by investigating combinations of stimulus-responsive/smart materials with additive manufacturing technologies. Although the current results are spectacular, the performances are still far from the basic requirements expected in industry. The paper highlights the current limitations and the trend towards incremental research strategies, and argues in favor of risk-taking and disruptive research to make leaps that benefit society. Even though transgressive promises are associated with this technology, which has high growth potential in academic research where creativity and the inventions derived from it are involved, the targeted applications are far from being achieved, leading to a risk of the slow death of the field and unsatisfactory innovation. Based on this assessment, it appears that neighboring fields in a situation of possible disciplinary porosity can, with a little openness and some creativity, move away from the current highly self-centered work and try to rekindle 4D printing, provided that risk-taking in interdisciplinary research is better supported. If creativity and interdisciplinary project management for innovation are to be promoted, the organizational context must be conducive to risk-taking for this redeployment.
- Published
- 2022
18. Engineering project control of comprehensive unit price fluctuation-time limit fluctuation-process adjustment
- Author
-
Bo Wang, Jianyou Shi, Xiangtian Nie, and Zhirui Cui
- Subjects
Computational Mathematics, Unit price, Computer science, General Engineering, Process (computing), Project control, Time limit, Industrial engineering, Computer Science Applications - Abstract
When an engineering project requires a lot of time and the construction environment is complex, the unit prices of materials and personnel will change, project construction will be hindered, and the construction plan will have to be adjusted. These uncertain interference factors cause earned value analysis results that differ greatly from, or even contradict, the actual situation, so managers lack enough impartial data to control the project. To this end, this paper analyzes how price fluctuations, process adjustments and an abnormal construction period each affect the construction schedule and cost performance, and further studies the corresponding methods of earned value correction. Based on a case analysis, this paper studies a comprehensive correction method for earned value when the cost unit price changes, the working procedure is adjusted and the construction period fluctuates abnormally. The method improves earned value theory and provides a reference for engineering practice in project management.
- Published
- 2022
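The baseline earned value indicators that any such correction method starts from are textbook formulas and can be stated directly (the paper's correction terms themselves are not shown in the abstract, so they are not reproduced here):

```python
# Standard earned value management (EVM) indicators.
def evm_indicators(pv, ev, ac):
    """pv: planned value, ev: earned value, ac: actual cost (same currency)."""
    return {
        'CV':  ev - ac,    # cost variance (negative = over budget)
        'SV':  ev - pv,    # schedule variance (negative = behind schedule)
        'CPI': ev / ac,    # cost performance index (>1 = under budget)
        'SPI': ev / pv,    # schedule performance index (>1 = ahead)
    }

ind = evm_indicators(pv=100.0, ev=90.0, ac=120.0)
```

In the paper's setting, price fluctuations distort AC, process adjustments distort PV, and abnormal construction periods distort both, which is why the raw indices above need correction.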
19. CNN-based Multimodal Touchless Biometric Recognition System using Gait and Speech
- Author
-
Anirudh Chugh, Smriti Srivastava, Sumit Sarin, and Antriksh Mittal
- Subjects
Statistics and Probability, Gait (human), Biometrics, Artificial Intelligence, Computer science, Speech recognition, General Engineering, Recognition system - Abstract
Person identification using biometric features is an effective method for recognizing and authenticating the identity of a person. Multimodal biometric systems combine different biometric modalities in order to make better predictions as well as for achieving increased robustness. This paper proposes a touchless multimodal person identification model using deep learning techniques by combining the gait and speech modalities. Separate pipelines for both the modalities were developed using Convolutional Neural Networks. The paper also explores various fusion strategies for combining the two pipelines and shows how various metrics get affected with different fusion strategies. Results show that weighted average and product fusion rules work best for the data used in the experiments.
- Published
- 2022
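The two fusion rules the abstract reports as best, weighted average and product, are simple to state over per-modality class scores. The numbers below are toy values, not outputs of the paper's CNN pipelines:

```python
# Score-level fusion of two modalities' class probabilities.
def weighted_average_fusion(scores_a, scores_b, w=0.5):
    return [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]

def product_fusion(scores_a, scores_b):
    fused = [a * b for a, b in zip(scores_a, scores_b)]
    total = sum(fused)
    return [f / total for f in fused]        # renormalise to a distribution

gait   = [0.7, 0.2, 0.1]   # P(identity | gait), toy values
speech = [0.5, 0.4, 0.1]   # P(identity | speech), toy values
avg  = weighted_average_fusion(gait, speech, w=0.6)
prod = product_fusion(gait, speech)
```

Product fusion sharpens agreement between modalities (classes both pipelines favor gain mass), while weighted averaging is more tolerant of one modality being wrong; the paper's metric comparison across fusion strategies reflects exactly this trade-off.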
20. Development of wide area monitoring system for smart grid application
- Author
-
Majed A. Alotaibi, Hasmat Malik, Abdulaziz Almutairi, and Waseem Ahmad
- Subjects
Statistics and Probability, Smart grid, Wide area, Artificial Intelligence, Computer science, General Engineering, Systems engineering, Monitoring system - Abstract
A phasor measurement unit (PMU) can directly measure the positive-sequence voltage, phase and system frequency. In this paper, the design and implementation of optimum PMU placement in a power system network (PSN) is performed using five different intelligent approaches on an emulation platform. Case studies based on the IEEE 7-, 14- and 30-bus systems are performed and analyzed. In these studies, the PMU device measures voltage and current magnitudes as well as their phases, and its performance is compared with measured real signals of the PSN. PMU measurements give accurate results and add reliability to the PSN. But PMUs are not economical, so the PSN operator needs to install a minimum number of PMUs while keeping the system fully observable in a real-time scenario. For optimal PMU placement, the five intelligent methods are analyzed on the three bus systems and the obtained results are compared. For further validation of the selected PMU locations, state estimation using the WLS algorithm is performed with conventional data and PMU data on the IEEE 14- and 30-bus systems, and the voltage and phase estimation errors with and without PMU data are compared.
- Published
- 2022
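The placement problem itself is a covering problem: a PMU on a bus observes that bus and its neighbors, and full observability is wanted with as few PMUs as possible. Greedy set cover below is a simple stand-in, not one of the paper's five intelligent methods, and the topology is an illustrative toy, not the actual IEEE 7-bus data:

```python
# Greedy stand-in for the optimal PMU placement (set cover) problem.
def greedy_pmu_placement(adj):
    """adj: dict bus -> set of neighbouring buses. Returns placed PMU buses."""
    covers = {b: {b} | adj[b] for b in adj}     # what a PMU on b observes
    unobserved, placed = set(adj), []
    while unobserved:
        # Pick the bus whose PMU would newly observe the most buses.
        best = max(adj, key=lambda b: len(covers[b] & unobserved))
        placed.append(best)
        unobserved -= covers[best]
    return placed

# Toy 7-bus radial topology (illustrative only).
adj = {1: {2}, 2: {1, 3, 6, 7}, 3: {2, 4}, 4: {3, 5}, 5: {4}, 6: {2}, 7: {2}}
pmus = greedy_pmu_placement(adj)
```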
21. Sentiment classification using hybrid feature selection and ensemble classifier
- Author
-
Vanita Jain and Achin Jain
- Subjects
Statistics and Probability, Computer science, General Engineering, Feature selection, Pattern recognition, Artificial Intelligence, Classifier (UML) - Abstract
This paper presents a hybrid feature selection technique for sentiment classification. We use a genetic algorithm together with a combination of existing feature selection methods, namely Information Gain (IG), CHI Square (CHI) and the GINI Index (GINI). First, we obtain features from the three selection approaches and perform the UNION SET operation to extract a reduced feature set. Then, a genetic algorithm is applied to optimize the feature set further. This paper also presents an ensemble approach based on the error rates obtained on datasets from different domains. To test the proposed hybrid feature selection and ensemble classification approach, we consider four Support Vector Machine (SVM) classifier variants. We use UCI ML datasets from three domains: IMDB movie reviews, Amazon product reviews and Yelp restaurant reviews. The experimental results show that the proposed approach performs best on all three domain datasets. Further, we present a t-test for statistical significance between classifiers, and comparisons are also made on Precision, Recall, F1-score, AUC and model execution time.
- Published
- 2022
22. A data-driven intelligent hybrid method for health prognosis of lithium-ion batteries
- Author
-
Sandeep Kumar Sunori, Mashhood Hasan, Vimal Singh Bisht, and Hasmat Malik
- Subjects
Statistics and Probability ,chemistry ,Artificial Intelligence ,Computer science ,020209 energy ,020208 electrical & electronic engineering ,0202 electrical engineering, electronic engineering, information engineering ,General Engineering ,chemistry.chemical_element ,Lithium ,Nanotechnology ,02 engineering and technology ,Ion - Abstract
To estimate the remaining useful life (RUL) of a lithium-ion battery, its health must be assessed using online facilities. A battery's internal resistance and storage capacity play the major role in identifying its health; however, estimating both parameters is not easy and requires a lot of computational work. To overcome this constraint, an easy alternative for estimating the RUL is simulated in this paper. A power transformation method is used to form a linear relationship between the battery's health index (HI) and its actual capacity, and the result is later validated through a comparison with the Pearson and Spearman methods. The transformed health index values are then used to develop a neural network. The results demonstrated in the paper show the feasibility of the proposed technique, resulting in a great saving of time.
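The linearising role of a power transformation, checked with Pearson correlation, can be illustrated on synthetic data. The HI-versus-capacity curve and the exponent below are invented for the sketch, not real cell measurements:

```python
import numpy as np

# Sketch: linearise a nonlinear health-index (HI) vs. capacity relation with
# a power transformation, then verify the fit via Pearson correlation.
# Assumption: a synthetic cubic HI curve, not real battery data.
capacity = np.linspace(1.0, 2.0, 50)
hi = capacity ** 3.0                  # nonlinear HI vs. capacity

hi_t = hi ** (1.0 / 3.0)              # power transform with exponent 1/3

r_raw = np.corrcoef(capacity, hi)[0, 1]
r_t = np.corrcoef(capacity, hi_t)[0, 1]
print(r_raw, r_t)                     # the transformed relation is linear
```

A higher Pearson coefficient after the transform indicates the chosen exponent has straightened the curve, which is the property the neural network then exploits.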
- Published
- 2022
23. Machine learning based accident prediction in secure IoT enable transportation system
- Author
-
Somula Ramasubbareddy, Bharat S. Rawal, Bhabendu Kumar Mohanta, Debasish Jena, and Niva Mohapatra
- Subjects
Statistics and Probability ,050210 logistics & transportation ,business.industry ,Computer science ,05 social sciences ,General Engineering ,02 engineering and technology ,Computer security ,computer.software_genre ,Artificial Intelligence ,0502 economics and business ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Internet of Things ,business ,computer ,Accident (philosophy) - Abstract
Smart cities have come a long way with the development of emerging technologies such as information and communications technology (ICT), the Internet of Things (IoT), machine learning (ML), blockchain and artificial intelligence. The Intelligent Transportation System (ITS) is an important application in a rapidly growing smart city, and prediction of automotive accident severity plays a crucial role in it. The main motive behind this research is to determine the specific features that affect vehicle accident severity. In this paper, several classification models, specifically logistic regression, artificial neural networks, decision trees, K-nearest neighbors, and random forests, are implemented to predict accident severity. All the models have been verified, and the experimental results show that these classification models attain considerable accuracy. The paper also describes a secure communication architecture model for information exchange among all the components of the ITS. Finally, the paper implements a web-based message alert system that alerts users through smart IoT devices.
- Published
- 2022
24. Deterministic and probabilistic occupancy detection with a novel heuristic optimization and Back-Propagation (BP) based algorithm
- Author
-
Mashhood Hasan, Nuzhat Fatema, Saeid Gholami Farkoush, and Hitendra K. Malik
- Subjects
Statistics and Probability ,Occupancy ,Artificial Intelligence ,Heuristic (computer science) ,Computer science ,020209 energy ,0202 electrical engineering, electronic engineering, information engineering ,General Engineering ,Probabilistic logic ,020201 artificial intelligence & image processing ,02 engineering and technology ,Algorithm ,Backpropagation - Abstract
In this paper, a novel hybrid approach for deterministic and probabilistic occupancy detection is proposed, combining a novel heuristic optimization with Back-Propagation (BP) based algorithms. Generally, a BP-based neural network (BPNN) suffers from suboptimal weight and bias values, trapping in local minima and a sluggish convergence rate. In this paper, the Gravitational Search Algorithm (GSA) is implemented as a new training technique for BPNN in order to enhance its performance: it reduces the problem of trapping in local minima, improves the convergence rate, and optimizes the weight and bias values to reduce the overall error. The experimental results of BPNN with and without GSA are presented for a fair comparison. They show that BPNN-GSA outperforms standard BPNN in both the training and testing phases in terms of processing speed and convergence rate, while avoiding the trapping problem. The whole study is analyzed and demonstrated using the open-access R language platform. The proposed approach is validated with different numbers of hidden-layer neurons for both the BPNN and BPNN-GSA experimental studies.
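The core of GSA can be sketched on a toy minimisation problem. This is an illustration of the optimiser's mass/force mechanics on a sphere function, with hand-picked constants, not the paper's BPNN-GSA training code (which the authors implement in R):

```python
import numpy as np

# Minimal Gravitational Search Algorithm (GSA) sketch minimising a sphere
# function. Assumptions: toy dimensions, agent count and G0/alpha constants.
rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

n_agents, dim, iters = 20, 3, 100
G0, alpha, eps = 100.0, 20.0, 1e-9
pos = rng.uniform(-5, 5, (n_agents, dim))
vel = np.zeros((n_agents, dim))
history = []

for t in range(iters):
    fit = np.array([sphere(p) for p in pos])
    history.append(fit.min())
    best, worst = fit.min(), fit.max()
    m = (fit - worst) / (best - worst - eps)   # masses: best agent -> ~1
    M = m / (m.sum() + eps)
    G = G0 * np.exp(-alpha * t / iters)        # decaying gravitational constant
    acc = np.zeros_like(pos)
    for i in range(n_agents):
        for j in range(n_agents):
            if i != j:
                diff = pos[j] - pos[i]
                dist = np.linalg.norm(diff) + eps
                acc[i] += rng.random() * G * M[j] * diff / dist
    vel = rng.random((n_agents, dim)) * vel + acc
    pos = pos + vel

print(min(history))   # best sphere value seen across iterations
```

Training a BPNN with GSA amounts to replacing `sphere` with the network's error as a function of its flattened weight and bias vector.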
- Published
- 2022
25. Blockchain technology based decentralized energy management in multi-microgrids including electric vehicles
- Author
-
Chandrasekhar Yammani, Pulimamidi Meghana, and Surender Reddy Salkuti
- Subjects
Statistics and Probability ,Blockchain ,Artificial Intelligence ,business.industry ,Computer science ,020209 energy ,Distributed generation ,Distributed computing ,020208 electrical & electronic engineering ,0202 electrical engineering, electronic engineering, information engineering ,General Engineering ,02 engineering and technology ,business - Abstract
This paper proposes an energy scheduling mechanism among multiple microgrids (MGs) and also within individual MGs. Electric vehicle (EV) energy scheduling is also considered and integrated into the operation of the microgrid (MG). With the advancements in EV battery technologies, the significance of Vehicle-to-Grid (V2G) is increasing tremendously, so designing strategies for EV energy management is of paramount importance. The battery degradation cost of an EV is also taken into account. A Vickrey second-price auction is used for truthful bidding, and blockchain technology is incorporated to enhance security and trust, shifting the market to a decentralized state. To encourage the MGs to generate more, a contribution index is allotted to each prosumer of an MG and to the MGs as a whole, and priority during the auction is given accordingly. The system was simulated on an IEEE 118-bus feeder consisting of 5 MGs, which in turn contain EVs and prosumers.
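The Vickrey mechanism used for truthful bidding is simple enough to sketch directly; the microgrid names and bid values below are illustrative only:

```python
# Vickrey (second-price, sealed-bid) auction sketch: the highest bidder wins
# but pays the second-highest bid, which makes truthful bidding optimal.
# Bid values are invented for illustration.
def vickrey_auction(bids):
    """Return (winner, price) for a dict of bidder -> bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

bids = {"MG1": 42.0, "MG2": 55.0, "MG3": 48.5}
winner, price = vickrey_auction(bids)
print(winner, price)   # MG2 wins but pays MG3's bid of 48.5
```

Because the price is set by the runner-up, no microgrid can improve its outcome by misreporting its valuation, which is the incentive property the paper relies on.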
- Published
- 2022
26. Real-time harmonics analysis of digital substation equipment based on IEC-61850 using hybrid intelligent approach
- Author
-
Hasmat Malik, Abdul Azeem, and Majid Jamil
- Subjects
Statistics and Probability ,business.industry ,Computer science ,020209 energy ,General Engineering ,Electrical engineering ,Digital substation ,02 engineering and technology ,IEC 61850 ,Artificial Intelligence ,Harmonics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business - Abstract
This paper proposes a hybrid intelligent approach based on empirical mode decomposition (EMD), an artificial neural network (ANN) and the J48 machine learning algorithm for real-time harmonics analysis of IEC 61850-based digital substation equipment, using explanatory input variables from a laboratory prototype's real-time recorded database. In the proposed hybrid model, these variables are first extracted, and then the power transformer harmonics of the digital substation are diagnosed to support both long-term and short-term goals and planning in the electrical power network. First, an experimental analysis is performed to validate the laboratory prototype setup using the FFT (fast Fourier transform), STFT (short-time Fourier transform) and CWT (continuous wavelet transform). Features are then extracted from the experimental dataset using EMD; the resulting intrinsic mode functions (IMFs) are used as input variables to two different diagnostic models, ANN and J48. To validate the performance and accuracy of the proposed hybrid model, a comparative analysis using ANN and J48 (with and without EMD) is performed and the results are compared. The obtained results show that the proposed hybrid diagnostic approach for harmonics analysis outperforms the alternatives.
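The FFT-based validation step can be illustrated on a synthetic power waveform: a 50 Hz fundamental plus a 20% third harmonic, sampled over a whole number of cycles. This is a toy signal, not the paper's recorded substation data:

```python
import numpy as np

# FFT-based harmonic analysis sketch on a synthetic power signal.
# Assumption: 50 Hz fundamental with a 20% third harmonic (150 Hz).
fs = 3200                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)              # 1 s window (integer cycle count)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)   # peak amplitudes
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

fundamental = spectrum[np.argmin(np.abs(freqs - 50))]
third = spectrum[np.argmin(np.abs(freqs - 150))]
print(fundamental, third)   # recovered amplitudes, ~1.0 and ~0.2
```

With an integer number of cycles in the window there is no spectral leakage, so the harmonic amplitudes are recovered almost exactly; EMD-based features then complement this by handling non-stationary content.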
- Published
- 2022
27. Kernel fuzzy C- means clustering with teaching learning based optimization algorithm (TLBO-KFCM)
- Author
-
Smriti Srivastava and Saumya Singh
- Subjects
Statistics and Probability ,0209 industrial biotechnology ,Optimization algorithm ,Computer science ,business.industry ,General Engineering ,Pattern recognition ,02 engineering and technology ,Fuzzy logic ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,Artificial Intelligence ,Kernel (statistics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Teaching learning ,business ,Cluster analysis - Abstract
Clustering is considered a major tool in data analysis, and its application across fields of science has driven advances in clustering algorithms. Traditional clustering algorithms have many defects; while these defects have been addressed, no single clustering algorithm can be considered superior. This paper proposes a new approach based on kernel fuzzy C-means clustering with a teaching-learning-based optimization algorithm (TLBO-KFCM). The kernel function used in this algorithm improves separation and makes clustering more comprehensive, while the teaching-learning-based optimization algorithm improves clustering compactness. Simulations on five data sets are performed and the results are compared with two other optimization algorithms (genetic algorithm, GA, and particle swarm optimization, PSO). The results show that the proposed clustering algorithm performs better. Another simulation on the same data sets is also performed, comparing the clustering results of TLBO-KFCM with teaching-learning-based optimization with fuzzy C-means clustering (TLBO-FCM).
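The KFCM half of the method can be sketched with alternating membership and centre updates under a Gaussian kernel. The data, kernel width and deterministic initialisation below are invented; the paper optimises the clustering with TLBO rather than these plain alternating updates:

```python
import numpy as np

# Minimal kernel fuzzy C-means (KFCM) sketch on two well-separated 1-D blobs.
# Assumptions: Gaussian kernel, fuzzifier m = 2, centres initialised at the
# data extremes; this is not the paper's TLBO-optimised variant.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(5.0, 0.1, 20)])

def kernel(a, b, sigma=1.0):
    return np.exp(-((a - b) ** 2) / (2 * sigma ** 2))

m, eps = 2.0, 1e-9
v = np.array([x.min(), x.max()])              # deterministic initial centres

for _ in range(30):
    K = kernel(x[:, None], v[None, :])        # (n, c) kernel matrix
    d2 = 1.0 - K + eps                        # kernel-induced squared distance
    inv = d2 ** (-1.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)  # memberships, rows sum to 1
    w = (u ** m) * K
    v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)   # centre update

labels = u.argmax(axis=1)
print(v)   # centres settle near the two blob means, ~0 and ~5
```

The kernel distance `1 - K(x, v)` is what distinguishes KFCM from plain FCM: it saturates for far-away points, improving separation on non-spherical data.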
- Published
- 2022
28. Hybrid optimization based PID control of ball and beam system
- Author
-
Smriti Srivastava and Vishal Srivastava
- Subjects
Statistics and Probability ,0209 industrial biotechnology ,020901 industrial engineering & automation ,Artificial Intelligence ,Control theory ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,General Engineering ,PID controller ,020201 artificial intelligence & image processing ,02 engineering and technology ,Ball and beam - Abstract
Ball and beam is a popular benchmark problem in control engineering, and various control strategies for it have been proposed in the literature. In this paper, hybrid optimization algorithms are applied to a PID controller to control the ball position and beam angle. Hybrid algorithms combine the exploration and exploitation abilities of the individual algorithms to find an optimized value of the performance index. Two hybrid algorithms, PSO-GSA and PSO-GWO, are used to tune the controller parameters, which in turn improves system performance. Simulation results show effective and efficient improvement in system performance with these hybrid algorithms. To analyze the performance of the algorithms, time-domain parameters and the mean square error (MSE) are taken as performance indices. A comparative study of these hybrid algorithms against the individual algorithms PSO, GWO and GSA has also been carried out.
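The control loop being tuned can be sketched on a linearised ball-and-beam model, where ball acceleration is taken as proportional to the commanded beam angle. The gains below are hand-picked for illustration, not the PSO-GSA/PSO-GWO-tuned values:

```python
# PID position control of a simplified, linearised ball-and-beam model.
# Assumptions: toy dynamics x'' = u (beam-angle command u drives ball
# acceleration directly) and hand-picked gains, not the paper's tuned values.
dt, steps = 0.01, 3000
kp, ki, kd = 3.0, 0.2, 4.0
setpoint = 1.0

x, vx = 0.0, 0.0                  # ball position and velocity
integral, prev_err = 0.0, setpoint

for _ in range(steps):
    err = setpoint - x
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = kp * err + ki * integral + kd * deriv   # commanded beam angle
    vx += u * dt                                # toy dynamics: x'' = u
    x += vx * dt

print(x)   # ball position settles close to the 1.0 setpoint
```

A metaheuristic tuner such as PSO-GSA would wrap this simulation, treating `(kp, ki, kd)` as the particle position and the resulting MSE as the fitness to minimise.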
- Published
- 2022
29. Product lifecycle management application selection framework based on interval-valued spherical fuzzy COPRAS
- Author
-
Mete Omerali and Tolga Kaya
- Subjects
Statistics and Probability ,0209 industrial biotechnology ,Mathematical optimization ,Computer science ,General Engineering ,02 engineering and technology ,Fuzzy logic ,Interval valued ,020901 industrial engineering & automation ,Product lifecycle ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Selection (genetic algorithm) - Abstract
Digitalization is the key trend of the Industry 4.0 revolution. Industrial companies are transforming the way they design and maintain their products and solutions, and user requirements are becoming more demanding. Competition among manufacturing companies is at its limits, driving products to become more complex. Further challenges such as faster time to market, higher quality requirements and legislation force enterprises to find new ways to design, manufacture and service their end products. Product Lifecycle Management (PLM) is a key solution for tracking the entire lifespan of a product from idea to design, design to manufacture, and manufacture to service. Beyond the complexity of products and production, selecting the right PLM solution, which will become the backbone of the enterprise, is an open problem. In this paper, a thorough literature review is conducted to analyze the most important features for selecting the right PLM solution for manufacturing firms. Moreover, to overcome the challenge of decision makers' (DM) subjective judgments, a novel interval-valued spherical fuzzy COPRAS (IVSF-COPRAS) multi-criteria decision making (MCDM) method is introduced. The paper aims to help enterprises rapidly identify the best vendor/solution based on the needs of the organization. To show its applicability, DM inputs are collected from a leading defense company where a PLM selection process is ongoing, and an industrial case study is provided to demonstrate the success of the proposed selection framework.
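The crisp (non-fuzzy) COPRAS baseline that the interval-valued spherical fuzzy method extends can be sketched on a toy decision matrix. The alternatives, weights and criterion types below are invented for illustration:

```python
import numpy as np

# Crisp COPRAS sketch for ranking three PLM alternatives on three criteria.
# Assumptions: invented decision matrix, weights, and criterion directions
# (two benefit criteria, one cost criterion) -- not the paper's case-study data.
X = np.array([[8.0, 7.0, 3.0],     # alternative A
              [6.0, 9.0, 5.0],     # alternative B
              [5.0, 5.0, 8.0]])    # alternative C
w = np.array([0.4, 0.4, 0.2])
benefit = np.array([True, True, False])

D = X / X.sum(axis=0) * w                 # weighted sum-normalisation
S_plus = D[:, benefit].sum(axis=1)        # benefit-criteria totals
S_minus = D[:, ~benefit].sum(axis=1)      # cost-criteria totals

# Relative significance:
# Q_i = S+_i + min(S-) * sum(S-) / (S-_i * sum(min(S-) / S-_j))
Q = S_plus + S_minus.min() * S_minus.sum() / (
    S_minus * (S_minus.min() / S_minus).sum())
ranking = np.argsort(-Q)
print(ranking)   # alternative indices, best first
```

The fuzzy extension replaces each crisp entry of `X` with an interval-valued spherical fuzzy number and defuzzifies before (or while) computing `Q`.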
- Published
- 2021
30. A state-of-the-art survey on spherical fuzzy sets
- Author
-
Metin Dağdeviren, Barış Özkan, Mehmet Kabak, and Eren Özceylan
- Subjects
Statistics and Probability ,0209 industrial biotechnology ,020901 industrial engineering & automation ,Artificial Intelligence ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,General Engineering ,Applied mathematics ,020201 artificial intelligence & image processing ,02 engineering and technology ,State (functional analysis) ,Fuzzy logic - Abstract
In addition to the well-known fuzzy sets, a novel type of fuzzy set called the spherical fuzzy set (SFS) has recently been introduced in the literature. SFS is a generalization of existing fuzzy set structures (intuitionistic fuzzy sets, IFS; Pythagorean fuzzy sets, PFS; and neutrosophic fuzzy sets, NFS) based on three dimensions (truth, falsehood, and indeterminacy), providing a wider choice for decision-makers (DMs). Although the SFS was introduced only recently, the topic is attracting the attention of academicians at a remarkable rate. This study is an expanded version of the authors' earlier study, Ozceylan et al. [1]. A comprehensive literature review of recent and state-of-the-art papers is conducted to draw a framework of the past and to shed light on future directions, following a systematic review methodology comprising bibliometric and descriptive analyses. 104 scientific papers including SFS in their titles, abstracts or keywords are reviewed, then analyzed and categorized based on titles, abstracts, and keywords to construct a useful foundation of past research. Finally, trends and gaps in the literature are identified to clarify and suggest future research opportunities in the fuzzy logic area.
- Published
- 2021
31. Fuzzy modelling and control of project cash flows
- Author
-
Adam Zabor and Dorota Kuchta
- Subjects
Statistics and Probability ,Fuzzy modelling ,0209 industrial biotechnology ,020901 industrial engineering & automation ,Operations research ,Artificial Intelligence ,Computer science ,Control (management) ,0202 electrical engineering, electronic engineering, information engineering ,General Engineering ,020201 artificial intelligence & image processing ,Cash flow ,02 engineering and technology - Abstract
An analysis of the scientific literature on project cash flow control and fuzzy modelling shows that project cash flows are modelled using only basic approaches drawn from fuzzy theory, which may distort the credibility of the model. In this paper, we therefore propose to use the whole spectrum of fuzzy arithmetic, and to select operations that suit the nature of the cash flows in question, their dependencies and the preferences of the project manager. The analysis also shows that in practically all existing models of project cost and cash flow management, costs and cash flows are treated at a very high level of generality (without considering the various types of project, the factors influencing their variability, or signals warning of imminent cash-related problems), and estimations are not updated on an ongoing basis throughout the duration of the project. The results of a survey of 100 project managers show that this simplistic view of project cash flows may be distorting, and cannot guarantee the development of an efficient project cost and cash flow control system. We propose an approach that at least partially compensates for these drawbacks: it differentiates between types of project cash flows and the factors and triggers affecting changes in them. Two case studies are used for an initial verification of the approach. The paper concludes with suggestions for further research perspectives.
- Published
- 2021
32. Process design and capability analysis using penthagorean fuzzy sets: surgical mask production machines comparison
- Author
-
Elif Haktanır and Cengiz Kahraman
- Subjects
Statistics and Probability ,0209 industrial biotechnology ,Computer science ,Fuzzy set ,General Engineering ,Process design ,02 engineering and technology ,computer.software_genre ,Surgical mask ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Production (economics) ,020201 artificial intelligence & image processing ,Data mining ,computer - Abstract
Process capability analysis (PCA) is a tool for measuring a process’s ability to meet specification limits (SLs), which the customers define. Process capability indices (PCIs) are used for establishing a relationship between SLs and the considered process’s ability to meet these limits as an index. PCA compares the output of a process with the SLs through these capability indices. If the customers’ needs contain vague or imprecise terms, the classical methods are inadequate to solve the problem. In such cases, the information can be processed by the fuzzy set theory. Recently, ordinary fuzzy sets have been extended to several new types of fuzzy sets such as intuitionistic fuzzy sets, Pythagorean fuzzy sets, picture fuzzy sets, and spherical fuzzy sets. In this paper, a new extension of intuitionistic fuzzy sets, which is called penthagorean fuzzy sets, is proposed, and penthagorean fuzzy PCIs are developed. The design of production processes for COVID-19 has gained tremendous importance today. Surgical mask production and design have been chosen as the application area of the penthagorean fuzzy PCIs developed in this paper. PCA of the two machines used in surgical mask production has been handled under the penthagorean fuzzy environment.
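The crisp capability indices that the penthagorean fuzzy PCIs generalise can be computed in a few lines. The specification limits and sample data below are invented stand-ins for a mask-machine output metric:

```python
import numpy as np

# Crisp process capability sketch: Cp and Cpk against specification limits.
# Assumptions: invented specification limits (LSL/USL) and a synthetic,
# normally distributed sample -- not the paper's mask-machine measurements.
rng = np.random.default_rng(3)
lsl, usl = 9.0, 11.0                  # customer specification limits
sample = rng.normal(10.0, 0.25, 200)  # measured process output

mu, sigma = sample.mean(), sample.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                  # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # capability with centring
print(round(cp, 2), round(cpk, 2))   # values > 1 mean the process fits the SLs
```

The fuzzy extensions replace the crisp `lsl`, `usl`, `mu` and `sigma` with fuzzy numbers so that vague customer requirements propagate into fuzzy Cp/Cpk values.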
- Published
- 2021
33. Forecasting serve performance in professional tennis matches
- Author
-
Jacob Gollub
- Subjects
Computer science ,Data science - Abstract
Many research papers on tennis match prediction use a hierarchical Markov Model. To predict match outcomes, this model requires input parameters for each player’s serving ability. While these parameters are often computed directly from each player’s historical percentages of points won on serve and return, doing so fails to address bias due to limited sample size and differences in strength of schedule. In this paper, we explore a handful of novel approaches to forecasting serve performance that specifically address these limitations. By applying an Efron-Morris estimator, we provide a means to robustly forecast outcomes when players have limited match data over the past year. Next, through tracking expected serve and return performance in past matches, we account for strength of schedule across all points in a player’s match history. Finally, we demonstrate a new way to synthesize historical serve data with the predictive power of Elo ratings. When forecasting serve performance across 7,622 ATP tour-level matches from 2014-2016, all three of these proposed methods outperformed Barnett and Clarke’s standard approach.
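The shrinkage idea behind the Efron-Morris estimator can be sketched with a simple empirical-Bayes rule: players with few observed serve points are pulled harder toward a tour-wide mean. The counts and prior parameters below are invented, and the beta-binomial-style prior is an assumed simplification of the paper's estimator:

```python
# Empirical-Bayes shrinkage of serve-win percentages, in the spirit of the
# Efron-Morris estimator. Assumptions: invented point counts and a hand-set
# prior mean/strength, not values fitted to ATP data.
players = {                 # player: (points won on serve, points served)
    "A": (130, 200),        # large sample, raw 65%
    "B": (13, 20),          # same raw 65%, tiny sample
}
prior_mean, prior_strength = 0.62, 100   # assumed tour-wide prior

def shrunk_estimate(won, served):
    """Posterior-mean-style estimate: raw rate shrunk toward the prior."""
    return (won + prior_strength * prior_mean) / (served + prior_strength)

for name, (won, served) in players.items():
    print(name, won / served, round(shrunk_estimate(won, served), 4))
```

Both players serve at a raw 65%, but the small-sample player's estimate lands much closer to the 62% prior, which is exactly the bias-variance trade-off the paper exploits for players with limited match data.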
- Published
- 2021
34. An improved loop subdivision to coordinate the smoothness and the number of faces via multi-objective optimization
- Author
-
Xiantao Zeng, Yaqian Liang, Jinkun Luo, and Fazhi He
- Subjects
Mathematical optimization ,Loop subdivision ,Smoothness (probability theory) ,Computer science ,020207 software engineering ,02 engineering and technology ,Multi-objective optimization ,Computer Science Applications ,Theoretical Computer Science ,Computational Theory and Mathematics ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
3D mesh subdivision is essential for geometry modeling of complex surfaces, which benefits many important applications in the fields of multimedia such as computer animation. However, in the ordinary adaptive subdivision, with the deepening of the subdivision level, the benefits gained from the improvement of smoothness cannot keep pace with the cost caused by the incremental number of faces. To mitigate the gap between the smoothness and the number of faces, this paper devises a novel improved mesh subdivision method to coordinate the smoothness and the number of faces in a harmonious way. First, this paper introduces a variable threshold, rather than a constant threshold used in existing adaptive subdivision methods, to reduce the number of redundant faces while keeping the smoothness in each subdivision iteration. Second, to achieve the above goal, a new crack-solving method is developed to remove the cracks by refining the adjacent faces of the subdivided area. Third, as a result, the problem of coordinating the smoothness and the number of faces can be formulated as a multi-objective optimization problem, in which the possible threshold sequences constitute the solution space. Finally, the Non-dominated sorting genetic algorithm II (NSGA-II) is improved to efficiently search the Pareto frontier. Extensive experiments demonstrate that the proposed method consistently outperforms existing mesh subdivision methods in different settings.
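The Pareto filtering at the heart of NSGA-II can be sketched for the paper's two objectives, smoothness error and face count (both minimised). The candidate objective values below are toy numbers, not measured subdivision results:

```python
# Non-dominated (Pareto) filtering sketch for two minimised objectives:
# (smoothness error, face count). Candidate values are illustrative only.
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and not equal."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

candidates = [(0.10, 5000), (0.08, 9000), (0.10, 7000), (0.05, 12000)]
front = [c for c in candidates
         if not any(dominates(other, c) for other in candidates)]
print(front)   # Pareto-optimal smoothness/face-count trade-offs
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection; the surviving front is the set of threshold sequences offering the best trade-offs.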
- Published
- 2021
35. mHealth wearables and smartphone health tracking apps: A changing privacy landscape
- Author
-
Christine Suver and Ellen Kuwana
- Subjects
Computer science ,business.industry ,Internet privacy ,Wearable computer ,Tracking (education) ,Library and Information Sciences ,business ,mHealth ,Computer Science Applications ,Information Systems - Abstract
The use of digital health technologies is changing the ways people monitor and manage their health and well-being. There is increasing interest in using wearables and smartphone health apps to collect health-related data, a domain within digital health referred to as mHealth. Wearables and health apps can continuously monitor metrics such as physical activity, sleep, and heart rate, to name a few. These mHealth data can supplement the measures taken by healthcare professionals during regular doctor's visits, with mHealth having the advantage of a much greater frequency of collection. But what are the privacy considerations with mHealth? This paper explores global data privacy protections, enumerates principles to guide regulations, discusses the tension between anonymity and data utility, and proposes ways to improve how we as a society talk about and safeguard data privacy. We include brief discussions of the inadvertent or unintended consequences of digital data collection and the trade-off between privacy and public health interests, as illustrated by COVID-19 contact tracing apps. The paper concludes by offering suggestions for improving privacy and confidentiality notices.
- Published
- 2021
36. CycleGAN based confusion model for cross-species plant disease image migration
- Author
-
Cui Xiaohui, Ying Yongzhi, and Chen Zhi-bo
- Subjects
Statistics and Probability ,business.industry ,Computer science ,General Engineering ,food and beverages ,Pattern recognition ,Plant disease ,Image (mathematics) ,Artificial Intelligence ,medicine ,Artificial intelligence ,medicine.symptom ,business ,Confusion - Abstract
The identification and classification of plant diseases is of great significance to ecological protection, and deep learning methods have made a great deal of progress in identifying common diseases of specific plants. However, when faced with the same diseases in other plant species, current deep learning methods struggle to identify them effectively and accurately due to insufficient or low-quality training data. Inspired by the advantages of GANs for dataset expansion, we propose a CycleGAN-based confusion model in this paper. The GAN framework is improved by adding noise labels that are learned jointly during the training stage, which migrates data on common plant diseases to plants with insufficient or low-quality data. To evaluate the quality of the migrated training datasets across different GAN approaches, we introduce migration-image quality indicators such as MMD, FID and EMD. We compare our model with other GAN models, and the experimental results show that the proposed model obtains better results in the migration process, making it more effective for the identification of cross-species plant diseases.
- Published
- 2021
37. Combination of improved Harris’s hawk optimization with fuzzy to improve clustering in wireless sensor network
- Author
-
A. Gopi Saminathan, V. Eswaramoorthy, V. Nivedhitha, and P. Thirumurugan
- Subjects
Statistics and Probability ,Artificial Intelligence ,Computer science ,Real-time computing ,General Engineering ,Harris's Hawk ,Cluster analysis ,Fuzzy logic ,Wireless sensor network - Abstract
A wireless sensor network (WSN) is divided into groups of sensor nodes for efficient transmission of data from the point of measurement to the sink. Clustering keeps the network energy-efficient and stable, but an intelligent mechanism is needed to cluster the sensors and to find an organizer node, the cluster head. The cluster head assembles data from its constituent member nodes, finds an optimal route to the sink of the network, and transfers the data. The nomination of the cluster head is crucial, since energy utilization is a major challenge for sensor nodes deployed in a hostile environment. In this paper, a fuzzy-based Improved Harris's Hawk Optimization algorithm (IHHO) is proposed to select a capable cluster head for data communication. The fuzzy inference model considers residual energy, distance to the sink node, and the proximity of nodes to the cluster head as input factors, and decides whether a candidate node is eligible to become a cluster head. The IHHO tunes this logic into an energy-efficient network with less complexity and more ease. The novelty of the paper lies in applying the hawk-pack technique based on fuzzy rules. Simulations show that the fuzzy-based IHHO reduces node deaths, thereby enhancing network lifetime.
- Published
- 2021
38. Hybrid spectrum management using integrated fuzzy and femtocells in cognitive domain
- Author
-
K. Revathy, Rengarajan Amirtharajanr, Padmapriya Praveenkumar, and K. Thenmozhi
- Subjects
Statistics and Probability ,Artificial Intelligence ,Cognitive domain ,business.industry ,Computer science ,General Engineering ,Femtocell ,Artificial intelligence ,business ,Spectrum management ,Fuzzy logic - Abstract
In today's pandemic situation, spectrum access and smart usage matter to every citizen in the world. Work from home for professionals, online classes for students, games for kids, webinars for the teaching fraternity and more run largely over indoor coverage, without losing pace, thanks to the smart spectrum coverage of the network service providers. This paper provides an add-on facility to the existing wireless infrastructure to deliver a better user experience in this demanding routine. Unused spectrum holes in the cognitive domain are efficiently handled by (i) an adaptive spectrum management technique, (ii) fuzzy-inference-system-based spectrum administration, and (iii) hybrid cognitive femtocell approaches, based on user demand and applications. The proposed integrated cognitive femtocell and fuzzy-based approach reduces indoor coverage problems and enhances the throughput of macrocell users by allowing adaptive, demand-based spectrum management, thereby eliminating spectrum underlay and overlay problems during critical conditions. In cognitive femtocell networks, the access points are equipped with cognitive radio, which can determine spectrum availability dynamically from macrocells and nearby femto access points, and adjust their radiating parameters to evade interference from macrocells and neighbouring femtocells, thereby maximising the spectrum band's overall utility.
- Published
- 2021
39. How subjective information with AI for digital revolution
- Author
-
Wei Zhu and Shaopei Lin
- Subjects
Statistics and Probability ,Artificial Intelligence ,Computer science ,General Engineering ,Digital Revolution ,Visual arts - Abstract
This paper summarizes the relationship of subjective information with artificial intelligence (AI) technology and clarifies the role and position of subjective information in AI. The defining characteristic of the digital era is the "softening of theories and hardening of experiences". Subjective information is widely used in the digital revolution to transform qualitative estimations into quasi-quantitative solutions, such as empirical methods in decision making for quantitative management; it acts as the transferor for realizing this. The paper presents a theoretical formulation of how subjective information is digitized through the "Fuzzy-AI Model" for the digital revolution, which has become a universal problem solver for using AI technology to quantify degrees of uncertainty in decision-making and fuzzy estimation. Besides, "Big Data" searching depends heavily on the completeness of its source information; a "subjective information" approach can directly translate human thinking, or the internal laws of complicated objective events, into an explicit digital form, providing the completeness of source information needed to make correct and comprehensive "Big Data" prediction possible. Practical case studies are presented.
- Published
- 2021
40. End-to-end dehazing of traffic sign images using reformulated atmospheric scattering model
- Author
-
Zhaohui Liu, Chao Wang, and Runze Song
- Subjects
Statistics and Probability ,End-to-end principle ,Artificial Intelligence ,Computer science ,General Engineering ,Diffuse sky radiation ,Traffic sign ,Computational physics - Abstract
As an advanced machine vision task, traffic sign recognition is of great significance to the safe driving of autonomous vehicles, and haze seriously degrades its performance. This paper proposes a dehazing network built from multi-scale residual blocks that significantly improves the recognition of traffic signs in hazy weather. First, we introduce the idea of residual learning and design an end-to-end multi-scale feature information fusion method. Second, using subjective visual effects and objective evaluation metrics such as the Visibility Index (VI) and Realness Index (RI), chosen for the characteristics of real-world environments, we compare the method with well-performing traditional and deep learning dehazing methods. Finally, the paper combines image dehazing with traffic sign recognition, applying the proposed algorithm to dehaze traffic sign images captured in real-world hazy weather. Experiments show that the algorithm improves the performance of traffic sign recognition in hazy weather and fulfils the requirements of real-time image processing. They also demonstrate the effectiveness of the reformulated atmospheric scattering model for dehazing traffic sign images.
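The underlying atmospheric scattering model is I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy observation, J the scene radiance, t the transmission and A the atmospheric light. The paper learns a reformulation of this model end to end; as a minimal sketch, here is only the per-pixel inversion assuming t and A are already known:

```python
def dehaze_pixel(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1-t) for one
    intensity value.  t is clamped away from zero so dense-haze pixels do
    not amplify noise; the result is clipped to the valid range [0, 1]."""
    t = max(t, t_min)
    J = (I - A) / t + A
    return min(max(J, 0.0), 1.0)

def dehaze_image(image, transmission, A):
    """Apply the inversion over a 2-D list of grey-level pixels."""
    return [[dehaze_pixel(I, t, A)
             for I, t in zip(row_i, row_t)]
            for row_i, row_t in zip(image, transmission)]
```

With t = 1 (no haze) the inversion returns the input unchanged, which is a quick sanity check on any implementation of the model.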
- Published
- 2021
41. Non-diacritized Arabic speech recognition based on CNN-LSTM and attention-based models
- Author
-
Abdelaziz A. Abdelhamid, Zaki Taha Fayed, Islam Hegazy, and Hamzah A. Alsayadi
- Subjects
Statistics and Probability ,Artificial Intelligence ,Computer science ,Speech recognition ,General Engineering ,Arabic speech recognition - Abstract
The Arabic language has a set of sound marks called diacritics, which play an essential role in the meaning of words and their articulation; changing some diacritics changes the context of a sentence. However, the presence of these marks in the corpus transcription affects the accuracy of speech recognition. In this paper, we investigate the effect of diacritics on Arabic speech recognition based on end-to-end deep learning. The applied end-to-end approach combines CNN-LSTM and attention-based techniques as implemented in the state-of-the-art Espresso framework built on PyTorch. To the best of our knowledge, the CNN-LSTM with attention approach has not previously been used for Arabic automatic speech recognition (ASR). To fill this gap, this paper proposes a new approach based on a CNN-LSTM with attention method for Arabic ASR. The language model in this approach is trained using RNN-LM and LSTM-LM on non-diacritized transcriptions of the speech corpus. The Standard Arabic Single Speaker Corpus (SASSC), with diacritics omitted, is used to train and test the deep learning model. Experimental results show that removing diacritics decreases the out-of-vocabulary rate and the perplexity of the language model. In addition, the word error rate (WER) improves significantly compared with diacritized data, with an average WER reduction of 13.52%.
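The WER improvements the abstract reports are measured with the standard word-level edit distance; a minimal, self-contained sketch of that metric:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / len(reference),
    computed with the standard Levenshtein dynamic programme over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

A 13.52% average reduction means, for example, a system at 0.30 WER dropping to roughly 0.26 on the same test set.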
- Published
- 2021
42. Trends in web data extraction using machine learning
- Author
-
Sudhir Kumar Patnaik and C. Narendra Babu
- Subjects
Data extraction ,Artificial Intelligence ,Computer Networks and Communications ,business.industry ,Computer science ,Artificial intelligence ,Machine learning ,computer.software_genre ,business ,computer ,Software - Abstract
Web data extraction has developed significantly since its inception in the early nineties, evolving from the simple manual extraction of data from web pages and documents, to automated extraction, to intelligent extraction using machine learning algorithms, tools and techniques. Extraction is one of the key components of the end-to-end life cycle of the web data extraction process, which includes navigation, extraction, data enrichment and visualization. This paper traces the journey of web data extraction over the years, highlighting the evolution of tools, techniques, frameworks and algorithms for building intelligent web data extraction systems. The paper also sheds light on challenges, opportunities for future research and emerging trends in web data extraction, with a specific focus on machine learning techniques. Both traditional and machine learning approaches to manual and automated web data extraction are evaluated experimentally, and results are reported for several use cases that demonstrate the challenges extraction faces when a website layout changes. The paper introduces novel ideas such as self-healing capability in web data extraction and proactive error detection in the event of layout changes as areas of future research. This perspective will help readers gain deeper insight into the present and future of web data extraction.
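The self-healing idea can be illustrated with a tiny wrapper that extracts values by a configured hook (here a `class` attribute) and, when a layout change removes that hook, falls back to a content pattern. This sketch uses only the standard library and invents its own names; it is not the paper's system.

```python
import re
from html.parser import HTMLParser

class HookExtractor(HTMLParser):
    """Minimal wrapper-style extractor: collect the text of elements
    whose class attribute matches the configured hook."""
    def __init__(self, hook_class):
        super().__init__()
        self.hook_class = hook_class
        self.capture = False
        self.values = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == self.hook_class:
            self.capture = True

    def handle_data(self, data):
        if self.capture:
            self.values.append(data.strip())
            self.capture = False

def extract_prices(html, hook_class="price"):
    parser = HookExtractor(hook_class)
    parser.feed(html)
    if parser.values:
        return parser.values
    # "Self-healing" fallback: if the layout changed and the hook class
    # vanished, recover via a currency pattern anywhere in the page.
    return re.findall(r"\$\d+(?:\.\d{2})?", html)
```

The fallback is exactly the failure mode the survey highlights: a brittle, hook-based wrapper breaks silently on a redesign, whereas a content-based (or learned) extractor degrades gracefully.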
- Published
- 2021
43. Application of an artificial intelligence algorithm model of memory retrieval and roaming in sorting Chinese medicinal materials
- Author
-
Chengbing Tan and Qun Chen
- Subjects
Computational Mathematics ,Computer science ,business.industry ,General Engineering ,Sorting ,Artificial intelligence ,Roaming ,business ,Computer Science Applications - Abstract
Inspired by the development of human intelligence, this paper proposes a computational model of autobiographical memory (AM) with a three-layer network structure. The bottom layer encodes event-specific knowledge comprising the 5W1H elements (who, what, when, where, why, how) and provides retrieval clues to the middle layer, which encodes the related events; the top layer encodes event sets. Following a bottom-up memory search, the corresponding events and event sets are identified in the middle and top layers respectively. The AM model can also simulate human memory roaming through a process of rule-based memory retrieval. The proposed model not only supports robust and flexible memory retrieval, but also responds better to noisy retrieval cues than the commonly used keyword-query retrieval models, and can imitate the roaming phenomenon observed in human memory.
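To make the three-layer structure concrete, here is a deliberately simplified sketch (the events, scoring rule and roaming rule are all invented for illustration, not taken from the paper): 5W1H clues score candidate events, the best-matching events surface their event sets, and roaming hops between events that share a set.

```python
# Bottom layer: event-specific 5W1H knowledge.  Middle/top layers:
# events and event sets.  All contents below are hypothetical.
EVENTS = {
    "e1": {"who": "alice", "what": "lecture", "where": "campus",
           "when": "2020", "why": "teaching", "how": "in-person"},
    "e2": {"who": "alice", "what": "defence", "where": "campus",
           "when": "2021", "why": "graduation", "how": "online"},
}
EVENT_SETS = {"university-life": {"e1", "e2"}}

def retrieve(clues):
    """Bottom-up retrieval: score events by how many 5W1H clues match,
    then surface every event set containing a best-scoring event.
    Partial matches make the search tolerant of noisy cues."""
    scored = [(sum(ev.get(k) == v for k, v in clues.items()), eid)
              for eid, ev in EVENTS.items()]
    best = max(s for s, _ in scored)
    hits = {eid for s, eid in scored if s == best and s > 0}
    sets = [name for name, members in EVENT_SETS.items() if hits & members]
    return hits, sets

def roam(start, steps=2):
    """Toy memory roaming: repeatedly hop to another event that shares
    an event set with the current one."""
    path = [start]
    for _ in range(steps):
        pool = set().union(*(m for m in EVENT_SETS.values() if path[-1] in m))
        nxt = sorted(pool - {path[-1]})
        if not nxt:
            break
        path.append(nxt[0])
    return path
```

Even with one wrong clue ("when": "2021" against e1's "2020"), the score-based search still returns the closest event, which is the noise tolerance the abstract claims over exact keyword queries.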
- Published
- 2021
44. A prison term prediction model based on fact descriptions by capturing long historical information
- Author
-
Xuetao Mao, Jianwei Zhang, Wei Duan, and Lin Li
- Subjects
Artificial Intelligence ,Computer Networks and Communications ,Computer science ,media_common.quotation_subject ,Prison ,Data science ,Software ,Term (time) ,media_common - Abstract
Legal judgments are always based on the description of the case in the legal document, yet retrieving and understanding large numbers of relevant legal documents is a time-consuming task for legal workers. Legal judgment prediction (LJP) applies artificial intelligence to provide decision support for this work. Prison term prediction (PTP), an important task within LJP, aims to predict the term of penalty using machine learning methods, thus supporting the judgement. Long Short-Term Memory (LSTM) networks are a special type of recurrent neural network (RNN) capable of handling long-term dependencies without suffering from unstable gradients. Mainstream RNN models such as LSTM and GRU can capture long-distance correlations but are slow to train, while traditional CNNs can be trained in parallel but attend mostly to local information; both have shortcomings for case description prediction. This paper proposes a prison term prediction model for legal documents that adds causal dilated convolutions to a general TextCNN, so that the model is not limited to the most important keyword segment but also attends to the text near the key segments and the logical relationships within the passage, thereby improving prediction accuracy on the data set. The causal TextCNN can capture causal logical relationships in the text, especially the relationship between the legal text and the prison term. Since the model consists entirely of convolutions, it can, unlike traditional sequence models such as GRU and LSTM, be trained in parallel, improving training speed while still handling long-term dependencies. Causal convolution thus compensates for the shortcomings of both TextCNN and RNN models, making the causality-based PTP model a good solution to this problem.
In addition, case descriptions are usually longer than typical natural language sentences, and the key information related to the prison term is not limited to local words, so capturing a substantially longer memory is crucial in LJP domains where a long history is required. We therefore propose a causality CNN-based prison term prediction model over fact descriptions, in which the causal TextCNN builds long effective history sizes (i.e., the ability of the network to look very far into the past when making a prediction) using a combination of very deep networks, augmented with residual layers, and dilated convolutions. Experimental results on public data show that the proposed model outperforms several CNN- and RNN-based baselines.
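The "long effective history" comes from stacking causal convolutions with growing dilation: each layer only looks backwards, and the receptive field grows with the sum of the dilations. A toy sketch of both ideas (the kernel weights and dilation schedule here are generic, not the paper's configuration):

```python
def causal_dilated_conv1d(x, kernel, dilation):
    """1-D causal convolution: output at t depends only on x[t], x[t-d],
    x[t-2d], ...  Positions before the sequence start are zero-padded,
    so no future information leaks into the prediction."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation          # current or strictly past index
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

def receptive_field(kernel_size, dilations):
    """Effective history of a stack of dilated causal layers:
    1 + (k - 1) * sum of the dilation factors."""
    return 1 + (kernel_size - 1) * sum(dilations)
```

With kernel size 2 and dilations 1, 2, 4, 8 the stack already sees 16 past tokens; doubling the dilation per layer makes the history grow exponentially with depth, which is why such stacks can cover very long fact descriptions cheaply.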
- Published
- 2021
45. A novel gateway node reconfiguration method of IOT based on hierarchical coding particle swarm optimization
- Author
-
Shu Wei, Jun Shu, and Dajiang He
- Subjects
Hierarchical coding ,Computer Networks and Communications ,business.industry ,Computer science ,Node (networking) ,Control reconfiguration ,Particle swarm optimization ,Gateway (computer program) ,Artificial Intelligence ,Computer Science::Networking and Internet Architecture ,Internet of Things ,business ,Software ,Computer network - Abstract
To overcome the low channel utilization, low transmission success rate and high data transmission delay of current gateway node reconfiguration methods for the Internet of Things (IoT), this paper proposes a novel gateway node reconfiguration method based on hierarchically coded particle swarm optimization. Starting from the IoT network model, the paper analyzes the delay characteristics of the IoT and constructs the objective function for gateway node reconfiguration. The coded particle swarm optimization is improved with a monotone decreasing inertia weight strategy, and the optimized algorithm is used to solve the reconfiguration objective function. Experimental results show that the channel utilization of the proposed method exceeds 90%, the success rate of information transmission exceeds 80%, and the data transmission delay is below 0.5 s, indicating high channel utilization, a high transmission success rate and low data transmission delay.
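The monotone decreasing inertia weight strategy is the classic w(t) = w_max − (w_max − w_min)·t/T: a large weight early for exploration, a small one late for exploitation. A minimal plain-Python PSO sketch with that schedule (the hierarchical coding, objective function and all parameter values of the paper are not reproduced here; the sphere function below stands in as a toy objective):

```python
import random

def pso_minimize(f, dim, iters=300, n=20, w_max=0.9, w_min=0.4,
                 c1=1.5, c2=1.5, lo=-10.0, hi=10.0, seed=1):
    """Particle swarm optimisation with the monotone decreasing inertia
    weight w(t) = w_max - (w_max - w_min) * t / iters."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # monotone decreasing
        for i in range(n):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `f` would be the gateway reconfiguration objective built from the IoT delay model rather than the sphere function used here to exercise the code.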
- Published
- 2021
46. Data integration using statistical matching techniques: A review
- Author
-
Mohamed Ali Ismail, Israa Lewaa, and Mai Sherif Hafez
- Subjects
Economics and Econometrics ,Computer science ,Data mining ,Statistics, Probability and Uncertainty ,computer.software_genre ,computer ,Management Information Systems ,Data integration - Abstract
In the era of the data revolution, the availability of data is a wealth that should be utilized: instead of conducting new surveys, benefit can be drawn from data that already exist. As enormous amounts of data become available, research that integrates data from multiple sources is essential for making the best use of them. Statistical Data Integration (SDI) is the statistical tool for this purpose. SDI can integrate data files that share common units, and it can also merge unrelated files that share no units at all; the appropriate method is determined by the nature of the input data. SDI has two main methods: Record Linkage (RL) and Statistical Matching (SM). SM techniques typically aim to build a complete data file from different sources that do not contain the same units. This paper gives a complete overview of existing SM methods, both classical and recent, providing a unified summary of the various techniques along with their drawbacks. Directions for future research are suggested at the end of the paper.
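One of the classical SM techniques the review covers is distance hot-deck matching: records in file A borrow the variables unique to file B from their nearest neighbour on the shared variables. A minimal sketch (squared Euclidean distance and the toy field names are illustrative choices, not the review's prescription):

```python
def statistical_match(file_a, file_b, common):
    """Distance hot-deck statistical matching: for each record in file A,
    find the nearest record in file B on the shared variables and borrow
    B's unique variables from that donor."""
    def dist(ra, rb):
        return sum((ra[v] - rb[v]) ** 2 for v in common)

    fused = []
    for ra in file_a:
        donor = min(file_b, key=lambda rb: dist(ra, rb))
        merged = dict(ra)                     # keep all of A's variables
        merged.update({k: v for k, v in donor.items() if k not in ra})
        fused.append(merged)
    return fused
```

The fused file then contains every variable from both sources even though no unit appears in both, which is exactly the completeness that distinguishes SM from record linkage.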
- Published
- 2021
47. Centralised management of reference metadata and its application
- Author
-
Tina Steenvoorden and Rudi Seljak
- Subjects
Metadata ,World Wide Web ,Economics and Econometrics ,Computer science ,Statistics, Probability and Uncertainty ,Management Information Systems - Abstract
The Statistical Office of the Republic of Slovenia (SURS) has a long tradition of producing and using standardized reference and quality-related metadata. Most surveys have to collect, analyse and disseminate this kind of information, and much of it is made available to different internal and external users. The workload had become burdensome and inefficient, since the information was scattered across different locations and forms, which significantly reduced its analytical power. SURS therefore started to develop a new, multipurpose application to enable easier and more effective use of the reference metadata produced through the statistical process and to support the evaluation phase of its statistical business model. The paper presents the basic methodological foundations the application builds upon, describes the individual steps in its design, details its functionalities, and points out the main challenges encountered during development.
- Published
- 2021
48. Designing for unpredictable uses: A case study on cargo handling
- Author
-
Pascal Béguin, Barbara de Macedo Passos Oggioni, Tharcisio Cotta Fontainha, Mateus Pereira Abraçado, William Silva Santana de Almeida, and Francisco Duarte
- Subjects
Lifting ,Process management ,Computer science ,Process (engineering) ,Rehabilitation ,Public Health, Environmental and Occupational Health ,Job design ,Phase (combat) ,Task (project management) ,Work (electrical) ,Situated ,Humans ,Design process ,Ergonomics ,Work systems ,Occupational Health - Abstract
BACKGROUND: Activity ergonomics aims to include work variability in the design process so that the various dimensions of use are accounted for in projects. As design evolves with use, understanding the characteristics of use is essential to decipher real working requirements. However, situated design can be pluralistic and may lead to interpretations different from those initially intended. OBJECTIVE: This paper aims to understand the relationship between the design phase of work systems and situated task design in high-uncertainty operations. METHODS: In an ergonomic work analysis, cargo handling operations were observed at offshore platforms, followed by discussions with workers. Two case studies were selected for the intervention process to demonstrate how workers dealt with high-uncertainty tasks on site. RESULTS: Situated task design exhibited three main characteristics: (1) the project emerges from the situation; (2) it has an intentional and original character; and (3) it is situated in time and space to solve local problems. CONCLUSIONS: This combination is the essence of a microproject, a concept proposed in this paper. The design must provide resources not only to execute work but also to redesign the task on site.
- Published
- 2021
49. Secure and efficient WBANs algorithm with authentication mechanism
- Author
-
Karan Singh and Vinay Pathak
- Subjects
Statistics and Probability ,021110 strategic, defence & security studies ,Authentication ,Computer science ,business.industry ,0211 other engineering and technologies ,General Engineering ,020206 networking & telecommunications ,02 engineering and technology ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,business ,Mechanism (sociology) ,Computer network - Abstract
Owing to the rapid growth in sensor and embedded technology, wireless body area networks (WBANs) play a vital role in monitoring the human body and its surrounding environment. They support many healthcare applications and are particularly helpful in pandemic scenarios, since WBANs can minimize human-to-human contact and thus help stop the spread of severe infectious diseases. WBANs have become one of the most innovative health care areas, intriguing many researchers because of their vast future prospects and potential. The data collected by the wireless sensors or nodes are personal, critical, and important because human lives are involved, so maintaining the privacy and accuracy of the data remains a hot area of research, both because attacks change and multiply day by day and for the sake of better performance. A suitable security mechanism is the way to address these issues, and it is essential that the patient's data can be updated regularly: WBANs help deliver truthful reports on each patient's health regularly and individually. This paper proposes an algorithm that outperforms the existing algorithms of previous works while requiring comparatively fewer resources. Only authentic entities can interact with the server, an obligation on both sides that keeps the data safe. Several authentication schemes have been proposed and discussed by different researchers; this paper proposes a Secure and Efficient WBANs Authentication Mechanism (SEAM), a security framework that takes care of both authentication and the security of the transmitted data.
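The abstract does not disclose SEAM's protocol, so the following is only a generic sketch of the kind of lightweight challenge-response handshake such schemes build on: node and server share a key, and the node proves possession of it without ever transmitting the key. All function names here are hypothetical.

```python
import hashlib
import hmac
import os

def server_challenge():
    """Server issues a fresh random nonce for each authentication round."""
    return os.urandom(16)

def node_response(shared_key, challenge, node_id):
    """Sensor node answers with HMAC-SHA256 over (challenge || node_id),
    proving knowledge of the shared key without revealing it."""
    return hmac.new(shared_key, challenge + node_id, hashlib.sha256).digest()

def server_verify(shared_key, challenge, node_id, response):
    """Server recomputes the tag and compares in constant time to avoid
    leaking information through timing differences."""
    expected = hmac.new(shared_key, challenge + node_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because the challenge is fresh per round, a captured response cannot be replayed, which matters for resource-constrained sensor nodes that cannot afford heavier public-key handshakes on every message.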
- Published
- 2021
50. A robust and high capacity data hiding method for JPEG compressed images with SVD-based block selection and advanced error correcting techniques
- Author
-
Kusan Biswas
- Subjects
Statistics and Probability ,Computer science ,General Engineering ,020207 software engineering ,High capacity ,02 engineering and technology ,computer.file_format ,JPEG ,Artificial Intelligence ,Block (telecommunications) ,Information hiding ,Singular value decomposition ,0202 electrical engineering, electronic engineering, information engineering ,Error correcting ,020201 artificial intelligence & image processing ,computer ,Algorithm ,Selection (genetic algorithm) - Abstract
In this paper, we propose a frequency domain data hiding method for JPEG compressed images. The proposed method embeds data in the DCT coefficients of selected 8 × 8 blocks. According to theories of the Human Visual System (HVS), human vision is less sensitive to perturbation of pixel values in the uneven areas of an image. We therefore propose a Singular Value Decomposition based image roughness measure (SVD-IRM) with which we select the coarse 8 × 8 blocks as data embedding destinations. Moreover, to make the embedded data more robust against re-compression attacks and transmission errors over noisy channels, we employ Turbo error correcting codes. The actual data embedding is done using a proposed variant of matrix encoding that embeds three bits by modifying at most one bit in a block of seven carrier features. Experiments validate the performance: the proposed method achieves better payload capacity and visual quality, and is more robust, than some recent state-of-the-art methods in the literature.
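The abstract describes the paper's embedding as a variant of matrix encoding; the classical (1, 7, 3) scheme it varies hides 3 bits in 7 carrier bits by flipping at most one, via Hamming-code syndrome coding. A minimal sketch of that baseline (not the paper's exact variant):

```python
def syndrome(bits):
    """XOR of the 1-based indices of all set carrier bits; for 7 carriers
    this is the 3-bit Hamming syndrome."""
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    return s

def embed(carrier, message):
    """carrier: list of 7 bits; message: integer in 0..7.  Flip at most
    one carrier bit so that the new syndrome equals the message."""
    flip = syndrome(carrier) ^ message
    out = list(carrier)
    if flip:                     # flip == 0 means the syndrome already matches
        out[flip - 1] ^= 1
    return out

def extract(carrier):
    """The receiver recovers the 3 message bits as the syndrome."""
    return syndrome(carrier)
```

Flipping the bit at position p changes the syndrome by XOR p, so choosing p = syndrome ^ message forces the new syndrome to equal the message; on average only 7/8 of blocks need any change at all, which is the embedding-efficiency gain matrix encoding is used for.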
- Published
- 2021