64 results
Search Results
2. Particulate matter prediction using synergy of multiple source urban big data in smart cities
- Author
-
Ali Reza Honarvar and Ashkan Sami
- Subjects
Human-Computer Interaction, Artificial Intelligence, Computer science, Big data, Multiple source, Data science, Computer Vision and Pattern Recognition, Software
- Abstract
At present, the issue of air quality in populated urban areas is recognized as an environmental crisis. Air pollution affects the sustainability of the city. Air quality data are very important for controlling air pollution and protecting humans from its hazards. However, the costs of constructing and maintaining air quality registration infrastructure are very high, and air quality readings recorded at one point cannot be generalized to locations even a few kilometers away. Some gains come from the integration of multiple data sources and can never be achieved through independent single-source processing. Urban organizations in each city independently produce and record data relevant to their own goals and objectives. This creates separate data silos associated with an urban system. These data vary in model and structure, and integrating them provides an appropriate opportunity to discover knowledge that can be useful in urban planning and decision making. This paper aims to show the generality of our previous research, which proposed a novel model to predict Particulate Matter (PM), the main factor of air quality, in regions of cities where air quality sensors are not available, through the integration of urban big data resources, by extending the model and experiments with various configurations for different settings in smart cities. This work extends the evaluation scenarios of the model with the extended dataset of the city of Aarhus, Denmark, and compares the model's performance against various specified baselines. Details of removing the heterogeneity of multiple data sources in the Multiple Data Set Aggregator & Heterogeneity Remover (MDA&HR) and of improving the Train Data Splitter (TDS) part of the model by focusing on finding more similar air quality patterns are also presented. The acceptable accuracy of the results shows the generality of the model.
- Published
- 2021
3. Comparative investigation of machine learning algorithms for detection of epileptic seizures
- Author
-
Ayush Kumar, Kusum Tharani, Neeraj Kumar, Akash Sharma, Karan Dikshit, and Bharat Singh
- Subjects
Computer science, Medical engineering, Machine learning, Biomedical engineering, Human-Computer Interaction, Clinical medicine, Artificial Intelligence, Computer Vision and Pattern Recognition, Neurology & neurosurgery, Software
- Abstract
In modern-day psychiatric analysis, epileptic seizures are considered one of the most dreadful disorders of the human brain, drastically affecting its neurological activity for a short duration of time. Thus, detecting a seizure before its actual occurrence is quintessential to ensuring that the right kind of preventive treatment is given to the patient. The predictive analysis is carried out in the preictal state of the epileptic seizure, the state that commences a couple of minutes before the onset of the seizure. In this paper, the average prediction time is restricted to 23.4 minutes for a total of 23 subjects. The paper compares the accuracy of three predictive models, namely Logistic Regression, Decision Trees and the XGBoost classifier, based on the study of electroencephalogram (EEG) signals, and determines which model has the highest rate of detection of epileptic seizures.
- Published
- 2021
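The comparison methodology described in the abstract above can be sketched as follows. This is an illustrative sketch, not the authors' code: the data is synthetic (random features with a planted linear signal, standing in for extracted EEG features), and scikit-learn's GradientBoostingClassifier is used as a stand-in for XGBoost, which may not be installed.

```python
# Train three classifiers on the same feature matrix and compare test accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                   # stand-in for per-window EEG features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in preictal/interictal label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),  # XGBoost stand-in
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
best = max(scores, key=scores.get)               # model with highest detection rate
```

On real preictal EEG data, the feature matrix would come from windowed signal features rather than random draws, but the model-selection loop is the same.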
4. An intelligent unsupervised technique for fraud detection in health care systems
- Author
-
Aditya Khamparia, Rahul Malik, Sagar Pande, Aman Bhaskar, and Kanksha
- Subjects
Distributed computing, Computer science, Human-Computer Interaction, Artificial Intelligence, Health care, Medicine, Artificial intelligence & image processing, Computer Vision and Pattern Recognition, Medical emergency, Software
- Abstract
Healthcare is an essential part of people's lives, particularly for the elderly population, and should also be economical. Medicare is one particular healthcare plan. Claims fraud is a significant contributor to increased healthcare expenses, though its effect can be lessened by fraud detection. In this paper, an analysis of various machine learning techniques was done to identify Medicare fraud. The isolation forest, an unsupervised machine learning algorithm, improves overall performance by detecting fraud based on outliers. The goal of this paper is to flag probably dishonest providers on the basis of their claims. The obtained results were found to be more promising than those of existing techniques. Around 98.76% accuracy is obtained using the isolation forest algorithm.
- Published
- 2021
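Outlier-based fraud detection with an isolation forest, as described in the abstract above, can be sketched as follows. The data below is synthetic, not Medicare claims: two extreme points stand in for providers with anomalous aggregated claim features.

```python
# Providers whose feature vectors are isolated quickly by random splits
# receive an anomaly label of -1 from the isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_claims = rng.normal(loc=0.0, scale=1.0, size=(300, 2))  # typical providers
outliers = np.array([[8.0, 8.0], [-9.0, 7.5]])                 # planted anomalies
X = np.vstack([normal_claims, outliers])

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = clf.predict(X)          # +1 = inlier, -1 = flagged as potential fraud
flagged = np.where(labels == -1)[0]
```

Being unsupervised, the method needs no labeled fraud cases; the `contamination` parameter sets the expected fraction of anomalies.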
5. Whale optimization algorithm fused with SVM to detect stress in EEG signals
- Author
-
Parul Agarwal, Richa Gupta, and M. Afshar Alam
- Subjects
Optimization algorithm, Whale, Computer science, Pattern recognition, Electroencephalography, Human-Computer Interaction, Stress, Support vector machine, Clinical medicine, Artificial Intelligence, Medicine, Computer Vision and Pattern Recognition, Software
- Abstract
Identifying stress and its level has always been a challenging area for researchers, and a great deal of work on the topic is under way around the world. The authors present a methodology for detecting stress in EEG signals. The electroencephalogram (EEG) is commonly used to acquire brain signal activity. Although other techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), can capture the same activity, we used EEG because it is economical. We used an open-source EEG dataset, with various images serving as the target stressors during signal collection. After feature selection and extraction, a support vector machine (SVM) with a whale optimization algorithm (WOA) in its kernel function is used for classification. WOA is a bio-inspired meta-heuristic algorithm based on the hunting behavior of humpback whales. Using this method, we obtained 91% accuracy in detecting stress. The paper also compares previous work on stress detection with the approach proposed here.
- Published
- 2021
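The whale optimization algorithm's core update rules (shrinking encircling, random search, and spiral update) can be sketched in numpy as below, shown minimizing a simple sphere function. This is a minimal illustration of the metaheuristic only; the paper's integration of WOA inside an SVM kernel for EEG stress detection is not reproduced, and the simplified vector form of the |A| < 1 condition is an assumption of this sketch.

```python
# Minimal WOA: each whale moves toward the best solution (encircling or a
# logarithmic spiral) or explores around a random whale, with coefficient
# `a` decreasing linearly from 2 to 0 over the iterations.
import numpy as np

def woa_minimize(f, dim=2, n_whales=20, iters=200, bound=5.0, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-bound, bound, size=(n_whales, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                  # exploration -> exploitation
        for i in range(n_whales):
            r = rng.random(dim)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):          # encircle the current best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                              # search around a random whale
                    x_rand = X[rng.integers(n_whales)]
                    X[i] = x_rand - A * np.abs(C * x_rand - X[i])
            else:                                  # spiral toward the best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], -bound, bound)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_val = woa_minimize(sphere)
```

In the paper's setting, the objective would instead score SVM classification performance on the EEG features, with WOA searching the kernel parameters.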
6. Sensory motor imagery EEG classification based on non-dyadic wavelets using dynamic weighted majority ensemble classification
- Author
-
Rashmi Agrawal and Poonam Chaudhary
- Subjects
Sensory motor, Computer science, Pattern recognition, EEG classification, Human-Computer Interaction, Wavelet, Artificial Intelligence, Computer Vision and Pattern Recognition, Software
- Abstract
Classification accuracy has become a significant challenge and an important task in sensory motor imagery (SMI) electroencephalogram (EEG) based brain-computer interface (BCI) systems. This paper compares an ensemble classification framework with individual classifiers. The main objective is to reduce the interference of non-stationary and transient information and improve the classification decision in the BCI system. The framework comprises three phases: (1) the EEG signal is first decomposed into triadic frequency bands (low-pass, band-pass and high-pass) to localize the α, β and high-γ frequency bands within the EEG signals; (2) the Common Spatial Pattern (CSP) algorithm is then applied to the frequencies extracted in phase 1 to draw out the important features of the EEG signal; (3) finally, the existing Dynamic Weighted Majority (DWM) ensemble classification algorithm is applied to the features extracted in phase 2 for the final class label decision. J48, Naive Bayes, Support Vector Machine, and K-Nearest Neighbor classifiers are used as base classifiers to make a diverse ensemble. A comparative study between individual classifiers and the ensemble framework is included in the paper. Experimental evaluation and assessment of the performance of the proposed model is done on the publicly available datasets BCI Competition IV dataset IIa and BCI Competition III dataset IVa. The ensemble-based learning method gave the highest accuracy of all. An average sensitivity, specificity, and accuracy of 85.4%, 86.5%, and 85.6% were achieved, with a kappa value of 0.59, using DWM classification.
- Published
- 2021
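The weighted-majority voting at the heart of DWM (phase 3 above) can be sketched as follows. This is a simplified illustration: the base "classifiers" are toy functions rather than the paper's J48/Naive Bayes/SVM/KNN learners, and DWM's dynamic creation of new learners is omitted.

```python
# Each expert carries a weight; a wrong prediction multiplies its weight by
# beta, weights are renormalized, and experts below threshold theta are dropped.
def dwm_stream(experts, stream, beta=0.5, theta=0.05):
    weights = {name: 1.0 for name in experts}
    correct = 0
    for x, y in stream:
        # weighted-majority prediction over the surviving experts
        votes = {}
        for name, w in weights.items():
            pred = experts[name](x)
            votes[pred] = votes.get(pred, 0.0) + w
        y_hat = max(votes, key=votes.get)
        correct += (y_hat == y)
        # penalize every expert that erred on this instance
        for name in list(weights):
            if experts[name](x) != y:
                weights[name] *= beta
        # normalize by the largest weight and drop weak experts
        total = max(weights.values())
        weights = {n: w / total for n, w in weights.items() if w / total >= theta}
    return correct / len(stream), weights

experts = {
    "always_sign": lambda x: int(x > 0),   # matches the true concept
    "always_one": lambda x: 1,
    "always_zero": lambda x: 0,
}
stream = [(x, int(x > 0)) for x in (-2, 3, -1, 4, -5, 6, -7, 8)]
acc, final_weights = dwm_stream(experts, stream)
```

The accurate expert keeps full weight while the constant experts decay geometrically, so the ensemble's decision tracks the reliable learner.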
7. Usability evaluation of component based software system using software metrics
- Author
-
Sanjay Kumar Dubey, Rajdev Tiwari, and Jyoti Agarwal
- Subjects
Computer science, Usability, Software metric, Human-Computer Interaction, Artificial Intelligence, Component (UML), Computer Vision and Pattern Recognition, Software system, Software engineering, Software
- Abstract
Component Based Software Engineering (CBSE) provides a way to create a new Component Based Software System (CBSS) by utilizing existing components. The primary reason for this is to minimize software development time, cost and effort. CBSS also increases component reusability. Due to these advantages, software industries are working on CBSS and continuously trying to provide quality products. Usability is one of the major quality factors for CBSS. It should be measured before delivering the software product to the customer, so that any usability flaws can be removed by the software development team. In this paper, the usability of CBSS is evaluated on the basis of major usability sub-factors (learnability, operability, understandability and configurability). For this purpose, software metrics are first identified for each usability sub-factor, and the value of each sub-factor is evaluated for a component based software project. Secondly, the overall usability of the software project is evaluated using the calculated value of each usability sub-factor. Usability for the same project was also evaluated using a fuzzy approach in MATLAB to validate the experimental work of this research paper. The usability values obtained from the software metrics and from the fuzzy model were found to be very similar. This research work will be useful for software developers to evaluate the usability of any CBSS and will also help them compare different versions of a CBSS in terms of their usability.
- Published
- 2020
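The two-step evaluation described above can be sketched as follows: each usability sub-factor is first scored from its own metrics, then the scores are combined into an overall usability value. The metric values and equal weights below are invented for illustration; the paper derives its values from concrete software metrics and validates the result against a fuzzy model in MATLAB.

```python
# Step 1: score each sub-factor from its normalized metric values (0..1).
# Step 2: combine the sub-factor scores into one overall usability score.
def score_subfactor(metric_values):
    """Score one sub-factor as the mean of its normalized metrics."""
    return sum(metric_values) / len(metric_values)

def overall_usability(subfactor_scores, weights=None):
    names = list(subfactor_scores)
    if weights is None:                      # equal weighting by default
        weights = {n: 1.0 / len(names) for n in names}
    return sum(weights[n] * subfactor_scores[n] for n in names)

scores = {
    "learnability":      score_subfactor([0.8, 0.7]),    # illustrative metrics
    "operability":       score_subfactor([0.6, 0.9]),
    "understandability": score_subfactor([0.75, 0.65]),
    "configurability":   score_subfactor([0.5, 0.7]),
}
usability = overall_usability(scores)
```

Comparing two versions of a CBSS then reduces to comparing their overall usability scores computed from the same metric set.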
8. Design of antenna in Wireless Body Area Network (WBAN) for biotelemetry applications
- Author
-
Sherlin J. Benitta and M. Nesa Sudha
- Subjects
FEKO, Computer science, Electrical engineering, Signal, Directivity, Human-Computer Interaction, Software, Artificial Intelligence, Telemetry, Body area network, Computer Vision and Pattern Recognition, Antenna (radio), Biotelemetry
- Abstract
A biomedical telemetry system transmits biological signals over a long distance, and the antenna plays a vital role in that transmission. Patch antennas are integrated with medical devices and used widely because of their design flexibility and shape. Yagi antennas provide high directivity, while patch antennas are lightweight, easy to carry, and can easily be fabricated on fabric. The combination therefore proves to be an ideal solution for biotelemetry: patients can wear the antenna, which collects bio-signals from the body and sends them to the monitoring station, with sufficient gain and directivity. This paper proposes the design of two Yagi Patch Antennas (YPA) with two different substrates for telemetry purposes. A miniature telemetry antenna model is designed, which captures information from the human body and transmits it to the monitoring station. The biological information is transmitted at a frequency of 2.45 GHz, in the WLAN range. The performance of the antennas is simulated and analyzed using FEKO software.
- Published
- 2016
9. Agent-based customization of a remote conversation support system
- Author
-
Masaya Morita and Kazuhiro Kuwabara
- Subjects
Computer science, Interaction protocol, Personalization, Human-Computer Interaction, World Wide Web, User agent, Artificial Intelligence, Gadget, Web application, The Internet, Conversation, Computer Vision and Pattern Recognition, Protocol (object-oriented programming), Software
- Abstract
This paper proposes an agent-based approach to customizing a remote conversation support system for people with cognitive handicaps such as aphasia. This remote conversation support system is designed as a web application to assist with conversation over the Internet. The system is built from 'gadgets', each of which implements a particular conversation support function. Since the need for conversation support varies from person to person, such a system needs to be customized to suit the requirements of multiple users who conduct the conversation. The proposed approach introduces a user agent that corresponds to a human user. The negotiation protocol for selecting the necessary gadgets is defined based on the FIPA interaction protocols. This paper describes a gadget-based remote conversation support system, and how the proposed protocol can be used to determine the gadgets to be used.
- Published
- 2013
10. Virtual Whiteboard: A gesture-controlled pen-free tool emulating school whiteboard
- Author
-
Andrzej Czyzewski, Michał Lech, and Bozena Kostek
- Subjects
Focus (computing), Whiteboard, Computer science, Process (computing), Human-Computer Interaction, Software, Projector, Artificial Intelligence, Gesture recognition, Interactive whiteboard, Computer graphics (images), Computer Vision and Pattern Recognition, Gesture
- Abstract
In this paper the so-called Virtual Whiteboard is presented, which may be an alternative to modern electronic whiteboards based on electronic pens and sensors. The presented tool enables the user to write, draw and handle whiteboard contents using his/her hands only. Additional equipment such as infrared diodes, infrared cameras or cyber gloves is not needed. The user's interaction with the Virtual Whiteboard computer application is based on dynamic hand gesture recognition. Gestures are recognized by analyzing the video stream obtained from a webcam coupled with a multimedia projector displaying the whiteboard contents. Tracking of hand positions in the image is supported by Kalman filtering. The paper presents the hardware and software of the Virtual Whiteboard, with a special focus on utilizing Kalman filters for predicting consecutive hand positions. For the gestures used to handle whiteboard contents, the recognition efficacy both with and without Kalman filtering is reported. In addition, the results of system efficiency tests are provided.
- Published
- 2012
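The Kalman-filter prediction of consecutive hand positions described above can be sketched with a constant-velocity model: the state is (x, y, vx, vy) and only position is measured. The noise covariances below are illustrative defaults, not the values tuned for the Virtual Whiteboard.

```python
# Constant-velocity Kalman filter: predict with F, correct with position
# measurements through H, then use F @ x as the one-step-ahead prediction.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state transition: constant velocity
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process noise (illustrative)
R = 0.5 * np.eye(2)               # measurement noise (illustrative)

x = np.zeros(4)                   # initial state estimate
P = np.eye(4)                     # initial covariance

def kalman_step(x, P, z):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
true_pos = lambda t: np.array([2.0 * t, 1.0 * t])    # hand moving diagonally
for t in range(1, 30):
    z = true_pos(t) + rng.normal(scale=0.3, size=2)  # noisy hand detection
    x, P = kalman_step(x, P, z)
predicted_next = (F @ x)[:2]      # one-step-ahead position prediction
```

In the whiteboard system, the predicted position smooths the drawn stroke and bridges frames where the hand detector momentarily fails.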
11. Simulation of autonomous crowd behaviour on Xbox 360
- Author
-
Matthew Brittain and Minhua Ma
- Subjects
Computer science, Autonomous agent, Bottleneck, Human-Computer Interaction, Crowds, Artificial Intelligence, Embedded system, Component (UML), Character animation, Computer Vision and Pattern Recognition, Crowd simulation, Game Developer, Representation (mathematics), Software
- Abstract
A crowd simulator which creates autonomous characters' behaviour in crowds consists of many components, such as pathfinding, collision avoidance, character creation, the behaviour system, and level of detail. The majority of these involve different levels of decision making in order to simulate autonomous agents' behaviour, and some components have several alternative algorithms that can be adopted. For a simulator with a large number of autonomous agents, these components need to be efficient to contribute to a faster and cheaper game environment; otherwise bottlenecks may occur, which can lead to a poor representation. In this paper we investigate these areas, discuss and compare existing approaches for each component, and select the best combination on Xbox 360 through a series of experiments on our crowd simulator within the Microsoft XNA framework. We used the Xbox 360 console for accurate testing that is not affected by other processes running in the background, and we optimised the application to overcome bottleneck issues. Our simulator is able to handle a large number of autonomous agents at a healthy frame rate of 60 FPS. Based on our implementation and testing results, some recommendations are provided in this paper, which will be useful for independent game developers who create games containing autonomous crowds for Xbox 360 using the XNA framework.
- Published
- 2011
12. Visual object tracking system employing fixed and PTZ cameras
- Author
-
Grzegorz Szwoch, Piotr Dalka, Andrzej Ciarkowski, Piotr Szczuko, and Andrzej Czyzewski
- Subjects
Background subtraction, Computer science, Event (computing), Event type, Tracking, Object (computer science), Human-Computer Interaction, Artificial Intelligence, Video tracking, Computer graphics (images), Computer vision, Computer Vision and Pattern Recognition, Video monitoring, Software, Camera resectioning
- Abstract
The paper presents a video monitoring system utilizing fixed and PTZ cameras for tracking moving objects. The first type of camera provides images for background modelling, which is employed for localizing foreground objects. The estimated object locations are then used to steer the PTZ cameras when observing targeted objects in high close-up. Objects are classified into several classes, and then basic event detection is performed. The event type, object localisation and images acquired by the cameras are presented visually in a "live map" system. The paper presents details related to the detection of moving objects. Next, the camera calibration procedure and the processing of geopositioning coordinates are discussed, and event detection is described. Finally, an experiment organised to verify the accuracy of the camera tracking system is presented.
- Published
- 2011
13. Challenging computer software frontiers and the human resistance to change
- Author
-
Jens G. Pohl
- Subjects
Knowledge management, Computer science, Semantic search, Software development, Database-centric architecture, Human-Computer Interaction, Hierarchical temporal memory, Software, Artificial Intelligence, Software agent, Application domain, Computer Vision and Pattern Recognition, Software system
- Abstract
This paper examines the driving and opposing forces that are governing the current paradigm shift from a data-processing information technology environment without software intelligence to an information-centric environment in which data changes are automatically interpreted within the context of the application domain. The driving forces are related to the large quantity of data and the complexity of networked systems that both call for software intelligence. The opposing forces are non-technical and due to the natural human resistance to change. Based on this background the paper describes current information-centric technology, proposes a vision of intelligent software system capabilities, and identifies four areas of necessary research. Most urgent among these are the ability to dynamically extend and merge ontologies and semantic search capabilities that can be initiated either by human users or software agents. Longer term research interests that pose a more severe challenge are related to the translation of emerging theoretical hierarchical temporal memory (HTM) concepts into usable software capabilities and the automated interpretation of graphical images such as those recorded by surveillance video cameras.
- Published
- 2010
14. Towards a portable intelligent facial expression recognizer
- Author
-
Yok-Yen Nguwi, Siu-Yeung Cho, and Teik-Toe Teoh
- Subjects
Facial expression, Face hallucination, Computer science, Speech recognition, Pattern recognition, Feature selection, Human-Computer Interaction, Facial muscles, Artificial Intelligence, Face (geometry), Medicine, Three-dimensional face recognition, Computer Vision and Pattern Recognition, Software
- Abstract
Facial expression recognition is a challenging task. A facial expression is formed by contracting or relaxing different facial muscles on the human face, which results in temporally deformed facial features such as a wide-open mouth or raised eyebrows. Such a system presents challenges. For instance, lighting conditions are very difficult to constrain and regulate. On the other hand, real-time processing is also a challenging problem, since there are many facial features to be extracted and processed, and sometimes conventional classifiers are not effective at handling those features and producing good classification performance. This paper discusses how advanced feature selection techniques, together with good classifiers, can play a vital role in real-time facial expression recognition. The paper aims to open up a discussion about building a real-time system to read and respond to people's emotions from their facial expressions.
- Published
- 2009
15. Reasoning about conceptual interoperability of simulations using meta-level graph relations
- Author
-
Levent Yilmaz
- Subjects
Computer science, Interoperability, Graph theory, Reuse, Formal methods, Metamodeling, Human-Computer Interaction, Artificial Intelligence, Composability, Application domain, Systems engineering, Graph (abstract data type), Computer Vision and Pattern Recognition, Software
- Abstract
Engineering of large and complex simulation systems is becoming more reliant on the reuse of existing simulation models. While existing technical standards facilitate syntactic and technical interoperability among disparate simulation models, there is still a lack of formal methods that enable sound reasoning about the conceptual congruity of models selected for composition. This paper suggests a graph-theoretic approach to measure the extent of conceptual congruity of models within a new context. The premise of the approach is based on having contextualized models that provide introspective access to their metamodels. A metamodel associated with a reusable model entails a conceptualization of the domain in which the model was originally designed to be situated. The metamodels are used to instantiate a metagraph, and graph distance metrics are used to measure the alignment of metamodels in the context of the new application domain. The paper also presents a strategy for packaging and distributing such metamodels with implemented models to facilitate practical application of the proposed method.
- Published
- 2008
16. Nutritional biomarkers and machine learning for personalized nutrition applications and health optimization
- Author
-
George A. Tsihrintzis, Dimitrios P. Panagoulias, and Dionisios N. Sotiropoulos
- Subjects
Nutritional biomarkers, Artificial neural network, Computer science, Disease, Machine learning, Human-Computer Interaction, Metabolomics, Artificial Intelligence, Personalized medicine, Computer Vision and Pattern Recognition, Software
- Abstract
The doctrine of the “one size fits all” approach has been overcome in the field of disease diagnosis and patient management and has been replaced by a more per-patient approach known as “personalized medicine”. Biomarkers are the key variables in the research and development of new methods for training prognostic models and neural networks in the scientific fields of machine learning and artificial intelligence [1], [2]. Important biomarkers related to metabolism are the metabolites. Metabolomics refers to the systematic study of the unique chemical fingerprints that are left behind by specific cellular processes. The metabolic profile can provide a snapshot of cell physiology and, by extension, metabolomics provides a direct “functional reading of the physiological state” of an organism. The goal of this paper is to employ current machine learning methodologies, specifically neural networks, to formulate a general evaluation chart of the nutritional biomarkers, to investigate how best to predict body mass index, and to discover dietary patterns.
- Published
- 2022
17. Topic-BERT: Detecting harmful information from social media
- Author
-
Deng Hongtao, Wang Gao, Xun Zhu, and Yuan Fang
- Subjects
Human-Computer Interaction, Artificial Intelligence, Computer science, Internet privacy, Social media, Computer Vision and Pattern Recognition, Software
- Abstract
Harmful information identification is a critical research topic in natural language processing. Existing approaches have focused either on rule-based methods or on harmful text identification in normal documents. In this paper, we propose a BERT-based model, called Topic-BERT, to identify harmful information from social media. Firstly, Topic-BERT utilizes BERT to take additional information as input to alleviate the sparseness of short texts, with the GPU-DMM topic model used to capture hidden topics of short texts for attention weight calculation. Secondly, the proposed model divides harmful short text identification into two stages, and labels of different granularity are identified by two similar sub-models. Finally, we conduct extensive experiments on a real-world social media dataset to evaluate our model. Experimental results demonstrate that our model can significantly improve classification performance compared with baseline methods.
- Published
- 2021
18. A novel genetic algorithm for curriculum sequence optimization
- Author
-
Mahnane Lamia, Ouissem Benmesbah, and Mohamed Hafidi
- Subjects
Human-Computer Interaction, Artificial Intelligence, Computer science, Sequence optimization, Genetic algorithm, Computer Vision and Pattern Recognition, Curriculum, Software
- Abstract
A curriculum sequence represents a match between learners’ preferences, needs, and surroundings on one side, and the learning content characteristics and the pedagogical requirements on the other. The curriculum sequence adaptation (CSA) problem is considered an important issue in the field of adaptive and personalized learning. It concerns the dynamic generation of a personal optimal learning path for a specific learner. The problem has gained increased research interest in the last decade, and heuristics and meta-heuristics are usually used to solve it. In this direction, this paper summarizes existing works and presents a novel GA-based approach, modeled as an objective optimization problem, to deal with this problem. The experimental results from simulations showed that the proposed GA could outperform particle swarm optimization (PSO) and a random search approach on many simulated datasets. Moreover, from a pedagogical perspective, positive learner feedback and high acceptance of the proposed approach are indicated.
- Published
- 2021
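A GA for sequence optimization like the one described above can be sketched as follows: candidate solutions are permutations of learning objects, and the fitness penalizes large jumps in difficulty between consecutive objects. The objective, operators and parameters below are illustrative assumptions; the paper's objective combines learner preferences with pedagogical requirements.

```python
# Permutation GA with elitist truncation selection, order crossover (OX)
# and swap mutation; lower cost = smoother difficulty progression.
import random

difficulty = [3, 1, 4, 1, 5, 9, 2, 6]          # one illustrative attribute
def cost(order):                               # total adjacent difficulty jump
    return sum(abs(difficulty[a] - difficulty[b])
               for a, b in zip(order, order[1:]))

def order_crossover(p1, p2, rng):
    i, j = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]                       # keep a slice of parent 1
    fill = [g for g in p2 if g not in child[i:j]]
    for k in range(len(p1)):                   # fill the rest in parent-2 order
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga_sequence(pop_size=40, gens=150, seed=7):
    rng = random.Random(seed)
    n = len(difficulty)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            c = order_crossover(rng.choice(survivors), rng.choice(survivors), rng)
            if rng.random() < 0.2:             # swap mutation
                a, b = rng.sample(range(n), 2)
                c[a], c[b] = c[b], c[a]
            children.append(c)
        pop = survivors + children
    return min(pop, key=cost)

best_order = ga_sequence()
best_cost = cost(best_order)
```

With this single-attribute cost, the optimum is the difficulty-sorted order; real CSA fitness functions score multiple learner and content attributes at once.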
19. On the integration of Machine Learning algorithms and Operations Research techniques in the development of a hybrid Recommender System
- Author
-
Nikos Karacapilidis, Georgios Kournetas, and Panagiotis Giannopoulos
- Subjects
Human-Computer Interaction, Artificial Intelligence, Computer science, Computer Vision and Pattern Recognition, Recommender system, Machine learning, Software
- Abstract
Recommender systems are a highly applicable subclass of information filtering systems, aiming to provide users with personalized item suggestions. These systems build on collaborative filtering and content-based methods to overcome the information overload issue. Hybrid recommender systems combine the abovementioned methods and are generally proved to be more efficient than the classical approaches. In this paper, we propose a novel approach to the development of a hybrid recommender system that is able to make recommendations under the limitation of processing small amounts of data with strong intercorrelation. The proposed hybrid solution integrates Machine Learning and Multi-Criteria Decision Analysis algorithms. The experimental evaluation of the proposed solution indicates that it performs better than widely used Machine Learning algorithms such as k-Nearest Neighbors and Decision Trees.
- Published
- 2021
20. Tomato pest classification using deep convolutional neural network with transfer learning, fine tuning and scratch learning
- Author
-
K. Parvathi, Gayatri Pattnaik, and Vimal K. Shrivastava
- Subjects
Fine-tuning, Computer science, Convolutional neural network, Human-Computer Interaction, Artificial Intelligence, Scratch learning, Computer Vision and Pattern Recognition, Transfer of learning, Software
- Abstract
Pests are a major threat to the economic growth of a country. Application of pesticide is the easiest way to control pest infestation. However, excessive utilization of pesticide is hazardous to the environment. The recent advances in deep learning have paved the way for early detection and improved classification of pests in tomato plants, which will benefit the farmers. This paper presents a comprehensive analysis of 11 state-of-the-art deep convolutional neural network (CNN) models in three configurations: transfer learning, fine-tuning and scratch learning. Training in transfer learning and fine-tuning starts from pre-trained weights, whereas random weights are used in scratch learning. In addition, the concept of data augmentation has been explored to improve performance. Our dataset consists of 859 tomato pest images from 10 categories. The results demonstrate that the highest classification accuracy, 94.87%, was achieved in the transfer learning approach by the DenseNet201 model with data augmentation.
- Published
- 2021
21. Uncertainty query sampling strategies for active learning of named entity recognition task
- Author
-
Manu Vardhan, Sarsij Tripathi, and Ankit Agrawal
- Subjects
0209 industrial biotechnology ,Active learning (machine learning) ,Computer science ,business.industry ,Sampling (statistics) ,02 engineering and technology ,computer.software_genre ,Task (project management) ,Human-Computer Interaction ,020901 industrial engineering & automation ,Named-entity recognition ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Software ,Natural language processing - Abstract
Active learning is a well-known approach for labeling huge un-annotated datasets with minimal effort and at low cost. It iteratively selects the most informative instances and adds them to the training set, so that the learner's performance improves with each iteration. Named entity recognition (NER) is a key task in information extraction, in which entities present in sequences are labeled with the correct class. Traditional query sampling strategies for active learning consider only the final probability value of the model when selecting the most informative instances. In this paper, we propose a new active learning algorithm based on a hybrid query sampling strategy that also considers sentence similarity along with the final probability value of the model, and compare it with four other well-known pool-based uncertainty query sampling strategies for NER: least confident sampling, margin of confidence sampling, ratio of confidence sampling and entropy query sampling. The experiments have been performed on three biomedical NER datasets from different domains and a Spanish-language NER dataset. We found that all of the above approaches reach the performance of the supervised learning approach while requiring much less annotated training data. The proposed active learning algorithm performs well and, in most cases, further reduces the annotation cost compared to the other sampling-strategy-based active learning algorithms.
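The four pool-based uncertainty query sampling strategies named above can be sketched directly from a model's predicted class probabilities. This is a generic illustration, not the paper's hybrid sentence-similarity strategy; the sentence pool and its probability vectors are hypothetical.

```python
import math

def least_confident(probs):
    # Uncertainty = 1 - probability of the most likely label.
    return 1.0 - max(probs)

def margin_of_confidence(probs):
    # Small margin between the top-2 labels means high uncertainty; invert it.
    top2 = sorted(probs, reverse=True)[:2]
    return 1.0 - (top2[0] - top2[1])

def ratio_of_confidence(probs):
    # Ratio of second-best to best probability; closer to 1 = more uncertain.
    top2 = sorted(probs, reverse=True)[:2]
    return top2[1] / top2[0]

def entropy(probs):
    # Shannon entropy of the predicted class distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(pool, strategy):
    # Query the unlabeled instance whose predictions maximize the uncertainty score.
    return max(pool, key=lambda item: strategy(item[1]))

pool = [("sent_a", [0.9, 0.05, 0.05]),
        ("sent_b", [0.4, 0.35, 0.25]),
        ("sent_c", [0.6, 0.3, 0.1])]
picked = select_most_informative(pool, entropy)
```

All four strategies agree on this toy pool (`sent_b` has the flattest distribution), but they can rank instances differently in general, which is why the paper compares them separately.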
- Published
- 2021
22. Optimization of software cost estimation model based on biogeography-based optimization algorithm
- Author
-
Jun Long, Jinfang Sheng, Aman Ullah, Bin Wang, Muhammad Asim, and Zejun Sun
- Subjects
010302 applied physics ,Mathematical optimization ,Cost estimate ,Computer science ,business.industry ,02 engineering and technology ,021001 nanoscience & nanotechnology ,01 natural sciences ,Biogeography-based optimization ,Human-Computer Interaction ,Software ,Artificial Intelligence ,0103 physical sciences ,Computer Vision and Pattern Recognition ,0210 nano-technology ,business - Abstract
Estimation of software cost (ESC) is considered a crucial task in the software management life cycle, alongside time and quality. Prior to the development of a software project, precise estimates of person-months and time are required. Over the last few decades, various parametric and non-parametric (non-algorithmic) approaches to software cost estimation have been developed. Among them, the Constructive Cost Model (COCOMO-II) is a commonly used method for estimating software cost. To further improve the accuracy of this model, researchers and practitioners have applied numerous computational intelligence algorithms to optimize its parameters; however, accuracy remains a significant problem. In this paper, we propose a biogeography-based optimization (BBO) method to optimize the coefficients of COCOMO-II for better estimation of software project cost or effort. The experiments are conducted on two standard datasets: NASA-93 and Turkish Industry software projects. The performance of the proposed algorithm, called BBO-COCOMO-II, is evaluated using performance indicators including the Manhattan distance (MD) and the mean magnitude of relative error (MMRE). Simulation results reveal that the proposed algorithm achieves high accuracy and significant error reduction compared to the original COCOMO-II, particle swarm optimization, the genetic algorithm, the flower pollination algorithm, and various other baseline cost estimation models.
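For orientation, here is a minimal sketch of the basic COCOMO-style effort equation and the MMRE indicator mentioned above. The coefficients and project sizes below are hypothetical stand-ins, not BBO-tuned values, and the full COCOMO-II effort multipliers and scale factors are omitted.

```python
def cocomo_effort(kloc, a, b):
    # Basic COCOMO-style effort in person-months: effort = a * size^b.
    # COCOMO-II additionally multiplies in effort multipliers (omitted here).
    return a * (kloc ** b)

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: mean(|actual - predicted| / actual).
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical projects (size in KLOC) with hypothetical actual efforts.
projects_kloc = [10.0, 25.0, 60.0]
actual_effort = [24.0, 70.0, 180.0]
# Hypothetical coefficients standing in for values an optimizer would tune.
predicted = [cocomo_effort(k, 2.5, 1.05) for k in projects_kloc]
error = mmre(actual_effort, predicted)
```

An optimizer such as BBO would search over (a, b) (and, in the full model, the other coefficients) to minimize an indicator like `mmre` on a historical dataset.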
- Published
- 2021
23. Design of search and rescue system using autonomous Multi-UAVs
- Author
-
Kheireddine Choutri, Lagha Mohand, and Laurent Dala
- Subjects
021110 strategic, defence & security studies ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,02 engineering and technology ,Human-Computer Interaction ,Software ,Artificial Intelligence ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,business ,Search and rescue - Abstract
Over the last several decades, advances in UAV technology have formed an important part of recent research. Nowadays, these flying robots are a great aid, and even replace humans, in many types of critical activities such as surveillance, fire protection and search & rescue (SAR). In this paper, we study the design of a SAR system using autonomous quadrotor UAVs. The developed system attempts to maximize the probability of target detection and minimize the expected search time while also minimizing the number of UAVs required. It is also adapted to counter UAV failures by re-configuring the UAVs to optimally continue the mission. Furthermore, the SAR system algorithms are designed to accomplish the mission with a minimum amount of control information passed between UAVs. Finally, a case study of survivor search and rescue operations is provided, along with the obtained results, which confirm the efficiency of the designed system.
- Published
- 2021
24. SeMBlock: A semantic-aware meta-blocking approach for entity resolution
- Author
-
Gerhard Weiss, Delaram Javdani, Hossein Rahmani, RS: FSE DACS, and Dept. of Advanced Computing Sciences
- Subjects
semantic similarity ,Word embedding ,Computer science ,business.industry ,Blocking (radio) ,big data integration ,Pattern recognition ,meta-blocking ,Data matching ,word embedding ,Locality-sensitive hashing ,Human-Computer Interaction ,Semantic similarity ,entity resolution ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,Artificial intelligence ,locality-sensitive hashing ,ALGORITHM ,business ,Software - Abstract
Entity resolution refers to the process of identifying, matching, and integrating records belonging to unique entities in a data set. However, a comprehensive comparison across all pairs of records leads to quadratic matching complexity. Therefore, blocking methods are used to group similar entities into small blocks before the matching. Available blocking methods typically do not consider semantic relationships among records. In this paper, we propose a Semantic-aware Meta-Blocking approach called SeMBlock. SeMBlock considers the semantic similarity of records by applying locality-sensitive hashing (LSH) based on word embedding to achieve fast and reliable blocking in a large-scale data environment. To improve the quality of the blocks created, SeMBlock builds a weighted graph of semantically similar records and prunes the graph edges. We extensively compare SeMBlock with 16 existing blocking methods, using three real-world data sets. The experimental results show that SeMBlock significantly outperforms all 16 methods with respect to two relevant measures, F-measure and pair-quality measure. The F-measure and pair-quality measure of SeMBlock are approximately 7% and 27% higher, respectively, than those of recently released blocking methods.
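A minimal sketch of random-hyperplane LSH used as a blocking key, under the assumption that each record already has an embedding vector. The toy records, vector dimensionality and seed are hypothetical, and SeMBlock's weighted-graph pruning step is not shown.

```python
import random
from collections import defaultdict

def lsh_signature(vector, hyperplanes):
    # The sign of the dot product with each random hyperplane yields one bit;
    # similar vectors tend to fall on the same side of most hyperplanes.
    bits = []
    for plane in hyperplanes:
        dot = sum(v * p for v, p in zip(vector, plane))
        bits.append("1" if dot >= 0 else "0")
    return "".join(bits)

def block_records(records, num_bits=4, dim=3, seed=42):
    # Group records whose (toy) embedding vectors hash to the same signature.
    rng = random.Random(seed)
    hyperplanes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_bits)]
    blocks = defaultdict(list)
    for name, vec in records:
        blocks[lsh_signature(vec, hyperplanes)].append(name)
    return dict(blocks)

# Hypothetical records with toy 3-dimensional embeddings.
records = [("r1", [1.0, 0.9, 1.1]),
           ("r2", [1.0, 1.0, 1.0]),
           ("r3", [-1.0, -1.0, -0.9])]
blocks = block_records(records)
```

Only records that share a block are then passed to the expensive pairwise matcher, which is what reduces the quadratic comparison cost.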
- Published
- 2021
25. Hybrid adapted fast correlation FCBF-support vector machine recursive feature elimination for feature selection
- Author
-
Souad Guessoum, Hayet Djellali, and Nacira Ghoualmi-Zine
- Subjects
business.industry ,Computer science ,Pattern recognition ,Feature selection ,02 engineering and technology ,Human-Computer Interaction ,Correlation ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Feature (computer vision) ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software - Abstract
This paper investigates feature selection methods based on a hybrid architecture using a feature selection algorithm called Adapted Fast Correlation-Based Feature selection with Support Vector Machine Recursive Feature Elimination (AFCBF-SVMRFE). AFCBF-SVMRFE has three stages and combines the SVMRFE embedded method with correlation-based feature selection: the first stage is relevance analysis, the second is redundancy analysis, and the third is performance evaluation and feature restoration. Experiments show that the proposed method, tested with different classifiers (Support Vector Machine (SVM) and K-nearest neighbors (KNN)), provides the best accuracy on various datasets. The SVM classifier outperforms the KNN classifier on these data. AFCBF-SVMRFE outperforms the FCBF multivariate filter, SVMRFE, particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm.
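FCBF-style filters rank features by symmetric uncertainty, SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)). Below is a minimal sketch of that score on toy discrete data; the adapted AFCBF variant's three stages are not reproduced here.

```python
import math
from collections import Counter

def entropy(values):
    # Shannon entropy (base 2) of a discrete sequence.
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetric_uncertainty(x, y):
    # SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), the relevance score used by FCBF;
    # 1.0 means X fully determines Y, 0.0 means they are independent.
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    info_gain = hx + hy - hxy        # mutual information I(X; Y)
    return 2 * info_gain / (hx + hy) if hx + hy else 0.0

feature = [0, 0, 1, 1]
label_correlated = [0, 0, 1, 1]   # perfectly predictive feature
label_random     = [0, 1, 0, 1]   # independent of the feature
```

Relevance analysis keeps features with high SU against the class; redundancy analysis then drops features with high SU against an already-selected feature.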
- Published
- 2020
26. Forecasting stock price index movement using a constrained deep neural network training algorithm
- Author
-
Panayiotis E. Pintelas, Stavros Stavroyiannis, Theodore Kotsilieris, and Ioannis E. Livieris
- Subjects
Artificial neural network ,business.industry ,Computer science ,Movement (music) ,Training (meteorology) ,020207 software engineering ,02 engineering and technology ,Human-Computer Interaction ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Stock price index ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software - Abstract
The prediction of stock index movement is considered a rather significant objective in the financial world, since a reasonably accurate prediction offers the possibility of gaining profit in the stock exchange, yielding high financial benefits and hedging against market risks. Undoubtedly, the area of financial analysis has changed dramatically from a rather qualitative science to a more quantitative one that is also based on knowledge extraction from databases. In recent years, deep learning has constituted a significant prediction tool for analyzing and exploiting the knowledge acquired from financial data. In this paper, we propose a new Deep Neural Network (DNN) prediction model for forecasting stock exchange index movement. The proposed DNN is characterized by the application of conditions on the weights, in the form of box constraints, during the training process. The motivation for placing these constraints is to define the weights of the trained network in a more uniform way, restricting them from taking large values so that all inputs and neurons of the DNN are efficiently exploited and explored. The training of the new DNN model is performed by a Weight-Constrained Deep Neural Network (WCDNN) training algorithm which exploits the numerical efficiency and very low memory requirements of L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) matrices together with a gradient-projection strategy for handling the bounds on the weights of the network. The performance evaluation, carried out on three popular stock exchange indices, demonstrates the classification efficiency of the proposed algorithm.
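The box-constraint idea can be sketched as gradient projection: take an update step, then clip each weight back into its bounds. The sketch below uses a plain SGD step for simplicity, whereas the WCDNN algorithm pairs the projection with L-BFGS; the bounds, weights and gradients are hypothetical.

```python
def project_to_box(weights, lower, upper):
    # Gradient-projection step: clip each weight back into [lower, upper].
    return [min(max(w, lower), upper) for w in weights]

def constrained_update(weights, gradients, lr, lower=-1.0, upper=1.0):
    # A plain gradient step followed by projection onto the box constraints.
    # (The paper uses L-BFGS search directions instead of this SGD step.)
    stepped = [w - lr * g for w, g in zip(weights, gradients)]
    return project_to_box(stepped, lower, upper)

w = [0.5, -0.95, 0.99]
g = [-10.0, 2.0, -5.0]
w_new = constrained_update(w, g, lr=0.1)
```

Note how the large gradient components would push the first and third weights well outside the box, and the projection pulls them back to the bound, which is exactly the mechanism that keeps the trained weights uniformly small.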
- Published
- 2020
27. Special issue on the Design of Intelligent Environment
- Author
-
Toyohide Watanabe and Lakhmi C. Jain
- Subjects
business.industry ,Computer science ,Intelligent decision support system ,Information technology ,Collaborative learning ,USable ,computer.software_genre ,Data science ,Human-Computer Interaction ,Intelligent agent ,Artificial Intelligence ,Information system ,Intelligent environment ,Computer Vision and Pattern Recognition ,Agent architecture ,business ,computer ,Software - Abstract
When we analyse behaviour from the viewpoint of information technologies, computer-supported systems may be mainly divided into two types of functional mechanisms: processing means and communication means. These two systems sort our activities from the view of function and mechanism. This special issue focuses on the correspondence between persons and computers or information systems from the viewpoint of service support. It also investigates the use of information technologies in a wide range of application fields, in terms of the current state of the art. This volume contains eight research papers dealing with theory and applications in a computer-supported behaviour environment. The eight papers may be divided into four groups: Computer Support for Personal Activities (the first two papers), Computer Support for an E-learning Environment (the next two papers), Computer Support for Information Networks (a successive paper), and Computer Support for Social Simulation (the remaining three papers). The first paper, by Watanabe and others, focuses on extracting the information exchanged in a collaborative learning process; it provides knowledge for the solution of the subjects discussed and organizes the knowledge structure into a usable form.
- Published
- 2010
28. An ELM based multi-agent system and its applications to power generation
- Author
-
Hwa Jen Yap, Ungku Anisa Ungku Amirulddin, Shing Chiang Tan, Shen Yuong Wong, Chong Tak Yaw, and Keem Siah Yap
- Subjects
0209 industrial biotechnology ,Computer science ,media_common.quotation_subject ,02 engineering and technology ,Machine learning ,computer.software_genre ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Trust management (information system) ,Pima indians ,media_common ,Extreme learning machine ,Artificial neural network ,business.industry ,Multi-agent system ,Human-Computer Interaction ,Electricity generation ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Computer Vision and Pattern Recognition ,business ,computer ,Software ,Reputation - Abstract
This paper presents an implementation of the Extreme Learning Machine (ELM) in a Multi-Agent System (MAS). The proposed method is a trust measurement approach, namely Certified Belief in Strength (CBS) for Extreme Learning Machines in Multi-Agent Systems (ELM-MAS-CBS). CBS is applied to the individual agents of the MAS, i.e., ELM neural networks. The trust measurement is introduced to compute the reputation and strength of the individual agents. Strong elements related to the ELM agents are assembled to form the trust management scheme, which allows the CBS method to improve performance in the MAS. The efficacy of the ELM-MAS-CBS model is verified with several activation functions on benchmark datasets (i.e., Pima Indians Diabetes, Iris and Wine) and real-world applications (i.e., circulating water systems and governors). The results show that the proposed ELM-MAS-CBS model achieves better accuracy compared with other approaches.
- Published
- 2018
29. Stochastic optimal controller design for medium access constrained networked control systems with unknown dynamics
- Author
-
Luigi Glielmo, B. Subathra, Sreram Balasubramaniyan, Seshadhri Srinivasan, Valentina Emilia Balas, Hamed Kebraei, Balasubramaniyan, Sreram, Srinivasan, Seshadhri, Kebraei, Hamed, Subathra, B., Balas, Valentina Emilia, and Glielmo, Luigi
- Subjects
0209 industrial biotechnology ,Computer science ,Q-learning ,02 engineering and technology ,Networked control systems (NCSs) ,020901 industrial engineering & automation ,Software ,Control theory ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Markov Decision Process (MDP) ,Controller design ,business.industry ,Stochastic optimal controller ,Control engineering ,Networked control system ,Human-Computer Interaction ,Constraint ,Dynamics (music) ,Control system ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Medium acce ,business - Abstract
This paper proposes a stochastic optimal controller for networked control systems (NCS) with unknown dynamics and medium access constraints. The medium access constraint of the NCS is modelled as a Markov Decision Process (MDP) that switches modes depending on the channel access to the actuators. We then show that, using the MDP assumption, the NCS with a medium access constraint can be modelled as a Markovian jump linear system. A stochastic optimal controller is then proposed that minimizes the quadratic cost function using a Q-learning algorithm. The resulting control algorithm simultaneously optimizes the quadratic cost function and allocates the network bandwidth judiciously by designing a scheduler. Two compensation strategies, transmitting zero and zero-order hold, for control inputs that fail to get access to the channel are studied. The proposed controller and scheduler are illustrated using experiments on networks and simulations of an industrial four-tank system. The advantage of the proposed approach is that the optimal controller and scheduler can be designed forward-in-time for NCS with unknown dynamics. This is a departure from traditional dynamic programming based approaches, which assume complete knowledge of the NCS dynamics and network constraints beforehand to solve the optimal control problem backward-in-time.
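A minimal sketch of the tabular Q-learning update underlying such a controller. The two channel-access states and the reward value are hypothetical illustrations; since the paper minimizes a quadratic cost, the reward in practice would be a negative cost rather than a positive payoff.

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    # Standard Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q

# Hypothetical table: two channel-access "modes" (the MDP states), two actions.
q = {"access": {"send": 0.0, "hold": 0.0},
     "blocked": {"send": 0.0, "hold": 0.0}}
q = q_update(q, "access", "send", reward=1.0, next_state="blocked")
```

Because the update needs only observed transitions and rewards, the controller can be learned forward-in-time without a model of the NCS dynamics, which is the key point of the paper's approach.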
- Published
- 2017
30. ECG Morphological Marking using Discrete Wavelet Transform
- Author
-
T.R. Sumithira and A. Sampath
- Subjects
Discrete wavelet transform ,Computer science ,business.industry ,Noise (signal processing) ,Computation ,Feature extraction ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Signal ,Human-Computer Interaction ,Background noise ,QRS complex ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,cardiovascular diseases ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Decomposition method (constraint satisfaction) ,business ,Software - Abstract
The electrocardiogram (ECG) is a quasi-periodic signal that expresses the activity of the heart. A large amount of information on the normal and pathological physiology of the heart can be acquired from the ECG. However, since ECG signals are nonstationary in nature, it is very hard to examine them visually; computer-based methods for ECG signal analysis are therefore needed. This paper has been stimulated by the need for an effective, general method for ECG signal analysis with excellent accuracy and minimal computation time. The initial task for efficient analysis is the removal of noise, which involves extracting the needed cardiac components while suppressing the background noise. Signal enhancement is attained by multi-resolution feature extraction using a Discrete Wavelet Transform (DWT) decomposition method, chosen for its adaptive nature. The second task is P, QRS complex and T wave peak detection, performed by a multi-resolution adaptive threshold method executed with the DWT. The experiments are carried out on the MIT-BIH database and the PTB diagnostic ECG database. The results show the proposed method is a very effective and efficient method for P, QRS complex and T wave peak detection.
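One level of a DWT decomposition can be sketched with the Haar wavelet, the simplest case (the abstract does not commit to a particular mother wavelet): pairwise scaled sums give the approximation coefficients and pairwise scaled differences give the detail coefficients. The toy "ECG" segment below is hypothetical.

```python
import math

def haar_dwt_step(signal):
    # One level of the Haar discrete wavelet transform:
    # approximation = scaled pairwise sums, detail = scaled pairwise differences.
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

# Hypothetical toy segment (even length required for this sketch).
segment = [2.0, 2.0, 5.0, 1.0]
approx, detail = haar_dwt_step(segment)
```

Applying this step recursively to the approximation coefficients yields the multi-resolution representation on which adaptive thresholding for P, QRS and T peak detection can operate.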
- Published
- 2016
31. Soft granular computing based classification using hybrid fuzzy-KNN-SVM
- Author
-
B. K. Tripathy, Mrutyunjaya Panda, and Ajith Abraham
- Subjects
Discretization ,Artificial neural network ,business.industry ,Computer science ,Granular computing ,02 engineering and technology ,Machine learning ,computer.software_genre ,Fuzzy logic ,Human-Computer Interaction ,Support vector machine ,Artificial Intelligence ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Data mining ,business ,Minimum description length ,Fuzzy knn ,computer ,Software - Abstract
This paper aims at applying the concept of information granulation in granular computing based pattern classification, which is used to deal with incomplete, unreliable and uncertain knowledge in a dataset. Data discretization provides the granules that can further be used to classify instances. We use equal-width and equal-frequency discretization as unsupervised approaches, and Fayyad-Irani's minimum description length and Kononenko's supervised discretization approaches, along with fuzzy logic, neural networks, support vector machines and their hybrids, to develop an efficient granular information processing paradigm. The experimental results show the effectiveness of our approach. We use benchmark datasets from the UCI Machine Learning Repository to verify the performance of the granular computing based approach in comparison with other existing approaches. Finally, we perform statistical significance tests to confirm the validity of the results obtained.
- Published
- 2016
32. A risky multi-criteria decision-making approach under language environment
- Author
-
Xia Feng, Cun-Bin Li, Zhi-Qiang Qi, and Peng-Fei Gao
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Degree (graph theory) ,Computer science ,business.industry ,Rationality ,Cloud computing ,02 engineering and technology ,Machine learning ,computer.software_genre ,Multi criteria decision ,Human-Computer Interaction ,020901 industrial engineering & automation ,Artificial Intelligence ,Prospect theory ,Value (economics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Segmentation ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Software ,Reliability (statistics) - Abstract
We propose a decision-making approach based on prospect theory and the cloud model for risky multi-criteria decision-making problems in which the criteria values of the alternatives are uncertain linguistic variables and the criteria weights are partially unknown. In this method, the linguistic variables are converted into a cloud model. To address the shortcomings of the existing cloud generating method, a new method is developed based on the golden section ratio and the cloud's '3En' rule. Then, a method for measuring the distance between clouds and the degree of possibility of the cloud model is given, so as to apply prospect theory to the linguistic environment. Furthermore, the treatment of the other alternative solutions as dynamic reference points is considered. Based on the deviation-maximization algorithm, the optimization model balances subjective and objective information; through this model, the optimal criteria weights and the integrated prospect value of each solution are obtained. Finally, the solutions are ranked on the principle that the larger the integrated prospect value, the better the solution. At the end of this paper, we use an example to examine the rationality and reliability of the given method.
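For reference, the standard prospect-theory value function that such approaches build on, using the classic Tversky-Kahneman parameter estimates (alpha = beta = 0.88, lambda = 2.25); the paper's cloud-model extension to linguistic variables is not shown here.

```python
def prospect_value(outcome, alpha=0.88, beta=0.88, lam=2.25):
    # Kahneman-Tversky value function: concave for gains, convex and
    # loss-averse (steeper by factor lam) for losses, measured from a
    # reference point at zero.
    if outcome >= 0:
        return outcome ** alpha
    return -lam * ((-outcome) ** beta)

gain = prospect_value(100.0)
loss = prospect_value(-100.0)
```

The asymmetry (a loss of 100 hurts about 2.25 times as much as a gain of 100 helps) is what makes the choice of reference point, and hence the paper's dynamic reference points, decisive for the final ranking.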
- Published
- 2016
33. Tuning machine learning algorithms for content-based movie recommendation
- Author
-
Maria Brbic and Ivana Podnar Žarko
- Subjects
Computer science ,business.industry ,Decision tree ,Recommender system ,Machine learning ,computer.software_genre ,MovieLens ,Human-Computer Interaction ,Naive Bayes classifier ,Artificial Intelligence ,Collaborative filtering ,The Internet ,Computer Vision and Pattern Recognition ,Data mining ,Artificial intelligence ,recommender systems ,Cluster analysis ,business ,computer ,Algorithm ,Classifier (UML) ,Software - Abstract
Machine learning algorithms are often used in content-based recommender systems since a recommendation task can naturally be reduced to a classification problem: a recommender needs to learn a classifier for a given user, where the learning examples are characteristics of items previously liked/bought/seen by the user. However, multi-valued and continuous attributes require special approaches for classifier implementation, as they can significantly influence classifier accuracy. In this paper we propose novel approaches for handling multi-valued and continuous attributes suited to the naïve Bayes classifier and the decision tree classifier, and tune them for content-based movie recommendation. We evaluate the performance of the resulting approaches using the MovieLens data set enriched with movie details retrieved from the Internet Movie Database. Our empirical results demonstrate that the naïve Bayes classifier is more suitable for content-based movie recommendation than the decision tree algorithm. In addition, the naïve Bayes classifier achieves better results with smart discretization of continuous attributes compared to the approach which models continuous attributes with a Gaussian distribution. Finally, we combine our best-performing content-based algorithm with the k-means clustering algorithm typically used for collaborative filtering, and evaluate the performance of the resulting hybrid approach on a movie recommendation task. The experimental results clearly show that the hybrid approach significantly increases recommendation accuracy compared to collaborative filtering while reducing the risk of over-specialization, a typical problem of content-based approaches.
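The two treatments of a continuous attribute compared above can be sketched side by side: a Gaussian class-conditional likelihood versus equal-width discretization into bins. The `year` attribute, its range and its class statistics below are hypothetical.

```python
import math

def gaussian_likelihood(x, mean, std):
    # P(x | class) under a Gaussian model of the continuous attribute,
    # as used by Gaussian naive Bayes.
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def discretize(x, low, high, bins):
    # Equal-width binning: map a continuous value to a bin index in [0, bins-1],
    # so the attribute can be treated as discrete by the classifier.
    if x <= low:
        return 0
    if x >= high:
        return bins - 1
    return int((x - low) / (high - low) * bins)

year = 1994.0
bin_index = discretize(year, 1950.0, 2000.0, 5)   # which "decade bucket" the year falls in
density = gaussian_likelihood(year, 1990.0, 10.0)
```

With discretization, the classifier estimates one probability per bin from counts; with the Gaussian model it estimates only a mean and standard deviation per class, which is more compact but assumes a unimodal distribution.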
- Published
- 2015
34. Flexible dynamic weight decision scheme
- Author
-
Baomin Wang, Hongqiang Jiao, Chao Chen, and Junfeng Tian
- Subjects
Mathematical optimization ,business.industry ,Computer science ,media_common.quotation_subject ,Rationality ,Decision problem ,Machine learning ,computer.software_genre ,Data type ,Adaptability ,Decision scheme ,Human-Computer Interaction ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Completeness (statistics) ,computer ,Software ,Decision analysis ,Optimal decision ,media_common - Abstract
Weight allocation is the key step in the multiple attribute decision making (MADM) procedure. Because of the complexity of decision problems, it is difficult to capture the subjectivity of people in the decision-making process using an objective method only; meanwhile, it is difficult to reflect the essence of the problem objectively based on subjective judgment alone. A flexible decision holds better adaptability. Therefore, this paper proposes a flexible dynamic weight decision scheme (FDWDS). On the basis of the data types, attribute types, the completeness and accuracy of the related knowledge, and the influence of objective and subjective factors and the preferences of decision makers on flexible weight allocation, the weights can be adjusted dynamically according to the analysis of different objects. Finally, the applicability of the scheme and the rationality of its weight allocation are verified through simulation experiments.
- Published
- 2014
35. Aggregated local models via subspace clustering
- Author
-
Bernardete Ribeiro and Ning Chen
- Subjects
business.industry ,Computer science ,Correlation clustering ,computer.software_genre ,Machine learning ,Human-Computer Interaction ,Biclustering ,Binary classification ,Artificial Intelligence ,Margin (machine learning) ,Bankruptcy prediction ,Graph (abstract data type) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Data mining ,business ,Decision model ,computer ,Software ,Subspace topology - Abstract
Decision models take a valuable step towards tackling the problem of efficient risk assessment in social and technological environments. In particular, in bankruptcy prediction models it is difficult to know exactly what happens when so many financial and external variables are at stake. To partly tackle this problem, a new approach encompassing aggregated local models obtained via subspace clustering and intelligent decision technologies is proposed in this paper. The approach first takes co-clusters of firms and financial ratios found by a biclustering algorithm; second, the weighted affinity graph matrix embedding the data points is built for learning the subspace clustering model; finally, a large-margin binary classifier over the regularized model is used to make predictions on real financial data. We empirically show that our model, by combining biclustering with subspace learning, significantly outperforms the competing approach without biclustering and the alternative without subspace learning in terms of prediction accuracy, without a significant increase in computational cost. Furthermore, we propose a consensus of the found local models which, through a simple aggregation rule, is able to improve results even further.
- Published
- 2014
36. Automated transcription of conversational Call Center speech – with respect to non-verbal acoustic events
- Author
-
Péter Mihajlik, Gellért Sárosi, Tibor Fegyó, and Balázs Tarján
- Subjects
Event modeling ,Vocabulary ,Computer science ,business.industry ,media_common.quotation_subject ,Speech recognition ,Word error rate ,Cognition ,Human-Computer Interaction ,Nonverbal communication ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,Speech transcription ,Telephony ,Transcription (software) ,business ,Software ,media_common - Abstract
This paper summarizes our recent efforts to transcribe real-life Call Center conversations automatically, with respect to non-verbal acoustic events as well. Future Call Centers – as cognitive infocom systems – must respond automatically not only to well-formed utterances but also to spontaneous, non-word speaker manifestations, and must be robust against sudden noises. Conversational telephony speech transcription is itself a big challenge; we primarily address this issue on real-life (bank and insurance) tasks. In addition, we introduce several non-word acoustic modeling approaches and their integration into LVCSR (Large Vocabulary Continuous Speech Recognition). In the experiments, one- and two-channel (client and agent speech merged into one audio stream or left in two separate streams) transcription results, cross-task results and the handling of insufficient transcription data are investigated, in parallel with the non-verbal acoustic event modeling. On the agent side, a word error rate of less than 15% could be achieved, and the best relative error rate reduction is 20%, due to the inclusion of various written corpora and to acoustic event handling.
- Published
- 2014
37. Phonetic analysis and automatic prediction of vowel duration in Hungarian spontaneous speech
- Author
-
Mária Gósy and András Beke
- Subjects
Vowel length ,business.industry ,Computer science ,Speech recognition ,Context (language use) ,Phonology ,Distinctive feature ,computer.software_genre ,Human-Computer Interaction ,Artificial Intelligence ,Duration (music) ,Vowel ,Mid vowel ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Software ,Natural language processing ,Utterance - Abstract
A large number of phonetic and phonological research papers have analyzed segmental durations, focusing on the factors and interactions that determine them. The results often play an important role in Language Technology applications, for example in TTS (text-to-speech synthesis) and ASR (automatic speech recognition), and are widely used in infocommunication. Speech sound duration depends on various factors such as phonetic quality, phonological context, phonological position in the word or in the utterance, speech style, etc. The multi-factor dependence of vowel duration is more complex in languages where vowel length is a distinctive feature, as in Hungarian. The main goal of the present research was to analyze the physical durations of pairs of vowels in spontaneous speech that exhibit a phonological length opposition. In addition, we intended to develop an algorithm for automatic classification of the short and long vowels occurring in spontaneous speech. On the basis of these findings we intended to predict the vowel durations automatically using three different methods. The measured data confirmed our hypothesis that phonologically short vs. long vowels significantly differ in their physical durations in spontaneous speech. The results of the automatic vowel length classification also supported this finding. The third aspect of our investigations was to use different supervised learning methods to predict vowel duration, based on different feature vectors consisting of characteristic and spectral features. The best result was yielded when the combined features were used with a feed-forward neural network (FFNN). The correlation between the target and the predicted vowel duration was 0.79, while the RMSE was 25 ms. The results obtained support the complexity of features affecting vowel duration, on the one hand, and indicate the temporal complexity of segments in spontaneous speech, as has been reported for Lithuanian, Czech, Hindi, Telugu and Korean, on the other.
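The evaluation figures reported here (correlation 0.79, RMSE 25 ms) are standard regression metrics. A minimal sketch of how they are computed, independent of the paper's FFNN model:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between predicted and target durations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    """Root mean squared error, here in the same unit as the
    durations (milliseconds)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))
```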
- Published
- 2014
38. About uses an expert system for an intelligent exploitation of the large data set
- Author
-
Mohamed Selmi and Amel Grissa Touzi
- Subjects
Exploit ,business.industry ,Process (engineering) ,Semantics (computer science) ,Computer science ,computer.software_genre ,Machine learning ,Expert system ,Human-Computer Interaction ,Data set ,Set (abstract data type) ,Knowledge extraction ,Knowledge base ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Software - Abstract
While Knowledge Discovery in Databases (KDD) has enjoyed great popularity and success in recent years, these approaches are restricted to the application of discovery and modeling techniques within the KDD process. Thus, the goal of exploiting these data is often neglected. In this paper, we propose an intelligent approach for the exploitation of these data. For this, we propose to define an Expert System (ES) allowing the user to easily exploit the large data set. The Knowledge Base (KB) of our ES is defined by introducing a new KDD approach taking into consideration another degree of granularity in the process of knowledge extraction. This set represents reduced knowledge of the initial data set and allows deducing the semantics of the data. We show that this ES can help the user to give semantics to these data and to exploit them in an intelligent way.
- Published
- 2014
39. Consensus model for large-scale group decision support in IT services management
- Author
-
Iván Palomares
- Subjects
Decision support system ,Service (systems architecture) ,Knowledge management ,Group (mathematics) ,Computer science ,Management science ,business.industry ,Group decision-making ,Human-Computer Interaction ,Artificial Intelligence ,Scale (social sciences) ,Business decision mapping ,Consensus model ,Heterogeneous information ,Computer Vision and Pattern Recognition ,business ,Software - Abstract
IT-based services management in organizations frequently requires the use of decision making approaches. Several multi-criteria and group decision making models have been proposed in the literature to facilitate this kind of process. However, some important aspects that arise when a large number of experts take part in group decisions have not yet been considered in organizational contexts, such as: the existence of multiple subgroups of experts with different attitudes and/or interests; the necessity of applying a consensus reaching process to make highly accepted collective decisions; and the problem of dealing with heterogeneous contexts, since experts belonging to different areas might prefer to provide preferences in different information domains. This paper presents an attitude-based consensus model for IT-based services management that deals with heterogeneous information and multiple criteria. An example that illustrates its application to a real-life problem about selecting an IT-based banking service for improvement is also presented.
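Consensus reaching processes of the kind described here typically iterate until a consensus degree over the experts' opinions exceeds a threshold. As a hedged sketch of one common formulation (not the paper's specific model), consensus can be measured as one minus the mean pairwise distance between normalized preference vectors:

```python
def consensus_degree(preferences):
    """Consensus as 1 minus the mean pairwise (normalized Manhattan)
    distance between experts' preference vectors, values in [0, 1].
    1.0 means full agreement, 0.0 maximal disagreement."""
    n = len(preferences)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = sum(abs(a - b) for a, b in zip(preferences[i], preferences[j]))
            total += d / len(preferences[i])
            pairs += 1
    return 1.0 - total / pairs
```

A moderator (or an automatic feedback mechanism) would ask the most discordant experts to revise their preferences until this degree exceeds, say, 0.85.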
- Published
- 2014
40. Cloud service management decision support: An application of AHP for provider selection of a cloud-based IT service management system
- Author
-
Thorsten Proehl, Ruediger Zarnekow, and Jonas Repschlaeger
- Subjects
Service (business) ,Process management ,Service delivery framework ,Computer science ,business.industry ,Management science ,Software as a service ,IT service management ,Service level objective ,Business service provider ,Cloud computing ,Service level requirement ,Human-Computer Interaction ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,business ,Software - Abstract
PURPOSE: This paper examines the applicability of the analytic hierarchy process (AHP) model to solve the decision problem of provider selection for a cloud-based IT service management (ITSM) solution. Furthermore, critical selection criteria for the process of provider selection are identified.
DESIGN/METHODOLOGY/APPROACH: This article describes an approach which supports IT organizations in their decision-making process to select an appropriate provider using the AHP. To that end, an AHP-based model is created and applied to standard cloud use cases. For these use cases, seven IT executives were interviewed and their priorities related to each cloud level were discussed. In addition, a case study was conducted with a large publishing company, which has recently implemented a cloud-based ITSM solution.
FINDINGS: Using the AHP approach, criteria for cloud provider selection are successfully applied to ITSM selection. The findings suggest that IT security and compliance is perceived as a mandatory dimension. Although the interviewed company is almost exclusively concerned about IT security and compliance issues, it does not consider these dimensions for provider comparison. Furthermore, provider selection is a complex, multi-criteria decision problem with the following most important factors: service elasticity (IaaS), information security (PaaS), and continual service improvement (SaaS).
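The core of any AHP model is deriving priority weights from a pairwise comparison matrix. A minimal sketch, using the geometric mean (row products) approximation of the principal eigenvector rather than any matrix specific to this study:

```python
import math

def ahp_weights(matrix):
    """Priority weights from a reciprocal pairwise comparison matrix.
    Uses the geometric mean method, a common approximation of the
    principal-eigenvector weights in AHP."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]
```

For a consistent matrix where criterion A is judged twice as important as B, the weights come out as 2/3 and 1/3; each criterion's weight then scales its contribution to the overall provider score.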
- Published
- 2014
41. An entropy-based approach to enhancing Random Forests
- Author
-
Mohamed Medhat Gaber and Harinder Singh Atwal
- Subjects
Computer science ,business.industry ,Data classification ,Machine learning ,computer.software_genre ,Random forest ,Human-Computer Interaction ,Artificial Intelligence ,Predictive power ,Entropy (information theory) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Overall performance ,Data mining ,Information gain ,business ,computer ,Software - Abstract
Data classification is a major problem in data mining and machine learning. The process involves the construction of a model from a set of historical data instances having one of the features designated as the class. This model is then used to classify instances in which the class feature is unknown. An important development in data classification has been the use of a set of classifiers that are built from different, but possibly overlapping, sets of instances. This approach is known as ensemble-based classification. Random Forests is an example of ensemble-based classification, where the outputs of many trees are used to classify an instance. Developed by Breiman in 2001, this technique has proved to be effective and a representative of the state of the art in data classification. In this paper we propose an important enhancement to the technique in order to boost the overall performance of Random Forests. Random Forests takes two parameters: the number of trees and the number of features to be randomly drawn from the set of all features at each split in the tree. We shall investigate and incorporate an information-theoretic approach to evaluating the predictive power of the features in a given dataset, namely Information Gain. We shall show experimentally that the predictive power of the features provides a guide to setting the second parameter of Random Forests (the number of randomly drawn features to split on).
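Information Gain, the measure the authors use to assess feature predictive power, is the reduction in class entropy achieved by splitting on a feature. A minimal sketch for discrete features (a toy illustration, not the paper's implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction obtained by partitioning the instances
    on the values of one discrete feature."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature_values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder
```

A perfectly predictive feature yields a gain equal to the full class entropy; an irrelevant one yields zero, and the distribution of gains across features can guide the choice of how many features to draw at each split.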
- Published
- 2013
42. On solving chance constrained programming problems involving uniform distribution with fuzzy parameters
- Author
-
Nilkanta Modak and Animesh Biswas
- Subjects
Mathematical optimization ,Fuzzy classification ,Computer science ,business.industry ,Type-2 fuzzy sets and systems ,Defuzzification ,Human-Computer Interaction ,Artificial Intelligence ,Goal programming ,Fuzzy mathematics ,Fuzzy number ,Fuzzy set operations ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Membership function - Abstract
As a special field of mathematical programming, fuzzy chance constrained programming is now emerging as a promising area of study from the viewpoint of its ability to capture fuzziness and randomness simultaneously. In this paper a fuzzy goal programming method is presented for solving multiobjective chance constrained programming problems in which the right-sided parameters associated with the system constraints are uniformly distributed fuzzy random variables. In the proposed approach the fuzzy chance constrained programming problem is first converted into its equivalent fuzzy programming form by using the concept of α-cuts. Then the problem is decomposed on the basis of the tolerance ranges of the fuzzy parameters associated with the system constraints. Next, by setting an imprecise aspiration level for each of the individual objectives, a membership function is defined to measure the degree of achievement of the goal levels of the objectives. Afterwards a fuzzy goal programming model is developed to achieve the highest degree of each of the defined membership goals to the extent possible, by minimizing the group regrets consisting of the under-deviational variables of the fuzzy goals in the decision making context. To explore the potential of the proposed approach, an illustrative example is solved and the solution is compared with another technique.
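The membership function described in this abstract maps an objective's attained value to a degree of goal achievement in [0, 1]. A hedged sketch of the common linear form for a maximization-type fuzzy goal (the paper's exact functions may differ):

```python
def membership(x, lower_tolerance, aspiration):
    """Linear membership degree for a maximization-type fuzzy goal:
    0 below the lower tolerance limit, 1 at or above the aspiration
    level, and linear in between. The under-deviational variable that
    fuzzy goal programming minimizes is then 1 - membership(x, ...)."""
    if x <= lower_tolerance:
        return 0.0
    if x >= aspiration:
        return 1.0
    return (x - lower_tolerance) / (aspiration - lower_tolerance)
```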
- Published
- 2013
43. Evolutionary algorithms using cluster patterns for timetabling
- Author
-
Nandita Sharma, Tom Gedeon, and B. Sumudu U. Mendis
- Subjects
education.field_of_study ,Computer science ,business.industry ,media_common.quotation_subject ,Population ,Evolutionary algorithm ,Combinatorial optimization problem ,Machine learning ,computer.software_genre ,Human-Computer Interaction ,Data set ,Artificial Intelligence ,Genetic algorithm ,Benchmark (computing) ,Cluster (physics) ,Quality (business) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,education ,computer ,Software ,media_common - Abstract
The examination timetabling problem (ETP) is an NP-complete combinatorial optimization problem. Intuitively, the use of properties such as patterns or clusters in the data suggests possible improvements in the performance and quality of timetabling. This paper investigates whether the use of a genetic algorithm (GA) informed by patterns extracted from student timetable data to solve ETPs can produce better quality solutions. The data patterns were captured in clusters, which were then used to generate the initial population and to evaluate the fitness of individuals. The proposed techniques were compared with a traditional GA and popular techniques on widely used benchmark problems, and on a local data set, the Australian National University (ANU) ETP, which was the motivating problem for this work. A formal definition of the ANU ETP is also proposed. Results show that the techniques using cluster patterns produced better results than the traditional GA with statistical significance of p < 0.01, showing strong evidence. Our techniques either clearly outperformed or performed well compared to the best known techniques in the literature, and produced a better timetable than the manually constructed timetable used by ANU, both in terms of quality and execution time. In this work, we also propose clear criteria for specifying the top results in this area.
- Published
- 2013
44. A genetic algorithm approach to global optimization of software cost estimation by analogy
- Author
-
Ioannis Stamelos, Christos Chatzibagias, and Dimitrios Milios
- Subjects
Mathematical optimization ,Similarity (geometry) ,Cost estimate ,business.industry ,Computer science ,Parameter space ,Field (computer science) ,Human-Computer Interaction ,Software ,Artificial Intelligence ,Genetic algorithm ,Computer Vision and Pattern Recognition ,Project management ,business ,Global optimization - Abstract
Estimation by Analogy is a popular method in the field of software cost estimation. However, the configuration of the method affects estimation accuracy, which has a great effect on project management decisions. This paper proposes a global optimization setup, based on genetic algorithms, for determining empirically the best parameter configuration. Those parameters involve the definition of project similarity, the number of analogies, and the way the analogies used are adjusted. We describe how such a search can be performed in the parameter space spanned by these parameters, which are essentially of different types. We report results on two datasets and compare with approaches that explore the search space only partially. Results provide evidence that our method produces similar or better accuracy figures with respect to other approaches.
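At its core, Estimation by Analogy predicts a new project's cost from its k most similar historical projects. A minimal sketch with one fixed configuration (Euclidean similarity, mean of k analogies, no adjustment); the paper's contribution is searching over exactly these choices with a genetic algorithm:

```python
import math

def estimate_by_analogy(target, history, k=2):
    """Effort estimate for `target` (a numeric feature vector) as the
    mean effort of its k nearest past projects. `history` is a list of
    (feature_vector, effort) pairs. Similarity measure, k, and the
    adjustment strategy are the tunable parameters the GA would search."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(history, key=lambda project: dist(target, project[0]))
    analogies = ranked[:k]
    return sum(effort for _, effort in analogies) / k
```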
- Published
- 2013
45. SA Tabu Miner: A hybrid heuristic algorithm for rule induction
- Author
-
Dragan Mihajlov, Ivan Chorbev, and Boban Joksimoski
- Subjects
Optimization algorithm ,Computer science ,business.industry ,Rule induction ,Heuristic (computer science) ,Ant colony optimization algorithms ,MathematicsofComputing_NUMERICALANALYSIS ,InformationSystems_DATABASEMANAGEMENT ,Tabu search ,Data mining algorithm ,Human-Computer Interaction ,Knowledge extraction ,Artificial Intelligence ,Simulated annealing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software - Abstract
This paper presents a hybrid heuristic algorithm for the induction of classification rules called SA Tabu Miner (Simulated Annealing and Tabu Search based Data Miner). The proposed procedure is inspired both by research on heuristic optimization algorithms and by rule induction data mining concepts and principles. A comparison is made of the performance of SA Tabu Miner with CN2 and C4.5, well-known data mining algorithms for classification, and with Ant-Miner, a recently proposed Ant Colony Optimization based algorithm, over public domain data sets. The results provide evidence that our algorithm is comparable with CN2, C4.5 and Ant-Miner in terms of predictive accuracy, and that the rule lists discovered by our algorithm are considerably simpler (smaller) than those discovered by the other algorithms.
- Published
- 2012
46. Image retrieval based on high level concept detection and semantic labelling
- Author
-
Sheela Ramanna and Buddhika Madduma
- Subjects
Information retrieval ,Computer science ,business.industry ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,Supervised learning ,Pattern recognition ,Human-Computer Interaction ,Euclidean distance ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Automatic image annotation ,Artificial Intelligence ,Labelling ,Computer Vision and Pattern Recognition ,Visual Word ,Artificial intelligence ,business ,Classifier (UML) ,Image retrieval ,Software - Abstract
This paper presents a novel approach to high-level concept detection and retrieval in images based on a combination of a visual thesaurus and multi-class supervised learning. The visual thesaurus includes both conceptual and spatial location information of the semantic concepts that are key to image labelling. Our image annotation (or labelling) process includes segmenting the image and building an image signature. The visual thesaurus is then built using a multi-class supervised SVM classifier. An algorithm for spatial location matching is included. Similarity matching during retrieval is performed on both the content and the location information using the standard Euclidean distance. The Corel data set was used for experimentation, and the results were compared with two related approaches to visual thesaurus construction and image retrieval.
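The retrieval step described here ranks stored images by Euclidean distance between fixed-length signatures. A minimal sketch of that matching stage (the signature construction itself, from segmentation and the visual thesaurus, is outside this fragment):

```python
import math

def retrieve(query_signature, database, top_k=3):
    """Return the names of the `top_k` images whose signatures are
    closest to the query signature under Euclidean distance.
    `database` maps image name -> signature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(database.items(), key=lambda kv: dist(query_signature, kv[1]))
    return [name for name, _ in ranked[:top_k]]
```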
- Published
- 2012
47. A new approach for automatic assessment of a neurological condition employing hand gesture classification
- Author
-
Katarzyna Kaszuba and Bozena Kostek
- Subjects
Computer science ,business.industry ,Pattern recognition ,Gesture classification ,Fuzzy logic ,Hand movements ,Human-Computer Interaction ,Static image ,Artificial Intelligence ,Gesture recognition ,Rating scale ,Finger tapping ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Classifier (UML) ,Software - Abstract
The paper presents a new approach to hand gesture classification which may be useful in testing and monitoring patients with neurological conditions. Since gesture recognition based on static image processing may easily fail when working with patients with neurological disorders, it is crucial to convert static masks into a dynamic 3-dimensional geometrical model. The system being developed is meant to be used by patients with Parkinson's Disease (PD). Three tests based on the UPDRS (Unified Parkinson's Disease Rating Scale) are envisioned to be performed by a patient with the use of the system, i.e.: finger tapping (test No. 23), hand movements (test No. 24) and rapid alternating hand movements (test No. 25). In this concept presentation, movement interpolation curves based on various parameters are to be used as input to a fuzzy logic classifier. In conclusion, the aim of this research is presented, and the advantages and disadvantages of the approach are shown and discussed.
- Published
- 2012
48. A confidence paradigm for classification systems with out-of-library considerations
- Author
-
Nathan J. Leap and Kenneth W. Bauer
- Subjects
Mahalanobis distance ,business.industry ,Computer science ,Feature vector ,Posterior probability ,computer.software_genre ,Machine learning ,Synthetic data ,Human-Computer Interaction ,Set (abstract data type) ,Automatic target recognition ,Artificial Intelligence ,Bounding overwatch ,Pattern recognition (psychology) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Data mining ,business ,computer ,Software - Abstract
There is no universally accepted methodology to determine how much confidence one should place in the output of a classification system. Leap and Bauer [15] present a confidence paradigm based on the assumptions that system confidence acts like, or can be modeled as, a value, and that indication confidence can be modeled as a function of the posterior probability estimates. This paper extends the paradigm to include out-of-library considerations. In addition, a novel out-of-library detector is presented. Developing the out-of-library detector involves bounding and discretizing the feature space and assigning each discrete point to either the in-library or the out-of-library class based upon Mahalanobis distance from the in-library target classes. Application of the confidence paradigm to the out-of-library detector leads us to the demonstration of a new concept called out-of-library non-declarations. The extended paradigm is applied to a synthetic data set as well as an automatic target recognition data set. In all cases, the results show performance that tracks well with previous studies found in the literature and demonstrate positive steps toward the fuller development of a theoretical framework that unites the viewpoints of the classification system developer and its user.
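The out-of-library test described above flags a feature vector when its Mahalanobis distance to every in-library class exceeds a threshold. A hedged sketch under a diagonal-covariance simplification (the full detector would use each class's inverse covariance matrix, and the threshold here is an illustrative assumption):

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance matrix,
    i.e. per-feature variances only. This is a simplification of the
    general form sqrt((x - m)^T S^-1 (x - m))."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def is_out_of_library(x, class_stats, threshold):
    """Declare `x` out-of-library when it lies farther than `threshold`
    from every in-library class. `class_stats` is a list of
    (mean_vector, variance_vector) pairs, one per in-library class."""
    return all(mahalanobis_diag(x, mean, var) > threshold
               for mean, var in class_stats)
```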
- Published
- 2011
49. Modular symbiotic adaptive neuro evolution for high dimensionality classificatory problems
- Author
-
Rahul Kala, Ritu Tiwari, and Anupam Shukla
- Subjects
Artificial neural network ,business.industry ,Computer science ,Probabilistic logic ,Modular design ,Division (mathematics) ,Modular neural network ,Machine learning ,computer.software_genre ,Human-Computer Interaction ,Data set ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Face (geometry) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Software ,Curse of dimensionality - Abstract
There has been a considerable effort in the design of evolutionary systems for the automatic generation of neural networks. Symbiotic Adaptive Neuro Evolution (SANE) is a novel approach that carries out co-evolution of neural networks at the two levels of neuron and network. The SANE network is likely to face problems when the applied data set has a high number of attributes, i.e. high dimensionality. In this paper we build a modular neural network with a probabilistic sum integration technique to overcome this curse of dimensionality. Each module is a SANE network. The division of the problem involves breaking it up into sub-problems with different (possibly overlapping) attributes. The algorithm was simulated on the Breast Cancer database from the UCI machine learning repository. Simulation results show that the algorithm, while keeping the dimensionality low, was able to effectively solve the problem.
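The probabilistic sum mentioned here is a standard way to fuse per-module outputs: treating each module's output as an independent probability, the combined value is one minus the product of the complements. A minimal sketch of that integration step (the per-module SANE networks themselves are not shown):

```python
def probabilistic_sum(module_outputs):
    """Fuse module outputs in [0, 1] with the probabilistic sum
    p = 1 - prod(1 - p_i); equivalent to an independent-evidence OR."""
    remainder = 1.0
    for p in module_outputs:
        remainder *= (1.0 - p)
    return 1.0 - remainder
```

For instance, two modules each reporting 0.5 combine to 0.75, so the fused score rises whenever any module sees evidence for the class.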
- Published
- 2011
50. Network intrusion detection system: A machine learning approach
- Author
-
Swagatam Das, Manas Ranjan Patra, Mrutyunjaya Panda, and Ajith Abraham
- Subjects
Network administrator ,Computer science ,Anomaly-based intrusion detection system ,business.industry ,Decision tree ,Bayesian network ,Intrusion detection system ,Machine learning ,computer.software_genre ,Random forest ,Human-Computer Interaction ,Naive Bayes classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,Data mining ,Artificial intelligence ,AdaBoost ,business ,computer ,Software - Abstract
Intrusion detection systems (IDSs) are currently drawing a great amount of interest as a key part of system defence. IDSs collect network traffic information from some point on the network or computer system and then use this information to secure the network. Recently, machine learning methodologies have been playing an important role in detecting network intrusions or attacks, which further helps the network administrator to take precautionary measures to prevent intrusions. In this paper, we propose to use ten machine learning approaches that include Decision Tree (J48), Bayesian Belief Network, Hybrid Naive Bayes with Decision Tree, Rotation Forest, Hybrid J48 with Lazy Locally Weighted Learning, Discriminative Multinomial Naive Bayes, Random Forest combined with Naive Bayes, and finally an ensemble of classifiers using J48 and NB with AdaBoost (AB), to detect network intrusions efficiently. We use the NSL-KDD dataset, a variant of the widely used KDDCup 1999 intrusion detection benchmark dataset, for evaluating our proposed machine learning approaches for network intrusion detection. Finally, experimental results with 5-class classification are demonstrated, including detection rate, false positive rate, and average cost of misclassification. These are used to aid a better understanding for researchers in the domain of network intrusion detection.
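Detection rate and false positive rate, the headline metrics in this evaluation, reduce to confusion-matrix counts once attack classes are distinguished from normal traffic. A minimal sketch (label names are illustrative, not taken from NSL-KDD's exact class set):

```python
def detection_metrics(y_true, y_pred, attack_labels):
    """Detection rate (fraction of attacks correctly flagged as some
    attack class) and false positive rate (fraction of normal traffic
    wrongly flagged as an attack)."""
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if t in attack_labels and p in attack_labels)
    fn = sum(1 for t, p in zip(y_true, y_pred)
             if t in attack_labels and p not in attack_labels)
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if t not in attack_labels and p in attack_labels)
    tn = sum(1 for t, p in zip(y_true, y_pred)
             if t not in attack_labels and p not in attack_labels)
    return tp / (tp + fn), fp / (fp + tn)
```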
- Published
- 2011