502 results for "Morten Goodwin"
Search Results
202. Accessibility of eGovernment Web Sites: Towards a Collaborative Retrofitting Approach
- Author
- Nietzio, Annika, primary, Olsen, Morten Goodwin, additional, Eibegger, Mandana, additional, and Snaprud, Mikael, additional
- Published
- 2010
- Full Text
- View/download PDF
203. Automatic Checking of Alternative Texts on Web Pages
- Author
- Olsen, Morten Goodwin, primary, Snaprud, Mikael, additional, and Nietzio, Annika, additional
- Published
- 2010
- Full Text
- View/download PDF
204. A solution to the exact match on rare item searches: introducing the lost sheep algorithm.
- Author
- Morten Goodwin
- Published
- 2011
- Full Text
- View/download PDF
205. Is It Possible to Predict the Manual Web Accessibility Result Using the Automatic Result?
- Author
- Casado Martínez, Carlos, primary, Martínez-Normand, Loïc, additional, and Olsen, Morten Goodwin, additional
- Published
- 2009
- Full Text
- View/download PDF
206. Chatbot Research and Design : 4th International Workshop, CONVERSATIONS 2020, Virtual Event, November 23–24, 2020, Revised Selected Papers
- Author
- Asbjørn Følstad, Theo Araujo, Symeon Papadopoulos, Effie L.-C. Law, Ewa Luger, Morten Goodwin, and Petter Bae Brandtzaeg
- Subjects
- Natural language processing (Computer science), Computer engineering, Computer networks, Logic programming, Computer science
- Abstract
This book constitutes the proceedings of the 4th International Workshop on Chatbot Research and Design, CONVERSATIONS 2020, which was held during November 23–24, 2020, hosted by the University of Amsterdam. The conference was planned to take place in Amsterdam, The Netherlands, but changed to an online format due to the COVID-19 pandemic. The 14 papers included in this volume were carefully reviewed and selected from a total of 36 submissions. The papers in the proceedings are structured in four topical groups: Chatbot UX and user perceptions, social and relational chatbots, chatbot applications, and chatbots for customer service. The papers provide new knowledge through empirical, theoretical, or design contributions.
- Published
- 2021
207. A Proposed Architecture for Large Scale Web Accessibility Assessment
- Author
- Snaprud, Mikael Holmesland, primary, Ulltveit-Moe, Nils, additional, Pillai, Anand Balachandran, additional, and Olsen, Morten Goodwin, additional
- Published
- 2006
- Full Text
- View/download PDF
208. Increasing sample efficiency in deep reinforcement learning using generative environment modelling
- Author
- Per-Arne Andersen, Ole-Christoffer Granmo, and Morten Goodwin
- Subjects
- Artificial neural network, Computer science, Sample (statistics), Machine learning, Theoretical Computer Science, Computational Theory and Mathematics, Artificial Intelligence, Control and Systems Engineering, Reinforcement learning, Markov decision process, Generative grammar, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
- Published
- 2020
209. ANN modelling of CO2 refrigerant cooling system COP in a smart warehouse
- Author
- Armin Hafner, Mohan Kolhe, Sven Myrdahl Opalic, Henrik Kofoed Nielsen, Ángel Á. Pardiñas, Morten Goodwin, and Lei Jiao
- Subjects
- Renewable Energy, Sustainability and the Environment, Computer science, Strategy and Management, Energy consumption, Industrial and Manufacturing Engineering, Energy storage, Automotive engineering, Renewable energy, Refrigerant, Energy management system, Mean absolute percentage error, Operating temperature, Water cooling, General Environmental Science, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
- Abstract
Industrial cooling systems consume large quantities of energy with highly variable power demand. To reduce environmental impact and overall energy consumption, and to stabilize the power requirements, it is recommended to recover surplus heat, store energy, and integrate renewable energy production. To control these operations continuously in a complex energy system, an intelligent energy management system can be employed using operational data and machine learning. In this work, we have developed an artificial neural network-based technique for modelling operational CO2 refrigerant-based industrial cooling systems for embedding in an overall energy management system. The operating temperature and pressure measurements, as well as the operating frequency of compressors, are used in developing an operational model of the cooling system, which outputs electrical consumption and refrigerant mass flow without the need for additional physical measurements. The presented model is superior to a generalized theoretical model, as it learns from data that includes individual compressor type characteristics. The results show that the presented approach is relatively precise, with a Mean Absolute Percentage Error (MAPE) as low as 5%, using low-resolution and asynchronous data from a case study system. The developed model is also tested in a laboratory setting, where MAPE is shown to be as low as 1.8%. (A minimal sketch of the MAPE metric follows this entry.)
- Published
- 2020
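The headline metric in the abstract above is the Mean Absolute Percentage Error. A minimal, self-contained sketch of the metric, using made-up consumption values rather than the paper's data:

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Made-up readings: measured vs. model-predicted electrical consumption (kW).
measured = [112.0, 98.5, 120.3, 101.7]
modelled = [108.9, 101.2, 118.0, 99.5]
print(f"MAPE: {mape(measured, modelled):.2f}%")
```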
210. Learning Automata-based Misinformation Mitigation via Hawkes Processes
- Author
- Ole-Christoffer Granmo, Ahmed Abouzeid, Christian Webersik, and Morten Goodwin
- Subjects
- Computer Networks and Communications, Computer science, Distributed computing, Stochastic optimization, Social media misinformation, Crisis mitigation, Theoretical Computer Science, Learning automata, Convergence (routing), State space, Social media, Misinformation, Social network, Automaton, Hawkes processes, Software, Information Systems, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
- Abstract
Mitigating misinformation on social media is an unresolved challenge, particularly because of the complexity of information dissemination. To this end, Multivariate Hawkes Processes (MHP) have become a fundamental tool because they model social network dynamics, which facilitates execution and evaluation of mitigation policies. In this paper, we propose a novel lightweight intervention-based misinformation mitigation framework using decentralized Learning Automata (LA) to control the MHP. Each automaton is associated with a single user and learns to what degree that user should be involved in the mitigation strategy by interacting with a corresponding MHP, and performing a joint random walk over the state space. We use three Twitter datasets to evaluate our approach, one of them being a new COVID-19 dataset provided in this paper. Our approach shows fast convergence and increased valid information exposure. These results persisted independently of network structure, including networks with central nodes, where the latter could be the root of misinformation. Further, the LA obtained these results in a decentralized manner, facilitating distributed deployment in real-life scenarios. (A minimal Hawkes-simulation sketch follows this entry.)
- Published
- 2020
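The framework above couples Learning Automata with a Multivariate Hawkes Process. A minimal sketch of the underlying building block only: simulating a univariate Hawkes process with an exponential kernel via Ogata thinning. The parameter values are illustrative, and the paper's LA control layer is not shown:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        # Intensity decays between events, so the intensity at the current
        # time upper-bounds the intensity until the next event occurs.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            return events
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() * lam_bar <= lam_t:
            events.append(t)  # candidate accepted as a real event

print(len(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)))
```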
211. Temperate Fish Detection and Classification: a Deep Learning based Approach
- Author
- Morten Goodwin, Kim Aleksander Tallaksen Halvorsen, Alf Ring Kleiven, Kristian Muri Knausgård, Lei Jiao, Arne Wiklund, and Tonje Knutsen Sørdalen
- Subjects
- FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Machine Learning (cs.LG), Computer Science - Computer Vision and Pattern Recognition (cs.CV), Electrical Engineering and Systems Science - Image and Video Processing (eess.IV), Computer science, Convolutional neural network, Artificial Intelligence, Classifier (linguistics), Deep learning, Process (computing), Pattern recognition, Object detection, A priori and a posteriori, Noise (video), Transfer of learning, VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420
- Abstract
Underwater cameras are used extensively across a wide range of applications in marine ecology. Still, to efficiently process the vast amount of data generated, we need tools that can automatically detect and recognize species captured on film. Classifying fish species from videos and images in natural environments can be challenging because of noise and variation in illumination and the surrounding habitat. In this paper, we propose a two-step deep learning approach for the detection and classification of temperate fishes without pre-filtering. The first step is to detect each single fish in an image, independent of species and sex. For this purpose, we employ the You Only Look Once (YOLO) object detection technique. In the second step, we adopt a Convolutional Neural Network (CNN) with the Squeeze-and-Excitation (SE) architecture for classifying each fish in the image without pre-filtering. We apply transfer learning to overcome the limited training samples of temperate fishes and to improve the accuracy of the classification. This is done by training the object detection model with ImageNet and the fish classifier via a public dataset (Fish4Knowledge), whereupon both the object detection and classifier are updated with temperate fishes of interest. The weights obtained from pre-training are applied to post-training as a priori. Our solution achieves the state-of-the-art accuracy of 99.27% on the pre-training. Post-training accuracy is also good, at 83.68% and 87.74% with and without image augmentation, respectively, indicating that the solution is viable with a more extensive dataset. (A minimal SE-block sketch follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
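The second step above classifies fish with a Squeeze-and-Excitation CNN. A minimal PyTorch sketch of a generic SE block, not the authors' full network; the channel count and reduction factor are arbitrary choices:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excitation: rescale channels

feats = torch.randn(2, 64, 32, 32)                   # dummy feature maps
print(SEBlock(64)(feats).shape)                      # torch.Size([2, 64, 32, 32])
```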
212. Indoor Space Classification Using Cascaded LSTM
- Author
- Lei Jiao, Bimal Bhattarai, Morten Goodwin, Ole-Christoffer Granmo, and Rohan Kumar Yadav
- Subjects
- Computer science, Ultra-wideband, Space (commercial competition), Object detection, Domain (software engineering), Bluetooth, Data mining, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
- Abstract
Indoor space classification is an important part of localization that helps in precise location extraction, which has been extensively utilized in industrial and domestic domains. Various approaches employ Bluetooth Low Energy (BLE), Wi-Fi, magnetic field, object detection, and Ultra Wide Band (UWB) for indoor space classification purposes. Many of the existing approaches need extensive pre-installed infrastructure, making the cost of obtaining reasonable accuracy high. Therefore, improvements are still required to increase the accuracy with minimal infrastructure requirements. In this paper, we propose an approach to classify indoor space using the geomagnetic field (GMF) and radio signal strength (RSS) as the identity. The indoor space is a large open test bed divided into different indiscernible subspaces. We collect GMF and RSS at each subspace and classify them using cascaded Long Short Term Memory (LSTM) networks. The experimental results show that the accuracy is significantly improved when GMF and RSS are combined to make distinct features. In addition, we compare the performance of the proposed model with state-of-the-art machine learning methods. (A minimal cascaded-LSTM sketch follows this entry.)
- Published
- 2020
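A minimal PyTorch sketch of the general idea above: feeding concatenated GMF and RSS sequences through stacked LSTMs and classifying the subspace from the final hidden state. The dimensions are invented, and the authors' exact cascade may differ from a plain two-layer stack:

```python
import torch
import torch.nn as nn

class SubspaceClassifier(nn.Module):
    """Two stacked LSTMs over concatenated GMF + RSS sequences, followed
    by a linear layer that scores each candidate subspace."""
    def __init__(self, gmf_dim=3, rss_dim=5, hidden=64, n_subspaces=8):
        super().__init__()
        self.lstm = nn.LSTM(gmf_dim + rss_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_subspaces)

    def forward(self, gmf, rss):                 # each: (batch, time, dim)
        x = torch.cat([gmf, rss], dim=-1)        # fuse the two signal types
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # classify from the last time step

logits = SubspaceClassifier()(torch.randn(4, 20, 3), torch.randn(4, 20, 5))
print(logits.shape)                              # torch.Size([4, 8])
```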
213. The Use of Artificial Intelligence in Disaster Management - A Systematic Literature Review
- Author
- Morten Goodwin and Vimala Nunavath
- Subjects
- Artificial neural network, Computer science, Deep learning, Big data, Intelligent decision support system, Latent Dirichlet allocation, Convolutional neural network, Support vector machine, Naive Bayes classifier, Artificial intelligence
- Abstract
Whenever a disaster occurs, users on social media, sensors, cameras, satellites, and the like generate vast amounts of data. Emergency responders and victims use this data for situational awareness, decision-making, and safe evacuations. However, making sense of the generated information under time-bound situations is a challenging task, as the amount of data can be significant, and there is a need for intelligent systems to analyze, process, and visualize it. With recent advancements in Artificial Intelligence (AI), numerous researchers have begun exploring AI, machine learning (ML), and deep learning (DL) techniques for big data analytics in managing disasters efficiently. This paper adopts a systematic literature approach to report on the application of AI, ML, and DL in disaster management. Through a systematic review process, we identified one hundred relevant publications. After analyzing all the identified papers, we concluded that most of the reviewed articles used AI, ML, and DL methods on social media data, satellite data, sensor data, and historical data for classification and prediction. The most common algorithms are support vector machines (SVM), Naive Bayes (NB), Random Forest (RF), Convolutional Neural Networks (CNN), Artificial Neural Networks (ANN), Natural Language Processing techniques (NLP), Latent Dirichlet Allocation (LDA), K-nearest neighbor (KNN), and Logistic Regression (LR).
- Published
- 2019
- Full Text
- View/download PDF
214. Load Demand Analysis of Nordic Rural Area with Holiday Resorts for Network Capacity Planning
- Author
- Morten Goodwin, Mohan Kolhe, and Nils Jakob Johannesen
- Subjects
- Transport engineering, Capacity planning, Electrical load, Peak demand, Computer science, Distributed generation, Rural area, Demand forecasting, Grid, Energy storage
- Abstract
Most Nordic holiday resorts are in rural areas with a low-capacity distribution network. The rural-area network is weak and needs capacity expansion planning, as the load demand in these areas is going to increase due to the penetration of electric vehicles and heat pumps. Such rural networks can also be operated as micro-grids, and therefore load analysis is required for appropriate operation. The load analysis will also be useful for finding the proper sizing of distributed energy resources, including energy storage. In this work, a load demand analysis of typical Nordic holiday resorts connected to a rural grid is presented, to find the load variation during the usage periods. The load analysis is targeted at demand prediction. The demand forecasting has been considered through integrating regression tools with Artificial Neural Networks, due to the low amount of data available from the holiday resorts. The collected data is from a rural area in Norway consisting of 125 holiday cabins, with a maximum load of 478 kW, in the period 2014 to 2018. This work presents an analysis of the total electric load consumption of the cabins during typical short and long holidays. It is observed that during the longer holiday periods, the loads are significantly higher than during shorter holiday periods. Prediction analysis shows that the MAPE is relatively high compared to predicted results in the higher-load area. Through the analysis, it is observed that the curvature of the maximum peak demand is unfitting the predictive outcome. To overcome this problem, the finite gradient by autoregression has been used in this work.
- Published
- 2019
- Full Text
- View/download PDF
215. Automated Dental Identification with Lowest Cost Path-Based Teeth and Jaw Separation
- Author
- Morten Goodwin and Jan-Vidar Ølberg
- Subjects
- Separation (aeronautics), Anatomy, Dental identification, path-finding, human dental identification, Criminal law and procedure, stomatognathic diseases, stomatognathic system, Social pathology. Social and public welfare. Criminology, Path (graph theory), Computer vision, Artificial intelligence
- Abstract
Teeth are among the most resilient tissues of the human body. Because of their placement, teeth often yield intact indicators even when other metrics, such as fingerprints and DNA, are missing. Forensic dental identification is currently mostly manual work, which is time- and resource-intensive. Systems for automated human identification from dental X-ray images have the potential to greatly reduce the effort spent on dental identification, but this requires a system with high stability and accuracy so that the results can be trusted. This paper proposes a new system for automated dental X-ray identification. The scheme extracts tooth and dental work contours from the X-ray images and uses the Hausdorff distance measure for ranking persons. This combination of state-of-the-art approaches with a novel lowest cost path-based method for separating a dental X-ray image into individual teeth achieves comparable and better results than what is available in the literature. The proposed scheme is fully functional and is used to accurately identify people within a real dental database. The system is able to perfectly separate 88.7% of the teeth in the test set. Further, in the verification process, the system ranks the correct person first in 86% of the cases, and among the top five in 94% of the cases. The approach has compelling potential to significantly reduce the time spent on dental identification. (A minimal Hausdorff-distance sketch follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
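The ranking step above relies on the Hausdorff distance between contours. A minimal sketch using SciPy's directed Hausdorff distance on two invented point sets; the contour extraction and tooth separation from the paper are not shown:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Hypothetical tooth contours as (x, y) point sets.
query = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])
record = np.array([[0.1, 0.0], [1.0, 0.1], [1.1, 2.1], [0.0, 1.9]])
print(f"contour distance: {hausdorff(query, record):.3f}")
```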
216. A pattern recognition approach for peak prediction of electrical consumption
- Author
- Anis Yazidi and Morten Goodwin
- Subjects
- Consumption (economics), Computer science, Load balancing (electrical power), Pattern recognition, Context (language use), Computer Science Applications, Theoretical Computer Science, Power (physics), Task (project management), Computational Theory and Mathematics, Artificial Intelligence, Pattern recognition (psychology), The Internet, Software, Energy (signal processing)
- Abstract
Predicting and mitigating demand peaks in electrical networks has become a prevalent research topic. Demand peaks pose a particular challenge to energy companies because they are difficult to foresee and require the grid to support abnormally high consumption levels. In smart energy grids, time-differentiated pricing policies that increase the energy cost for consumers during peak periods, and load balancing, are examples of simple techniques for peak regulation. In this paper, we tackle the task of predicting power peaks prior to their actual occurrence in the context of a pilot Norwegian smart grid network.
- Published
- 2016
- Full Text
- View/download PDF
217. Smart load prediction analysis for distributed power network of Holiday Cabins in Norwegian rural area
- Author
- Morten Goodwin, Nils Jakob Johannesen, and Mohan Kolhe
- Subjects
- Mathematical optimization, Renewable Energy, Sustainability and the Environment, Computer science, Strategy and Management, Autocorrelation, Distributed power, Regression analysis, Load profile, Industrial and Manufacturing Engineering, Random forest, Autoregressive model, Peak demand, Symmetric mean absolute percentage error, General Environmental Science
- Abstract
The Norwegian rural distributed power network is mainly designed for holiday cabins with limited electrical loading capacity. Load prediction analysis within such a network is necessary for effective operation and to manage the increasing demand from new appliances (e.g. electric vehicles and heat pumps). In this paper, load prediction of a distributed power network (i.e. a typical Norwegian rural-area power network of 125 cottages with 478 kW peak demand) is carried out using regression analysis techniques, establishing autocorrelations and correlations among weather parameters and occurrence time in the period 2014–2018. In this study, the regression analysis for load prediction considers vertical and continuous time approaches for day-ahead prediction. The vertical time approach uses seasonal data for training and inference, whereas the continuous time approach utilizes all data in a continuum from the start of the dataset until the time period used for inference. The vertical approach thus works with even less data than the continuous approach. The regression tools perform well with small amounts of data, and the prediction accuracy matches that of other techniques. It is observed through load-predictive analysis that autocorrelation with the vertical approach and a kNN regressor gives a low Symmetric Mean Absolute Percentage Error. The kNN regressor is compared with a Random Forest regressor, and both use autoregression. Autoregression is the simplest and most straightforward predictive model, based on the target vector itself. The autoregression indicates the decline and incline of the time series, and thus gives a finite gradient for the curvature of the load profile. It is observed that joint learning of regression tools with autoregression can predict time-series components of different load profile characteristics. The presented load prediction analysis will be useful for distributed network operation, demand-side management, and integration of renewable energy sources and distributed generators. (A minimal kNN-autoregression sketch follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
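A minimal sketch of the autoregressive kNN idea described above: build lagged features from the load series itself, fit scikit-learn's KNeighborsRegressor, and score with sMAPE. The synthetic daily-cycle series stands in for the (non-public) cabin data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def lagged_matrix(series, n_lags=24):
    """Autoregressive design matrix: predict y[t] from the previous n_lags values."""
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
load = 300 + 100 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 10, 2000)

X, y = lagged_matrix(load)
model = KNeighborsRegressor(n_neighbors=5).fit(X[:-100], y[:-100])
pred, true = model.predict(X[-100:]), y[-100:]
smape = 100 * np.mean(2 * np.abs(pred - true) / (np.abs(pred) + np.abs(true)))
print(f"sMAPE: {smape:.2f}%")
```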
218. Using the Tsetlin Machine to Learn Human-Interpretable Rules for High-Accuracy Text Categorization With Medical Applications
- Author
- Lei Jiao, Morten Goodwin, Ole-Christoffer Granmo, Geir Thore Berge, Tor Oddbjørn Tveit, and Bernt Viggo Matheussen
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Statistics - Machine Learning (stat.ML), General Computer Science, Computer science, text categorization, Natural language understanding, Decision tree, Machine learning, supervised learning, Naive Bayes classifier, Text mining, General Materials Science, Tsetlin machine, health informatics, Interpretability, Propositional variable, Classification algorithms, Artificial neural network, Deep learning, General Engineering, Random forest, Support vector machine, Categorization, Artificial intelligence, Precision and recall, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550::Annen informasjonsteknologi: 559
- Abstract
Medical applications challenge today's text categorization techniques by demanding both high accuracy and ease of interpretation. Although deep learning has provided a leap ahead in accuracy, this leap comes at the sacrifice of interpretability. To address this accuracy-interpretability challenge, we here introduce, for the first time, a text categorization approach that leverages the recently introduced Tsetlin Machine. In all brevity, we represent the terms of a text as propositional variables. From these, we capture categories using simple propositional formulae, such as: if "rash" and "reaction" and "penicillin" then Allergy. The Tsetlin Machine learns these formulae from a labelled text, utilizing conjunctive clauses to represent the particular facets of each category. Indeed, even the absence of terms (negated features) can be used for categorization purposes. Our empirical comparison with Naïve Bayes, decision trees, linear support vector machines (SVMs), random forest, long short-term memory (LSTM) neural networks, and other techniques is quite conclusive. The Tsetlin Machine either performs on par with or outperforms all of the evaluated methods on both the 20 Newsgroups and IMDb datasets, as well as on a non-public clinical dataset. On average, the Tsetlin Machine delivers the best recall and precision scores across the datasets. Finally, our GPU implementation of the Tsetlin Machine executes 5 to 15 times faster than the CPU implementation, depending on the dataset. We thus believe that our novel approach can have a significant impact on a wide range of text analysis applications, forming a promising starting point for deeper natural language understanding with the Tsetlin Machine. (A minimal clause-evaluation sketch follows this entry.)
- Published
- 2019
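The abstract above describes categories captured as conjunctive clauses over propositional term variables. A minimal sketch of how one such learned clause would be evaluated against a document; the clause and document are invented, and the learning procedure itself is not shown:

```python
def clause_matches(doc_terms, positive, negated):
    """A conjunctive clause fires when all of its plain literals are present
    in the document and none of its negated literals are."""
    return positive <= doc_terms and not (negated & doc_terms)

# Hypothetical learned clause for the category "Allergy".
positive = {"rash", "reaction", "penicillin"}
negated = set()

doc = {"patient", "developed", "rash", "after", "penicillin", "reaction"}
print(clause_matches(doc, positive, negated))   # True -> vote for Allergy
```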
219. The regression Tsetlin machine: a novel approach to interpretable nonlinear regression
- Author
- Ole-Christoffer Granmo, Morten Goodwin, Lei Jiao, Xuan Zhang, and K. Darshana Abeyrathna
- Subjects
- Theoretical computer science, Empirical comparison, Computer science, General Mathematics, General Engineering, General Physics and Astronomy, Binary number, Thresholding, Regression, Propositional formula, Bitwise operation, Theme (computing), Nonlinear regression, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
- Abstract
Relying simply on bitwise operators, the recently introduced Tsetlin machine (TM) has provided competitive pattern classification accuracy in several benchmarks, including text understanding. In this paper, we introduce the regression Tsetlin machine (RTM), a new class of TMs designed for continuous input and output, targeting nonlinear regression problems. In all brevity, we convert continuous input into a binary representation based on thresholding, and transform the propositional formula formed by the TM into an aggregated continuous output. Our empirical comparison of the RTM with state-of-the-art regression techniques reveals either superior or on-par performance on five datasets. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'. (A minimal threshold-encoding sketch follows this entry.)
- Published
- 2019
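A minimal sketch of the thresholding step described above, converting a continuous feature into a binary representation; the threshold values are placeholders (in practice they could be, for example, quantiles of the training data):

```python
import numpy as np

def binarize(x, thresholds):
    """Thermometer-style encoding: one bit per threshold, bit = 1 if x >= threshold."""
    return (np.asarray(x)[:, None] >= np.asarray(thresholds)[None, :]).astype(int)

values = np.array([0.1, 0.45, 0.8])
thresholds = np.array([0.2, 0.4, 0.6, 0.8])
print(binarize(values, thresholds))
# [[0 0 0 0]
#  [1 1 0 0]
#  [1 1 1 1]]
```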
220. Environment Sound Classification using Multiple Feature Channels and Attention based Deep Convolutional Neural Network
- Author
- Jivitesh Sharma, Morten Goodwin, and Ole-Christoffer Granmo
- Subjects
- FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Machine Learning (cs.LG), Statistics - Machine Learning (stat.ML), Computer Science - Sound (cs.SD), Audio and Speech Processing (eess.AS), Computer science, Convolutional neural network, Domain (software engineering), Audio signal processing, SIGNAL (programming language), Pattern recognition, Feature (computer vision), Benchmark (computing), Artificial intelligence, Mel-frequency cepstrum, Communication channel, VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550
- Abstract
In this paper, we propose a model for the Environment Sound Classification (ESC) task that consists of multiple feature channels given as input to a Deep Convolutional Neural Network (DCNN) with an attention mechanism. The novelty of the paper lies in using multiple feature channels consisting of Mel-Frequency Cepstral Coefficients (MFCC), Gammatone Frequency Cepstral Coefficients (GFCC), the Constant Q-transform (CQT), and Chromagram; such a combination of features has not been used before for signal or audio processing. We also employ a deeper CNN than previous models, consisting of spatially separable convolutions working on the time and feature domains separately, alongside attention modules that perform channel and spatial attention together. We use data augmentation techniques to further boost performance. Our model achieves state-of-the-art performance on all three benchmark environment sound classification datasets, i.e. UrbanSound8K (97.52%), ESC-10 (95.75%), and ESC-50 (88.50%). To the best of our knowledge, this is the first time that a single environment sound classification model achieves state-of-the-art results on all three datasets. For the ESC-10 and ESC-50 datasets, the accuracy achieved by the proposed model is beyond the human accuracy of 95.7% and 81.3%, respectively. (A minimal feature-extraction sketch follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
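A minimal librosa sketch of extracting several of the feature channels named above (MFCC, CQT, chromagram). librosa has no built-in GFCC, so that channel is omitted here, and resizing/stacking into CNN input is only indicated in a comment:

```python
import numpy as np
import librosa

# Any mono signal works; librosa.ex fetches a small bundled example clip.
y, sr = librosa.load(librosa.ex("trumpet"))

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)     # (40, frames)
cqt = np.abs(librosa.cqt(y=y, sr=sr))                  # (84, frames)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)       # (12, frames)

# Each feature map becomes one input "channel"; in practice the maps are
# resized to a common height/width before stacking them for the CNN.
print(mfcc.shape, cqt.shape, chroma.shape)
```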
221. The Role of Artificial Intelligence in Social Media Big data Analytics for Disaster Management -Initial Results of a Systematic Literature Review
- Author
- Morten Goodwin and Vimala Nunavath
- Subjects
- Emergency management, Process (engineering), Computer science, Big data, Convolutional neural network, Task (project management), Systematic review, Order (exchange), Social media, Artificial intelligence
- Abstract
When any kind of disaster occurs, victims who are directly or indirectly affected often post vast amounts of data (e.g., images, text, speech, video) on numerous social media platforms, because social media has become a primary channel for reporting either to the public or to emergency responders (ERs). ERs, who come from various emergency response organizations (EROs), need situational awareness in order to respond to the disaster. However, within minutes of a disaster, social media platforms are flooded with many kinds of data, overwhelming ERs with big data. Furthermore, much of the posted data may be redundant or irrelevant. This makes it challenging for ERs to make sense of, and take decisions based on, the available big data. Despite recent advances in technology, processing and analyzing disaster-related social media big data remains a challenging task. Hence, in this paper, we present an initial analysis of a systematic literature review on the application of artificial intelligence to analyzing and processing social media big data for efficient disaster management. During the systematic review process, 68 publications were identified. After analyzing all the identified papers, we conclude that most of the reviewed papers address text and image classification, and that convolutional neural networks have mostly been employed for the classification.
- Published
- 2018
- Full Text
- View/download PDF
222. A Novel Tsetlin Automata Scheme to Forecast Dengue Outbreaks in the Philippines
- Author
- Morten Goodwin, Ole-Christoffer Granmo, and Darshana Abeyrathna Kuruge
- Subjects
- Scheme (programming language), Computational complexity theory, Learning automata, Computer science, Stochastic process, Function (mathematics), Machine learning, Automaton, Artificial intelligence
- Abstract
Being capable of online learning in unknown stochastic environments, Tsetlin Automata (TA) have gained considerable interest. As a model of biological systems, teams of TA have been used for solving complex problems in a decentralized manner, with low computational complexity. For many domains, decentralized problem solving is an advantage; however, it may also lead to coordination difficulties and unstable learning. To combat this negative effect, this paper proposes a novel TA coordination scheme designed for learning problems with continuous input and output. By saving and updating the best solution chosen so far, we avoid having the overall system led astray by spurious erroneous actions. We organize this process hierarchically in a principal-teacher-class structure. We further propose a binary representation of continuous actions (coefficients), with each coefficient in the cost function represented by 8 TA. TA teams in different classes produce different solutions and are trained to find the global optimum with the help of their own best and the overall best solutions. The proposed algorithm is tested first with an artificial dataset and later used to forecast dengue haemorrhagic fever in the Philippines. Results of the novel procedure are compared with results from two traditional TA approaches. The training error of the novel TA scheme is approximately 50 and 62 times lower than that of the two traditional Tsetlin Automata approaches, and the testing error is approximately 31 and 21 times lower. These improvements highlight not only the effectiveness of the proposed scheme, but also the importance of old, simple, yet powerful concepts in Artificial Intelligence. (A minimal two-action TA sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
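A minimal sketch of a single two-action Tsetlin automaton with reward/penalty state transitions, the basic building block that the scheme above coordinates in teams; the environment here is a toy Bernoulli reward, not the forecasting task:

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin automaton with 2*n memory states.
    States 0..n-1 choose action 0; states n..2n-1 choose action 1."""
    def __init__(self, n=4):
        self.n = n
        self.state = random.choice([n - 1, n])   # start at the boundary

    def action(self):
        return 0 if self.state < self.n else 1

    def reward(self):                            # deepen the current action
        if self.action() == 0:
            self.state = max(0, self.state - 1)
        else:
            self.state = min(2 * self.n - 1, self.state + 1)

    def penalty(self):                           # drift toward the other action
        self.state += 1 if self.action() == 0 else -1

random.seed(1)
ta = TsetlinAutomaton()
for _ in range(200):
    # Toy environment: action 1 is rewarded w.p. 0.9, action 0 w.p. 0.1.
    p_reward = 0.9 if ta.action() == 1 else 0.1
    ta.reward() if random.random() < p_reward else ta.penalty()
print("learned action:", ta.action())            # converges to 1 w.h.p.
```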
223. Deep RTS: A Game Environment for Deep Reinforcement Learning in Real-Time Strategy Games
- Author
- Ole-Christoffer Granmo, Morten Goodwin, and Per-Arne Andersen
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Computer Science - Artificial Intelligence (cs.AI), Computer science, Convolutional neural network, Accelerated learning, Real-time strategy, Reinforcement learning, Artificial intelligence
- Abstract
Reinforcement learning (RL) is an area of research that has blossomed tremendously in recent years and has shown remarkable potential for artificial-intelligence-based opponents in computer games. This success is primarily due to the vast capabilities of convolutional neural networks, which can extract useful features from noisy and complex data. Games are excellent tools to test and push the boundaries of novel RL algorithms because they give valuable insight into how well an algorithm can perform in isolated environments without real-life consequences. Real-time strategy (RTS) games are a genre with tremendous complexity, challenging the player in short- and long-term planning. There is much research on applied RL in RTS games, and novel advances are therefore anticipated in the not too distant future. However, there are to date few environments for testing RTS AIs. Environments in the literature are often either overly simplistic, such as microRTS, or complex and without the possibility of accelerated learning on consumer hardware, like StarCraft II. This paper introduces the Deep RTS game environment for testing cutting-edge artificial intelligence algorithms for RTS games. Deep RTS is a high-performance RTS game made specifically for artificial intelligence research. It supports accelerated learning, meaning that it can learn up to 50,000 times faster than existing RTS games. Deep RTS has a flexible configuration, enabling research in several different RTS scenarios, including partially observable state-spaces and map complexity. We show that Deep RTS lives up to our promises by comparing its performance with microRTS, ELF, and StarCraft II on high-end consumer hardware. Using Deep RTS, we show that a Deep Q-Network agent beats random-play agents over 70% of the time. Deep RTS is publicly available at https://github.com/cair/DeepRTS. (A minimal Q-learning sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
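The evaluation above pits a Deep Q-Network against random play. A minimal, self-contained sketch of the underlying RL loop using tabular Q-learning on a toy chain environment; this is not the Deep RTS API (see the linked repository) nor a DQN:

```python
import random
random.seed(0)

# Toy stand-in environment: a 1-D chain where moving right (action 1)
# eventually reaches a rewarding terminal state.
N_STATES, GOAL = 6, 5

def step(state, action):
    nxt = min(max(state + (1 if action else -1), 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.95, 0.1

for _ in range(500):                                   # episodes
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # Q-learning update
        s = s2

print("greedy policy:", [q.index(max(q)) for q in Q])  # expect mostly action 1
```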
224. PolyACO+: a multi-level polygon-based ant colony optimisation classifier
- Author
- Torry Tufteland, Morten Goodwin, Guro Ødesneltvedt, and Anis Yazidi
- Subjects
- Artificial neural network, Computer science, Polygons, Training time, Multi-levelling, Pattern recognition, Ant colony, Support vector machine, Artificial Intelligence, Multiple time dimensions, Polygon, Ant colony optimisation, Artificial Ants, Classifications, Classifier (UML)
- Abstract
Ant Colony Optimisation for classification has mostly been limited to rule-based approaches, where artificial ants walk on datasets in order to extract rules from the trends in the data, and to hybrid approaches, which attempt to boost the performance of existing classifiers through guided feature reductions or parameter optimisations. A recent notable example that is distinct from the mainstream approaches is PolyACO, a proof-of-concept polygon-based classifier that resorts to ant colony optimisation as a technique to create multi-edged polygons as class separators. Despite its promise, PolyACO has some significant limitations, most notably that it only supports classification of two classes, with two features per class. This paper introduces PolyACO+, which extends PolyACO in three significant ways: (1) PolyACO+ supports classifying multiple classes, (2) PolyACO+ supports polygons in multiple dimensions, enabling classification with more than two features, and (3) PolyACO+ substantially reduces the training time compared to PolyACO by using the concept of multi-levelling. This paper empirically demonstrates that these updates improve the algorithm to such a degree that it becomes comparable to state-of-the-art techniques such as SVM, Neural Networks, and AntMiner+.
- Published
- 2017
225. On Solving the Problem of Identifying Unreliable Sensors Without a Knowledge of the Ground Truth: The Case of Stochastic Environments
- Author
- B. John Oommen, Anis Yazidi, and Morten Goodwin
- Subjects
- Reliability theory, Ground truth, Weighted Majority Algorithm, Learning automata, Condorcet's jury theorem, Probabilistic logic, Sensor fusion, Computer Science Applications, Human-Computer Interaction, Parameter identification problem, Control and Systems Engineering, Artificial intelligence, Electrical and Electronic Engineering, Software, Information Systems, Mathematics
- Abstract
The purpose of this paper is to propose a solution to an extremely pertinent problem, namely, that of identifying unreliable sensors (in a domain of reliable and unreliable ones) without any knowledge of the ground truth. This fascinating paradox can be formulated in simple terms as trying to identify stochastic liars without any additional information about the truth. Though apparently impossible, we will show that it is feasible to solve the problem, a claim that is counterintuitive in and of itself. One aspect of our contribution is to show how redundancy can be introduced, and how it can be effectively utilized in resolving this paradox. Legacy work and the reported literature (for example, the so-called weighted majority algorithm) have merely addressed assessing the reliability of a sensor by comparing its reading to the ground truth, either in an online or an offline manner. Unfortunately, the fundamental assumption of revealing the ground truth cannot always be guaranteed (or even expected) in many real-life scenarios. While some extensions of the Condorcet jury theorem [9] can lead to a probabilistic guarantee on the quality of the fused process, they do not provide a solution to the unreliable-sensor identification problem. The essence of our approach involves studying the agreement of each sensor with the rest of the sensors, and not comparing the readings of the individual sensors with the ground truth, as advocated in the literature. Under some mild conditions on the reliability of the sensors, we can prove that we can, indeed, filter out the unreliable ones. Our approach leverages the power of the theory of learning automata (LA) so as to gradually learn the identity of the reliable and unreliable sensors. To achieve this, we resort to a team of LA, where a distinct automaton is associated with each sensor. The solution provided here has been subjected to rigorous experimental tests, and the results presented are, in our opinion, both novel and conclusive. (A minimal agreement-scoring sketch follows this entry.)
- Published
- 2017
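A minimal sketch of the core intuition above: scoring each sensor by its agreement with the other sensors, never consulting the ground truth. The reliabilities are invented, and the paper's Learning Automata machinery is replaced here by a plain agreement statistic:

```python
import numpy as np
rng = np.random.default_rng(42)

# Hypothetical setup: 8 sensors report a binary reading over 500 rounds.
# Reliable sensors match the (hidden) truth w.p. 0.85; unreliable ones w.p. 0.4.
truth = rng.integers(0, 2, 500)
p = np.array([0.85] * 6 + [0.4] * 2)
readings = np.where(rng.random((8, 500)) < p[:, None], truth, 1 - truth)

# Score each sensor by its mean agreement with the other sensors; the
# ground truth is never consulted, mirroring the paper's premise.
agree = np.array([
    np.mean([np.mean(readings[i] == readings[j]) for j in range(8) if j != i])
    for i in range(8)
])
print(np.round(agree, 2))   # the two unreliable sensors score visibly lower
```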
226. Adaptive Task Assignment in Online Learning Environments
- Author
- Morten Goodwin, Christian Kråkevik, Per-Arne Andersen, and Anis Yazidi
- Subjects
- FOS: Computer and information sciences, Computer Science - Artificial Intelligence (cs.AI), Class (computer programming), Computer science, Node (networking), Contrast (statistics), Machine learning, Popularity, Intelligent tutoring system, Task (project management), Selection (linguistics), Adaptive learning, Artificial intelligence
- Abstract
With the increasing popularity of online learning, intelligent tutoring systems are regaining attention. In this paper, we introduce adaptive algorithms for the personalized assignment of learning tasks to students, so as to improve their performance in online learning environments. As the main contribution of this paper, we propose a novel Skill-Based Task Selector (SBTS) algorithm that approximates a student's skill level based on his or her performance and consequently suggests adequate assignments. The SBTS is inspired by the class of multi-armed bandit algorithms. However, in contrast to standard multi-armed bandit approaches, the SBTS aims at acquiring two criteria related to student learning, namely: which topics the student should work on, and what level of difficulty the task should be. The SBTS centers on innovative reward and punishment schemes in a task and skill matrix based on student behaviour. To verify the algorithm, the complex student behaviour is modelled using a neighbour-node selection approach based on empirical estimations of a student's learning curve. The algorithm is evaluated with a practical scenario from a basic Java programming course. The SBTS is able to quickly and accurately adapt to the composite student competency, even with a multitude of student models. (A minimal bandit-style selector sketch follows this entry.)
- Published
- 2016
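A minimal sketch of a bandit-style task selector in the spirit of the SBTS: epsilon-greedy selection over a (topic, difficulty) grid with incremental value updates. The arms and the reward signal are invented, and the paper's richer reward/punishment scheme is not reproduced:

```python
import random
random.seed(3)

# Hypothetical bandit arms over a (topic, difficulty) grid.
topics, levels = ["loops", "arrays", "classes"], [1, 2, 3]
arms = [(t, l) for t in topics for l in levels]
value = {a: 0.0 for a in arms}       # running success estimate per task type
count = {a: 0 for a in arms}

def select(eps=0.2):
    if random.random() < eps:
        return random.choice(arms)                    # explore
    return max(arms, key=lambda a: value[a])          # exploit

def update(arm, solved):
    count[arm] += 1
    value[arm] += (float(solved) - value[arm]) / count[arm]  # incremental mean

arm = select()
update(arm, solved=True)   # reward after observing the student's attempt
print(arm, value[arm])
```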
227. On Distinguishing between Reliable and Unreliable Sensors Without a Knowledge of the Ground Truth
- Author
- B. John Oommen, Anis Yazidi, and Morten Goodwin
- Subjects
- Reliability theory, Ground truth, Weighted Majority Algorithm, Learning automata, Sensor fusion, Computer science, Reliability (computer networking), Machine learning, Quality (business), Data mining, Artificial intelligence
- Abstract
In many applications, data from different sensors are aggregated in order to obtain more reliable information about the process that the sensors are monitoring. However, the quality of the aggregated information is intricately dependent on the reliability of the individual sensors. In fact, unreliable sensors will tend to report erroneous values of the ground truth, and thus degrade the quality of the fused information. Finding strategies to identify unreliable sensors can assist in countering their detrimental influence on the fusion process, and this has been a focal concern in the literature. The purpose of this paper is to propose a solution to an extremely pertinent problem, namely, that of identifying which sensors are unreliable without any knowledge of the ground truth. This fascinating paradox can be formulated in simple terms as trying to identify stochastic liars without any additional information about the truth. Though apparently impossible, we will show that it is feasible to solve the problem, a claim that is counter-intuitive in and of itself. To the best of our knowledge, this is the first reported solution to the aforementioned paradox. Legacy work and the reported literature have merely addressed assessing the reliability of a sensor by comparing its reading to the ground truth, either in an online or an offline manner. The informed reader will observe that the so-called Weighted Majority Algorithm is a representative example of a large class of such legacy algorithms. The essence of our approach involves studying the agreement of each sensor with the rest of the sensors, and not comparing the readings of the individual sensors with the ground truth, as advocated in the literature. Under some mild conditions on the reliability of the sensors, we can prove that we can, indeed, filter out the unreliable ones. Our approach leverages the power of the theory of Learning Automata (LA) so as to gradually learn the identity of the reliable and unreliable sensors. To achieve this, we resort to a team of LA, where a distinct automaton is associated with each sensor. The solution provided here has been subjected to rigorous experimental tests, and the results presented are, in our opinion, both novel and conclusive.
- Published
- 2015
- Full Text
- View/download PDF
228. On Utilizing Stochastic Non-linear Fractional Bin Packing to Resolve Distributed Web Crawling
- Author
- Morten Goodwin, B. John Oommen, Ole-Christoffer Granmo, and Anis Yazidi
- Subjects
- Theoretical computer science, Learning automata, Bin packing problem, Computer science, Web page, Continuous knapsack problem, Resource allocation, Distributed web crawling, Resource management, Resource management (computing), Web crawler
- Abstract
This paper deals with the extremely pertinent problem of web crawling, which is far from trivial considering the magnitude and all-pervasive nature of the World-Wide Web. While numerous AI tools can be used to deal with this task, in this paper we map the problem onto the combinatorially hard stochastic non-linear fractional knapsack problem, which, in turn, is then solved using Learning Automata (LA). Such LA-based solutions have recently been shown to outperform previous state-of-the-art approaches to resource allocation in Web monitoring. However, the ever-growing deployment of distributed systems raises the need for solutions that cope with a distributed setting. In this paper, we present a novel scheme for solving the non-linear fractional bin packing problem. Furthermore, we demonstrate that our scheme has applications to Web crawling, i.e., distributed resource allocation, and in particular, to distributed Web monitoring. Comprehensive experimental results demonstrate the superiority of our scheme when compared to other classical approaches.
- Published
- 2014
- Full Text
- View/download PDF
229. A novel strategy for solving the stochastic point location problem using a hierarchical searching scheme
- Author
- B. John Oommen, Ole-Christoffer Granmo, Anis Yazidi, and Morten Goodwin
- Subjects
- Continuous-time stochastic process, Mathematical optimization, Optimization problem, Controlled random walk, Time reversibility, Discretized learning, Learning automata, Stochastic-point problem, Electrical and Electronic Engineering, Stochastic neural network, Mathematics, Binary tree, Random walk, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Stochastic optimization, Software, Information Systems
- Abstract
Stochastic point location (SPL) deals with the problem of a learning mechanism (LM) determining the optimal point on a line when the only input it receives are stochastic signals about the direction in which it should move. One can differentiate the SPL from the traditional class of optimization problems by the fact that the former considers the case where the directional information, for example, as inferred from an Oracle (which possibly computes the derivatives), suffices to achieve the optimization, without actually explicitly computing any derivatives. The SPL can be described in terms of a LM (algorithm) attempting to locate a point on a line. The LM interacts with a random environment which essentially informs it, possibly erroneously, whether the unknown parameter is on the left or the right of a given point. Given a current estimate of the optimal solution, all the reported solutions to this problem effectively move along the line to yield updated estimates which are in the neighborhood of the current solution. This paper proposes a dramatically distinct strategy, namely, that of partitioning the line in a hierarchical tree-like manner, and of moving to relatively distant points, as characterized by those along the path of the tree. We are thus attempting to merge the rich fields of stochastic optimization and data structures. Indeed, as in the original discretized solution to the SPL, in one sense, our solution utilizes the concept of discretization and operates a uni-dimensional controlled random walk (RW) in the discretized space to locate the unknown parameter. However, by moving to non-neighbor points in the space, our newly proposed hierarchical stochastic searching on the line (HSSL) solution performs such a controlled RW on the discretized space structured on a superimposed binary tree. We demonstrate that the HSSL solution is orders of magnitude faster than the original SPL solution proposed by Oommen. By a rigorous analysis, the HSSL is shown to be optimal if the effectiveness (or credibility) of the environment, given by p, is greater than the golden ratio conjugate. The solution has been both analytically solved and simulated, and the results obtained are extremely fascinating, as this is the first reported use of time reversibility in the analysis of stochastic learning. The learning automata extensions of the scheme are currently being investigated. As we shall see later, hierarchical solutions have been proposed in the field of LA. (A minimal baseline-SPL sketch follows this entry.)
- Published
- 2014
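A minimal sketch of the baseline discretized SPL random walk that HSSL accelerates (not the hierarchical tree scheme itself): a walker on a discretized line follows an oracle that is truthful with probability p:

```python
import random
random.seed(7)

# Discretized SPL: N points on [0, 1]; the oracle points toward the
# hidden target truthfully w.p. p, and lies otherwise.
N, p, target = 100, 0.8, 0.63
pos = N // 2

for _ in range(3000):
    truthful = random.random() < p
    says_right = (pos / N < target) == truthful   # possibly lying oracle
    pos = min(N, pos + 1) if says_right else max(0, pos - 1)

print(f"estimate: {pos / N:.2f} (target {target})")
```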
230. A Spatio-temporal Probabilistic Model of Hazard and Crowd Dynamics in Disasters for Evacuation Planning
- Author
- Jaziar Radianti, Morten Goodwin, Parvaneh Sarshar, Julie Dugdale, Ole-Christoffer Granmo, Sondre Glimsdal, and Jose J. Gonzalez
- Subjects
- Hazard (logic), Crowd dynamics, Operations research, Computer science, Hazard Modeling, Crowd Modeling, Time step, Crowd psychology, Dynamic Bayesian Networks, Evacuation Planning, Statistical model, Ant Based Colony Optimization, Crowd evacuation, Artificial intelligence, [INFO.INFO-MA]Computer Science [cs]/Multiagent Systems [cs.MA], VDP::Mathematics and natural science: 400::Mathematics: 410::Statistics: 412
- Abstract
Managing the uncertainties that arise in disasters, such as a ship fire, can be extremely challenging. Previous work has typically focused either on modeling crowd behavior or on hazard dynamics, targeting fully known environments. However, when a disaster strikes, uncertainty about the nature, extent and further development of the hazard is the rule rather than the exception. Additionally, crowd and hazard dynamics are both intertwined and uncertain, making evacuation planning extremely difficult. To address this challenge, we propose a novel spatio-temporal probabilistic model that integrates crowd with hazard dynamics, using a ship fire as a proof-of-concept scenario. The model is realized as a dynamic Bayesian network (DBN), supporting distinct kinds of crowd evacuation behavior, both descriptive and normative (optimal). Descriptive modeling is based on studies of physical fire models, crowd psychology models, and corresponding flow models, while we identify optimal behavior using Ant-Based Colony Optimization (ACO). Simulation results demonstrate that the DBN model allows us to track and forecast the movement of people until they escape, as the hazard develops from time step to time step. Furthermore, the ACO provides safe paths, dynamically responding to current threats.
- Published
- 2013
- Full Text
- View/download PDF
231. Ant colony optimisation for planning safe escape routes
- Author
-
Jaziar Radianti, Ole-Christoffer Granmo, Sondre Glimsdal, Parvaneh Sarshar, and Morten Goodwin
- Subjects
Emergency personnel, VDP::Mathematics and natural science: 400::Mathematics: 410::Applied mathematics: 413, Operations research, Smart phone, Computer science, Event (computing), VDP::Technology: 500::Information and communication technology: 550, Ant colony, Computer security, computer.software_genre, Hazard (computer architecture), Emergency situations, computer, Wireless sensor network
- Abstract
Published version of a chapter from the volume: Recent Trends in Applied Artificial Intelligence. Also available on SpringerLink: http://dx.doi.org/10.1007/978-3-642-38577-3_6 An emergency requiring evacuation is a chaotic event filled with uncertainties both for the people affected and rescuers. The evacuees are often left to themselves for navigation to the escape area. The chaotic situation increases when a predefined escape route is blocked by a hazard, and there is a need to re-think which escape route is safest. This paper addresses automatically finding the safest escape route in emergency situations in large buildings or ships with imperfect knowledge of the hazards. The proposed solution, based on Ant Colony Optimisation, suggests a near-optimal escape plan for every affected person, considering both the dynamic spread of hazards and congestion avoidance. The solution can be used both on an individual basis, such as from the personal smart phone of one of the evacuees, and from a remote location by emergency personnel trying to assist large groups.
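The core idea can be sketched in a few lines of Python: ants repeatedly walk a building graph, and pheromone accumulates on routes that avoid hazards and stay short. The graph, hazard levels and parameters below are illustrative assumptions, not the paper's formulation:

    import random

    graph = {                     # node -> [(neighbour, hazard level in [0, 1])]
        "cabin":    [("corridor", 0.1), ("stairs", 0.7)],
        "corridor": [("stairs", 0.2), ("deck", 0.6)],
        "stairs":   [("deck", 0.1)],
        "deck":     [],           # escape area
    }
    pheromone = {(a, b): 1.0 for a in graph for b, _ in graph[a]}
    hazard = {(a, b): h for a in graph for b, h in graph[a]}

    def walk(start="cabin", goal="deck"):
        path, node = [start], start
        while node != goal:
            nbrs = graph[node]
            weights = [pheromone[(node, n)] * (1 - h) for n, h in nbrs]
            node = random.choices([n for n, _ in nbrs], weights=weights)[0]
            path.append(node)
        return path

    random.seed(0)
    for _ in range(200):                          # each ant reinforces its route
        p = walk()
        worst = max(hazard[e] for e in zip(p, p[1:]))
        quality = (1 - worst) / len(p)            # prefer safe routes, then short
        for e in zip(p, p[1:]):
            pheromone[e] += quality
        for e in pheromone:
            pheromone[e] *= 0.95                  # evaporation forgets old threats

    print(walk())   # tends toward cabin -> corridor -> stairs -> deck

Evaporation is what lets the colony re-plan when hazards change: stale pheromone fades, so a route that becomes dangerous loses its appeal within a few iterations.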
- Published
- 2013
232. Following the WCAG 2.0 techniques: Experiences from designing a WCAG 2.0 checking tool
- Author
-
Mikael Snaprud, Annika Nietzio, Mandana Eibegger, and Morten Goodwin
- Subjects
World Wide Web, Web standards, medicine.medical_specialty, VDP::Mathematics and natural science: 400::Information and communication science: 420::System development and system design: 426, business.industry, Computer science, Logical combination, medicine, Metric (unit), Software engineering, business, Web modeling, Web accessibility
- Abstract
Published version of a chapter in the book: Computers Helping People with Special Needs. Also available from the publisher at: http://dx.doi.org/10.1007/978-3-642-31522-0_63 This paper presents a conceptual analysis of how the Web Content Accessibility Guidelines (WCAG) 2.0 and its accompanying documents can be used as a basis for the implementation of an automatic checking tool and the definition of a web accessibility metric. There are two major issues that need to be resolved to derive valid and reliable conclusions from the output of individual tests. First, the relationship of Sufficient Techniques and Common Failures has to be taken into account. Second, the logical combination of the techniques related to a Success Criterion must be represented in the results. The eGovMon project has a lot of experience in specifying and implementing tools for automatic checking of web accessibility. The project is based on the belief that web accessibility evaluation is not an end in itself. Its purpose is to promote web accessibility and initiate improvements.
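The combination logic the paper describes can be illustrated with a small sketch; the verdict values and function name are ours, not eGovMon's code:

    # A Success Criterion fails if any Common Failure is detected; it passes if
    # at least one Sufficient Technique is satisfied; otherwise an automatic
    # checker can only report that it cannot tell.
    def success_criterion(sufficient_results, failure_results):
        """Each argument: a list of 'pass' / 'fail' / 'cannot tell' verdicts."""
        if "fail" in failure_results:
            return "fail"
        if "pass" in sufficient_results:
            return "pass"
        return "cannot tell"

    print(success_criterion(["cannot tell", "pass"], ["pass"]))  # -> pass
    print(success_criterion(["pass"], ["fail"]))                 # -> fail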
- Published
- 2012
233. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites
- Author
-
Annika Nietzio, Mikael Snaprud, Deniz Susar, Morten Goodwin, and Christian S. Jensen
- Subjects
Government, Public Administration, Sociology and Political Science, General Computer Science, Human rights, e-participation, business.industry, media_common.quotation_subject, Internet privacy, Declaration, Benchmarking, World Wide Web, Dignity, Web Accessibility Initiative, Business, media_common, Web accessibility
- Abstract
Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first publicly available global analysis of the Web accessibility of the 192 United Nations Member States. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing anti-disability discrimination laws is highly beneficial for the accessibility of Web sites, while signing the UN Convention on the Rights and Dignity of Persons with Disabilities has had no such effect yet. The article demonstrates that, despite the commonly held assumption to the contrary, mature, high-quality Web sites are more accessible than lower-quality ones. Moreover, Web accessibility conformance claims by Web site owners are generally exaggerated.
- Published
- 2011
- Full Text
- View/download PDF
234. Accessibility of eGovernment Web Sites: Towards a Collaborative Retrofitting Approach
- Author
-
Mandana Eibegger, Annika Nietzio, Mikael Snaprud, and Morten Goodwin Olsen
- Subjects
Background information, Competition (economics), Knowledge management, business.industry, Added value, Retrofitting, VDP::Technology: 500::Information and communication technology: 550, Business, Benchmarking, Enforcement, Content management system, Web site
- Abstract
Published version of a chapter from the book: Computers Helping People with Special Needs. The original publication is available at www.springerlink.com: http://dx.doi.org/10.1007/978-3-642-14097-6_75 Accessibility benchmarking is an efficient way to raise awareness and initiate competition. However, traditional benchmarking is of little avail when it comes to removing barriers from eGovernment web sites in practice. Regulations and legal enforcement may be helpful in a long-term perspective. For more rapid progress, both vendors and web site maintainers are willing to take short-term action towards improvements, provided that clear advice is available. The approach of the eGovernment Monitoring project (eGovMon) integrates benchmarking as a central activity in a user-driven project. In addition to benchmarking results, several other services and background information are provided to enable the users – in this case a group of Norwegian municipalities who want to improve the accessibility of their web sites – to gain real added value from benchmarking.
- Published
- 2010
235. Benchmarking e-Government - A Comparative Review of Three International Benchmarking Studies
- Author
-
Morten Goodwin Olsen and Lasse Berntzen
- Subjects
Government, E-Government, Order (exchange), Computer science, Management science, Benchmarking, Strengths and weaknesses
- Abstract
This paper makes a range of comparisons between e-government developments and performance worldwide. In order to make such comparisons, it is necessary to use a set of indicators. This paper examines the evolution of indicators used by three widely referenced international e-government studies, from the early days of e-government benchmarking until today. Some critical remarks related to the current state-of-the-art are given. The authors conclude that all three studies have their strengths and weaknesses, and propose automatic assessment of e-government services as a potential solution to some of the problems experienced by current benchmarking studies.
- Published
- 2009
- Full Text
- View/download PDF
236. Is It Possible to Predict the Manual Web Accessibility Result Using the Automatic Result?
- Author
-
Loïc Martínez-Normand, Carlos Casado Martínez, and Morten Goodwin Olsen
- Subjects
Web standards, medicine.medical_specialty, accessibilitat web, Computer science, accessibility analysis tools, 02 engineering and technology, eines d'accessibilitat web, Web testing, Web page, Web design, 0202 electrical engineering, electronic engineering, information engineering, medicine, herramientas de accesibilidad web, 0501 psychology and cognitive sciences, Web navigation, benchmarking, accesibilidad web, 050107 human factors, Data Web, Informática, Information retrieval, 05 social sciences, web accessibility, 020201 artificial intelligence & image processing, Web modeling, Web accessibility
- Abstract
The most adequate approach for benchmarking web accessibility is manual expert evaluation supplemented by automatic analysis tools. But manual evaluation has a high cost and is impractical to apply to large websites. In reality, there is no choice but to rely on automated tools when reviewing large web sites for accessibility. The question is: to what extent can the results from automatic evaluation of a web site and of individual web pages be used as an approximation of manual results? This paper presents the initial results of an investigation aimed at answering this question. We have performed both manual and automatic evaluations of the accessibility of web pages from two sites and compared the results. In our data set, the automatically retrieved results could indeed be used as an approximation of the manual evaluation results.
- Published
- 2009
237. Monitoring Accessibility of Governmental Web Sites in Europe.
- Author
-
Bühler, Christian, Heck, Helmut, Nietzio, Annika, Olsen, Morten Goodwin, and Snaprud, Mikael
- Abstract
Web accessibility is an important goal of the European i2010 strategy. Several one-off surveys of eAccessibility have been conducted in the past few years. In this paper, we describe an approach to supplement the results of such surveys with automated assessments that can easily be repeated at regular intervals. The software basis is provided by the European Internet Accessibility Observatory (EIAO). We analyse how the data collected by EIAO can be compared to other surveys. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
238. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology
- Author
-
Nils Ulltveit-Moe, Morten Goodwin Olsen, Pillai, Anand B., Christian Thomsen, Terje Gjøsæter, and Mikael Snaprud
- Abstract
The European Internet Accessibility Observatory project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages has been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Framework (RDF) database that is later loaded into the EIAO data warehouse using an Extract-Transform-Load (ETL) tool. The aggregated indicator results in the data warehouse are finally presented in a Plone based online reporting tool. This paper describes the final version of the EIAO architecture and outlines some of the technical and architectural challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements.
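The sampling step is simple enough to sketch; the names and the budget below are illustrative, not the project's code:

    import random

    def sample_pages(crawled_urls, sample_size, seed=42):
        # Uniform random subset, without replacement; the seed makes the
        # sample reproducible across repeated evaluations of the same site.
        rng = random.Random(seed)
        return rng.sample(crawled_urls, min(sample_size, len(crawled_urls)))

    site = [f"http://example.org/page{i}" for i in range(600)]
    for url in sample_pages(site, 5):
        print("evaluate:", url)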
- Published
- 2008
239. Determining Optimal Polling Frequency Using a Learning Automata-based Solution to the Fractional Knapsack Problem
- Author
-
B.J. Oommen, Ole-Christoffer Granmo, Svein Arild Myrer, and Morten Goodwin Olsen
- Subjects
Mathematical optimization, Learning automata, Discretization, Knapsack problem, Stochastic process, Continuous knapsack problem, Resource allocation, Stochastic optimization, Random walk, Mathematics
- Abstract
Previous approaches to resource allocation in Web monitoring target optimal performance under restricted capacity constraints (Pandey et al., 2003; Wolf et al., 2002). The resource allocation problem is generally modelled as a knapsack problem with known deterministic properties. However, for practical purposes the Web must often be treated as stochastic and unknown. Unfortunately, estimating unknown knapsack properties (e.g., based on an estimation phase (Pandey et al., 2003; Wolf et al., 2002)) delays finding an optimal or near-optimal solution. Dynamic environments aggravate this problem further when the optimal solution changes with time. In this paper, we present a novel solution for the nonlinear fractional knapsack problem with a separable and concave criterion function (Bretthauer and Shetty, 2002). To render the problem realistic, we consider the criterion function to be stochastic with an unknown distribution. At every time instant, our scheme utilizes a series of informed guesses to move, in an online manner, from a "current" solution towards the optimal solution. At the heart of our scheme, a game of deterministic learning automata performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of our scheme. In order to yield a required precision, the current resource allocation solution is consistently improved until a near-optimal solution is found. Furthermore, our proposed scheme quickly adapts to periodically switching environments. Thus, we believe that our scheme is qualitatively superior to the class of estimation-based schemes.
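A stripped-down illustration of such a discretized random walk, for two pages sharing a fixed polling budget; the update rates and the linear reward below are our assumptions, whereas the paper works with a concave (diminishing-returns) criterion and a game of several automata:

    import random

    random.seed(3)
    rates = [0.8, 0.2]     # unknown P(an update is found when page i is polled)
    resolution = 100
    units = [50, 50]       # discretized polling shares; their sum stays constant

    for _ in range(5000):
        i = random.choices([0, 1], weights=units)[0]  # poll per current allocation
        j = 1 - i
        if random.random() < rates[i]:                # update found: step toward i
            if units[j] > 1:
                units[i] += 1; units[j] -= 1
        elif units[i] > 1:                            # nothing found: step away
            units[i] -= 1; units[j] += 1

    # With this linear reward the walk drifts toward polling the busier page
    # almost exclusively; a concave criterion balances the allocation instead.
    print([u / resolution for u in units])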
- Published
- 2006
- Full Text
- View/download PDF
240. Scalability Issues for large scale Web Accessibility Evaluation
- Author
-
Mikael Snaprud, Nils Ulltveit-Moe, Morten Goodwin Olsen, Torben Bach Pedersen, Christian Thomsen, Anand Pillai, Terje Gjøsæter, Helene Unander, Prinz, Andreas, and Tveit, Merete Skjelten
- Published
- 2006
241. Incremental web crawling as a competitive game of learning automata
- Author
-
Myrer, Svein Arild and Olsen, Morten Goodwin
- Subjects
IKT590, VDP::Matematikk og naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Algoritmer og beregnbarhetsteori: 422, VDP::Matematikk og naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Simulering, visualisering, signalbehandling, bildeanalyse: 429
- Abstract
Master's thesis in Information and Communication Technology 2005 - Høgskolen i Agder, Grimstad. There is no doubt that the World Wide Web has lived up to its hype of being the world's central information highway over the past years. An increasing number of versatile services keep finding their way onto the Web as information providers continue to embrace the possibilities that the Web can offer. Especially the possibility of producing dynamic content has been an accelerating factor and is the reason why we can now conveniently participate in online auctions or see the latest development of our favorite stocks in near real-time from our own living rooms. However, for automated data mining applications that deploy crawlers to continuously capture the information provided by this new breed of services, the highly dynamic nature of the content is not convenient at all. As a matter of fact, a completely new set of challenges emerges where traditional crawling strategies are shown to be sub-optimal. Accordingly, a new class of methods for crawling operations is clearly needed. Nonetheless, the problem area has so far been given limited attention in the literature. In this thesis we address the new problem area of monitoring highly dynamic data sources of different importance. We use the concept of an incremental web crawler as a basis for our novel approach, where we consider the incremental crawling task as a continuous learning problem in which scheduling of monitoring tasks is combined with parameter estimation in an on-line manner. By mapping the problem to two variants of the so-called knapsack problem, we propose two solutions based on a machine learning technique known as learning automata. We show empirically that our proposed solutions continuously improve their performance through a learning process and that they are capable of operating in non-stationary environments. We also show their performance in comparison to alternative algorithms where, most notably, our schemes are shown to outdo the traditional uniform crawling scheme by factors of up to 550% in certain situations.
- Published
- 2005
242. Monitoring Accessibility of Governmental Web Sites in Europe
- Author
-
Bühler, Christian, primary, Heck, Helmut, additional, Nietzio, Annika, additional, Olsen, Morten Goodwin, additional, and Snaprud, Mikael, additional
- Full Text
- View/download PDF
243. Benchmarking e-Government - A Comparative Review of Three International Benchmarking Studies
- Author
-
Berntzen, Lasse, primary and Olsen, Morten Goodwin, additional
- Published
- 2009
- Full Text
- View/download PDF
244. A Proposed Architecture for Large Scale Web Accessibility Assessment.
- Author
-
Miesenberger, Klaus, Klaus, Joachim, Wolfgang Zagler, Karshmer, Arthur, Snaprud, Mikael Holmesland, Ulltveit-Moe, Nils, Pillai, Anand Balachandran, and Olsen, Morten Goodwin
- Abstract
This paper outlines the architecture of a system designed to demonstrate large scale web accessibility assessment developed in a European research project. The system consists of a set of integrated software components designed to automatically evaluate accessibility metrics for a large number of websites and present results in a common report. (The project is co-funded by the European Commission DG Information Society and Media, under the contract IST-004526.) The system architecture is designed to be maintainable, scalable, and extensible in order to facilitate further development of the tool. To meet these design criteria within a limited set of resources, an Open Source approach is adopted both for selecting, designing and developing the software. Keywords: Software architecture, Web accessibility evaluation, free/open source software. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
245. On the Convergence of Tsetlin Machines for the IDENTITY- and NOT Operators.
- Author
-
Zhang, Xuan, Jiao, Lei, Granmo, Ole-Christoffer, and Goodwin, Morten
- Subjects
PATTERN recognition systems, MACHINE learning, TIME perspective, MACHINERY, MATHEMATICAL analysis
- Abstract
The Tsetlin Machine (TM) is a recent machine learning algorithm with several distinct properties, such as interpretability, simplicity, and hardware-friendliness. Although numerous empirical evaluations report on its performance, the mathematical analysis of its convergence is still open. In this article, we analyze the convergence of the TM with only one clause involved for classification. More specifically, we examine two basic logical operators, namely, the “IDENTITY”- and “NOT” operators. Our analysis reveals that the TM, with just one clause, can converge correctly to the intended logical operator, learning from training data over an infinite time horizon. Besides, it can capture arbitrarily rare patterns and select the most accurate one when two candidate patterns are incompatible, by configuring a granularity parameter. The analysis of the convergence of the two basic operators lays the foundation for analyzing other logical operators. These analyses altogether, from a mathematical perspective, provide new insights on why TMs have obtained state-of-the-art performance on several pattern recognition problems. [ABSTRACT FROM AUTHOR]
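A minimal sketch of the single-clause setting analyzed in the article, for the one-bit IDENTITY operator (y = x): two Tsetlin automata decide whether to include the literals x and NOT x in one conjunctive clause. The parameter values and the simplified feedback rules are illustrative, not the article's formal construction:

    import random

    random.seed(7)
    N, s = 100, 4.0                      # automaton depth per action; specificity
    state = {"x": N, "not_x": N}         # state <= N: exclude, state > N: include

    def literal(name, x):                # truth value of a literal for input x
        return x == 1 if name == "x" else x == 0

    def clause(x):
        included = [literal(n, x) for n in state if state[n] > N]
        return all(included) if included else True   # empty clause fires in learning

    for _ in range(2000):
        x = random.randint(0, 1); y = x                  # IDENTITY training data
        if y == 1:                                       # Type I: reinforce pattern
            for name in state:
                if clause(x) and literal(name, x):
                    if random.random() < (s - 1) / s and state[name] < 2 * N:
                        state[name] += 1                 # push toward include
                elif random.random() < 1 / s and state[name] > 1:
                    state[name] -= 1                     # drift toward exclude
        elif clause(x):                                  # Type II: false positive
            for name in state:
                if not literal(name, x) and state[name] <= N:
                    state[name] += 1                     # include a blocking literal

    print({n: "include" if v > N else "exclude" for n, v in state.items()})
    # expected: {'x': 'include', 'not_x': 'exclude'}, i.e. clause(x) = x

Run long enough, the automaton for x crosses into the include side while the automaton for NOT x drifts to exclude, which matches the convergence behavior the article establishes formally.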
- Published
- 2022
- Full Text
- View/download PDF
246. MRESGAT: MULTI-HEAD RESIDUAL DILATED CONVOLUTION ASSISTED GATED UNIT FRAMEWORK FOR CROP YIELD PREDICTION.
- Author
-
SHETTY, SAHANA and T. R., MAHESH
- Subjects
DISTRIBUTED computing, OPTIMIZATION algorithms, CROP yields, FOOD supply, AGRICULTURAL forecasts
- Abstract
The importance of predicting crop yields has increased due to growing concerns surrounding food security. Early forecasting of crop yields plays a pivotal role in avoiding starvation by estimating the food supply available for the expanding global population. Several Deep Learning (DL) and Machine Learning (ML) techniques have been used to develop effective and accurate crop yield prediction models. Nevertheless, existing models face limitations such as low accuracy, high error rates caused by noisy data, long training times, and the extraction of less effective features for prediction. To overcome these issues, a novel DL methodology is introduced for attaining highly accurate crop yield prediction. Initially, soil, weather, and other resource big data are collected from various agricultural fields. In the data collection phase, the large input data are stored on the Hadoop platform so that the entire data set can be stored and processed in a distributed manner. The data are pre-processed using missing-value imputation and z-score-based data normalization. From the pre-processed data, the optimal features are selected using the Integrated Correlation Random recursive elimination (InCorRe) approach. Based on previous soil and weather information, the expected crop yield is predicted using the Multi-head Residual dilated convolution assisted Gated unit (MResGat) model. Finally, the losses of the network model are optimized using the African Vulture Optimization algorithm (AVO). The proposed method is evaluated using several performance metrics, achieving an MSE value of 0.023% and an MAE value of 0.036%. [ABSTRACT FROM AUTHOR]
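The two pre-processing steps are standard; a small sketch on toy numbers (the data are invented, and this is not the paper's pipeline code):

    import numpy as np

    X = np.array([[22.0, 310.0],         # e.g. temperature, rainfall per sample
                  [np.nan, 290.0],
                  [19.5, np.nan]])

    col_mean = np.nanmean(X, axis=0)
    X = np.where(np.isnan(X), col_mean, X)       # missing-value imputation
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # z-score: zero mean, unit variance
    print(X.round(2))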
- Published
- 2024
- Full Text
- View/download PDF
247. Using AI to Empower Norwegian Agriculture: Attention-Based Multiple-Instance Learning Implementation.
- Author
-
Kvande, Mikkel Andreas, Jacobsen, Sigurd Løite, Goodwin, Morten, and Gupta, Rashmi
- Subjects
ARTIFICIAL intelligence, REMOTE-sensing images, AGRICULTURE, CROP growth, AGRICULTURAL development
- Abstract
Agricultural development is one of the most essential needs worldwide. In Norway, the primary foundation of grain production is based on geological and biological features. Existing research is limited to regional-scale yield predictions using artificial intelligence (AI) models, which provide a holistic overview of crop growth. In this paper, the authors propose detecting several field-scale crop types and using this analysis to predict yield production early in the growing season. In this study, the authors utilise a corpus of multi-temporal satellite images together with meteorological, geographical, and grain production data. The authors extract relevant vegetation indices from the satellite images. Furthermore, the authors use field-area-specific features to build a field-based crop type classification model. The proposed model, consisting of a time-distributed network and a gated recurrent unit, can efficiently classify crop types with an accuracy of 70%. In addition, the authors show that attention-based multiple-instance learning models can learn from semi-labelled agricultural data and thus allow realistic early in-season predictions for farmers. [ABSTRACT FROM AUTHOR]
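The attention-based multiple-instance idea can be shown compactly: a field (bag) is a set of instance feature vectors, and learned attention weights decide which instances drive the bag-level prediction. Dimensions and weights below are random stand-ins, not the trained model:

    import numpy as np

    rng = np.random.default_rng(0)
    bag = rng.normal(size=(12, 4))        # 12 instances with 4 features each
    V = rng.normal(size=(4, 8))           # attention parameters (stand-ins)
    w = rng.normal(size=(8,))

    scores = np.tanh(bag @ V) @ w         # one relevance score per instance
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights
    bag_embedding = alpha @ bag           # weighted average of the instances
    print("attention weights:", alpha.round(3))
    print("bag embedding:", bag_embedding.round(3))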
- Published
- 2024
- Full Text
- View/download PDF
248. An Interpretable Modular Deep Learning Framework for Video-Based Fall Detection.
- Author
-
Dutt, Micheal, Gupta, Aditya, Goodwin, Morten, and Omlin, Christian W.
- Subjects
DEEP learning, CONVOLUTIONAL neural networks, VIDEO monitors, FOURIER transforms, MEDICAL personnel, OLDER people
- Abstract
Falls are a major risk factor for older adults, increasing morbidity and healthcare costs. Video-based fall-detection systems offer crucial real-time monitoring and assistance. Yet, their deployment faces challenges such as maintaining privacy, reducing false alarms, and providing understandable outputs for healthcare providers. This paper introduces an innovative automated fall-detection framework that includes a Gaussian blur module for privacy preservation, an OpenPose module for precise pose estimation, a short-time Fourier transform (STFT) module to selectively capture frames with significant motion, and a computationally efficient one-dimensional convolutional neural network (1D-CNN) classification module designed to classify these frames. Additionally, integrating a gradient-weighted class activation mapping (GradCAM) module enhances the system's explainability by visually highlighting the key-point movements behind classification decisions. Modular flexibility in our system allows customization to meet specific privacy and monitoring needs, enabling the activation or deactivation of modules according to the operational requirements of different healthcare settings. This combination of STFT and 1D-CNN ensures fast and efficient processing, which is essential in healthcare environments where real-time response and accuracy are vital. We validated our approach across multiple datasets, including the Multiple Cameras Fall Dataset (MCFD), the UR fall dataset, and the NTU RGB+D Dataset, demonstrating high accuracy in detecting falls and providing interpretable results. [ABSTRACT FROM AUTHOR]
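The STFT-based frame selection can be illustrated on a synthetic one-dimensional motion signal (for example, frame-to-frame key-point displacement); the signal, window sizes and threshold are assumptions for the sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    motion = rng.normal(0, 0.05, 300)                  # a mostly still sequence...
    motion[150:170] += np.sin(np.linspace(0, 12, 20))  # ...with one burst (a fall)

    win, hop, energies = 32, 16, []
    for start in range(0, len(motion) - win, hop):
        frame = motion[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(frame))          # one STFT column
        energies.append((start, spectrum[1:].sum()))   # motion energy, minus DC

    threshold = 3 * np.median([e for _, e in energies])
    selected = [s for s, e in energies if e > threshold]
    print("windows forwarded to the 1D-CNN start at frames:", selected)

Only the flagged windows then need to be classified, which is one plausible source of the computational efficiency the abstract emphasizes.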
- Published
- 2024
- Full Text
- View/download PDF
249. Towards misinformation mitigation on social media: novel user activity representation for modeling societal acceptance.
- Author
-
Abouzeid, Ahmed, Granmo, Ole-Christoffer, Goodwin, Morten, and Webersik, Christian
- Published
- 2024
- Full Text
- View/download PDF
250. The Role of Artificial Intelligence Techniques in Achieving Creativity in Radio and Television Production: A Study of Media Professionals and Experts.
- Author
-
رشا محمد عاطف محمود الشيخ
- Subjects
ARTIFICIAL intelligence, CREATIVE ability, RADIO & television towers, TELEVISION, RADIO broadcasting, JOURNALISTS
- Abstract
Copyright of Journal of Public Relations Research Middle East / Magallat Bhut Al-Laqat Al-Amh - Al-Srq Al-Aust is the property of Egyptian Public Relation Association and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024