3,059 results
Search Results
2. Process versus product, abstraction, and formalism: A personal perspective (position paper)
- Author
-
Ataru T. Nakagawa
- Subjects
Software development process ,Theoretical computer science ,Computer science ,Process (engineering) ,Formalism (philosophy) ,Locale (computer hardware) ,Perspective (graphical) ,Position paper ,Engineering ethics ,Product (category theory) ,Abstraction - Abstract
A local workshop series, called the Japanese Software Process Workshop (JSPW), has been held annually, the last (4th) in February 1992. The original motivation behind this workshop series was the provocation of Osterweil [5] and the subsequent reinvigorated interest in software process in general, and in software process description in particular. One of the features of the series is that it attracted participants from both academia and industry and, in consequence, fostered fruitful interactions between these normally separated communities. The author has attended this workshop series from the outset and gained some insights into the trends of opinion in our locale. This paper presents a personal perspective on the current issues in this field, in view of the discussions held in the workshop series.
- Published
- 2005
3. Transforming Medical Imaging: The First SCAR TRIP™ Conference: A Position Paper from the SCAR TRIP™ Subcommittee of the SCAR Research and Development Committee
- Author
-
Katherine P. Andriole and Richard L. Morin
- Subjects
Breakout ,Imaging informatics ,Radiological and Ultrasound Technology ,business.industry ,Computer science ,Computer Applications ,Usability ,Translational research ,Data science ,Article ,Computer Science Applications ,Medical imaging ,System integration ,Position paper ,Radiology, Nuclear Medicine and imaging ,Engineering ethics ,business - Abstract
The First Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP™) Conference and Workshop, “Transforming Medical Imaging” was held on January 31–February 1, 2005 in Bethesda, MD. Representatives from all areas of medical and scientific imaging—academia, research, industry, and government agencies—joined together to discuss the future of medical imaging and potential new ways to manage the explosion in numbers, size, and complexity of images generated by today's continually advancing imaging technologies. The two-day conference included plenary, scientific poster, and breakout sessions covering six major research areas related to TRIP™. These topic areas included human perception, image processing and computer-aided detection, data visualization, image set navigation and usability, databases and systems integration, and methodology evaluation and performance validation. The plenary presentations provided a general status review of each broad research field to use as a starting point for discussion in the breakout sessions, with emphasis on specific topics requiring further study. The goals for the breakout sessions were to define specific research questions in each topic area, to list the impediments to carrying out research in these fields, to suggest possible solutions and near- and distant-future directions for each general topic, and to report back to the general session. The scientific poster session provided another mechanism for presenting and discussing TRIP™-related research. This report summarizes each plenary and breakout session, and describes the group recommendations as to the issues facing the field, major impediments to progress, and the outlook for radiology in the short and long term. The conference helped refine the definition of the SCAR TRIP™ Initiative and the problems facing radiology with respect to the dramatic growth in medical imaging data, and it underscored a present and future need for the support of interdisciplinary translational research in radiology bridging bench-to-bedside. SCAR will continue to fund research grants exploring TRIP™ solutions. In addition, the organization proposes providing an infrastructure to foster collaborative research partnerships between SCAR corporate and academic members in the form of a TRIP™ Imaging Informatics Network (TRIPI2N).
- Published
- 2006
4. Which multi-attribute utility instruments are recommended for use in cost-utility analysis? A review of national health technology assessment (HTA) guidelines
- Author
-
Matthew Kennedy-Martin, Jan J. V. Busschbach, Tessa Kennedy-Martin, Kristina S. Boye, Michael Herdman, Wolfgang Greiner, Mandy van Reenen, Bernhard Slaap, and Psychiatry
- Subjects
Technology Assessment, Biomedical ,Computer science ,Cost-Benefit Analysis ,Health Status ,Economics, Econometrics and Finance (miscellaneous) ,Guidelines ,i11, i18 ,Technology assessment ,Pharmacoeconomics ,Utility ,Surveys and Questionnaires ,Humans ,Multi-attribute utility ,Economics, Pharmaceutical ,Health technology assessment ,Reimbursement ,Original Paper ,Cost–utility analysis ,Actuarial science ,Health economics ,Health Policy ,Cost-utility analysis ,Multi-attribute utility instruments ,Guideline ,Preference ,Quality of Life - Abstract
Background Several multi-attribute utility instruments (MAUIs) are available from which utilities can be derived for use in cost-utility analysis (CUA). This study provides a review of recommendations from national health technology assessment (HTA) agencies regarding the choice of MAUIs. Methods A list was compiled of HTA agencies that provide or refer to published official pharmacoeconomic (PE) guidelines for pricing, reimbursement or market access. The guidelines were reviewed for recommendations on the indirect calculation of utilities and categorized as: expressing a preference for a specific MAUI; providing no MAUI preference, but providing examples of suitable MAUIs and/or recommending the use of national value sets; and recommending CUA, but not providing examples of MAUIs. Results Thirty-four PE guidelines were included for review. The MAUIs named for use in CUA were: EQ-5D (n = 29 guidelines), SF-6D (n = 11), HUI (n = 10), QWB (n = 3), AQoL (n = 2), and CHU9D (n = 1). The EQ-5D was a preferred MAUI in 15 guidelines. Alongside the EQ-5D, the HUI was a preferred MAUI in one guideline, with DALY disability weights mentioned in another. Fourteen guidelines expressed no preference for a specific MAUI, but provided examples: EQ-5D (n = 14), SF-6D (n = 11), HUI (n = 9), QWB (n = 3), AQoL (n = 2), CHU9D (n = 1). Of those that did not specify a particular MAUI, 12 preferred calculating utilities using national preference weights. Conclusions The EQ-5D, HUI, and SF-6D were the three MAUIs most frequently mentioned in guidelines. The most commonly cited MAUI (in 85% of PE guidelines) was the EQ-5D, either as a preferred MAUI or as an example of a suitable MAUI for use in CUA in HTA.
- Published
- 2020
5. An Automatic Identification and Resolution System for Protein-Related Abbreviations in Scientific Papers
- Author
-
Paolo Atzeni, Fabio Polticelli, Daniele Toti, Atzeni, Paolo, Polticelli, Fabio, and Toti, D.
- Subjects
Set (abstract data type) ,Identification (information) ,Information retrieval ,Recall ,Process (engineering) ,Computer science ,Compound ,abbreviations ,Scalability ,Settore ING-INF/05 - SISTEMI DI ELABORAZIONE DELLE INFORMAZIONI ,Resolution (logic) ,Personalization - Abstract
We propose a methodology to identify and resolve protein-related abbreviations found in the full texts of scientific papers, as part of a semi-automatic process implemented in our PRAISED framework. The identification of biological acronyms is carried out via an effective syntactical approach, by taking advantage of lexical clues and using mostly domain-independent metrics, resulting in considerably high recall as well as extremely low execution time. The subsequent abbreviation resolution uses both syntactical and semantic criteria in order to match an abbreviation with its potential explanation, as discovered among a number of contiguous words proportional to the abbreviation's length. We have tested our system against the Medstract Gold Standard corpus and a relevant set of manually annotated PubMed papers, obtaining significant results and high performance levels, while at the same time allowing for great customization, lightness and scalability.
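The abstract describes the matching idea only at a high level. As a rough illustration (not the PRAISED implementation; the pattern, window size and matching rule below are assumptions), one can scan the words preceding a parenthesized abbreviation, within a window proportional to its length, for a phrase whose initials spell the abbreviation:

```python
import re

def find_abbreviations(text):
    """Find parenthesized abbreviation candidates, e.g. 'tumor necrosis factor (TNF)'."""
    return [(m.start(), m.group(1)) for m in re.finditer(r"\(([A-Z][A-Za-z0-9-]{1,9})\)", text)]

def resolve(text, pos, abbrev, window_factor=2):
    """Search the words preceding the abbreviation, within a window proportional to its
    length, for a phrase whose word initials spell out the abbreviation."""
    words = text[:pos].rstrip(" (").split()
    window = words[-(len(abbrev) * window_factor):]
    letters = [c.lower() for c in abbrev if c.isalpha()]
    for start in range(len(window)):
        candidate = window[start:]
        if len(candidate) == len(letters) and all(
            w[0].lower() == l for w, l in zip(candidate, letters)
        ):
            return " ".join(candidate)
    return None

text = "Binding of tumor necrosis factor (TNF) to its receptor was measured."
for pos, abbrev in find_abbreviations(text):
    print(abbrev, "->", resolve(text, pos, abbrev))   # TNF -> tumor necrosis factor
```

A real resolver would add semantic checks (as the paper does) for abbreviations whose letters do not line up one-per-word.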
- Published
- 2011
6. Overlaying Paper Maps with Digital Information Services for Tourists
- Author
-
Beat Signer, Moira C. Norrie, Informatics and Applied Informatics, and Web and Information System Engineering
- Subjects
World Wide Web ,Multimedia ,Digital mapping ,Computer science ,Information system ,Overlay ,computer.software_genre ,computer ,Tourism - Abstract
Despite the increasing availability of various forms of digital maps and guides, paper still prevails as the main information medium used by tourists during city visits. The authors describe how recent technologies for digitally augmented paper maps can be used to develop interactive paper maps that provide value-added services for tourists through digital overlays. An initial investigation into the use of these maps to support visitors to the Edinburgh festivals is also presented.
- Published
- 2005
7. Modeling of fourdrinier paper making machines and basis weight control
- Author
-
Masao Murata
- Subjects
Basis (linear algebra) ,Computer science ,Chemical pulp ,Control engineering ,Weight control - Abstract
In this paper, the modeling of Fourdrinier paper-making machines for basis weight control is discussed using a new approach in which attention is paid to the retention of solid materials on the wire.
- Published
- 2005
8. Introduction to the Paper by Eliot Siegel and Bruce Reiner, 'Work Flow Redesign: The Key to Success When Using PACS'
- Author
-
Eliot L. Siegel
- Subjects
Modality (human–computer interaction) ,Radiological and Ultrasound Technology ,Workstation ,Operations research ,Computer science ,Interoperability ,Article ,Computer Science Applications ,law.invention ,Engineering management ,Picture archiving and communication system ,Workflow ,law ,Order (business) ,Key (cryptography) ,Information system ,Radiology, Nuclear Medicine and imaging - Abstract
AS AN EARLY PICTURE ARCHIVING AND COMMUNICATION SYSTEM (PACS) adopter, I am often asked about “the most important lesson that I’ve learned about filmless radiology.” Although we have learned many things during the past decade, the most important lesson has undoubtedly been that the purchase of a PACS provides an opportunity to re-engineer and streamline the inefficient manual workflow found in most conventional imaging departments. In my experience, many imaging departments have used PACS as an electronic substitute for film, completely emulating all aspects of a conventional department. Such departments continue to push individual studies from a specific acquisition device (such as a CT scanner) to a specific workstation in a manner similar to hanging films from that modality on a specific film alternator. They continue to enter patient information either electronically or on paper multiple times on multiple information systems in a manner analogous to paper and index cards, rather than having these systems communicate with each other. It is therefore not surprising that there have been mixed results with regard to the impact of PACS on departmental productivity and cost savings. It has been my experience that departments that have concentrated on redesign of workflow using integrated systems that communicate with each other have been able to achieve the greatest gains in savings and efficiency. In 1991, when we purchased our PACS for the Baltimore VA Medical Center, we were required to create custom interfaces for each of these information systems in order to achieve interoperability. Today, customers can take advantage of the Integrating the Healthcare Enterprise (IHE) effort to minimize the need to reinvent this wheel and to optimize departmental and/or hospital workflow for imaging. This paper discusses our analysis which compared our workflow processes before and after implementation of an enterprise-wide PACS.
- Published
- 2003
9. Panel: 'Why are object-oriented folks producing systems, while deductive folks are producing papers?'
- Author
-
François Bancilhon, Constantino Thanos, and Dennis Tsichritzis
- Subjects
Object-oriented programming ,Computer science ,Programming language ,computer.software_genre ,computer - Published
- 2005
10. Introduction to the Paper by Seong K. Mun, Ph.D., et al, 'Experience with Image Management Networks at Three Universities: Is the Cup Half-empty or Half-full?'
- Author
-
Seong Ki Mun
- Subjects
Telemedicine ,Radiological and Ultrasound Technology ,Workstation ,Process (engineering) ,Computer science ,business.industry ,Network security ,Mature technology ,Information technology ,Network topology ,Data science ,Article ,Computer Science Applications ,law.invention ,Picture archiving and communication system ,law ,Radiology, Nuclear Medicine and imaging ,business - Abstract
OUR RESEARCH APPROACH to the development of a picture archiving and communication system (PACS) has been at the system level. While component technology was developed by various investigators, we felt that our unique role was to look at the PACS from the perspective of network and clinical operations as well as the management of the insertion of technology into a complex environment. One of the first questions we tried to address was, “How should one describe PACS?” The paper reprinted here was an early attempt to describe PACS at an operational level without being limited to specific component technology, as we knew the technology would eventually change. This approach has laid the foundation for the performance specifications of the Medical Diagnostic Imaging System that the Department of Defense adopted in the 1990s. During that decade many new efforts were directed toward developing quantitative requirements for PACS performance, especially to meet the needs of radiologists. Whereas workstation performance and network topologies were topics of intense discussion and development in various quarters, our focus remained on system-level performance and technology deployment. Today PACS is a mature technology. It is difficult to know whether any aspects of PACS development influenced the advances in generic computer and information technology. It is reasonable to assume, however, that the PACS initiative has had a significant impact on advancing imaging and image processing technologies. Certainly the PACS efforts in the radiology community have laid the foundation for filmless electronic hospitals and telemedicine. Many major technical issues have been resolved, but as applications expand and the landscape of usage changes, new issues arise. The questions of network topology, workstation performance, image quality on electronic displays, digital radiography, and clinical acceptance are no longer challenges for PACS today. But the integration of PACS with the radiology information system and other enterprise-wide information and imaging systems continues to pose implementation difficulties. Network security, patient privacy, and health information assurance are a few of the new requirements that the PACS community must address. Through participation in the evolution of PACS, we have experienced firsthand the old lessons associated with the adoption of new technology. This process must begin with proof of the merit of the technology; in addition, it requires overcoming entrenched habits and the self-interest associated with preserving old technology and old work rules. For PACS, the process of maturation has taken more than 10 years. It was successful because the technology solved difficult problems of managing large amounts of complex data for many different stakeholders. The next challenges may be the development of new research programs and patient care capabilities by accessing the vast image databases that are accumulating at PACS hospitals.
- Published
- 2003
11. Introduction to Paper by Sridhar B. Seshadri, MSEE, MBA, et al, 'Prototype Medical Image Management System (MIMS) at the University of Pennsylvania: Software Design Considerations'
- Author
-
Satjeet Khalsa, Inna Brikman, Sridhar B. Seshadri, Frans van der Voorde, and Ronald Arenson
- Subjects
Radiological and Ultrasound Technology ,Standardization ,business.industry ,Computer science ,Information technology ,Article ,Computer Science Applications ,DICOM ,Engineering management ,Software ,Workflow ,Management system ,Component-based software engineering ,Software design ,Radiology, Nuclear Medicine and imaging ,business - Abstract
IT IS ILLUMINATING and perhaps a little humbling to review a paper written over 15 years ago and compare it to the state-of-the-art in PACS today. Clearly, there have been astounding improvements in the core technology: increased processing power, higher communications bandwidth, cost-effective storage capacity, and superb display technologies. However, the authors’ view (in 1987) that the industry (vendors and customers) needs to focus more on software and systems issues still rings true. Standardization of hardware and software components has come a long way with the digital imaging and communications in medicine (DICOM) and Integrating the Healthcare Enterprise (IHE) standards; without these yeoman efforts, PACS would still be in the dark ages. Also, PACS appears to have “graduated” from being a departmental solution to becoming more and more integrated into the mainstream clinical information systems from information technology providers. Despite these great achievements, I think the industry needs to invest more thought and effort into unleashing the power of PACS with revolutionary workflow and process-change around the technology that will help users realize greater benefits. Interestingly, Louis Gerstner Jr., in an interview about his new book, Who Says Elephants Can’t Dance? says (about the computer industry) “. . . the process of integrating this technology and achieving the benefit is unbelievably painful for companies. The industry has been all about faster, faster, more function, more function. . . .” It appears, at least in this regard, that PACS shares the challenges of the rest of the computer industry!
- Published
- 2003
12. Introduction to Paper by G. James Blaine, D.Sc. et al, 'PACS Workbench at Mallinckrodt Institute of Radiology—1983'
- Author
-
G. James Blaine
- Subjects
medicine.medical_specialty ,Radiological and Ultrasound Technology ,business.operation ,Computer science ,business.industry ,Mallinckrodt ,Modular design ,Communications system ,Article ,Computer Science Applications ,DICOM ,Workflow ,Management system ,medicine ,Workbench ,Radiology, Nuclear Medicine and imaging ,Radiology ,business ,Implementation - Abstract
IT HAS BEEN ASTOUNDING to witness the two-decade computing evolution, which increased processing power, communications bandwidth, and storage capacity while reducing costs. Most of the technology limitations that challenged our early development of picture archiving and communication systems (PACS) have been dissolved. Our quest for modular components has been partially facilitated by the development of DICOM v3 and, more recently, by the Radiological Society of North America (RSNA) and the Healthcare Information and Management Systems Society (HIMSS) initiative to address the integration of the healthcare environment (IHE). While many of the commercial systems are still limited in their ability to be tailored to support the radiologist’s need for specialized workflow, adoption of the IHE technical framework holds promise. As in the PACS Workbench experiments, we still find it essential to have tools to measure image flow, queue arrival times, and queue departure times in order to understand the bounds and performance of our installed commercial systems. The dynamic display of performance metrics, missing in many systems today, continues to be required to enable the “measured, scientific approach” that we sought in our original implementations.
- Published
- 2003
13. Introduction to the Paper by H. U. Lemke et al, 'Applications of Picture Processing, Image Analysis and Computer Graphics Techniques to Cranial CT Scans'
- Author
-
Heinz U. Lemke
- Subjects
Computer graphics ,Radiological and Ultrasound Technology ,Computer science ,Cranial ct ,Computer graphics (images) ,Picture processing ,Radiology, Nuclear Medicine and imaging ,Data mining ,computer.software_genre ,computer ,Article ,Computer Science Applications ,Image (mathematics) - Published
- 2003
14. Introduction to paper by Spencer B. Gay, MD et al, 'Processes Involved in Reading Imaging Studies: Workflow Analysis and Implications for Workstation Development'
- Author
-
Samuel J. Dwyer and Spencer B. Gay
- Subjects
World Wide Web ,Radiological and Ultrasound Technology ,Workstation ,Workflow analysis ,law ,Computer science ,Reading (process) ,media_common.quotation_subject ,Radiology, Nuclear Medicine and imaging ,Article ,Computer Science Applications ,law.invention ,media_common - Published
- 2002
15. Network immunization and virus propagation in email networks: experimental evaluation and analysis
- Author
-
Jiming Liu, Chao Gao, and Ning Zhong
- Subjects
Operations research ,Computer science ,Network topology ,computer.software_genre ,Immunization strategies ,Virus ,Electronic mail ,Computer virus ,Enron ,Betweenness centrality ,Human dynamics ,Artificial Intelligence ,Virus propagation ,Regular Paper ,Email networks ,business.industry ,Immunization (finance) ,Human-Computer Interaction ,Key factors ,Hardware and Architecture ,business ,computer ,Software ,Information Systems ,Computer network - Abstract
Network immunization strategies have emerged as possible solutions to the challenges of virus propagation. In this paper, an existing interactive model is introduced and then improved in order to better characterize the way a virus spreads in email networks with different topologies. The model is used to demonstrate the effects of a number of key factors, notably node degree and betweenness. Experiments are then performed to examine how the structure of a network and human dynamics affect virus propagation. The experimental results have revealed that a virus spreads in two distinct phases and have shown that the most efficient immunization strategy is the node-betweenness strategy. Moreover, those results have also explained, from the perspective of human dynamics, why old viruses can still survive in today's networks.
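The node-betweenness strategy that the experiments single out can be illustrated with a small simulation. The sketch below uses networkx and a simple probabilistic spreading process as a stand-in for the paper's interactive email-virus model (the graph, spreading rule and parameters are assumptions, not the authors' setup):

```python
import random
import networkx as nx

def immunize_by_betweenness(G, k):
    """Return the k nodes with the highest betweenness centrality (the node-betweenness strategy)."""
    bc = nx.betweenness_centrality(G)
    return {n for n, _ in sorted(bc.items(), key=lambda kv: kv[1], reverse=True)[:k]}

def spread(G, immune, p=0.3, steps=20, seed=0):
    """Very simple probabilistic spreading over non-immune nodes, as a stand-in for an email-virus model."""
    rng = random.Random(seed)
    infected = {next(n for n in G if n not in immune)}
    for _ in range(steps):
        new = {v for u in infected for v in G[u]
               if v not in immune and v not in infected and rng.random() < p}
        if not new:
            break
        infected |= new
    return len(infected)

G = nx.barabasi_albert_graph(500, 3, seed=1)   # a scale-free stand-in for an email network
immune = immunize_by_betweenness(G, k=25)
print("infected without immunization:", spread(G, set()))
print("infected with betweenness immunization:", spread(G, immune))
```

Swapping the selection rule (highest degree, random nodes) into `immunize_by_betweenness` is the usual way such strategies are compared.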
- Published
- 2010
16. Some aspects of the anemia of chronic disorders modeled and analyzed by petri net based approach
- Author
-
Piotr Formanowicz, Dorota Formanowicz, Andrea Sackmann, Adam Kozak, and Jacek Blazewicz
- Subjects
Computer science ,Anemia ,Iron ,Hepcidin ,Bioengineering ,Transferrin receptor ,Computational biology ,Models, Biological ,Iron homeostasis ,Hepcidins ,Receptors, Transferrin ,medicine ,Cluster Analysis ,Homeostasis ,Humans ,Computer Simulation ,Erythropoietin ,Inflammation ,Original Paper ,biology ,Modeling ,General Medicine ,Petri net ,medicine.disease ,Chronic disorders ,Chronic disease ,Immunology ,Chronic Disease ,biology.protein ,Petri net theory ,Biotechnology ,Antimicrobial Cationic Peptides - Abstract
Anemia of chronic disorders is a very important phenomenon, and iron is a crucial factor in this complex process. To better understand this process and its influence on other factors, we have built a mathematical model of human body iron homeostasis that reflects, as exactly as possible, the metabolism of iron in the case of anemia and inflammation. The model has been formulated in the language of Petri net theory, which allows for its simulation and precise analysis. The results obtained from the analysis of the model’s behavior, concerning the influence of anemia and inflammation on transferrin receptor and hepcidin concentration changes, are valuable complements to the knowledge following from clinical research. This analysis is one of the first attempts to investigate the properties and behavior of a not fully understood biological system on the basis of its Petri net-based model.
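The abstract notes that formulating the model as a Petri net allows simulation and analysis. The toy token game below (places, transitions and markings are invented for illustration; this is not the authors' iron-homeostasis net) shows the basic firing mechanics such a simulation rests on:

```python
# Minimal Petri net token-game simulation (illustrative only).
# A transition is enabled when every input place holds enough tokens; firing moves tokens to outputs.

def enabled(marking, transition):
    return all(marking[p] >= n for p, n in transition["in"].items())

def fire(marking, transition):
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical toy net: dietary iron is absorbed into plasma, then consumed by erythropoiesis.
marking = {"diet_iron": 3, "plasma_iron": 0, "erythrocytes": 0}
transitions = [
    {"name": "absorb",  "in": {"diet_iron": 1},   "out": {"plasma_iron": 1}},
    {"name": "consume", "in": {"plasma_iron": 1}, "out": {"erythrocytes": 1}},
]

for step in range(10):
    fired = next((t for t in transitions if enabled(marking, t)), None)
    if fired is None:
        break
    marking = fire(marking, fired)
    print(step, fired["name"], marking)
```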
- Published
- 2011
17. Software engineering in a BS in computer science
- Author
-
Richard Louis Weis
- Subjects
Social software engineering ,AP Computer Science ,business.industry ,Computer science ,Informatics engineering ,Software construction ,Personal software process ,Position paper ,Computer science curriculum ,Software requirements ,Software engineering ,business ,GeneralLiterature_REFERENCE(e.g.,dictionaries,encyclopedias,glossaries) - Abstract
This position paper outlines the rationale for and the approach used at the University of Hawaii at Hilo to further augment the ACM/IEEE computer science curriculum for software engineering considerations.
- Published
- 2006
18. Caching Trust Rather Than Content
- Author
-
Mahadev Satyanarayanan
- Subjects
CPU cache ,Computer science ,Wireless network ,Compromise ,media_common.quotation_subject ,Distributed computing ,Champion ,Wearable computer ,Storage management ,Computer security ,computer.software_genre ,Application domain ,Microcomputer ,Server ,General Earth and Planetary Sciences ,Position paper ,Data content ,Cache ,Latency (engineering) ,Mobile device ,computer ,General Environmental Science ,media_common - Abstract
Caching, one of the oldest ideas in computer science, often improves performance and sometimes improves availability [1, 3]. Previous uses of caching have focused on data content. It is the presence of a local copy of data that reduces access latency and masks server or network failures. This position paper puts forth the idea that it can sometimes be useful to merely cache knowledge sufficient to recognize valid data. In other words, we do not have a local copy of a data item, but possess a substitute that allows us to verify the content of that item if it is offered to us by an untrusted source. We refer to this concept as caching trust. Mobile computing is a champion application domain for this concept. Wearable and handheld computers are constantly under pressure to be smaller and lighter. However, the potential volume of data that is accessible to such devices over a wireless network keeps growing. Something has to give. In this case, it is the assumption that all data of potential interest can be hoarded on the mobile client [1, 2, 6]. In other words, such clients have to be prepared to cope with cache misses during normal use. If they are able to cache trust, then any untrusted site in the fixed infrastructure can be used to stage data for servicing cache misses; one does not have to go back to a distant server, nor does one have to compromise security. The following scenario explores this in more detail.
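A minimal sketch of the "cache trust, not content" idea (hypothetical names; not code from the paper): the client keeps only a cryptographic digest of each item, so any untrusted staging site can serve the bytes and the client can still verify them:

```python
import hashlib

class TrustCache:
    """Cache digests of data items rather than the items themselves (illustrative sketch)."""
    def __init__(self):
        self._digests = {}

    def remember(self, name, data: bytes):
        # store only the SHA-256 digest, not the (possibly large) content
        self._digests[name] = hashlib.sha256(data).hexdigest()

    def verify(self, name, data: bytes) -> bool:
        # accept content from an untrusted staging site only if it matches the cached digest
        return self._digests.get(name) == hashlib.sha256(data).hexdigest()

cache = TrustCache()
cache.remember("report.pdf", b"original bytes of the report")

print(cache.verify("report.pdf", b"original bytes of the report"))   # True: untrusted copy is valid
print(cache.verify("report.pdf", b"tampered bytes"))                 # False: reject and refetch from a trusted server
```

The digest is tiny compared with the data it vouches for, which is what makes it attractive to hoard on a small mobile device.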
- Published
- 2006
19. Invariant Processing and Occlusion Resistant Recognition of Planar Shapes
- Author
-
Alfred M. Bruckstein
- Subjects
Planar ,Computer science ,business.industry ,Short paper ,Occlusion ,Computer vision ,Artificial intelligence ,Invariant (mathematics) ,Fixed point ,business ,Partial occlusion ,Smoothing - Abstract
This short paper surveys methods for planar shape smoothing and processing and planar shape recognition invariant under viewing distortions and even partial occlusions. It is argued that all the results available in the literature on these problems implicitly follow from successfully addressing two basic problems: invariant location of points with respect to a given shape (a given set of points in the plane) and invariant displacement of points with regard to the given shape.
- Published
- 2005
20. The KCM system: Speeding-up logic programming through hardware support
- Author
-
Jacques Noyé
- Subjects
Computer science ,business.industry ,Programming language ,Short paper ,computer.software_genre ,Prolog ,Logic synthesis ,Software ,Computer architecture ,business ,Logic Control ,computer ,Logic programming ,Computer hardware ,Logic optimization ,Register-transfer level ,computer.programming_language - Abstract
The aim of the KCM (Knowledge Crunching Machine) project was to study how to speed up Prolog, and more generally logic programming, through hardware support at the processor level. An experimental approach was taken, which resulted in the design and implementation of a real system, both hardware and software. This short paper outlines the key features of the system as well as the main conclusions that can be drawn from the project.
- Published
- 2005
21. The first computer
- Author
-
Michael G. Williams
- Subjects
Computer science ,Paper tape ,Mechanical engineering ,Magnetic wires - Published
- 2006
22. The Office of the Past
- Author
-
Steven M. Seitz, Maneesh Agrawala, and Jiwon Kim
- Subjects
Paper document ,Computer science ,business.industry ,Scale-invariant feature transform ,Scene graph ,Computer vision ,Artificial intelligence ,business - Published
- 2005
23. Short term production scheduling of the pulp mill — A decentralized optimization approach
- Author
-
Kauko Leiviska
- Subjects
Pulp mill ,Waste management ,Decentralized optimization ,business.industry ,Computer science ,Storage tank ,Production control ,Production schedule ,Paper mill ,business ,Term (time) - Published
- 2005
24. Contaminant Source Identification in Aquifers: A Critical View
- Author
-
J. Jaime Gómez-Hernández and Teng Xu
- Subjects
geography ,INGENIERIA HIDRAULICA ,geography.geographical_feature_category ,Forgetting ,Computer science ,media_common.quotation_subject ,Simulation-optimization ,Bayesian approach ,Heuristic approaches ,Aquifer ,Field (geography) ,Identification (information) ,Mathematics (miscellaneous) ,Risk analysis (engineering) ,Surrogate models ,Machine learning ,General Earth and Planetary Sciences ,Sophistication ,Problem solution ,media_common ,Backward tracking - Abstract
Forty years and 157 papers later, research on contaminant source identification has grown exponentially in volume but seems to have stalled with respect to advancing towards a solution of the problem and its field application. This paper presents a historical evolution of the subject, highlighting its major advances. It also shows how the subject has grown in sophistication regarding the solution of the core problem (the source identification), forgetting that, from a practical point of view, such identification is worthless unless it is accompanied by a joint identification of the other uncertain parameters that characterize flow and transport in aquifers. The first author wishes to acknowledge the financial contribution of the Spanish Ministry of Science and Innovation through Project No. PID2019-109131RB-I00, and the second author acknowledges the financial support from the Fundamental Research Funds for the Central Universities (B200201015) and the Jiangsu Specially-Appointed Professor Program from the Jiangsu Provincial Department of Education (B19052). Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
- Published
- 2022
25. Improving SIEM for critical SCADA water infrastructures using machine learning
- Author
-
David Brosset, Hanan Hindy, Amar Seeam, Ethan Bayne, Xavier Bellekens, Katsikas, Sokratis K., Cuppens, Frédéric, Cuppens, Nora, Lambrinoudakis, Costas, Antón, Annie, Gritzalis, Stefanos, Mylopoulos, John, Kalloniatis, Christos, Institut de Recherche de l'Ecole Navale (IRENAV), Université de Bordeaux (UB)-Institut Polytechnique de Bordeaux-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)-Arts et Métiers Sciences et Technologies, HESAM Université (HESAM)-HESAM Université (HESAM), University of Mauritius, and Middlesex University
- Subjects
QA75 ,021110 strategic, defence & security studies ,Spoofing attack ,Computer science ,Process (engineering) ,Event (computing) ,business.industry ,Anomaly (natural sciences) ,0211 other engineering and technologies ,02 engineering and technology ,Machine learning ,computer.software_genre ,SCADA ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Anomaly detection ,[INFO]Computer Science [cs] ,Artificial intelligence ,business ,computer ,Implementation - Abstract
Network Control Systems (NCS) have been used in many industrial processes. They aim to reduce the human-factor burden and to efficiently handle the complex processes and communication of those systems. Supervisory control and data acquisition (SCADA) systems are used in industrial, infrastructure and facility processes (e.g. manufacturing, fabrication, oil and water pipelines, building ventilation, etc.). Like other Internet of Things (IoT) implementations, SCADA systems are vulnerable to cyber-attacks; therefore, robust anomaly detection is a major requirement. However, building an accurate anomaly detection system is not an easy task, due to the difficulty of differentiating between cyber-attacks and internal system failures (e.g. hardware failures). In this paper, we present a model that detects anomaly events in a water system controlled by SCADA. Six machine learning techniques have been used in building and evaluating the model. The model classifies different anomaly events including hardware failures (e.g. sensor failures), sabotage and cyber-attacks (e.g. DoS and spoofing). Unlike other detection systems, our proposed work helps accelerate the mitigation process by notifying the operator with additional information when an anomaly occurs. This additional information includes the probability and confidence level of the event(s) occurring. The model is trained and tested using a real-world dataset.
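The abstract does not name the six machine learning techniques or the features used. The sketch below, with scikit-learn and synthetic stand-in data, only illustrates the overall shape of such a classifier, including the per-event probability reported to the operator (model choice, features and labels are all assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: rows are time windows of SCADA readings (flow, pressure, valve state, ...);
# labels distinguish normal operation, sensor failure, sabotage and denial-of-service.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = rng.integers(0, 4, size=600)          # stand-in labels; a real labelled dataset would be used instead
labels = ["normal", "sensor_failure", "sabotage", "dos"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Alongside the predicted class, report the class probability so the operator sees a confidence level.
proba = clf.predict_proba(X_te[:1])[0]
pred = int(np.argmax(proba))
print(f"predicted event: {labels[pred]} (confidence {proba[pred]:.2f})")
```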
- Published
- 2019
26. Randomized neighbor discovery protocols with collision detection for static multi-hop wireless ad hoc networks
- Author
-
Carlos T. Calafate, Jaime Lloret, Jose Vicente Sorribes, and Lourdes Peñalver
- Subjects
Computer science ,Wireless ad hoc network ,computer.internet_protocol ,Neighbor discovery ,Throughput ,02 engineering and technology ,Neighbor Discovery Protocol ,0203 mechanical engineering ,Computer Science::Networking and Internet Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Collision detection ,Electrical and Electronic Engineering ,Multihop ,Protocol (science) ,One-hop ,business.industry ,Network packet ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,020302 automobile design & engineering ,020206 networking & telecommunications ,Energy consumption ,Castalia ,Randomized protocols ,Wireless ad hoc networks ,business ,computer ,Computer network - Abstract
Neighbor discovery represents a first step after the deployment of wireless ad hoc networks, since the nodes that form them are equipped with limited-range radio transceivers and typically do not know their neighbors. In this paper, two randomized neighbor discovery approaches based on collision detection for static multi-hop wireless ad hoc networks, called CDH and CDPRR, are presented. The Castalia 3.2 simulator has been used to compare our proposed protocols against two protocols chosen from the literature and used as references: the PRR and the Hello protocol. For the experiments, we chose five metrics: the neighbor discovery time, the number of discovered neighbors, the energy consumption, the throughput, and the ratio of discovered neighbors to packets sent. According to the results obtained through simulation, we can conclude that our randomized proposals outperform both the Hello and PRR protocols in the presence of collisions with regard to all five metrics, for both one-hop and multi-hop scenarios. As a novelty compared to the reference protocols, both proposals allow nodes to discover all their neighbors with probability 1, are based on collision detection, and know when to terminate the neighbor discovery process. Furthermore, qualitative comparisons of the existing protocols and the proposals are available in this paper. Moreover, CDPRR presents better results in terms of time, energy consumption, and the ratio of discovered neighbors to packets sent. We found that both proposals operate under more realistic assumptions. Furthermore, CDH does not need to know the number of nodes in the network. This work has been partially supported by the "Ministerio de Economia y Competitividad" in the "Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia, Subprograma Estatal de Generacion de Conocimiento" within the project under Grant TIN2017-84802-C2-1-P. This work has also been partially supported by the European Union through the ERANETMED (Euromediterranean Cooperation through ERANET joint activities and beyond) project ERANETMED3-227 SMARTWATIR.
- Published
- 2021
27. Numerical procedure to couple shell to solid elements by using Nitsche's method
- Author
-
Kazumi Matsui, Takahiro Yamada, and Takeki Yamamoto
- Subjects
Nitsche's method ,Shell element ,Discretization ,Interface (Java) ,Computer science ,Computational Mechanics ,Shell (structure) ,Ocean Engineering ,02 engineering and technology ,Degrees of freedom (mechanics) ,Deformation (meteorology) ,01 natural sciences ,Domain (mathematical analysis) ,Stress (mechanics) ,0203 mechanical engineering ,Combined modeling ,Penalty method ,0101 mathematics ,Solid element ,Applied Mathematics ,Mechanical Engineering ,Mathematical analysis ,010101 applied mathematics ,Computational Mathematics ,020303 mechanical engineering & transports ,Computational Theory and Mathematics - Abstract
This paper presents a numerical procedure to couple shell to solid elements by using Nitsche’s method. The continuity of displacements can be satisfied approximately with the penalty method, which is effective when the penalty parameter is set to a sufficiently large value. When the continuity of only displacements on the interface is applied between shell and solid elements, an unreasonable deformation may be observed near the interface. In this work, the continuity of the stress vector on the interface is also considered by employing Nitsche’s method, and hence a reasonable deformation can be obtained on the interface. The authors propose two types of shell elements coupled with solid elements in this paper. One of them is the conventional MITC4 shell element, which is one of the most popular elements in engineering applications. This approach shows the capability of discretizing the domain of the structure with different types of elements. The other is the shell element with additional degrees of freedom to represent thickness–stretch developed by the authors. In this approach, the continuity of displacements, including the deformation in the thickness direction, on the interface can be considered. Several numerical examples are presented to examine the fundamental performance of the proposed procedure. The behavior of the proposed simulation model is compared with that of the whole domain discretized with only solid elements. This is a post-peer-review, pre-copyedit version of an article published in "Computational Mechanics". The final authenticated version is available online at: https://doi.org/10.1007/s00466-018-1585-6
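The abstract does not reproduce the variational formulation. As a reminder of the generic structure of Nitsche interface coupling (an assumption about the standard form of the method, not the paper's exact shell-solid formulation), weak continuity of the displacement jump across the coupling interface Γ is typically enforced with consistency, symmetry and stabilization terms:

```latex
% Generic symmetric Nitsche interface terms enforcing [[u]] = u_shell - u_solid = 0 on \Gamma.
% Illustrative structure only; the paper's shell-solid formulation may differ in the choice of
% averaged traction and stabilization parameter.
a_\Gamma(u, v) \;=\;
  - \int_\Gamma \{\sigma(u)\,n\} \cdot [\![ v ]\!] \,\mathrm{d}\Gamma
  \; - \; \int_\Gamma \{\sigma(v)\,n\} \cdot [\![ u ]\!] \,\mathrm{d}\Gamma
  \; + \; \int_\Gamma \frac{\gamma}{h}\, [\![ u ]\!] \cdot [\![ v ]\!] \,\mathrm{d}\Gamma
```

Here {σ(u)n} denotes an averaged interface traction, [[·]] the jump, h a characteristic mesh size and γ a stabilization parameter; dropping the first two terms and taking γ very large recovers the penalty method mentioned in the abstract.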
- Published
- 2019
28. Re-engineering the ant colony optimization for CMP architectures
- Author
-
José M. García and José M. Cecilia
- Subjects
Speedup ,CMP code redesign ,Xeon ,Computer science ,Parallel and distributed ACO ,Ant colony optimization algorithms ,Memory bandwidth ,Parallel computing ,TSP ,Travelling salesman problem ,Theoretical Computer Science ,ARQUITECTURA Y TECNOLOGIA DE COMPUTADORES ,Ant colony optimization ,Hardware and Architecture ,Intel Xeon Phi ,Scalability ,Performance evaluation ,Massively parallel ,Software ,Xeon Phi ,Information Systems - Abstract
The ant colony optimization (ACO) algorithm is inspired by the behavior of real ants and, as a bio-inspired method, its underlying computation is massively parallel by definition. This paper shows re-engineering strategies to migrate the ACO algorithm, applied to the Traveling Salesman Problem, to modern Intel-based multi- and many-core architectures in a step-by-step methodology. The paper provides detailed guidelines on how to optimize the algorithm for intra-node (thread and vector) parallelization, showing the performance scalability along with the number of cores on different Intel architectures and reporting up to a 5.5x speedup factor between the Intel Xeon Phi Knights Landing and the Intel Xeon v2. Moreover, parallel efficiency is provided for all targeted architectures, finding that core load imbalance, memory bandwidth limitations, and NUMA effects on data placement are some of the key factors limiting performance. Finally, a distributed implementation is also presented, reaching up to a 2.96x speedup factor when running the code on 3 nodes over the single-node counterpart version. In the latter case, the parallel efficiency is affected by the synchronization frequency, which also affects the quality of the solution found by the distributed implementation. This work was partially supported by the Fundación Séneca, Agencia de Ciencia y Tecnología de la Región de Murcia under Project 20813/PI/18, and by the Spanish Ministry of Science, Innovation and Universities as well as European Commission FEDER funds under Grants TIN2015-66972-C5-3-R, RTI2018-098156-B-C53, TIN2016-78799-P (AEI/FEDER, UE), and RTC-2017-6389-5. We acknowledge the excellent work done by Victor Montesinos while he was doing a research internship supported by the University of Murcia.
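As background for the re-engineering discussion, the following is a plain sequential ACO tour-construction sketch for the TSP (illustrative only; this is not the paper's vectorized, multi-threaded or distributed code), showing the pheromone-biased construction and evaporation steps that such implementations parallelize:

```python
import random, math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Plain sequential ant colony optimization for the TSP (illustrative sketch only)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]                 # pheromone matrix
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in choices]
                tour.append(rng.choices(choices, weights=weights)[0])
            tours.append(tour)
        # evaporate, then deposit pheromone proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:
            L = tour_length(tour, dist)
            if L < best_len:
                best, best_len = tour, L
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len

# small random symmetric instance
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(12)]
dist = [[math.dist(p, q) or 1e-9 for q in pts] for p in pts]
print(aco_tsp(dist))
```

The per-ant construction loop is independent across ants, which is typically what gets vectorized and spread across threads and nodes in re-engineered ACO codes.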
- Published
- 2020
29. Look ahead to improve QoE in DASH streaming
- Author
-
Ismael de Fez, Román Belda, Pau Arce, and Juan Carlos Guerri
- Subjects
Computer Networks and Communications ,Computer science ,Real-time computing ,Video multimethod assessment fusion (VMAF) ,INGENIERIA TELEMATICA ,Variable bitrate ,Video quality ,Adaptive bitrate streaming ,Dynamic Adaptive Streaming over HTTP ,Dynamic adaptive streaming over HTTP (DASH) ,Adaptive bitrate streaming (ABR) ,Hardware and Architecture ,Video encoding ,Dash ,TEORIA DE LA SEÑAL Y COMUNICACIONES ,Media Technology ,Quality of experience ,Look-ahead ,Bitstream ,ExoPlayer ,Software ,Quality of experience (QoE) - Abstract
When a video is encoded with constant quality, the resulting bitstream will have a variable bitrate due to the inherent nature of the video encoding process. This paper proposes a video Adaptive Bitrate Streaming (ABR) algorithm, called Look Ahead, which takes this bitrate variability into account in order to calculate, in real time, the appropriate quality level that minimizes the number of interruptions during playback. The algorithm is based on the Dynamic Adaptive Streaming over HTTP (DASH) standard for on-demand video services. In fact, it has been implemented and integrated into ExoPlayer v2, the latest version of the library developed by Google to play DASH content. The proposed algorithm is compared to the Müller and Segment Aware Rate Adaptation (SARA) algorithms as well as to the default ABR algorithm integrated into ExoPlayer. The comparison is carried out by using the most relevant parameters that affect the Quality of Experience (QoE) in video playback services, that is, the number and duration of stalls, the average quality of the video playback, and the number of representation switches. These parameters can be combined to define a QoE model. In this sense, this paper also proposes two new QoE models for the evaluation of ABR algorithms. One of them considers the bitrate of every segment of each representation, and the second is based on VMAF (Video Multimethod Assessment Fusion), a Video Quality Assessment (VQA) method developed by Netflix. The evaluations presented in the paper show, first, that Look Ahead outperforms the Müller, SARA, and ExoPlayer ABR algorithms in terms of the number and duration of video playback stalls while hardly decreasing the average video quality, and second, that the two proposed QoE models are more accurate than other similar models existing in the literature. This work is supported by the PAID-10-18 Program of the Universitat Politecnica de Valencia (Ayudas para contratos de acceso al sistema espanol de Ciencia, Tecnologia e Innovacion, en estructuras de investigacion de la Universitat Politecnica de Valencia) and by Project 20180810 from the Universitat Politecnica de Valencia ("Tecnologias de distribucion y procesado de informacion multimedia y QoE").
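The exact Look Ahead decision rule is not reproduced in the abstract. A minimal sketch of the general idea (segment sizes, buffer model and horizon are assumptions): given the per-segment sizes of each representation, pick the highest representation whose next few segments can be downloaded without the playback buffer draining:

```python
def pick_representation(seg_sizes_bits, seg_duration, buffer_s, bandwidth_bps, horizon=5):
    """Choose the highest representation whose next `horizon` segments can be downloaded
    without the playback buffer draining (illustrative look-ahead sketch, not the paper's exact rule)."""
    for rep in range(len(seg_sizes_bits) - 1, -1, -1):      # from highest to lowest quality
        buf = buffer_s
        ok = True
        for size in seg_sizes_bits[rep][:horizon]:
            download = size / bandwidth_bps                 # seconds needed to fetch this segment
            buf -= download                                 # buffer drains while downloading
            if buf <= 0:
                ok = False
                break
            buf = min(buf + seg_duration, 30.0)             # the segment adds its duration; buffer is capped
        if ok:
            return rep
    return 0

# hypothetical per-segment sizes (bits) for 3 representations of the next segments
sizes = [
    [1e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6],    # low
    [3e6, 4.5e6, 2.8e6, 3.2e6, 3.0e6],    # medium
    [8e6, 12e6, 7e6, 9e6, 8.5e6],         # high (note the variable bitrate)
]
print(pick_representation(sizes, seg_duration=4.0, buffer_s=3.0, bandwidth_bps=2e6))   # -> 1 (medium)
```

A QoE model like the ones proposed in the paper would then combine the resulting number and duration of stalls, the average quality and the number of representation switches into a single score.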
- Published
- 2020
30. Comparative study of AR versus video tutorials for minor maintenance operations
- Author
-
Juan M. Orduña, M. Carmen Juan, Pedro Morillo, Marcos Fernández, and Inmaculada García-García
- Subjects
Augmented Reality ,Multimedia ,Computer Networks and Communications ,business.industry ,Computer science ,Equipment maintenance ,020207 software engineering ,Usability ,02 engineering and technology ,Minor (academic) ,computer.software_genre ,Multimedia-based learning ,Hardware and Architecture ,Real user study ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Augmented reality ,Comparative study ,business ,computer ,LENGUAJES Y SISTEMAS INFORMATICOS ,Software - Abstract
Augmented Reality (AR) has become a mainstream technology in the development of solutions for repair and maintenance operations. Although most AR solutions are still limited to specific contexts in industry, some consumer electronics companies have started to offer pre-packaged AR solutions as an alternative to video-based tutorials (VT) for minor maintenance operations. In this paper, we present a comparative study of the acquired knowledge and user perception achieved with AR and VT solutions in some maintenance tasks for IT equipment. The results indicate that both systems help users to acquire knowledge in various aspects of equipment maintenance. Although no statistically significant differences were found between the AR and VT solutions, users scored higher on the AR version in all cases. Moreover, the users explicitly preferred the AR version when evaluating three different usability and satisfaction criteria. For the AR version, a strong and significant correlation was found between satisfaction and the achieved knowledge. Since the AR solution achieved similar learning results with higher usability scores than the video-based tutorials, these results suggest that AR solutions are the most effective approach to substitute the typical paper-based instructions in consumer electronics. This work has been supported by Spanish MINECO and EU ERDF programs under grant RTI2018-098156-B-C55.
- Published
- 2020
31. Benders decomposition for the mixed no-idle permutation flowshop scheduling problem
- Author
-
Alper Hamzadayi, Tolga Bektaş, and Rubén Ruiz
- Subjects
Mathematical optimization ,Speedup ,Computer science ,Benders decomposition ,ESTADISTICA E INVESTIGACION OPERATIVA ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Flowshop scheduling ,Permutation ,Idle ,Artificial Intelligence ,Referenced local search ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,Local search (optimization) ,Metaheuristic ,021103 operations research ,Job shop scheduling ,business.industry ,General Engineering ,Mixed no-idle ,Exact algorithm ,020201 artificial intelligence & image processing ,business ,Software - Abstract
The mixed no-idle flowshop scheduling problem arises in modern industries, including integrated circuits, ceramic frit and steel production, among others, in which some machines are not allowed to remain idle between jobs. This paper describes an exact algorithm that uses Benders decomposition with a simple yet effective enhancement mechanism that entails the generation of additional cuts by using a referenced local search to help speed up convergence. Using only a single additional optimality cut at each iteration, combined with combinatorial cuts, the algorithm can optimally solve instances with up to 500 jobs and 15 machines that are otherwise not within the reach of off-the-shelf optimization software, and can easily surpass existing ad-hoc metaheuristics. To the best of the authors' knowledge, the algorithm described here is the only exact method for solving the mixed no-idle permutation flowshop scheduling problem. This research project was partially supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under Grant 1059B191600107. While writing this paper, Dr Hamzadayi was a visiting researcher at the Southampton Business School at the University of Southampton. Rubén Ruiz is supported by the Spanish Ministry of Science, Innovation and Universities, under the project 'OPTEP-Port Terminal Operations Optimization' (No. RTI2018-094940-B-I00) financed with FEDER funds. Thanks are due to two anonymous reviewers for their careful reading of the paper and helpful suggestions.
- Published
- 2020
32. Utilizing geospatial information to implement SDGs and monitor their Progress
- Author
-
Ali Kharrazi, Ram Avtar, Tonni Agustiono Kurniawan, Ridhika Aggarwal, and Pankaj Kumar
- Subjects
Earth observation ,Geospatial analysis ,Geographic information system ,010504 meteorology & atmospheric sciences ,United Nations ,Computer science ,Geospatial data and techniques ,And indicators ,Big data ,Sustainable development goals ,Continuous planning ,010501 environmental sciences ,Management, Monitoring, Policy and Law ,computer.software_genre ,01 natural sciences ,Citizen science ,Adaptation ,0105 earth and related environmental sciences ,General Environmental Science ,Sustainable development ,business.industry ,Member states ,General Medicine ,Sustainable Development ,Remote sensing ,Pollution ,Data science ,business ,computer ,Goals ,Environmental Monitoring - Abstract
It has been more than 4 years since the 2030 Agenda for Sustainable Development was adopted by the United Nations and its member states in September 2015. Several efforts are being made by member countries to contribute towards achieving the 17 Sustainable Development Goals (SDGs). The progress made over time in achieving the SDGs can be monitored by measuring a set of quantifiable indicators for each of the goals. Geospatial information has been seen to play a significant role in measuring some of the targets; hence, it is relevant to the implementation of the SDGs and the monitoring of their progress. The synoptic view and repetitive coverage of the Earth's features and phenomena by different satellites is a powerful and propitious technological advancement. This paper reviews the robustness of Earth observation data for continuous planning, monitoring, and evaluation of the SDGs. The scientific world has made commendable progress by providing geospatial data at various spatial, spectral, radiometric, and temporal resolutions, enabling the use of the data for various applications. This paper also reviews the application of big data from Earth observation and citizen science data to implement the SDGs with a multi-disciplinary approach. It covers literature from various academic landscapes utilizing geospatial data for mapping, monitoring, and evaluating the Earth's features and phenomena, as this establishes the basis for its utilization in the achievement of the SDGs.
- Published
- 2020
33. Smart and sustainable urban logistic applications aided by intelligent techniques
- Author
-
Adriana Giret
- Subjects
Supply chain management ,Computer science ,Transport policy ,Urban logistics ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,020207 software engineering ,02 engineering and technology ,Field (computer science) ,Management Information Systems ,08.- Fomentar el crecimiento económico sostenido, inclusivo y sostenible, el empleo pleno y productivo, y el trabajo decente para todos ,Work (electrical) ,Risk analysis (engineering) ,Hardware and Architecture ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,LENGUAJES Y SISTEMAS INFORMATICOS ,Software ,Information Systems - Abstract
CO2-free urban logistics is one of the 10 objectives to be reached by 2030 as part of transport policy. What technologies can help to accomplish it? In this paper, we discuss the very complex situation that today's big, modern cities are facing, with many urban logistics companies operating in the same city. In the majority of cases there is little or no coordination among them, which worsens traffic congestion. We believe that intelligent techniques are one of the key approaches that can help support smart and sustainable urban logistics applications. There are large open problems in the field of cooperative urban logistics that can greatly improve with the help of artificial intelligence. Some solutions are cited in this paper, but the overall conclusion is that there is still much work to be done.
- Published
- 2019
34. A neural network filtering approach for similarity-based remaining useful life estimation
- Author
-
Kai Goebel, Jeffrey Alun Jones, Oguz Bektas, Indranil Roychoudhury, and Shankar Sankararaman
- Subjects
Annan samhällsbyggnadsteknik ,0209 industrial biotechnology ,Computer science ,02 engineering and technology ,Machine learning ,computer.software_genre ,Similarity-based RUL calculation ,Data-driven prognostics ,Industrial and Manufacturing Engineering ,020901 industrial engineering & automation ,Similarity (psychology) ,C-MAPPS datasets ,Estimation ,Artificial neural network ,business.industry ,Mechanical Engineering ,Other Civil Engineering ,Computer Science Applications ,TA ,Control and Systems Engineering ,Prognostics ,Artificial intelligence ,Raw data ,business ,computer ,Software ,Neural networks - Abstract
The role of prognostics and health management is ever more prevalent as estimation techniques advance. However, data processing and remaining useful life prediction algorithms are often very different. Some difficulties in accurate prediction can be tackled by redefining raw data parameters into more meaningful and comprehensive health level indicators that will then provide performance information. Proper data processing has a significant influence on remaining useful life predictions, for example to deal with data limitations and/or multi-regime operating conditions. The framework proposed in this paper considers a similarity-based prognostic algorithm that is fed by the use of data normalisation and filtering methods for operational trajectories of complex systems. This is combined with a data-driven prognostic technique based on feed-forward neural networks with multi-regime normalisation. In particular, the paper takes a close look at how pre-processing methods affect algorithm performance. The work presented herein shows a conceptual prognostic framework that overcomes the challenges presented by short-term test datasets and that increases prediction performance with regard to prognostic metrics.
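A minimal sketch of the two-stage idea described here, using synthetic data rather than the C-MAPSS datasets (network size, normalisation and similarity rule are assumptions): a feed-forward network maps sensor readings to a health indicator, and the remaining useful life (RUL) is then taken from the most similar library trajectory:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def synthetic_unit(life, n_sensors=4):
    """Hypothetical run-to-failure unit: sensors drift with degradation plus noise."""
    t = np.arange(life)
    health = 1.0 - t / life                                   # 1 = new, 0 = failed
    sensors = np.outer(1.0 - health, rng.normal(1.0, 0.1, n_sensors)) + rng.normal(0, 0.02, (life, n_sensors))
    return sensors, health

# train a feed-forward network to map (normalised) sensor readings to a health indicator
train_units = [synthetic_unit(rng.integers(150, 250)) for _ in range(20)]
X = np.vstack([s for s, _ in train_units])
y = np.concatenate([h for _, h in train_units])
mu, sd = X.mean(axis=0), X.std(axis=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit((X - mu) / sd, y)

# similarity-based RUL: compare the test unit's health trajectory with each library unit's trajectory
def rul_by_similarity(test_sensors, library):
    ht = model.predict((test_sensors - mu) / sd)
    best_rul, best_err = None, np.inf
    for sensors, health in library:
        hl = model.predict((sensors - mu) / sd)
        if len(hl) < len(ht):
            continue
        err = np.mean((hl[:len(ht)] - ht) ** 2)               # align at the start, compare the overlap
        if err < best_err:
            best_err, best_rul = err, len(hl) - len(ht)       # remaining cycles of the most similar unit
    return best_rul

test_sensors, _ = synthetic_unit(200)
print("estimated RUL:", rul_by_similarity(test_sensors[:120], train_units))
```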
- Published
- 2019
35. Energy Efficiency in Cooperative Wireless Sensor Networks
- Author
-
Jaime Lloret, Jose M. Jimenez, Raquel Lacuesta, and Sandra Sendra
- Subjects
Computer Networks and Communications ,Computer science ,Transport network ,02 engineering and technology ,Fresh products ,0202 electrical engineering, electronic engineering, information engineering ,Cooperative monitoring ,Wireless sensor networks (WSN) ,business.industry ,Node (networking) ,020206 networking & telecommunications ,Energy consumption ,INGENIERIA TELEMATICA ,Energy efficiency ,Hardware and Architecture ,Path (graph theory) ,Shortest path problem ,020201 artificial intelligence & image processing ,business ,Wireless sensor network ,Delivery ,Software ,Constrained Shortest Path First ,Symmetric routing ,Information Systems ,Efficient energy use ,Computer network - Abstract
The transport of sensitive products is very important because their deterioration may cause a loss of value and even rejection of the product by the buyer. In addition, it is important to choose the optimal way to achieve this end. In a data network, the task of calculating the best routes is performed by routers. We can consider the optimal path to be the one that provides the shortest route. However, if a real transport network is considered, the shortest path can sometimes be affected by incidents and traffic jams that would make it inadvisable. On the other hand, when we need to come back, owing to the features that symmetry provides, it is interesting to follow the same path in the reverse direction. For this reason, in this paper we present a symmetric routing mechanism for a cooperative monitoring system for the delivery of fresh products. The system is based on a combination of fixed nodes and a mobile node that stores the path followed so that it can come back along the same route in the reverse direction. If this path is no longer available, the system tries to maintain the symmetry principle by searching for the route whose travel time is closest to that of the route used in the initial trip. The paper shows the algorithm used by the system to calculate the symmetric routes. Finally, the system is tested in a real scenario which combines different kinds of roads. As the results show, the energy consumption of this kind of node is highly influenced by the activity of the sensors. This work has been supported by the "Ministerio de Economia y Competitividad", through the "Convocatoria 2014. Proyectos I+D - Programa Estatal de Investigacion Cientifica y Tecnica de Excelencia" in the "Subprograma Estatal de Generacion de Conocimiento" (project TIN2014-57991-C3-1-P) and the "programa para la Formacion de Personal Investigador - (FPI-2015-S2-884)" by the "Universitat Politecnica de Valencia".
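One reading of the fallback described above can be sketched with networkx (a hypothetical road graph; not the paper's algorithm): prefer the exact reverse of the outbound path and, if a road has become unavailable, pick the return path whose travel time is closest to the outbound one:

```python
import networkx as nx

def return_route(G, outbound_path, outbound_time):
    """Come back along the same roads if possible; otherwise pick, among return paths,
    the one whose travel time is closest to the outbound time (illustrative sketch)."""
    reverse = list(reversed(outbound_path))
    if all(G.has_edge(u, v) for u, v in zip(reverse, reverse[1:])):
        return reverse                                         # symmetric case: same roads back
    src, dst = outbound_path[-1], outbound_path[0]
    best, best_gap = None, float("inf")
    for path in nx.shortest_simple_paths(G, src, dst, weight="time"):
        t = nx.path_weight(G, path, weight="time")
        if abs(t - outbound_time) < best_gap:
            best, best_gap = path, abs(t - outbound_time)
        if t > outbound_time and best is not None:
            break                                              # remaining candidates only get longer
    return best

G = nx.Graph()
G.add_weighted_edges_from([("depot", "a", 10), ("a", "market", 5),
                           ("depot", "b", 12), ("b", "market", 6)], weight="time")
out = nx.shortest_path(G, "depot", "market", weight="time")
t_out = nx.path_weight(G, out, weight="time")
G.remove_edge("a", "market")                                   # the outbound road is now blocked
print(return_route(G, out, t_out))                             # -> ['market', 'b', 'depot']
```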
- Published
- 2019
- Full Text
- View/download PDF
36. Publishing accessible proceedings: the DSAI 2016 case study
- Author
-
Sergio Sayago, Ricardo Pozzobon, and Mireia Ribera
- Subjects
PDF/UA ,Conversion procedures ,Computer Networks and Communications ,business.industry ,Computer science ,05 social sciences ,Software development ,050301 education ,Human-Computer Interaction ,World Wide Web ,Document accessibility ,EPUB3 ,Publishing ,0501 psychology and cognitive sciences ,Accessible proceedings ,business ,0503 education ,Computer communication networks ,050107 human factors ,Software ,Information Systems - Abstract
Access to research papers has changed in recent decades: from printed to digital sources, and from closed to open access. Despite these changes and broader access to research results, accessibility barriers remain. Very few conferences and journals state an accessibility policy for their publications on their websites, and the production of accessible documents is not a common practice even in conferences devoted to accessibility. Purpose: The purpose of this paper is to present the case study of the DSAI 2016 (Software Development and Technologies for Enhancing Accessibility and Fighting Infoexclusion) conference, for which we were in charge of making the proceedings accessible. Methods: We discuss the methods and technical procedures we carried out to turn the original articles, in MS Word and LaTeX formats, into accessible PDFs, and the steps necessary for authoring, conversion and validation. Results: the DSAI 2016 papers were published in an accessible format after considerable effort; the best-performing tools and procedure were MS Word combined with PDF Axes and the PDF Accessibility Checker. Conclusion: We state the need to include a new role in conference organizing committees for dealing with accessible publishing.
- Published
- 2019
37. On the effects of the fix geometric constraint in 2D profiles on the reusability of parametric 3D CAD models
- Author
-
Carmen González-Lluch, Pedro Company, Manuel Contero, David Pérez-López, and Jorge D. Camba
- Subjects
EXPRESION GRAFICA EN LA INGENIERIA ,Computer science ,media_common.quotation_subject ,Fix constraint ,0211 other engineering and technologies ,CAD ,02 engineering and technology ,computer.software_genre ,Automatic feedback tool ,Education ,2D profile ,Computer Aided Design ,Quality (business) ,021106 design practice & management ,Parametric statistics ,media_common ,Reusability ,DIBUJO ,05 social sciences ,General Engineering ,050301 education ,Model quality ,Constraint (information theory) ,Feature (computer vision) ,Metric (mathematics) ,Data mining ,0503 education ,computer - Abstract
[EN] In order to be reusable, history-based feature-based parametric CAD models must reliably allow for modifications while maintaining their original design intent. In this paper, we demonstrate that relations that fix the location of geometric entities relative to the reference system produce inflexible profiles that reduce model reusability. We present the results of an experiment where novice students and expert CAD users performed a series of modifications in different versions of the same 2D profile, each defined with an increasingly higher number of fix geometric constraints. Results show that the number of fix constraints in a 2D profile correlates with the time required to complete reusability tasks, i.e., the higher the number of fix constraints in a 2D profile, the less flexible and adaptable the profile becomes to changes. In addition, a pilot software tool to automatically track this type of constraint was developed and tested. Results suggest that the detection of fix constraint overuse may result in a new metric to assess poor-quality models with low reusability. The tool provides immediate feedback for preventing high semantic level quality errors, and assistance to CAD users. Finally, suggestions are introduced on how to convert fix constraints in 2D profiles into a negative metric of 3D model quality., The authors would like to thank Raquel Plumed for her support in the statistical analysis. This work has been partially funded by Grant UJI-A02017-15 (Universitat Jaume I) and DPI201784526-R (MINECO/AEI/FEDER, UE), project CAL-MBE. The authors also wish to thank the editor and reviewers for their valuable comments and suggestions that helped us improve the quality of the paper.
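A hypothetical sketch of the kind of check the pilot tool described above could perform: count the fix constraints in a 2D profile and flag overuse as a negative quality indicator. The profile representation and the threshold are assumptions, not the authors' implementation.

from collections import Counter

def fix_constraint_report(constraints, max_recommended=1):
    """constraints: list of constraint-type strings extracted from a sketch,
    e.g. ['coincident', 'fix', 'horizontal', 'fix', ...]."""
    counts = Counter(c.lower() for c in constraints)
    n_fix = counts.get("fix", 0)
    return {"fix_constraints": n_fix,
            "total_constraints": sum(counts.values()),
            "fix_overuse_flag": n_fix > max_recommended}   # negative quality signal

profile = ["coincident", "fix", "horizontal", "fix", "equal", "fix"]
print(fix_constraint_report(profile))
# -> {'fix_constraints': 3, 'total_constraints': 6, 'fix_overuse_flag': True}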
- Published
- 2019
- Full Text
- View/download PDF
38. Sparse analytic hierarchy process: an experimental analysis
- Author
-
Roberto Setola, Paolo Dell'Olmo, Gabriele Oliva, and Antonio Scala
- Subjects
0209 industrial biotechnology ,Process (engineering) ,Computer science ,Analytic hierarchy process ,Computational intelligence ,02 engineering and technology ,Machine learning ,computer.software_genre ,Theoretical Computer Science ,Task (project management) ,Body of knowledge ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,sparse information ,analytic hierarchy process ,decision-making ,Set (psychology) ,business.industry ,Aggregate (data warehouse) ,Rank (computer programming) ,020201 artificial intelligence & image processing ,Geometry and Topology ,Artificial intelligence ,business ,computer ,Software - Abstract
The aim of the sparse analytic hierarchy process (SAHP) problem is to rank a set of alternatives based on their utility/importance; this task is accomplished by asking human decision-makers to compare selected pairs of alternatives and to specify relative preference information in the form of ratios of utilities. However, such information is often affected by subjective biases or inconsistencies. Moreover, there is no general consensus on the best approach to accomplish this task, and several techniques have been proposed in the literature. Finally, when more than one decision-maker is involved in the process, adequate methodologies are needed to aggregate the available information. In this view, the contribution of this paper to the SAHP body of knowledge is twofold. On the one hand, it develops a novel methodology to aggregate sparse data given by multiple sources of information. On the other hand, the paper undertakes an experimental validation of the most popular techniques for solving the SAHP problem, discussing the strengths and shortcomings of the different methodologies with respect to a real case study.
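As an illustration of one common way to handle sparse, multi-decision-maker ratio judgments (not necessarily the aggregation proposed in the paper), the sketch below aggregates repeated pairwise comparisons by geometric mean and recovers a utility vector with a logarithmic least-squares fit; the data are invented.

import numpy as np
from collections import defaultdict

def sparse_ahp_rank(judgments, n_alternatives):
    """judgments: list of (i, j, ratio) with ratio ~ u_i / u_j; pairs may repeat
    (one entry per decision-maker) and need not cover all pairs."""
    logs = defaultdict(list)
    for i, j, r in judgments:
        logs[(i, j)].append(np.log(r))
    pairs = list(logs)
    A = np.zeros((len(pairs) + 1, n_alternatives))
    b = np.zeros(len(pairs) + 1)
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = 1.0, -1.0
        b[k] = np.mean(logs[(i, j)])      # geometric-mean aggregation (in log space)
    A[-1, :] = 1.0                        # gauge fixing: log-utilities sum to zero
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    u = np.exp(x)
    return u / u.sum()                    # normalised utility vector

# Two decision-makers, three alternatives, incomplete comparisons
data = [(0, 1, 2.0), (0, 1, 3.0), (1, 2, 2.0)]
print(sparse_ahp_rank(data, 3))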
- Published
- 2019
39. Proxy-based near real-time TV content transmission in mobility over 4G with MPEG-DASH transcoding on the cloud
- Author
-
Salvador Ferrairó, Román Belda, Ismael de Fez, Juan Carlos Guerri, and Pau Arce
- Subjects
Computer Networks and Communications ,Computer science ,Real-time computing ,ITU-T P.1203 ,Cloud computing ,Buffering ,02 engineering and technology ,Transcoding ,computer.software_genre ,Quality of experience ,TV ,Dynamic Adaptive Streaming over HTTP ,Dynamic adaptive streaming over HTTP (DASH) ,Digital Video Broadcasting ,TEORIA DE LA SEÑAL Y COMUNICACIONES ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,4G ,Proxy (statistics) ,Video streaming ,business.industry ,020207 software engineering ,INGENIERIA TELEMATICA ,Proxy ,Transmission (telecommunications) ,Handover ,Hardware and Architecture ,business ,computer ,Software - Abstract
[EN] This paper presents and evaluates a system that provides TV and radio services in mobility using 4G communications. The system has two main blocks, one on the cloud and another on the mobile vehicle. On the cloud, a DVB (Digital Video Broadcasting) receiver obtains the TV/radio signal and prepares the contents to be sent over 4G. Specifically, contents are transcoded and packetized using the DASH (Dynamic Adaptive Streaming over HTTP) standard. Vehicles in mobility use their 4G connectivity to receive the flows transmitted by the cloud. The key element of the system is an on-board proxy that manages the received flows and offers them to the final users in the vehicle. The proxy contains a buffer that helps reduce the number of interruptions caused by handover effects and lack of coverage. The paper presents a comparison between a live transmission over 4G connecting the clients directly to the cloud server and a near real-time transmission based on the on-board proxy. The results show that the use of the proxy considerably reduces the number of interruptions and thus improves the users' Quality of Experience at the expense of a slightly increased delay., This work is supported by the Centro para el Desarrollo Tecnologico Industrial (CDTI) from the Government of Spain under the project "Plataforma avanzada de conectividad en movilidad" (CDTI IDI-20150126) and the project "Desarrollo de nueva plataforma de entretenimiento multimedia para entornos nauticos" (CDTI TIC-20170102).
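The benefit of the on-board buffering proxy can be illustrated with a toy simulation (an assumed model, not the authors' implementation): short coverage gaps are absorbed by the buffer instead of stalling playback. The coverage trace, fill rate and buffer sizes are illustrative only.

def count_stalls(connectivity, buffer_target_s, fill_rate=2.0):
    """connectivity: per-second booleans (True = 4G coverage);
    fill_rate: seconds of media fetched per second of coverage."""
    buffered, stalls, stalled = 0.0, 0, False
    for connected in connectivity:
        if connected:
            buffered = min(buffer_target_s, buffered + fill_rate)
        if buffered >= 1.0:                 # one second of media available to play
            buffered -= 1.0
            stalled = False
        elif not stalled:                   # buffer empty: a new interruption starts
            stalls += 1
            stalled = True
    return stalls

coverage = [True] * 20 + [False] * 5 + [True] * 20   # a 5-second coverage gap
print(count_stalls(coverage, buffer_target_s=2))      # small client buffer: 1 stall
print(count_stalls(coverage, buffer_target_s=10))     # proxy-sized buffer: 0 stalls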
- Published
- 2019
- Full Text
- View/download PDF
40. A mathematical programming tool for an efficient decision-making on teaching assignment under non-regular time schedules
- Author
-
D. Pérez-Perales, P. Solano Cutillas, and M. M. E. Alemany Díaz
- Subjects
0209 industrial biotechnology ,Schedule ,Operations research ,Process (engineering) ,Computer science ,Strategy and Management ,media_common.quotation_subject ,0211 other engineering and technologies ,Computational intelligence ,02 engineering and technology ,Non-regular schedules ,Management Science and Operations Research ,Model validation ,020901 industrial engineering & automation ,Mixed integer linear programming ,Management of Technology and Innovation ,Quality (business) ,media_common ,Numerical Analysis ,021103 operations research ,Academic year ,Subject (documents) ,Teaching assignment problem ,08.- Fomentar el crecimiento económico sostenido, inclusivo y sostenible, el empleo pleno y productivo, y el trabajo decente para todos ,Computational Theory and Mathematics ,Order (business) ,Modeling and Simulation ,ORGANIZACION DE EMPRESAS ,Type of credits ,Statistics, Probability and Uncertainty ,Time compatibility - Abstract
[EN] In this paper, an optimization tool based on a MILP model to support the teaching assignment process is proposed. It considers not only hierarchical issues among lecturers but also their preferences for teaching a particular subject, the non-regular time schedules throughout the academic year, different types of credits, the number of groups and other specific characteristics. In addition, it adds constraints based on the time compatibility among the different subjects, the lecturers' availability, the maximum number of subjects per lecturer, the maximum number of lecturers per subject, and the maximum and minimum saturation level for each lecturer, all of which are intended to increase teaching quality. Schedule heterogeneity and other features of the operation of some universities justify the usefulness of this model, since no study dealing with all of them has been found in the literature review. Model validation has been performed with two real data sets collected from one academic-year schedule at the Spanish university Universitat Politecnica de Valencia.
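A minimal MILP in the spirit of the assignment model, written with the PuLP library, is sketched below; it is a drastic simplification (invented data, only preference maximisation, a workload cap and one lecturer per subject) and not the authors' full formulation.

import pulp

lecturers = ["L1", "L2"]
subjects = ["S1", "S2", "S3"]
pref = {("L1", "S1"): 3, ("L1", "S2"): 1, ("L1", "S3"): 2,
        ("L2", "S1"): 1, ("L2", "S2"): 3, ("L2", "S3"): 2}
max_subjects_per_lecturer = 2

model = pulp.LpProblem("teaching_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", list(pref), cat="Binary")
model += pulp.lpSum(pref[k] * x[k] for k in pref)            # total preference score
for s in subjects:                                           # one lecturer per subject
    model += pulp.lpSum(x[(l, s)] for l in lecturers) == 1
for l in lecturers:                                          # workload cap
    model += pulp.lpSum(x[(l, s)] for s in subjects) <= max_subjects_per_lecturer
model.solve(pulp.PULP_CBC_CMD(msg=False))
print([(l, s) for (l, s) in pref if x[(l, s)].value() == 1])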
- Published
- 2022
41. Artificial intelligent system for multimedia services in smart home environments
- Author
-
Jose M. Jimenez, Albert Rego, Pedro Luis Gonzalez Ramirez, and Jaime Lloret
- Subjects
Service (systems architecture) ,Computer Networks and Communications ,Computer science ,020209 energy ,02 engineering and technology ,computer.software_genre ,Field (computer science) ,User experience design ,Smart home ,Home automation ,Reinforcement learning ,0202 electrical engineering, electronic engineering, information engineering ,Computer communication networks ,Multimedia ,business.industry ,Deep learning ,020206 networking & telecommunications ,INGENIERIA TELEMATICA ,Classification ,Artificial intelligence ,business ,Internet of Things ,computer ,Software - Abstract
[EN] The Internet of Things (IoT) has introduced new applications and environments. The Smart Home provides new ways of communication and service consumption. In addition, Artificial Intelligence (AI) and deep learning have improved different services and tasks by automating them. In this field, reinforcement learning (RL) provides an unsupervised way to learn from the environment. In this paper, a new intelligent system based on RL and deep learning is proposed for Smart Home environments to guarantee good levels of QoE, focused on multimedia services. The system aims to reduce the impact on user experience when the classification system achieves low accuracy. The experiments performed show that the proposed deep learning model achieves better accuracy than the KNN algorithm and that the RL system increases the user's QoE by up to 3.8 on a scale of 10., This work has been partially supported by the "Ministerio de Economia y Competitividad" in the "Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia, Subprograma Estatal de Generacion de Conocimiento" within the project under Grant TIN2017-84802-C2-1-P. This work has also been partially funded by the Universitat Politecnica de Valencia through the postdoctoral PAID-10-20 program.
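The reinforcement-learning component can be illustrated with a tabular Q-learning sketch; the paper uses deep models, and the states, actions and rewards below are invented solely to show the update rule.

import random

random.seed(0)
states = ["low_quality", "ok_quality"]
actions = ["keep_profile", "switch_profile"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: switching profile in a low-quality state improves QoE."""
    if state == "low_quality" and action == "switch_profile":
        return "ok_quality", 8.0          # reward ~ measured QoE
    if state == "ok_quality" and action == "keep_profile":
        return "ok_quality", 9.0
    return "low_quality", 3.0

state = "low_quality"
for _ in range(5000):
    action = (random.choice(actions) if random.random() < epsilon
              else max(actions, key=lambda a: Q[(state, a)]))
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt
print(max(actions, key=lambda a: Q[("low_quality", a)]))   # expected: 'switch_profile'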
- Published
- 2022
42. Exploring Functional Acceleration of OpenCL on FPGAs and GPUs Through Platform-Independent Optimizations
- Author
-
Umar Ibrahim Minhas, Georgios Karakonstantis, and Roger Woods
- Subjects
Design space exploration ,Computer science ,02 engineering and technology ,Parallel computing ,computer.software_genre ,Theoretical Computer Science ,Software portability ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,SDG 7 - Affordable and Clean Energy ,Field-programmable gate array ,Throughput (business) ,FPGA ,energy efficiency ,OpenCL ,05 social sciences ,050301 education ,020206 networking & telecommunications ,GPU accelerators ,Multiplication ,Compiler ,0503 education ,computer ,Efficient energy use ,Computer Science(all) - Abstract
OpenCL has been proposed as a means of accelerating functional computation using FPGA and GPU accelerators. Although it provides ease of programmability and code portability, questions remain about performance portability and the ability of the underlying vendor compilers to generate efficient implementations without user-defined, platform-specific optimizations. In this work, we systematically evaluate this by formalizing a design space exploration strategy using only platform-independent micro-architectural and application-specific optimizations. The optimizations are then applied across Altera FPGA, NVIDIA GPU and ARM Mali GPU platforms for three computing examples, namely matrix-matrix multiplication, binomial-tree option pricing and 3-dimensional finite difference time domain. Our strategy enables a fair comparison across platforms in terms of throughput and energy efficiency by using the same design effort. Our results indicate that the FPGA provides better performance portability in terms of the achieved percentage of the device's peak performance (68%) compared to the NVIDIA GPU (20%), and also achieves better energy efficiency (up to 1.4×) for some of the considered cases without requiring in-depth hardware design expertise.
- Published
- 2018
43. On the use of models for high-performance scientific computing applications: an experience report
- Author
-
Jean-Michel Bruel, David Lugato, Marc Palyart, and Ileana Ober
- Subjects
[INFO.INFO-AR]Computer Science [cs]/Hardware Architecture [cs.AR] ,Source code ,Modeling language ,Computer science ,media_common.quotation_subject ,Fortran ,02 engineering and technology ,[INFO.INFO-SE]Computer Science [cs]/Software Engineering [cs.SE] ,Interface homme-machine ,Domain (software engineering) ,Abstraction layer ,Computational science ,[INFO.INFO-CR]Computer Science [cs]/Cryptography and Security [cs.CR] ,Software ,High-performance calculus ,Architectures Matérielles ,020204 information systems ,Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Génie logiciel ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Implementation ,media_common ,computer.programming_language ,business.industry ,020207 software engineering ,computer.file_format ,Modélisation et simulation ,[INFO.INFO-MO]Computer Science [cs]/Modeling and Simulation ,Systèmes embarqués ,Modeling and Simulation ,HPC ,Cryptographie et sécurité ,[INFO.INFO-ES]Computer Science [cs]/Embedded Systems ,MDE Model-driven engineering ,Executable ,Model-driven architecture ,Software engineering ,business ,computer - Abstract
This paper reports on a four-year project that aims to raise the abstraction level, through the use of model-driven engineering (MDE) techniques, in the development of scientific applications relying on high-performance computing. The development and maintenance of high-performance scientific computing software is reputedly a complex task. This complexity results from the frequent evolution of supercomputers and the tight coupling between software and hardware aspects. Moreover, current parallel programming approaches result in a mixing of concerns within the source code. Our approach relies on the use of MDE and consists of defining domain-specific modeling languages targeting the various domain experts involved in the development of HPC applications, allowing each of them to handle their dedicated model in a way that is both user-friendly and hardware-independent. The different concerns are separated through the use of several models as well as several modeling viewpoints on these models. Depending on the targeted execution platforms, these abstract models are translated into executable implementations by means of model transformations. To make all of this effective, we have developed a tool chain that is also presented in this paper. The approach is assessed through a multi-dimensional validation that focuses on its applicability, its expressiveness and its efficiency. To capitalize on the gained experience, we analyze some lessons learned during this project.
- Published
- 2018
44. A Global Optimal Path Planning and Controller Design Algorithm for Intelligent Vehicles
- Author
-
Feng You, Houbing Song, Hai-Wei Wang, Zhi-Han Lu, Jaime Lloret, and Xue-Cai Yu
- Subjects
0209 industrial biotechnology ,Controller design ,Computer Networks and Communications ,Computer science ,Terminal sliding mode ,02 engineering and technology ,Tracking (particle physics) ,020901 industrial engineering & automation ,Polynomial method ,Differential game ,0202 electrical engineering, electronic engineering, information engineering ,Stackelberg competition ,Motion planning ,Simulation ,Path planning ,020208 electrical & electronic engineering ,Work (physics) ,Control engineering ,INGENIERIA TELEMATICA ,Intelligent vehicle ,Hardware and Architecture ,Key (cryptography) ,Software ,Information Systems - Abstract
[EN] Autonomous vehicle guidance and trajectory planning is one of the key technologies in the autonomous control system of intelligent vehicles. First, the target pursuit model for intelligent vehicles is established and described. Then, global motion planning is carried out based on Stackelberg differential game theory, and the global optimal solution is obtained by using a survival-type differential game. Finally, to overcome errors, a polynomial method is used to achieve smooth motion planning. Based on the terminal sliding mode method, an Active Front Steering controller is designed to calculate the desired front-wheel steering angle for intelligent vehicle path tracking. The simulation and experimental results demonstrate the feasibility and effectiveness of this method for intelligent vehicle path planning and tracking., This paper is supported by the Zhejiang Provincial Natural Science Foundation under Grant No. LY13E080010. The first author would like to thank Dr. Xuecai Yu and the reviewers for the valuable discussions to improve the quality and presentation of the paper.
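The polynomial smoothing step mentioned in the abstract can be illustrated as follows (assumed waypoints, not the authors' data): fit a cubic through planned waypoints and sample a smooth reference path and heading for the tracking controller.

import numpy as np

waypoints_x = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
waypoints_y = np.array([0.0, 1.5, 3.5, 3.0, 2.0])

coeffs = np.polyfit(waypoints_x, waypoints_y, deg=3)     # least-squares cubic fit
smooth = np.poly1d(coeffs)

xs = np.linspace(waypoints_x[0], waypoints_x[-1], 200)   # dense, smooth reference path
ys = smooth(xs)
heading = np.arctan2(np.gradient(ys), np.gradient(xs))   # reference heading angle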
- Published
- 2018
- Full Text
- View/download PDF
45. An LSH-Based Model-Words-Driven Product Duplicate Detection Method
- Author
-
Max van Keulen, Diederik Mathol, Thomas van Noort, Aron Hartveld, Kim Schouten, Thomas Plaatsman, Flavius Frasincar, Econometrics, and Business Intelligence
- Subjects
Similarity (geometry) ,Computer science ,Minor (linear algebra) ,Process (computing) ,Binary number ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Locality-sensitive hashing ,Reduction (complexity) ,Product (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Focus (optics) ,computer - Abstract
The online shopping market is growing rapidly in the 21st century, leading to a huge number of duplicate products being sold online. An important component of aggregating online products is duplicate detection, although this is a time-consuming process. In this paper, we focus on reducing the number of candidate duplicates that are used as input for the Multi-component Similarity Method (MSM), a state-of-the-art duplicate detection solution. To find the candidate pairs, Locality Sensitive Hashing (LSH) is employed. A previously proposed LSH-based algorithm makes use of binary vectors based on the model words in the product titles. This paper proposes several extensions to this approach, by performing advanced data cleaning and additionally using information from the key-value pairs. Compared to MSM, the MSMP+ method proposed in this paper leads to a minor reduction of 6% in the F1-measure whilst reducing the number of required computations by 95%.
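A rough sketch of the LSH blocking step (binary model-word sets, MinHash signatures and banding) is given below; the vocabulary, parameters and products are illustrative assumptions, and the actual MSMP+ pipeline adds data cleaning and key-value information as described above.

import random
from collections import defaultdict

random.seed(0)

def candidate_pairs(products, n_hashes=8, bands=4):
    """products: dict name -> set of model words extracted from the title."""
    universe = sorted(set().union(*products.values()))
    perms = [random.sample(universe, len(universe)) for _ in range(n_hashes)]
    def signature(word_set):
        # MinHash: smallest permuted index among the product's model words
        return tuple(min(perm.index(w) for w in word_set) for perm in perms)
    sigs = {p: signature(ws) for p, ws in products.items()}
    rows = n_hashes // bands
    buckets, pairs = defaultdict(list), set()
    for p, sig in sigs.items():
        for b in range(bands):
            band = (b, sig[b * rows:(b + 1) * rows])
            for other in buckets[band]:
                pairs.add(tuple(sorted((p, other))))
            buckets[band].append(p)
    return pairs               # only these pairs go on to MSM-style matching

catalog = {"tv_a": {"55inch", "4k", "samsung"},
           "tv_b": {"55inch", "4k", "samsung", "2018"},
           "phone": {"64gb", "black"}}
print(candidate_pairs(catalog))   # typically yields {('tv_a', 'tv_b')}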
- Published
- 2018
46. A Multilevel Approach to Sentiment Analysis of Figurative Language in Twitter
- Author
-
Paolo Rosso, Sivaji Bandyopadhyay, Soumadeep Mazumdar, Dipankar Das, and Braja Gopal Patra
- Subjects
Irony ,Metaphor ,Computer science ,media_common.quotation_subject ,02 engineering and technology ,Figurative text ,computer.software_genre ,01 natural sciences ,Literal and figurative language ,Sentiment analysis ,Sentiment abruptness measure ,0202 electrical engineering, electronic engineering, information engineering ,0101 mathematics ,media_common ,Sarcasm ,business.industry ,010102 general mathematics ,Cosine similarity ,Variation (linguistics) ,020201 artificial intelligence & image processing ,Artificial intelligence ,InformationSystems_MISCELLANEOUS ,business ,computer ,LENGUAJES Y SISTEMAS INFORMATICOS ,Natural language processing ,Natural language ,Meaning (linguistics) - Abstract
[EN] A considerable amount of work has been carried out in the field of sentiment analysis or opinion mining on natural language and Twitter texts. One of the main goals in such tasks is to assign polarities (positive or negative) to a piece of text. At the same time, an important and difficult issue is how to assign the degree of positivity or negativity to certain texts. The answer becomes more complex when we perform a similar task on figurative language texts collected from Twitter. Figurative language devices such as irony and sarcasm contain an intentional secondary or extended meaning hidden within the expression. In this paper we present a novel approach to identify the degree of sentiment (fine-grained on an 11-point scale) for figurative language texts. We used several semantic features such as sentiment and intensifiers, and we introduced sentiment abruptness, which measures the variation of sentiment from positive to negative or vice versa. We trained our systems at multiple levels, achieving a maximum cosine similarity of 0.823 and a minimum mean squared error of 2.170., The work reported in this paper is supported by a grant from the project "CLIA System Phase II" funded by the Department of Electronics and Information Technology (DeitY), Ministry of Communications and Information Technology (MCIT), Government of India. The work of the fourth author is also supported by the SomEMBED TIN2015-71147-C2-1-P MINECO research project and by the Generalitat Valenciana under the grant ALMAPATER (PrometeoII/2014/030).
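The exact definition of sentiment abruptness is given in the paper; the sketch below is only an assumed illustration of measuring how sharply sentiment flips along a token sequence, together with the cosine similarity used for evaluation. Token scores and examples are invented.

import numpy as np

def sentiment_abruptness(token_scores):
    """token_scores: per-token sentiment values, e.g. from a lexicon."""
    s = np.asarray(token_scores, dtype=float)
    return np.abs(np.diff(s)).sum() / max(len(s) - 1, 1)   # mean jump between tokens

def cosine_similarity(pred, gold):
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    return pred.dot(gold) / (np.linalg.norm(pred) * np.linalg.norm(gold))

ironic = [0.8, 0.7, 0.9, -0.8, -0.9]       # positive words, sharply negative ending
literal = [0.6, 0.5, 0.7, 0.6, 0.5]
print(sentiment_abruptness(ironic), sentiment_abruptness(literal))
print(cosine_similarity([-3, -2, 1, 4], [-2, -2, 0, 5]))   # predicted vs gold scores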
- Published
- 2018
- Full Text
- View/download PDF
47. Cross-language transfer of semantic annotation via targeted crowdsourcing: task design and evaluation
- Author
-
Marcos Calvo, Ioannis Klasinas, Arindam Ghosh, Evgeny A. Stepanov, Ali Orkan Bayer, Emilio Sanchis, Shammur Absar Chowdhury, and Giuseppe Riccardi
- Subjects
Linguistics and Language ,Computer science ,02 engineering and technology ,Library and Information Sciences ,Ontology (information science) ,Temporal annotation ,computer.software_genre ,Crowdsourcing ,Language and Linguistics ,Education ,Annotation ,0202 electrical engineering, electronic engineering, information engineering ,Evaluation ,Parsing ,Information retrieval ,Semantic annotation ,business.industry ,020206 networking & telecommunications ,Ontology ,Cross-language transfer ,020201 artificial intelligence & image processing ,Artificial intelligence ,Computational linguistics ,business ,computer ,LENGUAJES Y SISTEMAS INFORMATICOS ,Natural language processing ,Spoken language - Abstract
[EN] Modern data-driven spoken language systems (SLS) require manual semantic annotation to train spoken language understanding parsers. Multilingual porting of SLS demands significant manual effort and language resources, as this manual annotation has to be replicated for each language. Crowdsourcing is an accessible and cost-effective alternative to traditional methods of collecting and annotating data. The application of crowdsourcing to simple tasks has been well investigated. However, complex tasks, like cross-language semantic annotation transfer, may generate low judgment agreement and/or poor performance. The most serious issue in cross-language porting is the absence of reference annotations in the target language; thus, crowd quality control and the evaluation of the collected annotations are difficult. In this paper we investigate targeted crowdsourcing for semantic annotation transfer, which delegates to the crowd a complex task, namely segmenting and labeling concepts taken from a domain ontology, and evaluates the results using source-language annotations. To test the applicability and effectiveness of the crowdsourced annotation transfer we have considered the case of a close and a distant language pair: Italian-Spanish and Italian-Greek. The corpora annotated via crowdsourcing are evaluated against source- and target-language expert annotations. We demonstrate that the two evaluation references (source and target) correlate highly with each other, thus drastically reducing the need for target-language reference annotations., This research is partially funded by the EU FP7 PortDial Project No. 296170, FP7 SpeDial Project No. 611396, and Spanish contract TIN2014-54288-C4-3-R. The work presented in this paper was carried out while the author was affiliated with the Universitat Politecnica de Valencia.
- Published
- 2018
48. Structural Feature Selection for Event Logs
- Author
-
Teemu Lehto, Markku Hinkka, Keijo Heljanko, Alexander Jung, Teniente, E, Weidlich, M, and Helsinki Institute for Information Technology
- Subjects
0301 basic medicine ,FOS: Computer and information sciences ,Business process ,Computer science ,Process mining ,Context (language use) ,Feature selection ,Machine Learning (stat.ML) ,02 engineering and technology ,Machine learning ,computer.software_genre ,Machine Learning (cs.LG) ,Computer Science - Software Engineering ,03 medical and health sciences ,Computer Science - Databases ,Statistics - Machine Learning ,0202 electrical engineering, electronic engineering, information engineering ,Automatic business process discovery ,Cluster analysis ,business.industry ,Event (computing) ,Process mining Prediction ,Databases (cs.DB) ,Classification ,113 Computer and information sciences ,Software Engineering (cs.SE) ,Computer Science - Learning ,Statistical classification ,030104 developmental biology ,Clustering Feature selection ,020201 artificial intelligence & image processing ,Artificial intelligence ,Root cause analysis ,business ,computer - Abstract
We consider the problem of classifying business process instances based on structural features derived from event logs. The main motivation is to provide machine learning based techniques with quick response times for interactive computer-assisted root cause analysis. In particular, we create structural features from process mining, such as activity and transition occurrence counts and the ordering of activities, to be evaluated as potential features for classification. We show that adding such structural features increases the amount of information available, thus potentially increasing classification accuracy. However, there is an inherent trade-off, as using too many features leads to excessively long run-times for machine learning classification models. One way to improve the machine learning algorithms' run-time is to select only a small number of features with a feature selection algorithm. However, the run-time required by the feature selection algorithm must also be taken into account, and the classification accuracy should not suffer too much from the feature selection. The main contributions of this paper are as follows: First, we propose and compare six different feature selection algorithms by means of an experimental setup comparing their classification accuracy and achievable response times. Second, we discuss the potential use of feature selection results for computer-assisted root cause analysis, as well as the properties of different types of structural features in the context of feature selection., Comment: Extended version of a paper published in the proceedings of the BPM 2017 workshops
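The structural features described above can be sketched as follows (an assumed event-log representation, not the authors' implementation): activity occurrence counts and directly-follows transition counts per trace, which can then be vectorised and passed to a feature selection algorithm and classifier.

from collections import Counter

def trace_features(trace):
    """trace: ordered list of activity names for one process instance."""
    feats = Counter(trace)                               # activity occurrence counts
    feats.update(Counter(zip(trace, trace[1:])))         # transition occurrence counts
    return {str(k): v for k, v in feats.items()}

log = [["register", "check", "approve"],
       ["register", "check", "check", "reject"]]
feature_dicts = [trace_features(t) for t in log]
# feature_dicts[1] contains e.g. {'check': 2, "('check', 'check')": 1, ...}
# These dictionaries can be vectorised (e.g. with sklearn's DictVectorizer) before
# running a feature selection method and a classifier.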
- Published
- 2018
49. Parallel signal detection for generalized spatial modulation MIMO systems
- Author
-
Victor M. Garcia-Molla, M. Angeles Simarro, Pedro Alonso, F. J. Martínez-Zaldívar, Murilo Boratto, and Alberto Gonzalez
- Subjects
Parallel computing ,Computer science ,Generalized spatial modulation ,MIMO communications ,INGENIERIA TELEMATICA ,Spatial modulation ,Theoretical Computer Science ,Hardware and Architecture ,TEORIA DE LA SEÑAL Y COMUNICACIONES ,Electronic engineering ,CIENCIAS DE LA COMPUTACION E INTELIGENCIA ARTIFICIAL ,Detection theory ,Maximum likelihood detection ,Software ,Information Systems ,Mimo systems - Abstract
[EN] Generalized Spatial Modulation is a recently developed technique designed to enhance the efficiency of transmissions in MIMO systems. However, the procedure for correctly retrieving the transmitted signal at the receiving end is quite demanding; specifically, the computation of the maximum likelihood solution is computationally very expensive. In this paper, we propose a parallel method for computing the maximum likelihood solution using the OpenMP parallel programming API. The proposed parallel algorithm computes the maximum likelihood solution faster than the sequential version and substantially reduces the worst-case computing times., This work has been partially supported by the Spanish Ministry of Science, Innovation and Universities and by the European Union through grant RTI2018-098085-BC41 (MCUI/AEI/FEDER), by GVA through PROMETEO/2019/109, and by RED 2018-102668-T. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
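The paper implements the parallel search in OpenMP; the Python sketch below only illustrates how the maximum likelihood search factorises over antenna activation patterns, so each pattern can be evaluated independently by a worker. Dimensions, constellation and data are invented.

from concurrent.futures import ProcessPoolExecutor
from itertools import combinations, product
import numpy as np

def best_for_pattern(args):
    """Exhaustive ML metric over symbol vectors for one antenna activation pattern."""
    H, y, pattern, constellation = args
    Hs = H[:, pattern]                               # columns of the active antennas
    best = (np.inf, None, None)
    for symbols in product(constellation, repeat=len(pattern)):
        x = np.array(symbols)
        cost = np.linalg.norm(y - Hs @ x) ** 2       # ML metric for this candidate
        if cost < best[0]:
            best = (cost, pattern, symbols)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_rx, n_tx, n_active = 4, 4, 2
    constellation = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]   # QPSK
    H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
    y = rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)
    patterns = list(combinations(range(n_tx), n_active))
    tasks = [(H, y, p, constellation) for p in patterns]
    with ProcessPoolExecutor() as pool:               # one task per activation pattern
        results = pool.map(best_for_pattern, tasks)
    print(min(results, key=lambda r: r[0]))           # global ML solution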
- Published
- 2022
50. Operations planning test bed under rolling horizons, multiproduct, multiechelon, multiprocess for capacitated production planning modelling with strokes
- Author
-
Gregorio Rius-Sorolla, José P. García-Sabater, Julien Maheut, and Sofia Estelles-Miguel
- Subjects
021103 operations research ,Supply chain management ,Operations research ,GMOP ,Computer science ,Total cost ,Scheduling ,Demand patterns ,0211 other engineering and technologies ,Scheduling (production processes) ,02 engineering and technology ,Management Science and Operations Research ,Alternative process ,Production planning ,Service level ,ORGANIZACION DE EMPRESAS ,Rolling horizon ,Bill of materials - Abstract
[EN] One of the problems when conducting research on mathematical programming models for operations planning is obtaining an adequate database of experiments that can be used to verify advances and developments, with enough factors to understand their different consequences. This paper presents a test bed generator and an instance database for rolling-horizon analysis of multiechelon, multiproduct planning with alternative processes, multiple strokes and capacities, and different stochastic demand patterns, to be used with a stroke-based bill of materials that considers production, setup, storage and delay costs for operations management. From the analysis of the operations plans obtained with this test bed, it is concluded that a product structure with an alternative process obtains the lowest total cost and the highest service level. In addition, decreasing seasonal demand can present a lower total cost than constant demand, but generates a worse service level. This test bed will allow researchers to carry out further investigations to verify improvements in forecasting methods, rolling-horizon parameters, the software employed, etc.
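As an assumed illustration of the kind of stochastic demand patterns such a test bed can generate (constant, seasonal and decreasing seasonal), the following sketch draws noisy demands around the corresponding mean profiles; the parameters are invented.

import numpy as np

def demand_pattern(kind, periods=24, base=100.0, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(periods)
    if kind == "constant":
        mean = np.full(periods, base)
    elif kind == "seasonal":
        mean = base * (1 + 0.3 * np.sin(2 * np.pi * t / 12))
    elif kind == "decreasing_seasonal":
        mean = base * (1 - 0.02 * t) * (1 + 0.3 * np.sin(2 * np.pi * t / 12))
    else:
        raise ValueError(kind)
    return np.clip(rng.normal(mean, noise * base), 0, None)   # stochastic, non-negative

for pattern in ("constant", "seasonal", "decreasing_seasonal"):
    print(pattern, demand_pattern(pattern)[:4].round(1))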
- Published
- 2021