424 results on '"Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació"'
Search Results
2. A proposal to develop and assess professional skills in Engineering Final Year Projects
- Author
-
Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. Departament de Ciències de la Computació, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. CAP - Grup de Computació d'Altes Prestacions, Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Universitat Politècnica de Catalunya. CRAAX - Centre de Recerca d'Arquitectures Avançades de Xarxes, Universitat Politècnica de Catalunya. GPLN - Grup de Processament del Llenguatge Natural, Sánchez Carracedo, Fermín, Climent Vilaró, Joan, Corbalán González, Julita, Fonseca Casas, Pau, García Almiñana, Jordi, Herrero Zaragoza, José Ramón, Rodríguez Hontoria, Horacio, and Sancho Samsó, María Ribera
- Abstract
In this paper we discuss the results of piloting a methodology for Engineering Final Year Project (FYP) assessment that takes professional skills acquisition into consideration. The FYP is structured around three milestones; skills are assigned to each milestone according to the tasks required in each phase, and a list of indicators has been designed for every phase. The criteria are specified in a rubric and are made available to students. The FYP implementation includes evaluation methods and a homogeneous assessment throughout the project development in order to provide students with valuable project implementation support, to facilitate the project organization, to improve the quality of projects and thereby to reduce the academic drop-out rate. The proposed methodology has been implemented and piloted at the Barcelona School of Informatics, and the conclusions can easily be generalized to any other Engineering degree. This paper presents the FYP results for 1,569 students. The percentage of students finishing the FYP in previous degrees was 65% on average, whereas in the case of the Bachelor Degree in Informatics the percentage rose to 90% with the methodology proposed in this paper. In addition, 95% of these students finished their FYP in less than one year, compared to only 65% who finished it in less than one year in previous degrees., Peer Reviewed, Postprint (author's final draft)
- Published
- 2018
3. SETL: A programmable semantic extract-transform-load framework for semantic data warehouses
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Deb Nath, Rudra Pratap, Hose, Katja, Bach Pedersen, Torben, and Romero Moral, Óscar
- Abstract
To make better decisions in business analytics, organizations increasingly use external structured, semi-structured, and unstructured data in addition to the (mostly structured) internal data. Current Extract-Transform-Load (ETL) tools are not suitable for this “open world scenario” because they do not consider semantic issues in the integration process. Current ETL tools support neither processing semantic data nor creating a semantic Data Warehouse (DW), a repository of semantically integrated data. This paper describes our programmable Semantic ETL (SETL) framework. SETL builds on Semantic Web (SW) standards and tools and supports developers by offering a number of powerful modules, classes, and methods for (dimensional and semantic) DW constructs and tasks. Thus it supports semantic data sources in addition to traditional data sources, semantic integration, and creating or publishing a semantic (multidimensional) DW in terms of a knowledge base. A comprehensive experimental evaluation comparing SETL to a solution made with traditional tools (requiring much more hand-coding) on a concrete use case shows that SETL provides better programmer productivity, knowledge base quality, and performance., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
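The core "lift tabular data into a semantic warehouse" step that the SETL abstract describes can be sketched in a few lines. This is a minimal illustration in plain Python, not SETL's actual API; the namespace, keys, and property names are invented for the example:

```python
# Sketch of semantic ETL: lift tabular source rows into RDF triples
# (N-Triples syntax) so they can populate a knowledge-base-style DW.
# The example.org namespace and the column names are hypothetical.

EX = "http://example.org/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def row_to_triples(row, subject_key, type_uri):
    """Turn one source row into N-Triples lines: a type assertion plus
    one literal-valued property per remaining column."""
    subject = f"<{EX}{row[subject_key]}>"
    triples = [f"{subject} <{RDF_TYPE}> <{type_uri}> ."]
    for key, value in row.items():
        if key != subject_key:
            triples.append(f'{subject} <{EX}{key}> "{value}" .')
    return triples

# hypothetical extracted rows from a relational source
rows = [{"id": "sale42", "amount": "1200", "region": "EU"}]
kb = [t for row in rows for t in row_to_triples(row, "id", EX + "Sale")]
```

A real semantic ETL framework would additionally map the generated properties to a target ontology and validate the resulting knowledge base; this sketch only shows the triple-generation idea.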
4. Big data management challenges in SUPERSEDE
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Nadal Francesch, Sergi, Abelló Gamazo, Alberto, Romero Moral, Óscar, and Varga, Jovan
- Abstract
The H2020 SUPERSEDE (www.supersede.eu) project aims to support decision-making in the evolution and adaptation of software services and applications by exploiting end-user feedback and runtime data, with the overall goal of improving the end-users' quality of experience (QoE). Such QoE is defined as the overall performance of a system from the point of view of users, which must consider both the feedback and the runtime data gathered. End-user feedback is extracted from online forums, app stores, social networks and novel direct feedback channels, which connect software applications and service users to developers. Runtime data is primarily gathered by monitoring environmental sensors, infrastructures and usage logs. Hereafter, we discuss our solutions for the main data management challenges in SUPERSEDE., Peer Reviewed, Postprint (published version)
- Published
- 2017
5. A heuristic method for a congested capacitated transit assignment model with strategies
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Codina Sancho, Esteve, and Rosell Camps, Francisca
- Abstract
This paper addresses the problem of solving the congested transit assignment problem with strict capacities. The model under consideration is the extension made by Cominetti and Correa (2001), for which the only solution method capable of resolving large transit networks is the one proposed by Cepeda et al. (2006). This transit assignment model was recently formulated by the authors as both a variational inequality problem and a fixed point inclusion problem. As a consequence of these results, this paper proposes an algorithm for solving the congested transit assignment problem with strict line capacities. The proposed method consists of using an MSA-based heuristic for finding a solution for the fixed point inclusion formulation. Additionally, it offers the advantage of always obtaining capacity-feasible flows with equal computational performance in cases of moderate congestion and with greater computational performance in cases of highly congested networks. A set of computational tests on realistic small- and large-scale transit networks under various congestion levels are reported, and the characteristics of the proposed method are analyzed., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
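The MSA (Method of Successive Averages) idea underlying the heuristic in this abstract can be illustrated generically: average the current solution with an auxiliary solution using step 1/k. This is a textbook sketch, not the authors' algorithm; the two-route network and the `all_or_nothing` rule are invented for the example:

```python
def msa(initial, auxiliary, iters=1000):
    """Method of Successive Averages: move the current solution toward
    an auxiliary solution with step 1/k, so after k iterations the
    solution equals the running average of the auxiliary solutions --
    a common fixed-point heuristic in transit assignment."""
    x = list(initial)
    for k in range(1, iters + 1):
        y = auxiliary(x)
        x = [xi + (yi - xi) / k for xi, yi in zip(x, y)]
    return x

def all_or_nothing(x):
    # hypothetical two-route network: send all 10 demand units to the
    # route whose congested cost (here simply its own flow) is lower
    return [10.0, 0.0] if x[0] <= x[1] else [0.0, 10.0]

flows = msa([10.0, 0.0], all_or_nothing)  # converges toward [5.0, 5.0]
```

The averaging damps the oscillation that the all-or-nothing rule alone would produce, which is why MSA-style schemes are used on congested networks where direct iteration diverges.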
6. Using specification and description language for life cycle assessment in buildings
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Fonseca Casas, Pau, and Fonseca Casas, Antoni
- Abstract
The definition of a Life Cycle Assessment (LCA) for a building or an urban area is a complex task due to the inherent complexity of all the elements that must be considered. Furthermore, a multidisciplinary approach is required due to the different sources of knowledge involved in this kind of project. This multidisciplinary approach makes it necessary to use a formal language to fully represent the complexity of the models used. In this paper, we explore the use of the Specification and Description Language (SDL) to represent the LCA of a building and residential area. We also introduce a tool that uses this idea to implement an optimization and simulation mechanism to define the optimal solution for the sustainability of a specific building or residential area., Peer Reviewed, Postprint (published version)
- Published
- 2017
7. A Data-driven approach to improve the process of data-intensive API creation and evolution
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Abelló Gamazo, Alberto, Ayala Martínez, Claudia Patricia, Farré Tost, Carles, Gómez Seoane, Cristina, Oriol Hilari, Marc, and Romero Moral, Óscar
- Abstract
The market of data-intensive Application Programming Interfaces (APIs) has recently experienced exponential growth, but the creation and evolution of such APIs is still done ad hoc, with little automated support and reported deficiencies. These drawbacks hinder the productivity of the developers of those APIs and of the services built on top of them. In this exploratory paper, we promote a data-driven approach to improve the automation of data-intensive API creation and evolution. In a release cycle, data coming from API usage and developers will be gathered to compute several indicators whose analysis will guide the planning of the next release. This data will also help to generate complete documentation, facilitating the adoption of APIs by third parties., Peer Reviewed, Postprint (published version)
- Published
- 2017
8. Practical update management in ontology-based data access
- Author
-
Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Giacomo, Giuseppe De, Lembo, Domenico, Oriol Hilari, Xavier, Savo, Domenico Fabio, and Teniente López, Ernest
- Abstract
Ontology-based Data Access (OBDA) is gaining importance both scientifically and practically. However, little attention has been paid so far to the problem of updating OBDA systems. This is an essential issue if we want to be able to cope with modifications of data both at the ontology and at the source level, while maintaining the independence of the data sources. In this paper, we propose mechanisms to properly handle updates in this context. We show that updating data both at the ontology and source level is first-order rewritable. We also provide a practical implementation of such updating mechanisms based on non-recursive Datalog., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
9. OCLuniv: Expressive UML/OCL conceptual schemas for finite reasoning
- Author
-
Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Oriol Hilari, Xavier, and Teniente López, Ernest
- Abstract
Full UML/OCL is so expressive that most reasoning tasks are known to be undecidable in schemas defined with these languages. To tackle this situation, the literature has proposed mainly three decidable fragments of UML/OCL: UML with no OCL, UML with limited OCL and no maximum cardinality constraints (OCL-Lite), and UML with limited OCL and no minimum cardinality constraints (OCLuniv). Since most conceptual schemas make use of OCL together with min and max cardinalities, this poses a strong limitation on current proposals. In this paper, we go beyond these limits by showing that OCLuniv with acyclic min cardinality constraints and path acyclicity constraints also preserves decidability. In this way, we establish a language that can deal with most of the identified UML/OCL constraint patterns. We also empirically test the expressiveness of this language through different UML/OCL case studies., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
10. The universal ontology: A vision for conceptual modeling and the semantic web
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, and Olivé Ramon, Antoni
- Abstract
This paper puts forward a vision of a universal ontology (UO) aiming at solving, or at least greatly alleviating, the semantic integration problem in the field of conceptual modeling and the understandability problem in the field of the semantic web. So far it has been assumed that the UO is not feasible in practice, but we think that it is time to revisit that assumption in the light of the current state of the art. This paper aims to be a step in this direction. We try to make an initial proposal of a feasible UO. We present the scope of the UO, the kinds of its concepts, and the elements that could comprise the specification of each concept. We propose a modular structure for the UO consisting of four levels. We argue that the UO needs a complete set of concept composition operators, and we sketch three of them. We also tackle a few issues related to the feasibility of the UO, which we believe could be surmountable. Finally, we discuss the desirability of the UO, and we explain why we conjecture that there are already organizations that have the knowledge and resources needed to develop it, and that might have an interest in its development in the near future., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
11. Process conformance checking by relaxing data dependencies
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Estañol Lamarca, Montserrat, Mazuran, Mirjana, Oriol Hilari, Xavier, Tanca, Letizia, and Teniente López, Ernest
- Abstract
Given the events modeled by a business process, it may happen in the presence of alternative execution paths that the data required by a certain event somehow determines which event is executed next. The process can then be modeled by using an approximate functional dependency between the data required by both events. We apply this approach in the context of conformance checking: given a business process model with a functional dependency (FD) that no longer corresponds to the observed reality, we propose corrections to the FD to make it exact or at least to improve its confidence and produce a more accurate model., Peer Reviewed, Postprint (published version)
- Published
- 2017
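The confidence of an approximate functional dependency, as used in conformance settings like this one, is commonly measured as the largest fraction of rows that can be kept so the dependency holds exactly. A small sketch — the event log and attribute names are hypothetical, and the measure shown is the standard g3-style confidence, not necessarily the paper's exact definition:

```python
from collections import Counter, defaultdict

def fd_confidence(rows, lhs, rhs):
    """Confidence of the approximate FD lhs -> rhs: for each lhs value,
    keep only the rows carrying its most frequent rhs value, and return
    the kept fraction (1.0 means the FD holds exactly)."""
    groups = defaultdict(Counter)
    for row in rows:
        groups[tuple(row[a] for a in lhs)][tuple(row[a] for a in rhs)] += 1
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    return kept / len(rows)

# hypothetical event log: 'amount' almost determines the next event
log = [
    {"amount": "high", "next_event": "manual_review"},
    {"amount": "high", "next_event": "manual_review"},
    {"amount": "high", "next_event": "auto_approve"},  # the violating trace
    {"amount": "low",  "next_event": "auto_approve"},
]
conf = fd_confidence(log, ["amount"], ["next_event"])  # 3 of 4 rows kept
```

A conformance checker in this spirit would then either relax the FD (accept confidence < 1) or propose a correction, e.g. adding an attribute to the left-hand side, that raises the confidence back toward 1.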
12. Integrated approach to network design and frequency setting problem in railway rapid transit systems
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, López Ramos, Francesc, Codina Sancho, Esteve, Marín Gracia, Ángel, and Guarnaschelli, Armando
- Abstract
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0, This work presents an optimization-based approach to simultaneously solve the Network Design and Frequency Setting phases in the context of railway rapid transit networks. The Network Design phase allows expanding existing networks as well as building new ones from scratch, considering infrastructure costs. In the Frequency Setting phase, local and/or express services are established considering transportation resource capacities and operation costs. Integrated approaches to these phases improve the transit planning process. Nevertheless, this integration is challenging both in modeling and in the computational effort needed to obtain solutions. In this work, a Lexicographic Goal Programming problem modeling this integration is introduced, together with a solving strategy. A solution to the problem is obtained by first applying a Corridor Generation Algorithm and then a Line Splitting Algorithm to deal with multiple line construction. Two case studies are used for validation, including the Seville and Santiago de Chile rapid transit networks. Detailed solution reports are shown and discussed. Conclusions and future research directions are given., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
13. Notes on using simulation-optimization techniques in traffic simulation
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Ros Oton, Xavier, Montero Mercadé, Lídia, and Barceló Bugeda, Jaime
- Abstract
© 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0, Mathematical and simulation models of systems lie at the core of many decision support systems, and their role becomes more critical when the system is more complex. The decision process usually involves optimizing some utility function that evaluates the performance indicators measuring the impacts of the decisions. The complexity of the system directly increases the difficulty when the associated function to be optimized is a non-analytical, non-differentiable, non-linear function that can only be evaluated by simulation. Simulation-optimization techniques are especially suited to these cases, and they are increasingly used with traffic models, which represent an archetypal case of complex, dynamic systems that exhibit highly stochastic characteristics. In this approach, simulation is used to evaluate the objective function, and it is combined with a non-differentiable optimization technique for solving the associated optimization problem. Of these techniques, one of the most commonly used is Simultaneous Perturbation Stochastic Approximation (SPSA). This paper analyses, discusses and presents the computational results from applying this technique in the calibration of traffic simulation models. This study uses variants of SPSA, replacing the usual gradient approach with a combination of projected gradient and trust region methods. A special approach has also been analyzed for parameter calibration cases in which each variable has a different magnitude., Peer Reviewed, Postprint (published version)
- Published
- 2017
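SPSA estimates a gradient from just two noisy objective evaluations per iteration, regardless of the number of parameters, which is what makes it attractive when each evaluation is a full traffic simulation run. A minimal generic sketch — standard textbook SPSA with typical gain sequences, not the paper's projected-gradient or trust-region variants; the noisy quadratic "simulator" is invented:

```python
import random

def spsa_minimize(loss, theta, iters=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Simultaneous Perturbation Stochastic Approximation: approximate
    the gradient from two loss evaluations per iteration, whatever the
    dimension of theta."""
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                 # decaying step size
        ck = c / k ** gamma                 # decaying perturbation size
        delta = [random.choice((-1, 1)) for _ in theta]  # Bernoulli +/-1
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = loss(plus) - loss(minus)     # one scalar drives all coordinates
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# toy "simulator": noisy quadratic whose true optimum is (3, 3)
def noisy_loss(theta):
    return sum((t - 3.0) ** 2 for t in theta) + random.gauss(0.0, 0.01)

random.seed(0)
calibrated = spsa_minimize(noisy_loss, [0.0, 0.0])
```

Because only the scalar difference `diff` is needed, the cost per iteration stays at two simulations even when calibrating hundreds of parameters, unlike finite-difference gradients.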
14. Frequent patterns in ETL workflows: An empirical approach
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Theodorou, Vasileios, Abelló Gamazo, Alberto, Thiele, Maik, and Lehner, Wolfgang
- Abstract
The complexity of Business Intelligence activities has driven the proposal of several approaches for the effective modeling of Extract-Transform-Load (ETL) processes, based on the conceptual abstraction of their operations. Apart from fostering automation and maintainability, such modeling also provides the building blocks to identify and represent frequently recurring patterns. Despite some existing work on classifying ETL components and functionality archetypes, the issue of systematically mining such patterns and their connection to quality attributes such as performance has not yet been addressed. In this work, we propose a methodology for the identification of ETL structural patterns. We logically model the ETL workflows using labeled graphs and employ graph algorithms to identify candidate patterns and to recognize them in different workflows. We showcase our approach through a use case applied on implemented ETL processes from the TPC-DI specification, and we present the mined ETL patterns. By decomposing ETL processes into identified patterns, our approach provides a stepping stone for the automatic translation of ETL logical models to their conceptual representation and for generating fine-grained cost models at the granularity level of patterns., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
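The simplest instance of mining structural patterns over labeled workflow graphs is counting how many workflows contain a given labeled edge. A toy sketch — the operation labels and workflows are invented, and the paper's method handles richer subgraph patterns than single edges:

```python
from collections import Counter

def frequent_edge_patterns(workflows, min_support=2):
    """Count labeled edges (source-op label, target-op label) across
    ETL workflow graphs and keep those appearing in at least
    min_support workflows -- the simplest structural pattern mining."""
    counts = Counter()
    for nodes, edges in workflows:           # nodes: id -> operation label
        seen = {(nodes[u], nodes[v]) for u, v in edges}
        counts.update(seen)                  # support = number of workflows
    return {p: c for p, c in counts.items() if c >= min_support}

# two hypothetical ETL workflows as labeled graphs
w1 = ({1: "Extract", 2: "Filter", 3: "Load"}, [(1, 2), (2, 3)])
w2 = ({1: "Extract", 2: "Filter", 3: "Join", 4: "Load"},
      [(1, 2), (2, 3), (3, 4)])
patterns = frequent_edge_patterns([w1, w2])
```

Growing the mined fragments from edges to larger subgraphs is what turns this counting step into proper frequent-subgraph mining.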
15. Ensuring the semantic correctness of a BAUML artifact-centric BPM
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Estañol Lamarca, Montserrat, Sancho Samsó, María Ribera, and Teniente López, Ernest
- Abstract
Context: Using models to represent business processes provides several advantages, such as facilitating the communication between the stakeholders or being able to check the correctness of the processes before their implementation. In contrast to traditional process modeling approaches, the artifact-centric approach treats data as a key element of the process, also considering the tasks or activities that are performed in it. Objective: This paper presents a way to verify and validate the semantic correctness of an artifact-centric business process model defined using a combination of UML and OCL models - a BAUML model. Method: We achieve our goal by presenting several algorithms that encode the initial models into first-order logic, which then allows us to use an existing satisfiability checking tool to determine their correctness. Results: An approach to verify and validate an artifact-centric BPM specified in BAUML, which uses a combination of UML and OCL models. To do this, we provide a method to translate all BAUML components into a set of logic formulas. The result of this translation ensures that the only changes allowed are those specified in the model, and that those changes take place according to the order established by the model. Having obtained this logic representation, these models can be validated by any existing reasoning method able to deal with negation of derived predicates. Moreover, we show how to automatically generate the relevant tests to validate the models. We also show the feasibility of our approach by implementing a prototype tool and applying it to a running example. Conclusion: It is feasible to ensure the semantic correctness of an artifact-centric business process model in practice., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
16. The malaria system microApp: A new, mobile device-based tool for malaria diagnosis
- Author
-
Universitat Politècnica de Catalunya. Departament de Física, Universitat Politècnica de Catalunya. BIOCOM-SC - Grup de Biologia Computacional i Sistemes Complexos, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Oliveira, Alisson Dantas, Prats Soler, Clara, Espasa, Mateu, Zarzuela, Francesc, Montañola Sales, Cristina, Silgado, Aroa, López Codina, Daniel, Arruda, Mercia Eliane, Gómez, Jordi, and Albuquerque, Jones O.
- Abstract
Background: Malaria is a public health problem that affects remote areas worldwide. Climate change has contributed to the problem by allowing for the survival of Anopheles in previously uninhabited areas. As such, several groups have made developing new systems for the automated diagnosis of malaria a priority. Objective: The objective of this study was to develop a new, automated, mobile device-based diagnostic system for malaria. The system uses Giemsa-stained peripheral blood samples combined with light microscopy to identify the Plasmodium falciparum species in the ring stage of development. Methods: The system uses image processing and artificial intelligence techniques as well as a known face detection algorithm to identify Plasmodium parasites. The algorithm is based on the concepts of integral images and Haar-like features, and makes use of weak classifiers with adaptive boosting learning. The search scope of the learning algorithm is reduced in the preprocessing step by removing the background around blood cells. Results: As a proof-of-concept experiment, the tool was used on 555 malaria-positive and 777 malaria-negative previously made slides. The accuracy of the system was, on average, 91%, meaning that for every 100 parasite-infected samples, 91 were identified correctly. Conclusions: Accessibility barriers of low-resource countries can be addressed with low-cost diagnostic tools. Our system, developed for mobile devices (mobile phones and tablets), addresses this by enabling access to health centers in remote communities, and importantly, not depending on extensive malaria expertise or expensive diagnostic detection equipment., Peer Reviewed, Postprint (published version)
- Published
- 2017
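The integral image (summed-area table) mentioned in the abstract is what lets any rectangular pixel sum, and hence any Haar-like feature, be evaluated in constant time. A minimal sketch of the generic technique, not the authors' implementation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0                              # running sum along the row
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over img[top..bottom][left..right] via at most four lookups."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
# a two-rectangle Haar-like feature: left column minus right column
feature = rect_sum(ii, 0, 0, 1, 0) - rect_sum(ii, 0, 1, 1, 1)
```

In a Viola-Jones-style detector, thousands of such rectangle differences are evaluated per window, so the O(1) lookup is what makes scanning a whole blood-smear image tractable on a mobile device.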
17. E-assessment of relational database skills by means of LearnSQL
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Universitat Politècnica de Catalunya. SUSHITOS - Grup de recerca en serveis per a tecnologies d'informació socials, ubiqües i humanístiques, i per a software lliure, Universitat Politècnica de Catalunya. GPLN - Grup de Processament del Llenguatge Natural, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Quer, Carme, Abelló Gamazo, Alberto, Burgués Illa, Xavier, Casany Guerrero, María José, Martín Escofet, Carme, Rodríguez González, M. Elena, Romero Moral, Óscar, and Urpí Tubella, Antoni
- Abstract
LearnSQL is a software system that allows the automatic and efficient e-learning and e-assessment of relational database skills. It has been used at the Barcelona School of Informatics for 18 semesters with an average of 200 students per semester. This paper shows the functionalities of LearnSQL subsystems by means of specific and understandable examples., Peer Reviewed, Postprint (published version)
- Published
- 2017
18. Simplification of UML/OCL schemas for efficient reasoning
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Oriol Hilari, Xavier, and Teniente López, Ernest
- Abstract
Ensuring the correctness of a conceptual schema is an essential task in order to avoid the propagation of errors during software development. The kind of reasoning required to perform such a task is known to be exponential for UML class diagrams alone and even harder when considering OCL constraints. Motivated by this issue, we propose an innovative method aimed at removing constraints and other UML elements of the schema to obtain a simplified one that preserves the same reasoning outcomes. In this way, we can reason about the correctness of the initial artifact by reasoning on a simplified version of it. Thus, the efficiency of the reasoning process is significantly improved. In addition, since our method is independent of the reasoning engine used, any reasoning method may benefit from it., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
19. Enabling IoT ecosystems through platform interoperability
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Bröring, Arne, Schmid, Stefan, Schindhelm, Corina-Kim, Khelil, Abdelmajid, Kabisch, Sebastian, Kramer, Denis, Le Phuoc, Danh, Mitic, Jelena, Anicic, Darko, and Teniente López, Ernest
- Abstract
Today, the Internet of Things (IoT) comprises vertically oriented platforms for things. Developers who want to use them need to negotiate access individually and adapt to the platform-specific API and information models. Having to perform these actions for each platform often outweighs the possible gains from adapting applications to multiple platforms. This fragmentation of the IoT and the missing interoperability result in high entry barriers for developers and prevent the emergence of broadly accepted IoT ecosystems. The BIG IoT (Bridging the Interoperability Gap of the IoT) project aims to ignite an IoT ecosystem as part of the European Platforms Initiative. As part of the project, researchers have devised an IoT ecosystem architecture. It employs five interoperability patterns that enable cross-platform interoperability and can help establish successful IoT ecosystems., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
20. A visualization tool based on traffic simulation for the analysis and evaluation of smart city policies, innovative vehicles and mobility concepts
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Montero Mercadé, Lídia, Linares Herreros, María Paz, Serch, Oriol, and Casanovas Garcia, Josep
- Abstract
The CitScale tool is a software platform for visualizing, analyzing and comparing the impacts of smart city policies based on innovative mobility concepts in urban areas. It places emphasis on new automotive vehicles aimed at reducing traffic or environmental impacts. This paper introduces this traffic simulation-based tool, and two case studies developed for different scenarios in Barcelona City are briefly presented to demonstrate the capabilities of the tool when it is combined with microscopic traffic simulation software. The first case presents an extensive evaluation of new innovative vehicles (electric vehicles, bikes and three-wheeled scooters) and mobility concepts (trip-sharing). In the second, data provided by connected cars are analyzed in order to compare different navigation strategies and how they affect the city. Finally, some of the results obtained from both cases are concisely presented in order to show the potential of the proposed tool., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
21. Linking data and BPMN processes to achieve executable models
- Author
-
Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Giacomo, Giuseppe De, Oriol Hilari, Xavier, Estañol Lamarca, Montserrat, and Teniente López, Ernest
- Abstract
We describe a formally well-founded approach to link data and processes conceptually, based on adopting UML class diagrams to represent data and BPMN to represent the process. The UML class diagram, together with a set of additional process variables called the Artifact, forms the information model of the process. All activities of the BPMN process refer to this information model by means of OCL operation contracts. We show that the resulting semantics, while abstract, is fully executable. We also provide an implementation of the executor., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
22. Sustainable technology results for sewage networks in smart cities
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Grau Saldes, Antoni, Bolea Monte, Yolanda, Puig-Pey Clavería, Ana María, Sanfeliu Cortés, Alberto, and Casanovas Garcia, Josep
- Abstract
The objective of this paper is to explain the importance of research in wastewater transportation (sewage systems) using new technologies such as robotic systems and information and communication technologies. ECHORD++ (European Coordination Hub for Open Robotics Development) is a very useful tool to foster this research and to match needs with solutions. In this paper, the authors explain the tool as well as the methodology to promote robotics research in urban environments, and the ongoing experience demonstrates that substantial advances are being made in this field., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
23. Load-sharing policies in parallel simulation of agent-based demographic models
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Pellegrini, Alessandro, Montañola Sales, Cristina, Quaglia, Francesco, and Casanovas Garcia, Josep
- Abstract
Execution parallelism in Agent-Based Simulation (ABS) makes it possible to deal with complex, large-scale models. This raises the need for runtime environments able to fully exploit hardware parallelism while jointly offering ABS-suited programming abstractions. In this paper, we target last-generation Parallel Discrete Event Simulation (PDES) platforms for multicore systems. We discuss a programming model to support both implicit (in-place access) and explicit (message passing) interactions across concurrent Logical Processes (LPs). We discuss different load-sharing policies combining event rate and implicit/explicit LPs’ interactions. We present a performance study conducted on a synthetic test case, representative of a class of agent-based models., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
24. Simulation model to find the best comfort, energy and cost scenarios for building refurbishment
- Author
-
Universitat Politècnica de Catalunya. Departament de Mecànica de Fluids, Universitat Politècnica de Catalunya. Departament de Màquines i Motors Tèrmics, Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Institut de Recerca en Energía de Catalunya, Universitat Politècnica de Catalunya. SUMMLab - Sustainability Measurement and Modeling Lab, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Fonseca Casas, Antoni, Ortiz, Joana Aina, Garrido Soriano, Núria, Fonseca Casas, Pau, and Salom Tormo, Jaume
- Abstract
This article proposes a methodology to assess building behaviour whilst taking its life cycle into account. Understanding of the system can be obtained by combining well-known energy consumption calculation engines (TRNSYS) with co-simulation processes defined using the Specification and Description Language (SDL). To find the best comfort, energy and cost scenarios for energy rehabilitation, co-simulation is conducted in two phases: first, the best passive-system scenarios are found and prioritized; then, the active systems are evaluated through a brute-force analysis. The article provides the results for a case study: a single-family home built between 1991 and 2007 and located in a Mediterranean climate zone. The methodology provides a set of passive energy efficiency measures that improve the building by up to two grades in the energy labelling system. Using the methodology and the proposed model has enabled us to dramatically reduce the run time, by up to 75%., Postprint (published version)
- Published
- 2017
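The brute-force phase over measure combinations can be sketched as follows. Everything here is an illustrative assumption: the measure names, savings fractions, costs and baseline demand are made up for the example and are not values from the paper's case study.

```python
from itertools import product

# Hypothetical catalogue of passive measures (names, savings fractions,
# costs and baseline demand are illustrative assumptions).
MEASURES = ["wall_insulation", "window_upgrade", "roof_insulation"]
ENERGY_SAVING = {"wall_insulation": 0.25, "window_upgrade": 0.10,
                 "roof_insulation": 0.15}   # fraction of demand removed
COST = {"wall_insulation": 9000, "window_upgrade": 6000,
        "roof_insulation": 4000}            # investment in EUR
BASELINE_KWH = 12000                        # assumed annual heating demand

def scenarios():
    """Brute-force step: enumerate every on/off combination of measures."""
    for choice in product([0, 1], repeat=len(MEASURES)):
        applied = [m for m, on in zip(MEASURES, choice) if on]
        energy = float(BASELINE_KWH)
        for m in applied:
            energy *= 1 - ENERGY_SAVING[m]  # assume multiplicative savings
        yield applied, energy, sum(COST[m] for m in applied)

def best_under_budget(budget):
    """Lowest-energy scenario whose total investment fits the budget."""
    feasible = [s for s in scenarios() if s[2] <= budget]
    return min(feasible, key=lambda s: s[1])
```

With n candidate measures the enumeration visits 2^n scenarios, which is exactly why the paper's methodology prioritizes passive scenarios first and reserves exhaustive search for the remaining design space.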
25. Table identification and reconstruction in spreadsheets
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Koci, Elvis, Thiele, Maik, Romero Moral, Óscar, and Lehner, Wolfgang
- Abstract
Spreadsheets are one of the most successful content generation tools, used in almost every enterprise to perform data transformation, visualization, and analysis. The high degree of freedom provided by these tools results in very complex sheets, intermingling the actual data with formatting, formulas, layout artifacts, and textual metadata. To unlock the wealth of data contained in spreadsheets, a human analyst will often have to understand and transform the data manually. To overcome this cumbersome process, we propose a framework that is able to automatically infer the structure and extract the data from these documents in a canonical form. In this paper, we describe our heuristics-based method for discovering tables in spreadsheets, given that each cell is classified as either header, attribute, metadata, data, or derived. Experimental results on a real-world dataset of 439 worksheets (858 tables) show that our approach is feasible and effectively identifies tables within partially structured spreadsheets., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
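Given the per-cell labels the abstract mentions (header, attribute, metadata, data, derived), a much-simplified, hypothetical stand-in for the paper's table-discovery heuristics might group maximal runs of non-empty rows into candidate tables:

```python
# Cell labels as a classifier might output them; this toy version only
# distinguishes empty, header and data cells (an assumption for brevity).
EMPTY, HEADER, DATA = ".", "H", "D"

def find_tables(grid):
    """Group maximal runs of non-empty rows into candidate tables and
    separate header rows from data rows. This is a simplified sketch,
    not the paper's actual heuristics."""
    tables, run = [], []
    for r, row in enumerate(grid + [[EMPTY] * len(grid[0])]):  # sentinel row
        if any(c != EMPTY for c in row):
            run.append(r)
        elif run:  # an empty row closes the current candidate table
            headers = [r2 for r2 in run if HEADER in grid[r2]]
            data = [r2 for r2 in run if r2 not in headers]
            tables.append({"header_rows": headers, "data_rows": data})
            run = []
    return tables
```

Real spreadsheets complicate this with side-by-side tables, interleaved metadata and derived cells, which is where the paper's richer label set and heuristics come in.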
26. SM4MQ: a semantic model for multidimensional queries
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Varga, Jovan, Dobrokhotova, Ekaterina, Romero Moral, Óscar, Bach Pedersen, Torben, and Thomsen, Christian
- Abstract
On-Line Analytical Processing (OLAP) is a data analysis approach to support decision-making. On top of that, Exploratory OLAP is a novel initiative for the convergence of OLAP and the Semantic Web (SW) that enables the use of OLAP techniques on SW data. Moreover, OLAP approaches exploit different metadata artifacts (e.g., queries) to assist users with the analysis. However, the modeling and sharing of most of these artifacts are typically overlooked. Thus, in this paper we focus on the query metadata artifact in the Exploratory OLAP context and propose an RDF-based vocabulary for its representation, sharing, and reuse on the SW. As OLAP is based on the underlying multidimensional (MD) data model, we denote such queries as MD queries and define SM4MQ: A Semantic Model for Multidimensional Queries. Furthermore, we propose a method to automate the exploitation of queries by means of SPARQL. We apply the method to a use case of transforming queries from SM4MQ to a vector representation. For the use case, we developed a prototype and performed an evaluation that shows how our approach can significantly ease and support user assistance such as query recommendation., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
27. Validation of Service Blueprint models by means of formal simulation techniques
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Estañol Lamarca, Montserrat, Marcos, Esperanza, Oriol Hilari, Xavier, Pérez, Francisco J., Teniente López, Ernest, and Vara, Juan M.
- Abstract
As service design has gained interest in recent years, so has one of its primary tools: the Service Blueprint. In essence, a service blueprint is a graphical tool for the design of business models, specifically for the design of business service operations. Despite its level of adoption, tool support for service design tasks is still in its early days, and the available tools for service blueprint modeling are mainly focused on enhancing usability and enabling collaborative editing, disregarding the formal aspects of modeling. In this paper we present a way to support the validation of service blueprint models by simulation. This approach is based on annotating the models with formal semantics, so that each task can be translated into formal logic and, from there, into executable SQL statements. This work opens a new direction for bridging formal techniques and creative service design processes., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
28. An integration-oriented ontology to govern evolution in big data ecosystems
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Nadal Francesch, Sergi, Romero Moral, Óscar, Abelló Gamazo, Alberto, Vassiliadis, Panos, and Vansummeren, Stijn
- Abstract
Big Data architectures allow heterogeneous data from multiple sources to be flexibly stored and processed in their original format. The structure of those data, commonly supplied by means of REST APIs, is continuously evolving, forcing the data analysts who use them to adapt their analytical processes after each release. This becomes more challenging when aiming to perform an integrated or historical analysis of multiple sources. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct for a data governance protocol that systematically annotates and integrates data from multiple sources in their original format. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. A functional evaluation on real-world APIs is performed in order to validate our approach., Peer Reviewed, Postprint (published version)
- Published
- 2017
29. Using simulation to estimate critical paths and survival functions in aircraft turnaround processes
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Fonseca Casas, Pau, and Guimarans Serrano, Daniel
- Abstract
In the context of aircraft turnaround processes, this paper illustrates how simulation can be used not only to analyze critical activities and paths, but also to generate the associated survival functions, thus providing the probabilities that the turnaround can be completed before a series of target times. After motivating the relevance of the topic for both airlines and airports, the paper reviews some related work and proposes the use of Monte Carlo simulation to obtain the critical paths of the turnaround process and generate the associated survival function. This analysis is performed assuming stochastic completion times for each activity in the process, which contrasts with current practices in which deterministic times are usually assumed. A series of numerical experiments illustrates these ideas. These experiments are based on a realistic environment considering the Boeing 737-800 aircraft, although the analysis can be easily extended to any other configuration. Different levels of passenger occupancy are analyzed, as well as two alternative designs for the turnaround stage., Peer Reviewed, Postprint (published version)
- Published
- 2017
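The Monte Carlo idea above can be sketched in a few lines: sample stochastic activity durations, propagate earliest finish times through the precedence network (the makespan is the critical-path length), and estimate the survival function empirically. The activity network and triangular distributions below are hypothetical, not the paper's fitted data.

```python
import random

# Hypothetical turnaround activity network: activity -> (predecessors,
# (min, mode, max) duration in minutes). Purely illustrative.
ACTIVITIES = {
    "deboarding": ([], (8, 10, 15)),
    "cleaning":   (["deboarding"], (10, 12, 20)),
    "catering":   (["deboarding"], (8, 10, 18)),
    "fueling":    ([], (12, 15, 25)),
    "boarding":   (["cleaning", "catering", "fueling"], (15, 18, 30)),
}

def one_turnaround(rng):
    """Sample one realization: propagate earliest finish times through
    the precedence network; the makespan is the critical-path length."""
    finish = {}
    for act, (preds, (a, m, b)) in ACTIVITIES.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[act] = start + rng.triangular(a, b, m)  # (low, high, mode)
    return max(finish.values())

def survival(target_times, runs=10000, seed=42):
    """Empirical survival function: P(turnaround NOT finished by time t)."""
    rng = random.Random(seed)
    totals = [one_turnaround(rng) for _ in range(runs)]
    return {t: sum(x > t for x in totals) / runs for t in target_times}
```

Recording which path achieves the makespan in each replication would additionally give the criticality frequency of every activity, which is the other output the paper discusses.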
30. On-line analytical processing
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Abelló Gamazo, Alberto, and Romero Moral, Óscar
- Abstract
On-line analytical processing (OLAP) describes an approach to decision support, which aims to extract knowledge from a data warehouse, or more specifically, from data marts. Its main idea is providing navigation through data to non-expert users, so that they are able to interactively generate ad hoc queries without the intervention of IT professionals. This name was introduced in contrast to on-line transactional processing (OLTP), so that it reflected the different requirements and characteristics between these classes of uses. The concept falls in the area of business intelligence., Peer Reviewed, Postprint (author's final draft)
- Published
- 2017
31. Updating DL-Lite ontologies through first-order queries
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Giacomo, Giuseppe De, Oriol Hilari, Xavier, Rosati, Riccardo, and Savo, Domenico Fabio
- Abstract
In this paper we study instance-level update in DL-LiteA, the description logic underlying the OWL 2 QL standard. In particular, we focus on formula-based approaches to ABox insertion and deletion. We show that DL-LiteA, which is well known for enjoying first-order rewritability of query answering, enjoys a first-order rewritability property also for updates. That is, every update can be reformulated into a set of insertion and deletion instructions computable through a nonrecursive datalog program. Such a program is readily translatable into a first-order query over the ABox considered as a database, and hence into SQL. By exploiting this result, we implement an update component for DL-LiteA-based systems and perform some experiments showing that the approach works in practice., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
32. Automated data pre-processing via meta-learning
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Universitat Politècnica de Catalunya. LIAM - Laboratori de Modelització i Anàlisi de la Informació, Bilalli, Besim, Abelló Gamazo, Alberto, Aluja Banet, Tomàs, and Wrembel, Robert
- Abstract
A data mining algorithm may perform differently on datasets with different characteristics; e.g., it might perform better on a dataset with continuous attributes rather than with categorical attributes, or the other way around. As a matter of fact, a dataset usually needs to be pre-processed. Taking into account all the possible pre-processing operators, there exists a staggeringly large number of alternatives, and inexperienced users become overwhelmed. We show that this problem can be addressed by an automated approach, leveraging ideas from metalearning. Specifically, we consider a wide range of data pre-processing techniques and a set of data mining algorithms. For each data mining algorithm and selected dataset, we are able to predict the transformations that improve the result of the algorithm on the respective dataset. Our approach will help non-expert users to more effectively identify the transformations appropriate to their applications, and hence to achieve improved results., Peer Reviewed, Postprint (published version)
- Published
- 2016
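The metalearning idea, predicting a useful transformation from dataset characteristics, can be illustrated with a toy nearest-neighbour recommender. The meta-features, the history table and the k-NN choice below are illustrative assumptions, not the paper's actual meta-features or learner.

```python
import math

# Toy meta-dataset linking dataset characteristics (meta-features) to the
# pre-processing transformation that worked best. Every entry is an
# illustrative assumption, not data from the paper's experiments.
# Meta-features: (fraction of categorical attributes, fraction of missing values)
HISTORY = [
    ((0.8, 0.05), "one_hot_encode"),
    ((0.7, 0.10), "one_hot_encode"),
    ((0.1, 0.40), "impute_missing"),
    ((0.2, 0.35), "impute_missing"),
    ((0.1, 0.02), "standardize"),
    ((0.0, 0.05), "standardize"),
]

def recommend(meta_features, k=3):
    """Metalearning as k-nearest neighbours: recommend the transformation
    that worked best for the k most similar previously seen datasets."""
    nearest = sorted(HISTORY, key=lambda h: math.dist(h[0], meta_features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority vote
```

A real meta-learner would use many more meta-features and train a proper model per mining algorithm, but the mechanism is the same: new dataset in, predicted transformation out.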
33. Case study on cooperative car data for estimating traffic states in an urban network
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Montero Mercadé, Lídia, Pacheco, Meritxell, Barceló Bugeda, Jaime, Homoceanu, Silviu, and Casanovas Garcia, Josep
- Abstract
The use of floating car data as a particular case of probe vehicle data has been the object of extensive research for estimating traffic conditions, travel times, and origin-to-destination trip matrices. It is based on data collected from a GPS-equipped vehicle fleet or available cell phones. Cooperative cars with vehicle-to-vehicle and vehicle-to-infrastructure communication capabilities represent a step forward, as they also allow tracking of vehicles surrounding the equipped car. This paper presents the results of a limited experiment with a small fleet of cooperative cars in the central business district of Barcelona, Spain, known as L’Eixample District. Data collected from the experiment were used to build and calibrate the emulation of cooperative functions in a microscopic simulation model that captured the behavior of vehicle sensors in Barcelona’s central business district. Such a calibrated model allows emulating fleet data on a large scale that goes far beyond what a small fleet of cooperative vehicles could capture. To determine the traffic state, several approaches were developed for estimating traffic variables—whose accuracy depends on the penetration level of the technology—on the basis of extensions of Edie’s generalized definitions of the fundamental traffic variables with the emulated data., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
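Edie's generalized definitions, which the abstract extends to emulated cooperative-car data, estimate flow as total distance travelled divided by the area of a space-time region, density as total time spent divided by that area, and space-mean speed as their ratio. A minimal sketch (function and argument names are illustrative):

```python
def edie_estimates(trajectories, region_len_m, period_s):
    """Edie's generalized traffic variables over a space-time region of
    road length `region_len_m` (m) and duration `period_s` (s).

    `trajectories`: per observed vehicle, (distance travelled inside the
    region in metres, time spent inside the region in seconds), as a
    cooperative-car fleet might report them.
    """
    area = region_len_m * period_s                 # m * s
    total_dist = sum(d for d, _ in trajectories)   # m
    total_time = sum(t for _, t in trajectories)   # s
    flow = total_dist / area                       # veh/s
    density = total_time / area                    # veh/m
    speed = total_dist / total_time                # m/s, space-mean speed
    return flow, density, speed
```

By construction flow = density x speed, the fundamental relation, and the estimates degrade gracefully with partial observation, which is why the paper studies how accuracy depends on the penetration level of the technology.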
34. A machine learning approach for layout inference in spreadsheets
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Koci, Elvis, Thiele, Maik, Romero Moral, Óscar, and Lehner, Wolfgang
- Abstract
Spreadsheet applications are one of the most used tools for content generation and presentation in industry and the Web. In spite of this success, there does not exist a comprehensive approach to automatically extract and reuse the richness of data maintained in this format. The biggest obstacle is the lack of awareness about the structure of the data in spreadsheets, which otherwise could provide the means to automatically understand and extract knowledge from these files. In this paper, we propose a classification approach to discover the layout of tables in spreadsheets. Therefore, we focus on the cell level, considering a wide range of features not covered before by related work. We evaluated the performance of our classifiers on a large dataset covering three different corpora from various domains. Finally, our work includes a novel technique for detecting and repairing incorrectly classified cells in a post-processing step. The experimental results show that our approach delivers very high accuracy, bringing us a crucial step closer to automatic table extraction., Peer Reviewed, Postprint (published version)
- Published
- 2016
35. A unified view of data-intensive flows in business intelligence systems : a survey
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Jovanovic, Petar, Romero Moral, Óscar, and Abelló Gamazo, Alberto
- Abstract
Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet complex requirements of next generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus must have a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that still are to be addressed, and how the current solutions can be applied for addressing these challenges., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
36. Towards exploratory OLAP on linked data
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Abelló Gamazo, Alberto, Gallinucci, Enrico, Golfarelli, Matteo, Rizzi Bach, Stefano, and Romero Moral, Óscar
- Abstract
In the context of exploratory OLAP, coupling the information wealth of linked data with the precision and detail of corporate data can greatly improve the effectiveness of the decision-making process. In this paper we outline an approach that enables users to extend the hierarchies in their corporate cubes through a user-guided process that explores selected linked data and derives hierarchies from them. This is done by identifying in the linked data the recurring modeling patterns that express roll-up relationships between RDF concepts and translating them into multidimensional knowledge., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
37. Towards intelligent data analysis : the metadata challenge
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Universitat Politècnica de Catalunya. LIAM - Laboratori de Modelització i Anàlisi de la Informació, Bilalli, Besim, Abelló Gamazo, Alberto, Aluja Banet, Tomàs, and Wrembel, Robert
- Abstract
Once analyzed correctly, data can yield substantial benefits. The process of analyzing data and transforming it into knowledge is known as Knowledge Discovery in Databases (KDD). The plethora and subtleties of algorithms in the different steps of KDD render it challenging. Effective user support is of crucial importance, even more so now that the analysis is performed on Big Data. Metadata is the necessary component to drive user support. In this paper we study the metadata required to provide user support at every stage of the KDD process. We show that intelligent systems addressing the problem of user assistance in KDD are incomplete in this regard: they do not use the full potential of metadata to enable assistance during the whole process. We present a comprehensive classification of all the metadata required to provide user support. Furthermore, we present our implementation of a metadata repository for storing and managing this metadata and explain its benefits in a real Big Data analytics project., Peer Reviewed, Postprint (published version)
- Published
- 2016
38. Indústria 4.0 / Status Report Marc de referència sobre la Indústria 4.0 octubre 2016
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, and Fonseca Casas, Pau
- Abstract
The purpose of this document is to introduce the elements of Industry 4.0 to engineers, to the Catalan industrial fabric and to society at large, so that it can be used as an instrument to facilitate debate and the construction of a standardized discourse around it. There is an ongoing debate about the extent to which the marketing of Industry 4.0 runs ahead of reality, or the other way around. In any case, the objective of the Comissió i4.0 d'Enginyers de Catalunya is to contribute to establishing solid foundations and to formalizing the body of knowledge of Industry 4.0., Preprint
- Published
- 2016
39. Kopernik : modeling business processes for digital customers
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Estañol Lamarca, Montserrat, Castro, Manuel, Díaz-Montenegro, Sylvia, and Teniente López, Ernest
- Abstract
This paper presents the Kopernik methodology for modeling business processes for digital customers. These processes require a high degree of flexibility in the execution of their tasks or actions. We achieve this by using the artifact-centric approach to process modeling and the use of condition-action rules. The processes modeled following Kopernik can then be implemented in an existing commercial tool, Balandra., Preprint
- Published
- 2016
40. QB2OLAP : enabling OLAP on statistical linked open data
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Varga, Jovan, Etcheverry, Lorena, Vaisman, Alejandro, Romero Moral, Óscar, Bach Pedersen, Torben, and Thomsen, Christian
- Abstract
Publication and sharing of multidimensional (MD) data on the Semantic Web (SW) opens new opportunities for the use of On-Line Analytical Processing (OLAP). The RDF Data Cube (QB) vocabulary, the current standard for statistical data publishing, however, lacks key MD concepts such as dimension hierarchies and aggregate functions. QB4OLAP was proposed to remedy this. However, QB4OLAP requires extensive manual annotation and users must still write queries in SPARQL, the standard query language for RDF, which typical OLAP users are not familiar with. In this demo, we present QB2OLAP, a tool for enabling OLAP on existing QB data. Without requiring any RDF, QB(4OLAP), or SPARQL skills, it allows semi-automatic transformation of a QB data set into a QB4OLAP one via enrichment with QB4OLAP semantics, exploration of the enriched schema, and querying with the high-level OLAP language QL that exploits the QB4OLAP semantics and is automatically translated to SPARQL., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
41. A software tool for e-assessment of relational database skills
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Universitat Politècnica de Catalunya. GPLN - Grup de Processament del Llenguatge Natural, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Abelló Gamazo, Alberto, Burgués Illa, Xavier, Casany Guerrero, María José, Martín Escofet, Carme, Quer, Carme, Rodríguez González, M. Elena, Romero Moral, Óscar, and Urpí Tubella, Antoni
- Abstract
The objective of this paper is to present a software tool for the e-assessment of relational database skills. The tool is referred to as LearnSQL (Learning Environment for Automatic Rating of Notions of SQL). LearnSQL is able to provide automatic feedback and grade the responses to relational database exercises. It can assess the acquisition of knowledge and practical skills in relational databases that are not assessed by other systems. The paper also reports on the impact of using the tool with 2,500 students over the past eight years., Peer Reviewed, Postprint (published version)
- Published
- 2016
42. Resilient store: a heuristic-based data format selector for intermediate results
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Munir, Rana Faisal, Romero Moral, Óscar, Abelló Gamazo, Alberto, Bilalli, Besim, Thiele, Maik, and Lehner, Wolfgang
- Abstract
The final publication is available at link.springer.com, Large-scale data analysis is an important activity in many organizations that typically requires the deployment of data-intensive workflows. As data is processed, these workflows generate large intermediate results, which are typically pipelined from one operator to the following. However, if materialized, these results become reusable, so subsequent workflows need not recompute them. There are already many solutions that materialize intermediate results, but all of them assume a fixed data format. A fixed format, however, may not be the optimal one for every situation. For example, it is well known that different data fragmentation strategies (e.g., horizontal and vertical) behave better or worse according to the access patterns of the subsequent operations. In this paper, we present ResilientStore, which assists in selecting the most appropriate data format for materializing intermediate results. Given a workflow and a set of materialization points, it uses rule-based heuristics to choose the best storage data format based on subsequent access patterns. We have implemented ResilientStore for HDFS and three different data formats: SequenceFile, Parquet and Avro. Experimental results show that our solution gives 18% better performance than any solution based on a single fixed format., Peer Reviewed, Postprint (author's final draft)
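As an illustration of the kind of rule-based selection the abstract describes, a toy selector might map the access patterns of subsequent operators to one of the three formats. The rules, thresholds, and input shape below are purely hypothetical and are not ResilientStore's actual heuristics:

```python
# Toy rule-based format selector: pick a storage format for a materialized
# intermediate result from the access patterns of subsequent operators.
# Rules and thresholds are illustrative only, not ResilientStore's heuristics.

def select_format(subsequent_ops):
    """subsequent_ops: list of dicts like
    {"projected_cols": 3, "total_cols": 50, "scan": "full" or "selective"}."""
    # Rule 1: if every reader projects few columns, a columnar format wins.
    if all(op["projected_cols"] / op["total_cols"] <= 0.3 for op in subsequent_ops):
        return "Parquet"
    # Rule 2: selective row lookups favour a row-oriented format with a schema.
    if any(op["scan"] == "selective" for op in subsequent_ops):
        return "Avro"
    # Default: plain full scans of whole rows.
    return "SequenceFile"

# A downstream workflow that only reads 2 of 40 columns -> columnar format.
choice = select_format([{"projected_cols": 2, "total_cols": 40, "scan": "full"}])
```

A real selector would, as the paper indicates, derive its decision from the workflow's materialization points and measured access patterns rather than from fixed thresholds like these.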
- Published
- 2016
43. H-WorD: Supporting job scheduling in Hadoop with workload-driven data redistribution
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Jovanovic, Petar, Romero Moral, Óscar, Calders, Toon, and Abelló Gamazo, Alberto
- Abstract
The final publication is available at http://link.springer.com/chapter/10.1007/978-3-319-44039-2_21, Today's distributed data processing systems typically follow a query shipping approach and exploit data locality to reduce network traffic. In such systems the distribution of data over the cluster resources plays a significant role and, when skewed, it can harm the performance of executing applications. In this paper, we address the challenges of automatically adapting the distribution of data in a cluster to the workload imposed by the input applications. We propose a generic algorithm, named H-WorD, which, based on the estimated workload over resources, suggests alternative execution scenarios of tasks and hence identifies required transfers of input data a priori, to bring data close to the execution in a timely manner. We exemplify our algorithm in the context of MapReduce jobs in a Hadoop ecosystem. Finally, we evaluate our approach and demonstrate the performance gains of automatic data redistribution., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
44. TINTIN : comprobación incremental de aserciones SQL
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Oriol Hilari, Xavier, Teniente López, Ernest, and Rull, Guillem
- Abstract
None of today's most popular DBMSs implements SQL assertions, which forces developers to implement assertion checking manually. We therefore present TINTIN: an application that automatically generates the SQL code needed to check assertions. This code captures the tuples inserted/deleted by a transaction, checks by means of SQL queries that none of them violates any assertion, and materializes the changes if the assertions are satisfied. The efficiency of the code rests on the incremental checking of the assertions., Peer Reviewed, Postprint (published version)
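The incremental idea the abstract describes (validate only the tuples changed by a transaction, and materialize the change only if no assertion is violated) can be sketched as follows. TINTIN itself generates SQL for this; the Python below is only an illustrative analogue with hypothetical names:

```python
# Incremental assertion checking sketch: instead of re-validating the whole
# table after a transaction, check the assertion only against the changed
# tuples, then materialize. Illustrative only; TINTIN emits SQL, not Python.

def apply_transaction(table, inserted, deleted, assertion):
    """table: list of rows (dicts); assertion: predicate each row must satisfy.
    Returns the updated table if the assertion holds for the change; otherwise
    raises and leaves the original table untouched."""
    # For a row-level check, only the inserted tuples can introduce a violation.
    for row in inserted:
        if not assertion(row):
            raise ValueError(f"assertion violated by {row!r}")
    # Materialize the change only once the assertion is known to hold.
    return [r for r in table if r not in deleted] + list(inserted)

salaries = [{"emp": "a", "salary": 1000}]
salaries = apply_transaction(salaries, [{"emp": "b", "salary": 1200}], [],
                             lambda r: r["salary"] > 0)
```

The key point mirrored here is that the cost of the check scales with the size of the transaction's delta, not with the size of the table.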
- Published
- 2016
45. A software reference architecture for semantic-aware big data systems
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Universitat Politècnica de Catalunya. inSSIDE - integrated Software, Service, Information and Data Engineering, Nadal Francesch, Sergi, Herrero Otal, Víctor, Romero Moral, Óscar, Abelló Gamazo, Alberto, Franch Gutiérrez, Javier, Vansummeren, Stijn, and Valerio, Danilo
- Abstract
Context: Big Data systems are a class of software systems that ingest, store, process and serve massive amounts of heterogeneous data, from multiple sources. Despite their undisputed impact in current society, their engineering is still in its infancy and companies find it difficult to adopt them due to their inherent complexity. Existing attempts to provide architectural guidelines for their engineering fail to take into account important Big Data characteristics, such as the management, evolution and quality of the data. Objective: In this paper, we follow software engineering principles to refine the λ-architecture, a reference model for Big Data systems, and use it as seed to create Bolster, a software reference architecture (SRA) for semantic-aware Big Data systems. Method: By including a new layer into the λ-architecture, the Semantic Layer, Bolster is capable of handling the most representative Big Data characteristics (i.e., Volume, Velocity, Variety, Variability and Veracity). Results: We present the successful implementation of Bolster in three industrial projects, involving five organizations. The validation results show high level of agreement among practitioners from all organizations with respect to standard quality factors. Conclusion: As an SRA, Bolster allows organizations to design concrete architectures tailored to their specific needs. A distinguishing feature is that it provides semantic-awareness in Big Data Systems. These are Big Data system implementations that have components to simplify data definition and exploitation. In particular, they leverage metadata (i.e., data describing data) to enable (partial) automation of data exploitation and to aid the user in their decision making processes. This simplification supports the differentiation of responsibilities into cohesive roles enhancing data governance., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
46. Analysis of applications to improve the energy savings in residential buildings based on Systemic Quality Model
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Fonseca Casas, Antoni, Fonseca Casas, Pau, and Casanovas Garcia, Josep
- Abstract
Creating a definition of the features and the architecture of a new Energy Management Software (EMS) is complex because different professionals will be involved in creating that definition and in using the tool. To simplify this definition and aid in the eventual selection of an existing EMS to fit a specific need, a set of metrics that considers the primary issues and drawbacks of EMS is decisive. This study proposes a set of metrics to evaluate and compare EMS applications. Using these metrics will allow professionals to highlight the tendencies and detect the drawbacks of current EMS applications, and eventually to develop new EMS applications based on the results of the analysis. This study presents a list of the applications to be examined and describes the primary issues to be considered in the development of a new application. This study follows the Systemic Quality Model (SQMO), which has been used as a starting point to develop new EMS, but can also be used to select an existing EMS that fits the goals of a company. Using this type of analysis, we were able to detect the primary features desired in EMS software. These features are numerically scaled, allowing professionals to select the most appropriate EMS that fits their purposes. This allows the development of EMS utilizing an iterative and user-centric approach. We can apply this methodology to guide the development of future EMS and to define the priorities that are desired in this type of software., Peer Reviewed, Postprint (published version)
- Published
- 2016
47. BSC best practices in professional training and teaching for the HPC ecosystem
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, and Sancho Samsó, María Ribera
- Abstract
This paper outlines the key components of the European HPC ecosystem and analyses the major challenges, as well as the corresponding specific technical focus areas which need to be addressed in the European strategic research agenda for HPC leadership. Furthermore, the need for education and training is clearly identified, and the BSC approach, BSC model and best practices are presented and analyzed. Finally, a generalization and analysis of the approach are given., Peer Reviewed, Postprint (published version)
- Published
- 2016
48. A case study on cooperative car data for traffic state estimation in an urban network
- Author
-
Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Montero Mercadé, Lídia, Pacheco, Meritxell, Barceló Bugeda, Jaime, Homoceau, Silvio, and Casanovas Garcia, Josep
- Abstract
The use of Floating Car Data (FCD) as a particular case of Probe Vehicle Data (PVD) has been the object of extensive research for estimating traffic conditions, travel times and Origin to Destination trip matrices. It is based on data collected from a GPS-equipped vehicle fleet or available cell phones. Cooperative Cars with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication capabilities represent a step forward, as they also allow tracking vehicles surrounding the equipped car. This paper presents the results of a limited experiment with a small fleet of cooperative cars in Barcelona’s Central Business District (CBD) known as L’Eixample. Data collected from the experiment were used to build and calibrate the emulation of cooperative functions in a microscopic simulation model that captured the behavior of vehicle sensors in Barcelona’s CBD. Such a calibrated model allows emulating fleet data on a large scale that goes far beyond what a small fleet of cooperative vehicles could capture. To determine the traffic state, several approaches are developed for estimating traffic variables based on extensions of Edie’s definition of the fundamental traffic variables with the emulated data, whose accuracy depends on the penetration level of the technology., Peer Reviewed, Postprint (author's final draft)
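Edie's generalized definitions, which the estimation approaches above extend, derive flow, density, and space-mean speed from the distance travelled and time spent by vehicles inside a time-space region. A minimal sketch of the standard definitions (the trajectory numbers are hypothetical):

```python
# Edie's generalized definitions over a time-space region of length L (m)
# and duration T (s): with A = L*T,
#   flow q = total distance travelled / A   (veh/s)
#   density k = total time spent / A        (veh/m)
#   space-mean speed v = total distance / total time  (m/s), so q = k * v.

def edie_variables(trajectories, region_length_m, region_duration_s):
    """trajectories: list of (distance_travelled_m, time_spent_s) pairs,
    one per vehicle, measured inside the time-space region."""
    area = region_length_m * region_duration_s
    total_distance = sum(d for d, _ in trajectories)
    total_time = sum(t for _, t in trajectories)
    q = total_distance / area                                 # flow, veh/s
    k = total_time / area                                     # density, veh/m
    v = total_distance / total_time if total_time else 0.0    # speed, m/s
    return q, k, v

# Two vehicles traversing a 500 m segment during a 60 s interval:
q, k, v = edie_variables([(500.0, 40.0), (500.0, 50.0)], 500.0, 60.0)
```

With probe or cooperative-car data, only the observed vehicles contribute trajectories, which is why the accuracy of such estimates depends on the penetration level of the technology, as the abstract notes.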
- Published
- 2016
49. Aggregating energy flexibilities under constraints
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Serveis i Sistemes d'Informació, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Valsomatzis, Emmanouil, Bach Pedersen, Torben, Abelló Gamazo, Alberto, and Hose, Katja
- Abstract
The flexibility of individual energy prosumers (producers and/or consumers) has drawn a lot of attention in recent years. Aggregation of such flexibilities provides prosumers with the opportunity to directly participate in the energy market and at the same time reduces the complexity of scheduling the energy units. However, aggregated flexibility should support normal grid operation. In this paper, we build on the flex-offer (FO) concept to model the inherent flexibility of a prosumer (e.g., a single flexible consumption device such as a clothes washer). An FO captures flexibility in both time and amount dimensions. We define the problem of aggregating FOs taking into account grid power constraints. We also propose two constraint-based aggregation techniques that efficiently aggregate FOs while retaining flexibility. We show through a comprehensive evaluation that our techniques, in contrast to state-of-the-art techniques, respect the constraints imposed by the electrical grid. Moreover, our techniques also reduce the scheduling input size significantly and improve the quality of scheduling results., Peer Reviewed, Postprint (author's final draft)
- Published
- 2016
50. Analysis and operational challenges of dynamic ride sharing demand responsive transportation models
- Author
-
Facultat d'Informàtica de Barcelona, Universitat Politècnica de Catalunya. Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya. MPI - Modelització i Processament de la Informació, Linares Herreros, María Paz, Barceló Bugeda, Jaime, Carmona Bautista, Carlos, and Montero Mercadé, Lídia
- Abstract
There is a wide body of evidence suggesting that sustainable mobility is not only a technological question, and that automotive technology will be part of the solution as a necessary albeit insufficient condition. Sufficiency is emerging as a paradigm shift from car ownership to vehicle usage, which is a consequence of socio-economic changes. Information and Communication Technologies (ICT) now make it possible for a user to access a mobility service to go anywhere at any time. Among the many emerging mobility services, Multiple Passenger Ridesharing and its variants look the most promising. However, challenges arise in implementing these systems while accounting specifically for time dependencies and time windows that reflect users' needs, specifically in terms of real-time fleet dispatching and dynamic route calculation. On the other hand, we must consider the feasibility and impact analysis of the many factors influencing the behavior of the system, such as service demand, the size of the service fleet, the capacity of the shared vehicles, and whether the time window requirements are soft or tight. This paper analyzes a Decision Support System that computes solutions with ad hoc heuristics applied to variants of Pick Up and Delivery Problems with Time Windows, as well as to Feasibility and Profitability criteria rooted in Dynamic Insertion Heuristics. To evaluate the applications, a Simulation Framework is proposed. It is based on a microscopic simulation model that emulates real-time traffic conditions and a real traffic information system. It also interacts with the Decision Support System by feeding it the data required for making decisions in the simulation that emulate the behavior of the shared fleet. The proposed simulation framework has been implemented in a model of Barcelona's Central Business District. The obtained results prove the potential feasibility of the mobility concept., Postprint (published version)
- Published
- 2016