221 results
Search Results
2. Outage probabilities of dual selection combiners in a correlated Nakagami fading with arbitrary fading parameters
- Abstract
[EN] In this paper, an infinite convergent series for the outage probability of dual selection combiners (SC) in correlated Nakagami fading with arbitrary (not necessarily identical) fading parameters is derived. The moments of the signal-to-noise ratio (SNR) and the diversity gain at the output of the combiner are analyzed. The influence of the correlation coefficient on the outage probability is also presented.
- Published
- 2005
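By definition, the outage probability of a dual-branch selection combiner is the probability that the better branch SNR falls below the threshold, i.e. the joint CDF of the two correlated branch SNRs evaluated at that threshold. A minimal sketch in standard notation (not the paper's exact series):

```latex
P_{\mathrm{out}}(\gamma_{\mathrm{th}})
  = \Pr\left[\max(\gamma_1,\gamma_2) \le \gamma_{\mathrm{th}}\right]
  = F_{\gamma_1,\gamma_2}(\gamma_{\mathrm{th}},\gamma_{\mathrm{th}}),
```

where F is the joint CDF of the correlated Nakagami-m branch SNRs; the paper expands this CDF as an infinite convergent series valid for arbitrary, non-identical fading parameters.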
3. Accurate broadband measurement of electromagnetic tissue phantoms using open-ended coaxial systems
- Abstract
[EN] New technologies and devices for wireless communication networks are continually being developed. To assess their performance, they have to be tested in realistic environments that take into account the influence of the body on wireless communications. Thus, the development of phantoms, synthetic materials that accurately emulate the electromagnetic behaviour of different tissues, is mandatory. Accurate dielectric measurement of these phantoms requires a measurement method with low uncertainty. The open-ended coaxial technique is the most widespread, but its accuracy is strongly conditioned by the calibration procedure. A typical calibration uses an open circuit, a short circuit and water. However, this basic calibration is not the most accurate approach for all kinds of materials. This paper provides an uncertainty analysis of the calibration process of open-ended coaxial characterization systems when a polar liquid is added to the typical calibration. Measurements are performed on electromagnetically well-known liquids in the 0.5-8.5 GHz band. The results show that adding methanol improves accuracy over the whole solution domain of the system, mainly when measuring phantoms that mimic high-water-content tissues, whereas ethanol is more suitable for low-water-content tissue phantoms.
- Published
- 2007
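Calibration liquids such as water are "electromagnetically well known" because their complex permittivity follows simple relaxation models. A minimal sketch of the single-pole Debye model often used for such liquids (the function name, and the approximate room-temperature water parameters used in the usage note, are illustrative assumptions, not values from the paper):

```python
import cmath


def debye_permittivity(f_hz, eps_s, eps_inf, tau_s):
    """Single-pole Debye model: complex relative permittivity at frequency f_hz.

    eps_s   -- static (low-frequency) relative permittivity
    eps_inf -- high-frequency relative permittivity
    tau_s   -- relaxation time in seconds
    """
    omega = 2 * cmath.pi * f_hz
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau_s)
```

For example, with approximate water parameters at 25 C (eps_s = 78.4, eps_inf = 5.2, tau = 8.27 ps), the real part of the permittivity drops noticeably across the 0.5-8.5 GHz band measured in the paper.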
5. A Methodological Framework for Evaluating Software Testing Techniques and Tools
- Abstract
© 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. There is a real need in industry for guidelines on which testing techniques to use for different testing objectives, and on how usable (effective, efficient, satisfactory) these techniques are. To date, such guidelines do not exist. They could be obtained through secondary studies of a body of evidence consisting of case studies evaluating and comparing testing techniques and tools. However, such a body of evidence is also lacking. In this paper, we take a first step towards creating that body of evidence by defining a general methodological evaluation framework that simplifies the design of case studies for comparing software testing tools and makes the results more precise, reliable, and easy to compare. Using this framework, (1) software testing practitioners can more easily define case studies by instantiating the framework, (2) results can be better compared since all studies follow a similar design, (3) the gap in existing work on methodological evaluation frameworks is narrowed, and (4) a body of evidence is initiated. To validate the framework, we present successful applications of it in case studies evaluating testing tools in an industrial environment with real objects and real subjects.
- Published
- 2012
7. CU2rCU: towards the Complete rCUDA Remote GPU Virtualization and Sharing Solution
- Abstract
GPUs are being increasingly embraced by the high performance computing community as an effective way of considerably reducing execution time by accelerating significant parts of application codes. However, despite their extraordinary computing capabilities, the adoption of GPUs in current HPC clusters may have negative side effects. In particular, to ease job scheduling in these platforms, a GPU is usually attached to every node of the cluster. Besides increasing acquisition costs, this means GPUs frequently remain idle, as applications usually do not fully utilize them, and idle GPUs consume non-negligible amounts of energy, which translates into very poor energy efficiency during idle cycles. rCUDA was recently developed as a software solution to address these concerns. Specifically, it is a middleware that transparently shares a reduced number of GPUs among the nodes in a cluster, thus increasing the GPU utilization rate while taking care of job scheduling. While the initial prototype versions of rCUDA demonstrated its functionality, they also revealed several concerns related to usability and performance. Regarding usability, this paper presents a new component of the rCUDA suite that automatically transforms any CUDA source code so that it can be effectively accommodated within this technology. Regarding performance, we briefly show some interesting results, to be analyzed in depth in future publications. The net outcome is a new version of rCUDA that allows any CUDA-compatible program to use remote GPUs in a cluster with minimal overhead.
- Published
- 2012
9. Understanding cache hierarchy contention in CMPs to improve job scheduling
- Abstract
To improve CMP performance, recent research has focused on scheduling to mitigate the contention produced by limited memory bandwidth. Commercial CMPs nowadays implement multi-level cache hierarchies where last-level caches are shared by at least two cache structures located at the immediately lower cache level; in turn, these caches can be shared by several multithreaded cores. In such a design, contention points may appear along the whole memory hierarchy. Moreover, this problem is expected to worsen in future technologies, since the number of cores and hardware threads, and consequently the size of the shared caches, increases with each microprocessor generation. In this paper we characterize the performance impact of the different contention points that appear along the memory subsystem. We then propose a generic scheduling strategy for CMPs that takes into account the available bandwidth at each level of the cache hierarchy. The proposed strategy selects the processes to be co-scheduled and allocates them to cores so as to minimize contention effects. The proposal has been implemented and evaluated on a commercial single-threaded quad-core processor with a relatively small two-level cache hierarchy. Although the potential for contention is smaller there than in recent processor designs, the proposal achieves performance improvements of up to 9% over the Linux scheduler, whereas the benefits (across the studied benchmark mixes) are always below 6% for a memory-aware scheduler that does not take the cache hierarchy into account. Moreover, in some cases the proposal doubles the speedup achieved by the memory-aware scheduler.
- Published
- 2012
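The select-and-allocate idea can be illustrated with a toy greedy pairing: processes with a high bandwidth demand on a shared cache level are co-scheduled with low-demand ones, so that no shared resource is saturated. This is a hypothetical sketch, not the paper's algorithm; the `llc_bw` field stands in for a measured last-level-cache bandwidth demand:

```python
def coschedule(procs):
    """Greedily pair bandwidth-heavy and bandwidth-light processes so the
    combined demand on the shared last-level cache stays balanced.

    procs -- list of dicts with "name" and "llc_bw" (measured LLC demand).
    Returns a list of (heavy, light) name pairs; an odd leftover is ignored.
    """
    ordered = sorted(procs, key=lambda p: p["llc_bw"], reverse=True)
    schedule = []
    while len(ordered) >= 2:
        heavy = ordered.pop(0)   # most LLC-bandwidth-hungry process
        light = ordered.pop(-1)  # least hungry one
        schedule.append((heavy["name"], light["name"]))
    return schedule
```

A real scheduler would repeat this at every level of the hierarchy (core, shared cache, memory controller), but the pairing step captures the core intuition.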
11. A diffusion-based ACO resource discovery framework for dynamic p2p networks
- Abstract
Ant Colony Optimization (ACO) has been a very successful metaheuristic over the past decade and has been used to approximately solve many static NP-hard problems. Its applicability to p2p networks is limited, however, because such networks can evolve constantly and at a high pace, rendering already-established results useless. In this paper we approach the problem by proposing a generic knowledge diffusion mechanism that extends the classical ACO paradigm to better deal with the dynamic nature of p2p networks. Focusing initially on the appearance of new resources in the network, we show that it is possible to increase the efficiency of ant routing by a significant margin.
- Published
- 2013
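The knowledge-diffusion idea can be pictured as a bounded flood that deposits geometrically decaying pheromone around the node where a new resource appears, so nearby ants discover it before ordinary ACO exploration would. This is a hypothetical sketch of that intuition; the function and parameter names are invented and are not the paper's mechanism:

```python
def diffuse(pheromone, neighbors, source, deposit, decay=0.5, hops=2):
    """Breadth-first diffusion of pheromone from a newly appeared resource.

    pheromone -- dict mapping node -> pheromone level (updated in place)
    neighbors -- dict mapping node -> iterable of adjacent nodes
    source    -- node hosting the new resource
    deposit   -- pheromone deposited at the source; each hop gets deposit*decay^h
    """
    frontier = {source}
    visited = set()
    level = deposit
    for _ in range(hops + 1):
        for node in frontier:
            pheromone[node] = pheromone.get(node, 0.0) + level
        visited |= frontier
        # expand one hop outward, never revisiting a node
        frontier = {n for f in frontier for n in neighbors.get(f, ())} - visited
        level *= decay
    return pheromone
```

On a 3-node line A-B-C with deposit 1.0 and decay 0.5, this leaves pheromone 1.0 at A, 0.5 at B and 0.25 at C.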
13. A Domain Specific Language for Enabling Doctors to Specify Biomechanical Protocols
- Abstract
New technologies are entering medical practice at an astounding pace. However, these technologies are often difficult for doctors to learn and use, so doctors require the assistance of a biomedical engineer. This is currently happening in a local hospital that has new technology to analyze biomechanical protocols in patients. These protocols are used to measure performance and identify changes in human body movements and muscles. Doctors are familiar with neither the concepts nor the tools used, so biomedical engineers write the protocol descriptions instead of the doctors. In this paper, we present the design of a domain-specific language that enables doctors to specify biomechanical protocols themselves by addressing learning barriers (using design patterns). We also make doctors' descriptions compatible with the existing tools, and we support legacy biomedical descriptions (combining meta-modeling and model transformations).
- Published
- 2013
15. An analytical evaluation of a Map-based Sensor-data Delivery Protocol for VANETs
- Abstract
The Delay Tolerant Networking (DTN) approach is considered the best strategy to address the specific issues of VANETs, namely high mobility, variable node density and frequent radio obstacles. Several protocols have been proposed for DTNs, with epidemic routing (and its variations) being the most representative. Nevertheless, the availability of navigation systems, through which each vehicle is aware of its location within a map, enables a new routing approach known as Geographic Routing. In this paper we analytically evaluate the performance of our previously presented Map-based Sensor-data Delivery Protocol (MSDP), introducing an analytical model that takes into account the effect of constrained buffers. The results show that the MSDP routing mechanism achieves a reasonable delivery time with insignificant overhead compared with epidemic routing.
- Published
- 2013
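As background for the comparison with epidemic routing, the classic fluid model of epidemic spreading (the baseline MSDP is compared against, not the MSDP buffer-constrained model itself) tracks the expected number of bundle copies I as dI/dt = beta * I * (N - I). A minimal Euler-integration sketch under that standard model:

```python
def epidemic_infected(n, beta, t_end, dt=0.01):
    """Fluid model of epidemic routing: dI/dt = beta * I * (N - I).

    n     -- number of nodes in the network
    beta  -- pairwise contact rate
    t_end -- time horizon
    Returns the expected number of nodes carrying the bundle at t_end,
    starting from a single copy at t = 0.
    """
    i = 1.0
    t = 0.0
    while t < t_end:
        i += beta * i * (n - i) * dt  # forward Euler step of the logistic ODE
        t += dt
    return i
```

The copy count follows a logistic curve: with n = 50 and beta = 0.01 it saturates at 50 copies well before t = 50, which is why epidemic routing delivers quickly but at high overhead.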
17. Analysis of results in Dependability Benchmarking: Can we do better?
- Abstract
Dependability benchmarking has become more and more important over the years in the process of system evaluation. The increasing need to make systems more dependable in the presence of perturbations has contributed to this fact. Nevertheless, even though many studies have focused on different areas related to dependability benchmarking, and some others on the need to provide these benchmarks with good quality measures, there is still a gap in the analysis of results. This paper provides a first glance at different approaches that may help fill this gap by making explicit the criteria followed in the decision-making process.
- Published
- 2013
19. Assessing the effectiveness of DTN techniques under realistic urban environments
- Abstract
[EN] Intelligent Transportation Systems (ITS) require collecting and distributing as much relevant information as possible to provide their services. Such information could also offer new possibilities to various service providers in the wider Smart City context. The distribution of this intelligence is carried out through various vehicular networking strategies, the most flexible of which is Delay Tolerant Networking (DTN). DTN protocols can cope with the problems derived from high mobility and potentially high node sparsity. Nevertheless, achieving a fair comparison of DTN solutions in an urban environment is a hard task. In this paper we present a generic DTN model that we use to compare various representative DTN solutions in a metropolitan scenario. We highlight the weak and strong points of each evaluated proposal, also taking into consideration the different sending strategies adopted to improve the performance of DTN protocols.
- Published
- 2013
21. Assessing Vehicular Density Estimation Using Vehicle-to-Infrastructure Communications
- Abstract
Vehicle density is one of the main metrics used for assessing road traffic conditions. In this paper, we present a solution for estimating the density of vehicles that has been specially designed for Vehicular Networks. Our proposal allows Intelligent Transportation Systems to continuously estimate the vehicular density by accounting for the number of beacons received per Road Side Unit, as well as the roadmap topology. Simulation results indicate that our approach accurately estimates the vehicular density, so automatic traffic control systems may use it to predict traffic jams and introduce countermeasures. Index Terms: Vehicular Networks, vehicular density estimation, Road Side Unit, VANETs.
- Published
- 2013
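The beacon-counting idea can be sketched as follows: if every vehicle beacons at a known rate, the number of distinct vehicles within a Road Side Unit's range over an interval follows from the beacon count, and dividing by the covered road length gives a density. This is a hypothetical sketch, not the paper's exact estimator; in particular, `road_length_km` stands in for the paper's roadmap-topology correction:

```python
def estimate_density(beacons_received, beacon_freq_hz, interval_s, road_length_km):
    """Estimate vehicles per km from beacons heard by one Road Side Unit.

    Each vehicle sends beacon_freq_hz beacons per second, so the number of
    vehicles in range is beacons_received / (beacon_freq_hz * interval_s);
    dividing by the road length covered by the RSU yields a linear density.
    """
    vehicles_in_range = beacons_received / (beacon_freq_hz * interval_s)
    return vehicles_in_range / road_length_km
```

For example, an RSU covering 2 km of road that hears 600 beacons in 60 s from vehicles beaconing at 1 Hz would estimate 10 vehicles in range, i.e. 5 vehicles/km.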
23. Characterizing the driving style behavior using artificial intelligence techniques
- Abstract
[EN] The On-Board Diagnostics (OBD-II) standard allows easy access to the vehicle's Electronic Control Unit (ECU) through a Bluetooth OBD-II connector. This paper presents the DrivingStyles architecture, which adopts data mining techniques and neural networks to analyze and classify driving styles based on the characteristics of the driver along the route followed. The final goal is to help drivers correct bad habits in their driving behavior, while offering helpful tips to improve fuel economy. Since it is well known that smart driving leads to lower fuel consumption, the environmental impact is also reduced. A study involving more than 180 users is being carried out, where their real-time traces (under different traffic conditions) are sent periodically to the platform. DrivingStyles is currently available for free download on the Google Play Store, and has achieved more than 2,800 downloads from different countries in just a few months.
- Published
- 2013
25. Evaluating the feasibility of using smartphones for ITS safety applications
- Abstract
Driving security and comfort can be improved by applying Intelligent Transportation Systems (ITS) proposals. However, the low adoption rate of new ITS hardware and software products is slowing down the market introduction of these solutions. In this paper we present a driving safety application for smartphones based on a warning dissemination protocol called eMDR. The use of smartphones minimizes the hardware cost and eliminates most adoption barriers: users no longer have to install new dedicated devices in their vehicles; instead, they simply install an application on their smartphone. Our application is integrated with a navigation system which provides access to road maps, current location, and route information. We analyzed the behavior of the wireless channel and the GPS location service under different conditions to assess the feasibility of our proposal. Results showed that, in C2C communications, smartphones are able to provide a reasonable degree of connectivity, and that the precision achieved is sufficient for certain types of driving safety applications.
- Published
- 2013
26. Influence of InfiniBand FDR on the performance of remote GPU virtualization
- Abstract
The use of GPUs to accelerate general-purpose scientific and engineering applications is mainstream today, but their adoption in current high-performance computing clusters is impaired primarily by acquisition costs and power consumption. Therefore, the benefits of sharing a reduced number of GPUs among all the nodes of a cluster can be remarkable for many applications. This approach, usually referred to as remote GPU virtualization, aims at reducing the number of GPUs present in a cluster while increasing their utilization rate. The performance of the interconnection network is key to achieving reasonable performance results by means of remote GPU virtualization. To this end, several networking technologies with throughput comparable to that of PCI Express have appeared recently. In this paper we analyze the influence of InfiniBand FDR on the performance of remote GPU virtualization, comparing its impact on a variety of GPU-accelerated applications with other networking technologies, such as InfiniBand QDR and Gigabit Ethernet. Given the severe limitations of freely available remote GPU virtualization solutions, the rCUDA framework is used as the case study for this analysis. Results show that the new FDR interconnect, featuring higher bandwidth than its predecessors, reduces the overhead of using GPUs remotely, thus making this approach even more appealing.
- Published
- 2013
29. L1-Bandwidth Aware Thread Allocation in Multicore SMT Processors
- Abstract
Improving the utilization of shared resources is a key issue for increasing performance in SMT processors. Recent work has focused on resource sharing policies to enhance processor performance, but these proposals mainly concentrate on novel hardware mechanisms that adapt to the dynamic resource requirements of the running threads. This work addresses the L1 cache bandwidth problem in SMT processors experimentally on real hardware. Unlike previous work, this paper concentrates on thread allocation, by selecting the proper pair of co-runners to be launched to the same core. The relation between the L1 bandwidth requirements of each benchmark and its performance (IPC) is analyzed. We found that for individual benchmarks, performance is strongly connected to L1 bandwidth consumption, and this observation remains valid when several co-runners are launched to the same SMT core. Based on these findings we propose two L1 bandwidth aware thread-to-core (t2c) allocation policies, namely Static and Dynamic t2c allocation. The aim of these policies is to properly balance the L1 bandwidth requirements of the running threads among the processor cores. Experiments on a Xeon E5645 processor show that the proposed policies significantly improve the performance of the Linux OS kernel regardless of the number of cores considered.
- Published
- 2013
32. Reducing Channel Contention in Vehicular Environments Through an Adaptive Contention Window Solution
- Abstract
Intelligent Transportation Systems (ITS) are attracting growing attention in both industry and academia due to advances in wireless communication technologies, and significant demand for a wide variety of applications targeting this kind of environment is expected. In order to make ITS usable in real vehicular environments, achieving a well-designed Medium Access Control (MAC) protocol is a challenging issue due to the dynamic nature of Vehicular Ad Hoc Networks (VANETs), scalability issues, and the variety of application requirements. Different standardization organizations have selected IEEE 802.11 as the first choice for VANET environments considering its availability, maturity, and cost. The contention window is a critical parameter for handling medium access collisions in the IEEE 802.11 MAC protocol, and it highly affects communications performance. The impact of adjusting the contention window has been studied in Mobile Ad Hoc Networks (MANETs), but the vehicular communications community has not yet addressed this issue thoroughly. This paper proposes a new contention window control scheme, called DBM-ACW, for VANET environments. Analysis and simulation results using OMNeT++ in a highway scenario show that DBM-ACW provides better overall performance compared with previous proposals, even with high network densities.
- Published
- 2013
34. Using evolution strategies to reduce emergency services arrival time in case of accident
- Abstract
[EN] A critical issue, especially in urban areas, is the occurrence of traffic accidents, since they can generate traffic jams. Additionally, these traffic jams negatively affect the rescue process, increasing the emergency services' arrival time, which can mean the difference between life and death for injured people involved in the accident. In this paper, we propose four different approaches addressing the traffic congestion problem, comparing them to obtain the best solution. Using V2I communications, we are able to accurately estimate the traffic density in a certain area, which represents a key parameter to perform efficient traffic redirection, thereby reducing the emergency services' arrival time and avoiding traffic jams when an accident occurs. Specifically, we propose two approaches based on the Dijkstra algorithm, and two approaches based on Evolution Strategies. Results indicate that the Density-Based Evolution Strategy system is the best among all the proposed solutions, since it offers the lowest emergency services travel times.
- Published
- 2013
36. VACaMobil: VANET Car Mobility Manager for OMNeT++
- Abstract
The performance of communication protocols in vehicular networks highly depends on the mobility pattern. Therefore, one of the most important issues when simulating this kind of protocol is how to properly model vehicular mobility. In this paper we present VACaMobil, a VANET Car Mobility Manager for the OMNeT++ simulator which allows researchers to completely define vehicular mobility by setting the desired average number of vehicles along with its upper and lower bounds. We compare VACaMobil against other common methods employed to generate vehicular mobility. Results clearly show the advantages of the VACaMobil tool when distributing vehicles in a real scenario, making it one of the best mobility generators for evaluating the performance of different communication protocols and algorithms in VANET environments.
- Published
- 2013
38. A knowledge growth and consolidation framework for lifelong machine learning systems
- Abstract
A more effective vision of machine learning systems entails tools that are able to improve task after task and to reuse the patterns and knowledge acquired previously for future tasks. This incremental, long-life view of machine learning goes beyond most state-of-the-art machine learning techniques, which learn throw-away models. In this paper we present a long-life knowledge acquisition, evaluation and consolidation framework that is designed to work with any rule-based machine learning or inductive inference engine and integrate it into a long-life learner. To do so, we work over the graph of working memory rules and introduce several topological metrics over it, from which we derive an oblivion criterion to drop useless rules from working memory and a consolidation process to promote rules to the knowledge base. We evaluate the framework on a series of tasks in a chess rule learning domain.
- Published
- 2014
40. A Statistical Learning Reputation System for Opportunistic Networks
- Abstract
Contacts are essential to guarantee the performance of opportunistic networks, but due to resource constraints, some nodes may not cooperate. In reputation systems, the perception of an agent depends on past observations to classify its actual behavior. Few studies have investigated the effectiveness of robust learning models for classifying selfish nodes in opportunistic networks. In this paper, we propose a distributed reputation algorithm based on game theory to achieve reliable information dissemination in opportunistic networks. A contact is modeled as a game, and the nodes can cooperate or not. Using statistical inference methods, we derive the reputation of a node by learning from past observations. We applied the proposed algorithm to a set of traces to obtain a distributed forecasting base for future action when selfish nodes are involved in the communication. We evaluate the conditions under which the accuracy of data collection becomes reliable.
- Published
- 2014
42. Boosting the performance of remote GPU virtualization using InfiniBand Connect-IB and PCIe 3.0
- Abstract
[EN] A clear trend has emerged involving the acceleration of scientific applications by using GPUs. However, the capabilities of these devices are still generally underutilized. Remote GPU virtualization techniques can help increase GPU utilization rates, while reducing acquisition and maintenance costs. The overhead of using a remote GPU instead of a local one is introduced mainly by the difference in performance between the internode network and the intranode PCIe link. In this paper we show how using the new InfiniBand Connect-IB network adapters (attaining throughput similar to that of the most recently emerged GPUs) boosts the performance of remote GPU virtualization, reducing the overhead to a mere 0.19% in the application tested.
- Published
- 2014
44. Evaluating H.265 real-time video flooding quality in highway V2V environments
- Abstract
[EN] Video transmission over VANETs is an extremely difficult task, not only due to the high bandwidth requirements, but also due to typical VANET characteristics such as signal attenuation, packet losses, high relative speeds and fast topology changes. In future scenarios, vehicles will provide other vehicles with information about accidents or congestion on the road, and in these cases offering visual information can be a really valuable resource for both drivers and traffic authorities. Hence, achieving an efficient transmission is critical to maximize the user-perceived quality. In this paper we evaluate solutions that combine different flooding techniques and different video codecs to assess the effectiveness of long-distance real-time video streaming. In particular, we compare the most effective video coding standard available (H.264) with the upcoming H.265 codec in terms of both frame loss and PSNR.
- Published
- 2014
46. Gaining confidence on dependability benchmarks conclusions through back-to-back testing
- Abstract
The main goal of any benchmark is to guide decisions through system ranking, but surprisingly little research has so far focused on providing means to gain confidence in the analysis carried out with benchmark results. Including a back-to-back testing approach in the benchmark analysis process, to compare conclusions and gain confidence in the final adopted choices, seems a convenient way to cope with this challenge. The proposal is to check the coherence of rankings issued from applying independent multiple-criteria decision making (MCDM) techniques to the results. Although any MCDM method can potentially be used, this paper reports our experience using the Logic Score of Preferences (LSP) and the Analytic Hierarchy Process (AHP). Discrepancies in the provided rankings invalidate conclusions and must be tracked down to discover incoherences and correct the related analysis errors. Once the rankings are coherent, so is the underlying analysis, thus increasing confidence in the supplied conclusions.
- Published
- 2014
47. Learning supported by peer production and digital ink
- Abstract
This paper describes experiences that combine digital peer production with digital ink affordances. Rather than preparing papers to obtain a summative final mark, students work over the course of the term producing small learning resources such as short engineering problems, reasoning exercises or syntheses, with the lecturer acting as manager and supervisor. Teacher intervention is carried out using digital ink over each individual student production, making it possible to share the results through a public or group repository and in class, offering a proactive argument for preventing common mistakes. In order to enhance students' programming skills, important efforts are oriented towards producing learning objects in the form of Java applets. This has the additional advantage of fostering collaborative knowledge construction, because each object serves the whole group as learning material as soon as it has been produced and validated. Qualitative and quantitative results show both overall satisfaction from students participating in the experiences and better results in the common written exams, compared to the other groups following the traditional method.
- Published
- 2014
49. Offline Features for Classifying Handwritten Math Symbols with Recurrent Neural Networks
- Abstract
In mathematical expression recognition, symbol classification is a crucial step. Numerous approaches for recognizing handwritten math symbols have been published, but most of them are either online or hybrid approaches; a study focused on offline features for handwritten math symbol recognition is lacking. Furthermore, many papers provide results that are difficult to compare. In this paper we assess the performance of several well-known offline features for this task. We also test a novel set of features based on polar histograms, and the vertical repositioning method for feature extraction. Finally, we report and analyze the results of several experiments using recurrent neural networks on a large public database of online handwritten math expressions. The combination of online and offline features significantly improved the recognition rate.
- Published
- 2014