258 results for "Latré, Steven"
Search Results
252. QoE management of HTTP adaptive streaming services
- Author
-
Bouten, Niels, De Turck, Filip, and Latré, Steven
- Subjects
Technology and Engineering, IBCN
- Published
- 2016
253. Monitoring and securing virtualized networks and services
- Author
-
Sperotto, A., Doyen, G., Latré, Steven, Charalambides, M., and Stiller, B.
- Subjects
Computer. Automation
- Published
- 2014
254. An encoding framework for binarized images using hyperdimensional computing.
- Author
-
Smets L, Van Leekwijck W, Tsang IJ, and Latré S
- Abstract
Introduction: Hyperdimensional Computing (HDC) is a brain-inspired and lightweight machine learning method. It has received significant attention in the literature as a candidate for the wearable Internet of Things, near-sensor artificial intelligence applications, and on-device processing. HDC is computationally less complex than traditional deep learning algorithms and typically achieves moderate to good classification performance. A key aspect that determines the performance of HDC is how the input data are encoded to the hyperdimensional (HD) space.
Methods: This article proposes a novel lightweight approach, relying only on native HD arithmetic vector operations, to encode binarized images. It preserves the similarity of patterns at nearby locations by using point-of-interest selection and local linear mapping.
Results: The method reaches an accuracy of 97.92% on the MNIST test set and 84.62% on the Fashion-MNIST test set.
Discussion: These results outperform other studies using native HDC with different encoding approaches and are on par with more complex hybrid HDC models and lightweight binarized neural networks. The proposed encoding approach also demonstrates higher robustness to noise and blur compared to the baseline encoding.
Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2024 Smets, Van Leekwijck, Tsang and Latré.)
- Published
- 2024
- Full Text
- View/download PDF
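The native-HDC arithmetic this entry builds on can be sketched with plain vector operations. The paper's point-of-interest selection and local linear mapping are its contribution and are not reproduced here; the snippet below is only a generic illustrative sketch of binding pixel-position and pixel-value hypervectors and bundling them by majority vote, with the dimension `D` and all names chosen for illustration.

```python
import random

D = 1000  # hypervector dimension (illustrative; HDC work often uses ~10,000)

def rand_hv(rng):
    """Draw a random bipolar hypervector of dimension D."""
    return [rng.choice((-1, 1)) for _ in range(D)]

def encode_image(image, pos_hvs, val_hvs):
    """Encode a flat list of 0/1 pixels: bind each position hypervector with
    the hypervector of its pixel value, then bundle by majority vote."""
    acc = [0] * D
    for i, px in enumerate(image):
        p, v = pos_hvs[i], val_hvs[px]
        for d in range(D):
            acc[d] += p[d] * v[d]  # bind (elementwise multiply), bundle (add)
    return [1 if a > 0 else -1 for a in acc]

def similarity(a, b):
    """Normalized dot product: 1.0 for identical, near 0 for unrelated vectors."""
    return sum(x * y for x, y in zip(a, b)) / D
```

Identical images encode to identical hypervectors, while unrelated pixel patterns land nearly orthogonal; classification then reduces to comparing an encoded image against class prototype hypervectors.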
255. Co-learning synaptic delays, weights and adaptation in spiking neural networks.
- Author
-
Deckers L, Van Damme L, Van Leekwijck W, Tsang IJ, and Latré S
- Abstract
Spiking neural networks (SNNs) distinguish themselves from artificial neural networks (ANNs) through their inherent temporal processing and spike-based computations, enabling power-efficient implementation in neuromorphic hardware. In this study, we demonstrate that data processing with spiking neurons can be enhanced by co-learning the synaptic weights with two other biologically inspired neuronal features: (1) a set of parameters describing neuronal adaptation processes and (2) synaptic propagation delays. The former allows a spiking neuron to learn how to specifically react to incoming spikes based on its past. The trained adaptation parameters result in neuronal heterogeneity, which leads to a greater variety of available spike patterns and is also found in the brain. The latter enables the network to learn to explicitly correlate spike trains that are temporally distant. Synaptic delays reflect the time an action potential requires to travel from one neuron to another. We show that each of the co-learned features separately leads to an improvement over the baseline SNN and that their combination leads to state-of-the-art SNN results on all speech recognition datasets investigated, with a simple 2-hidden-layer feed-forward network. Our SNN outperforms the benchmark ANN on the neuromorphic datasets (Spiking Heidelberg Digits and Spiking Speech Commands), even with fewer trainable parameters. On the 35-class Google Speech Commands dataset, our SNN also outperforms a GRU of similar size. Our study presents brain-inspired improvements in SNNs that enable them to excel over an equivalent ANN of similar size on tasks with rich temporal dynamics.
Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2024 Deckers, Van Damme, Van Leekwijck, Tsang and Latré.)
- Published
- 2024
- Full Text
- View/download PDF
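The synaptic-delay idea can be illustrated with a toy leaky integrate-and-fire neuron in which each synapse shifts its input spike train in time. This sketch only shows why delays help correlate temporally distanced spikes; in the paper the delays (along with weights and adaptation parameters) are learned, whereas here they are fixed by hand, and all parameter values are illustrative.

```python
def lif_with_delays(spike_trains, weights, delays, threshold=1.5, decay=0.5):
    """Leaky integrate-and-fire neuron with a per-synapse delay (in timesteps).
    spike_trains: one 0/1 list per input synapse, all of equal length."""
    steps = len(spike_trains[0])
    v, out = 0.0, []
    for t in range(steps):
        v *= decay  # membrane leak
        for train, w, d in zip(spike_trains, weights, delays):
            if t - d >= 0 and train[t - d]:
                v += w  # a spike emitted at time t-d arrives now
        if v >= threshold:
            out.append(1)
            v = 0.0  # reset after firing
        else:
            out.append(0)
    return out
```

With delays `[3, 0]` two input spikes three timesteps apart arrive at the soma simultaneously and push the membrane over threshold; with zero delays the leak erases the first spike before the second arrives, and the neuron stays silent.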
256. Missing Value Imputation of Wireless Sensor Data for Environmental Monitoring.
- Author
-
Decorte T, Mortier S, Lembrechts JJ, Meysman FJR, Latré S, Mannens E, and Verdonck T
- Abstract
Over the past few years, the scale of sensor networks has greatly expanded. This generates extended spatiotemporal datasets, which form a crucial information resource in numerous fields, ranging from sports and healthcare to environmental science and surveillance. Unfortunately, these datasets often contain missing values due to systematic or inadvertent sensor misoperation. This incompleteness hampers the subsequent data analysis, yet addressing these missing observations forms a challenging problem. This is especially the case when both the temporal correlation of timestamps within a single sensor and the spatial correlation between sensors are important. Here, we apply and evaluate 12 imputation methods to complete the missing values in a dataset originating from large-scale environmental monitoring. As part of a large citizen science project, IoT-based microclimate sensors were deployed for six months in 4400 gardens across the region of Flanders, generating 15-min recordings of temperature and soil moisture. Methods based on spatial recovery as well as time-based imputation were evaluated, including Spline Interpolation, MissForest, MICE, MCMC, M-RNN, BRITS, and others. The performance of these imputation methods was evaluated for different proportions of missing data (ranging from 10% to 50%), as well as a realistic missing value scenario. Techniques leveraging the spatial features of the data tend to outperform the time-based methods, with matrix completion techniques providing the best performance. Our results therefore provide a tool to maximize the benefit from costly, large-scale environmental monitoring efforts.
- Published
- 2024
- Full Text
- View/download PDF
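As a concrete example of the time-based family of methods evaluated in this entry, linear interpolation fills a gap in one sensor's series from its temporal neighbours. The sketch below is a minimal stdlib version, not the paper's implementation; the spatial and matrix-completion techniques that performed best in the study are not shown.

```python
def interpolate_time(series):
    """Fill None gaps in one sensor's time series by linear interpolation;
    gaps at either end are filled with the nearest observed value."""
    out = list(series)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1  # find the end of the gap
            left = out[i - 1] if i > 0 else None
            right = out[j] if j < n else None
            for k in range(i, j):
                if left is None:
                    out[k] = right          # leading gap: back-fill
                elif right is None:
                    out[k] = left           # trailing gap: forward-fill
                else:
                    frac = (k - i + 1) / (j - i + 1)
                    out[k] = left + frac * (right - left)
            i = j
        else:
            i += 1
    return out
```

Such purely temporal methods ignore the spatial correlation between nearby gardens, which is why the study finds spatially aware approaches superior for this dataset.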
257. Age of peak performance in professional road cycling.
- Author
-
Kholkine L, Latré S, Verdonck T, and de Leeuw AW
- Subjects
- Humans, Male, Bicycling, Athletic Performance
- Abstract
In this study, we investigated the relationship between age and performance in professional road cycling. We considered 1864 male riders with more than 700 PCS points who appeared in the yearly top 500 ranking of ProCyclingStats (PCS) between 1993 and 2021. We applied a data-driven approach for finding natural clusters of the riders' speciality (General Classification, One Day, Sprinter or All-Rounder). For each cluster, we divided the riders into the top 50% and bottom 50% based on their total number of PCS points. The athlete's yearly performance was defined as the average number of points collected per race. Age-performance models were constructed using polynomial regression, and we found that the top 50% of the riders in each cluster have a statistically significant (p < 0.05) higher age of peak performance. Considering the best 50% of the riders, general classification riders peak at an older age than the other rider types (p < 0.05). For those top riders, we found ages of peak performance of 26.3, 26.5, 26.2 and 27.5 years for sprinters, all-rounders, one day specialists and general classification riders, respectively. Our findings can be used for scouting purposes, assisting coaches in designing long-term training programmes and benchmarking athletes' performance development.
- Published
- 2023
- Full Text
- View/download PDF
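The age-performance modelling step can be sketched as a quadratic least-squares fit whose vertex gives the age of peak performance. This is an illustrative reconstruction, not the authors' code; ages are centred before fitting to keep the normal equations well conditioned, and the solver is a small hand-rolled elimination.

```python
def fit_peak_age(ages, points):
    """Least-squares quadratic fit points ~ a + b*z + c*z^2 with z = age - mean(age);
    returns the vertex (peak) age, assuming the fitted parabola opens downward."""
    m = sum(ages) / len(ages)
    zs = [x - m for x in ages]
    # normal equations (X^T X) beta = X^T y for the design [1, z, z^2]
    S = [[sum(z ** (i + j) for z in zs) for j in range(3)] for i in range(3)]
    t = [sum(y * z ** i for z, y in zip(zs, points)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(3):
            if r != col:
                f = S[r][col] / S[col][col]
                S[r] = [a - f * b for a, b in zip(S[r], S[col])]
                t[r] -= f * t[col]
    a, b, c = (t[i] / S[i][i] for i in range(3))
    return m - b / (2 * c)  # vertex of the centred parabola, shifted back
```

On synthetic data generated from a parabola peaking at 26.5 years the fit recovers the vertex exactly; on real rider data the points are noisy and the fit would be done per speciality cluster.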
258. Extended liquid state machines for speech recognition.
- Author
-
Deckers L, Tsang IJ, Van Leekwijck W, and Latré S
- Abstract
A liquid state machine (LSM) is a biologically plausible model of a cortical microcircuit. It consists of a random, sparse reservoir of recurrently connected spiking neurons with fixed synapses and a trainable readout layer. The LSM exhibits low training complexity and enables backpropagation-free learning in a powerful, yet simple computing paradigm. In this work, the liquid state machine is enhanced with a set of bio-inspired extensions to create the extended liquid state machine (ELSM), which is evaluated on a set of speech datasets. Firstly, we ensure excitatory/inhibitory (E/I) balance to enable the LSM to operate in an edge-of-chaos regime. Secondly, spike-frequency adaptation (SFA) is introduced in the LSM to improve its memory capabilities. Lastly, neuronal heterogeneity, by means of a differentiation in time constants, is introduced to extract a richer dynamical LSM response. By including E/I balance, SFA, and neuronal heterogeneity, we show that the ELSM consistently improves upon the LSM while retaining the benefits of the straightforward LSM structure and training procedure. The proposed extensions led to up to a 5.2% increase in accuracy while decreasing the number of spikes in the ELSM by up to 20.2% on benchmark speech datasets. On some benchmarks, the ELSM can even attain performance similar to the current state-of-the-art in spiking neural networks. Furthermore, we illustrate that the ELSM input-liquid and recurrent synaptic weights can be reduced to 4-bit resolution without any significant loss in classification performance. We thus show that the ELSM is a powerful, biologically plausible and hardware-friendly spiking neural network model that can attain near state-of-the-art accuracy on speech recognition benchmarks for spiking neural networks.
Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. (Copyright © 2022 Deckers, Tsang, Van Leekwijck and Latré.)
- Published
- 2022
- Full Text
- View/download PDF
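The basic LSM structure described here, a fixed random sparse spiking reservoir whose activity feeds a trainable readout, can be sketched as follows. None of the ELSM extensions (E/I balance, SFA, heterogeneous time constants) are included; the sizes, weight ranges and single input channel are illustrative, and the linear readout that would be trained on the returned spike counts is omitted.

```python
import random

def run_reservoir(input_spikes, n=20, p=0.2, threshold=1.0, decay=0.9, seed=0):
    """Drive a fixed random, sparse spiking reservoir with one input channel
    and return each neuron's spike count (the features a readout would use)."""
    rng = random.Random(seed)
    w_in = [rng.uniform(0.5, 1.5) for _ in range(n)]            # input weights
    w_rec = [[rng.uniform(-0.5, 0.5) if rng.random() < p and i != j else 0.0
              for j in range(n)] for i in range(n)]             # sparse recurrence
    v = [0.0] * n        # membrane potentials
    spikes = [0] * n     # spikes emitted on the previous timestep
    counts = [0] * n
    for x in input_spikes:
        new_spikes = [0] * n
        for i in range(n):
            v[i] = (v[i] * decay + w_in[i] * x
                    + sum(w_rec[i][j] * spikes[j] for j in range(n)))
            if v[i] >= threshold:
                new_spikes[i] = 1
                counts[i] += 1
                v[i] = 0.0   # reset after firing
        spikes = new_spikes
    return counts
```

The reservoir weights stay fixed; only a linear readout trained on `counts` (or on richer filtered spike traces) would be learned, which is what keeps LSM training cheap and backpropagation-free.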
Discovery Service for Jio Institute Digital Library