16 results on "Kansizoglou, Ioannis"
Search Results
2. Enhancing satellite semantic maps with ground-level imagery
- Author
- Balaska, Vasiliki, Bampis, Loukas, Kansizoglou, Ioannis, and Gasteratos, Antonios
- Published
- 2021
3. A Biologically Inspired Movement Recognition System with Spiking Neural Networks for Ambient Assisted Living Applications.
- Author
- Passias, Athanasios, Tsakalos, Karolos-Alexandros, Kansizoglou, Ioannis, Kanavaki, Archontissa Maria, Gkrekidis, Athanasios, Menychtas, Dimitrios, Aggelousis, Nikolaos, Michalopoulou, Maria, Gasteratos, Antonios, and Sirakoulis, Georgios Ch.
- Subjects
- ARTIFICIAL neural networks, CONGREGATE housing, OLDER people, MOTION capture (Human mechanics), ACTION potentials, ELDER care
- Abstract
This study presents a novel solution for ambient assisted living (AAL) applications that utilizes spiking neural networks (SNNs) and reconfigurable neuromorphic processors. As demographic shifts result in an increased need for eldercare, due to a large elderly population that favors independence, there is a pressing need for efficient solutions. Traditional deep neural networks (DNNs) are typically energy-intensive and computationally demanding. In contrast, this study turns to SNNs, which are more energy-efficient and mimic biological neural processes, offering a viable alternative to DNNs. We propose asynchronous cellular automaton-based neurons (ACANs), which stand out for their hardware-efficient design and ability to reproduce complex neural behaviors. By utilizing the remote supervised method (ReSuMe), this study improves spike train learning efficiency in SNNs. We apply this to movement recognition in an elderly population, using motion capture data. Our results highlight a high classification accuracy of 83.4%, demonstrating the approach's efficacy in precise movement activity classification. This method's significant advantage lies in its potential for real-time, energy-efficient processing in AAL environments. Our findings not only demonstrate SNNs' superiority over conventional DNNs in computational efficiency but also pave the way for practical neuromorphic computing applications in eldercare. [ABSTRACT FROM AUTHOR]
- Published
- 2024
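To make the ReSuMe spike-train learning scheme mentioned in the abstract above more concrete, here is a minimal, illustrative discretization of the rule in Python. It is a sketch only: the kernel amplitude `A`, time constant `tau`, non-Hebbian offset `a`, learning rate, and the random spike trains are placeholder assumptions, not values from the paper.

```python
import numpy as np

def resume_update(w, s_in, s_out, s_des, lr=0.01, a=0.02, A=1.0, tau=5.0, dt=1.0):
    """One pass of a discrete-time ReSuMe-style weight update.

    w      : (n_in,) synaptic weights of a single readout neuron
    s_in   : (T, n_in) binary input spike trains
    s_out  : (T,) binary output spike train produced by the neuron
    s_des  : (T,) binary desired (teacher) spike train
    """
    trace = np.zeros_like(w)                     # exponentially decaying input trace
    decay = np.exp(-dt / tau)
    for t in range(s_in.shape[0]):
        trace = trace * decay + s_in[t]          # eligibility of recently active inputs
        err = s_des[t] - s_out[t]                # +1 for a missing spike, -1 for an extra one
        if err != 0.0:
            w += lr * err * (a + A * trace)      # strengthen/weaken accordingly
    return w

# toy usage with random spike trains standing in for motion-capture-derived inputs
rng = np.random.default_rng(0)
T, n_in = 200, 30
w = rng.normal(0.0, 0.1, n_in)
s_in = (rng.random((T, n_in)) < 0.05).astype(float)
s_out = (rng.random(T) < 0.02).astype(float)
s_des = (rng.random(T) < 0.02).astype(float)
print(resume_update(w, s_in, s_out, s_des)[:5])
```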
4. A Deep Learning Approach for Autonomous Compression Damage Identification in Fiber-Reinforced Concrete Using Piezoelectric Lead Zirconate Titanate Transducers.
- Author
- Sapidis, George M., Kansizoglou, Ioannis, Naoum, Maria C., Papadopoulos, Nikos A., and Chalioris, Constantin E.
- Subjects
- FIBER-reinforced concrete, LEAD zirconate titanate, DEEP learning, STRUCTURAL health monitoring, CONVOLUTIONAL neural networks, TRANSDUCERS, CONCRETE fatigue
- Abstract
Effective damage identification is paramount to evaluating safety conditions and preventing catastrophic failures of concrete structures. Although various methods have been introduced in the literature, developing robust and reliable structural health monitoring (SHM) procedures remains an open research challenge. This study proposes a new approach utilizing a 1-D convolutional neural network to identify the formation of cracks from the raw electromechanical impedance (EMI) signature of externally bonded piezoelectric lead zirconate titanate (PZT) transducers. Externally bonded PZT transducers were used to determine the EMI signature of fiber-reinforced concrete specimens subjected to monotonic and repeatable compression loading. A leave-one-specimen-out cross-validation scenario was adopted for the proposed SHM approach for a stricter and more realistic validation procedure. The experimental study and the obtained results clearly demonstrate the capacity of the introduced approach to provide autonomous and reliable damage identification in a PZT-enabled SHM system, with a mean accuracy of 95.24% and a standard deviation of 5.64%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
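The entry above combines a small 1-D CNN over raw EMI sweeps with leave-one-specimen-out cross-validation. The sketch below illustrates that evaluation protocol with scikit-learn's LeaveOneGroupOut and a toy PyTorch model; the layer sizes, signal length, epoch count, and the synthetic arrays `X`, `y`, `specimen_ids` are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import LeaveOneGroupOut

class EMI1DCNN(nn.Module):
    """Small 1-D CNN over a raw EMI signature (binary: healthy vs. cracked)."""
    def __init__(self, signal_len=400):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (signal_len // 4), 2)

    def forward(self, x):                        # x: (batch, 1, signal_len)
        return self.head(self.features(x).flatten(1))

# toy data standing in for EMI signatures: 6 specimens, 20 sweeps each
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 1, 400)).astype(np.float32)
y = rng.integers(0, 2, size=120)
specimen_ids = np.repeat(np.arange(6), 20)

logo = LeaveOneGroupOut()
for fold, (tr, te) in enumerate(logo.split(X, y, groups=specimen_ids)):
    model = EMI1DCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(3):                           # a few epochs per fold (sketch only)
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(torch.from_numpy(X[tr])),
                                           torch.from_numpy(y[tr]))
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = model(torch.from_numpy(X[te])).argmax(1).numpy()
    print(f"fold {fold}: accuracy {np.mean(pred == y[te]):.2f}")
```

Holding out one specimen per fold keeps all sweeps from the test specimen out of training, which is what makes the reported validation stricter than a random split.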
5. Evaluating the Performance of Mobile-Convolutional Neural Networks for Spatial and Temporal Human Action Recognition Analysis.
- Author
- Moutsis, Stavros N., Tsintotas, Konstantinos A., Kansizoglou, Ioannis, and Gasteratos, Antonios
- Subjects
- HUMAN activity recognition, TRANSFORMER models, DEEP learning, COMPUTER vision, RECURRENT neural networks, TIME-varying networks
- Abstract
Human action recognition is a computer vision task that identifies how a person or a group acts on a video sequence. Various methods that rely on deep-learning techniques, such as two- or three-dimensional convolutional neural networks (2D-CNNs, 3D-CNNs), recurrent neural networks (RNNs), and vision transformers (ViT), have been proposed to address this problem over the years. Motivated by the fact that most of the used CNNs in human action recognition present high complexity, and the necessity of implementations on mobile platforms that are characterized by restricted computational resources, in this article, we conduct an extensive evaluation protocol over the performance metrics of five lightweight architectures. In particular, we examine how these mobile-oriented CNNs (viz., ShuffleNet-v2, EfficientNet-b0, MobileNet-v3, and GhostNet) execute in spatial analysis compared to a recent tiny ViT, namely EVA-02-Ti, and a higher computational model, ResNet-50. Our models, previously trained on ImageNet and BU101, are measured for their classification accuracy on HMDB51, UCF101, and six classes of the NTU dataset. The average and max scores, as well as the voting approaches, are generated through three and fifteen RGB frames of each video, while two different rates for the dropout layers were assessed during the training. Last, a temporal analysis via multiple types of RNNs that employ features extracted by the trained networks is examined. Our results reveal that EfficientNet-b0 and EVA-02-Ti surpass the other mobile-CNNs, achieving comparable or superior performance to ResNet-50. [ABSTRACT FROM AUTHOR]
- Published
- 2023
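The abstract above mentions average, max, and voting schemes for turning per-frame scores into a video-level label. The snippet below shows those three aggregations in plain NumPy; the class count (51, as in HMDB51) and the 15-frame example are illustrative choices.

```python
import numpy as np

def aggregate_video(frame_probs):
    """Combine per-frame softmax scores (n_frames, n_classes) into a video label."""
    avg_label = frame_probs.mean(axis=0).argmax()                        # average score
    max_label = frame_probs.max(axis=0).argmax()                         # max score per class
    votes = frame_probs.argmax(axis=1)                                   # per-frame decisions
    vote_label = np.bincount(votes, minlength=frame_probs.shape[1]).argmax()
    return avg_label, max_label, vote_label

# e.g., 15 RGB frames scored over 51 classes
rng = np.random.default_rng(0)
scores = rng.dirichlet(np.ones(51), size=15)
print(aggregate_video(scores))
```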
6. Editorial: Enhanced human modeling in robotics for socially-aware place navigation.
- Author
- Tsintotas, Konstantinos A., Kansizoglou, Ioannis, Pastra, Katerina, Aloimonos, Yiannis, Gasteratos, Antonios, Sirakoulis, Georgios Ch., Sandini, Giulio, and Boccignone, Giuseppe
- Subjects
- ROBOTICS, AFFECTIVE computing, PATTERN recognition systems, ARTIFICIAL intelligence, INDUSTRIAL robots
- Abstract
This article discusses the concept of enhanced human modeling in robotics for socially-aware place navigation. It highlights the challenges faced by robots when navigating in unfamiliar environments and the importance of understanding human activities, intentions, and social dynamics in order to navigate spaces shared with humans. The article also explores various dimensions of human modeling, such as human pose estimation, action recognition, language understanding, and affective computing, and how they contribute to socially aware navigation. It concludes by emphasizing the need for robust and lightweight solutions that enhance human understanding and modeling in robot navigation. [Extracted from the article]
- Published
- 2024
7. Methane Concentration Forecasting Based on Sentinel-5P Products and Recurrent Neural Networks.
- Author
- Psomouli, Theofani, Kansizoglou, Ioannis, and Gasteratos, Antonios
- Subjects
- RECURRENT neural networks, ARTIFICIAL neural networks, GREENHOUSE gas mitigation, ATMOSPHERE, CLIMATE change, ATMOSPHERIC methane, ENGINEERING geology
- Abstract
The increase in the concentration of geological gas emissions in the atmosphere, and particularly of methane, is considered by the majority of the scientific community as the main cause of global climate change. The main reasons that place methane at the center of interest lie in its high global warming potential (GWP) and its lifetime in the atmosphere. Anthropogenic processes, like engineering geology ones, highly affect the daily profile of gases in the atmosphere. Should direct measures be taken to reduce emissions of methane, immediate global warming mitigation could be achieved. Due to its significance, methane has been monitored by many space missions over the years and as of 2017 by the Sentinel-5P mission. Considering the above, we conclude that monitoring and predicting future methane concentration based on past data is of vital importance for the course of climate change over the next decades. To that end, we introduce a method exploiting state-of-the-art recurrent neural networks (RNNs), which have been proven particularly effective in regression problems, such as time-series forecasting. Aligned with the green artificial intelligence (AI) initiative, the paper at hand investigates the ability of different RNN architectures to predict future methane concentration in the most active regions of Texas, Pennsylvania and West Virginia, by using Sentinel-5P methane data and focusing on computational and complexity efficiency. We conduct several empirical studies and utilize the obtained results to identify the most effective architecture for the specific use case, establishing competitive prediction performance that reaches a mean squared error of 0.7578 on the evaluation set. Yet, taking into consideration the overall efficiency of the investigated models, we conclude that RNN architectures with fewer layers and a restricted number of units, i.e., one recurrent layer with 8 neurons, deliver competitive prediction performance while sustaining lower computational complexity and execution time. Finally, we compare RNN models against deep neural networks along with the well-established support vector regression, clearly highlighting the supremacy of the recurrent ones, as well as discuss future extensions of the introduced work. [ABSTRACT FROM AUTHOR]
- Published
- 2023
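The abstract above singles out a single recurrent layer with 8 units as the preferred trade-off. The sketch below shows what such a configuration can look like for one-step-ahead forecasting of a univariate series; the LSTM cell type, window length, optimizer settings, and the synthetic CH4-like series are assumptions, not the paper's setup.

```python
import numpy as np
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    """Single recurrent layer with 8 units and a one-step-ahead regression head."""
    def __init__(self, hidden=8):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                         # x: (batch, window, 1)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])              # predict the next value

def make_windows(series, window=30):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32).unsqueeze(-1))

# synthetic stand-in for a daily methane-column series, normalized before training
t = np.arange(1000)
series = 1800 + 30 * np.sin(2 * np.pi * t / 365) + np.random.default_rng(0).normal(0, 5, t.size)
series = (series - series.mean()) / series.std()
X, y = make_windows(series)

model = TinyForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                            # brief training loop (sketch only)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse {loss.item():.4f}")
```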
8. Visual Place Recognition in Changing Environments with Sequence Representations on the Distance-Space Domain.
- Author
- Papapetros, Ioannis Tsampikos, Kansizoglou, Ioannis, Bampis, Loukas, and Gasteratos, Antonios
- Subjects
- ROBOTICS
- Abstract
Navigating in a perpetually changing world can provide the basis for numerous challenging autonomous robotic applications. With a view to long-term autonomy, visual place recognition (vPR) systems should be able to robustly operate under extreme appearance changes in their environment. Typically, the utilized data representations are heavily influenced by those changes, negatively affecting the vPR performance. In this article, we propose a sequence-based technique that decouples such changes from the similarity estimation procedure. This is achieved by remapping the sequential representation data into the distance-space domain, i.e., a domain in which we solely consider the distances between image instances, and subsequently normalize them. In such a way, perturbations related to different environmental conditions and embedded into the original representation vectors are avoided, therefore the scene recognition efficacy is enhanced. We evaluate our framework under multiple different instances, with results indicating a significant performance improvement over other approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
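One plausible reading of the distance-space idea described above is to represent each query frame by its vector of distances to the reference map and then normalize that vector, so a global appearance shift cancels out before matching. The snippet below is only a generic rendering of that idea, not the paper's exact remapping; the per-frame standardization, descriptor size, and toy data are assumptions.

```python
import numpy as np

def distance_space_descriptor(query_seq, reference_db):
    """Map a sequence of image descriptors into a distance-space representation.

    query_seq    : (L, d) descriptors of consecutive query frames
    reference_db : (N, d) descriptors of the reference map
    Returns an (L, N) matrix where each row is a query frame's normalized
    distance profile against the whole reference set.
    """
    # pairwise Euclidean distances between query frames and reference images
    dists = np.linalg.norm(query_seq[:, None, :] - reference_db[None, :, :], axis=-1)
    # per-frame standardization removes condition-dependent offset and scale
    mu = dists.mean(axis=1, keepdims=True)
    sigma = dists.std(axis=1, keepdims=True) + 1e-8
    return (dists - mu) / sigma

# toy example: a 5-frame query sequence against a 100-image reference map
rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 256))
query = ref[40:45] + rng.normal(scale=0.8, size=(5, 256))   # revisit under appearance change
profile = distance_space_descriptor(query, ref)
print(profile.argmin(axis=1))        # nearest reference index per query frame
```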
9. A Hybrid Spiking Neural Network Reinforcement Learning Agent for Energy-Efficient Object Manipulation †.
- Author
- Oikonomou, Katerina Maria, Kansizoglou, Ioannis, and Gasteratos, Antonios
- Subjects
- ARTIFICIAL neural networks, OBJECT manipulation, REINFORCEMENT learning, INDUSTRIAL robots, ROBOTICS, CONGREGATE housing
- Abstract
Due to the widespread use of robotics technologies in everyday activities, from industrial automation to domestic assisted living applications, cutting-edge techniques such as deep reinforcement learning are intensively investigated with the aim of advancing the technological robotics front. The mandatory limitation of power consumption remains an open challenge in contemporary robotics, especially in real-case applications. Spiking neural networks (SNN) constitute an ideal compromise as a strong computational tool with low-power capacities. This paper introduces a spiking neural network actor for a baseline robotic manipulation task using a dual-finger gripper. To achieve that, we used a hybrid deep deterministic policy gradient (DDPG) algorithm designed with a spiking actor and a deep critic network to train the robotic agent. Thus, the agent learns to obtain the optimal policies for the three main tasks of the robotic manipulation approach: target-object reach, grasp, and transfer. The proposed method retains one of the main advantages that an SNN possesses, namely its capacity for neuromorphic hardware implementation, which results in energy-efficient deployments. The latter is clearly demonstrated in the evaluation results of the SNN actor, since the deep critic network was exploited only during training. Aiming to further display the capabilities of the introduced approach, we compare our model with the well-established DDPG algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
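The hybrid scheme above keeps the standard DDPG critic update while the actor is spiking. The skeleton below shows the generic DDPG update that such a hybrid plugs into; a small MLP merely stands in for the spiking actor's rate-coded output, and the network sizes, learning rates, and Polyak factor are placeholder assumptions.

```python
import torch
import torch.nn as nn

def mlp(sizes, act=nn.ReLU, out_act=nn.Identity):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]),
                   act() if i < len(sizes) - 2 else out_act()]
    return nn.Sequential(*layers)

obs_dim, act_dim, gamma = 12, 4, 0.99
# the paper's actor is a spiking network; an MLP stands in for mu(s) here
actor, critic = mlp([obs_dim, 64, act_dim], out_act=nn.Tanh), mlp([obs_dim + act_dim, 64, 1])
actor_t, critic_t = mlp([obs_dim, 64, act_dim], out_act=nn.Tanh), mlp([obs_dim + act_dim, 64, 1])
actor_t.load_state_dict(actor.state_dict()); critic_t.load_state_dict(critic.state_dict())
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s2, done):
    """One DDPG update on a batch of transitions (deep critic, actor of any kind)."""
    with torch.no_grad():                                         # bootstrapped target
        y = r + gamma * (1 - done) * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    c_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    a_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()      # policy gradient via critic
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    for t_net, net in ((actor_t, actor), (critic_t, critic)):     # Polyak-averaged targets
        for p_t, p in zip(t_net.parameters(), net.parameters()):
            p_t.data.mul_(0.995).add_(0.005 * p.data)

# toy batch of transitions
batch = 32
s, s2 = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
a = torch.rand(batch, act_dim) * 2 - 1
r, done = torch.randn(batch, 1), torch.zeros(batch, 1)
ddpg_step(s, a, r, s2, done)
```

Because only the actor is queried at run time, discarding the deep critic after training is what lets the deployed policy stay within the low-power budget the abstract emphasizes.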
10. Deep Feature Space: A Geometrical Perspective.
- Author
- Kansizoglou, Ioannis, Bampis, Loukas, and Gasteratos, Antonios
- Subjects
- REINFORCEMENT learning, VECTOR spaces, IMPLICIT bias
- Abstract
One of the most prominent attributes of Neural Networks (NNs) is their capability of learning to extract robust and descriptive features from high dimensional data, like images. Hence, such an ability renders their exploitation as feature extractors particularly frequent in an abundance of modern reasoning systems. Their application scope mainly includes complex cascade tasks, like multi-modal recognition and deep Reinforcement Learning (RL). However, NNs induce implicit biases that are difficult to avoid or to deal with and are not met in traditional image descriptors. Moreover, the lack of knowledge for describing the intra-layer properties, and thus their general behavior, restricts the further applicability of the extracted features. With the paper at hand, a novel way of visualizing and understanding the vector space before the NNs’ output layer is presented, aiming to enlighten the deep feature vectors’ properties under classification tasks. Main attention is paid to the nature of overfitting in the feature space and its adverse effect on further exploitation. We present the findings that can be derived from our model’s formulation and we evaluate them on realistic recognition scenarios, demonstrating its merit by improving the obtained results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
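A quick way to probe the feature-space geometry the abstract above studies is to measure the angle between each penultimate-layer feature vector and the output-layer weight vector of its class. The helper below does exactly that; the random features and weights are a stand-in for activations extracted from a trained classifier.

```python
import numpy as np

def class_angle_stats(features, labels, class_weights):
    """Angle (degrees) between each feature vector and its class weight vector.

    features      : (n, d) penultimate-layer activations
    labels        : (n,)   ground-truth class indices
    class_weights : (C, d) rows of the output layer's weight matrix
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos = np.clip(np.einsum("nd,nd->n", f, w[labels]), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 128))                            # 10 classes, 128-d feature space
y = rng.integers(0, 10, size=500)
feats = W[y] + rng.normal(scale=2.0, size=(500, 128))     # features scattered around class weights
angles = class_angle_stats(feats, y, W)
print(f"mean angle to own class weight: {angles.mean():.1f} deg")
```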
11. Continuous Emotion Recognition for Long-Term Behavior Modeling through Recurrent Neural Networks †.
- Author
- Kansizoglou, Ioannis, Misirlis, Evangelos, Tsintotas, Konstantinos, and Gasteratos, Antonios
- Subjects
- EMOTION recognition, RECURRENT neural networks, ARTIFICIAL neural networks, NONVERBAL cues, EMOTIONAL state, HUMAN-robot interaction
- Abstract
One's internal state is mainly communicated through nonverbal cues, such as facial expressions, gestures and tone of voice, which in turn shape the corresponding emotional state. Hence, emotions can be effectively used, in the long term, to form an opinion of an individual's overall personality. The latter can be capitalized on in many human–robot interaction (HRI) scenarios, such as in the case of an assisted-living robotic platform, where a human's mood may entail the adaptation of a robot's actions. To that end, we introduce a novel approach that gradually maps and learns the personality of a human, by conceiving and tracking the individual's emotional variations throughout their interaction. The proposed system extracts the facial landmarks of the subject, which are used to train a suitably designed deep recurrent neural network architecture. The above architecture is responsible for estimating the two continuous coefficients of emotion, i.e., arousal and valence, following the broadly known Russell's model. Finally, a user-friendly dashboard is created, presenting both the momentary and the long-term fluctuations of a subject's emotional state. Therefore, we propose a handy tool for HRI scenarios, where a robot's activity adaptation is needed for enhanced interaction performance and safety. [ABSTRACT FROM AUTHOR]
- Published
- 2022
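The pipeline above maps facial-landmark sequences to continuous arousal and valence. Below is a minimal sketch of such a recurrent regressor; the GRU cell, hidden size, sequence length, and the tanh output range are assumptions standing in for the paper's own architecture.

```python
import torch
import torch.nn as nn

class EmotionGRU(nn.Module):
    """Maps a sequence of facial-landmark frames to continuous (valence, arousal)."""
    def __init__(self, n_landmarks=68, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=n_landmarks * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                              # x: (batch, T, n_landmarks * 2)
        out, _ = self.gru(x)
        return torch.tanh(self.head(out[:, -1]))       # valence, arousal in [-1, 1]

# toy batch: 4 clips of 90 frames, 68 (x, y) landmarks flattened per frame
x = torch.randn(4, 90, 68 * 2)
print(EmotionGRU()(x))                                 # one (valence, arousal) pair per clip
```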
12. An Active Learning Paradigm for Online Audio-Visual Emotion Recognition.
- Author
- Kansizoglou, Ioannis, Bampis, Loukas, and Gasteratos, Antonios
- Abstract
The advancement of Human-Robot Interaction (HRI) drives research into the development of advanced emotion identification architectures that fathom audio-visual (A-V) modalities of human emotion. State-of-the-art methods in multi-modal emotion recognition mainly focus on the classification of complete video sequences, leading to systems with no online capabilities. Such techniques are capable of predicting emotions only when the videos are concluded, thus restricting their applicability in practical scenarios. This article provides a novel paradigm for online emotion classification, which exploits both audio and visual modalities and produces a responsive prediction when the system is confident enough. We propose two deep Convolutional Neural Network (CNN) models for extracting emotion features, one for each modality, and a Deep Neural Network (DNN) for their fusion. In order to conceive the temporal quality of human emotion in interactive scenarios, we train in cascade a Long Short-Term Memory (LSTM) layer and a Reinforcement Learning (RL) agent that monitors the speaker, stopping feature extraction and making the final prediction. The comparison of our results on two publicly available A-V emotional datasets, viz., RML and BAUM-1s, against other state-of-the-art models, demonstrates the beneficial capabilities of our work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
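The paper above trains an RL agent to decide when to stop watching and commit to a prediction. As a simple stand-in for that learned stopping policy, the sketch below uses a plain confidence threshold to illustrate the online, early-decision loop; the running fusion rule, threshold, and toy probability stream are assumptions.

```python
import numpy as np

def online_decision(step_probs, threshold=0.9):
    """Emit a prediction as soon as the fused per-step class posterior is confident.

    step_probs : iterable of (n_classes,) probability vectors arriving over time
    Returns (predicted_class, step_index); falls back to the last step if the
    threshold is never reached.
    """
    probs = None
    for t, p in enumerate(step_probs):
        probs = p if probs is None else 0.5 * probs + 0.5 * p   # running fusion (sketch)
        if probs.max() >= threshold:
            return int(probs.argmax()), t                       # early, confident decision
    return int(probs.argmax()), t

# toy stream: confidence in class 2 grows as more audio-visual frames arrive
stream = [np.array([0.30, 0.30, 0.40]), np.array([0.20, 0.20, 0.60]),
          np.array([0.05, 0.05, 0.90]), np.array([0.02, 0.03, 0.95])]
print(online_decision(stream, threshold=0.65))   # -> (2, 2): decides before the clip ends
```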
13. Attention! A Lightweight 2D Hand Pose Estimation Approach.
- Author
- Santavas, Nicholas, Kansizoglou, Ioannis, Bampis, Loukas, Karakasis, Evangelos, and Gasteratos, Antonios
- Abstract
Vision-based human pose estimation is a non-invasive technology for Human-Computer Interaction (HCI). The direct use of the hand as an input device provides an attractive interaction method, with no need for specialized sensing equipment, such as exoskeletons or gloves, beyond a camera. Traditionally, HCI is employed in various applications spanning areas including manufacturing, surgery, the entertainment industry and architecture, to mention a few. Deployment of vision-based human pose estimation algorithms can give a breath of innovation to these applications. In this article, we present a novel Convolutional Neural Network architecture, reinforced with a Self-Attention module. Our proposed model can be deployed on an embedded system due to its lightweight nature with just 1.9 million parameters. The source code and qualitative results are publicly available. [ABSTRACT FROM AUTHOR]
- Published
- 2021
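A self-attention block over the spatial positions of a CNN feature map is the kind of module the entry above adds to a lightweight backbone. The sketch below follows the common non-local/SAGAN-style formulation rather than the paper's exact module; channel reduction factor, residual gating, and tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Lightweight self-attention over the spatial positions of a CNN feature map."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 8, 1)
        self.q = nn.Conv2d(channels, reduced, 1)
        self.k = nn.Conv2d(channels, reduced, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))      # learnable residual weight

    def forward(self, x):                              # x: (B, C, H, W)
        B, C, H, W = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        k = self.k(x).flatten(2)                       # (B, C', HW)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)   # (B, HW, HW)
        v = self.v(x).flatten(2)                       # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(B, C, H, W)
        return x + self.gamma * out                    # residual connection

feat = torch.randn(2, 32, 28, 28)                      # e.g., mid-level hand-image features
print(SpatialSelfAttention(32)(feat).shape)            # torch.Size([2, 32, 28, 28])
```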
14. MARMA: A Mobile Augmented Reality Maintenance Assistant for Fast-Track Repair Procedures in the Context of Industry 4.0.
- Author
- Konstantinidis, Fotios K., Kansizoglou, Ioannis, Santavas, Nicholas, Mouroutsos, Spyridon G., and Gasteratos, Antonios
- Subjects
- AUGMENTED reality, INDUSTRY 4.0, MANUFACTURING processes, COMPUTER vision, SYSTEM integration
- Abstract
The integration of exponential technologies in the traditional manufacturing processes constitutes a noteworthy trend of the past two decades, aiming to reshape the industrial environment. This kind of digital transformation, which is driven by the Industry 4.0 initiative, not only affects the individual manufacturing assets but also the involved human workforce. Since human operators should be placed in the centre of this revolution, they ought to be endowed with new tools and through-engineering solutions that improve their efficiency. In addition, vivid visualization techniques must be utilized, in order to support them during their daily operations in an auxiliary and comprehensive way. Towards this end, we describe a user-centered methodology, which utilizes augmented reality (AR) and computer vision (CV) techniques, supporting low-skilled operators in maintenance procedures. The described mobile augmented reality maintenance assistant (MARMA) makes use of the handheld device's camera to locate the asset on the shop floor and generate AR maintenance instructions. We evaluate the performance of MARMA in a real use case scenario, using an automotive industrial asset provided by a collaborative manufacturer. During the evaluation procedure, manufacturer experts confirmed its contribution as an application that can effectively support the maintenance engineers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
15. Do Neural Network Weights Account for Classes Centers?
- Author
- Kansizoglou I, Bampis L, and Gasteratos A
- Abstract
The exploitation of deep neural networks (DNNs) as descriptors in feature learning challenges has enjoyed apparent popularity over the past few years. The above tendency focuses on the development of effective loss functions that ensure both high feature discrimination among different classes, as well as low geodesic distance between the feature vectors of a given class. The vast majority of contemporary works base their formulation on an empirical assumption about the feature space of a network's last hidden layer, claiming that the weight vector of a class accounts for its geometrical center in the studied space. The article at hand follows a theoretical approach and indicates that the aforementioned hypothesis is not necessarily met. This fact raises stability issues regarding the training procedure of a DNN, as shown in our experimental study. Consequently, a specific symmetry is proposed and studied both analytically and empirically that satisfies the above assumption, addressing the established convergence issues. More specifically, the aforementioned symmetry suggests that all weight vectors are of unit norm, coplanar, and that their vector sum equals zero. Such a layout is proven to ensure a more stable learning curve compared against the corresponding ones achieved by popular models in the field of feature learning.
- Published
- 2023
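The symmetry stated above (unit-norm, coplanar weight vectors summing to zero) can be constructed directly by spreading the class weights at equal angles on a circle inside a 2-D plane of the feature space. The snippet below is only a constructive check of those three conditions, not the paper's training procedure; the class count, dimensionality, and random plane are arbitrary.

```python
import numpy as np

def symmetric_class_weights(num_classes, dim, seed=0):
    """Unit-norm, coplanar class weight vectors whose sum is zero.

    The vectors sit at equal angles on a circle lying in a random 2-D plane
    of the feature space, which satisfies all three conditions at once.
    """
    rng = np.random.default_rng(seed)
    basis, _ = np.linalg.qr(rng.normal(size=(dim, 2)))            # orthonormal plane basis
    angles = 2 * np.pi * np.arange(num_classes) / num_classes
    circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (C, 2) points on the circle
    return circle @ basis.T                                       # (C, dim) embedded in R^dim

W = symmetric_class_weights(num_classes=10, dim=128)
print(np.allclose(np.linalg.norm(W, axis=1), 1.0))   # unit norm
print(np.allclose(W.sum(axis=0), 0.0))               # vector sum is zero
```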
16. Gait analysis comparison between manual marking, 2D pose estimation algorithms, and 3D marker-based system.
- Author
- Menychtas D, Petrou N, Kansizoglou I, Giannakou E, Grekidis A, Gasteratos A, Gourgoulis V, Douda E, Smilios I, Michalopoulou M, Sirakoulis GC, and Aggelousis N
- Abstract
Introduction: Recent advances in Artificial Intelligence (AI) and Computer Vision (CV) have led to automated pose estimation algorithms using simple 2D videos. This has created the potential to perform kinematic measurements without the need for specialized, and often expensive, equipment. Even though there is a growing body of literature on the development and validation of such algorithms for practical use, they have not been adopted by health professionals. As a result, manual video annotation tools remain quite common. Part of the reason is that the pose estimation modules can be erratic, producing errors that are difficult to rectify. Because of that, health professionals prefer the use of tried-and-true methods despite the time and cost savings pose estimation can offer.
Methods: In this work, the gait cycle of a sample of the elderly population on a split-belt treadmill is examined. The Openpose (OP) and Mediapipe (MP) AI pose estimation algorithms are compared to joint kinematics from a marker-based 3D motion capture system (Vicon), as well as from a video annotation tool designed for biomechanics (Kinovea). Bland-Altman (B-A) graphs and Statistical Parametric Mapping (SPM) are used to identify regions of statistically significant difference.
Results: Results showed that pose estimation can achieve motion tracking comparable to marker-based systems but struggles to identify joints that exhibit small, but crucial, motion.
Discussion: Joints such as the ankle can suffer from misidentification of their anatomical landmarks. Manual tools do not have that problem, but the user will introduce a static offset across the measurements. It is proposed that an AI-powered video annotation tool that allows the user to correct errors would bring the benefits of pose estimation to professionals at a low cost.
Competing Interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
(© 2023 Menychtas, Petrou, Kansizoglou, Giannakou, Grekidis, Gasteratos, Gourgoulis, Douda, Smilios, Michalopoulou, Sirakoulis and Aggelousis.)
- Published
- 2023
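The study above relies on Bland-Altman agreement analysis to compare pose-estimated joint angles against the marker-based reference. The helper below computes the standard Bland-Altman quantities (bias and 95% limits of agreement); the synthetic "one gait cycle" angles with a small offset and noise are illustrative, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement methods.

    a, b : same-length arrays of paired measurements (e.g., knee flexion angles
           per gait-cycle sample from a pose estimator vs. a marker-based system).
    Returns the mean difference (bias) and the 95% limits of agreement.
    """
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# toy example: pose-estimated angles with a small offset and noise vs. the reference
rng = np.random.default_rng(0)
reference = 60 + 10 * np.sin(np.linspace(0, 2 * np.pi, 101))   # one normalized gait cycle
estimated = reference + 2.0 + rng.normal(0, 1.5, reference.size)
bias, (lo, hi) = bland_altman(estimated, reference)
print(f"bias {bias:.2f} deg, 95% limits of agreement [{lo:.2f}, {hi:.2f}] deg")
```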