10 results on '"Ajili, Insaf"'
Search Results
2. Expressive motions recognition and analysis with learning and statistical methods
- Author
- Ajili, Insaf, Ramezanpanah, Zahra, Mallem, Malik, and Didier, Jean-Yves
- Published
- 2019
- Full Text
- View/download PDF
3. Robust human action recognition system using Laban Movement Analysis
- Author
- Ajili, Insaf, Mallem, Malik, and Didier, Jean-Yves
- Published
- 2017
- Full Text
- View/download PDF
4. Relevant LMA Features for Human Motion Recognition
- Author
- Ajili, Insaf, Mallem, Malik, Didier, Jean-Yves, Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), and Université d'Évry-Val-d'Essonne (UEVE)
- Subjects
Discriminative LMA features, Random Forest, [INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], Features Reduction, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Human motion recognition
- Abstract
Motion recognition from videos is a complex task due to the high variability of motions. This paper addresses the challenges of human motion recognition, in particular the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets. (A minimal sketch of this feature-selection step appears after this record.)
- Published
- 2018
- Full Text
- View/download PDF
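The feature-selection step described in the abstract of record 4 can be illustrated with a short sketch: train a Random Forest on the descriptor vectors, rank features by importance, and keep only the strongest ones. This is a minimal sketch assuming scikit-learn; the synthetic data, the feature names, and the 0.01 threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: rank descriptor features with Random Forest importances and
# keep only the most discriminative ones (stand-in for the paper's LMA features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_relevant_features(X, y, feature_names, threshold=0.01):
    """Return indices and names of features whose importance exceeds threshold."""
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X, y)
    importances = forest.feature_importances_
    keep = np.flatnonzero(importances > threshold)
    ranked = keep[np.argsort(importances[keep])[::-1]]  # most important first
    return ranked, [feature_names[i] for i in ranked]

# Toy usage with random data standing in for LMA descriptor vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))    # 100 motion samples, 20 descriptor features
y = rng.integers(0, 4, size=100)  # 4 hypothetical motion classes
names = [f"lma_feat_{i}" for i in range(20)]
idx, kept = select_relevant_features(X, y, names)
print(kept[:5])
```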
5. An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
- Author
- Ajili, Insaf, Mallem, Malik, Didier, Jean-Yves, Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), and Université d'Évry-Val-d'Essonne (UEVE)
- Subjects
Motion representation, Laban Movement Analysis, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], [INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing, Human Motion Recognition, Discrete Hidden Markov Model
- Abstract
International audience; Interest in human motion recognition has increased considerably in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem that requires an effective human motion representation and an efficient learning method. In this work, we introduce a novel descriptor based on the Laban Movement Analysis technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassifications that can occur when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that use the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset. (A minimal sketch of the two-direction DHMM idea appears after this record.)
- Published
- 2018
- Full Text
- View/download PDF
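The two-direction DHMM idea described in record 5 can be sketched as follows: for each motion class, one discrete HMM is trained on the quantized observation sequences and a second on the reversed sequences, and a test sequence is assigned to the class with the highest combined log-likelihood. This is a minimal sketch, assuming hmmlearn (which provides CategoricalHMM for discrete observations) and already-quantized symbol sequences; it is not the authors' implementation.

```python
# Minimal sketch of a forward/backward pair of discrete HMMs per motion class.
import numpy as np
from hmmlearn.hmm import CategoricalHMM

def train_class_models(sequences, n_states=5):
    """sequences: list of 1-D int arrays (quantized descriptor symbols) of one class."""
    def fit(seqs):
        X = np.concatenate(seqs).reshape(-1, 1)
        lengths = [len(s) for s in seqs]
        model = CategoricalHMM(n_components=n_states, n_iter=50, random_state=0)
        model.fit(X, lengths)
        return model
    forward = fit(sequences)
    backward = fit([s[::-1].copy() for s in sequences])  # reversed sequences
    return forward, backward

def classify(seq, models_per_class):
    """models_per_class: {label: (forward_model, backward_model)}."""
    seq = np.asarray(seq).reshape(-1, 1)
    rev = seq[::-1].copy()
    scores = {label: fwd.score(seq) + bwd.score(rev)
              for label, (fwd, bwd) in models_per_class.items()}
    return max(scores, key=scores.get)  # class with highest combined log-likelihood
```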
6. Reconnaissance des gestes expressifs inspirée du modèle LMA pour une interaction naturelle homme-robot (Recognition of expressive gestures inspired by the LMA model for natural human-robot interaction)
- Author
- Ajili, Insaf, Davesne, Frédéric, Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), Université d'Évry-Val-d'Essonne (UEVE), Université Paris-Saclay, Université d'Evry-Val-d'Essonne, Malik Mallem (malik.mallem@ibisc.univ-evry.fr), and Jean-Yves Didier
- Subjects
Expressive body gestures, Gestures recognition, LMA model, Interaction homme-robot, [INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing, Emotions, Apprentissage supervisé, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], Gestes corporels expressifs, Modèle LMA, Reconnaissance des gestes, Human-robot interaction, Supervised learning
- Abstract
In this thesis, we deal with the problem of gesture recognition in a human-robot interaction context. New contributions are made on this subject. Our system recognizes human gestures based on a motion analysis method that describes movement in a precise way. As part of this study, a higher-level module is integrated to recognize a person's emotions through the movement of their body. Three approaches are carried out. The first deals with the recognition of dynamic gestures by applying the hidden Markov model (HMM) as a classification method. A local motion descriptor is implemented based on a motion analysis method, called LMA (Laban Movement Analysis), which describes the movement of the person in its different aspects. Our system is invariant to the initial positions and orientations of people. A sampling algorithm has been developed in order to reduce the size of our descriptor and also to adapt the data to hidden Markov models. A contribution is made to HMMs to analyze the movement in two directions (its natural and opposite directions) and thus improve the classification of similar gestures. Several experiments are done using public action databases, as well as our database composed of control gestures. In the second approach, an expressive gesture recognition system is set up to recognize people's emotions through their gestures. A second contribution consists of the choice of a global motion descriptor, based on the local characteristics proposed in the first approach, to describe the entire gesture. The LMA Effort component is quantified to describe the expressiveness of the gesture with its four factors (space, time, weight and flow). The classification of expressive gestures is carried out with four well-known machine learning methods (random decision forests, multilayer perceptron, and support vector machines: one-against-one and one-against-all). A comparative study is made between these four methods in order to choose the best one. The approach is validated with public databases and our database of expressive gestures. The third approach is a statistical study based on human perception to evaluate the recognition system as well as the proposed motion descriptor. This allows us to estimate the ability of our system to classify and analyze emotions as a human would. In this part, two tasks are carried out with the two classifiers (the RDF learning method, which gave the best results in the second approach, and the human classifier): the classification of emotions and the study of the importance of our motion features in discriminating each emotion. (A rough illustrative sketch of quantifying Effort-like factors appears after this record.)
- Published
- 2018
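The Effort quantification mentioned in the thesis abstract above lends itself to a rough numerical illustration: simple kinematic statistics of a joint trajectory can stand in for the four Effort factors. The specific choices below (path straightness for Space, mean speed for Time, peak acceleration for Weight, mean jerk for Flow) are assumptions made for illustration only, not the thesis's exact formulation.

```python
# Rough sketch: Effort-like factors from one joint's 3-D trajectory.
import numpy as np

def effort_features(traj, fps=30.0):
    """traj: (T, 3) array of joint positions sampled at fps; returns 4 Effort-like values."""
    dt = 1.0 / fps
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)

    speed = np.linalg.norm(vel, axis=1)
    path_len = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
    direct_len = np.linalg.norm(traj[-1] - traj[0])

    space_factor = direct_len / (path_len + 1e-9)      # direct vs. indirect
    time_factor = speed.mean()                         # sustained vs. sudden
    weight_factor = np.linalg.norm(acc, axis=1).max()  # light vs. strong
    flow_factor = np.linalg.norm(jerk, axis=1).mean()  # free vs. bound
    return np.array([space_factor, time_factor, weight_factor, flow_factor])

# Toy usage on a synthetic hand trajectory.
t = np.linspace(0.0, 1.0, 60)[:, None]
traj = np.hstack([t, np.sin(4 * np.pi * t), np.zeros_like(t)])
print(effort_features(traj))
```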
7. Human motions and emotions recognition inspired by LMA qualities
- Author
- Ajili, Insaf, primary, Mallem, Malik, additional, and Didier, Jean-Yves, additional
- Published
- 2018
- Full Text
- View/download PDF
8. Expressive motions recognition and analysis with learning and statistical methods
- Author
- Ajili, Insaf, primary, Ramezanpanah, Zahra, additional, Mallem, Malik, additional, and Didier, Jean-Yves, additional
- Published
- 2018
- Full Text
- View/download PDF
9. Gestural Human-Robot Interaction
- Author
- Ajili, Insaf, Mallem, Malik, Didier, Jean-Yves, Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), and Université d'Évry-Val-d'Essonne (UEVE)
- Subjects
Kinect, Skeleton Tracking, Gesture Recognition, [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO], ROS, [INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC], HMM, Human-Robot Interaction
- Abstract
National audience; Interactive robotics is a vast and expanding research field. Interactions must be sufficiently natural, with robots having socially acceptable behavior for humans and adapting to user expectations, thus allowing easy integration into our daily lives in various fields (science, industry, domestic, health ...). In this context, we build a system that involves interaction between a human and the NAO robot. This system is based on gesture recognition via the Kinect sensor. We choose the Hidden Markov Model (HMM) to recognize four gestures (move forward, move back, turn, and stop) in order to teleoperate the NAO robot. To improve the recognition rate, data are extracted with the Kinect depth camera under ROS, which provides a node that tracks the human skeleton. We tried to choose a feature vector as relevant as possible to be the input of the HMM. We performed 3 different experiments with two types of features extracted from the human skeleton. Experimental results indicate that the average recognition accuracy is near 100%. (A minimal sketch of the gesture-to-command mapping appears after this record.)
- Published
- 2016
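The command side of the teleoperation loop described in record 9 can be sketched in a few lines of ROS code: once a gesture label has been recognized, it is mapped to a velocity command for the robot. This is a minimal sketch only; the '/cmd_vel' topic name and the gesture-to-command mapping are assumptions, not the setup used in the paper.

```python
# Minimal sketch: map a recognized gesture label to a velocity command (ROS 1, rospy).
import rospy
from geometry_msgs.msg import Twist

# Hypothetical mapping of the four gestures to (linear x m/s, angular z rad/s).
GESTURE_TO_TWIST = {
    "move forward": (0.1, 0.0),
    "move back":    (-0.1, 0.0),
    "turn":         (0.0, 0.5),
    "stop":         (0.0, 0.0),
}

def send_command(pub, gesture):
    """Publish the velocity command associated with a recognized gesture."""
    lin, ang = GESTURE_TO_TWIST.get(gesture, (0.0, 0.0))
    msg = Twist()
    msg.linear.x = lin
    msg.angular.z = ang
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("gesture_teleop_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect
    send_command(pub, "move forward")
```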
10. Gesture recognition for humanoid robot teleoperation
- Author
- Ajili, Insaf, primary, Mallem, Malik, additional, and Didier, Jean-Yves, additional
- Published
- 2017
- Full Text
- View/download PDF