16 results for "Zaraki, A."
Search Results
2. I-CLIPS Brain: A Hybrid Cognitive System for Social Robots
- Author
-
Mazzei, Daniele, Cominelli, Lorenzo, Lazzeri, Nicole, Zaraki, Abolfazl, De Rossi, Danilo, Duff, Armin, editor, Lepora, Nathan F., editor, Mura, Anna, editor, Prescott, Tony J., editor, and Verschure, Paul F. M. J., editor
- Published
- 2014
- Full Text
- View/download PDF
3. Optimal feature set for smartphone-based activity recognition
- Author
-
Abolfazl Zaraki, Maryam Banitalebi Dehkordi, and Rossitza Setchi
- Subjects
Computational complexity theory, Computer science, Decision tree, Wearable computer, Feature selection, Machine learning, Activity recognition, Statistical classification, Feature (machine learning), Artificial intelligence, Mobile device
- Abstract
Human activity recognition using wearable and mobile devices has been used for decades to monitor humans’ daily behaviours. In recent years, as smartphones have become widely integrated into our daily lives, the use of smartphones’ built-in sensors in human activity recognition has been receiving more attention, with the smartphone accelerometer playing the main role. However, in comparison to standard machines, smartphones have limitations such as processing capability and energy consumption that should be taken into consideration when developing human activity recognition systems, and therefore a trade-off between performance and computational complexity should be considered. In this paper, we shed light on the importance of feature selection and its impact on simplifying the activity classification process, which reduces the computational complexity of the system. The novelty of this work lies in identifying the most efficient features for detecting each individual activity uniquely. In an experimental study with human users and different smartphones, we investigated how to achieve an optimal feature set with which the system complexity can be decreased while the activity recognition accuracy remains high. In the considered scenario, we instructed the participants to perform different activities, including static and dynamic activities, going up and down the stairs, and walking fast and slow, while freely holding a smartphone in their hands. To evaluate the obtained optimal feature set, we implemented two major classification algorithms, the decision tree and the Bayesian network, and investigated activity recognition accuracy for the different activities. We further evaluated the optimal feature set by comparing the performance of the activity recognition system using the optimal feature set against three feature sets taken from the state of the art.
The experimental results demonstrated that replacing a large number of conventional features with an optimal feature set has only a negligible impact on the overall activity recognition system performance, while it can significantly decrease the system’s complexity, which is essential for smartphone-based systems.
- Published
- 2021
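The feature-selection idea summarised in the abstract above can be illustrated with a minimal sketch: compute a few time-domain features over one window of accelerometer samples, then classify with a single decision rule. The feature names, the threshold, and the static/dynamic split are illustrative assumptions, not the paper's actual feature set or trained classifier.

```python
import math
import statistics

def extract_features(window):
    """Compute a small time-domain feature set from one window of
    3-axis accelerometer samples [(ax, ay, az), ...].
    These feature names are illustrative, not the paper's exact set."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    return {
        "mean_magnitude": statistics.mean(mags),
        "std_magnitude": statistics.pstdev(mags),
        "range_magnitude": max(mags) - min(mags),
    }

def classify(features, threshold=1.5):
    """A one-feature decision stump: low variation suggests a static
    activity, high variation a dynamic one. The threshold is a
    placeholder, not a calibrated value."""
    return "static" if features["std_magnitude"] < threshold else "dynamic"

# A phone at rest reads a roughly constant ~1 g magnitude; walking
# produces an oscillating magnitude (synthetic data for illustration).
static_window = [(0.0, 0.0, 9.8)] * 50
walking_window = [(0.0, 0.0, 9.8 + (5.0 if i % 2 else -5.0)) for i in range(50)]

print(classify(extract_features(static_window)))   # static
print(classify(extract_features(walking_window)))  # dynamic
```

A real pipeline would extract many such features per window and let the selection procedure prune them down to the optimal subset before training the decision tree or Bayesian network.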
4. Developing a protocol and experimental setup for using a humanoid robot to assist children with autism to develop visual perspective taking skills
- Author
-
Gabriella Lakatos, Ben Robins, Dag Sverre Syrdal, Luke Jai Wood, Kerstin Dautenhahn, and Abolfazl Zaraki
- Subjects
Technology, Cognitive Neuroscience, Autism, Human–robot interaction, Behavioral Neuroscience, Developmental Neuroscience, Artificial Intelligence, Social robotics, Assistive robotics, Social robot, Perspective (graphical), Viewpoints, Human-Computer Interaction, Robot, Psychology, Inclusion (education), Humanoid robot, Cognitive psychology
- Abstract
Visual Perspective Taking (VPT) is the ability to see the world from another person’s perspective, taking into account what they see and how they see it, drawing upon both spatial and social information. Children with autism often find it difficult to understand that other people might have perspectives, viewpoints, beliefs and knowledge that are different from their own, which is a fundamental aspect of VPT. In this research we aimed to develop a methodology to assist children with autism in developing their VPT skills using a humanoid robot, and present results from our first long-term pilot study. The games we devised were implemented with the Kaspar robot and, to our knowledge, this is the first attempt to improve the VPT skills of children with autism through playing and interacting with a humanoid robot. We describe in detail the standard pre- and post-assessments that we performed with the children in order to measure their progress, and also the inclusion criteria derived from the results for future studies in this field. Our findings suggest that some children may benefit from this approach to learning about VPT, which shows that this approach merits further investigation.
- Published
- 2019
- Full Text
- View/download PDF
5. Robot-mediated intervention can assist children with autism to develop visual perspective taking skills
- Author
-
Ben Robins, Dag Sverre Syrdal, Abolfazl Zaraki, Luke Jai Wood, Kerstin Dautenhahn, and Gabriella Lakatos
- Subjects
Technology, Cognitive Neuroscience, Applied psychology, Autism, Human–robot interaction, Behavioral Neuroscience, Developmental Neuroscience, Artificial Intelligence, Theory of mind, Intervention (counseling), Visual perspective taking, Assistive robotics, Human-Computer Interaction, Robot, Psychology
- Abstract
In this work, we tested a recently developed novel methodology to assist children with Autism Spectrum Disorder (ASD) improve their Visual Perspective Taking (VPT) and Theory of Mind (ToM) skills using the humanoid robot Kaspar. VPT is the ability to see the world from another person’s perspective, drawing upon both social and spatial information. Children with ASD often find it difficult to understand that others might have perspectives, viewpoints and beliefs that are different from their own, which is a fundamental aspect of both VPT and ToM. The games we designed were implemented as the first attempt to study if these skills can be improved in children with ASD through interacting with a humanoid robot in a series of trials. The games involved a number of different actions with the common goal of helping the children to see the world from the robot’s perspective. Children with ASD were recruited to the study according to specific inclusion criteria that were determined in a previous pilot study. In order to measure the potential impact of the games on the children, three pre- and post-tests (Smarties, Sally–Anne and Charlie tests) were conducted with the children. Our findings suggest that children with ASD can indeed benefit from this approach of robot-assisted therapy.
- Published
- 2020
6. Feature extraction and feature selection in smartphone-based activity recognition
- Author
-
Maryam Banitalebi Dehkordi, Abolfazl Zaraki, and Rossitza Setchi
- Subjects
Computer science, Feature extraction, Decision tree, Bayesian network, Networking & telecommunications, Feature selection, Machine learning, Activity recognition, Set (abstract data type), Statistical classification, Artificial intelligence
- Abstract
Nowadays, smartphones are gradually being integrated into our daily lives, and they can be considered powerful tools for monitoring human activities. However, due to the limitations of processing capability and energy consumption of smartphones compared to standard machines, a trade-off between performance and computational complexity must be considered when developing smartphone-based systems. In this paper, we shed light on the importance of feature selection and its impact on simplifying the activity classification process, which reduces the computational complexity of the system. Through an in-depth survey of the features that are widely used in state-of-the-art studies, we selected the most common features for sensor-based activity classification, namely conventional features. Then, in an experimental study with 10 participants and using 2 different smartphones, we investigated how to reduce system complexity while maintaining classification performance by replacing the conventional feature set with an optimal set. In the considered scenario, the users were instructed to perform different static and dynamic activities while freely holding a smartphone in their hands. For comparison with state-of-the-art approaches, we implemented and evaluated major classification algorithms, including the decision tree and the Bayesian network. We demonstrated that replacing the conventional feature set with an optimal set can significantly reduce the complexity of the activity recognition system with only a negligible impact on the overall system performance.
- Published
- 2020
7. Design and Evaluation of a Unique Social Perception System for Human–Robot Interaction
- Author
-
Danilo De Rossi, Lorenzo Cominelli, Roberto Garofalo, Abolfazl Zaraki, Maryam Banitalebi Dehkordi, Michael Pieroni, and Daniele Mazzei
- Subjects
Computer science, Scene analysis, Context-aware social perception, Humanoid social robots, Human-robot interaction (HRI), Meta-scene, Platform-independent system, Software, Artificial Intelligence, Human–computer interaction, Perception, Computer vision, Social robot, Robotics, Robot, Artificial intelligence, Humanoid robot, Gesture
- Abstract
A robot’s perception is essential for performing high-level tasks such as understanding, learning, and, in general, human–robot interaction (HRI). For this reason, different perception systems have been proposed for different robotic platforms in order to detect high-level features such as facial expressions and body gestures. However, due to the variety of robotics software architectures and hardware platforms, these highly customized solutions are hardly interchangeable and adaptable to different HRI contexts. In addition, most of the developed systems have one issue in common: they detect features without awareness of their real-world contexts (e.g., detecting an environmental sound and assuming that it belongs to a person who is speaking, or treating a face printed on a sheet of paper as belonging to a real subject). This paper presents a novel social perception system (SPS) that has been designed to address these issues. SPS is an out-of-the-box system that can be integrated into different robotic platforms irrespective of hardware and software specifications. SPS detects, tracks, and delivers to robots, in real time, a wide range of human- and environment-relevant features with awareness of their real-world contexts. We tested SPS in a typical HRI scenario in order to demonstrate the system’s capability to detect several high-level perceptual features, as well as its capability to be integrated into different robotic platforms. Results show the promising capability of the system in perceiving the real world across different social robotics platforms, as tested in two humanoid robots, i.e., FACE and ZENO.
- Published
- 2017
- Full Text
- View/download PDF
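The platform-independent "meta-scene" idea in the abstract above can be sketched as a perception layer that packages high-level social features, with context flags, into a serialised message any robot can consume. The field names here (`is_real_face`, `attributed_to_person`, etc.) are illustrative assumptions, not the SPS specification.

```python
import json
import time

def make_meta_scene(people, sounds):
    """Sketch of a platform-independent meta-scene message: perceived
    people and sounds, each annotated with context-awareness flags,
    serialised as JSON so any robot platform can consume it."""
    return json.dumps({
        "timestamp": time.time(),
        "people": people,   # e.g. identity, expression, position
        "sounds": sounds,   # tagged with who, if anyone, produced them
    })

# A face flagged as real (not a printed photo), and a sound attributed
# to a visible speaker: the kind of context the abstract describes.
person = {"id": 1, "expression": "happy", "is_real_face": True,
          "position_m": [1.2, 0.3, 0.0]}
sound = {"kind": "speech", "attributed_to_person": 1}

msg = make_meta_scene([person], [sound])
print(json.loads(msg)["people"][0]["expression"])  # happy
```

Serialising to a neutral format like JSON is one plausible way to keep the perception layer decoupled from any particular robot's software architecture, as the abstract's "out-of-the-box" claim suggests.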
8. A Novel Reinforcement-Based Paradigm for Children to Teach the Humanoid Kaspar Robot
- Author
-
Luke Jai Wood, Costas S. Tzafestas, Farshid Amirabdollahian, Ben Robins, Kerstin Dautenhahn, Mehdi Khamassi, Gabriella Lakatos, and Abolfazl Zaraki
- Subjects
General Computer Science, Social Psychology, Process (engineering), Autism, Control (management), Human–robot interaction, Reinforcement learning, Autonomous robotics, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Electrical and Electronic Engineering, Reinforcement learning algorithm, Reinforcement, Enthusiasm, Teaching, Robotics, Human-Computer Interaction, Philosophy, Control and Systems Engineering, Robot, Children social skills, Artificial intelligence, Psychology, Cognitive psychology
- Abstract
This paper presents a contribution that aims to test novel child–robot teaching schemes that could be used in future studies to support the development of social and collaborative skills of children with autism spectrum disorders (ASD). We present a novel experiment where the classical roles are reversed: in this scenario the children are the teachers, providing positive or negative reinforcement to the Kaspar robot in order for it to learn arbitrary associations between different toy names and the locations where they are positioned. The objective is to stimulate interaction and collaboration between children while teaching the robot, and also to provide them with tangible examples to help them understand that learning sometimes requires several repetitions. To facilitate this game, we developed a reinforcement learning algorithm enabling Kaspar to verbally convey its level of uncertainty during the learning process, so as to better inform the children about the reasons behind its successes and failures. Overall, 30 typically developing (TD) children aged between 7 and 8 (19 girls, 11 boys) and 9 children with ASD performed 25 sessions (16 for TD; 9 for ASD) of the experiment in groups, and managed to teach Kaspar all associations in 2 to 7 trials. During the course of the study Kaspar made only rare unexpected associations (2 perseverative errors and 2 win-shifts, within a total of 314 trials), primarily due to exploratory choices, and eventually reached minimal uncertainty. Thus, the robot’s behaviour was clear and consistent for the children, who all expressed enthusiasm for the experiment.
- Published
- 2019
- Full Text
- View/download PDF
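The teaching scheme in the abstract above can be sketched as a simple value learner: the child's "correct"/"wrong" feedback updates a per-location value for each toy name, and the spread between the best values gives an uncertainty the robot could verbalise. This is a simplified illustration under assumed update rules, not the paper's actual reinforcement learning algorithm.

```python
import random

class AssociationLearner:
    """Toy sketch of learning toy-name -> location associations from
    child-given reinforcement, with a reportable uncertainty level."""

    def __init__(self, toys, locations, lr=0.5):
        self.values = {t: {loc: 0.0 for loc in locations} for t in toys}
        self.lr = lr

    def guess(self, toy):
        # Pick the highest-valued location, breaking ties randomly
        # (exploration happens naturally while values are still equal).
        best = max(self.values[toy].values())
        return random.choice(
            [loc for loc, v in self.values[toy].items() if v == best])

    def reinforce(self, toy, location, reward):
        # reward is +1 (child says "correct") or -1 ("wrong").
        v = self.values[toy][location]
        self.values[toy][location] = v + self.lr * (reward - v)

    def uncertainty(self, toy):
        # Gap between best and second-best value: a small gap means the
        # robot is still unsure, and could say so out loud.
        vals = sorted(self.values[toy].values(), reverse=True)
        return 1.0 - min(1.0, vals[0] - vals[1])

random.seed(0)
learner = AssociationLearner(["car", "ball"], ["box", "shelf"])
for _ in range(5):  # the child repeatedly corrects the robot's guesses
    g = learner.guess("car")
    learner.reinforce("car", g, +1 if g == "box" else -1)
print(learner.guess("car"))  # box: the association has been taught
```

After a handful of corrections the uncertainty for "car" drops well below its initial value, mirroring how the abstract describes Kaspar reaching minimal uncertainty within a few trials.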
9. Developing Kaspar: A Humanoid Robot for Children with Autism
- Author
-
Abolfazl Zaraki, Kerstin Dautenhahn, Ben Robins, and Luke Jai Wood
- Subjects
General Computer Science, Social Psychology, User-centred design, Computer science, Humanoid social robots, Field (computer science), Autism therapy, Multidisciplinary approach, Human–computer interaction, Electrical and Electronic Engineering, Robotics, Cognitive architecture, Mechatronics, Autonomous systems, Human-Computer Interaction, Philosophy, S.I.: Embodied Interactive Robots, Control and Systems Engineering, Autism, Robot, Artificial intelligence, Humanoid robot
- Abstract
In the late 1990s, using robotic technology to assist children with Autism Spectrum Disorder (ASD) emerged as a potentially useful area of research. Since then the field of assistive robotics for children with ASD has grown considerably, with many academics trialling different robots and approaches. One such robot is the humanoid robot Kaspar, which was originally developed in 2005 and has been continually built upon since, taking advantage of technological developments along the way. A key principle in the development of Kaspar since its creation has been to ensure that all advances to the platform are driven by the requirements of the users. In this paper we discuss the development of Kaspar’s design and explain the rationale behind each change to the platform. Designing and building a humanoid robot to interact with and help children with ASD is a multidisciplinary challenge that requires knowledge of mechanical engineering, electrical engineering, Human–Computer Interaction (HCI), Child–Robot Interaction (CRI) and ASD itself. The Kaspar robot has benefited from the wealth of knowledge accrued over years of experience in robot-assisted therapy for children with ASD. By showing the journey of how the Kaspar robot has developed, we aim to assist others in the field in developing such technologies further.
- Published
- 2019
10. Effects of Previous Exposure on Children’s Perception of a Humanoid Robot
- Author
-
Luke Jai Wood, Abolfazl Zaraki, Ben Robins, Kerstin Dautenhahn, Gabriella Lakatos, and Farshid Amirabdollahian
- Subjects
Future studies, Robotics, Developmental psychology, Age and gender, Perception, Assistive robot, Robot perception, Robot, Artificial intelligence, Psychology, Humanoid robot
- Abstract
The study described in this paper investigated the effects of previous exposure to robots on children’s perception of the Kaspar robot. 166 children aged between 7 and 11 participated in the study as part of a UK Robotics Week 2018 event, in which we visited a local primary school with a number of different robotic platforms to teach the children about robotics. Children’s perception of the Kaspar robot was measured using a questionnaire following a direct interaction with the robot in a teaching scenario. Children’s previous exposure to other robots and to Kaspar itself was manipulated by controlling the order of children’s participation in the different activities during the event. Effects of age and gender were also examined. Results suggest significant effects of previous exposure and gender on children’s perception of Kaspar, while age had no significant effect. Important methodological implications for future studies are discussed.
- Published
- 2019
- Full Text
- View/download PDF
11. Designing and Evaluating a Social Gaze-Control System for a Humanoid Robot
- Author
-
Manuel Giuliani, Daniele Mazzei, Danilo De Rossi, and Abolfazl Zaraki
- Subjects
Social robot, Computer Networks and Communications, Computer science, Human Factors and Ergonomics, Gaze, Facial recognition system, Human–robot interaction, Computer Science Applications, Human-Computer Interaction, Proxemics, Nonverbal communication, Artificial Intelligence, Control and Systems Engineering, Signal Processing, Robot, Computer vision, Artificial intelligence, Humanoid robot
- Abstract
This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attention mechanism of the gaze-control system is based on features that have been proven to guide human attention: nonverbal and verbal cues, proxemics, the visual field of view, and the habituation effect. Our gaze-control system uses Kinect skeleton tracking together with speech recognition and SHORE-based facial expression recognition to detect these features. As part of a pilot evaluation, we collected the gaze behavior of 11 participants in an eye-tracking study: we showed participants videos of two-person interactions and tracked their gaze behavior. A comparison of the human gaze behavior with the behavior of our gaze-control system running on the same videos shows that it replicated human gaze behavior 89% of the time.
- Published
- 2014
- Full Text
- View/download PDF
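The attention mechanism described in the abstract above can be sketched as a weighted sum of salience cues per person, damped by an exponential habituation term so that a long-attended target loses priority. The cue names, weights, and decay constant are illustrative placeholders, not the paper's calibrated model.

```python
import math

def attention_score(target, attended_s, weights=None):
    """Combine salience cues (speaking, facing the robot, proximity)
    into one attention score, with exponential habituation over the
    time attended_s (seconds) the target has already been gazed at."""
    w = weights or {"speaking": 2.0, "facing_robot": 1.0, "proximity": 1.0}
    score = (w["speaking"] * target["speaking"]
             + w["facing_robot"] * target["facing_robot"]
             + w["proximity"] / max(target["distance_m"], 0.5))
    return score * math.exp(-0.3 * attended_s)  # habituation: novelty fades

def select_gaze_target(targets, attended_times):
    """Direct the gaze at whichever person currently scores highest."""
    return max(targets, key=lambda p: attention_score(p, attended_times[p["id"]]))

people = [
    {"id": "A", "speaking": 1, "facing_robot": 1, "distance_m": 1.5},
    {"id": "B", "speaking": 0, "facing_robot": 1, "distance_m": 1.0},
]
print(select_gaze_target(people, {"A": 0.0, "B": 0.0})["id"])  # A: the speaker wins
print(select_gaze_target(people, {"A": 8.0, "B": 0.0})["id"])  # B: habituation to A
```

The habituation term is what lets the robot's gaze move between partners in a multi-person conversation instead of locking onto the most salient person indefinitely.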
12. Damasio’s Somatic Marker for Social Robotics: Preliminary Implementation and Test
- Author
-
Roberto Garofalo, Lorenzo Cominelli, Danilo De Rossi, Daniele Mazzei, Michael Pieroni, and Abolfazl Zaraki
- Subjects
Emotion, Artificial intelligence, Cognitive systems, Decision-making, Human-inspired, Reinforcement learning, Somatic marker hypothesis, Computer Science (all), Theoretical Computer Science, Social robot, Computer science, Mechanism (biology), Process (engineering), Context (language use), Task (project management), Test (assessment), Cognitive psychology
- Abstract
How do experienced emotional states, induced by the events that emerge in our context, influence our behaviour? Are they an obstacle or a helpful assistant to our reasoning process? Antonio Damasio gave exhaustive answers to these questions through his studies of patients with brain injuries. He demonstrated how emotions guide decision-making and identified a region of the brain that has a fundamental role in this process. Antoine Bechara devised a test to validate the proper functioning of that cortical region. Inspired by Damasio's theories, we developed a mechanism in an artificial agent that enables it to represent emotional states and to exploit them to bias its decisions. We also implemented the card gambling task that Bechara used with his patients as a validation test. Finally, we put our artificial agent through this test for 100 trials. The results of this experiment are analysed and discussed, highlighting the efficiency of the implemented somatic marker mechanism and the potential impact of this system in the field of social robotics.
- Published
- 2015
- Full Text
- View/download PDF
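The somatic marker mechanism described in the abstract above can be sketched as a running emotional valence per option that biases future choices. The update rule and the deck payoffs below (loosely mirroring Bechara's card gambling task: the "bad" deck pays big but loses bigger) are illustrative assumptions, not the paper's implementation.

```python
class SomaticMarkerAgent:
    """Minimal sketch of a somatic-marker-style bias on deck choice in
    a Bechara-style card gambling task: outcomes attach a "feeling"
    to each deck, and decisions follow the best-feeling deck."""

    def __init__(self, decks, lr=0.2):
        self.markers = {d: 0.0 for d in decks}  # emotional valence per deck
        self.lr = lr

    def feel(self, deck, outcome):
        # Exponential moving average of outcomes: wins attach a good
        # feeling to the deck, losses a bad one.
        self.markers[deck] += self.lr * (outcome - self.markers[deck])

    def choose(self):
        # The decision is biased toward the best-feeling deck.
        return max(self.markers, key=self.markers.get)

# Illustrative payoff histories: the "good" deck pays small and steady,
# the "bad" deck pays big wins but bigger losses on average.
agent = SomaticMarkerAgent(["good", "bad"])
for g_out, b_out in zip([50, -25, 50, 50, -25], [100, -250, 100, -250, -250]):
    agent.feel("good", g_out)
    agent.feel("bad", b_out)

print(agent.choose())  # good: the negative marker steers it off the bad deck
```

Healthy players of the gambling task drift toward the advantageous decks in just this way, which is what the abstract's 100-trial validation checks for in the artificial agent.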
13. Recognition and expression of emotions by a symbiotic android head
- Author
-
Abolfazl Zaraki, Daniele Mazzei, Nicole Lazzeri, and Danilo De Rossi
- Subjects
Synthetic emotions, Human robot interaction, Computer science, Social robots, Social dimension, Human–computer interaction, Facial Expressions, Computer vision, Conversation, Engines, Facial expression, Social robot, Human being, Gaze, Anthropomorphic robots, Behavioral research, Robot, Android (robot), Artificial intelligence, Humanoid robot
- Abstract
The creation of social empathic communication channels between social robots and humans has started to become reality. Nowadays, the development of empathic and affective agents is giving scientists another way to explore the social dimension of human beings. In this work, we introduce the FACE humanoid project, which aims at creating a social and emotional android. FACE is an android head with an articulated neck mounted on a passive body. To enable FACE to perceive and express emotions, two dedicated engines have been developed: a sensory apparatus able to perceive the "social world", and a facial expression generation engine that allows the robot to express its synthetic emotions. The system has also been integrated with an attention-based gaze generation component that allows the robot to autonomously follow a conversation between its partners. The developed framework has been implemented and tested in several standard human-robot interaction settings. Results demonstrated the promising social capabilities of the robot in perceiving and conveying emotions to humans through the generation of perceivable emotional facial expressions and socially aligned behaviour.
- Published
- 2014
- Full Text
- View/download PDF
14. An RGB-D Based Social Behavior Interpretation System for a Humanoid Social Robot
- Author
-
Annamaria D'ursi, Manuel Giuliani, Abolfazl Zaraki, Daniele Mazzei, Maryam Banitalebi Dehkordi, and Danilo De Rossi
- Subjects
Engineering, Humanlike robot, Modality (human–computer interaction), Social robot, Human–robot interaction, Hidden Markov model, Social behavior recognition, User experience design, Rule-based machine translation, Robustness (computer science), Robot, Artificial intelligence
- Abstract
We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.
- Published
- 2014
15. I-CLIPS Brain: A Hybrid Cognitive System for Social Robots
- Author
-
Daniele Mazzei, Lorenzo Cominelli, Nicole Lazzeri, Danilo De Rossi, and Abolfazl Zaraki
- Subjects
Social robot, Computer science, Social environment, Cognition, Cognitive architecture, Expert system, Social relation, Human–computer interaction, Robot, Artificial intelligence, Media Lab Europe's social robots, Social heuristics, Humanoid robot
- Abstract
Sensing and interpreting the interlocutor’s social behaviours is a core challenge in the development of social robots. Social robots require both an innovative sensory apparatus able to perceive the “social and emotional world” in which they act and a cognitive system able to manage this incoming sensory information and plan an organized and pondered response. In order to allow scientists to design cognitive models for this new generation of social machines, it is necessary to develop control architectures that can also be used easily by researchers without programming skills, such as psychologists and neuroscientists. In this work, an innovative hybrid deliberative/reactive cognitive architecture for controlling a social humanoid robot is presented. The design and implementation of the overall architecture take inspiration from the human nervous system. In particular, the cognitive system is based on Damasio’s thesis. The architecture has been preliminarily tested with the FACE robot. A social behaviour has been modeled to make FACE able to properly follow a human subject during a basic social interaction task and to perform facial expressions as a reaction to the social context.
- Published
- 2014
16. Preliminary Implementation of Context-Aware Attention System for Humanoid Robots
- Author
-
Abolfazl Zaraki, Nicole Lazzeri, Danilo De Rossi, Daniele Mazzei, and Michael Pieroni
- Subjects
Social robot, Computer science, Scene analysis, Context (language use), Modular design, Gaze, Social relation, Data flow diagram, Human–computer interaction, Robot, Computer vision, Artificial intelligence, Context-aware attention, Gaze control, Multiparty social interaction, Humanoid robot
- Abstract
A context-aware attention system is fundamental for regulating robot behaviour in a social interaction, since it enables social robots to actively select the right environmental stimuli at the right time during a multiparty social interaction. This contribution presents a modular context-aware attention system that drives the robot's gaze. It is composed of two modules: the scene analyzer module manages the incoming data flow and provides a human-like understanding of the information coming from the surrounding environment; the attention module allows the robot to select the most important target in the perceived scene on the basis of a computational model. After describing the motivation, we describe the proposed system and report preliminary tests.
- Published
- 2013
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library