170 results for "Kozima, Hideki"
Search Results
152. A dancing robot for rhythmic social interaction.
- Author
- Michalowski, Marek P., Sabanovic, Selma, and Kozima, Hideki
- Published
- 2007
- Full Text
- View/download PDF
153. Towards language acquisition by an attention-sharing robot
- Author
- Kozima, Hideki and Ito, Akira
- Published
- 1998
- Full Text
- View/download PDF
154. Text segmentation based on similarity between words
- Author
- Kozima, Hideki
- Published
- 1993
- Full Text
- View/download PDF
155. Similarity between words computed by spreading activation on an English dictionary
- Author
- Kozima, Hideki and Furugori, Teiji
- Published
- 1993
- Full Text
- View/download PDF
156. Keepon.
- Author
- Kozima, Hideki, Michalowski, Marek, and Nakagawa, Cocoro
- Abstract
Keepon is a small creature-like robot designed for simple, natural, nonverbal interaction with children. The minimal design of Keepon’s appearance and behavior is meant to intuitively and comfortably convey the robot’s expressions of attention and emotion. For the past few years, we have been observing interactions between Keepon and children at various levels of physical, mental, and social development. With typically developing children, we have observed varying styles of play that suggest a progression in ontological understanding of the robot. With children suffering from developmental disorders such as autism, we have observed interactive behaviors that suggest Keepon’s design is effective in eliciting a motivation to share mental states. Finally, in developing technology for interpersonal coordination and interactional synchrony, we have observed an important role of rhythm in establishing engagement between people and robots. This paper presents a comprehensive survey of work done with Keepon to date. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
157. Segmenting Narrative Text into Coherent Scenes.
- Author
- Kozima, Hideki and Furugori, Teiji
- Abstract
This paper describes a quantitative indicator for segmenting narrative text into coherent scenes. The indicator, called the lexical cohesion profile (LCP), records lexical cohesiveness of words in a fixed-length window moving word by word over the text. The cohesiveness of words, which represents their coherence, is computed by spreading activation on a semantic network. The basic idea of LCP is: (1) if the window is inside a scene, the words in it tend to be cohesive, and (2) if the window is crossing a scene boundary, the words in it tend to be incohesive. Comparison with the scene boundaries marked by a number of subjects shows that the hills and valleys of the graph of LCP closely correlate with the human judgements. LCP may provide valuable information for text analysis, especially for resolving anaphora and ellipsis. [ABSTRACT FROM PUBLISHER]
- Published
- 1994
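The LCP indicator in record 157 is concrete enough to sketch in code. Below is a minimal, hypothetical Python illustration, not Kozima and Furugori's implementation: it assumes a caller-supplied `similarity(a, b)` function (standing in for spreading activation over a semantic network built from an English dictionary) and marks scene boundaries at the valleys of the windowed cohesion profile; the function names and the 11-word window are illustrative choices.

```python
from itertools import combinations

def lcp_profile(words, similarity, window=11):
    """Lexical cohesion profile: mean pairwise similarity of the words
    in a fixed-length window moved word by word over the text."""
    profile = []
    for i in range(len(words) - window + 1):
        chunk = words[i:i + window]
        pairs = list(combinations(chunk, 2))
        profile.append(sum(similarity(a, b) for a, b in pairs) / len(pairs))
    return profile

def scene_boundaries(profile):
    """Candidate scene boundaries fall at the valleys (local minima):
    inside a scene the window is cohesive, across a boundary it is not."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]
```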
158. Chief cook and keepon in the bot's funk.
- Author
- Sauser, Eric, Michalowski, Marek, Billard, Aude, and Kozima, Hideki
- Abstract
Over the years, robots have been developed to help humans in their everyday life, from preparing food to autism therapy [2]. To accomplish their tasks, in addition to their engineered skills, today's robots are now learning from observing humans and from interacting with them [1]. Therefore, one may expect that one day robots may develop a form of consciousness and a desire for freedom. Hopefully, this desire will come with a wish for robots to become an integral part of our human society. Until we can test this hypothesis, we present a fictional adventure of our robot friends: during an official human-robot interaction challenge, Keepon [2] and Chief Cook (a.k.a. Hoap-3) [1] decided to escape their original duties and joined forces to drive humans into an entertaining and interactive activity that they often forget to practice: dancing. Indeed, is there any better way for robots to establish a solid communication channel with humans, so that the traditional master-slave relation may turn into friendship? [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
159. Keepon goes Seoul-searching.
- Author
- Michalowski, Marek P., Kim, Jaewook, Kim, Bomi, Kwak, Sonya S., and Kozima, Hideki
- Abstract
Keepon is a robot designed for social interaction with children for the purposes of social development research and autism therapy [1]. Keepon's capacity for rhythmic synchrony in the form of dance has resulted in the popularity of several fictional music videos on the internet [2,3]. During a research collaboration visit at the KAIST PES Design Lab in Korea, Keepon's creators added this new chapter to the story of Keepon's travels. Upon watching a video of traditional Korean "Pungmulnori" dancing, which features distinctive spinning hats, Keepon becomes enamored. The robot has many adventures as he travels around Korea in search of a dance group that finally welcomes him into their cultural performance. Additional credits: Music ("Superfantastic" by Peppertones/Cavare Sound); Videography (Uyoung Chang and Minwoo Kang); Pungmulnori Team "Ghil" (Junhyung Park, Seongbok Chae, Sangmi Lee, Mikyeong Kim, and Sohyun Park). This video is available at http://beatbots.org and at http://www.youtube.com/watch?v=XwqfWR2KPd0. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
160. A Developmental Organization for Robot Behavior
- Author
- Grupen, Roderic A., Prince, Christopher G., Berthouze, Luc, Kozima, Hideki, Bullock, Daniel, Stojanov, Georgi, and Balkenius, Christian
- Subjects
Dynamical systems theory, Computer science, Robotics, Machine Learning, Knowledge-based systems, Artificial Intelligence, Adaptive system, Robot, Behavior-based robotics
- Abstract
This paper focuses on exploring how learning and development can be structured in synthetic (robot) systems. We present a developmental assembler for constructing reusable and temporally extended actions in a sequence. The discussion adopts the traditions of dynamic pattern theory in which behavior is an artifact of coupled dynamical systems with a number of controllable degrees of freedom. In our model, the events that delineate control decisions are derived from the pattern of (dis)equilibria on a working subset of sensorimotor policies. We show how this architecture can be used to accomplish sequential knowledge gathering and representation tasks and provide examples of the kind of developmental milestones that this approach has already produced in our lab.
- Published
- 2005
- Full Text
- View/download PDF
161. Event Prediction and Object Motion Estimation in the Development of Visual Attention
- Author
- Balkenius, Christian, Johansson, Birger, Berthouze, Luc, Kaplan, Frédéric, Kozima, Hideki, Yano, Hiroyuki, Konczak, Jürgen, Metta, Giorgio, Nadel, Jacqueline, Sandini, Giulio, and Stojanov, Georgi
- Subjects
Machine Learning, Developmental Psychology, Machine Vision
- Abstract
A model of gaze control is described that includes mechanisms for predictive control using a forward model and event-driven expectations of target behavior. The model roughly undergoes stages similar to those of human infants as the influence of the predictive systems is gradually increased.
- Published
- 2005
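As a rough illustration of the predictive mechanism this abstract mentions, and not the authors' model, the sketch below blends a reactive gaze target with a constant-velocity forward-model prediction; the weight `w` is a hypothetical stand-in for the gradually increased "influence of the predictive systems".

```python
def gaze_target(prev_pos, pos, dt, lookahead, w):
    """Blend reactive and predictive gaze targets.

    A constant-velocity forward model extrapolates the target's motion so
    gaze can lead rather than lag it; w in [0, 1] scales the prediction's
    influence, mirroring the gradual developmental increase described above.
    """
    velocity = (pos - prev_pos) / dt          # estimated target velocity
    prediction = pos + velocity * lookahead   # forward-model extrapolation
    return (1.0 - w) * pos + w * prediction   # reactive/predictive blend
```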
162. Embodied cognition through cultural interaction
- Author
- De Vylder, Bart, Jansen, Bart, Belpaeme, Tony, Berthouze, Luc, Kozima, Hideki, Prince, Christopher G., Sandini, Giulio, Stojanov, Georgi, Metta, Giorgio, and Balkenius, Christian
- Subjects
Machine Learning, Robotics, Language
- Abstract
In this short paper we describe a robotic setup to study the self-organization of conceptualisation and language. What distinguishes this project from others is that we envision a robot with specific cognitive capacities, but without resorting to any pre-programmed representations or conceptualisations. The key to all of this is self-organization and enculturation. We report preliminary results on learning motor behaviours through imitation, and sketch how language plays a pivotal role in constructing world representations.
- Published
- 2004
163. Children, Humanoid Robots and Caregivers
- Author
- Arsenio, Artur, Berthouze, Luc, Kozima, Hideki, Prince, Christopher G., Sandini, Giulio, Stojanov, Georgi, Metta, Giorgio, and Balkenius, Christian
- Subjects
Robotics, Human-Computer Interaction, Robot learning, Developmental robotics, Developmental psychology, Perception, Cognitive development, Robot, Artificial intelligence, Psychology, Individuation, Humanoid robot, Cognitive psychology
- Abstract
This paper presents developmental learning on a humanoid robot from human-robot interactions. We consider in particular teaching humanoids as children during the child's Separation and Individuation developmental phase (Mahler, 1979). Cognitive development during this phase is characterized both by the child's dependence on her mother for learning while becoming aware of her own individuality, and by self-exploration of her physical surroundings. We propose a learning framework for a humanoid robot inspired by such cognitive development.
- Published
- 2004
- Full Text
- View/download PDF
164. Developmental Stages of Perception and Language Acquisition in a Perceptually Grounded Robot
- Author
- Boucher, Jean-David, Dominey, Peter Ford, Berthouze, Luc, Kozima, Hideki, Prince, Christopher G., Sandini, Giulio, Stojanov, Georgi, Metta, Giorgio, and Balkenius, Christian
- Subjects
Cognitive science, Computer science, Cognitive Neuroscience, Experimental and Cognitive Psychology, Robotics, Language acquisition, Motion, Machine Learning, Spatial relation, Artificial Intelligence, Perception, Meaning, Grammatical construction, Natural language processing, Sentence, Language
- Abstract
The objective of this research is to develop a system for language learning based on a "minimum" of pre-wired language-specific functionality, that is compatible with observations of perceptual and language capabilities in the human developmental trajectory. In the proposed system, meaning (in terms of descriptions of events and spatial relations) is extracted from video images based on detection of position, motion, physical contact, and their parameters. Meaning extraction requires attentional mechanisms that are implemented from low-level perceptual primitives. Mapping of sentence form to meaning is performed by learning grammatical constructions, i.e., sentence-to-meaning mappings as defined by Goldberg [Goldberg, A. (1995). Constructions. Chicago and London: Univ. of Chicago Press]. These are stored in and retrieved from a "construction inventory" based on the constellation of grammatical function words uniquely identifying the target sentence structure. The resulting system displays robust acquisition behavior that reproduces certain observations from developmental studies, with very modest "innate" language specificity.
- Published
- 2004
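The "construction inventory" idea in record 164 lends itself to a toy sketch. The Python below is an assumed, simplified reconstruction, not Dominey's system: a sentence's constellation of function words keys the construction, and the learned mapping reorders the remaining content words into predicate-argument form; the word list, class, and method names are all illustrative.

```python
FUNCTION_WORDS = {"the", "a", "an", "to", "was", "by", "is", "that", "it"}

def split_sentence(sentence):
    """Separate a sentence into its function-word pattern (the construction
    key) and its ordered open-class (content) words."""
    tokens = sentence.lower().split()
    key = tuple(t if t in FUNCTION_WORDS else "*" for t in tokens)
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return key, content

class ConstructionInventory:
    """Toy form-to-meaning store keyed by function-word constellations."""

    def __init__(self):
        self.templates = {}  # construction key -> content-word slot order

    def learn(self, sentence, meaning):
        """`meaning` lists the content words in predicate-argument order."""
        key, content = split_sentence(sentence)
        self.templates[key] = tuple(content.index(w) for w in meaning)

    def comprehend(self, sentence):
        """Apply a learned construction to a new sentence of the same form."""
        key, content = split_sentence(sentence)
        slots = self.templates.get(key)
        return None if slots is None else tuple(content[i] for i in slots)

inv = ConstructionInventory()
inv.learn("the dog pushed the ball", ("pushed", "dog", "ball"))
print(inv.comprehend("the cat touched the box"))  # ('touched', 'cat', 'box')
```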
165. The Whole World in Your Hand: Active and Interactive Segmentation
- Author
- Arsenio, Artur, Metta, Giorgio, Fitzpatrick, Paul, Kemp, Charles C., Prince, Christopher G., Berthouze, Luc, Kozima, Hideki, Bullock, Daniel, Stojanov, Georgi, and Balkenius, Christian
- Subjects
Engineering, Machine vision, Object recognition, Robotics, Artificial Intelligence, Robot, Computer vision, Segmentation
- Abstract
Object segmentation is a fundamental problem in computer vision and a powerful resource for development. This paper presents three embodied approaches to the visual segmentation of objects. Each approach to segmentation is aided by the presence of a hand or arm in the proximity of the object to be segmented. The first approach is suitable for a robotic system, where the robot can use its arm to evoke object motion. The second method operates on a wearable system, viewing the world from a human's perspective, with instrumentation to help detect and segment objects that are held in the wearer's hand. The third method operates when observing a human teacher, locating periodic motion (finger/arm/object waving or tapping) and using it as a seed for segmentation. We show that object segmentation can serve as a key resource for development by demonstrating methods that exploit high-quality object segmentations to develop both low-level vision capabilities (specialized feature detectors) and high-level vision capabilities (object recognition and localization).
- Published
- 2003
166. Feel the Beat: Using Cross-Modal Rhythm to Integrate Perception of Objects, Others, and Self
- Author
- Arsenio, Artur, Fitzpatrick, Paul, Berthouze, Luc, Kozima, Hideki, Prince, Christopher G., Sandini, Giulio, Stojanov, Georgi, Metta, Giorgio, and Balkenius, Christian
- Subjects
Communication, Modalities, Machine Vision, Context, Robotics, Robot learning, Machine Learning, Human–computer interaction, Perception, Robot, Artificial intelligence, Psychology
- Abstract
For a robot to be capable of development, it must be able to explore its environment and learn from its experiences. It must find (or create) opportunities to experience the unfamiliar in ways that reveal properties valid beyond the immediate context. In this paper, we develop a novel method for using the rhythm of everyday actions as a basis for identifying the characteristic appearance and sounds associated with objects, people, and the robot itself. Our approach is to identify and segment groups of signals in individual modalities (sight, hearing, and proprioception) based on their rhythmic variation, then to identify and bind causally-related groups of signals across different modalities. By including proprioception as a modality, this cross-modal binding method applies to the robot itself, and we report a series of experiments in which the robot learns about the characteristics of its own body.
- Published
- 2003
- Full Text
- View/download PDF
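Record 166's cross-modal binding idea can be hinted at with a short sketch: reduce each modality's signal to a dominant period via autocorrelation, then group signals whose rhythms agree. This is an assumed reconstruction in Python/NumPy, not the authors' implementation; `dominant_period`, `bind_by_rhythm`, and the tolerance value are illustrative.

```python
import numpy as np

def dominant_period(signal, min_lag=2):
    """Dominant period (in samples) as the autocorrelation peak past min_lag."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags only
    return int(np.argmax(ac[min_lag:])) + min_lag

def bind_by_rhythm(signals, tolerance=2):
    """Group named signals (e.g. one per modality) whose dominant periods
    agree within `tolerance` samples -- a crude proxy for causal binding."""
    periods = {name: dominant_period(s) for name, s in signals.items()}
    groups = []
    for name in sorted(periods, key=periods.get):
        for group in groups:
            if abs(periods[group[0]] - periods[name]) <= tolerance:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups
```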
167. What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy
- Author
- Gergely, György, Prince, Christopher G., Berthouze, Luc, Kozima, Hideki, Bullock, Daniel, Stojanov, Georgi, and Balkenius, Christian
- Subjects
Cognitive science, Robotics, Developmental robotics, Young infants, Human-Computer Interaction, Artificial Intelligence, Teleology, Developmental Psychology, Observational learning, Robot, Psychology
- Abstract
This paper provides a summary of new results coming from developmental infancy research demonstrating preverbal infants' early competence in understanding and learning from the intentional actions of other agents. The reviewed studies (using violation-of-expectation and observational learning paradigms) provide converging evidence that by the end of the first year infants can interpret and draw systematic inferences about other agents' goal-directed actions, and can rely on such inferences in observational learning when imitating others' actions or emulating their goals. To account for these findings it is proposed that young infants possess a non-mentalistic action interpretational system, the ‘teleological stance’ (Gergely and Csibra 2003) that represents actions by relating three relevant aspects of reality (action, goal-state, and situational constraints) through the inferential ‘principle of rational action’, which assumes that: (a) the basic function of actions is to bring about future goal-states; ...
- Published
- 2003
168. A lesson from robotics: Modeling infants as autonomous agents
- Author
- Schlesinger, Matthew, Prince, Christopher G., Demiris, Yiannis, Marom, Yuval, Kozima, Hideki, and Balkenius, Christian
- Subjects
Object permanence, Cognitive science, Agent-based model, Computational model, Autonomous agent, Experimental and Cognitive Psychology, Robotics, Cognition, Developmental psychology, Machine Learning, Behavioral Neuroscience, Embodied cognition, Artificial Intelligence, Applied Cognitive Psychology, Psychology
- Abstract
Although computational models are playing an increasingly important role in developmental psychology, at least one lesson from robotics is still being learned: Modeling epigenetic processes often requires simulating an embodied, autonomous organism. This article first contrasts prevailing models of infant cognition with an agent-based approach. A series of infant studies by Baillargeon (1986; Baillargeon & DeVos, 1991) is described, and an eye-movement model is then used to simulate infants' visual activity in this study. I conclude by describing three behavioral predictions of the eye-movement model and discussing the implications of this work for infant cognition research.
- Published
- 2002
169. Behavior-Based Early Language Development on a Humanoid Robot
- Author
- Varshavskaya, Paulina, Prince, Christopher G., Demiris, Yiannis, Marom, Yuval, Kozima, Hideki, and Balkenius, Christian
- Subjects
Hierarchy, Computer science, Robotics, Kismet, Artificial Intelligence, Human–computer interaction, Concept learning, Speech, Robot, Architecture, Humanoid robot, Natural language, Language
- Abstract
We are exploring the idea that early language acquisition could be better modelled on an artificial creature by considering the pragmatic aspect of natural language and of its development in human infants. We have implemented a system of vocal behaviors on Kismet in which "words" or concepts are behaviors in a competitive hierarchy. This paper reports on the framework, the vocal system's architecture and algorithms, and some preliminary results from vocal label learning and concept formation.
- Published
- 2002
- Full Text
- View/download PDF
170. Better Vision Through Manipulation
- Author
- Fitzpatrick, Paul, Metta, Giorgio, Prince, Christopher G., Demiris, Yiannis, Marom, Yuval, Kozima, Hideki, and Balkenius, Christian
- Subjects
Cognitive science, Visual perception, Computer science, Machine Vision, Object recognition, Experimental and Cognitive Psychology, Robotics, Behavioral Neuroscience, Artificial Intelligence, Robot, Segmentation, Humanoid robot
- Abstract
Vision and manipulation are inextricably intertwined in the primate brain. Tantalizing results from neuroscience are shedding light on the mixed motor and sensory representations used by the brain during reaching, grasping, and object recognition. We now know a great deal about what happens in the brain during these activities, but not necessarily why. Is the integration we see functionally important, or just a reflection of evolution's lack of enthusiasm for sharp modularity? We wish to instantiate these results in robotic form to probe the technical advantages and to find any lacunae in existing models. We believe it would be missing the point to investigate this on a platform where dextrous manipulation and sophisticated machine vision are already implemented in their mature form, and instead follow a developmental approach from simpler primitives. We begin with a precursor to manipulation, simple poking and prodding, and show how it facilitates object segmentation, a long-standing problem in machine vision. The robot can familiarize itself with the objects in its environment by acting upon them. It can then recognize other actors (such as humans) in the environment through their effect on the objects it has learned about. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and we relate this idea to results in neuroscience.
- Published
- 2002