42 results on '"Nobuji Tetsutani"'
Search Results
2. Study on visual perception for beta motion of annular ring under peripheral vision
- Author
-
Nobuji Tetsutani, Hiroto Inoue, Kohei Kajiwara, and Hikaru Shibata
- Subjects
Physics, Ring (mathematics), Visual perception, Optics, Peripheral vision, Retinal eccentricity, Motion (physics) - Abstract
The authors have found a phenomenon in which blinking speed appears faster when beta motion is viewed in peripheral vision. In this paper, we focus on beta motion arranged on a circle and report an experiment on how it appears in peripheral vision. The experiment shows that the apparent speed increases with retinal eccentricity, regardless of the annular ring size. In addition, as the retinal eccentricity in the horizontal direction increases, the apparent shape of the beta motion tends to deviate from the annular ring shape.
- Published
- 2019
- Full Text
- View/download PDF
3. Study on evaluation method for 3D perspective image
- Author
-
Seiya Iwasaki and Nobuji Tetsutani
- Subjects
2D images, Computer science, Perspective (graphical), Stereoscopy, Evaluation methods, Computer vision, Artificial intelligence - Abstract
We previously described an evaluation method for stereoscopic images that uses perspective images with continuity, in which depth is easily perceived even in 2D images. In this paper, we propose a new 3D evaluation method using reversed images and an evaluation item of incongruity. For the perspective image, the results confirm the effectiveness of the "incongruity" item among the three evaluation items: depth feeling, stereoscopic feeling, and incongruity.
- Published
- 2019
- Full Text
- View/download PDF
4. Analyzing growth potential of young researchers’ h-Index and co-author networks in biological science fields
- Author
-
Kenta Ishido, Hiroto Inoue, Nobuji Tetsutani, Masanori Fujita, and Takao Terano
- Subjects
Betweenness centrality, Computer science, Artificial intelligence, Machine learning - Abstract
The h-index is frequently used to evaluate researchers. However, it is not suitable for young researchers, who typically have only a few prior achievements. In this paper, we evaluate the potential of JSPS research fellows using the h-index together with the betweenness centrality of co-author networks. The results show that, compared with the h-index alone, the new measure appropriately evaluates the potential of young researchers at an early stage.
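As a concrete point of reference, the h-index the paper builds on is simple to compute. The sketch below is illustrative plain Python with made-up citation counts; the betweenness-centrality component would be computed on the co-author graph and is not shown here.

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has at least
    h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # this paper still supports a larger h
            h = rank
        else:
            break
    return h
```

For example, a young researcher with citation counts `[10, 8, 5, 4, 3]` has an h-index of 4, while a single well-cited paper still only yields 1, which is why the paper supplements the index with network centrality.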
- Published
- 2019
- Full Text
- View/download PDF
5. Research on Troxler effect focusing on gazing time for edge blurred stripe image
- Author
-
Takahiro Miyamoto, Hiroto Inoue, and Nobuji Tetsutani
- Subjects
Physics, Visual perception, Illusion, Mach bands, Troxler's fading, Optics, Contrast (vision) - Abstract
Three visual phenomena are known to occur at an edge with a density change: (a) the Mach band, (b) the Chevreul illusion, and (c) the Craik-O'Brien-Cornsweet effect. These are generally considered to be distinct phenomena. In our previous research, we found that varying the blurred edge width changed the density values of the image and reduced the contrast by up to approximately 20%. This made it possible to explain, to some extent, the relationship among the three phenomena in terms of edge width. Furthermore, in research on gazing time and the edge-blurring phenomenon, we confirmed that low-contrast edges disappear. In this paper, we focus on gazing time and conduct an evaluation experiment on how strongly the Troxler effect occurs, using the density difference of the presented image as a parameter. We also evaluate the Mach band under the same conditions. These experiments yield results that clarify the relationship among the four phenomena.
- Published
- 2019
- Full Text
- View/download PDF
6. Depth Estimation and View Synthesis for Immersive Media
- Author
-
Hiroshi Yasuda, Takanori Senoh, and Nobuji Tetsutani
- Subjects
Reference software, Image coding, Computer science, View synthesis, Software, Computer vision, Artificial intelligence, Light field, Data compression - Abstract
Light field images and multi-view images are promising immersive media that realize naked-eye three-dimensional scenes or six-degrees-of-freedom (6DoF) free navigation. Since they require a huge number of pixels for rich immersion, an efficient data compression method is essential. Because light field images are compatible with multi-view images, we are investigating a decimated multi-view coding method using the MPEG Depth Estimation Reference Software (DERS) and View Synthesis Reference Software (VSRS). In this paper, we analyze the homography projection in DERS and VSRS and propose improvements to both. We also explain the depth estimation and view synthesis processes in detail and report experimental results with the improved software (pDERS8 and pVSRS4.3).
- Published
- 2018
- Full Text
- View/download PDF
7. Efficient Light Field Image Coding with Depth Estimation and View Synthesis
- Author
-
Hiroshi Yasuda, Nobuji Tetsutani, Takanori Senoh, and Kenji Yamamoto
- Subjects
Image coding, Computer science, Inpainting, View synthesis, Computer vision, Artificial intelligence, Light field - Abstract
An efficient light field image coding method is proposed, based on conversion to multi-view images, depth estimation, multi-view coding, and view synthesis. First, the compatibility of light field images with multi-view images is discussed, and a depth estimation method based on texture-edge-aware horizontal and vertical view matching and depth smoothing is explained. Second, a view synthesis method using up to four reference views is proposed, which adopts depth-based occlusion-hole inpainting. Finally, coding results are reported for these methods combined with hierarchical bi-directional inter-view coding of the multi-view images and depth maps.
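The warp-then-inpaint idea behind view synthesis can be illustrated on a single scanline. This is a heavily simplified sketch, not the paper's implementation: one reference view, integer disparities, z-buffered forward warping, and occlusion holes filled from the background-side (smaller-disparity) neighbour.

```python
def synthesize_view(colors, disparities, shift=1.0):
    """Warp a 1D reference scanline to a virtual viewpoint.

    Reference pixel x lands at x - shift*disparity (larger disparity =
    nearer to the camera).  When two pixels land on the same target,
    the nearer one wins (z-buffering).  Unfilled targets are occlusion
    holes, inpainted from the nearest filled neighbour on the
    background side, i.e. the one with the smaller disparity."""
    n = len(colors)
    out = [None] * n
    zbuf = [-1.0] * n
    for x in range(n):
        tx = int(round(x - shift * disparities[x]))
        if 0 <= tx < n and disparities[x] > zbuf[tx]:
            out[tx] = colors[x]
            zbuf[tx] = disparities[x]
    for x in range(n):                      # hole inpainting
        if out[x] is None:
            left = next((l for l in range(x - 1, -1, -1)
                         if out[l] is not None), None)
            right = next((r for r in range(x + 1, n)
                          if out[r] is not None), None)
            if left is not None and right is not None:
                src = left if zbuf[left] <= zbuf[right] else right
            else:
                src = left if left is not None else right
            if src is not None:
                out[x] = out[src]
    return out
```

A foreground pixel with disparity 2 in `['A','B','C','D','E']` shifts left by two, occludes 'A', and leaves a hole behind it that is filled from the background.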
- Published
- 2018
- Full Text
- View/download PDF
8. Study on appearance of beta motion in peripheral vision
- Author
-
Nobuji Tetsutani, Riku Yamanoi, Hikaru Shibata, Hiroto Inoue, and Hidetaka Masuda
- Subjects
Physics, Speedup, Optics, Peripheral vision, Moving speed, White spots - Abstract
In this paper, we conducted experiments to obtain the speed-up rate, and to quantify the disappearance of white spots, for the speed-up phenomenon caused by viewing the beta motion of white points in peripheral vision. When the moving speed of the white spots was 30 degrees/sec and the retinal eccentricity was 70 degrees, the apparent speed increased by a factor of 1.6. Disappearance of the white spots and changes in their apparent length were also confirmed; we identify these as the cause of the apparent increase in speed.
- Published
- 2018
- Full Text
- View/download PDF
9. Setting the Degree of Defocus for Video Images in a Monitoring System
- Author
-
Nobuji Tetsutani and Yukiya Horie
- Subjects
Computer science, Monitoring system, Computer vision, Artificial intelligence, Input method, Video image - Abstract
We have developed a system that detects the presence of persons while protecting the privacy of the user's image. In this paper, we propose a novel video input method that considers both privacy and security, and we describe criteria for the degree of blur used with this method. In addition, we discuss the problems of our previous algorithm, which determines the number of people from blurred images, and we improve and evaluate it.
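One way to think about the "degree of blur" criterion is as the radius of a spatial averaging filter. The sketch below is an illustrative box blur, not the authors' algorithm: a larger radius removes identifying detail (privacy) while gross shapes needed for people-counting remain (security).

```python
def box_blur(img, radius):
    """Defocus a grayscale image (list of rows of numbers) with a box
    filter of the given radius; radius 0 leaves the image unchanged.
    Borders are handled by averaging only the in-bounds neighbours."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out
```

On a 3x3 image with a single bright pixel of value 9, radius 1 spreads that energy so the centre becomes 1.0 and each corner 2.25, i.e. the spot is no longer localizable but its presence still is.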
- Published
- 2016
- Full Text
- View/download PDF
10. Visual Influence for Character Images Processed by Mosaic Method
- Author
-
Hiroki Osawa, Maiko Moriya, Nobuji Tetsutani, and Noriaki Kuwahara
- Subjects
Kanji, Computer graphics (images), Contrast (vision), Computer vision, Artificial intelligence, Anti-aliasing - Published
- 2012
- Full Text
- View/download PDF
11. Vision-based human motion tracking using head-mounted cameras and fixed cameras
- Author
-
Hirotake Yamazoe, Akira Utsumi, Nobuji Tetsutani, and Masahiko Yachida
- Subjects
Computer Networks and Communications, Computer science, Human motion, Human behavior, Computer graphics (images), Computer vision, Artificial intelligence, Electrical and Electronic Engineering - Abstract
In this paper, we propose a method for tracking human motions using multiple-viewpoint images taken by both mobile and fixed-viewpoint cameras. In a vision-based system, a variety of viewpoints is important for effective detection and tracking of target motions. Since image sequences observed from various viewpoints contain rich information about the entire scene, such images are also useful for video-based analysis of human behaviors and interactions. We employ multiple mobile cameras mounted on human heads, together with fixed-viewpoint cameras, to observe a human interaction scene. In our method, 3D human positions are tracked using both mobile and fixed-viewpoint camera images. Transitions of human and background positions in the images are estimated from the tracked 3D positions, and the positions and poses of the mobile cameras are determined by comparing the observed images with these estimates. Consequently, our method provides position information for all humans, and head motions for the humans wearing head-mounted cameras. Experimental results show the effectiveness of the proposed method. © 2007 Wiley Periodicals, Inc. Electron Comm Jpn Pt 2, 90(2): 40–53, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjb.20331
- Published
- 2007
- Full Text
- View/download PDF
12. Scale-Adaptive Face Detection and Tracking in Real Time with SSR Filters and Support Vector Machine
- Author
-
Kenichi Hosaka, Nobuji Tetsutani, and Shinjiro Kawato
- Subjects
Computer science, Facial motion capture, Feature extraction, Filter (signal processing), Facial recognition system, Object detection, Support vector machine, Hardware and Architecture, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Electrical and Electronic Engineering, Face detection, Software - Abstract
In this paper, we propose a method for detecting and tracking faces in video sequences in real time. It can be applied to a wide range of face scales. Our basic strategy for detection is fast extraction of face candidates with a Six-Segmented Rectangular (SSR) filter and face verification by a support vector machine. A motion cue is used in a simple way to avoid picking up false candidates in the background. In face tracking, the patterns of between-the-eyes are tracked while updating the matching template. To cope with various scales of faces, we use a series of approximately 1/√2 scale-down images, and an appropriate scale is selected according to the distance between the eyes. We tested our algorithm on 7146 video frames of a news broadcast featuring sign language at 320 × 240 frame size, in which one or two persons appeared. Although gesturing hands often hid faces and interrupted tracking, 89% of faces were correctly tracked. We implemented the system on a PC with a Xeon 2.2-GHz CPU, running at 15 frames/second without any special hardware.
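The Six-Segmented Rectangular (SSR) filter evaluates six rectangular segment sums in constant time per location by precomputing an integral image. The sketch below is an illustrative simplification: it uses one plausible set of brightness comparisons (eye regions darker than the bridge of the nose and the cheeks), whereas the paper's exact segment layout and thresholds may differ.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0:y][0:x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] in O(1) via the table."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]

def is_ssr_candidate(ii, top, left, seg_h, seg_w):
    """Test a 2x3 grid of seg_h x seg_w segments: the two upper corner
    segments (eye regions) must be darker than the upper centre
    (nose bridge) and the segments below them (cheeks).
    Illustrative comparisons, not the paper's exact rule."""
    def seg(r, c):
        t, l = top + r * seg_h, left + c * seg_w
        return rect_sum(ii, t, l, t + seg_h, l + seg_w)
    s = [[seg(r, c) for c in range(3)] for r in range(2)]
    return (s[0][0] < s[0][1] and s[0][2] < s[0][1] and
            s[0][0] < s[1][0] and s[0][2] < s[1][2])
```

Because every segment sum is four table lookups, the candidate test costs the same at any face scale, which is what makes fast scale-adaptive scanning feasible before the (more expensive) SVM verification.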
- Published
- 2005
- Full Text
- View/download PDF
13. Detection and tracking of eyes for gaze-camera control
- Author
-
Nobuji Tetsutani and Shinjiro Kawato
- Subjects
Computer science, Gaze, Signal Processing, Eye tracking, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence - Abstract
A head-off gaze-camera needs eye location information for head-free usage. For this purpose, we propose new algorithms to extract and track the positions of eyes in a real-time video stream. For extraction of eye positions, we detect blinks based on the differences between successive images. However, eyelid regions are fairly small. To distinguish them from dominant head movement, we elaborate a head movement cancellation process. For eye-position tracking, we use a template of ‘Between-the-Eyes,’ which is updated frame-by-frame, instead of the eyes themselves. Eyes are searched based on the current position of ‘Between-the-Eyes’ and their geometrical relations to the position in the previous frame. The ‘Between-the-Eyes’ pattern is easier to locate accurately than eye patterns. We implemented the system on a PC with a Pentium III 866-MHz CPU. The system runs at 30 frames/s and robustly detects and tracks the eyes.
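The head-movement cancellation step can be sketched in one dimension: estimate the dominant global shift between successive frames, compensate for it, and look at the residual, whose small local peaks would correspond to eyelid regions. This is a simplified illustration of the idea, not the paper's implementation.

```python
def best_global_shift(prev, curr, max_shift=2):
    """Estimate dominant head movement as the integer shift that
    minimizes mean absolute difference between 1D brightness rows."""
    best, best_sad = 0, float('inf')
    n = len(prev)
    for s in range(-max_shift, max_shift + 1):
        sad = cnt = 0
        for x in range(n):
            if 0 <= x + s < n:
                sad += abs(curr[x] - prev[x + s])
                cnt += 1
        sad /= cnt
        if sad < best_sad:
            best, best_sad = s, sad
    return best

def blink_residual(prev, curr, max_shift=2):
    """Cancel the estimated head motion, then return the per-pixel
    residual; local peaks mark small changes such as blinks."""
    s = best_global_shift(prev, curr, max_shift)
    n = len(prev)
    return [abs(curr[x] - prev[x + s]) if 0 <= x + s < n else 0
            for x in range(n)]
```

A pure translation of the scene produces a near-zero residual, while an eyelid closing with no head motion survives cancellation as a sharp residual peak.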
- Published
- 2004
- Full Text
- View/download PDF
14. The Proactive Desk: A New Haptic Display System for a Digital Desk Using a 2-DOF Linear Induction Motor
- Author
-
Haruo Noma, Shunsuke Yoshida, Nobuji Tetsutani, and Yasuyuki Yanagida
- Subjects
Computer science, Human-Computer Interaction, Haptic display, Control and Systems Engineering, Linear induction motor, Computer Vision and Pattern Recognition, Software, Computer hardware, Haptic technology, Desk - Abstract
The Proactive Desk is a new digital desk with haptic feedback. The concept of a digital desk was first proposed by Wellner in 1991. A typical digital desk enables a user to seamlessly handle both digital and physical objects on the desk under a common GUI standard; the user, however, handles them as virtual GUI objects. Our Proactive Desk allows the user to handle both digital and physical objects on a digital desk with a realistic feeling. It is equipped with two linear induction motors that generate an omnidirectional translational force on the user's hand, or on a physical object on the desk, without any mechanical links or wires, thereby preserving the advantages of the digital desk. In this article, we first discuss applications of a digital desk with haptic feedback; we then describe the design, structure, and performance of the first trial Proactive Desk.
- Published
- 2004
- Full Text
- View/download PDF
15. Distributed Camera Calibration Algorithm for Multiple Camera Based Vision Systems and its Application to Human Tracking System
- Author
-
Hirotake Yamazoe, Masahiko Yachida, Akira Utsumi, and Nobuji Tetsutani
- Subjects
Orientation (computer vision), Computer science, Tracking system, Computer Science Applications, Essential matrix, Camera auto-calibration, Distributed algorithm, Media Technology, Computer vision, Artificial intelligence, Smart camera, Electrical and Electronic Engineering, Camera resectioning, Homography (computer vision) - Abstract
We propose a distributed automatic method of calibrating cameras for multiple-camera-based vision systems, since manual calibration is a difficult and time-consuming task. However, the data size and computational cost of automatic calibration grow with the number of cameras. We address these problems with a distributed algorithm: each camera estimates its own position and orientation through local computation, using observations shared with neighboring cameras. We formulate the method with two kinds of geometric constraints, the essential matrix and homography, and apply the latter formulation to a human tracking system to demonstrate the effectiveness of our method.
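Mapping an image point through a 3x3 homography is the basic operation behind the pairwise constraint between neighboring cameras observing (near-)planar scene points. A minimal sketch of that projective mapping, with illustrative matrices:

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography: multiply the
    homogeneous coordinate (x, y, 1) by H, then divide by the third
    component to return to inhomogeneous image coordinates."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```

The division by `w` is what distinguishes a homography from an affine transform: scaling the last row rescales the output coordinates, so `H` is only defined up to scale.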
- Published
- 2004
- Full Text
- View/download PDF
16. Detection of Face Representative Using Newly Proposed Filter
- Author
-
Yasutaka Senda, Nobuji Tetsutani, Hironori Yamauchi, Oraya Sawettanusorn, and Shinjiro Kawato
- Subjects
Computer science, Template matching, Computer vision, Pattern recognition, Artificial intelligence, Face detection, Stereo camera - Published
- 2004
- Full Text
- View/download PDF
17. Computer graphics eye-animation and feeling of gaze line
- Author
-
Fumio Kishino, Nobuji Tetsutani, and Kiyohiro Morii
- Subjects
Workstation, Computer science, Eye contact, Eye movement, Animation, Gaze, Computer graphics, Perception, Eye tracking, Computer vision, Artificial intelligence, Electrical and Electronic Engineering - Abstract
In a teleconferencing system with realistic sensations, human images are generated by computer graphics (CG), which requires animating natural eye movement. This paper describes a real-time animation technique for generating blinking and gaze shifts, while also accounting for convergence, on a graphics workstation. The feeling of eye contact is then evaluated using this animation: gaze generated by CG on a 2D or 3D display is rated subjectively and compared with the gaze of an actual person. An experiment is also performed on the time required to perceive eye contact while the eyes of the CG image are moving. Finally, a real-time CG eye-animation system combined with an eye-movement detector is described, and the allowable transmission delay when using this system is evaluated subjectively.
- Published
- 1995
- Full Text
- View/download PDF
18. A design of software adaptive to estimated user's mental state using pulse wave analysis
- Author
-
Yoshikatsu Ohta, Kazuhisa Ohira, Mayumi Oyama-Higa, Yoshito Tobe, Niwat Thepvilojanapong, Nobuji Tetsutani, and Shun Arai
- Subjects
Adaptive control, Software, Pulse wave analysis, Computer science, Mobile phone, Frequency domain, Real-time computing, Mobile computing, Pulse wave, Signal - Abstract
Recent studies have shown that the human pulse wave signal carries considerable information about mental condition. Measuring the finger pulse wave is a non-invasive and convenient way to obtain this information. We analyze the measured pulse wave in the frequency domain to obtain its low- and high-frequency components, whose ratio can be used to estimate mental condition. We design and implement an API that retrieves the user's mental state and adaptively controls a mobile phone's software based on it. To validate the benefit of the API, we developed a mail-filtering app on an Android phone and performed an experiment.
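The low/high-frequency ratio can be sketched with a naive DFT over frequency bands. The band edges below (0.04-0.15 Hz for LF, 0.15-0.40 Hz for HF) are the conventional heart-rate-variability bands and are an assumption here; the paper does not state its exact bands, and a real implementation would use an FFT rather than this O(n^2) loop.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of a real signal in [f_lo, f_hi) Hz via a naive DFT.
    fs is the sampling rate in Hz; only positive-frequency bins are
    summed, and the DC bin is excluded."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            power += re * re + im * im
    return power

def lf_hf_ratio(signal, fs):
    """LF/HF ratio over the conventional HRV bands; a higher ratio is
    commonly read as higher sympathetic (stress) activity."""
    return (band_power(signal, fs, 0.04, 0.15) /
            band_power(signal, fs, 0.15, 0.40))
```

A signal dominated by a 0.1 Hz component yields a large ratio, and one dominated by a 0.3 Hz component a small one, which is the distinction the mail-filtering app would act on.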
- Published
- 2012
- Full Text
- View/download PDF
19. Spatio-temporal analysis of eye movement patterns based on the maximum entropy method and the higher moment features
- Author
-
Nobuji Tetsutani, Hidetomo Sakaino, and Fumio Kishino
- Subjects
Moment (mathematics), Spatio-temporal analysis, Saccade, Eye movement, Maximum entropy method, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Gaze - Published
- 1994
- Full Text
- View/download PDF
20. Special Issue: Image Technology of the Next Generation. Stereoscopic Fusion Characteristics for a Synthetic Display between Real Objects and Stereoscopic Images
- Author
-
Fumio Kishino, Noriaki Kuwahara, and Nobuji Tetsutani
- Subjects
Fusion, Computer science, General Engineering, Teleconference, Stereoscopy, Virtual reality, Computer graphics (images), Size ratio, Computer vision, Artificial intelligence - Abstract
We are developing a virtual reality teleconferencing system that offers realistic sensations, and are exploring ways to synthesize real objects and stereoscopic images. We found that viewers had difficulty fusing stereoscopic images, even within the fusional limit, when the images were displayed together with real objects. Because there is little information on fusional limits in situations where real objects and stereoscopic images are combined, we examined this difficulty experimentally. The stereoscopic image was displayed on a 70-inch screen and viewed through liquid-crystal-shuttered glasses, with the real object placed so as to occlude the stereoscopic image. We varied the position and size ratio of the images and asked subjects to rate the difficulty of fusion. The results showed that fusion was extremely difficult when the stereoscopic image was displayed in front of the real object. We obtained similar results when the stereoscopic image was moving rather than stationary.
- Published
- 1994
- Full Text
- View/download PDF
21. Improved Pointing Input Using Gaze Detection
- Author
-
Nobuji Tetsutani, Akira Tomono, and Fumio Kishino
- Subjects
Computer science, Pointer (user interface), General Engineering, Cursor (user interface), Error rate, Mouse button, Gaze, Controllability, Computer vision, Input method, Artificial intelligence - Abstract
A pointing input method that uses gaze detection together with a mouse is proposed, enhancing the efficiency of the pointing task in a man-machine interface. In this method, a cursor is displayed at the detected gaze point on the CRT; if the cursor is on target, the operator clicks the mouse button, and if not, the cursor is first moved onto the target with the mouse. The method is compared with the conventional mouse-only method, and pointing time, error rate, and gaze-point movement are analyzed. The results indicate that if the gaze detection device has an error angle of 3 degrees or less (at a viewing distance of 60 cm), the proposed method outperforms the conventional one in pointing time and cursor controllability.
- Published
- 1993
- Full Text
- View/download PDF
22. Enhancing web-based learning by sharing affective experience
- Author
-
Michael J. Lyons, Nobuji Tetsutani, and Daniel Kluender
- Subjects
Multimedia, Computer science, Empathy, Affect (psychology), Web-based learning, The Internet, Chinese characters, Skin conductance - Abstract
We suggest that the real-time visual display of affective signals such as respiration, pulse, and skin conductivity can give users of Web-based tutoring systems insight into each other's felt bodily experience. This allows remotely interacting users to increase their experience of empathy, or shared feeling, and has the potential to enhance telelearning. We describe an implementation of such a system and present the results of a preliminary experiment in the context of a Web-based system for tutoring the writing of Chinese characters.
- Published
- 2004
- Full Text
- View/download PDF
23. Hands-on learning of computer programming in introductory stage using a model railway layout
- Author
-
Yuichi Itoh, Fumio Kishino, Yoshifumi Kitamura, Hirokazu Sasamoto, Nobuji Tetsutani, and Haruo Noma
- Subjects
Computer science, Human–computer interaction, Computation, Computer programming, Computer-supported cooperative work, Experiential education, Virtual reality, Software engineering, Inductive programming - Abstract
This research aims to develop a new methodology, and its supporting technologies, for learning about computation and programming at the introductory stage through hands-on play with toys. The approach allows a beginner to acquire the concepts and knowledge of computation and programming by playing with a model railway set.
- Published
- 2004
- Full Text
- View/download PDF
24. Vibrotactile letter reading using a low-resolution tactor array
- Author
-
Robert W. Lindeman, Yuichiro Kume, Nobuji Tetsutani, M. Kakita, and Yasuyuki Yanagida
- Subjects
Alphanumeric, Computer science, Virtual reality, Stimulus modality, Sensory substitution, Reading (process), Computer vision, Artificial intelligence - Abstract
Vibrotactile displays have been studied for several decades in the context of sensory substitution, and a number have recently been developed to extend the sensory modalities of virtual reality. Some target the whole body as the stimulation region, but existing systems are designed only for discrete stimulation points at specific parts of the body. Because human tactile sensation has higher resolution, a higher density of tactor placement might be required to realize general-purpose vibrotactile displays; one problem with this approach is the impractically high number of tactors it could require. Our current focus is to explore ways of simplifying the system while maintaining an acceptable level of expressive ability. As a first step, we chose a well-studied task, tactile letter reading, and examined whether alphanumeric letters can be distinguished using only a 3-by-3 array of vibrating motors on the back of a chair. The tactors are driven sequentially, in the same order as if someone were tracing the letter on the chair's back. The results showed up to 87% successful letter recognition, close to the results of previous research with much larger arrays.
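The sequential driving scheme can be sketched as a lookup from letter to tactor sequence on the 3-by-3 grid. The stroke paths and the timing constant below are illustrative placeholders, not the study's actual stimulus patterns.

```python
# 3x3 tactor array on the chair back, indexed row-major:
#   0 1 2
#   3 4 5
#   6 7 8
# Each letter is a tactor sequence tracing its stroke, as if a finger
# were drawing it on the user's back (illustrative paths only).
STROKES = {
    "L": [0, 3, 6, 7, 8],              # down the left edge, then across
    "7": [0, 1, 2, 4, 6],              # across the top, then the diagonal
    "O": [0, 1, 2, 5, 8, 7, 6, 3, 0],  # around the border and back
}

def drive_sequence(letter, on_ms=100):
    """(tactor_index, start_time_ms) pairs for driving the motors one
    after another; on_ms is an illustrative inter-tactor interval."""
    return [(t, i * on_ms) for i, t in enumerate(STROKES[letter])]
```

Only the order of activation carries the letter's identity, which is why a coarse 3-by-3 array can still reach recognition rates close to much denser displays.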
- Published
- 2004
- Full Text
- View/download PDF
25. A Novel Wearable System for Capturing User View Images
- Author
-
Hirotake Yamazoe, Akira Utsumi, Nobuji Tetsutani, and Masahiko Yachida
- Subjects
Computer science, Computer graphics (images), Wearable computer, Computer vision, Smart camera, Artificial intelligence, User interface, Gesture - Abstract
In this paper, we propose a body-attached system that captures a person's experience in sequence as audio/visual information. The system consists of two cameras (an infrared (IR) camera and a wide-angle color camera) and a microphone. The IR camera image is used to capture the user's head motions. The wide-angle color camera captures frontal-view images, from which an image region approximately corresponding to the user's view is selected according to the estimated head motions. The selected image and head-motion data are stored in a storage device together with audio data. The system overcomes the disadvantages of head-mounted cameras in the ease of putting on and taking off the device and in its less obtrusive visual impact on third parties. Using the proposed system, we can simultaneously record audio, images in the user's view, and head gestures (nodding, shaking, etc.). These data carry significant information for recording and analyzing human activities and can be used in wide application domains, such as a digital diary or interaction analysis. Experimental results show the effectiveness of the proposed system.
- Published
- 2004
- Full Text
- View/download PDF
26. Vision Based Acquisition of Mouth Actions for Human-Computer Interaction
- Author
-
Michael J. Lyons, Nobuji Tetsutani, and Gamhewage C. de Silva
- Subjects
Computer science, Controller (computing), Human–computer interaction, Computer vision, Artificial intelligence, Data input, Open mouth - Abstract
We describe a computer-vision-based system that allows movements of the mouth to be used for human-computer interaction (HCI). The lower region of the face is tracked by locating and following the position of the nostrils, which determines a sub-region of the image from which the cavity of the open mouth is segmented. Shape features of the open mouth can then be used for continuous real-time data input. Several applications of the head-tracked mouth controller are described.
- Published
- 2004
- Full Text
- View/download PDF
27. Simulating side slopes on locomotion interfaces using torso forces
- Author
-
Haruo Noma, D. Checcacci, Nobuji Tetsutani, Yasuyuki Yanagida, and John M. Hollerbach
- Subjects
Engineering, Waist, Biomechanics, Experimental validation, Torso, Treadmill, Simulation, Haptic technology - Abstract
This paper describes the biomechanical experimental validation of simulating a side slope during walking on a treadmill-style locomotion interface. The side-slope effect is achieved by applying a lateral force to the waist of the walking subject. Results for both simulated and real side slopes are provided and discussed, showing substantial biomechanical equivalence in the walking pattern between a real side slope and a lateral torso force.
- Published
- 2003
- Full Text
- View/download PDF
28. Mouthbrush
- Author
-
Michael J. Lyons, Chi-Ho Chan, and Nobuji Tetsutani
- Subjects
Painting, Pixel, Computer science, Interface (computing), Linear discriminant analysis, Computer graphics (images), Computer vision, Artificial intelligence, Gesture - Abstract
We present a novel multimodal interface that lets users draw or paint using coordinated gestures of hand and mouth. A head-worn camera captures an image of the mouth, and the mouth-cavity region is extracted by Fisher discriminant analysis of the pixel colour information. A normalized area parameter is read by a drawing or painting program, allowing real-time gestural control of pen/brush parameters by mouth gesture while sketching with a digital pen and tablet. A new performance task, the Radius Control Task, is proposed as a means of systematically evaluating the interface. Data from preliminary experiments show that, with some practice, users can easily achieve single-pixel radius control. A trial of the system by a professional artist shows that it is ready for use as a novel tool for creative artistic expression.
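Two-class Fisher discriminant analysis projects feature vectors onto the direction w = Sw^-1 (m1 - m2), where Sw is the pooled within-class scatter, maximizing the separation of the class means relative to the within-class spread. The sketch below works on 2D colour features (e.g. "mouth cavity" vs "skin" pixels) with made-up sample values; the paper's feature space and training data are not specified here.

```python
def fisher_direction(class_a, class_b):
    """Fisher linear discriminant direction w = Sw^-1 (m_a - m_b) for
    two classes of 2D feature vectors, with Sw the pooled
    within-class scatter matrix (inverted in closed form for 2x2)."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def scatter(pts, m):
        sxx = sxy = syy = 0.0
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx
            sxy += dx * dy
            syy += dy * dy
        return sxx, sxy, syy

    ma, mb = mean(class_a), mean(class_b)
    axx, axy, ayy = scatter(class_a, ma)
    bxx, bxy, byy = scatter(class_b, mb)
    sxx, sxy, syy = axx + bxx, axy + bxy, ayy + byy
    det = sxx * syy - sxy * sxy
    dmx, dmy = ma[0] - mb[0], ma[1] - mb[1]
    return ((syy * dmx - sxy * dmy) / det,   # first row of Sw^-1 (dm)
            (sxx * dmy - sxy * dmx) / det)

def project(w, pt):
    """Scalar projection of a feature vector onto w; thresholding this
    value assigns the pixel to one class or the other."""
    return w[0] * pt[0] + w[1] * pt[1]
```

After training, classifying a pixel is a single dot product and threshold, which is what makes per-pixel segmentation cheap enough for the real-time drawing loop.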
- Published
- 2003
- Full Text
- View/download PDF
29. A nose-tracked, personal olfactory display
- Author
-
Akira Tomono, Haruo Noma, Yasuyuki Yanagida, Nobuji Tetsutani, and Shinjiro Kawato
- Subjects
Computer science ,business.industry ,Facial motion capture ,Interface (computing) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Olfaction ,Virtual reality ,medicine.anatomical_structure ,Feature (computer vision) ,Face (geometry) ,medicine ,Computer vision ,Artificial intelligence ,business ,Nose - Abstract
An interface that involves all five senses, including olfaction, would be the ultimate interface for virtual reality (VR). We are trying to construct an olfactory display that does not require the user to wear anything on the face. We used an "air cannon" to transport small packets of scented air to the user's nose from some nearby place. In this paper, we report the ongoing development of an olfactory display system with a nose-tracking feature by incorporating vision-based face tracking technology.
- Published
- 2003
- Full Text
- View/download PDF
30. Vital signs
- Author
-
Daniel Kluender, Chi-Ho Chan, Nobuji Tetsutani, and Michael J. Lyons
- Subjects
Body language ,Communication ,business.industry ,Computer science ,Vital signs ,Computer vision ,Contrast (music) ,Artificial intelligence ,business ,Human communication - Abstract
The emotional aspects of human communication depend significantly on non-verbal signals or body language. By contrast, human telecommunication is characterized by a sense of disembodiment. However, machine-mediated communication permits the acquisition, transmission, and graphical display of data about a user’s physiological state. Here we explore novel forms of body language, not normally available in face-to-face interaction.
- Published
- 2003
- Full Text
- View/download PDF
31. Human Factors Evaluation of a Vision-Based Facial Gesture Interface
- Author
-
Michael J. Lyons, Nobuji Tetsutani, Shinjiro Kawato, and Gamhewage C. de Silva
- Subjects
Mouth ,Vision based ,business.industry ,Facial motion capture ,Computer science ,Mouthesizer ,Tracking ,Cursor (user interface) ,Input device ,Information throughput ,Nose ,Facial Gesture Interface ,Face ,Computer vision ,Artificial intelligence ,Typing ,Pixel ,Fitts's law ,Gain ,business ,Gesture - Abstract
We adapted a vision-based face tracking system for cursor control by head movement. An additional vision-based algorithm allowed the user to enter a click by opening the mouth. The Fitts' law information throughput rate of cursor movements was measured to be 2.0 bits/sec with the ISO 9241-9 international standard method for testing input devices. A usability assessment was also conducted, and we report and discuss the results. A practical application of this facial gesture interface was studied: text input using the Dasher system, which allows a user to type by moving the cursor. The measured typing speed was 7-12 words/minute, depending on the level of user expertise. Performance of the system is compared to a conventional mouse interface.
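The throughput figure quoted above can be computed from target distance D, target width W, and movement time MT using the Shannon formulation of the index of difficulty; the trial values below are illustrative (chosen to reproduce roughly 2.0 bits/sec), not the paper's raw data, and the full ISO 9241-9 procedure additionally substitutes an effective width derived from endpoint scatter.

```python
import math

def fitts_throughput(distance, width, movement_time_s):
    """Simplified ISO 9241-9 style throughput:
    index of difficulty (Shannon form, in bits) divided by movement time."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time_s           # bits/sec

# Illustrative trial: a 300-px movement to a 30-px target completed in 1.73 s
# yields about 2.0 bits/sec, the rate reported for the head-tracked cursor.
tp = fitts_throughput(300, 30, 1.73)
print(round(tp, 2))
```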
- Published
- 2003
32. An unencumbering, localized olfactory display
- Author
-
Akira Tomono, Nobuji Tetsutani, Yasuyuki Yanagida, and Haruo Noma
- Subjects
Focus (computing) ,Stimulus modality ,Odor ,Computer science ,business.industry ,Computer vision ,Olfaction ,Artificial intelligence ,Virtual reality ,business - Abstract
Olfaction is considered to be an important sensory modality in next-generation virtual reality (VR) systems. We currently focus on spatiotemporal control of odor, rather than capturing and synthesizing odor itself. If we simply diffused the odor into the atmosphere, it would be difficult to clear it away in a short time. Several olfactory displays that inject scented air under the nose through tubes have been proposed to realize spatiotemporal control of olfaction, but they require the user to wear something on the face. Here, we propose an unencumbering olfactory display that conveys a clump of scented air from a remote location to the user's nose. To implement this concept, we used an "air cannon" that generates toroidal vortices of the scented air. We conducted a preliminary experiment to examine this method's ability to display scent within a restricted space. The results show that we could successfully present incense to the target user.
- Published
- 2003
- Full Text
- View/download PDF
33. Dynamic micro aspects of facial movements in elicited and posed expressions using high-speed camera
- Author
-
Nobuji Tetsutani, Shigeo Morishima, S. Akamatsu, H. Uchida, Hiroshi Yamada, and Tatsuo Yotsukura
- Subjects
Facial expression ,business.industry ,Computer science ,Facial motion capture ,Movement (music) ,Frame rate ,Facial recognition system ,Facial Action Coding System ,stomatognathic diseases ,Computer vision ,Artificial intelligence ,Set (psychology) ,business ,Computer animation - Abstract
This study investigated the dynamic aspects of facial movements in spontaneously elicited and posed facial expressions of emotion. We recorded participants' facial movements while they watched a set of emotion-eliciting films and while they posed typical facial expressions. The facial movements were recorded by a high-speed camera at 250 frames per second, and measured frame by frame in terms of displacements of facial feature points. This micro-temporal analysis showed that, although very subtle, there exists a characteristic onset asynchrony among the movements of the facial parts. Furthermore, we found a commonality in the temporal change of each part's movement, although the speed and amount of each movement varied with expression condition and emotion.
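The frame-by-frame onset-asynchrony analysis can be sketched as follows; the feature-point trajectories and the 1-pixel onset threshold are illustrative assumptions (only the 250 fps rate comes from the abstract).

```python
import numpy as np

FPS = 250  # high-speed camera frame rate reported in the study

def onset_frame(trajectory, threshold=1.0):
    """First frame at which a feature point's displacement from frame 0 exceeds
    `threshold` pixels; returns None if the point never moves that far."""
    displacement = np.linalg.norm(trajectory - trajectory[0], axis=1)
    above = np.nonzero(displacement > threshold)[0]
    return int(above[0]) if above.size else None

# Synthetic (x, y)-per-frame trajectories: the "brow" point starts moving
# 5 frames before the "mouth corner" point.
frames = np.arange(100)
brow = np.stack([np.clip(frames - 10, 0, None) * 0.5, np.zeros(100)], axis=1)
mouth = np.stack([np.clip(frames - 15, 0, None) * 0.5, np.zeros(100)], axis=1)

asynchrony_ms = (onset_frame(mouth) - onset_frame(brow)) / FPS * 1000
print(asynchrony_ms)  # a 5-frame lag at 250 fps is a 20 ms onset asynchrony
```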
- Published
- 2002
- Full Text
- View/download PDF
34. Facing the music
- Author
-
Michael J. Lyons and Nobuji Tetsutani
- Subjects
Action (philosophy) ,business.industry ,Computer science ,Feature (computer vision) ,Controller (computing) ,Face (geometry) ,Interface (computing) ,Computer vision ,Artificial intelligence ,Musical ,business ,Affective computing - Abstract
We describe a novel musical controller which acquires live video input from the user's face, extracts facial feature parameters using a computer vision algorithm, and converts these to expressive musical effects. The controller allows the user to modify synthesized or audio-filtered musical sound in real time by moving the face.
- Published
- 2001
- Full Text
- View/download PDF
35. A Full-wireless Vibrotactile Stimulation System
- Author
-
Nobuji Tetsutani, Yasuyuki Yanagida, and Kazuhiro Kuwabara
- Subjects
medicine.medical_specialty ,Computer science ,business.industry ,medicine ,Wireless ,Audiology ,Vibrotactile stimulation ,business - Published
- 2004
- Full Text
- View/download PDF
36. Study on a stereoscopic display system employing eye-position tracking for multi-viewers
- Author
-
Fumio Kishino, Katsuyuki Omura, and Nobuji Tetsutani
- Subjects
business.industry ,Parallax barrier ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Teleconference ,Holography ,Stereoscopy ,Tracking (particle physics) ,GeneralLiterature_MISCELLANEOUS ,Field (computer science) ,law.invention ,Geography ,Projector ,law ,Computer graphics (images) ,Autostereoscopy ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We propose a new autostereoscopic display system employing an eye-position tracking technique for multiple viewers, who do not need to wear 3-D glasses. We also describe the stereoscopic projectors and the lenticular screen design developed for this system. The 3-D screen consists of two lenticular screens and a diffusion layer. Each 3-D projector, corresponding to one viewer, consists of two projectors for the right and left images. The viewers' positions are detected, and each 3-D projector moves according to its viewer's movement. 1. INTRODUCTION We are conducting basic research on a virtual space teleconferencing system [1] that achieves a sense of reality. Such a system needs large-scale stereoscopic displays. Stereoscopic display techniques requiring special glasses cause users to feel uncomfortable and tired, and reduce the feeling of naturalness. In the field of teleconferencing, a no-glasses technique using a lenticular screen was found to be superior to holography and a volume-scanning-type 3-D display in terms of
- Published
- 1994
- Full Text
- View/download PDF
37. Three-dimensional display method without special glasses for virtual space teleconferencing system
- Author
-
Fumio Kishino and Nobuji Tetsutani
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Teleconference ,Three dimensional display ,Virtual space ,Stereoscopy ,Tracking (particle physics) ,GeneralLiterature_MISCELLANEOUS ,Computing systems ,Virtual retinal display ,law.invention ,Geography ,law ,Computer graphics (images) ,Autostereoscopy ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In this paper, we propose a new autostereoscopic display system employing an eye-position tracking technique. In this display system, the observable area of stereoscopic images becomes wide, and an autostereoscopic display is realized by combining the tracking technique with computer graphic images. The wide 70-inch lenticular screen and HDTV LCD projector generate more impressive 3-D images than conventional systems.
- Published
- 1993
- Full Text
- View/download PDF
38. Technique of eye animation generated by CG and evaluation of eye contact using eye animation
- Author
-
Fumio Kishino, Kiyohiro Morii, Nobuji Tetsutani, and Takanori Satoh
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Eye contact ,Image processing ,Animation ,3D modeling ,Stereo display ,Gaze ,InformationSystems_MODELSANDPRINCIPLES ,Feeling ,Computer graphics (images) ,Perception ,Eye tracking ,Computer vision ,Artificial intelligence ,business ,media_common - Abstract
We describe a real-time animation technique to generate blinking and gaze shift, while still considering convergence, using a Graphic Workstation. Moreover, we evaluate the feeling of eye contact using this eye animation. In our experiment, we subjectively evaluate gaze generated by CG using 2-D or 3-D display, and compare it with the gaze of an actual person. We also perform an experiment on the time required for perception of eye contact when the eyes of the CG image are moving.
- Published
- 1992
- Full Text
- View/download PDF
39. Wide-screen autostereoscopic display system employing head-position tracking
- Author
-
Fumio Kishino, Katsuyuki Omura, and Nobuji Tetsutani
- Subjects
LCD projector ,Computer science ,business.industry ,Photography ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Magnification ,Stereoscopy ,Stereo display ,Atomic and Molecular Physics, and Optics ,law.invention ,Virtual image ,law ,Autostereoscopy ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We propose a head-tracking autostereoscopic display system based on magnetic-sensor tracking of the viewer's side-to-side location, and optical slewing of a stereoscopic image-pair array projected onto a lenticular screen so as to keep the images received by the viewer's eyes distinct. Viewer distance changes are accommodated by slight magnification changes of the projected image array. A high-definition-TV LCD projector is used with a 178-cm (70-in.) lenticular sheet (diagonal measurement) to provide impressive computer-generated 3-D images over a particularly wide viewing zone. The system finds application in a virtual-space teleconferencing system that requires a large-scale stereoscopic display without the use of special viewing glasses.
- Published
- 1994
- Full Text
- View/download PDF
40. Head Tracking Stereoscopic Projection Display Technique
- Author
-
Nobuji Tetsutani, Morito Ishibashi, Susumu Ichinose, Sinichi Shiwa, and Tomoaki Tanaka
- Subjects
business.industry ,Computer science ,law ,Projection display ,General Engineering ,Computer vision ,Stereoscopy ,Artificial intelligence ,business ,Head tracking ,law.invention - Abstract
Conventional stereoscopic display techniques that require special glasses give viewers a sense of unnaturalness and discomfort, so display techniques that do not require glasses are desired. In this study, we propose a new stereoscopic projection display technique consisting of a lenticular screen, two LCD projectors, and a head-recognition-and-tracking device that electronically detects and tracks the viewer's head position, and we built an experimental prototype. The lenticular screen is a reflective type, 1000 mm high and 900 mm wide, with a 0.5 mm pitch. The LCD projectors use an approximately 6-inch single-panel LCD light valve with 640 horizontal × 440 vertical pixels. The head-recognition-and-tracking device detects the viewer's head position with a position-sensing element mounted on the head, and this position signal controls the signals fed to the LCD projectors. Experiments confirmed that a 44-inch-diagonal stereoscopic display is possible, that head recognition and tracking work, and that stereoscopic viewing is possible over a lateral range of about 600 mm at a distance of about 2100 mm from the display surface.
- Published
- 1990
- Full Text
- View/download PDF
41. New photoreceptor charging method by rubbing with magnetic conductive particles
- Author
-
Nobuji Tetsutani and Yasushi Hoshino
- Subjects
Optics ,business.industry ,Chemistry ,General Physics and Astronomy ,sense organs ,Trapping ,Conductivity ,Composite material ,Applied potential ,business ,Electrical conductor ,Corona discharge ,Rubbing - Abstract
A new photoreceptor charging method without corona discharge is proposed in this paper. The charging is carried out by rubbing a photoreceptor with magnetic conductive particles having applied potential. Using this method, the photoreceptor is charged to more than 80% of its applied potential. The surface potential is found to be high and stable when the conductivity of particles and the peripheral velocity of the sleeve carrying the conductive particles are high. The charging mechanisms are assumed to be carrier‐injection assisted by the rubbing, and injection‐carrier trapping.
- Published
- 1987
- Full Text
- View/download PDF
42. Bilevel rendition method for documents including gray-scale and bilevel image
- Author
-
Hiroshi Ochi and Nobuji Tetsutani
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Grayscale ,Image (mathematics) - Published
- 1985
- Full Text
- View/download PDF