235 results for "3D Interaction"
Search Results
2. Exploring Effects of Information Filtering With a VR Interface for Multi-Robot Supervision
- Author
-
Vijay Pawar, Daniel Butters, and Emil T. Jonasson
- Subjects
robot supervision, 3D interaction, multi-robot systems, Computer science, Interface (computing), Virtual reality, Human–robot interaction, human-robot interaction, Artificial Intelligence, Human–computer interaction, TJ1-1570, Mechanical engineering and machinery, information display, Original Research, Robotics and AI, Novelty, Workload, QA75.5-76.95, Filter (signal processing), information filtering, Computer Science Applications, user study, Electronic computers. Computer science, virtual reality, Robot, interface design
- Abstract
Supervising and controlling remote robot systems currently requires many specialised operators to have knowledge of the internal state of the system in addition to the environment. For applications such as remote maintenance of future nuclear fusion reactors, the number of robots (and hence supervisors) required to maintain or decommission a facility is too large to be financially feasible. To address this issue, this work explores the idea of intelligently filtering information so that a single user can supervise multiple robots safely. We gathered feedback from participants using five methods for teleoperating a semi-autonomous multi-robot system via Virtual Reality (VR). We present a novel 3D interaction method that filters the displayed information, allowing the user to read information from the environment without being overwhelmed. The novelty of the interface design is the link between semantic and spatial filtering and the hierarchical information contained within the multi-robot system. We conducted a user study, including a cohort of expert robot teleoperators, comparing these methods, and highlight the significant effects of 3D interface design on the performance and perceived workload of a user teleoperating many robot agents in complex environments. The results from this experiment and subjective user feedback will inform future investigations that build on this initial work.
- Published
- 2021
- Full Text
- View/download PDF
3. LPI: learn postures for interactions
- Author
-
Muhammad Raees and Sehat Ullah
- Subjects
3D interaction, Computer science, business.industry, Interface (computing), OpenGL, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Virtual reality, computer.software_genre, Computer Science Applications, Hardware and Architecture, Virtual machine, Histogram, Pattern recognition (psychology), Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, business, computer, Software, Gesture
- Abstract
To a great extent, the immersion of a virtual environment (VE) depends on the naturalness of the interface it provides for interaction. As people commonly use gestures during communication, interaction based on hand postures enhances the degree of realism of a VE. However, the choice of hand postures for interaction varies from person to person, and generalizing the use of a specific posture for a particular interaction requires considerable computation, which in turn reduces the intuitiveness of a 3D interface. By investigating machine learning in the domain of virtual reality (VR), this paper presents an open posture-based approach for 3D interaction. The technique is user-independent and relies neither on the size and color of the hand nor on the distance between the camera and the posing position. The system works in two phases: in the first phase, hand postures are learnt, whereas in the second phase the known postures are used to perform interaction. With an ordinary camera, a scanned image is partitioned into equal-size non-overlapping tiles. Four light-weight features, based on a binary histogram and invariant moments, are calculated for each tile of a posture image. A support vector machine classifier is trained on the posture-specific knowledge carried accumulatively in each tile. When any known posture is posed, the system extracts the tile information to detect the particular hand posture. Upon successful recognition, the appropriate interaction is activated in the designed VE. The proposed system is implemented in a case-study application: vision-based open posture interaction using the OpenCV and OpenGL libraries. The system was assessed in three separate evaluation sessions. Results of the evaluations testify to the efficacy of the approach in various VR applications.
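As a rough illustration of the tile-based pipeline this abstract describes (not the authors' code), the sketch below partitions a thresholded hand image into equal non-overlapping tiles, computes a few light-weight histogram/moment features per tile, and feeds the concatenated vector to a support vector machine; the grid size, the Otsu thresholding step, and the choice of exactly which four values to keep per tile are assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def tile_features(gray, grid=4):
    """Per-tile binary-histogram and invariant-moment features of a hand image."""
    h, w = gray.shape
    binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    feats = []
    for i in range(grid):
        for j in range(grid):
            tile = binary[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            fill = tile.mean() / 255.0                       # binary-histogram summary
            hu = cv2.HuMoments(cv2.moments(tile)).flatten()  # invariant moments
            feats.extend([fill, hu[0], hu[1], hu[2]])        # four light-weight values
    return np.array(feats)

# Phase 1 (learning): clf = SVC(kernel="rbf").fit(
#     np.stack([tile_features(img) for img in posture_images]), labels)
# Phase 2 (interaction): clf.predict([tile_features(frame)]) maps the recognized
# posture to its interaction in the virtual environment.
```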
- Published
- 2021
- Full Text
- View/download PDF
4. Metric Learning for 2D Image Patch and 3D Point Cloud Volume Matching
- Author
-
Xiuhong Lin, Chenglu Wen, Cheng Wang, Jonathan Li, Weiquan Liu, Xuesheng Bian, Baiqi Lai, and Shuting Chen
- Subjects
3D interaction, Similarity (geometry), Matching (graph theory), Computer science, business.industry, Feature extraction, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Point cloud, Pattern recognition, Similarity measure, Feature (computer vision), Metric (mathematics), Artificial intelligence, business
- Abstract
Similarity measurement between cross-domain descriptors (2D descriptors and 3D descriptors) of 2D image patches and 3D point cloud volumes provides stable retrieval performance and establishes the spatial relationship between 2D and 3D space, with potential applications in geospatial contexts such as 2D/3D interaction for remote sensing, Augmented Reality (AR), and robot navigation. However, mature handcrafted descriptors of 2D image patches and of 3D point cloud volumes are extremely different, making 2D image patch and 3D point cloud volume matching a huge challenge. In this paper, we propose a novel network which combines unified descriptor training with descriptor comparison function training for 2D image patch and 3D point cloud volume matching. First, two feature extraction networks are applied to jointly learn local descriptors for 2D image patches and 3D point cloud volumes, respectively. Second, a fully connected network is introduced to compute the similarity between 2D descriptors and 3D descriptors. Motivated by the established indicator system for evaluating 2D patch feature representations, we use the false positive rate at 95% recall (FPR95) and precision computed on cross-domain descriptors as the evaluation metrics. The experimental results show that our proposed network achieves state-of-the-art performance in matching 2D image patches and 3D point cloud volumes.
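The FPR95 indicator used above is a standard descriptor-matching metric: the false positive rate measured at the decision threshold that reaches 95% recall. A minimal sketch, assuming similarity scores where higher means more similar:

```python
import numpy as np

def fpr_at_95_recall(scores, labels):
    """FPR95: false positive rate at the threshold giving 95% recall.

    scores: similarity scores (higher = more similar); labels: 1 = true match.
    """
    order = np.argsort(-np.asarray(scores))        # descending similarity
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                         # true positives per cutoff
    fp = np.cumsum(1 - labels)                     # false positives per cutoff
    k = np.searchsorted(tp, 0.95 * labels.sum())   # first cutoff reaching 95% recall
    return fp[k] / (1 - labels).sum()

# e.g. fpr_at_95_recall([0.9, 0.8, 0.7, 0.4], [1, 0, 1, 0]) -> 0.5
```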
- Published
- 2021
- Full Text
- View/download PDF
5. Neurosurgical Craniotomy Localization Using Interactive 3D Lesion Mapping for Image-Guided Neurosurgery
- Author
-
Zhigang Wang, Dai Zhiyu, Qinyong Lin, Yonghua Lao, Hang Fei, Jian Zhuang, and Rongqian Yang
- Subjects
margin modification, 3D interaction, octree decomposition, General Computer Science, Computer science, business.industry, medicine.medical_treatment, Craniotomy localization, Interactive 3d, General Engineering, Image guided neurosurgery, Navigation system, image-guided neurosurgery, Imaging phantom, Margin (machine learning), interactive 3D lesion mapping, medicine, General Materials Science, Computer vision, Augmented reality, Artificial intelligence, lcsh:Electrical engineering. Electronics. Nuclear engineering, business, lcsh:TK1-9971, Craniotomy
- Abstract
Precise craniotomy localization is essential in neurosurgical procedures, especially during preoperative planning. Mainstream craniotomy localization methods using an image-guided neurosurgery system (IGNS) or an augmented reality (AR) navigation system require experienced neurosurgeons to point out the lesion margin with a probe and draw the craniotomy manually on the patient's head according to cranial anatomy. However, improper manual operation and dither in the AR model introduce errors into craniotomy localization. In addition, there is no specific standard for evaluating the accuracy of a craniotomy. This paper proposes a standardized interactive 3D method that uses an orthogonal transformation to map the lesion onto the scalp model and generate a conformal virtual incision in real time. Considering clinical requirements, the incision can be amended through 3D interaction and margin modification. According to the IGNS and the virtual incision, an actual craniotomy is located on the patient's head, and the movement path of the probe is recorded and evaluated by an indicator, which serves as an evaluation standard to measure the error between the virtual and actual craniotomies. In the experiment, an incision is drawn on a 3D-printed phantom based on the generated virtual one. The results show that the proposed method can generate a lesion-consistent craniotomy according to the size of the lesion and the mapping angle, and delineate the incision on the patient's head precisely under the IGNS.
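One plausible reading of the orthogonal lesion-to-scalp mapping (hedged; the paper's exact transformation is not reproduced in the abstract) is a ray projection of lesion margin points onto the scalp surface along a chosen mapping direction. The sketch below uses trimesh ray casting as one possible realization; all file names and variables are illustrative.

```python
import numpy as np
import trimesh

def project_lesion_to_scalp(lesion_points, scalp_mesh, direction):
    """Cast rays from lesion margin points along `direction`; return scalp hits."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    directions = np.tile(d, (len(lesion_points), 1))
    hits, ray_ids, _ = scalp_mesh.ray.intersects_location(lesion_points, directions)
    return hits   # candidate incision outline, to be refined by 3D interaction

# scalp = trimesh.load("scalp.stl")   # hypothetical scalp surface model
# outline = project_lesion_to_scalp(lesion_pts, scalp, direction=[0, 0, 1])
```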
- Published
- 2019
6. Radi-Eye: Hands-Free Radial Interfaces for 3D Interaction using Gaze-Activated Head-Crossing
- Author
-
Bill Bapisch, Dominic Potts, Ludwig Sidenmark, and Hans Gellersen
- Subjects
3D interaction, Offset (computer science), business.industry, Computer science, Orientation (computer vision), Interface (computing), ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Virtual reality, Gaze, Eye tracking, Augmented reality, Computer vision, Artificial intelligence, business
- Abstract
Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection followed by head-crossing as trigger and for manipulation. The technique leverages natural eye-head coordination where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
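A conceptual sketch of the Look & Cross mechanism as described above: gaze over a widget pre-selects it, and the selection fires when the head-pointer subsequently crosses into that widget. Widgets are modelled as simple circles purely for illustration; none of these names come from the paper.

```python
from dataclasses import dataclass

@dataclass
class Widget:
    cx: float
    cy: float
    r: float
    head_was_inside: bool = False

    def contains(self, x, y):
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

def look_and_cross(widgets, gaze, head):
    """Return widgets triggered this frame: gaze pre-selection + head crossing."""
    fired = []
    for w in widgets:
        inside = w.contains(*head)
        if w.contains(*gaze) and inside and not w.head_was_inside:
            fired.append(w)              # pre-selected and just crossed: trigger
        w.head_was_inside = inside       # remember state for edge detection
    return fired

# ring = [Widget(0.0, 0.5, 0.1), Widget(0.5, 0.0, 0.1)]
# look_and_cross(ring, gaze=(0.02, 0.52), head=(0.03, 0.48))
```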
- Published
- 2021
- Full Text
- View/download PDF
7. Blink-Suppressed Hand Redirection
- Author
-
Kora Persephone Regitz, André Zenner, and Antonio Krüger
- Subjects
3D interaction, business.industry, Computer science, 05 social sciences, 020207 software engineering, 02 engineering and technology, Virtual reality, Rendering (computer graphics), Retargeting, 0202 electrical engineering, electronic engineering, information engineering, Change blindness, 0501 psychology and cognitive sciences, Computer vision, Artificial intelligence, User interface, Image warping, business, 050107 human factors, Haptic technology
- Abstract
Many interaction techniques in virtual reality break with the 1-to-1 mapping from real to virtual space. Instead, specialized techniques for 3D interaction and haptic retargeting leverage hand redirection, offsetting the virtual hand rendering from the real hand position. However, the utilization of change blindness phenomena to achieve unnoticeable hand redirection has not been systematically explored. Inspired by recent advances in the domain of redirected walking, we present the first hand redirection technique that makes use of blink-induced visual suppression and the corresponding change blindness. We introduce Blink-Suppressed Hand Redirection (BSHR) to study the feasibility and detectability of hand redirection based on blink suppression. Our technique is based on Cheng et al.'s (2017) [9] body warping algorithm and instantaneously shifts the virtual hand when the user's vision is suppressed during a blink. Additionally, it can be configured to continuously increment hand offsets while the user's eyes are open, limited to an extent below detection thresholds. In a psychophysical experiment, we verify that unnoticeable blink-suppressed hand redirection is possible even in worst-case scenarios, and derive the corresponding conservative detection thresholds (CDTs). Moreover, our results show that the range of unnoticeable redirection can be increased by combining continuous warping and blink-suppressed instantaneous shifts. As an additional contribution, we derive the CDTs for Cheng et al.'s (2017) [9] redirection technique, which does not leverage blinks.
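An illustrative sketch (not the authors' implementation) of the two redirection channels BSHR combines: an instantaneous offset shift applied while a blink suppresses vision, plus slow continuous body-warping increments while the eyes are open. The limit and gain constants are placeholders, not the paper's detection thresholds.

```python
BLINK_SHIFT_LIMIT = 0.04   # placeholder per-blink shift budget (metres)
WARP_GAIN = 0.05           # placeholder continuous body-warping gain (1/s)

def update_offset(offset, target_offset, eyes_closed, dt):
    """Advance the redirection offset along one axis (sketch, not the paper's code)."""
    remaining = target_offset - offset
    if eyes_closed:
        # Blink suppression: instantaneous shift, capped by the per-blink budget.
        step = max(-BLINK_SHIFT_LIMIT, min(BLINK_SHIFT_LIMIT, remaining))
    else:
        # Continuous warping: slow increments intended to stay below thresholds.
        step = WARP_GAIN * remaining * dt
    return offset + step

# Rendered hand = real hand position + offset, updated every frame.
```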
- Published
- 2021
- Full Text
- View/download PDF
8. A Cost-Effective 3D Acquisition and Visualization Framework for Cultural Heritage
- Author
-
Taha Alfaqheri, Hosameldin Ahmed, Sebti Foufou, Abdelaziz Bouras, Abdul Hamid Sadka, and Abdelhak Belhi
- Subjects
Value (ethics), Artificial intelligence, 3D interaction, Computer science, 02 engineering and technology, Virtual reality, Constant (computer programming), 11. Sustainability, 0202 electrical engineering, electronic engineering, information engineering, [INFO]Computer Science [cs], CEPROQHA project, ComputingMilieux_MISCELLANEOUS, Motion controller, business.industry, Deep learning, 020207 software engineering, Data science, 3D modelling, Visualization, Cultural heritage, 020201 artificial intelligence & image processing, Augmented reality, business
- Abstract
Museums and cultural institutions in general face a constant challenge of adding more value to their collections. The attractiveness of assets is tightly related to their value, following the law of supply and demand. New digital visualization technologies have been found to generate more excitement, especially among the younger generation, as multiple studies have shown. Nowadays, museums around the world are trying to promote their collections through new multimedia and digital technologies such as 3D modeling, virtual reality (VR), augmented reality (AR), and serious games. However, the difficulty and the resources required to implement such technologies present a real challenge. In this paper, we propose a 3D acquisition and visualization framework aimed mostly at increasing the value of cultural collections. This framework preserves cost-effectiveness and respects time constraints while still introducing new ways of visualizing and interacting with high-quality 3D models of cultural objects. This publication was made possible by NPRP grant 9-181-1-036 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors (www.ceproqha.qa).
- Published
- 2020
- Full Text
- View/download PDF
9. Super Haptoclone: Upper-Body Mutual Telexistence System with Haptic Feedback
- Author
-
Charlotte Delfosse, Kohki Serizawa, Hiroyuki Shinoda, Tao Morisaki, Masahiro Fujiwara, and Yasutoshi Makino
- Subjects
3D interaction, Computer science, business.industry, Upper body, 020208 electrical & electronic engineering, 05 social sciences, Tactile sensation, 02 engineering and technology, Workspace, Telexistence, 0202 electrical engineering, electronic engineering, information engineering, 0501 psychology and cognitive sciences, Computer vision, Artificial intelligence, business, 050107 human factors, Light field, ComputingMethodologies_COMPUTERGRAPHICS, Haptic technology
- Abstract
We demonstrate a mutual telepresence system with a workspace that can cover the whole upper body. A pair of micromirror arrays produces a high-fidelity light field of the other party, and the clothes worn by the users provide haptic feedback during the interaction. In the demonstration system, we use a technique that broadens the viewing angle of the 3D optical images by introducing a symmetric mirror structure, in which users can see both their own body and the partner's mid-air image simultaneously. The tactile sensation is presented by a jacket-type haptic device in synchronization with contact with the optical image.
- Published
- 2020
- Full Text
- View/download PDF
10. Simo: Interactions with Distant Displays by Smartphones with Simultaneous Face and World Tracking
- Author
-
Harald Reiterer, Teo Babic, Florian Perteneder, and Michael J. Haller
- Subjects
3D interaction, Computer science, business.industry, Controller (computing), 05 social sciences, 020207 software engineering, 02 engineering and technology, Tracking (particle physics), Match moving, Face (geometry), 0202 electrical engineering, electronic engineering, information engineering, 0501 psychology and cognitive sciences, Augmented reality, Computer vision, Artificial intelligence, business, Mobile device, 050107 human factors
- Abstract
Interaction with distant displays often demands complex, multi-modal inputs which need to be achieved with a very simple hardware solution, so that users can perform rich inputs wherever they encounter a distant display. We present Simo, a novel approach that transforms a regular smartphone into a highly expressive user-motion tracking device and controller for distant displays. The front and back cameras of the smartphone are used simultaneously to track the user's hand, head, and body movements at real-world space and scale. In this work, we first define the possibilities for simultaneous face and world tracking using current off-the-shelf smartphones. Next, we present the implementation of a smartphone app enabling hand, head, and body motion tracking. Finally, we present a technical analysis outlining the possible tracking range.
- Published
- 2020
- Full Text
- View/download PDF
11. TeslaMirror: Multistimulus Encounter-Type Haptic Display for Shape and Texture Rendering in VR
- Author
-
Dzmitry Tsetserukou, Luiza Labazanova, Akerke Tleugazy, and Aleksey Fedoseev
- Subjects
FOS: Computer and information sciences, 3D interaction, Haptic interaction, Computer science, business.industry, media_common.quotation_subject, Computer Science - Human-Computer Interaction, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Wearable computer, Robotics, Virtual reality, Rendering (computer graphics), Human-Computer Interaction (cs.HC), Computer Science - Robotics, Virtual image, Perception, Robot, Computer vision, Artificial intelligence, business, Robotics (cs.RO), Haptic technology, media_common, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
This paper proposes a novel concept of a hybrid tactile display with multistimulus feedback, allowing the real-time experience of the position, shape, and texture of a virtual object. The key technology of TeslaMirror is that it can deliver the sensation of object parameters (pressure, vibration, and electrotactile feedback) without any wearable haptic devices. We developed a full digital twin of the 6-DOF UR robot in the virtual reality (VR) environment, allowing adaptive surface simulation and control of the hybrid display in real time. A preliminary user study was conducted to evaluate the ability of TeslaMirror to reproduce shape sensations with the under-actuated end-effector. The results revealed that this approach can potentially be used in virtual systems for rendering versatile VR shapes with a high-fidelity haptic experience. (Accepted to the ACM SIGGRAPH 2020 conference, Emerging Technologies section; 2 pages, 3 figures, 1 table.)
- Published
- 2020
- Full Text
- View/download PDF
12. Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation
- Author
-
Jian J. Zhang, Shihui Guo, Shujie Deng, Nan Jiang, and Jian Chang
- Subjects
3D interaction, Computer science, media_common.quotation_subject, Human Factors and Ergonomics, 02 engineering and technology, Virtual reality, Multimodal interaction, Education, Human–computer interaction, Perception, 0202 electrical engineering, electronic engineering, information engineering, 0501 psychology and cognitive sciences, Computer vision, 050107 human factors, media_common, business.industry, 05 social sciences, General Engineering, 020207 software engineering, Gaze, Human-Computer Interaction, Hardware and Architecture, Gesture recognition, Eye tracking, Artificial intelligence, business, Software, Gesture
- Abstract
Multimodal interactions provide users with more natural ways to manipulate virtual 3D objects than traditional input methods. An emerging approach is gaze modulated pointing, which enables users to perform object selection and manipulation in a virtual space conveniently through a combination of gaze and other interaction techniques (e.g., mid-air gestures). As gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on the user's perception of the exact spatial mapping between the virtual space and the physical space. An underexplored issue is that when the spatial mapping differs from the user's perception, manipulation errors (e.g., out-of-boundary errors, proximity errors) may occur. Therefore, in gaze modulated pointing, as gaze can introduce misalignment of the spatial mapping, it may lead to the user's misperception of the virtual environment and consequently to manipulation errors. This paper provides a clear definition of the problem through a thorough investigation of its causes and specifies the conditions under which it occurs, which is further validated in an experiment. It also proposes three methods (Scaling, Magnet and Dual-gaze) to address the problem and examines them in a comparative study involving 20 participants and 1040 runs. The results show that all three methods improved manipulation performance with regard to the defined problem, with Magnet and Dual-gaze delivering better performance than Scaling. This finding could inform a more robust multimodal interface design supported by both eye tracking and mid-air gesture control without losing efficiency and stability.
- Published
- 2017
- Full Text
- View/download PDF
13. Low-cost 3D motion capture system using passive optical markers and monocular vision
- Author
-
Yeonkyung Lee and Hoon Yoo
- Subjects
3D interaction, business.industry, Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 02 engineering and technology, 01 natural sciences, Motion capture, Atomic and Molecular Physics, and Optics, Electronic, Optical and Magnetic Materials, Power (physics), 010309 optics, Single camera, Match moving, 0103 physical sciences, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, business, Monocular vision
- Abstract
This paper presents a low-cost 3D motion capture system using a single camera and passive optical markers. 3D motion capture systems usually demand multiple cameras, and thus require high-cost hardware and large computing power. Such costs keep 3D motion capture out of reach of personal or mobile systems. To overcome this problem, we propose a low-cost motion capture system built around a single camera and passive optical markers. As the components of the proposed system, we introduce 2D optical marker recognition, marker size recognition, and mapping for 3D data extraction. To show its effectiveness and efficiency, we conduct preliminary experiments in terms of accuracy and time complexity. The results indicate that the proposed system provides 3D motion capture with a single camera accurately enough to be used for 3D interaction.
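The marker-size-to-depth mapping mentioned above is commonly realized with the pinhole camera model: a marker of known physical size S appearing s pixels wide at focal length f (in pixels) lies at depth Z = f·S/s. A minimal sketch under that assumption (the paper's exact mapping may differ):

```python
def marker_depth(focal_px, marker_size_m, marker_size_px):
    """Depth from apparent marker size under the pinhole model: Z = f * S / s."""
    return focal_px * marker_size_m / marker_size_px

def back_project(u, v, cx, cy, focal_px, z):
    """Lift pixel (u, v) to 3D using the recovered depth z."""
    return ((u - cx) * z / focal_px, (v - cy) * z / focal_px, z)

# e.g. a 30 mm marker imaged 60 px wide by a camera with f = 800 px:
# marker_depth(800, 0.03, 60) -> 0.4 m
```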
- Published
- 2017
- Full Text
- View/download PDF
14. Development of MirrorShape: High Fidelity Large-Scale Shape Rendering Framework for Virtual Reality
- Author
-
Nikita Chernyadev, Dzmitry Tsetserukou, and Aleksey Fedoseev
- Subjects
FOS: Computer and information sciences, 3D interaction, business.industry, Computer science, Robotics, Virtual reality, Rendering (computer graphics), Computer Science - Robotics, High fidelity, Human–computer interaction, Virtual image, Robot, Artificial intelligence, business, Robotics (cs.RO), Haptic technology
- Abstract
Today there is a wide variety of haptic devices capable of providing tactile feedback. Although most existing designs aim at realistic simulation of surface properties, their capabilities are limited when displaying the shape and position of virtual objects. This paper suggests a new concept of a distributed haptic display for realistic interaction with virtual objects of complex shape, using a collaborative robot with a shape-display end-effector. MirrorShape renders the 3D object in the virtual reality (VR) system by contacting the user's hands with the robot end-effector at the calculated point in real time. Our proposed system makes it possible to synchronously merge the position of the contact point in VR and of the end-effector in the real world. This feature enables the presentation of different shapes and at the same time expands the working area compared to desktop solutions. A preliminary user study revealed that MirrorShape was effective at reducing positional error in VR interactions. Potentially, this approach can be used in virtual systems for rendering versatile VR objects over a wide range of sizes with a high-fidelity large-scale shape experience. (Accepted to the 25th ACM Symposium on Virtual Reality Software and Technology (VRST'19).)
- Published
- 2019
- Full Text
- View/download PDF
15. Immersive Hand Gesture for Virtual Museum using Leap Motion Sensor Based on K-Nearest Neighbor
- Author
-
Surya Sumpeno, I Gede Aris Dharmayasa, Diana Purwitasari, and Supeno Mardi Susiki Nugroho
- Subjects
Database normalization, 3D interaction, Data acquisition, business.industry, Computer science, Feature extraction, Data classification, Preprocessor, Computer vision, Artificial intelligence, business, Gesture, k-nearest neighbors algorithm
- Abstract
A virtual museum is a place where users can explore a museum collection freely. In this study, we discuss the 3D interactions presented in a virtual museum application using the hand-sensing device Leap Motion. To build the 3D interaction, hand gestures are needed to perform each interaction in the virtual world. To prevent misrecognition during 3D interaction, hand-pattern classification is necessary to improve accuracy and precision, so as not to reduce the quality of immersion in the virtual world. The classification method used in this study is K-Nearest Neighbor (KNN), a popular and simple method. The first step is acquiring training data with the Leap Motion controller, which captures the vector values (x, y, z) of each fingertip. The data are then normalized to facilitate the next step, feature extraction. Extracted features include the angles between fingers, the angles between fingertips, the angles between fingertips and palm, the distance vectors between fingertips and palm, and the elevation between fingertips and palm. Finally, the extracted data are used to train and classify gestures with KNN.
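A hedged sketch of the described pipeline, with normalized fingertip vectors turned into angle/distance/elevation features and classified by KNN; the exact feature formulas and the value of k are assumptions that only mirror the categories listed in the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def gesture_features(fingertips, palm):
    """fingertips: (5, 3) tip positions; palm: (3,) palm position (normalized)."""
    rel = fingertips - palm
    dists = np.linalg.norm(rel, axis=1)        # fingertip-palm distances
    elev = rel[:, 1]                           # fingertip elevation over the palm
    angles = []                                # angles between adjacent fingertips
    for a, b in zip(rel[:-1], rel[1:]):
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.concatenate([dists, elev, angles])

knn = KNeighborsClassifier(n_neighbors=5)      # k=5 is a typical choice, not the paper's
# knn.fit(np.stack([gesture_features(f, p) for f, p in samples]), gesture_labels)
# knn.predict([gesture_features(frame_tips, frame_palm)])
```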
- Published
- 2019
- Full Text
- View/download PDF
16. Machine Learning Based Interaction Technique Selection For 3D User Interfaces
- Author
-
Jérôme Royan, Jérémy Lacoche, Bruno Arnaldi, Eric Maisel, and Thierry Duval
- Subjects
3D interaction, Computer science, 02 engineering and technology, Machine learning, computer.software_genre, Personalization, [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI], Machine Learning, 3D Interaction, 0202 electrical engineering, electronic engineering, information engineering, [INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC], Fitts's law, Adaptation (computer science), ComputingMilieux_MISCELLANEOUS, HCI, User profile, business.industry, Virtual Reality, 020207 software engineering, Interaction technique, [INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR], Support vector machine, Artificial intelligence, User interface, business, computer
- Abstract
A 3D user interface can be adapted in multiple ways according to each user's needs, skills and preferences. Such adaptation can consist of changing the user interface layout or its interaction techniques. Personalization systems based on user models can automatically determine the configuration of a 3D user interface in order to fit a particular user. In this paper, we explore the use of machine learning to propose a 3D selection interaction technique adapted to a target user. To do so, we built a dataset from 51 users of a simple selection application, recording each user's profile, his/her results on a 2D Fitts's-law-based pre-test, and his/her preferences and performance in this application for three different interaction techniques. Our machine learning algorithm, based on Support Vector Machines (SVMs) trained on this dataset, proposes the most suitable interaction technique according to the user's profile or his/her result on the 2D selection pre-test. Our results suggest the value of our approach for personalizing a 3D user interface to the target user, but a larger dataset would be required to increase confidence in the proposed adaptations.
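A small sketch of the recommendation step described above: an SVM trained on user profiles and pre-test results predicts a suitable selection technique for a new user. The feature columns and technique labels below are invented placeholders, not the paper's actual profile attributes or techniques.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical profile columns: age, gaming hours/week, 2D pre-test throughput (bits/s)
X = np.array([[24, 10, 4.2], [35, 0, 2.9], [19, 20, 5.1], [41, 2, 3.3]])
y = np.array(["technique-A", "technique-B", "technique-A", "technique-B"])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict([[29, 5, 3.8]]))   # suggested technique for a new user profile
```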
- Published
- 2019
- Full Text
- View/download PDF
17. Eye&Head
- Author
-
Hans Gellersen and Ludwig Sidenmark
- Subjects
3D interaction, genetic structures, Computer science, Movement (music), business.industry, Head (linguistics), 05 social sciences, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 020207 software engineering, 02 engineering and technology, Virtual reality, Gaze, eye diseases, Motion (physics), InformationSystems_MODELSANDPRINCIPLES, 0202 electrical engineering, electronic engineering, information engineering, Eye tracking, 0501 psychology and cognitive sciences, Computer vision, sense organs, Artificial intelligence, business, 050107 human factors, Abstraction (linguistics)
- Abstract
Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.
- Published
- 2019
- Full Text
- View/download PDF
18. Beyond Fitts's Law: A Three-Phase Model Predicts Movement Time to Position an Object in an Immersive 3D Virtual Environment
- Author
-
Peng Geng, Cheng-Long Deng, Yi-Fei Hu, and Shu-Guang Kuai
- Subjects
Adult, Male, 3D interaction, Time Factors, Computer science, Deceleration, Movement, Acceleration, Human Factors and Ergonomics, 02 engineering and technology, Motor Activity, computer.software_genre, Behavioral Neuroscience, Young Adult, Position (vector), 0202 electrical engineering, electronic engineering, information engineering, Humans, 0501 psychology and cognitive sciences, Computer vision, Fitts's law, 050107 human factors, Applied Psychology, business.industry, Movement (music), 05 social sciences, Virtual Reality, 020207 software engineering, Object (computer science), Virtual machine, Female, Artificial intelligence, business, computer
- Abstract
Objective: The study examines the factors determining the movement time (MT) of positioning an object in an immersive 3D virtual environment. Background: Positioning an object in a prescribed area is a fundamental operation in 3D space. Although Fitts's law models the pointing task very well, it does not apply to a positioning task in an immersive 3D virtual environment, since it does not consider the effect of object size. Method: Participants were asked to position a ball-shaped object into a spherical area in a virtual space using a handheld or head-tracking controller with the ray-casting technique. We varied object size (OS), movement amplitude (A), and target tolerance (TT). MT was recorded and analyzed in three phases: acceleration, deceleration, and correction. Results: In the acceleration phase, MT was inversely related to object size and proportional to movement amplitude. In the deceleration phase, MT was primarily determined by movement amplitude. In the correction phase, MT was affected by all three factors. We observed similar results whether participants used a handheld controller or a head-tracking controller. We thus propose a three-phase model with a different formula for each phase. This model fit participants' performance very well. Conclusion: A three-phase model can successfully predict MT in the positioning task in an immersive 3D virtual environment in the acceleration, deceleration, and correction phases separately. Application: Our model provides a quantitative framework for researchers and designers to design and evaluate 3D interfaces for positioning tasks in a virtual space.
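The abstract states the model's structure, if not its fitted formulae; a hedged restatement in math, contrasted with classic Fitts's law (per-phase formulae are fitted in the paper, so only the stated factor dependencies are shown):

```latex
% Classic Fitts's law for pointing (A: movement amplitude, W: target width):
\[ MT = a + b \log_2\!\left(\frac{A}{W} + 1\right) \]
% Three-phase positioning model: total MT decomposes additively, with each
% phase driven by the factors named in the abstract:
\[ MT = MT_{\mathrm{acc}}(A, OS) + MT_{\mathrm{dec}}(A) + MT_{\mathrm{corr}}(A, OS, TT) \]
```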
- Published
- 2019
19. A 6-DOF Telexistence Drone Controlled by a Head Mounted Display
- Author
-
Hao Gao, Huimin Lu, Xingyu Xia, Feng Xu, Chi-Man Pun, Di Zhang, and Yang Yang
- Subjects
Scheme (programming language), 3D interaction, business.industry, Computer science, 05 social sciences, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Optical head-mounted display, 020207 software engineering, 02 engineering and technology, Telexistence, Virtual reality, Drone, 0202 electrical engineering, electronic engineering, information engineering, Key (cryptography), 0501 psychology and cognitive sciences, Computer vision, Artificial intelligence, business, computer, 050107 human factors, Stereo camera, ComputingMethodologies_COMPUTERGRAPHICS, computer.programming_language
- Abstract
Recently, a new form of telexistence has been achieved by recording images with cameras on an unmanned aerial vehicle (UAV) and displaying them to the user via a head mounted display (HMD). A key problem here is how to provide a free and natural mechanism for the user to control the viewpoint and watch a scene. To this end, we propose an improved rate-control method with an adaptive origin update (AOU) scheme. Without the aid of any auxiliary equipment, our scheme handles the self-centering problem. In addition, we present a full 6-DOF viewpoint control method to manipulate the motion of a stereo camera, and we build a real prototype that realizes this with a pan-tilt-zoom (PTZ) unit, which not only provides 2-DOF to the camera but also compensates for the jittering motion of the UAV to record more stable image streams.
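A hedged guess at what a rate-control law with an adaptive origin update could look like (the abstract names the scheme but not its equations): the commanded camera velocity is proportional to the head offset from an origin pose, and the origin slowly re-centres on the current pose, addressing self-centering without auxiliary equipment. All constants are placeholders.

```python
def rate_control_aou(head_pose, origin, gain=0.5, adapt=0.05, dt=1.0 / 60):
    """One control tick: velocity from head offset, plus adaptive origin update."""
    velocity = gain * (head_pose - origin)                # rate control
    origin = origin + adapt * (head_pose - origin) * dt   # AOU: slow re-centering
    return velocity, origin
```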
- Published
- 2019
- Full Text
- View/download PDF
20. Design and Evaluation of Visual Interpenetration Cues in Virtual Grasping
- Author
-
Christoph W. Borst and Mores Prachyabrued
- Subjects
3D interaction, Computer science, business.industry, 05 social sciences, GRASP, 020207 software engineering, 02 engineering and technology, Virtual reality, Computer Graphics and Computer-Aided Design, Session (web analytics), Visualization, Human–computer interaction, Virtual image, Signal Processing, 0202 electrical engineering, electronic engineering, information engineering, 0501 psychology and cognitive sciences, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, business, Sensory cue, 050107 human factors, Software, Haptic technology
- Abstract
We present design and impact studies of visual feedback for virtual grasping. The studies suggest new or updated guidelines for such feedback. Recent grasping techniques incorporate visual cues to help resolve undesirable visual or performance artifacts encountered after real fingers enter a virtual object. Prior guidelines about such visuals are based largely on other interaction types and provide inconsistent and potentially misleading information when applied to grasping. We address this with a two-stage study. In the first stage, users adjusted parameters of various feedback types, including some novel aspects, to identify promising settings and to give insight into preferences regarding the parameters. In the next stage, the tuned feedback techniques were evaluated in terms of objective performance (finger penetration, release time, and precision) and subjective rankings (visual quality, perceived behavior impact, and overall preference). Additionally, subjects commented on the techniques while reviewing them in a final session. Performance-wise, the most promising techniques directly reveal the penetrating hand configuration in some way. Subjectively, subjects appreciated visual cues about interpenetration or grasp force, and color changes are most promising. The results enable selection of the best cues based on an understanding of the relevant tradeoffs and reasonable parameter values. They also provide a needed basis for more focused studies of specific visual cues and for choosing conditions in comparisons with other feedback modes, such as haptic, audio, or multimodal. Considering these results, we propose that 3D interaction guidelines be updated to capture the importance of interpenetration cues, the possible performance benefits of direct representations, and the tradeoffs involved in cue selection.
- Published
- 2016
- Full Text
- View/download PDF
21. Gesture tank: a gesture detection water vessel for foot movements
- Author
-
K. S. Lasith Gunawardena and Masahito Hirakawa
- Subjects
Foot (prosody), Foot movements, 3D interaction, Gesture recognition, Computer science, business.industry, Computer vision, General Medicine, Artificial intelligence, Set (psychology), business, Gesture
- Abstract
Computers have become ubiquitous and integrated into our day-to-day activities. Researchers have been exploring mechanisms for interacting with computers using natural means in natural environments. Water interaction is a perfect example. This paper presents our attempt to use foot gestures performed in water as an interaction mechanism. It is an extension of our previous study for detecting objects in a water vessel. An experiment was performed to determine which foot-based gestures are suitable for implementation, and we proceeded to recognize a selected set of gestures using machine-learning techniques. We present our findings regarding which algorithms provide the best recognition rates.
- Published
- 2016
- Full Text
- View/download PDF
22. CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds
- Author
-
Lingyun Yu, Tobias Isenberg, Petra Isenberg, and Konstantinos Efstathiou
- Subjects
Adult, Male, 3D interaction, Spatial selection, Adolescent, Computer science, Point cloud, [SCCO.COMP]Cognitive science/Computer science, Context (language use), 02 engineering and technology, Structure-aware selection, computer.software_genre, Machine learning, 01 natural sciences, CONFIDENCE-INTERVALS, Young Adult, Imaging, Three-Dimensional, Task Performance and Analysis, 0103 physical sciences, Computer Graphics, Image Processing, Computer-Assisted, 0202 electrical engineering, electronic engineering, information engineering, Humans, Set (psychology), Selection, 010303 astronomy & astrophysics, Selection (genetic algorithm), Context-aware selection, Point (typography), business.industry, User interaction, 020207 software engineering, Usability, Computer Graphics and Computer-Aided Design, Visualization, Exploratory data visualization and analysis, Signal Processing, VISUALIZATION, Female, Computer Vision and Pattern Recognition, Artificial intelligence, Data mining, business, VISUAL EXPLORATION, computer, Algorithms, Software, PACKAGE
- Abstract
We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest are implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve the usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting CAST, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that CAST family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.
- Published
- 2016
- Full Text
- View/download PDF
23. Immersive Interactive Virtual Fish Swarm Simulation Based on Infrared Sensors
- Author
-
Chen Sun, Xingquan Cai, Yakun Ge, Honghao Buni, and Chao Chen
- Subjects
0209 industrial biotechnology, 3D interaction, Computer science, business.industry, ComputingMethodologies_MISCELLANEOUS, Swarm behaviour, Particle swarm optimization, Cohesion (computer science), 02 engineering and technology, Simulation system, GeneralLiterature_MISCELLANEOUS, 020901 industrial engineering & automation, Artificial Intelligence, Human–computer interaction, 0202 electrical engineering, electronic engineering, information engineering, Fish, 020201 artificial intelligence & image processing, Computer Vision and Pattern Recognition, Artificial intelligence, business, Simulation based, Software, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
Virtual simulation and 3D interaction have shown great potential in a variety of domains. For a virtual fish swarm simulation system, the simulation of fish swarm cohesion behaviors and the interaction between human and fish swarm are two key components for creating immersive interactive experiences. However, it is a huge challenge to create a realistic fish swarm simulation system while providing natural and comfortable interaction. In this paper, we propose a method for immersive virtual fish swarm simulation based on infrared sensors. Based on dynamic weight constraints, we propose a particle swarm optimization method for fish swarm cohesion simulation, which separates the particle swarm by the state of each particle and dynamically controls the swarm, making the movement behavior of the virtual fish more realistic. In addition, an interactive fast skinning method for cartoon fishes is proposed, which leverages image segmentation, Optical Character Recognition (OCR), and bone skinning to generate cartoon fishes based on user-created colors. With infrared sensors, we propose a method for virtual fish swarm interaction, in which the positions of the human skeleton are processed by an action analyzer, achieving real-time user interaction with the fish swarm. With all the proposed techniques integrated in one system, the experimental results show that our method is feasible and effective.
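As a loose illustration of a dynamically weighted cohesion update in the spirit of the method above (the paper's actual rule and weights are not given in the abstract), each fish's velocity blends inertia with a pull toward the school centroid; the weights and time step are placeholders.

```python
import numpy as np

def cohesion_step(pos, vel, w_inertia=0.7, w_cohesion=0.05, dt=1.0 / 30):
    """pos, vel: (N, 3) fish positions/velocities; steer toward the school centroid."""
    centroid = pos.mean(axis=0)
    vel = w_inertia * vel + w_cohesion * (centroid - pos)   # weighted pull inward
    return pos + vel * dt, vel
```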
- Published
- 2020
- Full Text
- View/download PDF
24. Move the Object or Move Myself? Walking vs. Manipulation for the Examination of 3D Scientific Data
- Author
-
Doug A. Bowman and Wallace S. Lages
- Subjects
3D interaction, Computer Networks and Communications, Computer science, Spatial ability, Headset, Interaction design, Virtual reality, computer.software_genre, lcsh:QA75.5-76.95, 050105 experimental psychology, 03 medical and health sciences, 0302 clinical medicine, spatial cognition, Artificial Intelligence, Human–computer interaction, Cognitive resource theory, 0501 psychology and cognitive sciences, spatial ability, physical navigation, 05 social sciences, Spatial cognition, Hardware and Architecture, Virtual machine, virtual reality, lcsh:Electronic computers. Computer science, computer, 030217 neurology & neurosurgery, Software, Information Systems
- Abstract
Physical walking is consistently considered a natural and intuitive way to acquire viewpoints in a virtual environment. However, research findings also show that walking requires cognitive resources. To understand how this tradeoff affects interaction design for virtual environments, we evaluated the performance of 32 participants, ranging from 18 to 44 years old, in a demanding visual and spatial task. Participants wearing a virtual reality (VR) headset counted features in a complex 3D structure while walking or while using a 3D interaction technique for manipulation. Our results indicate that the relative performance of the interfaces depends on the spatial ability and game experience of the participants. Participants with previous game experience but low spatial ability performed better using the manipulation technique. However, walking enabled higher performance for participants with low spatial ability and without significant game experience. These findings suggest that optimal design choices for demanding visual tasks in VR should consider both the controller experience and the spatial ability of the target users.
- Published
- 2018
25. A novel 3D interactive method of tracking handheld actual rigid objects
- Author
-
Lu Chen, Chunyong Ma, Ge Chen, Chun Wang, and Yang Fan
- Subjects
3D interaction, Computer science, business.industry, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Point cloud, 02 engineering and technology, Object (computer science), Tracking (particle physics), 020204 information systems, Joystick, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Computer vision, Point (geometry), Bilateral filter, Artificial intelligence, business, Mobile device, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
In this paper, we propose a 3D interactive method which can track a handheld rigid object without markers and recognize human pose simultaneously. Based on depth data from the Kinect v2.0, improved bilateral filtering is utilized to overcome recognition failures caused by occlusion; the target point cloud is then obtained after segmentation and tracked by the Iterative Closest Point (ICP) algorithm. Through this method, users can operate an actual rigid object with both hands and change posture to perform multiple interactions with 3D scenes. The method thus breaks the limitations of a fixed interactive medium such as a joystick and is suitable for general application.
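A minimal sketch of the frame-to-frame tracking loop described above, substituting Open3D's stock ICP for the authors' implementation; segmentation of the object cloud from the Kinect depth frame is assumed to happen elsewhere, and the 2 cm correspondence gate is an assumption.

```python
import numpy as np
import open3d as o3d

def incremental_icp(prev_cloud, new_cloud, gate=0.02):
    """Rigid transform aligning the newly segmented object cloud onto the previous one."""
    result = o3d.pipelines.registration.registration_icp(
        new_cloud, prev_cloud, gate, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 object motion between consecutive frames
```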
- Published
- 2018
- Full Text
- View/download PDF
26. Assessment of hand kinematics and interactions with the environment
- Author
-
Hendrik Gerhardus Kortier, Peter H. Veltink, and H. Martin Schepers
- Subjects
3D interaction, State variable, Inertial frame of reference, business.industry, Computer science, Orientation (computer vision), Kinematics, Thumb, medicine.anatomical_structure, Inertial measurement unit, Position (vector), medicine, Computer vision, Artificial intelligence, business
- Abstract
Measuring hand and finger movements, and interaction forces, is important for the assessment of tasks in daily life. This thesis proposes a new on-body assessment system that allows the measurement of movements and interaction forces of the hand, fingers and thumb. The first thesis objective concerns the development, evaluation and validation of an inertial and magnetic sensing system for the measurement of hand and finger kinematics. The second objective concerns the assessment of the dynamic interaction between the human hand and the environment using combined force and movement sensing. Chapter 2 describes the inertial and magnetic sensing hardware, and kinematic estimation algorithms, for a sensing system which can be attached to the hand, fingers and thumb. Chapter 3 reports an extensive comparison of this inertial sensing system against a passive opto-electronic marker system for different tasks that mimic situations in activities of daily living. Chapter 4 describes a new method to ease the typical anatomical segment and sensor calibration procedures by estimating these parameters implicitly along with the state variables. In addition, the method incorporates information from chained linkages to circumvent the use of magnetometers for heading estimation. Chapter 5 presents a solution to estimate the full pose (3D position and 3D orientation) of the hand with respect to the sternum using inertial sensors, magnetometers and a permanent magnet. In contrast to the previous chapters, the magnetic information is used here to yield a drift-free position estimate. Chapter 6 concerns the second research objective, the assessment of the physical interaction between the human hand and environmental objects. Dedicated sensors were applied to measure 3D interaction forces. This hardware was combined with the inertial hardware presented in the previous chapters and attached to the finger and thumb tips to measure interaction forces and finger motions simultaneously. Finally, the system is used to quantify object dynamics using the information obtained during manipulation.
- Published
- 2018
27. A Semantic Reasoning Engine for Context-Aware 3D Interactions
- Author
-
Yannick Dennemont and Guillaume Bouyer
- Subjects
3D interaction, Philosophy, Context, Context (language use), 02 engineering and technology, Knowledge representation and reasoning, Virtual reality, [INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR], [INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI], Conceptual graphs, Artificial Intelligence, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, [INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC], Humanities, Software
- Abstract
National audience; Tasks and contexts of interaction within virtual environments are increasingly varied and complex, and sometimes unpredictable. 3D user interaction techniques fixed at design time are not always adapted, so our research aims at providing "context-aware" or "adaptive" assistance for 3D interaction. Having analyzed several solutions and methods for knowledge representation and reasoning, we designed and developed a generic decision engine. Knowledge representation is based on an ontology and Conceptual Graphs; semantic reasoning uses first-order logic. The engine can communicate with existing virtual reality applications through a set of software tools: context information is gathered by sensors, while multimodal assistance is delivered by effectors. This process was applied to adaptive assistance for object selection and to offline activity analysis.
- Published
- 2015
- Full Text
- View/download PDF
28. Generating 3D Visual Language Editors: Encapsulating Interaction Techniques in Visual Patterns
- Author
-
Uwe Kastens and Jan Wolter
- Subjects
3D interaction ,Computer Networks and Communications ,Programming language ,business.industry ,Computer science ,Structure editor ,Usability ,Construct (python library) ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Domain (software engineering) ,Visual language ,Artificial Intelligence ,Abstract syntax ,business ,computer ,Software ,3D computer graphics - Abstract
The implementation of three-dimensional visual languages requires a wide range of conceptual and technical knowledge of 3D graphics and textual language processing. Our generator framework DEViL3D incorporates such knowledge and supports the design of visual 3D languages and their implementation from high-level specifications. Such 3D languages arise from different modeling domains that make use of three-dimensional representations, e.g. the "ball-and-stick" models of molecules. The front-end of a 3D language implementation is a dedicated 3D graphical structure editor, which offers interaction and navigation techniques to construct programs in their domain. These techniques allow the user to manipulate the 3D program directly, using operations to insert, move, and restructure objects. We have developed ready-made solutions for all such techniques, encapsulated in visual patterns provided by our generator. The designer of a particular 3D language only has to apply visual patterns to constructs of the abstract syntax, which defines the basic structure of the language. We have complemented our development with a usability study in which participants had to solve several tasks with different interaction or navigation techniques. Furthermore, we equipped the 3D editors with support for immersive 3D perception using stereoscopic hardware.
- Published
- 2015
- Full Text
- View/download PDF
29. Markerless 3D Interaction in an Unconstrained Handheld Mixed Reality Setup
- Author
-
Annette Mossel, Daniel Fritz, Hannes Kaufmann, Centre Européen de Réalité Virtuelle (CERV), École Nationale d'Ingénieurs de Brest (ENIB), and Buche, Cédric
- Subjects
3D interaction ,Engineering ,business.industry ,Natural user interface ,020207 software engineering ,02 engineering and technology ,[INFO] Computer Science [cs] ,computer.software_genre ,Mixed reality ,Natural User Interface ,Software framework ,Handheld Augmented Reality ,RGB+D Hand Posture Detection ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,Comparative Study ,[INFO]Computer Science [cs] ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Mobile device ,computer ,Mobile interaction - Abstract
International audience; In mobile applications, it is crucial to provide intuitive means for 2D and 3D interaction. A large number of techniques exist to support a natural user interface (NUI) by detecting the user's hand posture in RGB+D (depth) data. Depending on the given interaction scenario and its environmental properties, each technique has its advantages and disadvantages regarding the accuracy and robustness of posture detection. While the interaction environment in a desktop setup can be constrained to meet certain requirements, a handheld scenario has to deal with varying environmental conditions. To evaluate the performance of such techniques on a mobile device, we developed a powerful software framework capable of processing and fusing RGB and depth data directly on a handheld device. Using this framework, five existing hand posture recognition techniques were integrated and systematically evaluated by comparing their accuracy under varying illumination and background. Overall, the results reveal the best posture recognition rate for combined RGB+D data, at the expense of update rate. To support users in choosing the appropriate technique for their specific mobile interaction task, we derived guidelines based on our study. Finally, an experimental study was conducted using the detected hand postures to perform the canonical 3D interaction tasks of selection and positioning in a mixed reality handheld setup.
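To give a flavor of one of the simpler technique families such a study compares, the sketch below segments the hand in a fixed depth band and counts convexity defects with OpenCV 4.x; the thresholds and the hand-is-closest-object assumption are illustrative and not taken from the paper:

```python
import cv2
import numpy as np

def count_fingers(depth_mm, near=400, far=700):
    # Segment the hand as everything inside a fixed depth band (in mm).
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects roughly correspond to the gaps between
    # extended fingers; defect depth is stored in fixed point (1/256 px).
    deep = sum(1 for d in defects[:, 0] if d[3] / 256.0 > 20.0)
    return min(deep + 1, 5)
```

Pure depth heuristics like this are illumination-independent but fragile against background clutter, which is exactly the trade-off the paper's RGB+D fusion results quantify.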
- Published
- 2015
- Full Text
- View/download PDF
30. Direct retinal signals for virtual environments
- Author
-
Ismo Rakkolainen, Eero Koskinen, Roope Raisamo, Viestintätieteiden tiedekunta - Faculty of Communication Sciences, and Tampere University
- Subjects
3D interaction ,Visual acuity ,genetic structures ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical head-mounted display ,02 engineering and technology ,Virtual reality ,virtuaalisilmikko ,Pupil ,Virtual retinal display ,law.invention ,chemistry.chemical_compound ,InformationSystems_MODELSANDPRINCIPLES ,Genetiikka, kehitysbiologia, fysiologia - Genetics, developmental biology, physiology ,law ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,0501 psychology and cognitive sciences ,Computer vision ,Tietojenkäsittely ja informaatiotieteet - Computer and information sciences ,050107 human factors ,Retina ,business.industry ,05 social sciences ,020207 software engineering ,Retinal ,eye diseases ,virtuaalitodellisuus ,head-mounted display ,medicine.anatomical_structure ,chemistry ,virtual reality ,sense organs ,Artificial intelligence ,medicine.symptom ,business ,Near-to-eye display ,Sähkö-, automaatio- ja tietoliikennetekniikka, elektroniikka - Electronic, automation and communications engineering, electronics - Abstract
We present a novel signaling method for head-mounted displays which bypasses the pupil and delivers guiding light signals directly to the retina through the tissue near the eyes. The method preserves full visual acuity on the display and does not block the view of the scene, while still delivering additional visual signals.
- Published
- 2017
- Full Text
- View/download PDF
31. A 3D interaction technique for selection and manipulation of distant objects in augmented reality
- Author
-
Mahfoud Hamidia, Abdelkader Bellarbi, Samir Otmane, Nadia Zenati, Assia Messaci, Samir Benbelkacem, Hayet Belghit, Centre de Développement des Technologies Avancées (CDTA), Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), and Université d'Évry-Val-d'Essonne (UEVE)
- Subjects
3D interaction ,Computer science ,business.industry ,3D interaction techniques ,020207 software engineering ,02 engineering and technology ,Interaction technique ,Augmented reality ,[ SPI.SIGNAL ] Engineering Sciences [physics]/Signal and Image processing ,Object (computer science) ,Pose Estimation ,[ INFO.INFO-HC ] Computer Science [cs]/Human-Computer Interaction [cs.HC] ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Artificial intelligence ,Zoom ,business ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,Pose ,Selection (genetic algorithm) - Abstract
International audience; 3D object selection and manipulation is one of the essential features of any augmented reality (AR) system. However, distant object selection and manipulation still suffer from a lack of accuracy and precision. This paper introduces an alternative 3D interaction technique for selecting and manipulating distant 3D objects in immersive video see-through AR. The proposed technique offers high precision when selecting and manipulating distant objects thanks to a zooming-based approach: zooming brings both real and virtual objects closer while maintaining the spatio-temporal registration between the virtual and real scenes. An evaluation of our approach and a comparison with other well-known techniques are given at the end of this paper.
- Published
- 2017
- Full Text
- View/download PDF
32. ThirdLight: low-cost and high-speed 3D interaction using photosensor markers
- Author
-
Gyuchull Han, Hwasup Lim, Abhijeet Ghosh, Shahram Izadi, and Jaewon Kim
- Subjects
3D interaction ,Inverse kinematics ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Photodetector ,Binary number ,020207 software engineering ,02 engineering and technology ,Virtual reality ,Robustness (computer science) ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Robot ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a low-cost 3D tracking system for virtual reality, gesture modeling, and robot manipulation applications that require fast and precise localization of headsets, data gloves, props, or controllers. Our system removes the need for cameras or projectors for sensing, and instead uses cheap LEDs and printed masks for illumination, together with low-cost photosensitive markers. The illumination device transmits a spatiotemporal pattern as a series of binary Gray-code patterns. Multiple illumination devices can be combined to localize each marker in 3D at high speed (333 Hz). Compared with conventional techniques, our method has strengths in accuracy, speed, cost, performance under ambient light, working space (1 m to 5 m), and robustness to noise. We compare against a state-of-the-art instrumented glove and vision-based systems to demonstrate the accuracy, scalability, and robustness of our approach. We propose a fast and accurate method for hand gesture modeling using an inverse kinematics approach with six photosensitive markers. We additionally propose a passive marker system and demonstrate various interaction scenarios as practical applications.
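The spatiotemporal illumination scheme rests on binary Gray codes: each photosensitive marker decodes the bit sequence it observes into its coordinate in the projected pattern, and consecutive codes differ in exactly one bit, so a mistimed frame costs at most one position step. A minimal encode/decode pair:

```python
def to_gray(n: int) -> int:
    """Convert a binary index to its Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Recover the binary index from a Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# 10-bit patterns resolve 1024 positions per axis; neighbouring codes
# differ in exactly one bit, which is what bounds the decoding error.
assert all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
           for i in range(1023))
assert all(from_gray(to_gray(i)) == i for i in range(1024))
```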
- Published
- 2017
33. Distant Pointing User Interfaces based on 3D Hand Pointing Recognition
- Author
-
Takashi Komuro, Dai Fujita, and Yutaka Endo
- Subjects
Pie menu ,Engineering ,3D interaction ,business.industry ,Interface (computing) ,05 social sciences ,020207 software engineering ,Pointing device ,02 engineering and technology ,Small hand ,law.invention ,law ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Computer vision ,Artificial intelligence ,User interface ,business ,050107 human factors ,Remote control ,Gesture - Abstract
In this paper, we propose a system that enables remote control of a computer with small hand gestures. Distant pointing is realized using a 3D hand pointing recognition algorithm that obtains the position and direction of the pointing hand. We show the effectiveness of the system by constructing three types of user interfaces that take the accuracy of distant pointing in the current system into account: a tile layout interface for rough selection, a pie menu interface for detailed operations, and a viewer interface for document browsing.
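Given the hand position and pointing direction that the recognition algorithm outputs, the pointed-at screen location reduces to a ray-plane intersection. A generic sketch (names are illustrative, not from the paper):

```python
import numpy as np

def pointing_target(origin, direction, screen_point, screen_normal):
    """Intersect the pointing ray with the display plane.

    origin/direction: 3D hand position and pointing direction;
    screen_point/screen_normal define the display plane.
    Returns the 3D intersection point, or None if there is none.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    denom = np.dot(screen_normal, d)
    if abs(denom) < 1e-6:
        return None                      # ray parallel to the screen
    t = np.dot(screen_normal, np.asarray(screen_point, float) - o) / denom
    return None if t < 0 else o + t * d  # behind the hand -> no target
```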
- Published
- 2017
- Full Text
- View/download PDF
34. Mid-air interaction with a 3D aerial display
- Author
-
Ronald Azuma, Jonathan C. Moisant-Thompson, Hunter Seth E, Dave MacLeod, and Derek Disanjh
- Subjects
3D interaction ,business.industry ,Computer science ,media_common.quotation_subject ,05 social sciences ,Illusion ,Volume (computing) ,Volumetric display ,01 natural sciences ,Visualization ,010309 optics ,Computer graphics (images) ,0103 physical sciences ,0501 psychology and cognitive sciences ,Point (geometry) ,Computer vision ,Artificial intelligence ,business ,050107 human factors ,media_common - Abstract
We present a re-imaged swept-volume display enabling mid-air interaction with 3D floating objects without requiring a head-mounted apparatus. The presence of the volume is so strong that everyone reaches out to touch it, at which point the illusion is broken. To resolve this, we implemented interaction techniques that mitigate occlusion conflicts between the hand and the virtual volume during direct manipulation. Our long-term goal is to prototype direct spatial 3D manipulation techniques for scenarios such as viewing 3D scans, previewing 3D prints, and product design visualization.
- Published
- 2017
- Full Text
- View/download PDF
35. Computer-Created Interactive 3D Image with Midair Haptic Feedback
- Author
-
Yasutoshi Makino, Yuta Kimura, and Hiroyuki Shinoda
- Subjects
3D interaction ,Light source ,Computer science ,Virtual image ,business.industry ,Interactive 3d ,Computer vision ,Artificial intelligence ,business ,Image (mathematics) ,Haptic technology - Abstract
We created a system that enables a user to touch and interact with 3D images formed in mid-air by a light source device. The user receives haptic feedback without wearing a special device, via focused airborne ultrasound. The 3D image pattern of 36,864 dots and the tactile feedback points are fully programmable and updated at a refresh rate of 18 Hz. Using this system, we demonstrate the feasibility of handling a 3D virtual object.
- Published
- 2017
- Full Text
- View/download PDF
36. HaptoCloneAR (Haptic-Optical Clone with Augmented Reality) for Mutual Interactions with Midair 3D Floating Image and Superimposed 2D Display
- Author
-
Seki Inoue, Kentaro Yoshida, Keisuke Hasegawa, Yasutoshi Makino, Hiroyuki Shinoda, and Takaaki Kamigaki
- Subjects
3D interaction ,business.industry ,Computer science ,Clone (computing) ,Computer vision ,Augmented reality ,Artificial intelligence ,Workspace ,Telexistence ,business ,Tactile display ,Haptic technology ,Image (mathematics) - Abstract
In a previous study, a new interactive system called HaptoClone was proposed, in which a user can interact with optically copied objects from the adjacent workspace with haptic feedback. In this study, we propose an improved version of the system called HaptoCloneAR, which superimposes a virtual 2D screen on the copied objects by using half-mirrors. The system can display two different images independently in each workspace. In this paper, we describe the basic configuration of the HaptoCloneAR system and demonstrate its feasibility with our prototype.
- Published
- 2017
- Full Text
- View/download PDF
37. Design and Evaluation of Zoom-based 3D Interaction Technique for Augmented Reality
- Author
-
Hayet Belghit, Abdelkader Bellarbi, Nadia Zenati, Samir Otmane, Informatique, Biologie Intégrative et Systèmes Complexes (IBISC), Université d'Évry-Val-d'Essonne (UEVE), and Centre de Développement des Technologies Avancées (CDTA)
- Subjects
3D interaction ,Augmented Reality ,Ar system ,Computer science ,business.industry ,02 engineering and technology ,Interaction technique ,[ INFO.INFO-HC ] Computer Science [cs]/Human-Computer Interaction [cs.HC] ,0202 electrical engineering, electronic engineering, information engineering ,Selection (linguistics) ,Computer vision algorithms ,020201 artificial intelligence & image processing ,Computer vision ,Augmented reality ,Artificial intelligence ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Zoom ,business ,Pose - Abstract
International audience; Real-time 3D interaction with augmented reality (AR) environments is one of the main features of any AR system. However, selecting and manipulating distant 3D virtual objects in AR still suffers from a lack of accuracy and precision. In this paper, we propose an alternative 3D interaction technique called "Zoom-in" for selecting and manipulating distant objects in immersive video see-through augmented reality. The Zoom-in technique is based on zooming the captured images, which brings both real and virtual distant objects closer while keeping the spatial registration between the virtual and real scenes, thanks to a robust real-time computer vision algorithm for pose estimation. An evaluation and a comparison with another well-known technique are given at the end of this paper in order to validate the proposed approach.
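For a centered digital zoom, registration can be preserved by rendering the virtual content with correspondingly scaled camera intrinsics. A sketch of that idea, under the assumption that the principal point sits at the image centre (the paper's own algorithm is not published as code):

```python
import numpy as np

def zoom_intrinsics(K, zoom):
    """Scale pinhole intrinsics for a centered digital zoom.

    Cropping the central 1/zoom of the image and rescaling it to full size
    multiplies both focal lengths by `zoom`; with the principal point at
    the image centre it is unchanged, so virtual content rendered with the
    returned matrix stays registered with the magnified video.
    """
    K_zoom = K.copy().astype(float)
    K_zoom[0, 0] *= zoom   # fx
    K_zoom[1, 1] *= zoom   # fy
    return K_zoom
```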
- Published
- 2017
38. Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation
- Author
-
Eric D. Ragan, Felipe Bacim, Doug A. Bowman, and Siroberto Scerbo
- Subjects
3D interaction ,business.industry ,Head (linguistics) ,Orientation (computer vision) ,Computer science ,Visibility (geometry) ,020207 software engineering ,02 engineering and technology ,Virtual reality ,Computer Graphics and Computer-Aided Design ,Task (project management) ,Visualization ,Transfer of training ,020204 information systems ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Realism - Abstract
Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
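The core mapping is a rotational gain: small physical turns drive larger virtual turns. A minimal sketch; constant gain is one plausible choice, and the study itself varies the amplification level:

```python
def amplified_yaw(physical_yaw_deg, amplification=2.0):
    """Map a physical head yaw to a virtual yaw with constant gain.

    With amplification 2.0, turning the head +/-90 degrees sweeps the
    full 360-degree virtual surround.
    """
    virtual = physical_yaw_deg * amplification
    return (virtual + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
```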
- Published
- 2017
39. Mechanism of Integrating Force and Vibrotactile Cues for 3D User Interaction within Virtual Environments
- Author
-
Frédéric Merienne, Yaoping Hu, Aida Erfanian, Stanley Tarng, Jeremy Plouzeau, University of Calgary, and Laboratoire d'Electronique, d'Informatique et d'Image (Le2i), Université de Technologie de Belfort-Montbeliard (UTBM)-Université de Bourgogne (UB)-École Nationale Supérieure d'Arts et Métiers (ENSAM)-AgroSup Dijon - Institut National Supérieur des Sciences Agronomiques, de l'Alimentation et de l'Environnement-Centre National de la Recherche Scientifique (CNRS)
- Subjects
3D interaction ,Computer science ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Synthèse d'image et réalité virtuelle [Informatique] ,Haptics ,Virtual reality ,050105 experimental psychology ,Task (project management) ,03 medical and health sciences ,0302 clinical medicine ,[ INFO.INFO-HC ] Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Human–computer interaction ,0501 psychology and cognitive sciences ,Computer vision ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Sensory cue ,Haptic technology ,Mechanism (biology) ,business.industry ,05 social sciences ,3D user interaction ,[ INFO.INFO-GR ] Computer Science [cs]/Graphics [cs.GR] ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] ,Interface homme-machine [Informatique] ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
International audience; Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies have shown that the integration of visual and haptic cues follows maximum likelihood estimation (MLE). However, little effort has focused on the mechanism of integrating force and vibrotactile cues. We thus investigated MLE's suitability for integrating these cues. Within a VE, human users undertook a 3D interaction task: navigating a flying drone along a high-voltage transmission line for inspection. The users received individual force or vibrotactile cues, and their combinations in collocated and dislocated settings. The users' task performance, including completion time and accuracy, was assessed under each individual cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agrees with the applicability of tactile cues for sensing 3D surfaces, and sets a baseline for using MLE. Task performance under the collocated setting indicated a degree of combination of the individual cues. In contrast, performance under the dislocated setting was similar to that under the vibrotactile cue alone. These observations imply a possible role of MLE in integrating force and vibrotactile cues for 3D user interaction within VEs.
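MLE integration has a simple closed form: each cue is weighted by its inverse variance, and the fused variance is never larger than that of the best single cue, which is the signature such studies test for. A small sketch:

```python
import numpy as np

def mle_fuse(estimates, variances):
    """Minimum-variance (MLE) fusion of redundant sensory estimates.

    w_i = (1/s_i^2) / sum_j (1/s_j^2);  fused = sum_i w_i * x_i
    The fused variance 1 / sum_j (1/s_j^2) is at most the smallest
    single-cue variance.
    """
    inv_var = 1.0 / np.asarray(variances, float)
    weights = inv_var / inv_var.sum()
    fused = np.dot(weights, np.asarray(estimates, float))
    fused_var = 1.0 / inv_var.sum()
    return fused, fused_var

# Example: a precise cue (var 1.0) dominates a noisy one (var 4.0).
print(mle_fuse([10.0, 14.0], [1.0, 4.0]))   # -> (10.8, 0.8)
```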
- Published
- 2017
- Full Text
- View/download PDF
40. Angle and pressure-based volumetric picking on touchscreen devices
- Author
-
Ievgeniia Gutenko, Arie E. Kaufman, and Seyedkoosha Mirhosseini
- Subjects
0301 basic medicine ,3D interaction ,Computer science ,business.industry ,020207 software engineering ,Volume rendering ,02 engineering and technology ,Target acquisition ,Rendering (computer graphics) ,law.invention ,03 medical and health sciences ,030104 developmental biology ,Touchscreen ,law ,0202 electrical engineering, electronic engineering, information engineering ,Perpendicular ,Computer vision ,Interaction problem ,Artificial intelligence ,Stylus ,business - Abstract
Target picking in 3D volumetric data is a common interaction problem in a variety of domains such as medicine, engineering, and physics. When a user clicks on a 2D point on the screen rendering, a ray is cast through the volume perpendicular to the screen. The problem lies in determining the intended picking location, that is, the end point of the ray. We introduce picking for 3D volumetric data that utilizes the pressure and angle of a digital stylus on a touchscreen device, mapping both to the depth of the selector widget to aid target acquisition. We evaluate several selection methods: single-finger selection, stylus with pressure but without angle, and stylus with both angle and pressure. We decouple rotation of the volumetric data and control for two target sizes. We report significant benefits of the digital stylus picking methods and a relation between the methods and target sizes.
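The essence of the technique is a mapping from the stylus signals to a depth along the casting ray. The sketch below blends pressure and tilt with equal weights, an illustrative assumption; the paper maps both signals to depth but does not prescribe this exact formula:

```python
import numpy as np

def pick_point(norm_xy, pressure, tilt_deg, max_depth=1.0,
               cam_to_world=np.eye(4)):
    """Map stylus pressure and tilt to a pick depth along the view ray.

    norm_xy: click position in normalized image coordinates (on the z=1
    plane of the camera frame); pressure in [0, 1]; tilt_deg measured
    from the screen plane.
    """
    depth = max_depth * np.clip(0.5 * pressure + 0.5 * tilt_deg / 90.0, 0, 1)
    # Point at that depth along the perpendicular ray, in camera space.
    p_cam = np.array([norm_xy[0] * depth, norm_xy[1] * depth, depth, 1.0])
    return (cam_to_world @ p_cam)[:3]
```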
- Published
- 2017
- Full Text
- View/download PDF
41. Watchcasting: Freehand 3D interaction with off-the-shelf smartwatch
- Author
-
Liudmila Tahai, James R. Wallace, Krzysztof Pietroszek, and Edward Lank
- Subjects
3D interaction ,Computer science ,business.industry ,05 social sciences ,Wearable computer ,020207 software engineering ,02 engineering and technology ,Translation (geometry) ,Smartwatch ,0202 electrical engineering, electronic engineering, information engineering ,Off the shelf ,0501 psychology and cognitive sciences ,Computer vision ,Artificial intelligence ,business ,Rotation (mathematics) ,050107 human factors - Abstract
We describe a mid-air, watch-based 3D interaction technique called Watchcasting. The technique enables target selection and translation by mapping the z-coordinate position to forearm rotation. By replicating a large-display 3D selection study, we show that Watchcasting provides performance comparable to smartphone (Smartcasting) and electrical impedance myography (Myopoint) techniques. Our work demonstrates that an off-the-shelf smartwatch is a practical alternative to specialized devices for 3D interaction.
- Published
- 2017
- Full Text
- View/download PDF
42. A Kinect-based 3D hand-gesture interface for 3D databases
- Author
-
Sergio A. Velastin, Vasileios Argyriou, and Raul A. Herrera-Acuna
- Subjects
3D interaction ,Database ,Computer science ,business.industry ,Interface (computing) ,Interaction technique ,computer.software_genre ,Human-Computer Interaction ,3d space ,Human–computer interaction ,Signal Processing ,Computer vision ,Overall performance ,Artificial intelligence ,business ,computer ,Gesture - Abstract
The use of natural interfaces significantly improves aspects of human-computer interaction and, consequently, productivity and overall performance. In this paper we present a novel framework for interacting with data elements presented in a 3D space. The system provides two mechanisms for interaction, using 2D and 3D gestures, based on data provided by Kinect and on hand detection and gesture interpretation algorithms. Analysis of the proposed architecture indicates that 3D interaction with information is possible and provides advantages over 2D interaction on the same problem. Finally, two sets of experiments were performed to evaluate 2D and 3D interaction styles based on natural interfaces, focusing on traditional interaction with 3D databases.
- Published
- 2014
- Full Text
- View/download PDF
43. Real-time and robust hand tracking with a single depth camera
- Author
-
Ziyang Ma and Enhua Wu
- Subjects
3D interaction ,business.industry ,Computer science ,Particle swarm optimization ,Tracking system ,Degrees of freedom (mechanics) ,Tracking (particle physics) ,Computer Graphics and Computer-Aided Design ,Motion capture ,Field (computer science) ,Finger tracking ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software - Abstract
In this paper, we introduce a novel, real-time and robust hand tracking system capable of tracking articulated hand motion in full degrees of freedom (DOF) using a single depth camera. Unlike most previous systems, our system is able to initialize and recover from tracking loss automatically. This is achieved through an efficient two-stage k-nearest-neighbor database search, which is effective for searching a pre-rendered database of small hand depth images designed to provide good initial guesses for model-based tracking. We also propose a robust objective function and improve the Particle Swarm Optimization algorithm with a resampling-based strategy for model-based tracking, which provides continuous solutions in the full-DOF hand motion space more efficiently than previous methods. Our system runs at 40 fps on a GeForce GTX 580 GPU, and experimental results show that it outperforms state-of-the-art model-based hand tracking systems in both speed and accuracy. This work is of significance to various applications in the fields of human-computer interaction and virtual reality.
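The two-stage idea, a cheap coarse search that shortlists candidates followed by exact distances on the shortlist only, can be sketched as follows; the random-projection descriptor, stage sizes, and use of scikit-learn are illustrative assumptions, not the paper's design:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class TwoStageKNN:
    def __init__(self, depth_patches, poses, coarse_dim=64):
        self.poses = poses                                  # (N, dof) joint angles
        self.fine = depth_patches.reshape(len(poses), -1)   # full images
        # Stage 1 searches a cheap low-dimensional random projection.
        rng = np.random.default_rng(0)
        self.proj = rng.standard_normal((self.fine.shape[1], coarse_dim))
        self.coarse = self.fine @ self.proj
        self.index = NearestNeighbors(
            n_neighbors=min(100, len(poses))).fit(self.coarse)

    def query(self, depth_patch, k=5):
        q = depth_patch.ravel()
        # Stage 1: shortlist candidates in the projected space.
        _, idx = self.index.kneighbors((q @ self.proj)[None])
        cand = idx[0]
        # Stage 2: exact distances on full images, shortlist only.
        d = np.linalg.norm(self.fine[cand] - q, axis=1)
        best = cand[np.argsort(d)[:k]]
        return self.poses[best]   # initial guesses for model-based fitting
```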
- Published
- 2013
- Full Text
- View/download PDF
44. 50.2: Adding Depth-Sensing Capability to an OLED Display System Based on Coded Aperture Imaging
- Author
-
Chang-Yeong Kim, Changkyu Choi, Sungjoo Suh, and Du-sik Park
- Subjects
3D interaction ,Pixel ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Object (computer science) ,ENCODE ,Mura ,Computer Science::Computer Vision and Pattern Recognition ,Computer graphics (images) ,OLED ,Computer vision ,Artificial intelligence ,Coded aperture ,business ,Decoding methods ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In this paper, we propose a novel OLED display system with added depth-sensing capability. Coded apertures formed in OLED pixels with transmissive windows encode the radiation from a scene, and the scene is reconstructed with an appropriate decoding pattern. Further, depth information is estimated from multiple coded images. To prove the feasibility of the proposed system, a 19" imaging system with transmissive windows within its pixels was constructed and tested. Experimental results confirm that the proposed system can reconstruct the scene and accurately estimate the depth of an object in front of the display.
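Coded-aperture reconstruction amounts to a circular cross-correlation of the sensor image with a decoding pattern chosen so that mask and decoder correlate to a delta function (as in MURA designs). An FFT sketch under the assumption of an ideal, noise-free, single-depth scene:

```python
import numpy as np

def decode(coded_image, decoding_pattern):
    """Reconstruct the scene by circular cross-correlation via the FFT."""
    F_img = np.fft.fft2(coded_image)
    F_dec = np.fft.fft2(decoding_pattern, s=coded_image.shape)
    # Correlation in the spatial domain = product with the conjugate
    # spectrum in the frequency domain.
    return np.real(np.fft.ifft2(F_img * np.conj(F_dec)))
```

Depth estimation then exploits the fact that the mask's shadow scales with object distance: decoding with patterns scaled for several candidate depths and picking the sharpest reconstruction yields a depth estimate, which is consistent with the paper's use of multiple coded images.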
- Published
- 2013
- Full Text
- View/download PDF
45. Three-dimensional interaction and autostereoscopic display system using gesture recognition
- Author
-
Jun Liu, Xiao-Qing Xu, Lei Li, Jie Zhang, and Qiong-Hua Wang
- Subjects
3D interaction ,Computer science ,business.industry ,Feature vector ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Stereoscopy ,Graphics pipeline ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Visualization ,law.invention ,Gesture recognition ,law ,Computer graphics (images) ,Autostereoscopy ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Gesture - Abstract
We propose a 3D interaction and autostereoscopic display system based on gesture recognition, which lets users manipulate virtual objects in the scene directly with hand gestures and displays objects in stereoscopic 3D. The system consists of a gesture recognition and manipulation part and an autostereoscopic display as the interactive display part. To manipulate the 3D virtual scene, we propose a gesture recognition algorithm that matches spatio-temporal sequences of feature vectors against predefined gestures. To obtain smooth 3D visualization, we utilize the programmable graphics pipeline of the graphics processing unit to accelerate data processing. We developed a prototype system for 3D virtual exhibition; it reaches frame rates of 60 fps and operates efficiently with a mean recognition accuracy of 90%.
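Matching spatio-temporal feature-vector sequences against predefined gestures is commonly scored with dynamic time warping, which tolerates speed variation between performances; the sketch below is an illustrative stand-in rather than the authors' matcher:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW distance between two sequences of feature vectors (arrays)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(seq, templates):
    """Return the label of the nearest gesture template (dict: name -> seq)."""
    return min(templates, key=lambda name: dtw_distance(seq, templates[name]))
```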
- Published
- 2013
- Full Text
- View/download PDF
46. 3D selection with freehand gesture
- Author
-
Eamonn O'Neill and Gang Ren
- Subjects
3D interaction ,business.industry ,Computer science ,General Engineering ,Usability ,Interaction design ,Computer Graphics and Computer-Aided Design ,Human-Computer Interaction ,Human–computer interaction ,Selection (linguistics) ,Computer vision ,Artificial intelligence ,User interface ,Set (psychology) ,business ,3D computer graphics ,Gesture - Abstract
The use of 3D computer graphics is important in a very wide range of applications. However, user interaction with 3D applications is still challenging and often does not lend itself to established techniques that were developed primarily for 2D desktop interaction. Meanwhile, 3D user interfaces that rely on tracking hand-held devices or fiducial markers attached to the user are cumbersome or entirely inappropriate in some situations. These challenges may be addressed by refining and building on the increasing use of freehand gestural input, i.e. without markers or hand-held devices, to extend the fluidity and immediacy of today's 2D touch-based interactions. In this paper, we analyze the characteristics of freehand gestural 3D interaction and report a set of three related evaluation studies focused on the fundamental user interface task of object selection. We found that interaction designs requiring a single high-accuracy action are not appropriate for freehand gestural selection, whereas separating selection into several connected low-demand operations is a potential solution; that our Reach technique is accurate and potentially useful for option selection tasks with freehand gestures; and that strong directional effects influence the performance and usability of both 2D and 3D option selection. We propose guidelines for designers of freehand gestural 3D interaction based on our evaluation results.
- Published
- 2013
- Full Text
- View/download PDF
47. An Implementation of Table-top based Augmented Reality System for Motor Rehabilitation of the Paretic Hand
- Author
-
Ho Wan Kwak, Seokjun Lee, Soon Ki Jung, Yang-Soo Lee, Gye Wan Moon, Kil Houm Park, and Jae Hun Choi
- Subjects
Engineering ,3D interaction ,Rehabilitation ,business.industry ,medicine.medical_treatment ,Task (project management) ,Motor rehabilitation ,Rehabilitation exercise ,Position (vector) ,medicine ,Table (database) ,Computer vision ,Augmented reality ,Artificial intelligence ,business - Abstract
This paper presents an augmented reality (AR) based rehabilitation exercise system to enhance the motor function of the hands of paretic/hemi-paretic patients. Existing rehabilitation systems rely on mechanical apparatus; by taking a computer vision based approach, we instead aim at a rehabilitation system usable at home, with easy configuration and minimal equipment. The proposed method evaluates the interaction status of fingertip actions by using the positions and contact of fingertip markers. We obtain the 2D positions of the fingertip markers from a single camera and then recover their 3D positions in the calibrated camera space by using an ARToolKit marker. We adopt a simple geometric calculation that converts the 2D interest points into 3D interaction points for simple interactive tasks in the AR environment. Experimental results show that the proposed method is practical and readily applicable to personal AR interaction applications.
- Published
- 2013
- Full Text
- View/download PDF
48. An interactive and flexible information visualization method
- Author
-
Ping-Yu Hsu and Liang-Hong Wu
- Subjects
Focus (computing) ,Hierarchy ,3D interaction ,Information Systems and Management ,Point (typography) ,Computer science ,business.industry ,media_common.quotation_subject ,Control (management) ,Context (language use) ,Computer Science Applications ,Theoretical Computer Science ,World Wide Web ,Information visualization ,Artificial Intelligence ,Control and Systems Engineering ,Human–computer interaction ,business ,Function (engineering) ,Software ,media_common - Abstract
The generation of digital information is continuing at an accelerated pace, and the exploration of relationships within such enormous volumes of data is becoming increasingly difficult. Information visualization methods directly address the requirements of human perception to help users analyze complex relationships, and graphical hierarchy trees are employed to present the results. Conventional methods only provide a fixed degree of detail, despite the requirements of users to alter the manner in which data is viewed. Magic Eye View, a well-known information visualization tool, incorporates a three-dimensional interactive function allowing users to control the degree of detail with which they view data. Unfortunately, this system fails to provide a number of crucial focus+context features. To address these shortcomings, we propose a novel 3D information visualization method that enables users to control the focus point three-dimensionally and view the information at a preferred degree of detail. It also provides crucial focus+context features that help users to comprehend the data. Simulation results have shown significant advantages over existing approaches through the incorporation of these crucial features.
- Published
- 2013
- Full Text
- View/download PDF
49. 6-DOF computation and marker design for magnetic 3D dexterous motion-tracking system
- Author
-
Shuichiro Hashi, Yoshifumi Kitamura, Tsuyoshi Mori, Jiawei Huang, and Kazuki Takashima
- Subjects
3D interaction ,Computer science ,business.industry ,Computation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,Input device ,02 engineering and technology ,Kinematics ,Translation (geometry) ,Motion capture ,Set (abstract data type) ,020303 mechanical engineering & transports ,0203 mechanical engineering ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,business ,Rotation (mathematics) ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We describe an approach that derives reliable 6-DOF information, including the translation and rotation of a rigid marker in 3D space, from a set of individually insufficient 5-DOF measurements. As a practical example, we carefully designed and constructed a prototype and evaluated it in IM6D, our novel real-time magnetic 3D motion-tracking system that uses multiple identifiable, tiny, lightweight, wireless, and occlusion-free markers. The system rests on two key technologies: a 6-DOF computation algorithm and a 6-DOF marker design. The computation algorithm recovers complete 6-DOF information, translation and rotation in 3D space, for a single rigid marker consisting of three LC coils; we propose several possible implementation approaches, including geometric, matrix-based kinematic, and computational ones. In addition, we introduce a workflow to find an optimal marker design that achieves the best compromise between smallness and accuracy given the tracking principle, and we experimentally compare the performance of several typical marker prototypes with different layouts of LC coils. Finally, we present a further experimental result demonstrating the effectiveness of our solutions to these two problems.
- Published
- 2016
- Full Text
- View/download PDF
50. Accurate fingertip detection from binocular mask images
- Author
-
Guijin Wang, Hengkai Guo, and Xinghao Chen
- Subjects
Scheme (programming language) ,3D interaction ,business.industry ,Computer science ,Fingertip detection ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Convolutional neural network ,Image (mathematics) ,Depth map ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,computer ,computer.programming_language - Abstract
Accurate fingertip detection is important for hand-based human-computer interaction. Unlike prior methods based on depth images, in this paper we propose a novel scheme for accurate fingertip detection from binocular mask images without explicitly computing the depth map. To demonstrate the proposed scheme, we build a new hand dataset containing synthetic and real binocular images, and use a deep convolutional neural network (CNN) as a baseline method. The mask images are extracted from the binocular images and fed into the CNN to predict the 3D positions of the fingertips and palm center. Experiments show that this method achieves a mean error of 8.30 mm on synthetic data and 4.64 mm on real data, which is promising for accurate 3D interaction. The scheme runs at 130 fps on a CPU and is thus promising for real-time applications.
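A minimal regression CNN in this spirit takes the two stacked mask images and outputs 3D coordinates for the five fingertips and the palm center; the architecture and sizes below are illustrative assumptions in PyTorch, not the paper's network:

```python
import torch
import torch.nn as nn

class FingertipNet(nn.Module):
    def __init__(self, n_points=6):                 # 5 fingertips + palm
        super().__init__()
        self.n_points = n_points
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(),  # L/R masks
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(128, n_points * 3)    # (x, y, z) per point

    def forward(self, masks):                       # masks: (B, 2, H, W)
        f = self.features(masks).flatten(1)
        return self.head(f).view(-1, self.n_points, 3)

net = FingertipNet()
pred = net(torch.rand(4, 2, 96, 96))                # -> (4, 6, 3)
```

Feeding the left/right masks as two input channels lets the network infer depth from binocular disparity implicitly, which is the point of skipping the explicit depth map.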
- Published
- 2016
- Full Text
- View/download PDF