67 results for '"X-ray vision"'
Search Results
2. Adapting VST AR X-ray vision techniques to OST AR
- Author
-
Thomas J. Clarke, Wolfgang Mayer, Joanne E. Zucco, Brandon J. Matthews, and Ross T. Smith; International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Singapore 17
- Subjects
X-ray vision ,human computer interaction ,visualization techniques ,user studies ,augmented reality - Abstract
When merging physical and virtual objects with optical see-through augmented reality, little research has focused on x-ray vision visualizations that consider depth perception. We investigate partial occlusion visualizations when merging visual cues and real-world objects to explore the effect of the visualizations with a procedural placement task. We adapted existing x-ray visualization techniques designed for Video See-Through (VST) Augmented Reality to operate on Optical See-Through (OST) devices and investigated how these techniques affect accuracy in a placement task within arm's reach. We evaluate the visualizations' impact on accuracy when user movement is unrestricted and report the perceived usability and mental load of each visualization. Our findings indicate that although the type of x-ray visualization is important, the presence of other virtual objects in the scene appears to have a stronger impact on the users' accuracy in placing objects and on user experience. Refereed/Peer-reviewed
- Published
- 2023
3. Visualizing indoor layout for spatial learning using Mixed Reality-based X-ray vision.
- Author
-
Shengkai Wang, Bing Liu, Chenkun Zhang, Nianhua Liu, and Liqiu Meng
- Subjects
- *
CARTOGRAPHY , *MATHEMATICAL geography , *GEOGRAPHIC information systems , *CARTOGRAPHIC materials , *MIXED reality - Abstract
Humans rely on spatial knowledge about their living environments to perform daily spatial tasks, such as localization, orientation and navigation. When navigating in the real world, especially in indoor environments, the integration of local spatial memory at landmarks and decision points is significant for spatial learning (Stankiewicz & Kalia, 2007). However, it is more difficult for humans to correctly anchor in memory places that can only be seen from different viewpoints than places that can be directly seen from the same viewpoint (Ishikawa & Montello, 2006). In such cases, some fragments of spatial memory are omitted or misplaced, which causes spatial distortions in the mind and may lead to poor performance on spatial tasks, especially in areas with complex structures (Dickmann et al., 2013; Stevens & Coupe, 1978; Tversky, 1992). Such error-prone spatial knowledge may also lead to safety issues in emergency situations, such as during a fire or natural disaster, where people need to rely on their personal spatial knowledge and memory to navigate. Therefore, we need to seek suitable spatial visualizations that can provide aids and cues to reduce the aforementioned spatial distortions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Commentary: X-ray vision, a superpower against postoperative pain?
- Author
-
John P. Scott
- Subjects
Pulmonary and Respiratory Medicine ,medicine.medical_specialty ,business.industry ,Postoperative pain ,medicine ,Surgery ,Superpower ,business ,X-ray vision - Published
- 2021
5. An X-Ray Vision System for Situation Awareness in Action Space
- Author
-
Farzana Alam Khan, J. Edward Swan, Cindy L. Bethel, Nate Phillips, and Brady Kruse
- Subjects
Situation awareness ,Action (philosophy) ,Machine vision ,Computer science ,Human–computer interaction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Task analysis ,Augmented reality ,Context (language use) ,X-ray vision ,ComputingMethodologies_COMPUTERGRAPHICS ,Variety (cybernetics) - Abstract
Usable x-ray vision has long been a goal in augmented reality research and development. X-ray vision, or the ability to view and understand information presented through an opaque barrier, would be eminently useful across a variety of domains. Unfortunately, however, the effect of x-ray vision on situation awareness, an operator's understanding of a task or environment, has not been significantly studied. This is an important question; if x-ray vision does not increase situation awareness, of what use is it? Thus, we have developed an x-ray vision system in order to investigate situation awareness in the context of action space distances.
- Published
- 2021
- Full Text
- View/download PDF
6. Depth perception using x-ray visualizations
- Author
-
Thomas J. Clarke; IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), virtual event, 4-8 October 2021
- Subjects
x-ray vision ,Open research ,Human–computer interaction ,Computer science ,Problem solve ,Lack of knowledge ,Augmented reality ,Space (commercial competition) ,Depth perception ,Sensory cue ,X-ray vision ,augmented reality ,depth perception - Abstract
Augmented Reality's ability to create visual cues that extend reality enables many new ways to enhance how we work, solve problems and evaluate activities. Combining the information of the digital and physical worlds requires new understandings of how we perceive reality. The ability to look through physical objects without receiving conflicting depth cues (X-ray vision) is one challenge that is currently an open research question. The current literature describes several methods for improving depth perception, such as providing extra occlusion by utilizing X-ray vision effects [4], [11], [12], [18], [23]. Currently, there is a lack of knowledge about how and why some of these techniques work and about the different strengths they can offer. My research aims to develop a deeper understanding of X-ray vision effects and how they can and should be used.
- Published
- 2021
7. X-ray vision: a mental representation of the body of a young and of an elderly person
- Author
-
José Grillo Evangelista, Isabel Ritto, Bernardo Claro, Maria do Rosário Dias, Ana Cristina Neves, Ana Ferreira, Ana Lúcia Monteiro, and Letícia Naben
- Subjects
Biopsychosocial model ,media_common.quotation_subject ,Mental representation ,Psychology ,General Medicine ,human activities ,X-ray vision ,Young person ,Cognitive psychology ,Diversity (politics) ,media_common - Abstract
Introduction: The concepts of Young Person and Elderly Person have undergone a diverse evolution, permeable to many factors beyond the biopsychosocial conventions involved in the definition of the life cycle stages [1]. The evaluation of such subjectivity in several population fringes and in different stages of development may be a clinical key regarding the implementation of suitable strategies for the prevention of disease and health promotion [2,3]. The present study aims to understand how college students in a Fine Arts anatomy class mentally represent the internal morphology of the human body of the "Young Person" and of the "Elderly Person" [4]. Materials and methods: The present study was carried out based on the drawings made by 126 students who attended a Fine Arts higher education institution in the metropolitan area of Lisbon and the Tagus Valley. The students were asked to draw the interior of the body of a young person and of an elderly person [2]. In all, 252 drawings were evaluated, and the interpretation of the drawings was based on an analysis matrix designed for this purpose. A comparative analysis of these two different life cycle phases was carried out. Results: The data suggest that in most cases the age attributed to the young figure is on average much lower than the age attributed to the elderly figure. Simultaneously, there was an absence of the contour of the body (the skin), which is the largest organ of the human body. The drawings reveal an absence of sexual organs, suggesting a desexualization of the portrayed human bodies. The results suggest anatomical differences in the pictorial representation of the young person and of the elderly person, namely in the accentuation of curvatures of the vertebral column, a retruded lower jaw and muscle flaccidity. Interestingly, the elderly figure, when invested, is represented with supporting instruments (e.g., a cane) or associated with unhealthy behaviours (e.g., smoking). Discussion and conclusions: The present exploratory study demonstrated that although the body schema was the same for all individuals, the body image was singular, linked to each individual and to his own history, representing a synthesis of his idiosyncratic perceptions, experiences and particularities. The graphic representation of the mind and of symbolic thought with recourse to a pictorial instrument enabled the transposition of this internal dimension in the form of an imago, endowed with meanings associated with personal and bodily identity.
- Published
- 2019
8. Use of Random Dot Patterns in Achieving X-Ray Vision for Near-Field Applications of Stereoscopic Video-Based Augmented Reality Displays
- Author
-
Ryad Chellali, Paul Milgram, Sanaz Ghasemi, and Mai Otsuki
- Subjects
business.industry ,Computer science ,media_common.quotation_subject ,05 social sciences ,020207 software engineering ,Near and far field ,02 engineering and technology ,050105 experimental psychology ,Human-Computer Interaction ,Presentation ,Control and Systems Engineering ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Stereoscopic video ,Augmented reality ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,X-ray vision ,Software ,media_common - Abstract
This article addresses some of the challenges involved with creating a stereoscopic video augmented reality “X-ray vision” display for near-field applications, which enables presentation of computer-generated objects as if they lie behind a real object surface, while maintaining the ability to effectively perceive information that might be present on that surface. To achieve this, we propose a method in which patterns consisting of randomly distributed dots are overlaid onto the real surface prior to the rendering of a virtual object behind the real surface using stereoscopic disparity. It was hypothesized that, even though the virtual object is occluding the real object’s surface, the addition of the random dot patterns should increase the strength of the binocular disparity cue, resulting in improved performance in localizing the virtual object behind the surface. In Phase I of the experiment reported here, the feasibility of the display principle was confirmed, and concurrently the effects of relative dot size and dot density on the presence and sensitivity of any perceptual bias in localizing the virtual object within the vicinity of a flat, real surface with a periodic texture were assessed. In Phase II, the effect of relative dot size and dot density on perceiving the impression of transparency of the same real surface while preserving detection of surface information was investigated. Results revealed an advantage of the proposed method in comparison with the “No Pattern” condition for the transparency ratings. Surface information preservation was also shown to decrease with increasing dot density and relative dot size.
- Published
- 2017
- Full Text
- View/download PDF
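The random-dot overlay described in the record above lends itself to a small illustration. The sketch below is only an assumption-laden toy (the `add_random_dots` function, its `density` and `dot_radius` parameters, and all numbers are invented for illustration, not taken from the paper): it stamps dots of a chosen size and density onto a grayscale image of the real surface, the two quantities the study manipulated, before the virtual object is rendered behind the surface using stereoscopic disparity.

```python
import numpy as np

def add_random_dots(surface, density=0.02, dot_radius=2, rng=None):
    """Return a copy of `surface` (H x W, grayscale in [0, 1]) with white dots
    covering roughly `density` of the area, each of radius `dot_radius` pixels."""
    rng = np.random.default_rng(rng)
    h, w = surface.shape
    out = surface.copy()
    n_dots = int(density * h * w / max(np.pi * dot_radius**2, 1.0))
    ys = rng.integers(0, h, n_dots)
    xs = rng.integers(0, w, n_dots)
    yy, xx = np.mgrid[0:h, 0:w]
    for cy, cx in zip(ys, xs):
        out[(yy - cy) ** 2 + (xx - cx) ** 2 <= dot_radius**2] = 1.0
    return out

# The dotted surface would be shown to both eyes at the surface's disparity,
# strengthening the binocular cue against which the virtual object is placed behind it.
wall = np.full((240, 320), 0.5)
dotted = add_random_dots(wall, density=0.02, dot_radius=2, rng=0)
print(dotted.shape, float(dotted.max()))
```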
9. X-ray vision: the accuracy and repeatability of a technology that allows clinicians to see spinal X-rays superimposed on a person's back
- Author
-
Gregory N. Kawchuk, Kenton David Hamaluik, Jacob F. Aaskov, Pierre Boulanger, and Jan Hartvigsen
- Subjects
Radiology and Medical Imaging ,Computer science ,Heads up display ,lcsh:Medicine ,Context (language use) ,General Biochemistry, Genetics and Molecular Biology ,law.invention ,X-ray ,03 medical and health sciences ,0302 clinical medicine ,Lumbar ,law ,medicine ,Computer vision ,Projection (set theory) ,Mixed reality ,030222 orthopedics ,Head-up display ,business.industry ,General Neuroscience ,lcsh:R ,General Medicine ,Repeatability ,Science and Medical Education ,Kinesiology ,Spine ,Vertebra ,Human-Computer Interaction ,medicine.anatomical_structure ,Orthopedics ,Artificial intelligence ,General Agricultural and Biological Sciences ,business ,Focus (optics) ,X-ray vision ,030217 neurology & neurosurgery - Abstract
Objective: Since the discovery of ionizing radiation, clinicians have evaluated X-ray images separately from the patient. The objective of this study was to investigate the accuracy and repeatability of a new technology which seeks to resolve this historic limitation by projecting anatomically correct X-ray images on to a person's skin. Methods: A total of 13 participants enrolled in the study, each having a pre-existing anteroposterior lumbar X-ray. Each participant's image was uploaded into the HoloLens mixed reality system which, when worn, allowed a single examiner to view a participant's own X-ray superimposed on the participant's back. The projected image was topographically corrected using depth information obtained by the HoloLens system, then aligned via existing anatomic landmarks. Using this superimposed image, vertebral levels were identified and validated against spinous process locations obtained by ultrasound. This process was repeated 1–5 days later. The projection of each vertebra was deemed to be "on-target" if it fell within the known morphological dimensions of the spinous process for that specific vertebral level. Results: The projection system created on-target projections with respect to individual vertebral levels 73% of the time, with no significant difference seen between testing sessions. The average repeatability for all vertebral levels between testing sessions was 77%. Conclusion: These data suggest that projecting X-rays directly on to the skin is feasible for identifying underlying anatomy and, as such, has potential to place radiological evaluation within the patient context. Future opportunities to improve this procedure will focus on mitigating potential sources of error.
- Published
- 2019
- Full Text
- View/download PDF
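The "on-target" criterion used in the record above can be made concrete with a small sketch. This is only an illustration under assumptions: the `SPINOUS_HEIGHT_MM` values below are invented placeholders (not the study's data), and `on_target`/`accuracy` are hypothetical helpers, not the authors' code.

```python
# A minimal sketch of an "on-target" accuracy computation: a projected
# vertebral level counts as a hit if it falls within the cranio-caudal extent
# of that vertebra's spinous process (all numbers here are illustrative).
SPINOUS_HEIGHT_MM = {"L1": 25.0, "L2": 25.0, "L3": 26.0, "L4": 24.0, "L5": 22.0}  # assumed

def on_target(level, projected_y_mm, ultrasound_y_mm):
    """True if the projected level lies within the spinous-process extent."""
    return abs(projected_y_mm - ultrasound_y_mm) <= SPINOUS_HEIGHT_MM[level] / 2.0

def accuracy(judgments):
    """judgments: iterable of (level, projected_y_mm, ultrasound_y_mm) tuples."""
    hits = [on_target(lvl, p, t) for lvl, p, t in judgments]
    return sum(hits) / len(hits)

print(accuracy([("L3", 102.0, 110.0), ("L4", 140.0, 160.0)]))  # -> 0.5
```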
10. Precision study on augmented reality-based visual guidance for facility management tasks
- Author
-
Fei Liu and Stefan Seipel
- Abstract
One unique capability of augmented reality (AR) is to visualize hidden objects as a virtual overlay on real occluding objects. This "X-ray vision" visualization metaphor has proved to be invaluable for operation and maintenance tasks such as locating utilities behind a wall. Locating virtual occluded objects requires users to estimate the closest projected positions of the virtual objects upon their real occluders, which is generally under the influence of a parallax effect. In this paper we studied the task of locating virtual pipes behind a real wall with "X-ray vision", with the goal of establishing relationships between task performance and the spatial factors causing parallax through different forms of visual augmentation. We introduced and validated a laser-based target designation method which is generally useful for AR-based interaction with augmented objects beyond arm's reach. The main findings include that people can mentally compensate for the parallax error when extrapolating positions of virtual objects on the real surface, given traditional 3D depth cues for spatial understanding. This capability is, however, unreliable, especially as the viewing offset between the user and the virtual objects and the distance between the virtual objects and their occluders increase. Experimental results also show that positioning performance is greatly increased and unaffected by those factors if the AR support provides visual guides indicating the closest projected positions of virtual objects on the surfaces of their real occluders.
- Published
- 2018
- Full Text
- View/download PDF
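The parallax effect at the heart of the record above has a simple geometric core, sketched below under assumptions (the wall is taken as the plane z = 0, the pipe as a single point behind it, and `parallax_error` is an invented helper, not the paper's method): the AR overlay of a hidden object is seen where the eye-to-object ray crosses the wall, while the "correct" spot is the object's closest (orthogonal) projection onto the wall, and the gap between the two grows with viewing offset and with the object's depth behind the wall.

```python
import numpy as np

def parallax_error(eye, hidden):
    """Distance on the wall (plane z = 0) between where the overlay of a hidden
    point appears from a given eye position and the point's closest projection."""
    eye, hidden = np.asarray(eye, float), np.asarray(hidden, float)
    t = eye[2] / (eye[2] - hidden[2])                 # eye-to-point ray meets z = 0
    on_wall = eye + t * (hidden - eye)                # where the overlay is seen
    closest = np.array([hidden[0], hidden[1], 0.0])   # orthogonal projection
    return np.linalg.norm(on_wall - closest)

# Illustrative numbers: error grows as the eye moves laterally away from the pipe.
for offset in (0.0, 0.5, 1.0):                        # metres of lateral eye offset
    print(offset, round(parallax_error([offset, 1.6, 2.0], [0.0, 1.0, -0.3]), 3))
```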
11. Real-world occlusion in optical see-through AR displays
- Author
-
Giovanni Avveduto, Franco Tecchia, and Henry Fuchs
- Subjects
Future studies ,Computer science ,media_common.quotation_subject ,Headset ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Occlusion mask ,01 natural sciences ,010309 optics ,Optical see-through displays ,X-ray vision ,Software ,Shutter ,Perception ,0103 physical sciences ,Occlusion ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,ComputingMethodologies_COMPUTERGRAPHICS ,media_common ,Ar system ,business.industry ,020207 software engineering ,Artificial intelligence ,business - Abstract
In this work we describe a system composed of an optical see-through AR headset (a Microsoft HoloLens), stereo projectors and shutter glasses. The projectors are used to add to the device the capability of occluding real-world surfaces, making virtual objects appear more solid and less transparent. A framework was developed to allow us to evaluate the importance of occlusion capabilities in optical see-through AR headsets. We designed and conducted two experiments to test whether making virtual elements solid would improve the performance of certain tasks with an AR system. Results suggest that making virtual objects appear more solid by projecting an occlusion mask onto the real world is useful in some situations. Using an occlusion mask, it is also possible to eliminate ambiguities that can arise when enhancing the user's perception in ways that are not possible in real life, such as when "x-ray vision" is enabled. In this case we wanted to investigate whether using an occlusion mask to eliminate perceptual conflicts would hurt the user's performance in some AR applications. The framework that allowed us to conduct our experiments is made freely available to anyone interested in conducting future studies.
- Published
- 2017
- Full Text
- View/download PDF
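The occlusion-mask idea in the record above can be sketched very compactly. The code below is only an assumption-laden illustration (the function name and the simple inversion are invented, not the authors' implementation): the projectors illuminate the real surfaces everywhere except the region covered by the virtual object, so the additive see-through display is not washed out there and the hologram looks solid. A real system would additionally warp the silhouette into each projector's frame using projector-headset calibration and compensate for projector brightness; those steps are omitted here.

```python
import numpy as np

def occlusion_mask_image(virtual_silhouette, illumination=1.0):
    """virtual_silhouette: H x W array in {0, 1}; returns the projector image,
    dark behind the hologram and lit elsewhere."""
    return illumination * (1.0 - virtual_silhouette.astype(float))

sil = np.zeros((120, 160))
sil[40:80, 60:100] = 1.0                 # region occupied by the virtual object
proj = occlusion_mask_image(sil)
print(proj[60, 80], proj[10, 10])        # 0.0 behind the hologram, 1.0 elsewhere
```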
12. SP-0559: For the motion: Until we finally perfect x-ray vision, we need patient specific QA
- Author
-
L. McDermott
- Subjects
Oncology ,Computer science ,business.industry ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Hematology ,Artificial intelligence ,Patient specific ,business ,X-ray vision ,Motion (physics) - Published
- 2018
- Full Text
- View/download PDF
13. A Blending Technique for Enhanced Depth Perception in Medical X-Ray Vision Applications.
- Author
-
Westwood, James D., Haluck, Randy S., Hoffman, Helene M., Mogel, Greg T., Phillips, Roger, Robb, Richard A., Vosburgh, Kirby G., Hernell, Frida, Ynnerman, Anders, and Smedby, Örjan
- Abstract
Depth perception is a common problem for x-ray vision in augmented reality applications since the goal is to visualize occluded and embedded objects. In this paper we present an x-ray vision blending method for neurosurgical applications that intensifies the interposition depth cue in order to achieve enhanced depth perception. The proposed technique emphasizes important structures, which provides the user with an improved depth context. [ABSTRACT FROM AUTHOR]
- Published
- 2007
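The interposition-intensifying blend described in the abstract above can be illustrated with a toy compositing rule. This is a sketch under assumptions, not the paper's method: "important structures" are approximated here by high image-gradient regions of the occluder, which are kept nearly opaque so they still read as being in front of the embedded object.

```python
import numpy as np

def xray_blend(occluder, hidden, base_alpha=0.6):
    """Blend a hidden layer into the occluder, but keep strong occluder edges
    mostly opaque to reinforce the interposition depth cue."""
    gy, gx = np.gradient(occluder.astype(float))
    grad = np.hypot(gx, gy)
    importance = grad / (grad.max() + 1e-6)       # ~1 at salient occluder edges
    alpha = base_alpha * (1.0 - importance)       # visibility of the hidden layer
    return alpha * hidden + (1.0 - alpha) * occluder

occluder = np.full((64, 64), 0.5); occluder[:, 32:] = 0.8   # toy surface with one edge
hidden = np.full((64, 64), 0.1)                             # toy embedded structure
blended = xray_blend(occluder, hidden)
print(round(float(blended[10, 5]), 2), round(float(blended[10, 32]), 2))
```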
14. Design of X-Ray Vision Automatic Testing System
- Author
-
Rui Hong Li and Yue Ping Han
- Subjects
Identification (information) ,Computer science ,Machine vision ,business.industry ,3D single-object recognition ,General Engineering ,Sampling (statistics) ,Computer vision ,Artificial intelligence ,Object (computer science) ,business ,Automatic testing ,X-ray vision - Abstract
Spatial sampling criteria and a fast recognition theory based on a single-view imaging system are proposed for automatically testing the assembly structures inside products in industrial applications. There is a maximum rotary step for an object within which the smallest structural size to be tested can still be ascertained. By rotating the object by this step and imaging it, and repeating until a 360° turn is completed, an image sequence is obtained that includes the full structural information for recognition. It is verified that objects can be recognized at a single orientation or a limited set of orientations by analyzing the correlations among the image sequence. The theory is applied to an online automated X-ray vision system. Experiments show that the average identification takes less than 5 s with a 4.5% rate of wrong recognition.
- Published
- 2014
- Full Text
- View/download PDF
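The sampling criterion in the record above admits a small worked example under an assumed interpretation (the interpretation and all numbers below are mine, not the paper's): if the smallest structure to be resolved has size s_min and lies at most r_max from the rotation axis, then a rotary step of at most s_min / r_max radians keeps the structure's motion between successive images below one structure size.

```python
import math

def max_rotary_step_deg(s_min_mm, r_max_mm):
    """Largest rotation step (degrees) that moves a feature of size s_min,
    at radius r_max, by no more than its own size between images."""
    return math.degrees(s_min_mm / r_max_mm)

def num_views(s_min_mm, r_max_mm):
    """Number of images needed for a full 360-degree turn at that step."""
    return math.ceil(360.0 / max_rotary_step_deg(s_min_mm, r_max_mm))

# e.g. a 0.5 mm feature at up to 40 mm from the axis (illustrative numbers):
print(round(max_rotary_step_deg(0.5, 40.0), 3), num_views(0.5, 40.0))
```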
15. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology
- Author
-
Hameedur Rahman, Zainal Rasyid Mahayuddin, Haslina Arshad, and Rozi Mahmud
- Subjects
Engineering ,Breast cancer ,business.industry ,Computer graphics (images) ,medicine ,Mobile technology ,Augmented reality ,medicine.disease ,business ,X-ray vision ,Visualization - Published
- 2017
- Full Text
- View/download PDF
16. Use of Random Dot Pattern for Achieving X-Ray Vision with Stereoscopic Augmented Reality Displays
- Author
-
Sanaz Ghasemi, Paul Milgram, and Mai Otsuki
- Subjects
Computer science ,business.industry ,020207 software engineering ,Stereoscopy ,02 engineering and technology ,Surface finish ,Overlay ,Rendering (computer graphics) ,law.invention ,Virtual image ,law ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Augmented reality ,Artificial intelligence ,Percept ,business ,X-ray vision - Abstract
This paper presents a possible solution to some of the challenges involved in creating a stereoscopic augmented reality 'X-ray vision' display, which enables presentation of computer-generated (virtual) objects as if they lie behind a real object surface, while maintaining the ability to effectively perceive any information that might be present on that surface. The method involves overlaying random dot patterns onto a real object surface prior to rendering a virtual object. Results from preliminary experiments have shown that the use of random dot patterns can be effective in contributing to the percept of transparency for the case of flat real surfaces with subtle textures. This suggests that the addition of such patterns may also help in perceiving the correct depth order of virtual objects in such images. Moreover, experimental results indicate that, by controlling the relative dot size and dot density of the patterns, it should be possible also to retain sufficient information about the real surface. Future research should be aimed at the feasibility and effectiveness of applying this method to more realistic AR conditions.
- Published
- 2016
- Full Text
- View/download PDF
17. An extraordinary view of the universe: The use of X-ray vision in space exploration
- Author
-
Aneta Siemiginowska
- Subjects
010302 applied physics ,Physics ,Multidisciplinary ,COSMIC cancer database ,Astrophysics::High Energy Astrophysical Phenomena ,Astrophysics::Instrumentation and Methods for Astrophysics ,Astronomy ,Astrophysics::Cosmology and Extragalactic Astrophysics ,02 engineering and technology ,Astrophysics ,021001 nanoscience & nanotechnology ,01 natural sciences ,History and Philosophy of Science ,Observatory ,0103 physical sciences ,Space Science ,0210 nano-technology ,X-ray vision - Abstract
X-ray emission from cosmic sources indicates that these sources are heated to temperatures exceeding a million degrees or that they contain highly energetic particles. Recent X-ray telescopes, such as the Chandra X-ray Observatory and XMM-Newton, have observed thousands of cosmic X-ray sources. These observations have greatly impacted our understanding of the physics governing the evolution of structures across the universe. Here, I review and highlight some of these important results.
- Published
- 2016
- Full Text
- View/download PDF
18. Wednesday, September 26, 2018 7:35 AM–9:00 AM ePosters
- Author
-
Pierre Boulanger, Greg Kawchuk, Jacob F. Aaskov, and Kenton David Hamaluik
- Subjects
medicine.diagnostic_test ,business.industry ,Radiography ,Distortion (optics) ,Magnification ,Context (language use) ,Palpation ,Lumbar ,medicine ,Optometry ,Surgery ,Orthopedics and Sports Medicine ,Neurology (clinical) ,Projection (set theory) ,business ,X-ray vision - Abstract
BACKGROUND CONTEXT Clinicians of all disciplines learn to visualize internal anatomy to enhance diagnostic accuracy and reduce iatrogenic injury. Unfortunately, the performance of this skill is poor. To address this issue, our team developed technology that superimposes diagnostic images on to a patient's skin while minimizing distortion caused by patient topography. The result is that clinicians can immediately visualize their patient's underlying anatomy while freeing their hands to conduct clinical procedures such as palpation, injections or even surgery. PURPOSE In this project, we assess the accuracy and reliability of a holographic projection system with respect to representing underlying anatomy. STUDY DESIGN/SETTING Experimental system compared to a criterion reference. PATIENT SAMPLE Ten (10) males (76.9%, mean age 42, age range 31–58) and 3 females (23.1%, mean age 41.6, age range 27–68) were enrolled in the study. OUTCOME MEASURES The projected location of any spinous process was judged to be accurate or "on target" if its location was within known spinous dimensions for that specific level. METHODS Thirteen participants were enrolled in the study, each having a pre-existing anteroposterior lumbar radiograph. Each participant's image was uploaded into a goggle system which, when worn, allowed the operator to view the radiograph superimposed on the participant's back. The projected image was topographically corrected using depth information obtained by the goggles, then aligned via existing anatomic landmarks. Using this superimposed image, vertebral levels were identified and validated against spinous process locations obtained by ultrasound. This process was repeated 1-5 days later. The projection of each vertebra was deemed to be "on-target" if it fell within the known dimensions of the spinous process for that specific vertebral level. RESULTS The projection system created on-target projections with respect to individual vertebral levels 74% of the time, with no significant difference seen between testing sessions. The average agreement for all vertebral levels between testing sessions was 75%. CONCLUSIONS The projection system tested here shows promise as a unique technology to assist clinicians in locating internal anatomy. While there are presently many sources of error, including those from the device (resolution), viewer (inaccurate marking and placement), X-ray (magnification) and patient (when using generic images), we anticipate these errors can be mitigated, if not corrected, in future studies to provide an accurate and reliable way for clinicians to visualize patient-specific anatomy while allowing their hands to move freely.
- Published
- 2018
- Full Text
- View/download PDF
19. DOCTOR ALAN HART
- Author
-
Emile Devereaux
- Subjects
Gender Studies ,Emerging technologies ,Reading (process) ,media_common.quotation_subject ,Media studies ,Identity (social science) ,Gender studies ,Sociology ,Archival research ,Popular science ,X-ray vision ,media_common - Abstract
Archival data, when read with ultraviolet technologies, reveals traces like the shadows left on X-ray images. This interactive and technological form of reading disrupts conventional categorizations of popular science, literature, authorship, gender and identity. Alan Hart provides an early example of how our contemporary lives are mediated by medicalized ways of seeing. New technologies (like the not-so-new X-ray technology) mark us as data, alter our relationships between public and private, ownership and participation, and assign us roles and identities as data-based subjects alongside voices of marketing.
- Published
- 2010
- Full Text
- View/download PDF
20. Redefining the Nature of Business
- Author
-
Robert Scheinfeld
- Subjects
Engineering ,business.industry ,Management science ,Business game ,Public relations ,business ,X-ray vision - Published
- 2009
- Full Text
- View/download PDF
21. [POSTER] Vergence-Based AR X-ray Vision
- Author
-
Yuki Kitajima, Kosuke Sato, and Sei Ikeda
- Subjects
Computer science ,business.industry ,GRASP ,Augmented reality ,Computer vision ,Vergence ,Artificial intelligence ,business ,Gaze ,X-ray vision ,Visualization ,Task (project management) ,Gesture - Abstract
The ideal AR x-ray vision should enable users to clearly observe and grasp not only occludees, but also occluders. We propose a novel selective visualization method for both occludee and occluder layers with dynamic opacity depending on the user's gaze depth. Using the gaze depth as a trigger to select the layers has an essential advantage over using other gestures or spoken commands, in the sense of avoiding collision between the user's intentional commands and unintentional actions. Our experiment with a visual paired-comparison task shows that our method achieved a 20% higher success rate and reduced the average task completion time by 30% compared with a non-selective method using constant half transparency.
- Published
- 2015
- Full Text
- View/download PDF
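The gaze-depth-driven opacity described in the record above can be sketched as a simple mapping. The formulation below is an assumption (a smoothstep ramp between the two layer depths, with invented function names and numbers), not the authors' exact mapping: converging on the far layer fades the near occluder out, and vice versa.

```python
def smoothstep(edge0, edge1, x):
    """Standard smooth interpolation from 0 at edge0 to 1 at edge1."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def layer_opacities(gaze_depth_m, occluder_depth_m, occludee_depth_m):
    """Returns (occluder_alpha, occludee_alpha), each in [0, 1]."""
    w = smoothstep(occluder_depth_m, occludee_depth_m, gaze_depth_m)
    return 1.0 - w, w

# Illustrative gaze depths between a 0.8 m occluder and a 1.8 m occludee:
for g in (0.5, 1.0, 1.5, 2.0):
    print(g, layer_opacities(g, occluder_depth_m=0.8, occludee_depth_m=1.8))
```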
22. When Superman Used X-Ray Vision, Did He Have a Search Warrant? Emerging Law Enforcement Technologies and the Transformation of Urban Space
- Author
-
Samuel Nunn
- Subjects
Urban Studies ,Transformation (function) ,Urban technology ,Search warrant ,Political science ,Law enforcement ,Superman ,Computer security ,computer.software_genre ,X-ray vision ,Urban space ,computer - Abstract
(2002). When Superman Used X-Ray Vision, Did He Have a Search Warrant? Emerging Law Enforcement Technologies and the Transformation of Urban Space. Journal of Urban Technology: Vol. 9, No. 3, pp. 69-87.
- Published
- 2002
- Full Text
- View/download PDF
23. The Magic of X-ray Vision
- Author
-
Gretchen A. Case
- Subjects
Health (social science) ,Magic (illusion) ,ComputingMilieux_THECOMPUTINGPROFESSION ,business.industry ,Health Policy ,Art history ,GeneralLiterature_MISCELLANEOUS ,ComputingMilieux_GENERAL ,Issues, ethics and legal aspects ,X ray image ,Medicine ,business ,GeneralLiterature_REFERENCE(e.g.,dictionaries,encyclopedias,glossaries) ,Humanities ,X-ray vision - Abstract
An exploration of the power of images using Thomas Mann's Magic Mountain. Virtual Mentor is a monthly bioethics journal published by the American Medical Association.
- Published
- 2007
- Full Text
- View/download PDF
24. RFID is x-ray vision
- Author
-
Frank Stajano
- Subjects
General Computer Science ,business.industry ,Computer science ,Internet privacy ,business ,Computer security ,computer.software_genre ,computer ,X-ray vision - Abstract
In a world saturated with RFID tags, protecting the privacy of individuals is technically difficult. Without a proper alignment of interests it may be impossible.
- Published
- 2005
- Full Text
- View/download PDF
25. X-ray vision: how audit can help you reveal the quality of your radiography
- Author
-
Andrew Toy and Kenneth A Eaton
- Subjects
medicine.medical_specialty ,Computer science ,business.industry ,Radiography ,media_common.quotation_subject ,General Medicine ,Audit ,United Kingdom ,Dental Audit ,medicine ,Radiography, Dental ,Humans ,Quality (business) ,Medical physics ,business ,X-ray vision ,media_common - Published
- 2013
26. Effectiveness of occluded object representations at displaying ordinal depth information in augmented reality
- Author
-
Mark A. Livingston and Kenneth R. Moser
- Subjects
Computer science ,business.industry ,Virtual reality ,Object (computer science) ,Task (project management) ,Computer graphics ,Virtual image ,Computer vision ,Augmented reality ,Icon ,Artificial intelligence ,business ,computer ,X-ray vision ,computer.programming_language - Abstract
An experiment was conducted to investigate the utility of a number of iconographic styles in relaying ordinal depth information at vista space distances of more than 1900m. The experiment consisted of two tasks: distance judgments with respect to discrete zones, and ordinal depth determination in the presence of icon overlap. The virtual object representations were chosen based on their effectiveness, as demonstrated in previous studies. The first task is an adaptation of a previous study investigating distance judgments of occluded objects at medium field distances. We found that only one of the icon styles fared better than guessing. The second is a novel task important to situation awareness and tested two specific cases: ordinal depth of icons with 50% and 100% overlap. We found that the case of full overlap made the task effectively impossible with all icon styles, whereas in the case of partial overlap, the Ground Plane had a clear advantage.
- Published
- 2013
- Full Text
- View/download PDF
27. Pursuit of 'X-ray vision' for augmented reality
- Author
-
Bruce H. Thomas, Christian Sandor, Mark A. Livingston, and Arindam Dey
- Subjects
Engineering ,business.industry ,media_common.quotation_subject ,Virtual reality ,Computer-mediated reality ,Mixed reality ,augmented reality ,Visualization ,mobile technology ,Human–computer interaction ,Perception ,Computer graphics (images) ,Human visual system model ,Augmented reality ,business ,X-ray vision ,media_common - Abstract
The ability to visualize occluded objects or people offers tremendous potential to users of augmented reality (AR). This is especially true in mobile applications for urban environments, in which navigation and other operations are hindered by the urban infrastructure. This “X-ray vision” feature has intrigued and challenged AR system designers for many years, with only modest progress in demonstrating a useful and usable capability. The most obvious challenge is to the human visual system, which is being asked to interpret a very unnatural perceptual metaphor. We review the perceptual background to understand how the visual system infers depth and how these assumptions are or are not met by augmented reality displays. In response to these challenges, several visualization metaphors have been proposed; we survey these in light of the perceptual background. Because augmented reality systems are user-centered, it is important to evaluate how well these visualization metaphors enable users to perform tasks that benefit from X-ray vision. We summarize studies reported in the literature. Drawing upon these analyses, we offer suggestions for future research on this tantalizing capability.
- Published
- 2013
28. Superman-like X-ray vision: Towards brain-computer interfaces for medical augmented reality
- Author
-
Ekkehard Euler, Tobias Blum, Nassir Navab, and Ralf Stauder
- Subjects
Multimedia ,business.industry ,Computer science ,Interface (computing) ,computer.software_genre ,Visualization ,Data visualization ,Human–computer interaction ,Video tracking ,Medical imaging ,Augmented reality ,business ,X-ray vision ,computer ,Brain–computer interface - Abstract
This paper describes first steps towards a Superman-like X-ray vision in which a brain-computer interface (BCI) device and a gaze tracker are used to allow the user to control the augmented reality (AR) visualization. A BCI device is integrated into two medical AR systems. To assess the potential of this technology, initial feedback from medical doctors was gathered. Although this pilot study used only electromyographic signals rather than the full range of available signals, the medical doctors provided very positive feedback on the use of BCI for medical AR.
- Published
- 2012
- Full Text
- View/download PDF
29. Toward a practical wall see-through system for drivers: How simple can it be?
- Author
-
Hiroshi Yasuda and Yoshihiro Ohama
- Subjects
genetic structures ,business.industry ,Computer science ,Active safety ,Advanced driver assistance systems ,Collision ,Visualization ,Data visualization ,Human–computer interaction ,Computer vision ,Meaning (existential) ,Artificial intelligence ,business ,X-ray vision ,Circle of a sphere - Abstract
This study specifically examines wall see-through visualization for drivers at blind corners to prevent crossing collisions. We believe that realizing the desired effect with the simplest visualization is a key to building practical systems, although previous studies mainly targeted rich visualization, as if the wall were actually transparent. We compared several visualization levels using qualitative and quantitative measures based on the driver's collision-estimation performance and the assignment of meaning to the visual stimuli. The results revealed that displaying only the direction of the obscured vehicle with a small circle is sufficient for collision estimation, although it was perceived as less informative. We also obtained a preliminary result indicating that meaning-assignment performance is significantly lower in the peripheral region of the driver's view. Although both collision estimation and meaning assignment are necessary for building an effective system, these results clarify that future studies must specifically examine the meaning-assignment performance of the stimuli.
- Published
- 2012
- Full Text
- View/download PDF
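The "small circle showing only the direction of the obscured vehicle" in the record above reduces to a bearing computation. The sketch below is an illustration under assumed geometry (a flat 2D road plane, heading measured from the +y axis, and an invented `bearing_deg` helper), not the authors' system.

```python
import math

def bearing_deg(driver_xy, driver_heading_deg, vehicle_xy):
    """Bearing of the occluded vehicle relative to the driver's heading,
    in degrees; 0 = straight ahead, negative = to the left."""
    dx = vehicle_xy[0] - driver_xy[0]
    dy = vehicle_xy[1] - driver_xy[1]
    world_bearing = math.degrees(math.atan2(dx, dy))   # 0 deg along +y
    return (world_bearing - driver_heading_deg + 180.0) % 360.0 - 180.0

# Toy numbers: a vehicle behind a wall ahead and to the left of the driver.
print(round(bearing_deg((0.0, 0.0), 0.0, (-12.0, 20.0)), 1))   # about -31 degrees
```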
30. Depth judgments by reaching and matching in near-field augmented reality
- Author
-
Gurjot Singh, J. Adam Jones, Stephen R. Ellis, and J. Edward Swan
- Subjects
Matching (statistics) ,business.industry ,Computer science ,media_common.quotation_subject ,Task (project management) ,Visualization ,Computer graphics ,Computer graphics (images) ,Perception ,Augmented reality ,Computer vision ,Artificial intelligence ,Depth perception ,business ,X-ray vision ,media_common ,Gesture - Abstract
In this abstract we describe an experiment that measured depth judgments in optical see-through augmented reality (AR) at near-field reaching distances of ∼ 24 to ∼ 56 cm. The 2×2 experiment crossed two depth judgment tasks, perceptual matching and blind reaching, with two different environments, a real-world environment and an augmented reality environment. We designed a task that used a direct reaching gesture at constant percentages of each participant's maximum reach; our task was inspired by previous work by Tresilian and Mon-Williams [6] that found very accurate blind reaching results in a real-world environment.
- Published
- 2012
- Full Text
- View/download PDF
31. Depth judgment tasks and environments in near-field augmented reality
- Author
-
Gurjot Singh, J. Edward Swan, J. Adam Jones, and Stephen R. Ellis
- Subjects
Matching (statistics) ,Visual perception ,Computer science ,business.industry ,media_common.quotation_subject ,Contrast (statistics) ,Visualization ,Task (project management) ,Perception ,Augmented reality ,Computer vision ,Artificial intelligence ,Depth perception ,business ,X-ray vision ,media_common - Abstract
In this poster abstract we describe an experiment that measured depth judgments in optical see-through augmented reality at near-field distances of 34 to 50 centimeters. The experiment compared two depth judgment tasks: perceptual matching, a closed-loop task, and blind reaching, a visually open-loop task. The experiment tested each of these tasks in both a real-world environment and an augmented reality environment, and used a between-subjects design that included 40 participants. The experiment found that matching judgments were very accurate in the real world, with errors on the order of millimeters and very little variance. In contrast, matching judgments in augmented reality showed a linear trend of increasing overestimation with increasing distance, with a mean overestimation of ∼ 1 cm. With reaching judgments participants underestimated ∼ 4.5 cm in both augmented reality and the real world. We also discovered and solved a calibration problem that arises at near-field distances.
- Published
- 2011
- Full Text
- View/download PDF
32. On-Line Visualization of Underground Structures using Context Features
- Author
-
Jiazhou Chen, Naiyang Lin, Xavier Granier, and Qunsheng Peng; IPARLA (Inria Bordeaux - Sud-Ouest, LaBRI, Université de Bordeaux, CNRS) and State Key Lab of CAD&CG, Zhejiang University
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,0211 other engineering and technologies ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] ,Rendering (computer graphics) ,Visualization ,ACM: I.: Computing Methodologies/I.3: COMPUTER GRAPHICS/I.3.7: Three-Dimensional Graphics and Realism ,Perception ,Computer graphics (images) ,021105 building & construction ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Augmented reality ,Artificial intelligence ,business ,X-ray vision ,Sensory cue ,media_common ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We introduce an on-line framework for visualizing underground structures that improves X-ray vision and focus-and-context rendering for augmented reality. Our approach does not require an accurate reconstruction of the 3D environment and runs on-line on modern hardware. For these purposes, we extract characteristic features from video frames and create visual cues to reveal occlusion relationships. To enhance the perception of occlusion order, the extracted features are either directly rendered or used to create hybrid blending masks; this ensures that the resulting cues are clearly noticeable.
- Published
- 2010
- Full Text
- View/download PDF
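The feature-based occlusion cue described in the record above can be illustrated with a toy pipeline. This is a sketch under assumptions (simple gradient-threshold edges and an invented `composite` rule), not the paper's on-line framework: strong edges extracted from the street-surface frame are drawn back on top of the blended underground overlay, so the ground still reads as being in front.

```python
import numpy as np

def edge_cues(frame_gray, threshold=0.1):
    """Binary map of strong image edges used as occlusion-order cues."""
    gy, gx = np.gradient(frame_gray.astype(float))
    return (np.hypot(gx, gy) > threshold).astype(float)

def composite(frame_gray, underground_gray, alpha=0.5):
    """Naive X-ray blend of the underground layer, with context edges kept on top."""
    blended = (1 - alpha) * frame_gray + alpha * underground_gray
    edges = edge_cues(frame_gray)
    return np.where(edges > 0, frame_gray, blended)

frame = np.tile(np.linspace(0, 1, 128), (96, 1)); frame[:, 64:66] = 1.0  # a curb line
pipes = np.zeros((96, 128)); pipes[40:56, :] = 0.8                        # toy pipe layer
print(composite(frame, pipes).shape)
```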
33. Depth judgment measures and occluding surfaces in near-field augmented reality
- Author
-
J. Adam Jones, Stephen R. Ellis, Gurjot Singh, and J. Edward Swan
- Subjects
Matching (statistics) ,Computer science ,Coincident ,Virtual image ,business.industry ,Salient ,Augmented reality ,Computer vision ,Artificial intelligence ,Depth perception ,business ,X-ray vision ,Task (project management) - Abstract
In this paper we describe an apparatus and experiment that measured depth judgments in augmented reality at near-field distances of 34 to 50 centimeters. The experiment compared perceptual matching, a closed-loop task for measuring depth judgments, with blind reaching, a visually open-loop task for measuring depth judgments. The experiment also studied the effect of a highly salient occluding surface appearing behind, coincident with, and in front of a virtual object. The apparatus and closed-loop matching task were based on previous work by Ellis and Menges. The experiment found maximum average depth judgment errors of 5.5 cm, and found that the blind reaching judgments were less accurate than the perceptual matching judgments. The experiment found that the presence of a highly-salient occluding surface has a complicated effect on depth judgments, but does not lead to systematically larger or smaller errors.
- Published
- 2010
- Full Text
- View/download PDF
34. Importance masks for revealing occluded objects in augmented reality
- Author
-
Dieter Schmalstieg and Erick Mendez
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Overlay ,Rendering (computer graphics) ,Perception ,Computer graphics (images) ,Computer vision ,Augmented reality ,Artificial intelligence ,business ,Shader ,X-ray vision ,ComputingMethodologies_COMPUTERGRAPHICS ,media_common - Abstract
When simulating "X-ray vision" in augmented reality, a critical aspect is ensuring correct perception of the occluded objects' positions. Naive overlay rendering of occluded objects on top of real-world occluders can lead to a misunderstanding of the visual scene and poor perception of depth. We present a simple technique to enhance the perception of the spatial arrangements in the scene. An importance mask associated with the occluders informs the rendering of what information can be overlaid and what should be preserved. This technique is independent of scene properties such as illumination and surface properties, which may be unknown. The proposed solution is computed efficiently in a single-pass fragment shader on the GPU.
- Published
- 2009
- Full Text
- View/download PDF
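The importance-mask compositing in the record above runs as a single-pass fragment shader on the GPU; the numpy version below is only a CPU illustration of the same per-pixel rule, with invented array names and toy data. The mask decides which occluder pixels must be preserved and where the hidden object may be overlaid.

```python
import numpy as np

def overlay_with_importance(occluder_rgb, hidden_rgb, hidden_visible, importance):
    """Per-pixel overlay: hidden content shows only where it exists
    (hidden_visible = 1) and the occluder is unimportant (importance = 0)."""
    alpha = hidden_visible[..., None] * (1.0 - importance[..., None])
    return alpha * hidden_rgb + (1.0 - alpha) * occluder_rgb

h, w = 90, 120
occluder = np.full((h, w, 3), 0.4)                       # flat toy wall
hidden = np.zeros((h, w, 3)); hidden[30:60, 40:80] = (1.0, 0.2, 0.2)
hidden_visible = np.zeros((h, w)); hidden_visible[30:60, 40:80] = 1.0
importance = np.zeros((h, w)); importance[:, 58:62] = 1.0  # e.g. a door-frame edge to preserve
print(overlay_with_importance(occluder, hidden, hidden_visible, importance).shape)
```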
35. Nanotubes sharpen X-ray vision
- Author
-
Zeeya Merali
- Subjects
Molecular interactions ,Multidisciplinary ,Materials science ,Optics ,business.industry ,Biophysics ,business ,X-ray vision - Published
- 2009
- Full Text
- View/download PDF
36. Improving Spatial Perception for Augmented Reality X-Ray Vision
- Author
-
Bruce H. Thomas, Christian Sandor, and Benjamin Avery; IEEE Virtual Reality 2009 3D User Interface Symposium, Louisiana, USA, 14-18 January 2009
- Subjects
Spatial contextual awareness ,Computer science ,business.industry ,Machine vision ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Virtual reality ,Image-based modeling and rendering ,Visualization ,Computer graphics ,Computer vision ,Augmented reality ,Artificial intelligence ,Zoom ,Depth perception ,business ,X-ray vision - Abstract
Augmented reality x-ray vision allows users to see through walls and view real occluded objects and locations. We present an augmented reality x-ray vision system that employs multiple view modes to support new visualizations that provide depth cues and spatial awareness to users. The edge overlay visualization provides depth cues to make hidden objects appear to be behind walls, rather than floating in front of them. Utilizing this edge overlay, the tunnel cut-out visualization provides details about occluding layers between the user and remote location. Inherent limitations of these visualizations are addressed by our addition of view modes allowing the user to obtain additional detail by zooming in, or an overview of the environment via an overhead exocentric view.
- Published
- 2009
- Full Text
- View/download PDF
37. Depth judgments by reaching and matching in near-field augmented reality.
- Author
-
Singh, Gurjot, Swan, J. Edward, Jones, J. Adam, and Ellis, Stephen R.
- Abstract
In this abstract we describe an experiment that measured depth judgments in optical see-through augmented reality (AR) at near-field reaching distances of ∼ 24 to ∼ 56 cm. The 2×2 experiment crossed two depth judgment tasks, perceptual matching and blind reaching, with two different environments, a real-world environment and an augmented reality environment. We designed a task that used a direct reaching gesture at constant percentages of each participant's maximum reach; our task was inspired by previous work by Tresilian and Mon-Williams [6] that found very accurate blind reaching results in a real-world environment. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
38. Effectiveness of Occluded Object Representations at Displaying Ordinal Depth Information in Augmented Reality
- Author
-
Mark A. Livingston and Kenneth R. Moser; Naval Research Lab, Washington DC
- Abstract
An experiment was conducted to investigate the utility of a number of iconographic styles in relaying ordinal depth information at vista space distances of more than 1900 m. The experiment consisted of two tasks: distance judgments with respect to discrete zones, and ordinal depth determination in the presence of icon overlap. The virtual object representations were chosen based on their effectiveness as demonstrated in previous studies. The first task is an adaptation of a previous study investigating distance judgments of occluded objects at medium field distances. We found that only one of the icon styles fared better than guessing. The second is a novel task important to situation awareness and tested two specific cases: ordinal depth of icons with 50% and 100% overlap. We found that the case of full overlap made the task effectively impossible with all icon styles, whereas in the case of partial overlap, the Ground Plane had a clear advantage. Presented at IEEE Virtual Reality, held in Orlando, Florida, on 16-23 Mar 2013.
- Published
- 2013
39. Development of the Detector Tray for the Wide Field Monitor of the Large Observatory For x-ray Timing (LOFT), a medium-class candidate mission in ESA´s Cosmic Vision 2015-2025.
- Author
-
Margarita Hernanz, Enrique García-Berro Montilla, and Dmitry Karelin; Universitat Politècnica de Catalunya, Departament de Física Aplicada
- Abstract
LOFT, the Large Observatory For X-ray Timing, has recently passed the ESA call as one of the four M3 mission candidates that will compete for a launch opportunity at the start of the 2020s. The primary goal of the mission is to study strong-field gravity, black hole masses and spins, and the equation of state of ultra-dense matter via high-time-resolution X-ray observations of compact objects. Two instruments will comprise the scientific payload of LOFT: the Large Area Detector (LAD), a 12 m² collimated X-ray detector in the 2-30 keV range with unprecedented timing capabilities, and the Wide Field Monitor (WFM), a coded-mask wide-field X-ray monitor. Both instruments are based on silicon drift detectors. The main scope of the WFM is to catch good triggering sources to be pointed at with the LAD, but it will also have its own science programme. The pre-prototype development of the Detector Tray, including the creation of its 3D digital model, will need to meet the following requirements: alignment of 4 Silicon Drift Detectors (SDDs) with 4 degrees of freedom to form the flat detector plane, and high-precision alignment of this plane with the coded mask; maintaining the geometry within working temperatures from -30 °C to -20 °C, which requires selecting materials with a low coefficient of thermal expansion (CTE) and a high heat transfer coefficient (HTC). Mechanical and thermal simulations will be performed to verify that these requirements are fulfilled.
- Published
- 2013
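The low-CTE requirement in the record above can be made tangible with a one-line expansion estimate, ΔL = α L ΔT. The numbers below are illustrative only (the 300 mm length and the CFRP coefficient are assumptions, not LOFT's actual budget); the aluminium coefficient of about 23e-6/K is a standard handbook value.

```python
def expansion_um(length_mm, cte_per_k, delta_t_k):
    """Thermal expansion in micrometres: delta_L = alpha * L * delta_T."""
    return length_mm * cte_per_k * delta_t_k * 1e3

# Expansion of a 300 mm baseplate over the 10 K working range (-30 to -20 degC):
for name, cte in [("aluminium ~23e-6/K", 23e-6), ("CFRP ~2e-6/K (assumed)", 2e-6)]:
    print(name, round(expansion_um(300.0, cte, 10.0), 1), "um over 10 K")
```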
40. Augmented Reality X-Ray Vision with Gesture Interaction
- Author
-
P. Geethan, T. Naveen, J. Shruthi Krithika, P. Jithin, Shriram K. Vasudevan, and K. V. Padminy
- Subjects
Multidisciplinary ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical head-mounted display ,Window (computing) ,Augmented reality ,Computer vision ,Artificial intelligence ,Depth perception ,business ,X-ray vision ,Anaglyph 3D ,Gesture - Abstract
Augmented reality is a new technology capable of presenting possibilities that are difficult for other technologies to offer. AR will alter the way individuals view the world. Augmented reality X-ray vision is an emerging concept. While AR deals with virtual and real objects coexisting in the same space, AR X-ray vision is a subdivision of this broad spectrum that provides a "see through" view of real-world objects. In this paper, we thoroughly analyse the existing methodologies dealing with AR X-ray vision and propose a convenient method that enables easy implementation. The paper describes a methodology that provides X-ray vision using the anaglyph technique and integrates it with the Leap Motion Controller to enable gesture interaction, allowing the user to move the window through which the point of interest is viewed. The limitations of the suggested methodology are also discussed. This system enables the user to perceive depth between two regions with only anaglyph glasses, without the use of any head-mounted display device. The approach could be extended to the medical field, where X-ray vision is of increasing importance for viewing the layers of skin and bone of a patient, giving doctors and surgeons an approximate depth perception.
- Published
- 2015
- Full Text
- View/download PDF
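The anaglyph step mentioned in the record above is easy to sketch. The code below shows a common red-cyan scheme, assumed here for illustration rather than taken from the paper: the red channel comes from the left-eye image and the green/blue channels from the right-eye image, so red-cyan glasses deliver a separate view to each eye and the horizontal disparity between the views is perceived as depth.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine left/right RGB images (H x W x 3, floats) into a red-cyan anaglyph."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red from the left eye, green/blue from the right
    return out

left = np.random.default_rng(0).random((120, 160, 3))
right = np.roll(left, 4, axis=1)     # a crude horizontal disparity for illustration
print(red_cyan_anaglyph(left, right).shape)
```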
41. Near-field depth perception in see-through augmented reality
- Author
-
Gurjot Singh
- Subjects
- Augmented reality, Virtual reality, Mixed reality, Depth perception measurement, Computer simulation, Image processing (digital techniques), Graphical user interfaces (computer systems), Computers (optical equipment), Simulation, Digital imaging
- Abstract
This research studied egocentric depth perception in an augmented reality (AR) environment. Specifically, it involved measuring depth perception in the near visual field by using quantitative methods to measure the depth relationships between real and virtual objects. This research involved two goals: first, engineering a depth perception measurement apparatus and related calibration and measuring techniques for collecting depth judgments, and second, testing its effectiveness by conducting an experiment. The experiment compared two complementary depth judgment protocols: perceptual matching (a closed-loop task) and blind reaching (an open-loop task). It also studied the effect of a highly salient occluding surface; this surface appeared behind, coincident with, and in front of virtual objects. Finally, the experiment studied the relationship between dark vergence and depth perception.
- Published
- 2010
42. Interactive Perspective Cut-away Views for General 3D Scenes
- Author
-
Christopher Coffin and Tobias Höllerer
- Subjects
Computer graphics ,business.industry ,Computer science ,Computer graphics (images) ,Perspective (graphical) ,Computer vision ,Artificial intelligence ,Information geometry ,User interface ,business ,X-ray vision ,3D computer graphics - Abstract
We present a technique that allows a user to look beyond occluding objects in arbitrary 3D graphics scenes. In order to control this form of virtual x-ray vision, the user interactively cuts holes into the occluding geometry. The user can rapidly define a cutout shape or choose a standard shape and sweep it over the occluding wall segments to reveal what lies behind them. Holes are rendered in the correct 3D perspective as if they were actually cut into the obstructing geometry, including border regions that give the cutout shape physical depth, simulating penetration of a physical wall that possesses some generic thickness.
- Published
- 2006
- Full Text
- View/download PDF
43. Interactive Tools for Virtual X-Ray Vision in Mobile Augmented Reality
- Author
-
Tobias Höllerer and Ryan Bane
- Subjects
Null (SQL) ,Human–computer interaction ,Computer science ,First person ,Perspective (graphical) ,Augmented reality ,Computer-mediated reality ,Depth perception ,Set (psychology) ,X-ray vision - Abstract
This paper presents a set of interactive tools designed to give users virtual x-ray vision. These tools address a common problem in depicting occluded infrastructure: either too much information is displayed, confusing users, or too little information is displayed, depriving users of important depth cues. Four tools are presented: the tunnel tool and room selector tool directly augment the user's view of the environment, allowing them to explore the scene in a direct, first-person view. The room in miniature tool allows the user to select and interact with a room from a third-person perspective, letting users view the contents of the room from points of view that would normally be difficult or impossible to achieve. The room slicer tool aids users in exploring volumetric data displayed within the room in miniature tool. Together, the tools presented in this paper achieve the virtual x-ray vision effect. We test our prototype system in a far-field mobile augmented reality setup, visualizing the interiors of a small set of buildings on the UCSB campus.
- Published
- 2005
- Full Text
- View/download PDF
44. UNVEILING THE AGN POPULATION WITH CHAMP’S X-RAY VISION
- Author
-
Belinda Jane Wilkes, D. W. Kim, Wayne A. Barkhouse, Paul J. Green, John D. Silverman, and Tom Aldcroft
- Subjects
Physics ,education.field_of_study ,Population ,Astronomy ,Astrophysics ,education ,X-ray vision - Published
- 2004
- Full Text
- View/download PDF
45. A blending technique for enhanced depth perception in medical x-ray vision applications
- Author
-
Frida Hernell, Anders Ynnerman, and Örjan Smedby
- Abstract
Depth perception is a common problem for x-ray vision in augmented reality applications since the goal is to visualize occluded and embedded objects. In this paper we present an x-ray vision blending method for neurosurgical applications that intensifies the interposition depth cue in order to achieve enhanced depth perception. The proposed technique emphasizes important structures, which provides the user with an improved depth context.
- Published
- 2007
46. Japan's X-ray vision for the future
- Author
-
Michael Banks
- Subjects
Physics ,General Physics and Astronomy ,Astronomy ,X-ray vision - Published
- 2012
- Full Text
- View/download PDF
47. Nobel Focus: Neutrino and X-ray Vision
- Author
-
JR Minkel
- Subjects
Physics ,Physics::Popular Physics ,Focus (computing) ,COSMIC cancer database ,Astrophysics::High Energy Astrophysical Phenomena ,Window (computing) ,Statistics::Other Statistics ,Astrophysics ,Neutrino ,Computer Science::Digital Libraries ,X-ray vision ,Physics::History of Physics - Abstract
The 2002 Nobel Prize in Physics went to three experimentalists who opened the window on cosmic neutrinos and x rays.
- Published
- 2002
- Full Text
- View/download PDF
48. Electron microscope gets x-ray vision
- Author
-
Toni Feder
- Subjects
Physics ,Optics ,business.industry ,law ,General Physics and Astronomy ,Electron microscope ,business ,X-ray vision ,law.invention - Published
- 2011
- Full Text
- View/download PDF
49. X-ray Vision
- Author
-
Charles J. Hailey and Fiona A. Harrison
- Subjects
Multidisciplinary ,Optics ,Computer science ,business.industry ,business ,X-ray vision - Published
- 2011
- Full Text
- View/download PDF
50. Orbital orientation is not visual orientation: A comment on 'X-ray Vision and the evolution of forward-facing eyes' by M.A. Changizi and S. Shimojo
- Author
-
Howard C. Howland
- Subjects
Statistics and Probability ,Physics ,General Immunology and Microbiology ,business.industry ,Applied Mathematics ,General Medicine ,Orientation (graph theory) ,Visual orientation ,General Biochemistry, Genetics and Molecular Biology ,Optics ,Modeling and Simulation ,General Agricultural and Biological Sciences ,business ,X-ray vision - Published
- 2009
- Full Text
- View/download PDF