232 results for "augmented reality"
Search Results
102. VISUALIZATION OF REAL-TIME DISPLACEMENT TIME HISTORY SUPERIMPOSED WITH DYNAMIC EXPERIMENTS USING WIRELESS SMART SENSORS AND AUGMENTED REALITY
- Author
- Agüero Injante, Marlon Frank
- Subjects
- Wireless Smart Sensor, Monitoring, Augmented Reality, Displacement, Human-Infrastructure Interface, Civil and Environmental Engineering, Structural Engineering
- Abstract
Wireless Smart Sensors (WSS) process field data and inform structural field engineers and owners about infrastructure health, condition, and safety. Engineers acquire the data and investigate changes in displacements under loads to make informed choices and prioritize decisions concerning maintenance, required repairs, and infrastructure replacement. However, reliable data collection and real-time access to displacement information in the field under loads remain a challenge. The displacement data provided by the WSS in the field undergoes additional processing and is viewed at a different location, the decision-making headquarters. If inspectors were able to observe structural displacements in real time at the locations of interest, that would enable new information-based decision-making in the field and make it possible for the inspector to conduct additional observations based on that information. As a solution, this thesis develops new human-centered access to actionable structural data (real-time displacements under loads) using Augmented Reality (AR). The main contribution of this work is a real-time human-infrastructure interface that gives inspectors direct access to displacement data during structural inspection and monitoring.
- Published
- 2020
103. Integrating Deep Learning and Augmented Reality to Enhance Situational Awareness in Firefighting Environments
- Author
- Bhattarai, Manish
- Subjects
- Deep Learning, Firefighting, Situational Awareness, Deep Reinforcement Learning, Path Planning, Navigation, Augmented Reality, Artificial Intelligence and Robotics, Electrical and Computer Engineering
- Abstract
We present a new four-pronged approach to building firefighters' situational awareness, the first of its kind in the literature. We construct a series of deep learning frameworks, built on top of one another, to enhance the safety, efficiency, and successful completion of rescue missions conducted by firefighters in emergency first-response settings. First, we used a deep Convolutional Neural Network (CNN) system to classify and identify objects of interest from thermal imagery in real time. Next, we extended this CNN framework to object detection, tracking, and segmentation with a Mask R-CNN framework, and to scene description with a multimodal natural language processing (NLP) framework. Third, we built a deep Q-learning-based agent, immune to stress-induced disorientation and anxiety, capable of making clear navigation decisions based on observed and stored facts in live-fire environments. Finally, we used a low-computation unsupervised learning technique called tensor decomposition to perform meaningful feature extraction for anomaly detection in real time. With these ad hoc deep learning structures, we built the backbone of an artificial intelligence system for firefighters' situational awareness. To bring the designed system into use by firefighters, we designed a physical structure in which the processed results serve as inputs to an augmented reality display capable of advising firefighters of their location and of key features around them that are vital to the rescue operation at hand, as well as a path-planning feature that acts as a virtual guide to assist disoriented first responders in getting back to safety. When combined, these four approaches present a novel approach to information understanding, transfer, and synthesis that could dramatically improve firefighter response and efficacy and reduce loss of life.
- Published
- 2020
104. BX | Branded AR Filters: Redefining Branded Interactions
- Author
- Araujo, Moses Michael
- Subjects
- AR, Augmented reality, Branding, Instagram, Marketing, User experience
- Abstract
Traditional forms of experiential marketing have been interrupted by augmented reality. As augmented reality continues to grow, a new ecosystem for brand interactions has revealed a world of branded augmented content or “BX”. BX is a concept design that creates a personalized, immersive connection between brand and user through four AR filters.
- Published
- 2020
105. Mend Magazine: Creating Awareness. Shifting Habits of Consumption.
- Author
- Fabozzi, Marie A
- Subjects
- Augmented reality, Climate change, Consumption, Environmentalism, Millennials
- Abstract
Generation Y, more commonly known as Millennials (born 1981-1996), seamlessly blends the digital realm with the physical world as a result of adolescence in a time when technology was adopted early on (Dimock, 2019; Fromm et al., 2013). Research shows that millennials want constant participation and interactivity in order to be personally engaged, and that millennials are largely influential, particularly with family members (Fromm et al., 2013; Lu et al., 2013). Combining the Millennial generation's leverage with the topic of the climate crisis, there is an opportunity for current and future generations to establish better habits early on. By creating a richer augmented reality experience along with an online magazine, future generations will become more knowledgeable and can build a positive impact by rethinking daily actions of consumption in relation to climate change. The scope of this thesis is to investigate carbon emissions and how consumer choices are interrelated. Research included how to market to the Millennial generation, an investigation into why and how climate change is occurring, and what consumers can do to change their habits. Through visual design, I will explore augmented reality technology incorporated with an online magazine, callout graphics, and overlaid motion graphic animations. This solution will prompt the audience to rethink daily actions and, as a result, mend their consumption habits.
- Published
- 2020
106. Designing Augmented Reality Systems to Empower People with Low Vision
- Author
- Zhao, Yuhang
- Subjects
- Accessibility, Augmented reality, Low vision
- Abstract
Low vision is a visual impairment that falls short of blindness but cannot be corrected by eyeglasses or contact lenses. According to the Centers for Disease Control and Prevention (CDC), about 19 million Americans have low vision, and the number is still growing rapidly with the overall aging of the population. Low vision includes many different conditions, such as central and peripheral vision loss, extreme light sensitivity, and blind spots, which cause people with low vision difficulties in their daily lives. Unlike people who are completely blind, low vision people have residual vision and use it extensively in daily activities. However, prior research has mainly focused on providing only audio and tactile feedback for blind users, overlooking the residual vision that low vision people have. While current low vision aids (e.g., magnifiers, CCTVs) support basic vision enhancements, such as magnification and contrast enhancement, these enhancements often arbitrarily alter the user's full field of view without considering the user's context, such as the task, environmental factors, and the user's visual abilities. As a result, these low vision aids are not helpful to, or preferred by, low vision users in many important daily activities, such as navigation and socializing. To address this gap in low vision accessibility, I sought to deeply explore low vision people's visual perception, and to design and build intelligent augmented reality (AR) systems that provide direct visual augmentations to empower low vision people in various daily activities. Unlike conventional low vision aids, my AR systems provide semantic visual augmentations that are tailored to users' visual abilities and tasks.
Specifically, my doctoral research focuses on three directions. First, I conducted both qualitative and quantitative studies to explore low vision people's visual abilities and preferences on different AR platforms, including both video see-through and optical see-through AR glasses, deriving design guidelines for AR assistive technologies for low vision. Second, I explored low vision people's experiences and needs in different daily tasks, such as shopping and navigation, and designed intelligent AR systems with tailored visual augmentations to facilitate these tasks for low vision people. Finally, to enable easy control of and interaction with the visual augmentations, I also designed accessible interaction techniques on AR glasses for people with low vision. Evaluations showed that my AR systems effectively improved low vision people's performance and experiences in various daily tasks. Based on my doctoral research and findings, I distill design considerations and discuss future research directions for low vision accessibility.
- Published
- 2020
107. Promoting Personally Relevant Access to the General Mathematics Curriculum for Students With Intellectual Disability
- Author
- Cook, Jennifer Elizabeth
- Subjects
- Intellectual Disability, Personally Relevant, Access to the General Curriculum, Math Facts, Technology-Aided Instruction, Augmented Reality, Special Education and Teaching
- Abstract
Providing access to the general curriculum for students with intellectual disability (ID) has been a topic of debate in the field of low-incidence disabilities (e.g., Ayers et al., 2011; Trela & Jimenez, 2013). Researchers (e.g., Spooner et al., 2006; Trela & Jimenez, 2013) generally agree that students with ID should have access to the general academic curriculum, but some (e.g., Ayers et al., 2011) are concerned that adhering to a standards-based academic curriculum may not lead to independence. Trela and Jimenez (2013) proposed the term personally relevant to describe curriculum modifications for students with ID. Personally relevant modifications provide individualized support and a focus on academic curriculum that is meaningful for each student. Finally, a literature review of academic interventions for students with ID (Spooner et al., 2006) found more evidence for reading interventions than for math interventions. The purpose of this dissertation was to identify effective strategies and interventions to support personally relevant access to the general mathematics curriculum for students with ID. Study one of this dissertation was a systematic review of the math fact literature for students with ID. Basic math fact acquisition and fluency are imperative for independent living (Codding et al., 2011), and students with ID should be provided the opportunity to acquire basic math facts to automaticity. The purpose of the review was to identify empirical studies on math fact interventions for students with ID, summarize the evidence base, and use those findings to offer recommendations to the field of ID for future research and application. Study two investigated the use of a technology-aided instruction (TAI) and augmented reality (AR) intervention for basic math fact acquisition in elementary students with ID. Three students participated in a multiple-baseline-design single-case study. An AR application was used to teach basic addition and multiplication facts.
Results indicated the AR intervention improved math fact acquisition for all three students. Findings were discussed in the context of TAI and Universal Design for Learning (UDL) to provide personally relevant access to the general math curriculum for students with ID.
- Published
- 2020
108. The Effect of Augmented Reality on Learning in the Mathematics Classroom
- Author
- Maffei, Justin Thomas
- Subjects
- Augmented Reality, Geometry, Mathematics, Mixed-reality, Education, Science and Mathematics Education
- Abstract
This study examined the impact of two treatments, augmented reality or concrete materials, on the Geometry knowledge of high school students. Participating classes were chosen from two secondary schools in two rural Virginia school districts. The study employed a convenience sample, with 87 total participants. The importance of this study emerged from a lack of research relating the use of augmented reality in the classroom to its effect on student learning. The purpose of this quantitative pretest-posttest, non-equivalent control group quasi-experimental study was to evaluate the difference in achievement scores, as measured by the Three-Dimensional Figures Reporting Category of the Virginia Standards of Learning test, based on type of instructional delivery. Data analysis was completed using Quade's Rank Analysis of Covariance to control for pretest scores. The study also evaluated perceived learning for high school Geometry students, as measured by the CAP Perceived Learning Scale (Rovai, Wighting, Baker, & Grooms, 2009), based on the type of instruction. Data analysis of CAP Perceived Learning scores was completed using the Mann-Whitney U test.
- Published
- 2020
109. IIoT based Augmented Reality for Factory Data Collection and Visualization
- Author
- Rosales Vizuete, Jonathan P.
- Subjects
- Engineering, IIoT, Augmented Reality, Factory floor, Industry 4.0, Mixed Reality, AR
- Abstract
Industrial factory floor data has increased exponentially as more and more processes are automated across industries. Typically, this data is buried inside the machines and used only internally to monitor and correct the process. Industry 4.0 is a new paradigm that requires accessing this data continuously in order to provide monitoring and feedback of the process and to integrate this information with other machines or processes on the factory floor. There is also a need for factory personnel to instantaneously acquire machine data from one or more stations using Augmented Reality methods on smart devices. This thesis proposes a novel method for bringing factory floor data into an Industrial IoT (IIoT) environment, where it can then be used for further analysis in a Mixed Reality environment in conjunction with smart handheld devices. The method and process were implemented and validated on workcenters at the Volvo Truck plant in Hagerstown, Maryland. Data from four different stations at the plant was successfully brought into an IIoT platform and then visualized in real time through an Augmented Reality (AR) experience on a handheld device or an AR headset with the use of spatial anchors. A software solution was created to automate the configuration process based on basic information about the station and the desired signals. Performance tests were conducted to evaluate data efficiency and communication latency, and to test the feasibility of using spatial anchors in an industrial environment.
- Published
- 2020
110. WEARABLE COMPUTING TECHNOLOGIES FOR DISTRIBUTED LEARNING
- Author
- Jiang, Haotian
- Subjects
- Computer Engineering, Deep learning, Mobile Computing, Data Mining, Distributed Database, Gait Analysis, Augmented Reality, Wearable Computers and Body Area Networks, Distributed System
- Abstract
With an increasing number of people relying on limited financial and human resources, it is challenging to provide high-quality and efficient healthcare services. Recent years have witnessed a major revolution in the technological advances of deep learning and wearable devices, paving the way toward new healthcare services. In addition, the growing worldwide mobile society and wireless infrastructure can support many current and emerging healthcare applications. This dissertation presents and explores the design and development of next-generation m-health applications, with a particular emphasis on artificial intelligence (AI) and applied wearable systems design. Deep learning is becoming a promising focus of AI research. With deep learning techniques, researchers can discover deep properties and features of events from quantitative mobile sensor data. However, many data sources are geographically separated and have strict privacy, security, and regulatory constraints. Upon releasing privacy-sensitive data, these data sources generally no longer physically possess their data and cannot control how it is used. Severe data leakage may also occur if the data warehouse is attacked by a hacker. Ensuring the security of collected data is therefore a major challenge in deep learning, and it is necessary to explore distributed data-mining architectures that can conduct consensus learning as needed. Accordingly, we propose an optimized distributed deep learning system comprising a cloud server and multiple smartphones with computation capabilities, where each device serves as a personal mobile data hub that enables mobile computing while preserving data privacy. The proposed system keeps private data locally on smartphones, deploys the computation model seamlessly, shares trained parameters, and builds a global consensus model incrementally.
The feasibility and usability of the proposed system are evaluated through multiple experiments and related discussions. User data privacy is protected on two levels. First, local sensitive data need not be shared with other people, and users retain full control of their sensitive data at all times. Second, only a small fraction of local model updates are selected for sharing, which further reduces the risk of information leakage. Moreover, in healthcare applications, smartphones can serve as personal mobile hubs with wide-area network connectivity. Two realistic use cases are evaluated and discussed in this dissertation: risk factor identification and fall detection. Both use cases are challenging and crucial for connected healthcare services, involving medical problem identification, feasible solution development, and performance verification. In addition to these two use cases, several potential applications are discussed, with preliminary results verified on wearable devices.
- Published
- 2020
111. Creating a Smart Eye-Tracking Enabled AR Instructional Platform: Fundamental Steps
- Author
- Sanchez Perdomo, Yerly P
- Subjects
- Augmented Reality, Performance Difficulty, Surgical Education, Chest Tube Insertion, Eye Tracking
- Abstract
Surgical procedures are demanding for learners because they need to perform and excel at the task to succeed and, eventually, provide a high standard of care to patients. During training, safety and proficiency are emphasized. However, the limited number of hours and limited instructor feedback for learning and practicing procedures in certain medical schools make it challenging to master surgical techniques. As a result, simulation has been implemented, and new technologies such as augmented reality (AR) are employed in medical and surgical education. In designing a useful AR platform for learning and practicing surgical skills, it is necessary to provide only essential information, thereby reducing cognitive load on the user. In this project, we provide the foundation of a smart eye-tracking enabled AR platform for teaching the multistep procedure of chest tube insertion. In this thesis, we answered two core questions. First, can eye tracking and AR devices be synchronized and integrated into one platform to be potentially used for the practice of chest tube insertion? Second, can eye tracking identify the moment of performance difficulty during a multistep surgical procedure, specifically a chest tube insertion? This project lays the foundation for a customized eye-tracking AR teaching platform for chest tube insertion.
- Published
- 2020
112. Applications of Close-Range Terrestrial 3D Photogrammetry to Improve Safety in Underground Stone Mines
- Author
- Bishop, Richard
- Subjects
- photogrammetry, stereophotogrammetry, underground, mining, mine safety, drone, uav, mining engineering, virtual reality, augmented reality, 3D
- Abstract
The underground limestone mining industry is a small but growing segment of the U.S. crushed stone industry. However, its fatality rate has been among the highest in the mining sector in recent years due to ground control issues related to ground collapses. It is therefore important to improve the engineering design, monitoring, and visualization of ground control by utilizing new technologies that can help an underground limestone company maintain a safe and productive operation. Photogrammetry and laser scanning are remote sensing technologies that are useful tools for collecting three-dimensional spatial data with high precision for many types of mining applications. Given the reality of budget constraints at many underground stone mining operations, this research concentrates on photogrammetry as a more accessible technology for the average operation. Despite the challenging lighting conditions and the size of underground limestone mines, which have previously hindered photogrammetric surveys in these environments, over 13,000 photographic images were taken over a three-year period in active mines to compile these models. This research summarizes that work and highlights the many applications of terrestrial close-range photogrammetry, including practical methodologies for implementing the techniques in working operations to better visualize hazards, and pragmatic approaches for geotechnical analysis, improved engineering design, and monitoring.
- Published
- 2020
113. ANALYSIS OF MOTIVATION, SITUATIONAL INTEREST, AND AUGMENTED REALITY
- Author
- Raber, James A.
- Subjects
- Educational Psychology, augmented reality, AR, situational interest, motivation, May 4th
- Abstract
Motivation and situational interest have proven to be critical factors related to student outcomes. Augmented reality, when leveraged properly, has been demonstrated to be an efficacious instructional vehicle across many academic domains, but little is known about its relationship to motivation and situational interest. Additionally, little empirical research has been performed on augmented reality applications in the specific domain of history. Merely knowing that a technology like augmented reality can produce positive learning outcomes is not enough; understanding how it impacts motivation and situational interest is critical to understanding how and when to leverage this technology. This study analyzed the impact on situational interest and motivation of an augmented reality application that delivers instructional content about the tragic events that occurred on May 4th, 1970 at Kent State University. Using both a qualitative and a quantitative pretest-posttest approach, it was determined that situational interest and motivation were impacted by AR. More specifically, aspects of motivation decreased while situational interest and content knowledge increased. Through a qualitative approach, this study outlined factors that contributed to these changes, including feelings of immersiveness and enjoyable multimedia content, as well as negative feelings toward the application's location finding and its technical reliability.
- Published
- 2020
114. The design process for XR experiences
- Author
- Easdon, Jesse Allen
- Subjects
- Extended reality, XR, Virtual reality, Augmented reality, Mixed reality, Design, Design process, Experiences, Theatre, Theatre design, New media, Storytelling, Immersive, Interactive
- Abstract
The act of telling stories has been a core part of the human experience since the days of cavemen standing around a fire to tell the story of the day's hunt (Balter). As new technologies are developed, such as virtual reality headsets and cellphones with augmented reality capabilities, the designer's process has been challenged. We are increasingly living in a world where human experiences are separating from physical reality and moving toward an extended reality, or XR. "'Extended Reality' (XR) is the umbrella term used to describe VR (Virtual Reality), AR (Augmented Reality), and MR (Mixed Reality), as well as all future realities [any new experiences that might be created outside of the already existing realms] such technology might bring. XR covers the full spectrum of real and virtual environments" (Scribani). The challenge designers are facing is how to effectively tell engaging stories using new and increasingly prevalent technologies within the XR envelope. Many experiences utilizing XR tend to focus on the technology and tools rather than the story. XR is the new "Wild West" of storytelling: there are no hard rules and no defined creative processes for crafting a successful experience within the technology. This thesis asks whether the theatrical design process can help create more successful XR experiences, where success means more social interaction and longer-lasting engagement. Can a focus on human-centric stories make the experiences more engaging? In this thesis, I investigate three XR experiences and their varying design processes. The evaluation of this research is qualitative rather than quantitative. The success or failure of these experiential XR designs is determined through the lens of a collaborative theatrical designer.
- Published
- 2020
115. AR Comic Chat
- Author
- Bowald, Dylan
- Subjects
- Augmented reality, Captioning, Computer vision, Data science, Machine learning
- Abstract
Live speech transcription and captioning are important for the accessibility of deaf and hard-of-hearing individuals, especially in situations with no visible ASL translators. If live captioning is available at all, it is typically rendered in the style of closed captions on a display such as a phone screen or TV, away from the real conversation. This can divide the viewer's focus and detract from the experience. This paper proposes an investigation into an alternative, Augmented Reality-driven approach to displaying these captions, using deep neural networks to compute, track, and associate deep visual and speech descriptors in order to maintain captions as "speech bubbles" above the speaker.
- Published
- 2020
116. Eleanor Roosevelt Augmented Reality
- Author
- Hu, Zhiling
- Subjects
- Augmented Reality, Dance, Eleanor Roosevelt, Martha Graham, Maya, Unity
- Abstract
For this project, we chose to focus on the absence of Eleanor Roosevelt (ER) at Four Freedoms Park (FFP) in New York City, despite her active political participation during Franklin D. Roosevelt's presidency. Our goal was to communicate this participation using Augmented Reality (AR) and the power of dance. We conducted various forms of research to determine the best approach and found that an interactive mobile AR app worked best, allowing each user to have their own experience. Our experimentation with AR also helped assess the limitations of what could be implemented within the project's time frame.
- Published
- 2020
117. Architecture 2.0: Representing the architectural future with new technologies
- Author
- Ancira, Andrew Joshua
- Subjects
- Augmented Reality, Coding, Representation, Social Media, Virtual Reality
- Abstract
Emergent digital technologies, such as Virtual Reality (VR), Augmented Reality (AR), social media, coding, and robotics, provide users with a new way of processing architectural designs. These tools help explore and enhance the architectural design process, creating a powerful link between design and idea through testing and calculation. With the advancement and productive innovation of these technologies, people in the not-so-distant future will find these platforms instrumental to the design process. For these systems to become truly innovative, designers must push the field to become more responsive, not just to the environments they help shape but, more importantly, to the users of those spaces. In other words, pioneers of these innovations must incorporate feedback from the physical aspects of a project as well as the cultural contexts of the user, rather than relying solely on conventional or analytical processes. While designers have their own ways of working and developing projects, it would be very beneficial for them to learn these new techniques and technologies, since their prominence within the field of architecture will continue to grow and expand. Knowledge of these new tools will continue to change the way architecture is conceived and produced. The potential of new technologies to develop designs and spatial configurations perceivable by our sensory system could uncover a latent domain of spatial aesthetics that architects can experiment with, develop, and harness.
- Published
- 2020
118. Examining augmented reality in middle schools through a review of studies on augmented reality in education
- Author
- Hong, Daeun
- Subjects
- Augmented reality, Technology integration in education, EdTech, Middle school, Teacher education
- Abstract
Although augmented reality (AR) is considered a promising technology in the field of education, many educators are still having trouble distinguishing between AR and virtual reality. Therefore, this report aims to present and refine the understanding of AR technology in middle school-level education based on existing studies published in the last three years. A total of eighteen academic journal articles were reviewed to obtain a comprehensive view of the definitions, types, features, advantages, limitations, and integration methods of AR in middle schools. According to Azuma (1997), AR is a technology that enhances one’s sense of reality by enabling the coexistence and simultaneous interaction of digital information and the real world. The most commonly used types of AR today are marker/vision-based and location-based. Based on the reviewed studies, eight features, five advantages, and six limitations of AR technology are identified. In addition, the ways in which AR technology has been integrated into middle school classrooms are described.
- Published
- 2020
119. Understanding social interactions with augmented reality
- Author
- Bailey, Gabriel Elijah
- Subjects
- Augmented reality, Social interaction, Sharing, Information sharing, Social media, Krikey, AR, VR, Online privacy
- Abstract
This report examines the ways that users share information using augmented reality (AR). To accomplish that goal, the researcher (with the help of their supervisor) conducted interviews with two participants. In those interviews, the participants were asked about their sharing behavior on and offline, along with their current use of social media. After the interviews, the participants were given an AR app called Krikey to use for a period of time. At the end of that time, the participants reconvened with the researcher and answered a few more questions regarding their experience with the AR app and their sharing behavior on it. Through thematic analysis and descriptive statistics, the report found that the participants were more willing to share information through AR when interacting with groups and individuals with whom they were familiar. The users also expressed concerns regarding privacy and social interaction within the app. This study provides a preliminary step toward better understanding the sharing habits of users in AR.
- Published
- 2020
120. Designing Simulation-Based Active Learning Activities Using Augmented Reality and Sets of Offline Games
- Author
- Hernandez, Olivia Kay
- Subjects
- Industrial Engineering, active learning, simulation, augmented reality, offline games
- Abstract
The use of augmented reality and gaming technologies to enhance active learning in simulation environments has generated a great deal of interest. These simulated environments are low risk and have minimal real-world consequences for erroneous actions or delayed diagnosis. From a human factors perspective, augmented reality enhances the ability of trainees to perform sensemaking of subtle symptoms to accurately diagnose and treat a patient. From an operations research perspective, game theory provides the ability to find equilibria for dyadic bimatrix adversarial games, which allows offline prediction of what an opponent will likely do and the ability to perform mechanism design. In applying some of these concepts to the context of gaming environments, we can simulate scenarios with adversarial elements. From a methodological perspective, there are many possible options within a game, and an individual decision maker is not in complete control of all the factor settings that influence outcomes. Mathematical estimations can be made for the situation where multiple decision makers select options and receive rewards that depend on the selections made by all players. Findings are presented from a study that was conducted in a simulation-based environment with 42 recruited medical students. For the stimuli, a video of an augmented reality trauma patient was "painted" on a table. The patient became increasingly sick over four consecutive stages of a case. The three patient cases were gunshot wound, superheated airway, and tension pneumothorax. The experimental condition prompted participants to articulate their mental model between stages and provided expert coaching via an audio-taped verbal presentation on the cues and mechanism of injury for the case.
The mental model prompts included questions about applicable cues and information driving a diagnosis, treatment goals, interventions, and predictions for case progression with and without treatment actions. Our findings suggest that requiring students to think aloud and make predictions about what could happen next directs their attention to recognize subtle cues and aids in determining treatment plans in a simulated setting. This dissertation demonstrates how both methodological and experimental approaches can be applied in simulated environments to enhance and facilitate active learning.
- Published
- 2020
121. Providing Examples and Tool Support for Novice AR Creators
- Author
-
Chen, Kangning
- Subjects
- augmented reality, computing education, example-centric programming, novice misconceptions
- Abstract
The recent democratization of AR development with the availability of ARKit and ARCore has lowered its barrier to entry for seasoned mobile and game developers. However, learning AR development still poses significant challenges to creators with limited development experience, let alone novice programmers. This paper surveys conceptual and technical challenges currently faced by novice AR creators who are beginner programmers and presents an investigation around the design and use of progressive source code examples and an AR inspector tool to improve AR learner experiences. We characterize and address the challenges in designing step-by-step examples specifically for AR based on a survey with 17 students recruited from an AR/VR development course. We also present findings from a focus group with 5 of the students using our progressive example source code of an AR scene in combination with our AR inspector tool. We find that even simple features such as visualizing the world origin, relative camera and object positions, and 3D model pivot points already enhance novices’ understanding of 3D computer graphics and AR concepts and that interactive debugging tools with alerts for missing colliders and touch handlers can better facilitate AR learner experiences.
- Published
- 2020
122. Memory Layers: AR for the Museum of Memory
- Author
-
Cárdenas Gasca, Ana M.
- Subjects
- museum, augmented reality, research through design, memory museum
- Abstract
We present the design, implementation, and evaluation of Memory Layers, an Augmented Reality (AR) application co-designed with the Museum of Historical Memory in Bogotá, Colombia (MMH). The MMH was created to collect, preserve, and communicate the stories of war victims left by Colombia's internal armed conflict. Following a research-through-design methodology, we designed Memory Layers, an AR application in which digital content curated by the museum interacts with the physical context of the user. We created Memory Layers with the objective of uncovering the nuances of building AR applications that seek to close the distance between a narrator and an audience in the context of war stories. Memory Layers consists of both a digital campaign and a mobile application that adapts content previously curated by the MMH to an AR format. We designed a study around Memory Layers to find out how the format elicited new perspectives from users, using the original format as a baseline. We found three major themes in our interviews: First, the AR app poses a challenge in balancing the sensitive experience of users, delivered through visuals and metaphors related to victims' stories, with the background and analysis provided by contextual information. Second, audiences shift from the role of narrative outsiders to that of active participants in the victims' narrative. Third, as participants shift to a more active role in the narrative, they acknowledge new responsibilities as contributors to the victims' narrative.
- Published
- 2020
123. PLACE AND DIGITAL SPACE
- Author
-
Chaudhary, Suraj
- Subjects
- Phenomenology, Virtual Space, Augmented Reality, Philosophy of Technology, Philosophy of Space, Digitization of Space, Continental Philosophy, Geography, History of Philosophy, Philosophy, Place and Environment, Social and Cultural Anthropology, Sociology
- Abstract
The intersection of philosophies of space and technology is a fecund area of inquiry that has received surprisingly little attention in the philosophical literature. While the major accounts of space and place have not considered complexities introduced by recent technological developments, scholarship on the human-technology relationship has virtually ignored the spatial dimensions of this interaction. Place and Digital Space takes a step toward addressing this gap in the literature by offering an original, phenomenological account of place and using this framework to analyze digitally mediated spaces. I argue that places are continually evolving, internally heterogeneous, and spatially distinct meaningful wholes with indeterminate boundaries. The emergence and ongoing reconstitution of places require repeated bodily engagements, which occur in the context of other places, in relation to the engagements of others, and against the background of social practices and cultural norms. I then show how spaces mediated by digital technologies, particularly augmented reality (AR), are fundamentally different from ordinary places. The increasing use of AR, I argue, poses an unprecedented challenge to the way we interpret, engage with, and have collective experiences of everyday places. Finally, I identify ethical questions raised by the interpretation of spaces by artificial intelligence, by the unauthorized augmentation of places, and by the possibility of a few companies with big data dominating the virtual modifications of public places.
- Published
- 2020
124. Interacting with Smart Devices - Advancements in Gesture Recognition and Augmented Reality
- Author
-
Becker, Vincent; id_orcid 0000-0003-0522-0312
- Subjects
- Human-computer interaction (HCI), Ubiquitous computing, Interaction with smart devices, Gesture recognition, Augmented reality, Tangible interfaces, Data processing, computer science
- Abstract
Over the last decades, embedded computer systems have become powerful and widespread with remarkable success. Besides traditional computers, such as desktops, laptops, smartphones, and servers, such systems have become part of nearly every technical appliance, for example, cars, televisions, and washing machines, and have thereby become an essential part of our lives. A common phrase for such appliances is “smart device”, a term which encompasses equipment to which one can digitally connect in order to exchange information and commands. Further, a smart device can potentially sense its environment and process and act upon these measurements. Although computers and the devices containing them play such an important role, the way in which we interact with them has not changed much since the early days. Humans are required to control them in a manner very different from human-to-human communication. In particular, the possibilities to provide input to a smart device are mostly limited to traditional interfaces, such as buttons, knobs, or keyboards; or graphical representations thereof on displays. Technological progress in recent years in hardware, as well as in algorithmic methods, e.g. in machine learning, enables novel solutions for human-computer (or in fact human-device) interaction. Only recently, the use of speech has become practical and is used on smartphones as well as for home devices, incorporating a modality in the interaction process that is innate to humans and rich in expressiveness. However, for natural communication, other modalities, such as gestures, complement speech and may do so in human-computer communication, enabling simple and spontaneous interactions and avoiding the known social awkwardness of having to talk to devices. The adoption of speech and also other commonly used interaction methods, such as touch input on smartphones, indicates the relevance of considering further modalities in addition to the traditional ones.
This dissertation contributes to further bridging the interaction gap between humans and smart devices by exploring solutions in the following areas: (i) Wearable gesture recognition based on electromyography (EMG): The touch of our fingers is widely used for interaction. However, most approaches only consider binary touch events. We present a method, which classifies finger touches using a novel neural network architecture and estimates their force based on data recorded from a wireless EMG armband. Our method runs in real time on a smartphone and allows for new interactions with devices and objects, as any surface can be turned into an interactive surface and additional functionality can be encoded through single fingers and the force applied. (ii) Wearable gesture recognition based on sound and motion: Besides other signals, gestures might also emit sound. We develop a recognition method for sound-emitting gestures, such as snapping, knocking, or clapping, employing only a standard smartwatch. Besides the motion information from the built-in accelerometer and gyroscope, we exploit audio data recorded by the smartwatch microphone as input. We propose a lightweight convolutional neural network architecture for gesture recognition, specifically designed to run locally on resource-constrained devices. It achieves a user-independent recognition accuracy of 97.2% for nine distinct gestures. We find that the audio input drastically reduces the false positive rate in continuous recognition compared to using only motion. (iii) Device representations in wearable augmented reality (AR): While AR technology is becoming increasingly available to the public, ways of interaction in the AR space are not yet fully understood. We investigate how users can control smart devices in AR. Connected devices are augmented with interaction widgets representing them. For example, a widget can be overlaid on a loudspeaker to control its volume. 
We explore three ways of manipulating the virtual widgets in a user study: (1) in-air finger pinching and sliding, (2) whole-arm gestures rotating and waving, (3) incorporating physical objects in the surroundings and mapping their movements to interaction primitives. We find significant differences in the preference of the users, the speed of executing commands, and the granularity of the type of control. While these methods only apply to control of a single device at a time, in a second step, we create a method which also takes potential connections between devices into account. Users can view, create, and manipulate connections between smart devices in AR using simple gestures. (iv) Personalizable user interfaces from simple materials: User interfaces rarely adapt to specific user preferences or the task at hand. We present a method that allows the quick and inexpensive creation of personalized interfaces from paper. Users can cut out shapes and assign control functions to these paper snippets via a simple configuration interface. After configuration, control takes place entirely through the manipulation of the paper shapes, providing the experience of a tailored tangible user interface. The shapes, which are monitored by a camera with depth sensing, can be dynamically changed during use. The proposed methods aim at a more natural interaction with smart devices through advanced sensing and processing in the user’s environment or on his/her body itself. As these interactions could be made ubiquitously available through wearable computers, our methods could help to improve the usability of the growing number of smart devices and make them more easily accessible to more people.
- Published
- 2020
125. Public Perception of Different Planting Techniques using Augmented Reality
- Author
-
Tania, Sultana Quader
- Subjects
- Augmented reality, Public perception, Planting techniques, Sustainability, Color and aesthetics, Mix method survey, Design of Experiments and Sample Surveys, Other Social and Behavioral Sciences, Transportation Engineering, Urban Studies and Planning, Jack N. Averitt College of Graduate Studies, Electronic Theses & Dissertations, ETDs, Student Research
- Abstract
The objective of this study was to measure public perception of the different planting techniques (block and matrix), which are used at visitor information centers (VICs) and other rights of way (ROW) areas. The main factors that affect public perception of planting techniques were identified through an extensive literature review and a qualitative survey from four welcome centers in the state of Georgia. The ranking of those indicators, based on public preferences, was discovered through a quantitative survey. During the first phase of the quantitative survey, images of block and matrix plantings were used. An iOS-based user-friendly and cost-effective augmented reality (AR) app was developed, and a significant difference was found between data with and without AR. Participants were more interactive and engaged in the survey process, largely due to the addition of the AR visuals questionnaire. The ranking of the factors obtained from the study was: environmental benefits, sustainability, color and aesthetics, cost, maintenance, and restorative effect. The majority of the respondents expressed that the block planting configuration was more aesthetically pleasing. However, when all the factors were considered, the public largely preferred matrix planting, as it tends to be more beneficial to the environment. It is sustainable, cost-effective, and requires less maintenance. Results from this study indicated that environmentally beneficial and sustainable planting was preferred by traveling people for ROW planting.
- Published
- 2020
126. Space Time Exploration of Musical Instruments
- Author
-
Munoz, Isaac Garcia
- Subjects
- Music, Augmented Reality, Extended Reality, Mixed Reality, Musical Instruments, Spatial Audio, Virtual Reality
- Abstract
Musical instruments are tools used to generate sounds for musical expression. Virtual Reality (VR) and Augmented Reality (AR) musical instruments create sounds that may be spatially disjointed from the instrument controls. Spatial audio processing can be used to position the Extended Reality (XR) musical instruments and their corresponding sounds in the same space. This dissertation investigates novel ways of combining spatial reverb models to improve the naturalness of XR musical instruments. Seven spatial reverb systems, combinations of a shoebox spatial reverb model, a raytracing spatial reverb model, and measured directional room impulse response convolution reverb, were compared in a pilot study. A novel hybrid system of synthetic early reflections and directional room impulse responses was preferred for naturalness when tested over headphones with three instruments created by the author: AR electric guitar, AR drumset, and VR Singing Kite. This research culminated in a concert, Spherical Sound Search, which showcased the preferred hybrid system, the three XR musical instruments, and four re-contextualized spatial audio effects (spatial looping, spatial delay, spatial feedback, and spatial compression). The three pieces in the concert explored different aspects of XR modalities and presented the novel system with spatial audio effects to a larger audience by rendering to an octophonic loudspeaker layout.
- Published
- 2020
127. Feasibility of Virtual and Augmented Reality Devices as Psychology Research Tools: A Pilot Study
- Author
-
Garduno Luna, Cristopher Daniel
- Subjects
- Cognitive psychology, Psychology, Augmented Reality, Cognitive Science, Electroencephalography, Virtual Reality
- Abstract
The recent proliferation of VR and AR devices has led to an increase in the use of these devices as research tools. As these technical developments continue, researchers can leverage these hardware improvements to create realistic and controlled environments for experimentation in life-like scenarios. In cognitive research, these devices will often be coupled with neurophysiological recordings, which poses the challenge of dealing with movement artifacts. In this study, three experiments were conducted using oddball tasks and semantic processing tasks to assess EEG data quality using VR/AR to display stimuli. The first experiment showed that the VR oddball task elicited comparable neural activity as a traditional desktop oddball task. The subsequent experiments systematically introduced movement artifacts in VR and AR, and showed that these neural data were usable with minor movement artifacts, while neural signals recorded under walking/free motion conditions were heavily contaminated with movement artifacts. Although there have been a variety of approaches for removing movement artifacts from neural data, many of them are specific to the experimental design, or have other constraints limiting their generalizability.
- Published
- 2020
128. Capturing Tacit Knowledge through Smart Device Augmented Reality (SDAR)
- Author
-
Hashimoto, Jason Lee Beronia
- Subjects
- Architectural engineering, Computer science, Augmented Reality, Design Communication, Design Process, Smart Device, Tacit Knowledge
- Published
- 2020
129. Widen People’s Career Opportunities with AR Technology
- Author
-
Shi, Yuedan
- Subjects
- Augmented reality, Career, Mobile application
- Abstract
According to existing research, many people regret their career selection due to a lack of sufficient knowledge and research when they made the decision. By exploring the reasons, process, and factors that influence people’s decision-making, I designed a mobile application that helps people find more career opportunities beyond their own lived experience by leveraging AR recognition technology, and provides a platform for people to get sufficient information so that they can make a better choice for their future career.
- Published
- 2019
130. Funsigns - An Interactive Educational Tool to Learn Sign Language
- Author
-
Fang, Shiqi
- Subjects
- Augmented reality, Deaf children, Hearing parents, Interactive, Learn, Sign language
- Abstract
This thesis aims to provide an educational and interactive tool where deaf children and their hearing parents can learn, bond, enjoy, and interact with each other. More than 90% of children with severe to profound hearing loss are born to normally hearing families. When parents are told that their child is deaf, their dreams are crushed and a grief response may be triggered (Weaver, K. A., & Starner, T., 2011). Due to this lack of knowledge when faced with a deaf family member for the first time, parents tend to base their perceptions on obsolete stereotypes, which can greatly affect the development of the child. Delayed cognitive and language development in early childhood leads to academic difficulties and underperformance when children begin schooling. Despite the good intentions of government, schools, and professionals, this situation persists, leading to significant under-education and underemployment for persons who are deaf or hard of hearing. Because of insufficient exposure to spoken or sign language, the effects of early language deprivation are often very serious, leading to serious problems in the health, education, and quality of life of these children. Family engagement has a more positive impact if it begins early in a child’s educational experience. My solution uses object recognition with augmented reality to help facilitate language acquisition for deaf children and their hearing parents. Through this thesis project, I intend to develop an interactive tool that engages a deaf child and their hearing family to learn sign language together, create better communication, and improve quality of life. Most existing sign language education takes a traditional form: books, online or offline courses, and text/image-oriented applications.
An interactive, visual educational tool gives the family more motivation to learn and helps them learn in an efficient and cheerful way. Integrating augmented reality technology also creates a new method of displaying content in this field.
- Published
- 2019
131. Effects of Augmented Reality Head-up Display Graphics’ Perceptual Form on Driver Spatial Knowledge Acquisition
- Author
-
De Oliveira Faria, Nayara
- Subjects
- spatial knowledge, augmented reality, head-up display, driving
- Abstract
In this study, we investigated whether modifying augmented reality head-up display (AR HUD) graphics’ perceptual form influences spatial learning of the environment. We employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed base, medium-fidelity driving simulator at the COGENT lab at Virginia Tech. Two different navigation cues systems were compared: world-relative and screen-relative. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side on the HUD. We captured empirical data regarding changes in driving behaviors, glance behaviors, spatial knowledge acquisition (measured in terms of landmark and route knowledge), reported workload, and usability of the interface. Results showed that both screen-relative and world-relative AR head-up display interfaces have similar impact on the levels of spatial knowledge acquired; suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. Even though our initial assumption that the conformal AR HUD interface would draw drivers’ attention to a specific part of the display was correct, this type of interface was not helpful to increase spatial knowledge acquisition. This finding contrasts a common perspective in the AR community that conformal, world-relative graphics are inherently more effective than screen-relative graphics. We suggest that simple, screen-fixed designs may indeed be effective in certain contexts. Finally, eye-tracking analyses showed fundamental differences in the way participants visually interacted with different AR HUD interfaces; with conformal-graphics demanding more visual attention from drivers. 
Analysis of visual attention allocation showed that the world-relative condition was typically associated with fewer glances in total, but glances of longer duration.
- Published
- 2019
132. Incorporation of Depth Feed for Prototype Rover Video and Other Usability Improvements
- Author
-
Dal Santo, Joseph
- Subjects
- Teleoperation, Augmented Reality, Depth Video, Direct Control
- Abstract
NASA has been mandated by the US government to return to the moon within the next five years, meaning that functional technology for lunar exploration must be developed and tested in order to reach that goal. While most lunar exploration vehicles can be operated using a supervisory control scheme, utilizing a direct human control scheme would allow missions and navigation to be more robust to unforeseen events and obstacles. Because any human-teleoperated lunar mission would involve significant time delay, low video resolution, and low framerate, video feeds need to be augmented to grant human navigators additional information about the rover's surroundings. This work proposes two new video feed options for an existing prototype lunar rover that utilize an Intel D435 depth camera to provide a depth image stream of the rover's surroundings and an augmented reality (AR) depth overlay on a mono RGB video stream. The video feeds proposed in this work have been proven to function within the rover's current architecture, and can correctly simulate the conditions a navigating user would experience when attempting to teleoperate the rover at a lunar distance. This work also proposes usability improvements to the User Heads-Up Display based on previous test feedback to reduce onscreen clutter for navigating users, and a data visualization program to reduce data analysis time for testers and increase clarity when presenting data.
- Published
- 2019
133. An AR Enhancement of Printed Educational Resources: Keeping printed educational materials competitive in a digital age
- Author
-
Simitch Warke, Dax
- Subjects
- 3D animation, Augmented Reality, Education, Interaction design, Motion graphics, User experience
- Abstract
Digital resources have begun to take over our educational system and overshadow traditional printed resources. While these digital resources are becoming increasingly popular, research has shown that printed resources have notable benefits that should not be dismissed. Enhancing these printed resources with augmented reality (AR) technology will allow them to be competitive with digital resources while preserving their academic benefits. The readily available smart devices carried in the pockets of most students have already prepared many classrooms for the use of AR. The interactive and visual potential of AR makes it an appealing educational tool that can dramatically improve the experience of learning with printed resources. This project will utilize 3D animation and graphics to simulate the use of AR on a selected set of existing texts.
- Published
- 2019
134. Encouraging Children to Actively Recycle: A mobile application to promote recycling in the Dominican Republic
- Author
-
Rivera Pagan, Noelia A
- Subjects
- Augmented reality, Behavioral design, Gamification, Interaction design, Mobile application, Recycling
- Abstract
The Dominican Republic generates 14,000 tons of solid waste daily; 49% of this is recyclable, but only 5% is recycled. This waste is causing health problems, affecting the tourism industry and the quality of life of residents. Confusion, lack of motivation, and lack of prioritization of the activity affect the decision-making process when recycling. What if there was a way to motivate the Dominican population to recycle and change behavior towards this activity? Children learn new behaviors faster than adults. Why not use this opportunity to incorporate the engaging factor of design and technology to help improve the motivation level of Dominican society, with children as the promoters of good recycling habits? To educate, motivate, and engage children and adults to actively recycle, a mobile application can be used throughout the country to raise recycling awareness memorably. This thesis explores behavioral design methods and the use of gamification and Augmented Reality to engage children with the application. The solution rewards real-world recycling actions and allows children to transform recycled materials into energy for virtual robots. The application provides tips on how to separate and dispose of scanned materials. Users can share their knowledge with other children and adults, which can help build a virtual recycling community. By implementing what they learn from interacting with the platform, children will be able to make recycling a habit and integrate it into their lives so that it becomes lifelong. The final solution highlights the primary interactions of the mobile application in a demo format, built based on feedback from primary user groups and design peers. This prototype demonstrates how design can be used to best leverage the full capabilities of mobile technology to effect real-world change.
- Published
- 2019
135. Immersive Technologies in Preservice Teacher Education: The Impact of Augmented Reality in Project-Based Teaching and Learning Experiences
- Author
-
Arbogast, Michelle A.
- Subjects
- Adult Education, Curriculum Development, Early Childhood Education, Education, Educational Technology, Instructional Design, Teacher Education, Teaching, Technology, immersive, AR, augmented reality, experiential learning, cone of experience, UDL, universal design for learning, authentic, preservice teacher, project-based experience, motivation, instructional materials motivation survey
- Abstract
The value of personal experience in learning is a concept that has been around for thousands of years dating back to the time of Confucius in 450 B.C. Today, personal experience can be accomplished through immersive technology, such as augmented reality, a technology simulating real-world and authentic experiences. Kolb’s Theory of Experiential Learning (1984) and Dale’s Cone of Experience (1946) theorized not only the importance of learning by doing, but that the type or authenticity of the experience is important in learning outcomes, retention, and learner motivation. Immersive technology has advanced to the point that it is not only accessible, but also user friendly. However, research into the impact of immersive technology remains focused in K-12 settings with students as the consumers, rather than creators of authentic experiences. The purpose of this study was to refocus the research to higher education preservice teachers, a unique population who are the potential creators of these experiences. The study investigated if the use of immersive technology in a preservice teacher project-based learning experience influenced knowledge attainment and retention of a key pedagogical concept and if it affected preservice teacher motivation. The key pedagogical concept selected for the study was Universal Design for Learning (UDL). The individual project-based learning experience required preservice teachers to implement these principles into a functional lesson appropriate for their grade level and subject. The study utilized a baseline/post/posttest design and the Instructional Materials Motivational Survey (IMMS) as instruments. The results of the level of knowledge attainment and retention were inconclusive due to underperformance of the baseline/post/posttest instrument. A more functional, hands-on test of the application of the UDL principle would provide more reliable results. 
In the motivation construct, the results indicated that the type of experience (immersive or interactive) impacted preservice teacher motivation. The level of the impact varied across the underlying constructs of the motivational measure: attention, relevance, confidence, and satisfaction. The students who utilized the immersive technology showed higher levels of relevance, confidence, and satisfaction. However, the study also revealed a substantial gap in the motivational measure and the need for additional research.
- Published
- 2019
136. The Design and Evaluation of an Innovative Head Mounted Display Counselling Tool for Warfarin
- Author
-
Prenzler, Shane Joseph
- Subjects
- Augmented reality, Head-mounted display, Educational theory, Warfarin, Pharmacy
- Abstract
Background: Emergent and disruptive technologies such as augmented reality (AR) and head-mounted displays (HMD) have not been explored in the context of pharmacy to date, despite their potential benefits. Patient counselling could potentially be tailored to better reflect each patient's level of health literacy while standardising the counselling given to every patient. Further benefits may also be seen in the continuing education of pharmacists who use the HMD-AR warfarin counselling guide, as pharmacists have been found to favour interactive forms of continuing education. For this reason, a previously validated educational framework, the mobile augmented reality education (MARE) design framework, was used for the development of the HMD-AR warfarin counselling guide. The acceptance of the HMD-AR warfarin counselling guide was gauged using the previously validated e-learning technology acceptance model (TAM). TAM gives significant insight into participants' perceived ease of use (PEU), perceived usefulness (PU), and behavioural intention (BI), which correlate highly with actual use of the technology being tested. Aims and Objectives: The aim of this study was to develop and test acceptance of a counselling tool for the drug warfarin using both HMD and AR (the HMD-AR warfarin counselling guide). The objectives were to (i) conduct background literature research prior to development of the HMD-AR warfarin counselling guide; (ii) run a pilot study with 7 Griffith University School of Pharmacy and Pharmacology academic staff; (iii) use the pilot study feedback to redevelop the HMD-AR warfarin counselling guide and the accompanying adapted TAM survey; and (iv) conduct a larger mixed-method cohort study, with pre- and post-tests, with 40 Australian registered pharmacists. Methods: The HMD-AR warfarin counselling guide was developed following the documented MARE framework.
TAM was assessed in the post-test on a five-point Likert scale. The redeveloped HMD-AR warfarin counselling guide was then used with the larger cohort of 40 pharmacists to gauge its acceptance. Descriptive statistics, two-tailed Spearman's rank analysis, the Wilcoxon signed-rank test, and qualitative analysis were then utilised. Pre- and post-test assessments measured participants' willingness to use technology and the perceived usefulness of the HMD-AR warfarin counselling guide. Results and Discussion: Although each construct of TAM had a positive average result overall, the results fluctuated. PEU was the best-performing construct, with an average score of 1.68 on the five-point Likert scale, while BI showed the lowest average score of 2.74. Spearman rank analysis showed the pre-test question regarding usefulness of the HMD-AR warfarin counselling guide was associated with the post-test statements for PU, AT, BI, and SN. Wilcoxon signed-rank analysis showed that both the post-test question on usefulness of the HMD-AR warfarin counselling guide (p = 0.005) and willingness to use technology normally (p = 0.025) had declined compared to the same questions in the pre-test. Qualitative feedback was coded into three major categories, each split into two sub-categories. This feedback showed a negative perception of the HMD-AR warfarin counselling guide among most participants, although some praise was seen for the content and potential of the counselling guide itself. This study documented the early acceptance of the HMD-AR warfarin counselling guide with 40 recruited pharmacists. Expectations were higher prior to use but dropped after participants trialled the HMD-AR warfarin counselling guide. Perceptions of the HMD-AR technology incorporated into the device were more negative than perceptions of the content and information of the counselling guide itself.
This may explain why the BI construct had a near-neutral average response, indicating a possible reluctance to actually use the HMD-AR warfarin counselling guide in practice. This was despite other TAM constructs, like PEU, having more positive average scores, indicating participants found the HMD-AR warfarin counselling guide easy to use. Conclusion: This study succeeded in developing the HMD-AR warfarin counselling guide and testing its acceptance. The technology currently available fell below most participants' expectations of usefulness. Future applications could extend HMD-AR counselling to other drugs. Further research on a larger sample of participants from more diverse professional backgrounds would be needed to further understand acceptance.
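The Spearman rank analysis used above to relate pre- and post-test Likert responses can be sketched in a few lines; the participant ratings below are invented for illustration and are not the study's data.

```python
def rank(values):
    """Assign 1-based average ranks to values (ties share the mean rank)."""
    ordered = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(ordered):
        j = i
        while j + 1 < len(ordered) and values[ordered[j + 1]] == values[ordered[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 5-point Likert ratings from five participants:
pre_usefulness = [1, 2, 2, 4, 5]   # pre-test: expected usefulness
post_pu        = [2, 2, 3, 4, 5]   # post-test: perceived usefulness (PU)
print(round(spearman_rho(pre_usefulness, post_pu), 3))  # → 0.921
```

A strong positive rho, as here, is the kind of association the study reports between the pre-test usefulness question and the post-test TAM statements.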
- Published
- 2019
137. Feed Me: an in-situ Augmented Reality Annotation Tool for Computer Vision
- Author
-
Ilo, Cedrick K.
- Subjects
- Augmented Reality, 3D User Interface, Computer Vision Training
- Abstract
The power of today's technology has enabled the combination of Computer Vision (CV) and Augmented Reality (AR), allowing users to interface with digital artifacts during both indoor and outdoor activities. For example, AR systems can feed images of the local environment to a trained neural network for object detection. However, these algorithms can sometimes misclassify an object. In such cases, users want to correct the model's misclassifications by adding labels to unrecognized objects or re-classifying recognized objects. Depending on the number of corrections, in-situ annotation may be a tedious activity for the user. This research focuses on how in-situ AR annotation can aid CV classification and on which combinations of voice and gesture techniques are efficient and usable for this task.
- Published
- 2019
138. Cyber-Physical Experiences: Architecture as Interface
- Author
-
Akman, Turan M.
- Subjects
- Architecture, Augmented Reality, Architectural Experiences, Spatial Experiences, Futuristic Architecture, Dynamic Space
- Abstract
Conventionally, architects have relied on qualities of elements such as materiality, light and shadow, solids and voids, patterns and paintings, mass, and volume to break out of the static nature of space and enhance the way users experience and perceive architecture. Even though some of these elements and methods have helped create more dynamic spaces, architecture is still bound by the conventional, namely physical, constraints of the discipline. With the introduction of technologies such as augmented reality (AR), it is becoming easier to blend digital and physical realities and create new types of spatial qualities and experiences. This ultimately creates possibilities that had not existed for architects before. As AR technology becomes streamlined and commonly used, architects will no longer be bound by the aforementioned conventional and physical constraints, since they will be able to blend digital and physical elements. Because this technology is not limited by the constraints of the physical world, the nature of the effects AR can bring is unlimited and inherently dynamic. Even though AR cannot replace the primary and conventional qualitative elements of architecture, it can be used to supplement and enhance the experiences and qualities they provide.
- Published
- 2019
139. Improving User Performance in Rehabilitation Exercises
- Author
-
Ocampo, Renz J R
- Subjects
- Functional Capacity Evaluation, Augmented Reality, Serious Games, Robotics, Rehabilitation, Virtual Reality
- Abstract
Disabling events such as stroke affect millions of people worldwide, creating a need for efficient and functional rehabilitation therapies that help patients regain motor function for reintegration into their normal lives. Rehabilitation regimes often involve performing exercises that mimic the movements done in activities of daily living. These are sometimes complemented with serious games controlled through a robotic user interface to increase patients' motivation, further increasing the likelihood of the therapy's success. However, alongside physical disability, some patients (e.g., stroke patients) develop cognitive deficiencies that affect their ability to think, plan, and carry out tasks. In such cases, serious games, which are commonly displayed to the patient on a 2D monitor, may be too hard due to the spatial disconnect between the visual coordinate frame (screen frame) and the hand coordinate frame (robotic user interface frame): patients have to perform mental transformations to align their hand movements with their movements on-screen. The colocation of visual and motor frames for rehabilitation in commercial devices is still in its infancy, and while there is research regarding the use of visual-motor colocation in rehabilitation, its effectiveness has not yet been explored. This thesis presents a study of the effectiveness of visual-motor colocation in rehabilitation exercises, integrating augmented reality into serious games to achieve this colocation. A technique called projection mapping is utilized to project digitally constructed objects onto the real-world environment. Physical interaction with these objects is handled through a haptic user interface.
The system is comprised of a projector, a game engine to create the virtual environment, a haptic user interface to interact with and receive force feedback from the virtual objects, and a depth sensor to implement head tracking in 3D scenarios. The system design and the investigations in this study consist of two stages: first implementing visual-haptic colocation in a 2D spatial augmented-reality display in Chapter 3, and then extending the work into a 3D environment in Chapter 4. The 2D case involves a task where reaching motions are performed using a 2D planar haptic robot. For the 3D case, three tasks are presented, each requiring a combination of spatial accuracy, awareness, and manipulation. Disability-induced cognitive deficiency is simulated in able-bodied participants by putting them under a cognitive load while performing the tasks. Each of the tasks in the 2D and 3D cases is compared to its non-colocated counterpart (tasks displayed on a 2D screen placed in front of the user) in terms of several user performance indicators. Results show a significant increase in user performance when the visual and motor frames are colocated, for both the 2D and 3D cases. Furthermore, one of the 3D tasks showed that visual-motor colocation can alleviate the negative effects of cognitive loading. Finally, after validating the effectiveness of AR in robot-assisted therapy, we combine AR with a heavy-duty robot in Chapter 5 and explore the use of this robot-AR system in occupational rehabilitation and functional capacity evaluations. The biomechanics of the user's arm while performing a task with the robot-AR system is compared with the arm biomechanics for an equivalent real-world task.
An analysis of the similarity of the arm biomechanics is carried out to determine whether the robot-AR system can produce the same upper-limb movements as conventional rehabilitation practices. By increasing user performance, which consequently increases the likelihood of success in performing the exercise, this work of bridging the spatial disparity between the two frames can potentially improve the efficiency of current rehabilitation practices that use serious games for therapy. It has been shown that users who are set up to succeed are more likely to stay engaged in rehabilitation. This becomes a positive feedback loop: as the patient's performance improves with practice, the improvement allows the patient to do even more. While not all patients achieve this positive feedback structure, we hope to make it easier for patients to reach this "threshold."
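The spatial disconnect between screen frame and hand frame described above is, in essence, a rigid transform that the patient must mentally invert; a minimal sketch, with a hypothetical rotation angle and offset (colocation corresponds to the identity transform):

```python
import math

def hand_to_screen(point, angle_deg, offset):
    """Map a point in the hand (robot) frame into the screen frame via a
    rigid 2D transform: rotation by angle_deg, then translation by offset.

    With colocated visual and motor frames this transform is the identity,
    so no mental remapping is required of the patient.
    """
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a) + offset[0],
            x * math.sin(a) + y * math.cos(a) + offset[1])

# Hypothetical non-colocated setup: screen frame rotated 90 degrees from
# the hand frame and shifted by (100, 0) display units.
print(hand_to_screen((10.0, 0.0), 90.0, (100.0, 0.0)))
```

A 10-unit reach "forward" in the hand frame appears as a sideways offset motion on the screen, which is exactly the mismatch colocation removes.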
- Published
- 2019
140. AUGMARENA / A TRANS-DIMENSIONAL SOCIAL NETWORK
- Author
-
Mansoor, Mohammed
- Subjects
- Urbanism, Computer science, Bio-architecture, solar panel design, Augmented Reality, Virtual Reality, architecture, Bioengineering
- Abstract
When I think of architecture, images come to my mind: images of worlds different from ours. I believe our world is a collection of overlapping microcosms of different scales. We enter and exit these microcosms daily, but a few of them are so small or so large in scale that they become inconceivable to us: the space within the complex matrix of a sponge or the trabecular structure of a bone; the spaces between the growing branches of a tree or within the riverine system of a river; the complex neural network of our brain or the networks of streets in our cities. Everything has a world of its own; we have our own world, built through interactions with forces from the outside world. There is another world made up of code, the digital world, where everything is possible. It is a world built with our imagination, one that has redefined the way we socialize, communicate, entertain, and exchange ideas. We see the influence of the digital world more pressing today than at any time in history; the digital has turned from a mere anomaly within reality into a new reality. My journey through the M.S. AAD has been the transition from ecology-centered architecture to digital-ecology-centered architecture, with a focus on technology as the main driving force.
- Published
- 2019
141. Arch[e]ology
- Author
-
Lee, Chang-Feng
- Subjects
- Urbanism, industrial zone, architecture, Augmented Reality, Ecology
- Abstract
Architecture as an interface: Architecture, for me, is an interface capable of re-bridging the connection between nature and the manmade world. Architecture, rather than a lifeless container for human beings, is a diverse, complex system or infrastructure that can adapt to our surroundings. It can not only accommodate humans, but also has the potential to mitigate environmental dilemmas or, at a minimum, increase awareness of them. Throughout my investigation at Cornell, a multidisciplinary approach was indispensable, since it empowers architecture to explore more innovative possibilities and experimental practices. Within this year, I focused on five different relationships between ecology and architecture by exploring the intersection of human civilization and the natural world. These projects utilized various methodological approaches to architecture, including ecological research with insects and algae; site-specific research in the Harlem neighborhood, Washington Square Park, and an industrial zone in Armenia; and an experimental process using augmented and virtual reality to produce a digitalized ecology. Arch[e]ology refers to the interface between architecture and ecology, and brings together the design projects I was involved in throughout my study at Cornell. It aims to present the interdisciplinary aspect of architecture and conveys the notion that architecture today is boundary-less: it can potentially transform the modern architectural typology, overturn the architecture we have been taking for granted, and rethink problematic issues within our society.
- Published
- 2019
142. Astral Logistics
- Author
-
Braun, David
- Subjects
- Fine Arts, art, fine art, virtual reality, vr, immaterialism, art religion, religioneering, augmented reality
- Abstract
Astral Logistics is a post-object exploration of immaterialism, Religioneering, and the obsolescence of the body, using virtual and augmented reality technology set in a mock-retail environment. The overarching purpose of Astral Logistics is to challenge and question the monopoly the major religions have over the spiritual lives of people and to establish Astral Logistics and its parent Art Religion, Super-Psycho-Synergetic-Eleventyism (Eleventyism for short), as a critique and viable alternative. Astral Logistics was originally conceived as a religion-generation engine, a Religionator, in the form of a retail store where one could create and manifest one's own custom religion, delivered as a sculptural object either digitally via augmented reality or as a 3D print. It is my goal to empower the individual and, through Religioneering, establish the Art Religion as a new medium. Astral Logistics asks the viewer to question whether a sculpture, or any art object, must take solid, corporeal form in the 'real' world to be a work of art, and challenges the status of art galleries and museums as the ultimate validating venues for art by using a medium that can live solely on the internet or a user's own computer.
- Published
- 2019
143. TrailXplorer: An interactive mobile educational tool to promote safety awareness and prevent injuries from outdoor adventures.
- Author
-
Xu, Zhuoxin
- Subjects
- Augmented reality, Educational tool, Interactive application, Prevent injuries, Safety awareness
- Abstract
This thesis introduces an interactive mobile prototype to promote safety awareness and a visual guideline to assist outdoor adventurers. Nature is a place where many people go to find peace and relax. According to the "Outdoor Participation Report 2017" (the Coleman Company, 2017), 144.4 million Americans participated in outdoor activities in 2017. Unfortunately, for a multitude of reasons, some people encounter an unfortunate event that results in injury or death. As the number of outdoor participants has continued to increase annually, safety awareness and skills training have become more crucial than ever. This project sought not only to teach people outdoor skills and reduce injuries, but also to propose new interactive methods that can support them in the future. The overall purpose of this project was to reduce injuries from outdoor activities by promoting safety awareness and educating people in outdoor activity skills. By providing users with information about environmental conditions through augmented reality technology, users can gain better insight into their surroundings as well as knowledge of safe solutions when they go outdoors. Therefore, future outdoor enthusiasts will be able to enjoy better and safer adventures. More importantly, this project explored how to integrate emerging technologies into people's real-life experiences while helping them solve problems. The deliverables of this project included three parts. First, user research: methods such as questionnaires, interviews, and user personas were used to identify the target audience, their needs, and their pain points. Second, the interaction design process, which included planning and documenting the user flow and defining the design through wireframes and visual design iterations; to find the optimal user experience, scenarios were evaluated based on user objectives.
Lastly, a demo video was created to help communicate the workflow by simulating the use cases. This project proposed combining user experience methods, interface design, and augmented reality technology to deliver a better and safer outdoor experience.
- Published
- 2019
144. ROVAR: Authentic Travel Leveraging Local Social Media
- Author
-
Arun, Akshay Kumar
- Subjects
- Augmented reality, Emotion driven design, Interaction design, Travel, User experience design, User interface design
- Abstract
In our era of constant technological development and an intricate network of information exchange, people increasingly rely on online sources for their travels. But because of the time crunch people travel in, many places of authentic value go unnoticed. Take, for example, the ramen shop in Sankri, at the foothills of the Siwalik ranges of the Himalayas. It was there that I had one of the most authentic bowls of ramen, at a small, quaint shop that used hill spices to give it a unique flavor. This shop was nowhere on the map, and had I not stumbled upon it by accident, I would never have had that experience, which was truly authentic to that place. Many such sites are overshadowed by the commercial nature of travel today. The travel and tourism industry is one of the largest industries in the United States, making a total contribution of 1.5 trillion U.S. dollars to GDP in 2015, and was forecast to contribute more than 2.6 trillion U.S. dollars by 2027. The most popular vacation type for U.S. travelers in 2017 was the beach vacation, followed by all-inclusive packages. Because of this commercial nature of tourism, places of authentic value tend to get lost, and people fail to have the local experience. There are many travel guide websites and applications that give useful search results for different places to visit: Yelp, TripAdvisor, Booking.com, and more. But most of the time they do not surface the most authentic locations in a place, and they do not put you in touch with the local culture. Airbnb does a better job at this by helping travelers meet local residents and stay at their place as rent-paying guests, which can immerse travelers in the local culture.
My thesis explores the interaction methods involved in providing users with an authentic experience while leveraging local social media to bring local gems, i.e., places that have authentic value, onto the map, giving travelers a chance at a truly authentic travel experience. Travelers are continuously notified of local gems when they are near one, using geofencing technology and augmented reality, so they don't miss out on anything authentic to a region. They also have the opportunity to contribute to the travel community and build their credibility as travelers by marking new places they discover, and by rating and collecting places that other travelers have marked. This application aspires to bring the joy and serendipity of discovery in the form of a mobile application.
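The geofenced notification described above reduces to a great-circle distance check against the marked gems; a minimal sketch, with hypothetical coordinates and a hypothetical 200 m trigger radius:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_gems(user, gems, radius_m=200.0):
    """Return the names of local gems within radius_m of the user's position."""
    return [name for name, lat, lon in gems
            if haversine_m(user[0], user[1], lat, lon) <= radius_m]

# Hypothetical traveler position near Sankri and two marked places:
user = (30.9618, 78.1850)
gems = [("ramen shop", 30.9620, 78.1852), ("viewpoint", 30.9700, 78.2000)]
print(nearby_gems(user, gems))  # → ['ramen shop']
```

In a real app this check would run as the device's location updates, triggering the AR notification when a gem enters the radius.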
- Published
- 2019
145. Modeling Color Appearance in Augmented Reality
- Author
-
Hassani, Nargess
- Subjects
- AR, Augmented reality, Color appearance model, Color perception, Mixed reality, MR
- Abstract
Augmented reality (AR) is a developing technology that is expected to become the next interface between humans and computers. One of the most common designs of AR devices is the optical see-through head-mounted display (HMD). In this design, the virtual content presented on displays embedded inside the device is optically superimposed on the real world, which makes the virtual content appear transparent. Color appearance in see-through AR designs is a complicated subject, because it depends on many factors, including the ambient light, the color appearance of the virtual content, and the color appearance of the real background. As with display technology, controlling the color appearance of content is vital for many AR applications. In this research, color appearance in the see-through design of an augmented reality environment is studied and modeled. Using a bench-top optical mixing apparatus as an AR simulator, objective measurements of mixed colors in AR were performed to study the behavior of light in an AR environment. Psychophysical color matching experiments were performed to understand color perception in AR. These experiments were performed first on simple 2D stimuli, with a single color as both background and foreground, and later on more visually complex stimuli to better represent the real content presented in AR. Color perception in the AR environment was compared to color perception on a display, which showed the two are different. The applicability of CAM16, one of the most comprehensive current color appearance models, in the AR environment was evaluated. The results showed that CAM16 is not accurate in predicting color appearance in the AR environment.
To model color appearance in the AR environment, four approaches were developed using modifications in tristimulus and color appearance spaces; the best performance was achieved by Approach 2, which predicted the tristimulus values of the mixed content from the background and foreground colors.
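The additive light behavior that makes see-through content appear transparent can be sketched as follows; this is a simplified illustration assuming purely additive mixing with a single combiner-transmittance factor, not the thesis's actual Approach 2 model.

```python
def mix_tristimulus(background_xyz, foreground_xyz, transmittance=1.0):
    """Predict the tristimulus values of optically superimposed content.

    In an optical see-through display, light from the virtual content adds
    to the real-background light attenuated by the combiner's transmittance,
    so the mixed stimulus is (roughly) a weighted sum of the two.
    """
    return tuple(transmittance * b + f
                 for b, f in zip(background_xyz, foreground_xyz))

# Hypothetical CIE XYZ values for a grey background and a reddish overlay:
background = (40.0, 42.0, 45.0)
foreground = (20.0, 12.0, 5.0)
print(mix_tristimulus(background, foreground, transmittance=0.8))
```

Because the background always leaks through, a dark virtual color can never appear darker than the scene behind it, which is one reason standard appearance models struggle in AR.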
- Published
- 2019
146. Creating a Safer Running Experience: Reducing Runner and Vehicular Traffic Incidents
- Author
-
Cooper, Chad A
- Subjects
- Accessibility, Augmented reality, Emerging technology, User experience, Wayfinding, Wearables
- Abstract
A routine run can turn into a final run in a matter of seconds. Runners and drivers are exposed to a variety of distractions that hinder their ability to react safely to potential incidents. With the help of augmented reality and wearable technology, runners can help reduce traffic-related incidents by staying focused on the road and their surroundings. This project investigates how AR technology can be leveraged to offer the safest run without diminishing the running user experience.
- Published
- 2019
147. COCO-Bridge: Common Objects in Context Dataset and Benchmark for Structural Detail Detection of Bridges
- Author
-
Bianchi, Eric Loran
- Subjects
- Convolutional neural network, bridge inspection, UAS, CNN, Artificial Intelligence, Augmented Reality, Deep learning (Machine learning), Machine learning
- Abstract
Common Objects in Context for bridge inspection (COCO-Bridge) was introduced for use by unmanned aircraft systems (UAS) to assist in GPS-denied environments, flight planning, and detail identification and contextualization, but it has far-reaching applications in areas such as augmented reality (AR) and other artificial intelligence (AI) platforms. COCO-Bridge is an annotated dataset on which a convolutional neural network (CNN) can be trained to identify specific structural details. Many annotated datasets have been developed to detect regions of interest in images for a wide variety of applications and industries. While some annotated datasets of structural defects (primarily cracks) have been developed, most efforts are individualized and focus on a small niche of the industry. This effort initiated a benchmark dataset with a focus on structural details. This research investigated the parameters required for detail identification and evaluated performance enhancements to the annotation process. The image dataset consists of four structural details that are commonly reviewed and rated during bridge inspections: bearings, cover plate terminations, gusset plate connections, and out-of-plane stiffeners. This initial version of COCO-Bridge includes a total of 774 images: 10% for evaluation and 90% for training. Several models were used with the dataset to evaluate model overfitting and performance enhancements from augmentation and the number of iteration steps. Methods to economize the predictive capabilities of the model without adding unique data were investigated, to reduce the required number of training images. Results from model tests indicated the following: additional images, mirrored along the vertical axis, improved precision and accuracy; increasing the number of computational step iterations improved predictive precision and accuracy; and the optimal confidence threshold for operation was 25%.
Annotation recommendations and improvements were also discovered and documented as a result of the research.
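The vertical-axis mirroring that improved precision and accuracy can be sketched as a left-right flip that also remaps the annotation bounding boxes; the toy image and box label below are illustrative, not drawn from COCO-Bridge.

```python
def flip_horizontal(image, boxes):
    """Mirror an image (row-major list of pixel rows) about its vertical axis
    and remap bounding boxes given as (xmin, ymin, xmax, ymax) pixel coords."""
    width = len(image[0])
    flipped = [list(reversed(row)) for row in image]
    # A box's horizontal extent [xmin, xmax] maps to [width - xmax, width - xmin];
    # the vertical extent is unchanged by a left-right flip.
    new_boxes = [(width - xmax, ymin, width - xmin, ymax)
                 for xmin, ymin, xmax, ymax in boxes]
    return flipped, new_boxes

# A 2x4 toy "image" and one annotation covering its left half:
image = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
boxes = [(0, 0, 2, 2)]  # e.g. a "gusset plate" label (purely illustrative)
flipped, new_boxes = flip_horizontal(image, boxes)
print(flipped[0], new_boxes)  # → [4, 3, 2, 1] [(2, 0, 4, 2)]
```

Doubling the training set this way adds mirrored views without collecting new inspection imagery, which is the economizing effect the results describe.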
- Published
- 2019
148. Exploiting Human Factors and UI Characteristics for Interactive Data Exploration
- Author
-
Khan, Meraj Ahmed
- Subjects
- Computer Science, database systems, interactive data visualization, interactive data exploration, interactive visualization, augmented reality, AR, data transformations
- Abstract
The development of new data interaction modalities and concurrent advancements in hardware and database technology have dramatically changed the way users interact with data. Data analysis tasks have turned into interactive, instantaneous explorations of the query-result space. Database systems were not designed to maintain interactive latency while handling the unique and unprecedented workloads posed by modern interaction modalities. Our work strives to bridge the gap between end-users and database systems, allowing users to engage in interactive data exploration without an undue cognitive burden in processing the system feedback. We utilize user-interaction behavior, user-interface characteristics, and the human factors that drive the data exploration process to handle the challenges of interactive latency and cognitive overload in different interactive data exploration scenarios. This dissertation presents a middleware component, Flux Capacitor, that insulates the backend from the bursty, query-intensive workloads generated by modern web, mobile, touch, and gesture-driven next-generation interfaces. Flux Capacitor uses prefetching and caching strategies informed by the inherent physics metaphors of UI widgets, such as friction and inertia in range sliders and maps, and by typical user-interaction behavior patterns, enabling low interaction response times while intelligently trading off accuracy when required. We also present DreamStore, a data substrate addressing the unique data management and user interaction challenges posed by Augmented Reality (AR) interfaces for data. The platform incorporates optimizations for AR workload characteristics at various layers of the data stack, treating AR queries as first-class queries.
DreamStore provides a user-focus-based mechanism to handle visual clutter, a form of cognitive overload, and enables interactive latency for user actions through prefetching and caching strategies that utilize the user's position and movement pattern and the physical placement of AR objects. Even with a performant system supporting interactive latency, a perceived disconnect between user action and system response can be a source of cognitive overload. For example, in an interactive query session for structured data analysis, it is often difficult to comprehend a succession of transformations, especially for complex queries. Thus, to facilitate understanding of each data transformation and to provide continuous feedback, we introduce Data Tweening. Data Tweening presents to the user a series of incremental visual changes that draw the user's attention to the sequence with motion and highlight points of change with other visual features such as color. The parallel presentation of visual changes, encoded in an engaging animation, offloads some of the user's cognitive burden in processing the changes in the result to their perceptual system. Through empirical evaluation, we demonstrate the efficacy of informing query processing and data presentation for ad-hoc data exploration with the human factors that drive the interactive data exploration process and the inherent characteristics of the user interface, toward achieving interactive latency and reducing cognitive load.
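The physics-metaphor-informed prefetching attributed to Flux Capacitor above can be sketched as extrapolating a flung slider's position under friction; the friction factor and horizon below are hypothetical illustration parameters, not the system's actual ones.

```python
def prefetch_window(positions, friction=0.8, horizon=5):
    """Predict which slider values to prefetch from recent observed positions.

    Extrapolates the slider's motion using its last velocity, decayed each
    step by a friction factor, mimicking the inertia of a flung UI widget:
    the results the slider is likely to land on get fetched ahead of time.
    """
    if len(positions) < 2:
        return {positions[-1]} if positions else set()
    velocity = positions[-1] - positions[-2]
    pos = float(positions[-1])
    window = {positions[-1]}
    for _ in range(horizon):
        velocity *= friction   # inertia decays under friction
        pos += velocity
        window.add(round(pos)) # prefetch the query result at this value
    return window

# A user drags a range slider rightward through values 10, 14, 18:
print(sorted(prefetch_window([10, 14, 18])))
```

While the user's gesture is still in flight, the backend can warm its cache with the queries for these predicted values, trading a little wasted work for low perceived latency.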
- Published
- 2019
149. Reexamining Deus ex Machina: Artificial Intelligence, Theater, & a New Work
- Author
-
Arnold, Nathan S.
- Subjects
- Theater, Artificial Intelligence, Adult Education, theater, artificial intelligence, AI, conversational artificial intelligence, conversational AI, intelligence augmentation, IA, AI for moral enhancement, AIenhancement, augmented reality, AR, lifelong learning, educational entertainment, edutainment
- Abstract
Inspired by continuing developments in artificial intelligence, this creative thesis comprises an overview of conversational artificial intelligence and an exploration of how education and entertainment overlap in the broad genre of science plays. I argue that the combination of education and entertainment can be mutually beneficial, and that this relationship should be exploited to increase public accessibility to information. Moreover, this thesis documents the process of writing, producing, directing, and designing a new work entitled Time to Think: A Short Play about Ignorance & Bliss. The original script, including marginal director’s notes, is appended.
- Published
- 2019
150. Reinventing Touch with Body Channel Communication - System Design from Electric Fields To Mixed Reality
- Author
-
Varga, Virag
- Subjects
- Body channel communication, Capacitive coupling, Embedded systems, Computer networks, Augmented reality, Channel characterization, Life sciences, Data processing, computer science
- Abstract
Even though capacitive touchscreens are a prevalent technology nowadays, Body Channel Communication (BCC) remains relatively unknown, although the two technologies share the same core idea. Both work with weak electric signals and rely on the ability of the human body to act as a conductor. However, while touchscreens work with the disrupted signals at the place of origin, BCC proposes to employ the human body as a carrier medium for digital communication in a tangible interaction setup. In effect, BCC reinvents touch interactions by augmenting the user's touch events with digital content. When a person holds two BCC-enabled objects in her/his hands, she/he creates a physical transmission path between them, which allows general data transmission between those devices. The user naturally controls that data transmission by simply touching or releasing the objects. BCC holds various exciting opportunities for the future generation of applications, whether in terms of interactivity or simply as a communication alternative. Yet the exact prerequisites, operating conditions, and possibilities of using BCC at the application level have not been explored. As opposed to most work in this space, this dissertation is not primarily concerned with the BCC transmission mechanism itself, but is motivated by the vision of having BCC as an accessible component in complex end-user application design. To take a leap forward towards this vision, we propose a systematic breakdown of the problem space. By following a typical network stack approach, we can separately analyze the channel, the base system requirements, various networking concepts, and eventually, application-level integration, all while keeping not just the common objective in mind, but also building an understanding of how these layers affect each other.
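The interaction model the abstract describes, a data link that exists only while the user physically bridges two devices, can be sketched as a small state machine. This is a purely illustrative abstraction (all class and method names are assumptions, and the electrical channel is reduced to a set membership test), not the dissertation's system.

```python
class BccDevice:
    """A BCC-enabled object that can receive data."""

    def __init__(self, name):
        self.name = name
        self.inbox = []


class BodyChannel:
    """The user's body acts as the transmission medium: a link between
    two devices exists only while the user touches both of them."""

    def __init__(self):
        self.touched = set()

    def touch(self, device):
        self.touched.add(device)

    def release(self, device):
        self.touched.discard(device)

    def send(self, src, dst, payload):
        # Transmission succeeds only if the user currently bridges
        # both endpoints; releasing either device breaks the path.
        if src in self.touched and dst in self.touched:
            dst.inbox.append(payload)
            return True
        return False
```

The point of the sketch is the control model: touch and release are the user's "connect" and "disconnect" primitives, which is what makes BCC a tangible-interaction technique rather than just another radio.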
- Published
- 2019
Discovery Service for Jio Institute Digital Library