847 results for "HUMAN-MACHINE INTERFACE"
Search Results
2. Driving and Flying Simulators: A Review on Relevant Considerations and Trends
- Author
- Estefany Acuña and David Serje
- Subjects
Aeronautics, Computer science, Mechanical Engineering, Driving simulation, Human–machine interface, Pilot training, Flight simulator, Civil and Structural Engineering
- Abstract
Flying and driving simulation has attracted an enormous and growing community across a wide variety of settings, including research centers, driver and pilot training academies, vehicle-testing facilities, amusement parks, and even the homes of enthusiasts, by providing carefully integrated visual and perceptual illusions of driving or flying real vehicles. The global research on this subject is explored for the period 2000 to 2019 from an interdisciplinary perspective based on a systematic methodology, providing both new and experienced researchers with broad guidance toward key aspects for further investigation and development. Emphasis is given to the analysis of the findings, and in particular to their applicability, to an extent not attempted earlier, by considering both human and machine aspects.
- Published
- 2021
- Full Text
- View/download PDF
3. Online programming system for robotic fillet welding in Industry 4.0
- Author
- Francisco-Javier Badesa, Fernando M. Quintana, Ignacio Diaz-Cano, Arturo Morgado-Estevez, Pedro L. Galindo, and Miguel Lopez-Fuster
- Subjects
Industry 4.0, Computer science, Systems and Control (eess.SY), Welding, Automation, Industrial and Manufacturing Engineering, Manufacturing engineering, Computer Science Applications, Robotics (cs.RO), Shipbuilding, Control and Systems Engineering, Human–machine interface, Fillet (mechanics)
- Abstract
Fillet welding is one of the most widespread types of welding in industry, and it is still carried out manually or automated by contact. This paper describes an online programming system for non-contact fillet welding robots handling U- and L-shaped structures, responding to the needs of the Fourth Industrial Revolution. The authors propose an online robot programming methodology that eliminates unnecessary steps traditionally performed in robotic welding, so that the operator completes the welding task in only three steps: first, choose the piece to weld; then, enter the welding parameters; finally, send the automatically generated program to the robot. The system performed the fillet welding task with a shorter preparation time than the methods it was compared against, using fewer components than other systems: a structured-light 3D camera, two computers, and a concentrator, in addition to the six-axis industrial robotic arm. The operating complexity of the system has been reduced as much as possible. To the best of the authors' knowledge, there is no scientific or commercial evidence of an online robot programming system capable of performing a fillet welding process while keeping it completely transparent to the operator and framed within the Industry 4.0 paradigm. Its commercial potential lies mainly in its simple, low-cost implementation in a flexible system capable of adapting to any industrial fillet welding job and to any support that can accommodate it.
- Published
- 2021
- Full Text
- View/download PDF
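The three-step operator workflow described in the abstract above (choose the piece, enter the welding parameters, send the automatically generated program) can be sketched in a few lines. The data fields and command names below are illustrative assumptions, not the paper's actual robot dialect:

```python
from dataclasses import dataclass

@dataclass
class WeldJob:
    piece_id: str            # step 1: which piece to weld
    current_a: float         # step 2: welding parameters
    travel_speed_mm_s: float

def generate_program(job: WeldJob, seam_points):
    """Step 3: turn the scanned seam into a robot program.
    MOVE_LIN / SET_CURRENT / ARC_OFF are invented command names."""
    program = [f"; program for {job.piece_id}"]
    program.append(f"SET_CURRENT {job.current_a}")
    for x, y, z in seam_points:
        program.append(f"MOVE_LIN {x:.1f} {y:.1f} {z:.1f} V={job.travel_speed_mm_s}")
    program.append("ARC_OFF")
    return program
```

A seam scanned by the 3D camera would supply `seam_points`; the point of the sketch is only that the operator's inputs reduce to the piece and the parameters, with program generation fully automatic.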
4. Configuring a VR simulator for the evaluation of advanced human–machine interfaces for hydraulic excavators
- Author
- Federico Morosi and Giandomenico Caruso
- Subjects
Virtual reality simulator, Excavator coordinated control, Computer science, Human–machine interface, Usability, Virtual reality, Computer Graphics and Computer-Aided Design, Human-Computer Interaction, Immersion (virtual reality), Multisensory feedback, Haptic control, Human–machine system, Software, Simulation, Haptic technology
- Abstract
This study evaluates the impact of different technical solutions in a virtual reality simulator used to assess advanced human–machine interfaces for hydraulic excavators based on a new coordinated control paradigm and haptic feedback. By mimicking the end-effector movements, the control is conceived to speed up the learning process for novice operators and to reduce the mental overload on those already trained. The design of the device can fail if ergonomics, usability, and performance are not grounded in realistic simulations, where the combination of visual, auditory, and haptic feedback makes users feel they are in a real environment rather than a computer-generated one. For this reason, a testing campaign involving 10 subjects was designed to determine the optimal hardware set-up for a higher immersion in the VR experience. Both the audio-video configurations of the simulator (head-mounted display and surround system vs. monitor and embedded speakers) and the two types of haptic feedback for the soil-bucket interaction (contact vs. shaker) are compared in three different scenarios. The performance of both the users and the simulator is evaluated by processing subjective and objective data. The results show that the immersive set-up improves the users' efficiency and ergonomics without imposing any extra mental or physical effort, while the preferred haptic feedback (contact) is not the more efficient one (shaker).
- Published
- 2021
- Full Text
- View/download PDF
5. Effects of Sensor Resolution and Localization Rate on the Performance of a Myokinetic Control Interface
- Author
- Edoardo Sinibaldi, Christian Cipriani, Federico Masiero, and Francesco Clemente
- Subjects
Computer science, Computation, Interface (computing), Magnetic tracking, Myokinetic interface, Sensor selection, Upper limb prosthetics, Computer vision, Electrical and Electronic Engineering, Instrumentation, Magnetostatics, Magnetic field, Magnet, Human-machine interface, Artificial intelligence
- Abstract
Magnetic tracking systems have been widely investigated in biomedical engineering due to the transparency of the human body to static magnetic fields. We recently proposed a novel human-machine interface for prosthetic applications, namely the myokinetic interface, which controls multi-articulated prostheses by tracking magnets implanted in the residual muscles of individuals with amputation. Previous studies in this area focused solely on the choice and tuning of the localization algorithm. Here, we addressed the role of the intrinsic properties of the sensors, analysing their effects on the tracking accuracy and on the computation time of the localization algorithm through experimentally verified computer simulations. We observed that the tracking accuracy is primarily affected by the localization rate, which is directly related to the sampling frequency of the sensors, and less significantly affected by the sensor resolution. The computation time, instead, proved positively correlated with the number of tracked magnets and negatively correlated with the localization rate. Our results may contribute to the development of novel human-machine interfaces for prosthetic limbs and could be extended to a broad range of applications involving magnetic tracking.
- Published
- 2021
- Full Text
- View/download PDF
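The kind of magnet localization discussed in the abstract above can be illustrated with a point-dipole field model and a brute-force least-squares search. The sensor layout, magnetic moment, and search grid below are invented for illustration; a real tracker would use an iterative optimizer rather than a grid:

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / 4*pi in T*m/A

def dipole_field(magnet_pos, moment, sensor_pos):
    """Magnetic flux density of a point dipole at one sensor location."""
    r = sensor_pos - magnet_pos
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0_4PI * (3.0 * np.dot(moment, r_hat) * r_hat - moment) / d**3

def localize(readings, sensors, moment, grid):
    """Return the grid point whose predicted fields best match the
    readings (minimum sum of squared residuals)."""
    best, best_err = None, np.inf
    for p in grid:
        pred = np.array([dipole_field(p, moment, s) for s in sensors])
        err = np.sum((pred - readings) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best
```

The abstract's finding maps directly onto this sketch: how often `localize` is called is the localization rate, and quantizing `readings` models finite sensor resolution.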
6. First-Stage Evaluation of a Prototype Driver Distraction Human-Machine-Interface Warning System
- Author
- Darren Wood, R. Schnittker, Tim Horberry, Michael G. Lenné, Brendan Lawrence, Michael Fitzharris, Jonny Kuo, and Christine Mulvihill
- Subjects
Warning system, Computer science, Distraction, Human–machine interface, General Medicine, Transportation and communications, Simulation
- Abstract
Recent advances in vehicle technology permit the real-time monitoring of driver state to reduce distraction-related crashes, particularly within the heavy vehicle industry. Relatively little published research has evaluated the human machine interface (HMI) design for these systems, yet the efficacy of in-vehicle technology depends in large part on the acceptability of the system's interface among drivers. Four variations of the HMI of a prototype multi-modal warning system developed by the authors for driver distraction were evaluated in a truck simulator with eight car drivers and six truck drivers. Driver acceptance of the HMIs was assessed using the System Acceptability Scale, and the salience, comprehension, and perceived effectiveness of components of the HMIs (modality, intensity of warning) were assessed using Likert scales. The results showed that participants considered the HMIs to be acceptable and useful, and that the warning components were largely noticed, understood correctly, and perceived to be effective. Although this study identified no major design flaws with the recently developed HMIs, further simulator testing with a larger sample size is recommended to validate the findings. On-road evaluations to assess the impact of the HMIs on real-world safety are a necessary prerequisite for implementation.
- Published
- 2021
- Full Text
- View/download PDF
7. Human-machine system optimization in nuclear facility systems
- Author
- Jonathan K. Corrado
- Subjects
Computer science, Human error, Control (management), Automation, Human interaction, Production (economics), Human–machine system, Nuclear safety, System optimization, Human performance, Nuclear Energy and Engineering, Risk analysis (engineering), Human-machine interface, Human error assessment and reduction technique
- Abstract
Present computing power and enhanced technology are progressing at a dramatic rate. These systems can unravel complex issues, assess and control processes, learn, and, in many cases, fully automate production. There is no doubt that technological advancement is improving many aspects of life, changing the landscape of virtually all industries and enhancing production beyond what was thought possible. However, the human is still a part of these systems. Consequently, as systems advance, the role of humans within them will unavoidably continue to adapt as well. Given the human tendency for error, this technological advancement should compel a persistent emphasis on human error reduction as part of maximizing system efficiency and safety, especially in the context of the nuclear industry. Within this context, as new systems are designed and the role of the human is transformed, human error should be targeted for a significant decrease relative to predecessor systems, with an equivalent increase in system stability and safety. This article contends that optimizing the roles of humans and machines in the design and implementation of new types of automation in nuclear facility systems should involve human error reduction without ignoring the essential importance of human interaction within those systems.
- Published
- 2021
8. Using Personas with Visual Impairments to Explore the Design of an Accessible Self-Driving Vehicle Human-Machine Interface
- Author
- Julian Brinkley
- Subjects
Self-driving, Computer science, Human–computer interaction, Human–machine interface, Persona, Visually Impaired Persons
- Abstract
Recent reports have suggested that most self-driving vehicle technology under development is not currently accessible to users with disabilities. We contend that this problem may be at least partially attributable to knowledge gaps in practice-oriented user-centered design research. Missing, we argue, are studies that demonstrate the practical application of user-centered design methodologies in capturing the needs of users with disabilities in the design of automotive systems specifically. We investigated user-centered design, specifically the use of personas, as a methodological tool to inform the design of a self-driving vehicle human-machine interface for blind and low-vision users. We then explore the use of these derived personas in a series of participatory design sessions involving visually impaired co-designers. Our findings suggest that a robust, multi-method UCD process culminating in persona development may be effective in capturing the conceptual model of persons with disabilities and informing the design of automotive systems.
- Published
- 2021
- Full Text
- View/download PDF
9. Impact of interface design on drivers’ behavior in partially automated cars: An on-road study
- Author
- Sabine Langlois, Noé Monsaingeon, Céline Lemercier, Loïc Caroux, and Axelle Mouginé
- Subjects
Situation awareness, Computer science, Interface (computing), Control (management), Transportation, Mode (computer interface), Human–computer interaction, Mode transition, Automated vehicles, Applied Psychology, Civil and Structural Engineering, Eye tracking, Auditory feedback, Multimodal interface, Human–machine interface, Speedometer, Automation, Alertness, Automotive Engineering
- Abstract
In partially automated vehicles, the driver and the automated system share control of the vehicle. Consequently, the driver may have to switch between driving and monitoring activities, which can critically impact the driver's situational awareness. The human–machine interface (HMI) is responsible for efficient collaboration between driver and system. It must keep the driver informed about the status and capabilities of the automated system, so that he or she knows who or what is in charge of the driving. The present study was designed to compare the ability of two HMIs with different information displays to inform the driver about the system's status and capabilities: a driving-centered HMI that displayed information in a multimodal way, with an exocentric representation of the road scene, and a vehicle-centered HMI that displayed information in a more traditional visual way. The impact of these HMIs on drivers was compared in an on-road study. Drivers' eye movements and response times for questions asked while driving were measured. Their verbalizations during the test were also transcribed and coded. Results revealed shorter response times for questions on speed with the exocentric and multimodal HMI. The duration and number of fixations on the speedometer were also greater with the driving-centered HMI. The exocentric and multimodal HMI helped drivers understand the functioning of the system, but was more visually distracting than the traditional HMI. Both HMIs caused mode confusions. The use of a multimodal HMI can be beneficial and should be prioritized by designers. The use of auditory feedback to provide information about the level of automation needs to be explored in longitudinal studies.
- Published
- 2021
- Full Text
- View/download PDF
10. Developments in the human machine interface technologies and their applications: a review
- Author
- Parlad Kumar and Harpreet Singh
- Subjects
Technology, Medical diagnostics, Computer science, Interface (computing), Biomedical Engineering, Automotive industry, General Medicine, Self-Help Devices, User-Computer Interface, Humans, Human–machine interface, Aerospace, Man-Machine Systems, Brain–computer interface
- Abstract
Human-machine interface (HMI) techniques use bioelectrical signals to achieve real-time synchronised communication between the human body and machine functioning. HMI technology not only provides real-time control access but also the ability to control multiple functions at a single instant with modest human input and increased efficiency. HMI technologies yield advanced control access in numerous applications, such as health monitoring, medical diagnostics, development of prosthetic and assistive devices, the automotive and aerospace industries, robotic controls, and many other fields. In this paper, various physiological signals, their acquisition and processing techniques, and their respective applications in different HMI technologies are discussed.
- Published
- 2021
- Full Text
- View/download PDF
11. Experimental investigation into the basic application of force and position control for human-machine team lifting operations in manufacturing
- Author
- Bryan Gaither, Killian Prue, and William R. Longhurst
- Subjects
Engineering drawing, Computer science, Mechanical Engineering, Industrial and Manufacturing Engineering, Human–machine interface, Human–machine system, Material handling, Position control
- Abstract
In manual material handling operations associated with manufacturing, a two-person team often lifts a container. This labor-intensive task could be improved by replacing the two-person team with a human-machine team, and it is hypothesized that a human-machine team could behave similarly to a two-person team when lifting a container. To test this hypothesis, the presented research experimentally investigated the application of force and position control to a machine working collaboratively with a human to lift a constructed container. A basic experimental approach to lifting and control was undertaken at benchtop scale to evaluate the results for proof of concept and further development. For the experimental setup presented, the results show that a combined force and position control architecture delivered better lifting performance than standalone force or position control. It was concluded that the combined strategy created better team behavior for the machine as it worked collaboratively with the human. The advantage of the control approach presented is its simplicity and its ability to be retrofitted to existing equipment. Its novelty lies in the way the force and position errors from independent controllers are combined into a single command signal, with no priority given to either force or position.
- Published
- 2021
- Full Text
- View/download PDF
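The control idea in the abstract above, two independent controllers whose outputs are merged into one command with no priority given to either, can be sketched with two proportional loops. The gains and setpoints below are arbitrary illustration values, not taken from the paper:

```python
class PController:
    """A bare proportional controller; the paper's controllers are
    independent, so each loop keeps its own gain."""
    def __init__(self, kp):
        self.kp = kp

    def update(self, setpoint, measured):
        return self.kp * (setpoint - measured)

def combined_command(force_ctrl, pos_ctrl, f_set, f_meas, x_set, x_meas):
    """Sum the force-loop and position-loop outputs into a single
    actuator command, with no priority given to either error."""
    return force_ctrl.update(f_set, f_meas) + pos_ctrl.update(x_set, x_meas)
```

Because the two errors simply add, the machine yields when the human pushes (force error) while still tracking the lift trajectory (position error), which is the team behavior the experiment measured.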
12. Self-Driving Vehicles and Pedestrian Interaction: Does an External Human-Machine Interface Mitigate the Threat of a Tinted Windshield or a Distracted Driver?
- Author
- Vanessa Stange, Stefanie M. Faas, and Martin Baumann
- Subjects
Computer science, Interface (computing), Eye contact, Human Factors and Ergonomics, Pedestrian, Computer Science Applications, Human-Computer Interaction, Self-driving, Windshield, Human–machine interface, Simulation
- Abstract
With self-driving vehicles (SDVs), pedestrians lose the possibility of making eye contact with an attentive driver. This study investigated whether an external human-machine interface (eHMI) displa...
- Published
- 2021
- Full Text
- View/download PDF
13. Modulation of Velocity Perception by Engine Vibration While Driving
- Author
- Motoki Tachiiri, Akihito Sano, and Yoshihiro Tanaka
- Subjects
General Computer Science, Computer science, Acoustics, Perception, Vibration, Modulation, Human–machine interface, Electrical and Electronic Engineering
- Abstract
While driving a vehicle, perceiving velocity is important for appropriate operation and is one of the most important factors for preventing collisions and traffic congestion. In contexts where perceiving velocity changes is difficult, such as on an undulating road, the velocity may exceed the speed limit or traffic congestion may occur due to heavy braking to avoid a collision. Hence, we proposed a method of modulating the perception of velocity through tactile stimulation to promote adequate operation for the driver. In contrast to methods using visual and auditory stimulation, this method has advantages of not increasing the visual cognitive load, not disturbing the enjoyment of music, and reliably stimulating the driver. In this study, we constructed a velocity perception model based on vibrotactile stimulation induced by the engine speed and proposed a method of changing the vibrotactile stimulation by altering the shift position of the transmission to modulate the perception of velocity without additional vibration actuators, regardless of the actual velocity. We measured the seat and engine vibration using two different vehicles. The results demonstrated that the peak acceleration frequencies are proportional to engine speed, indicating that the vibration depends upon the engine speed, not the velocity. We implemented a method of changing the shift position in an actual vehicle and verified the feasibility of the method through a psychophysical experiment. The results showed that drivers perceived a higher velocity with increasing engine speed and lower velocity with decreasing engine speed.
- Published
- 2021
- Full Text
- View/download PDF
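The abstract's key observation is that peak vibration frequency tracks engine speed rather than vehicle speed, so at a constant speed the felt vibration can be changed just by shifting gear. A simple drivetrain calculation illustrates this; all ratios and the tire circumference below are illustrative assumptions, not measurements from the paper:

```python
FINAL_DRIVE = 4.1                      # assumed final-drive ratio
GEAR_RATIOS = {3: 1.36, 4: 1.03, 5: 0.84}  # assumed gearbox ratios
TIRE_CIRC_M = 1.95                     # assumed tire circumference (m)

def engine_rpm(speed_kmh, gear):
    """Engine speed implied by vehicle speed and gear selection."""
    wheel_rps = (speed_kmh / 3.6) / TIRE_CIRC_M
    return wheel_rps * GEAR_RATIOS[gear] * FINAL_DRIVE * 60.0

def firing_freq_hz(speed_kmh, gear, n_cyl=4):
    """Dominant firing frequency of a four-stroke engine: each
    cylinder fires once every two crankshaft revolutions."""
    return engine_rpm(speed_kmh, gear) / 60.0 * n_cyl / 2.0
```

Downshifting at a fixed vehicle speed raises the firing frequency, which in the study's terms means the driver receives a "faster" vibrotactile cue and perceives a higher velocity, without any added actuator.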
14. Training simulators for manufacturing processes: Literature review and systematisation of applicability factors
- Author
- Klaus-Dieter Thoben and Benjamin Knoke
- Subjects
General Computer Science, Computer science, General Engineering, Human–machine interface, Training, Manufacturing engineering, Education
- Published
- 2021
- Full Text
- View/download PDF
15. Cobots in maxillofacial surgery – challenges for workplace design and the human-machine-interface
- Author
- Fabian Nokodian, Marten Stepputat, Bernhard Frerich, Frederik Schmatz, Wilko Fluegge, and Florian Beuss
- Subjects
Computer science, Risk analysis (engineering), Human factors and ergonomics, Robot, Human–machine interface, General Earth and Planetary Sciences, General Environmental Science
- Abstract
The global shortage of skilled workers and the growing pressure on medical facilities to work profitably and efficiently increasingly confront clinics and doctors with major problems. Routine medical interventions in particular should be performed quickly and with few staff; nevertheless, cost savings must not come at the expense of patients. One way to resolve this dilemma is the use of flexible, collaborating robots, so-called cobots, that can be integrated into existing surgical processes. This paper presents the requirements for the workplace and the human-machine interface needed for use in the medical environment. Furthermore, an approach to human-centered workplace design is reported. In addition to the design of the workplace using virtual ergonomics and workplace analyses, the kinematic chain as well as applicable end effectors for medical interventions are presented. In the context of a phantom study, the use of the cobot was tested and evaluated for the first time.
- Published
- 2021
- Full Text
- View/download PDF
16. Comparing software frameworks of Augmented Reality solutions for manufacturing
- Author
- Sri Sudha Vijay Keshav Kolla, Andre Sanchez, and Peter Plapper
- Subjects
Augmented Reality, Computer science, Industry 4.0, Software framework, Manufacturing, Industrial and Manufacturing Engineering, HoloLens, Artificial Intelligence, Human–computer interaction, Mixed Reality, Human-machine interface
- Abstract
Augmented reality (AR) is a technology that overlays virtual elements on the physical environment, enhancing perception and conveying additional information to the user. With the emergence of Industry 4.0 concepts in the manufacturing landscape, AR has found its way into improving existing Human-Machine Interfaces (HMI) on the shop floor. The industrial setting offers a wide variety of application opportunities for AR, ranging from training and digital work instructions to quality inspection and remote maintenance. Even though its implementation in industry is rising in popularity, it is still mainly restricted to large companies due to the limited availability of resources in Small and Medium-Sized Enterprises (SME). However, SMEs can benefit from AR solutions in their production processes. Therefore, this research develops and compares two simple, cost-effective AR software frameworks, one for a Hand-Held Device (HHD) and one for a Head-Mounted Device (HMD), which can be applied to developing AR applications for manufacturing. Two AR applications built with these frameworks are presented in the case study section, using an Android device as the HHD and a HoloLens as the HMD. The development structure can be reproduced by a wide range of enterprises with diverse needs and resource availability.
- Published
- 2021
- Full Text
- View/download PDF
17. Adaptive multi-modal interface model concerning mental workload in take-over request during semi-autonomous driving
- Author
- Tetsuo Sawaragi, Weiya Chen, and Toshihiro Hiraoka
- Subjects
Take-over request, Computer science, Interface model, Working memory, Multiple resource theory, Human-machine interface, Workload, Mental workload, Human–computer interaction, Cognitive channel
- Abstract
With the development of automated driving technologies, the human factors involved in automated driving are gaining increasing attention, to balance the convenience brought by the technology against safety risk in commercial vehicle models. One influential human factor is mental workload. In the take-over request (TOR) from autonomous to manual driving at Level 3 of the SAE Levels of Driving Automation, the time window for the driver to gain full comprehension of the driving environment is extremely short, which places the driver under high mental workload. To support the driver during a TOR, we propose an adaptive multi-modal interface model that accounts for mental workload. In this study, we evaluated the reliability of part of the proposed model in a driving-simulator experiment and with experimental data from a previous study.
- Published
- 2021
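A toy version of the workload-adaptive modality choice sketched in the abstract above could look as follows. The load estimates, the threshold, and the assumption that a haptic channel is always available are all illustrative, not the paper's model:

```python
def select_tor_modalities(visual_load, auditory_load, threshold=0.7):
    """Pick take-over-request channels that avoid the most loaded
    perceptual resource (a toy reading of multiple-resource theory;
    loads are normalized 0..1, threshold is an illustrative value)."""
    modalities = []
    if visual_load < threshold:
        modalities.append("visual")
    if auditory_load < threshold:
        modalities.append("auditory")
    modalities.append("haptic")  # assumed always available as a fallback
    return modalities
```

A driver watching a video (high visual load) would thus receive the TOR through auditory and haptic cues rather than an extra visual alert.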
18. Wheelchair control for disabled patients using EMG/EOG based human machine interface: a review
- Author
- Amanpreet Kaur
- Subjects
Computer science, Interface (computing), Biomedical Engineering, Physical medicine and rehabilitation, Wheelchairs, Humans, Disabled Persons, Man-Machine Systems, Rehabilitation, Electromyography, Eye movement, Signal Processing, Computer-Assisted, General Medicine, Electrooculography, Control system, Human–machine interface
- Abstract
The human-machine interface (HMI) and bio-signals have been used to control rehabilitation equipment and improve the lives of people with severe disabilities. This paper presents a review of electromyogram (EMG) and electrooculogram (EOG) signal-based control systems for driving wheelchairs for the disabled. For a paralysed person, EOG is one of the most useful signals, helping them communicate with the environment through eye movements. In the case of amputation, selecting muscles according to the distribution of power and frequency contributes strongly to specific wheelchair motions. Taking into account the day-to-day activities of persons with disabilities, both technologies are being used to design EMG- or EOG-based wheelchairs. This review examines a total of 70 EMG studies and 25 EOG studies published from 2000 to 2019. In addition, it covers current technologies used in wheelchair systems for signal capture, filtering, characterisation, and classification, including control commands such as left and right turns, forward and reverse motion, acceleration, deceleration, and wheelchair stop.
- Published
- 2020
- Full Text
- View/download PDF
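The EOG side of the wheelchair control loop reviewed above is often reduced to thresholding the horizontal and vertical eye-movement channels into discrete commands. The microvolt threshold and command names below are illustrative choices, not taken from any particular reviewed study:

```python
def eog_to_command(h_uv, v_uv, thresh_uv=150.0):
    """Map one horizontal/vertical EOG sample (microvolts) to a
    wheelchair command by simple thresholding; vertical gestures
    are checked first so 'forward'/'stop' win over turns."""
    if v_uv > thresh_uv:
        return "forward"   # upward gaze
    if v_uv < -thresh_uv:
        return "stop"      # downward gaze (or a long blink)
    if h_uv > thresh_uv:
        return "right"
    if h_uv < -thresh_uv:
        return "left"
    return "hold"          # no deliberate gesture detected
```

In a full system this would sit after the filtering and characterisation stages the review describes, and a dwell or confirmation rule would guard against unintended eye movements.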
19. Impact of HMI on driver’s distraction on a freeway under heavy foggy condition based on visual characteristics
- Author
- Fu Qiang, Jianming Ma, Haijian Li, Xiaofan Feng, Xiaohua Zhao, and Dunli Hu
- Subjects
Computer science, Transportation, Connected vehicle, Distraction, Visual attention, Human–machine interface, Safety Research, Simulation
- Abstract
Connected vehicle technology relying on the Human Machine Interface (HMI) achieves a dominant position in overall safety improvement. However, the impact of HMI on the driver’s visual attention cann...
- Published
- 2020
- Full Text
- View/download PDF
20. Participatory Design in the Classroom: Exploring the Design of an Autonomous Vehicle Human-Machine Interface with a Visually Impaired Co-Designer
- Author
-
Julian Brinkley, Kathryn M. Lucaites, Earl W. Huff, and Aminah Roberts
- Subjects
Medical Terminology ,Visually impaired ,Computer science ,Human–computer interaction ,Participatory design ,Personal mobility ,Human–machine interface ,Medical Assisting and Transcription ,Visually Impaired Persons - Abstract
Self-driving vehicles are the latest innovation in improving personal mobility and road safety by removing arguably error-prone humans from driving-related tasks. Such advances can prove especially beneficial for people who are blind or have low vision who cannot legally operate conventional motor vehicles. Missing from the related literature, we argue, are studies that describe strategies for vehicle design for these persons. We present a case study of the participatory design of a prototype for a self-driving vehicle human-machine interface (HMI) for a graduate-level course on inclusive design and accessible technology. We reflect on the process of working alongside a co-designer, a person with a visual disability, to identify user needs, define design ideas, and produce a low-fidelity prototype for the HMI. This paper may benefit researchers interested in using a similar approach for designing accessible autonomous vehicle technology.
- Published
- 2020
- Full Text
- View/download PDF
21. Classification of Hand Movements from EMG Signals for People with Motor Disabilities
- Author
-
dos Santos Flavio, dos Santos Francisco, and C. Alexandre
- Subjects
medicine.medical_specialty ,General Computer Science ,Computer science ,medicine.disease ,Hand movements ,Cerebral palsy ,Classification rate ,Physical medicine and rehabilitation ,Assistive technology ,medicine ,ComputingMilieux_COMPUTERSANDSOCIETY ,Human–machine interface ,Brazilian population ,Electrical and Electronic Engineering - Abstract
People with disabilities correspond to about 25% of the Brazilian population. A great part of these people have physical impairments that make it difficult to use computer peripherals. This article presents the development of a system for detecting hand movements through the acquisition and classification of electromyographic (EMG) signals using machine learning techniques. The proposed system is intended to be used by people with disabilities to control an adapted text editor. The signals are captured by surface EMG electrodes and used to detect four different hand movements. In addition, a database of 3200 EMG signals generated by the hand movements was created, recorded from one user diagnosed with cerebral palsy and another user without diagnosed motor disabilities. Several tests were carried out, demonstrating the good accuracy of the proposed system, with a successful classification rate of 96% to 98%.
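The abstract does not detail the signal-processing pipeline; the sketch below illustrates a typical sEMG classification workflow (windowed signals, time-domain features, supervised classifier) on synthetic data. The feature set, the RBF-kernel SVM, and the amplitude-based synthetic classes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(window):
    # Common time-domain sEMG features: RMS, waveform length, zero crossings.
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return [rms, wl, zc]

# Synthetic stand-in for an EMG database: 4 hand movements, each
# simulated as noise with a class-specific amplitude (a hypothetical
# proxy for class-specific muscle activation levels).
X, y = [], []
for label, amp in enumerate([0.5, 1.0, 1.5, 2.0]):
    for _ in range(200):
        w = amp * rng.standard_normal(256)
        X.append(features(w))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```

Real pipelines would add band-pass filtering and per-channel normalization before feature extraction, which this sketch omits.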
- Published
- 2020
- Full Text
- View/download PDF
22. Increasing helicopter flight safety in maritime operations with a head-mounted display
- Author
-
Walko, Christian and Schuchardt, Bianca Isabella
- Subjects
Situation awareness ,business.industry ,Computer science ,Human–machine interface ,Operational availability ,media_common.quotation_subject ,Aerospace Engineering ,Optical head-mounted display ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Transportation ,Usability ,Workload ,Augmented reality ,Helicopter ,Aeronautics ,HoloLens ,Helmet-mounted display ,Simulator ,Flight safety ,Conformal display ,Quality (business) ,Pilot assistance ,business ,media_common - Abstract
To increase flight safety and operational availability for helicopters, the potential benefits of helmet-mounted displays (HMD) are investigated, with a focus on maritime flight operations. Helicopters have long downtimes due to harsh weather conditions or other visual impairments, especially in maritime scenarios. Flying in these poor conditions can drastically reduce flight safety. It is often difficult to recognize the horizon due to sea fog, and the absence of reference objects can complicate maritime flight. These conditions, and especially the downtimes, cost money or, at worst, lives. Therefore, DLR integrated the augmented reality glasses Microsoft HoloLens into DLR’s simulator AVES for use as an HMD for pilots. Subsequently, displays and symbology were developed and evaluated. To carry out a piloted simulator study, a maritime scenario was created to measure changes in the pilots’ performance with the HMD, such as workload or situational awareness. The paper focuses (a) on the integration of the HoloLens into the simulator with its challenges, solutions and findings, (b) on the symbology and (c) on the piloted simulator study. Both the quality of the HoloLens as an HMD and the study results are very positive. The pilots rated high usability, reduced workload, increased situational awareness and increased safety.
- Published
- 2020
- Full Text
- View/download PDF
23. Augmented reality for next generation infrastructure inspections
- Author
-
Miranda A Mellor, Chih-Yu Shen, David D. Mascarenas, Brian Bleck, John Morales, Troy Harden, JoAnn P Ballor, Philo Shelton, Alessandro Cattaneo, Oscar L McClain, Eric Martinez, Benjamin Narushof, Li-Ming R Yeong, Fernando Moreu, and Yongchao Yang
- Subjects
Computer science ,Mechanical Engineering ,010401 analytical chemistry ,Biophysics ,020101 civil engineering ,02 engineering and technology ,01 natural sciences ,0201 civil engineering ,0104 chemical sciences ,Work (electrical) ,Human–computer interaction ,Human–machine interface ,Augmented reality ,Structural health monitoring - Abstract
This article introduces the use of emerging augmented reality technology to enable the next generation of structural infrastructure inspection and awareness. This work is driven by the prevalence of visual structural inspection. It is known that current visual inspection techniques have multiple sources of variance that should be reduced in order to achieve less ambiguous visual inspections. Emerging augmented reality tools feature a variety of sensors, computation, and communication resources that can enable relevant structural inspection data to be collected at very high resolution in an unambiguous manner. This work shows how emerging augmented reality tools can be used to greatly enhance our ability to capture comprehensive, high-resolution, three-dimensional measurements of critical infrastructure. This work also provides detailed information on the software architecture for augmented reality structural inspection applications that helps meet the goals of the framework. The fact that the framework is designed to accommodate the considerations associated with high-consequence infrastructure implies that it is also comprehensive enough to be applied to less hazardous but still high-value infrastructure such as bridges, dams, and tunnels. Augmented reality has great potential to enable the next generation of smart infrastructure, and this work focuses on addressing how augmented reality can be leveraged to enable the next generation of structural awareness for high-consequence, long-lifespan structures.
- Published
- 2020
- Full Text
- View/download PDF
24. Augmented drawn construction symbols: A method for ad hoc robotic fabrication
- Author
-
Jay Hesslink, Asbjørn Søndergaard, Jens Pedersen, Dagmar Reinhardt, and Narendrakrishnan Neythalath
- Subjects
business.industry ,Computer science ,0211 other engineering and technologies ,02 engineering and technology ,Building and Construction ,Computer Graphics and Computer-Aided Design ,Automation ,Manufacturing engineering ,Computer Science Applications ,Construction industry ,021105 building & construction ,Human–machine interface ,021104 architecture ,Augmented reality ,business ,Period (music) - Abstract
The global construction industry has been one of the least productive sectors over a 30-year period, which can arguably be related to the near absence of digital and automation technologies within the industry. Construction processes largely consist of expensive manual labor or manual operation of mechanized processes, where hand-drawn markings on work objects or partly built structures are used to inform and steer the construction process or allow for ad hoc adjustments of elements. As such, the use of on-object, hand-drawn information is considered integral to the modus operandi of a plurality of construction trades, where timber construction and carpentry are of special interest. In contrast, emerging methods of digital production in timber construction implicitly or explicitly seek to eliminate the interpretive component of the construction work, imposing a top-down paradigm of file-to-factory execution. While such systems offer a performance increase compared to manual labor, they are notoriously sensitive to construction tolerances and require a high level of specialism to operate, which can alienate craft-educated workers. This research argues that developing methods for digital production compatible with on-site human interpretation and adaptation can help overcome these challenges. In addition, these methods offer the opportunity to increase the robustness and versatility of digital fabrication in the context of the construction site. The article reports on a new method titled “augmented drawn construction symbols” that, through a visual communication system, converts on-object hand-drawn markings to CAD drawings and sends them to a robotic system. The process is demonstrated on a full-scale prototypical robot setup.
- Published
- 2020
- Full Text
- View/download PDF
25. Intelligence Augmentation and Human-Machine Interface Best Practices for NDT 4.0 Reliability
- Author
-
Aldrin John
- Subjects
Mechanics of Materials ,Intelligence amplification ,Computer science ,business.industry ,Mechanical Engineering ,Nondestructive testing ,Best practice ,Human–machine interface ,General Materials Science ,business ,Reliability (statistics) ,Reliability engineering - Published
- 2020
- Full Text
- View/download PDF
26. Simulator Design for Anti-Collision System of Moving Type Drilling Machines Used in Pipe Handling and Tripping Process
- Author
-
Ku Namkug, Joo Hyoung Cha, Lee Jaeyong, and Kwon, Kiyoun
- Subjects
Drilling machines ,Computer science ,Tripping ,Process (computing) ,Collision system ,Human–machine interface ,Simulator design ,Simulation - Published
- 2020
- Full Text
- View/download PDF
27. SpaceMaze: incentivizing correct mobile crowdsourced sensing behaviour with a sensified minigame
- Author
-
Jussi Holopainen, Jan Felix Rohe, Patrick Schlosser, Andrea Schankin, Matthias Budde, Lina Hirschoff, and Michael Beigl
- Subjects
Ubiquitous computing ,Data collection ,Computer science ,05 social sciences ,Human error ,General Social Sciences ,02 engineering and technology ,Human-Computer Interaction ,Arts and Humanities (miscellaneous) ,Human–computer interaction ,020204 information systems ,Data quality ,0502 economics and business ,0202 electrical engineering, electronic engineering, information engineering ,Developmental and Educational Psychology ,Human–machine interface ,Environmental sensing ,050211 marketing ,Mobile sensing ,G440 Human-computer Interaction - Abstract
Modern mobile phones are equipped with many sensors, which can increasingly be used to sense various environmental phenomena. In particular, mobile sensing has enabled crowdsourced data collection at an unprecedented scale. However, as laypersons are involved in this, concerns regarding the data quality arise. This work explores the gamification of smartphone-based measurement processes in practice by embedding a sensing task into a mobile minigame. The underlying idea is — rather than to educate the user on how to correctly perform a measurement task — to opportunistically execute the measurement in the background once the smartphone is in a suitable context. To this end, this paper presents the design and evaluation of SpaceMaze, a smartphone game with the goal of minimizing user error by introducing appropriate game mechanics to influence the phone context, using the example of mobile noise level monitoring. A large user study that compares SpaceMaze to two non-gamified apps for noise level monitoring (N=360 in total) shows that SpaceMaze can successfully reduce user errors when compared to simple non-gamified ambient noise level monitoring applications and that the minigame is generally perceived as being enjoyable. Solutions for remaining problems, such as noise generated by the players, are discussed.
- Published
- 2020
- Full Text
- View/download PDF
28. Management of human-machine interface lifecycle
- Author
-
O. Pupena and A. Shyshak
- Subjects
Computer science ,business.industry ,Human–machine interface ,Software engineering ,business - Published
- 2020
- Full Text
- View/download PDF
29. INVESTIGATION AND DEVELOPMENT OF 'UNIVERSAL IMAGE DICTIONARY' FOR CREATION OF MAN-MACHINE INTERFACE
- Author
-
Aida Hakimova, Ekaterina Krivoshlykova, Aleksanra Belaya, Mariya Berberova, Polina Rosshchupkina, Daler Mirzoev, Oleg Zolotarev, and Alena Fedorova
- Subjects
010309 optics ,Engineering drawing ,020303 mechanical engineering & transports ,Development (topology) ,0203 mechanical engineering ,Computer science ,0103 physical sciences ,Human–machine interface ,General Materials Science ,02 engineering and technology ,01 natural sciences ,Image (mathematics) - Abstract
The urgency of the project is evidenced by measuring the popularity of its subject matter on the Internet: a general query on the topic returns 18,900 results, while its novelty is shown by a more specific query; for instance, the query “Universal Image Dictionary” returns “No results found for” and a single result, M24.RU – 10 unknown: monsters, stairs, funnels and keys… www.m24.ru/articles/112668/ … The proposed project envisages the creation of a bank of existing, widely distributed images that can serve as a means of international communication for people who have no other channel for information exchange. Such images could include common gestures, traffic signs, and signs in transport, in the streets, in public accommodations, and in state offices. They will include both single images and combinations of images forming a single conceptual complex (rules of table etiquette, on transport, at the stadium, etc.). It should be emphasized that the proposed dictionary is intended for interpersonal communication; computer identification of images is not a basic purpose of the project. The orientation toward interpersonal communication makes it possible to choose images for the dictionary and, no less significantly, to set problems that are actually solvable at every stage of its creation.
- Published
- 2020
- Full Text
- View/download PDF
30. In-Vehicle Device Control System by Hand Posture Recognition with Movement Detection Using Infrared Array Sensor
- Author
-
Fanxing Meng, Shigeyuki Tateno, and Yiwei Zhu
- Subjects
business.industry ,Computer science ,Posture recognition ,Control system ,Automotive industry ,In vehicle ,Human–machine interface ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,Computer vision ,Artificial intelligence ,Movement detection ,business - Abstract
Nowadays, with the development of automotive technologies, more and more functions and devices with control systems based on tactile, optical, and acoustic sensors are being assembled into cars. ...
- Published
- 2020
- Full Text
- View/download PDF
31. A DISCRETE-EVENT SIMULATION MODEL FOR DRIVER PERFORMANCE ASSESSMENT: APPLICATION TO AUTONOMOUS VEHICLE COCKPIT DESIGN OPTIMIZATION
- Author
-
Abdelkrim Doufene, Marija Jankovic, I. Iuskevich, Andreas M. Hein, Kahina Amokrane-Ferka, IRT SystemX (IRT SystemX), Laboratoire Génie Industriel (LGI), and CentraleSupélec-Université Paris-Saclay
- Subjects
FOS: Computer and information sciences ,model-based engineering ,Situation awareness ,Computer science ,Computer Science - Human-Computer Interaction ,autonomous vehicl ,050105 experimental psychology ,Human-Computer Interaction (cs.HC) ,design optimisation ,0501 psychology and cognitive sciences ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Discrete event simulation ,050107 human factors ,human-machine interface ,business.industry ,05 social sciences ,Workload ,General Medicine ,[INFO.INFO-MO]Computer Science [cs]/Modeling and Simulation ,Automation ,Cockpit ,Sight ,ergonomics ,Systems engineering ,Task analysis ,Systems architecture ,business - Abstract
The latest advances in the design of vehicles with adaptive levels of automation pose new challenges in vehicle-driver interaction. Safety requirements underline the need to explore optimal cockpit architectures with regard to the driver’s cognitive and perceptual workload, eyes-off-the-road time, and situation awareness. We propose to integrate existing task analysis approaches into system architecture evaluation for early-stage design optimization. We built a discrete-event simulation tool and applied it within an industrial project on multi-sensory (sight, sound, touch) cockpit design.
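The paper's simulation tool is not specified beyond the abstract; the toy sketch below illustrates the general idea of discrete-event simulation applied to driver attention, accumulating eyes-off-the-road time from randomly arriving HMI glances. The arrival rate, glance durations, and event model are assumptions for illustration only.

```python
import heapq
import random

random.seed(1)

def simulate(horizon=600.0, mean_gap=20.0, mean_glance=1.5):
    """Fraction of a drive spent glancing at the HMI (toy model)."""
    eyes_off, events = 0.0, []
    # First HMI notification arrives after an exponential gap.
    heapq.heappush(events, (random.expovariate(1 / mean_gap), "notify"))
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "notify":
            # Each notification triggers one eyes-off-road glance,
            # then the next notification is scheduled.
            glance = random.expovariate(1 / mean_glance)
            eyes_off += glance
            next_t = t + glance + random.expovariate(1 / mean_gap)
            heapq.heappush(events, (next_t, "notify"))
    return eyes_off / horizon

print(f"fraction of time eyes off the road: {simulate():.3f}")
```

A full model in the paper's spirit would chain perceptual and cognitive sub-tasks per glance and track situation-awareness decay, not just a single glance duration.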
- Published
- 2020
- Full Text
- View/download PDF
32. Toward shared control between automated vehicles and users
- Author
-
Jacques Terken, Bastian Pfleging, Future Everyday, and EAISI Mobility
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Shared control ,05 social sciences ,Control (management) ,Human error ,02 engineering and technology ,Automation ,Domain (software engineering) ,020901 industrial engineering & automation ,Automated driving ,Human–computer interaction ,Automotive Engineering ,Human–machine interface ,0501 psychology and cognitive sciences ,Use case ,business ,Human machine interface ,050107 human factors - Abstract
Technological developments in the domain of vehicle automation are targeted toward driverless, or driver-out-of-the-loop, driving. The main societal motivation for this ambition is that the majority of (fatal) accidents with manually driven vehicles are due to human error. However, when interacting with technology, users often experience the need to customize the technology to their personal preferences. This paper considers how this might apply to vehicle automation through a conceptual analysis of relevant use cases. The analysis proceeds by comparing how the handling of relevant situations is likely to differ between manual driving and automated driving. The results of the analysis indicate that fully out-of-the-loop automated driving may not be acceptable to users of the technology. It is concluded that a technology that allows shared control between the vehicle and the user should be pursued. Furthermore, implications of this view are explored for the concrete temporal dynamics of shared control, and general characteristics of a human-machine interface that supports shared control are proposed. Finally, implications of the proposed view and directions for further research are discussed.
- Published
- 2020
- Full Text
- View/download PDF
33. The Human-Machine Interface (HMI) with NDE 4.0 Systems
- Author
-
John C. Aldrin
- Subjects
Computer science ,business.industry ,Embedded system ,Human–machine interface ,business - Published
- 2022
- Full Text
- View/download PDF
34. Frame, Game, and Circuit: Truth and the Human in Japanese Human-machine Interface Research
- Author
-
Grant Jun Otsuki
- Subjects
Archeology ,060101 anthropology ,Computer science ,business.industry ,05 social sciences ,Frame (networking) ,0507 social and economic geography ,Robotics ,06 humanities and the arts ,FOS: Sociology ,Arts and Humanities (miscellaneous) ,Action (philosophy) ,Human–computer interaction ,Anthropology ,Cybernetics ,Human–machine interface ,0601 history and archaeology ,Artificial intelligence ,business ,050703 geography - Abstract
This essay tracks the ‘human’ emergent in human-centred technologies (HCTs) in Japan. A primary aim of HCTs is to extract the true intention of the user and actualise it as machine action. Beyond this, HCT researchers maintained that creating such technologies would give them access to the truth of the human itself. In this essay, I examine the games of truth played in HCT, in which teachers exercise authority over students by defining their true intentions for them, and students learned what they needed to become good researchers. I then show how through analogical relations, human subjects with esteemed qualities became what HCTs must be, and HCTs a means for imagining what human beings truly are. What emerged were truths that transcended human and machine: all are systems of information and, in the end, the right machine can approach humanity enough to fulfil even the most human of responsibilities.
- Published
- 2022
- Full Text
- View/download PDF
35. Robotic Manipulation under Harsh Conditions Using Self‐Healing Silk‐Based Iontronics
- Author
-
Yujia Zhang, Tiger H. Tao, Zhitao Zhou, Mengwei Liu, Yanghong Zhang, and Nan Qin
- Subjects
Computer science ,General Chemical Engineering ,Science ,gesture/object recognition ,skin electronics/iontronics ,General Physics and Astronomy ,Medicine (miscellaneous) ,Biochemistry, Genetics and Molecular Biology (miscellaneous) ,High strain ,human–machine interface ,General Materials Science ,Sensitivity (control systems) ,Electronics ,Research Articles ,business.industry ,silk‐based iontronics ,General Engineering ,Cognitive neuroscience of visual object recognition ,Transparency (human–computer interaction) ,SILK ,Self-healing ,business ,Computer hardware ,Gesture ,Research Article - Abstract
Progress toward intelligent human–robotic interactions requires monitoring sensors that are mechanically flexible, facile to implement, and able to harness recognition capability under harsh environments. Conventional sensing methods have been divided for human‐side collection or robot‐side feedback and are not designed with these criteria in mind. However, the iontronic polymer is an example of a general method that operates properly on both human skin (commonly known as skin electronics or iontronics) and the machine/robotic surface. Here, a unique iontronic composite (silk protein/glycerol/Ca(II) ion) and supportive molecular mechanism are developed to simultaneously achieve high conductivity (around 6 kΩ at 50 kHz), self‐healing (within minutes), strong stretchability (around 1000%), high strain sensitivity and transparency, and universal adhesiveness across a broad working temperature range (−40–120 °C). Those merits facilitate the development of iontronic sensing and the implementation of damage‐resilient robotic manipulation. Combined with a machine learning algorithm and specified data collection methods, the system is able to classify 1024 types of human and robot hand gestures under challenging scenarios and to offer excellent object recognition with an accuracy of 99.7%., This work presents a silk‐based iontronic film which is designed to be simultaneously conductive, self‐healable, antifreezing, and antiheating, based on the developed unique material composition (silk protein/glycerol/Ca(II) ion) and dynamic molecular mechanism. Furthermore, when coupled with a specified machine learning algorithm, the approach permits accurate human/robotic gesture identification across over 1024 classes with high accuracy in harsh environment.
- Published
- 2022
36. Development of Surface EMG Game Control Interface for Persons with Upper Limb Functional Impairments
- Author
-
Joseph Muguro, Wahyu Caesarendra, Yuta Sasatake, Muhammad Syaiful Amri bin Suhaimi, Maciej Sułowicz, Waweru Njeri, Minoru Sasaki, Pringgo Widyo Laksono, Wahyu Rahmaniar, and Kojiro Matsushita
- Subjects
Electronic speed control ,medicine.medical_specialty ,T57-57.97 ,disability and functional impairment ,Applied mathematics. Quantitative methods ,medicine.diagnostic_test ,human-machine interface ,business.industry ,Computer science ,Interface (computing) ,Usability ,Neck rotation ,Electromyography ,Object (computer science) ,Signal ,game control ,Robot control ,Physical medicine and rehabilitation ,machine learning ,sEMG ,medicine ,business - Abstract
In recent years, surface electromyography (sEMG) signals have been effectively applied in various fields such as control interfaces, prosthetics, and rehabilitation. We propose a neck rotation estimate from EMG and apply the estimated signal as a game control interface that can be used by people with disabilities or patients with functional impairment of the upper limb. This paper utilizes an equation-based estimate and a machine learning model to translate the signals into corresponding neck rotations. For testing, we designed two custom-made game scenes, a dynamic 1D object interception and a 2D maze scenery, in Unity 3D, to be controlled by the sEMG signal in real time. Twenty-two (22) test subjects (mean age 27.95, std 13.24) participated in the experiment to verify the usability of the interface. In object interception, subjects showed stable control, intercepting more than 73% of objects accurately. In the 2D maze, male and female subjects recorded completion times of 98.84 s ± 50.2 and 112.75 s ± 44.2, respectively, with no significant difference in means under a one-way ANOVA (p = 0.519). The results confirmed the usefulness of neck sEMG of the sternocleidomastoid (SCM) as a control interface with little or no calibration required. Control models using equations provide intuitive direction and speed control, while machine learning schemes offer more stable directional control. The control interface can be applied in several areas that involve neck activities, e.g., robot control and rehabilitation, as well as game interfaces, to enable entertainment for people with disabilities.
- Published
- 2021
- Full Text
- View/download PDF
37. Human machine interface aspects of the ground control station for unmanned air transport
- Author
-
Max Friedrich, Niklas Peinecke, and Dagi Geister
- Subjects
Unmanned Aircraft Systems ,Focus (computing) ,Human Machine Interface Design ,SIMPLE (military communications protocol) ,Computer science ,Aviation ,business.industry ,Human Factors ,Context (language use) ,Ground control station ,Human Machine Interface ,Drone ,Ground Control Sta-tion ,Systems engineering ,Human–machine interface ,Engineering design process ,business - Abstract
The GCS (Ground Control Station) is an elementary part of the UAS (Unmanned Aircraft System). It provides the connection between the human pilot and the airborne part of the UAS, the drone. While early GCSs were little more than simple remote controls, a modern GCS can do more than just relay steering commands to the drone. Instead, it makes an important contribution to the safety of the UAS operation. This is achieved by presenting the pilot with pre-processed information from the drone and secondary data sources. Utilizing design principles developed in human machine interface (HMI) theory, this information can be shown to the pilot without distraction, making optimal use of the data. This chapter summarizes design principles and challenges for HMIs in aviation use. The focus is on challenges related to UAS operations and UAS GCSs in particular. The design process for the GCS U-FLY is described, and some results of a real-world test in the context of two application projects are presented.
- Published
- 2021
38. Improved Motion Classification With an Integrated Multimodal Exoskeleton Interface
- Author
-
Kevin Langlois, Joost Geeroms, Gabriel Van De Velde, Carlos Rodriguez-Guerrero, Tom Verstraten, Bram Vanderborght, Dirk Lefeber, Faculty of Engineering, and Applied Mechanics
- Subjects
Physical interface ,Computer science ,exoskeletons ,electromyogram ,Interface (computing) ,Biomedical Engineering ,Neurosciences. Biological psychiatry. Neuropsychiatry ,wearable sensor ,Electromyography ,Anticipatory control ,Motion (physics) ,EMG ,Artificial Intelligence ,Classifier (linguistics) ,medicine ,Computer vision ,Original Research ,human-machine interface ,medicine.diagnostic_test ,business.industry ,Pressure sensor ,Exoskeleton ,machine learning ,classification ,intention recognition ,Artificial intelligence ,business ,Neuroscience ,RC321-571 - Abstract
Human motion intention detection is an essential part of the control of upper-body exoskeletons. While surface electromyography (sEMG)-based systems may be able to provide anticipatory control, they typically require exact placement of the electrodes on the muscle bodies, which limits the practical use and donning of the technology. In this study, we propose a novel physical interface for exoskeletons with integrated sEMG and pressure sensors. The sensors are 3D-printed with flexible, conductive materials and allow multimodal information to be obtained during operation. A K-Nearest Neighbours classifier is implemented in an off-line manner to detect reaching movements and lifting tasks that represent daily activities of industrial workers. The performance of the classifier is validated through repeated experiments and compared to a unimodal EMG-based classifier. The results indicate that excellent prediction performance can be obtained, even with a minimal number of sEMG electrodes and without specific placement of the electrodes.
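A rough sketch of the multimodal idea described above: synthetic per-window features (sEMG RMS values concatenated with mean interface-pressure readings) fed to a K-Nearest Neighbours classifier. The channel counts, the two movement classes, and the feature choices here are hypothetical, not the authors' setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def sample(n, emg_level, pressure_level):
    # Hypothetical feature vectors: 4 sEMG RMS channels plus 3 mean
    # pressure-cell readings, concatenated into one multimodal vector.
    emg = np.abs(emg_level + 0.3 * rng.standard_normal((n, 4)))
    pressure = pressure_level + 0.3 * rng.standard_normal((n, 3))
    return np.hstack([emg, pressure])

X = np.vstack([sample(150, 0.5, 1.0),    # "reaching" class (assumed levels)
               sample(150, 1.2, 1.8)])   # "lifting" class (assumed levels)
y = np.array([0] * 150 + [1] * 150)

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Because KNN is distance-based, real multimodal setups usually standardize each feature before concatenation so that the pressure channels do not dominate the sEMG channels (or vice versa); that step is omitted here since both modalities are generated on a similar scale.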
- Published
- 2021
- Full Text
- View/download PDF
39. Flying a helicopter with the HoloLens as head-mounted display
- Author
-
Christian Walko and Malte-Jörn Maibach
- Subjects
human-machine interface ,BitTorrent tracker ,Computer science ,conformal display ,System identification ,General Engineering ,Process (computing) ,Holography ,Optical head-mounted display ,Kalman filter ,Tracking (particle physics) ,Atomic and Molecular Physics, and Optics ,augmented reality ,law.invention ,head-mounted display ,law ,HoloLens ,Calibration ,Augmented reality ,pilot assistance ,Simulation ,helicopter - Abstract
We describe the flight testing and the integration process of the Microsoft HoloLens 2 as a head-mounted display (HMD) in DLR’s research helicopter. In previous work, the HoloLens was integrated into a helicopter simulator. In migrating the HoloLens to a real helicopter, the main challenge was head tracking, because the HoloLens is not designed to operate on moving vehicles. Therefore, the internal head tracking is operated in a limited rotation-only mode, and the resulting drift errors are compensated for with an external tracker, several of which were tested in advance. The fusion is done with a Kalman filter, which contains a non-linear weighting. Internal tracking errors of the HoloLens caused by vehicle accelerations are mitigated with a system identification approach. For calibration, the virtual world is manually aligned using the helicopter’s noseboom. The external head tracker (EHT) is largely automatically calibrated using an optimization approach and therefore works for all trackers regardless of their mounting positions on vehicle and head. Most of the pretests were carried out in a car, which indicates the flexibility of the approach in terms of vehicle type. The flight tests have shown that the overall quality of this HMD solution is very good. The conformal holograms are almost jitter-free, there is no latency, and errors of lower frequencies are identical with the performance that the EHT can provide, which in combination greatly improves immersion. Profiting from almost all features of the HoloLens 2 is a major advantage, especially for rapid research and development.
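The fusion principle can be illustrated with a one-dimensional toy model: smooth but drifting internal yaw increments drive the Kalman prediction step, while a noisy but drift-free external measurement drives the correction. This scalar filter, with assumed noise magnitudes and a constant drift rate, is a sketch of the idea only; the paper's actual filter additionally uses a non-linear weighting.

```python
import numpy as np

rng = np.random.default_rng(7)

n, dt = 500, 0.01
true_yaw = np.cumsum(0.5 * dt * np.ones(n))           # head slowly turning
drift = 0.2 * dt                                      # assumed per-step internal drift
internal_inc = np.diff(true_yaw, prepend=0.0) + drift # drifting internal increments
external = true_yaw + 0.05 * rng.standard_normal(n)   # noisy absolute external yaw

q, r = 1e-4, 0.05 ** 2   # assumed process / measurement noise variances
x, p = 0.0, 1.0
est = np.empty(n)
for k in range(n):
    # Predict: integrate the internal (drifting) yaw increment.
    x, p = x + internal_inc[k], p + q
    # Correct: pull the estimate toward the external tracker reading.
    k_gain = p / (p + r)
    x, p = x + k_gain * (external[k] - x), (1 - k_gain) * p
    est[k] = x

print(f"final fused error: {abs(est[-1] - true_yaw[-1]):.3f} rad")
print(f"final raw drift:   {abs(np.cumsum(internal_inc)[-1] - true_yaw[-1]):.3f} rad")
```

Integrating the internal increments alone accumulates the full drift, while the fused estimate stays bounded near the external tracker's noise floor, which is the point of combining the two sources.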
- Published
- 2021
40. Adaptation of Gamification as a Man-Machine Interface to Franchise Management System
- Author
-
Burak Gunaydin, Furkan Kalabalikoglu, and Yusuf Altunel
- Subjects
Man-machine Interface ,Computer science ,Game ,Gamification ,Human–computer interaction ,Franchisee ,Management system ,Human-computer Interaction ,Natural (music) ,Human–machine interface ,Franchiser ,Franchise ,Representation (mathematics) ,Adaptation (computer science) ,Human communication - Abstract
5th International Symposium on Multidisciplinary Studies and Innovative Technologies, ISMSIT 2021, Ankara, 21–23 October 2021. Current man-machine interfaces fall far short of human expectations in understanding and transmitting the noteworthy signs and hints that are natural ingredients of human communication. Certain indicators and interaction possibilities can be passed between the two sides, but as a result of complex behavior and dynamically changing environmental conditions they can be lost, or at least might require long and complex processing. Gamification is used to enhance the interaction possibilities, providing a visual representation of environmental conditions and critical indicators, as well as the ability to send and receive requests using techniques suitable for humans, such as touches and finger movements, in the Unity environment. A Franchise Management System is selected as a case study and adapted to maintain the interactions between franchiser and franchisee. © 2021 IEEE.
- Published
- 2021
- Full Text
- View/download PDF
41. Biosignal-Based Human–Machine Interfaces for Assistance and Rehabilitation: A Survey
- Author
-
Ganesh R. Naik, Gaetano D. Gargiulo, Paolo Bifulco, Emilio Andreozzi, Jessica Centracchio, Daniele Esposito, Esposito, D., Centracchio, J., Andreozzi, E., Gargiulo, G. D., Naik, G. R., and Bifulco, P.
- Subjects
Emerging technologies ,Computer science ,medicine.medical_treatment ,Interface (computing) ,biosignals ,Review ,TP1-1185 ,robotic control ,Biochemistry ,Field (computer science) ,prosthetic control ,Analytical Chemistry ,rehabilitation ,smart environment control ,Human–computer interaction ,Surveys and Questionnaires ,assistive technology ,medicine ,Humans ,Human–machine system ,Biosignal ,Electrical and Electronic Engineering ,Instrumentation ,virtual reality control ,Rehabilitation ,gesture recognition ,communication ,Chemical technology ,Virtual Reality ,Robotics ,Atomic and Molecular Physics, and Optics ,Assistive technology, Biosignals, Communication, Gesture recognition, Human–Machine Interface, Prosthetic control, Rehabilitation, Robotic control, Smart environment control, Virtual reality control ,Gesture recognition ,Smart environment ,Human–Machine Interface - Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application in six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase the complexity of HMIs, so their usefulness should be carefully evaluated for each specific application.
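A typical biopotential-based control pipeline of the kind the survey covers (e.g. EMG-driven prosthetic grasp control) can be sketched as: rectify the raw signal, smooth it into an amplitude envelope, and threshold the envelope into a binary command. The signal shapes, window length, and threshold below are illustrative assumptions, not values from the surveyed papers.

```python
import numpy as np

def emg_to_command(emg, fs=1000, window_ms=150, threshold=0.2):
    """Toy biopotential HMI pipeline: rectify raw EMG, smooth it into
    an amplitude envelope with a moving average, and threshold the
    envelope into a binary open/close command."""
    rectified = np.abs(emg)
    win = int(fs * window_ms / 1000)
    kernel = np.ones(win) / win
    envelope = np.convolve(rectified, kernel, mode="same")
    return envelope > threshold
```

Real systems add band-pass filtering, artifact rejection, and often pattern-recognition classifiers instead of a single threshold, but the rectify-envelope-decide structure is the common core.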
- Published
- 2021
42. The Impact of Information Integration in a Simulation of Future Submarine Command and Control
- Author
-
Shayne Loft, Matthew Stoker, Steph Michailovs, Megan Schmitt, Troy A. W. Visser, Stephen Pond, Samuel Huf, and Jessica Irons
- Subjects
Behavioral Neuroscience ,Operator (computer programming) ,Situation awareness ,Computer science ,Real-time computing ,Command and control ,Human–machine interface ,Submarine ,Human Factors and Ergonomics ,Workload ,Applied Psychology ,Information integration - Abstract
Objective: Examine the extent to which increasing information integration across displays in a simulated submarine command and control room can reduce operator workload, improve operator situation awareness, and improve team performance. Background: In control rooms, the volume and number of sources of information are increasing, with the potential to overwhelm operator cognitive capacity. It is proposed that by distributing information to maximize relevance to each operator role (increasing information integration), it is possible not only to reduce operator workload but also to improve situation awareness and team performance. Method: Sixteen teams of six novice participants were trained to work together to combine data from multiple sensor displays to build a tactical picture of surrounding contacts at sea. The extent to which data from one display were available to operators at other displays (information integration) was manipulated between teams. Team performance was assessed as the accuracy of the generated tactical picture. Results: Teams built a more accurate tactical picture, and individual team members had better situation awareness and lower workload, when provided with high compared with low information integration. Conclusion: A human-centered design approach to integrating information in command and control settings can result in lower workload and enhanced situation awareness and team performance. Application: The design of modern command and control rooms, in which operators must fuse increasing volumes of complex data from displays, may benefit from higher information integration based on a human-centered design philosophy and a fundamental understanding of the cognitive work carried out by operators.
- Published
- 2021
43. A Robust and Wearable Triboelectric Tactile Patch as Intelligent Human-Machine Interface
- Author
-
Yan Wang, Jianchun Mi, Ziyi Zhang, Zhiyuan Hu, Yu Luan, Peng Xu, Mingrui Shu, Xinxiang Pan, Chuan Wang, Lin Qiao, Yawei Wang, Tiancong Zhao, Junpeng Wang, Chang Liu, and Minyi Xu
- Subjects
Technology ,Computer science ,Interface (computing) ,Wearable computer ,Article ,human–machine interface ,triboelectric nanogenerator ,hydrogels ,tactile patch ,robot control ,General Materials Science ,Wearable technology ,Triboelectric effect ,Microscopy ,QC120-168.85 ,business.industry ,QH201-278.5 ,Nanogenerator ,Robotics ,Engineering (General). Civil engineering (General) ,Robot control ,TK1-9971 ,Descriptive and experimental mechanics ,Robot ,Artificial intelligence ,Electrical engineering. Electronics. Nuclear engineering ,TA1-2040 ,business ,Computer hardware - Abstract
The human–machine interface plays an important role in the diversified interactions between humans and machines, especially by enabling information exchange between human and machine operations. Considering their high wearable compatibility and self-powered capability, triboelectric-based interfaces have attracted increasing attention. Herein, this work develops a minimalist and stable interactive patch with sensing and robot-control functions based on a triboelectric nanogenerator. This robust and wearable patch is composed of several flexible materials, namely polytetrafluoroethylene (PTFE), nylon, a hydrogel electrode, and a silicone rubber substrate. A signal-processing circuit converts the sensor signal into a more stable one (with a deviation within 0.1 V), providing a more effective method for sensing and wireless robot control. Thus, the device can be used to control the movement of robots in real time and exhibits stable performance. A specific algorithm converts the 1D serial number into a 2D coordinate system, so that finger clicks can be converted into a sliding track, enabling wireless trajectory generation for a robot. It is believed that device-based human–machine interaction with such a minimalist design has great potential in applications for contact perception, 2D control, robotics, and wearable electronics.
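The 1D-to-2D conversion the abstract mentions amounts to mapping each sensor's serial number onto a grid coordinate so that a sequence of taps becomes a trajectory. A minimal sketch, assuming a row-major electrode layout (the paper does not specify the grid size):

```python
def index_to_xy(index, cols=4):
    """Map a 1-D electrode serial number onto a 2-D (row, col)
    coordinate, assuming a row-major grid with `cols` columns.
    The 4-column layout is an illustrative assumption."""
    return divmod(index, cols)

def taps_to_trajectory(taps, cols=4):
    """Convert a sequence of tapped sensor indices into a 2-D path
    that a robot controller could follow."""
    return [index_to_xy(t, cols) for t in taps]
```

For example, tapping electrodes 0, 1, 5 in sequence on a 4-column patch yields the path (0, 0) → (0, 1) → (1, 1): one step right, then one step down.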
- Published
- 2021
44. Man-Machine Interface Design Analysis of Multi-function Printing (Copying) Machine
- Author
-
Fangzichun Chen, Xinyue Shao, and Ting Qiu
- Subjects
Engineering drawing ,Copying ,Design analysis ,Computer science ,media_common.quotation_subject ,Human–machine interface ,Function (engineering) ,media_common - Published
- 2021
- Full Text
- View/download PDF
45. Human-Machine Interaction: Controlling of a Factory with an Augmented Reality Device
- Author
-
Carl Bareis, Carsten Wittenberg, Benedict Bauer, Michael Zeyer, and Florian Uhl
- Subjects
Focus (computing) ,Industry 4.0 ,Computer science ,Human–computer interaction ,Human machine interaction ,Human–machine interface ,Factory (object-oriented programming) ,Augmented reality ,User interface - Abstract
This paper describes the implementation of an augmented reality user interface for a model factory. The focus is on placing the user interfaces exactly where the user expects them to be, whether directly on the devices or following the user.
- Published
- 2021
- Full Text
- View/download PDF
46. After You! Design and Evaluation of a Human Machine Interface for Cooperative Truck Overtaking Maneuvers on Freeways
- Author
-
Jana Fank, Frank Diermeyer, and Christian Knies
- Subjects
Truck ,Operations research ,Process (engineering) ,Computer science ,business.industry ,Heuristic evaluation ,Overtaking ,Interface (computing) ,Task analysis ,Human–machine interface ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,business ,Automation - Abstract
Truck overtaking maneuvers on freeways are inefficient and risky and create a high potential for conflict between road users. Collective perception based on V2X communication allows coordination among all parties to reduce these negative effects and could be deployed sooner than full automation. However, a prerequisite for the success of such a system is a human-machine interface that the driver can easily operate, trust, and accept. In this approach, a user-centered conception and design of a human-machine interface for cooperative truck overtaking maneuvers on freeways is presented. The development process is divided into two steps: after a prototype is built based on task analysis, it is evaluated and improved iteratively through heuristic evaluation by experts. The final prototype is tested in a simulator study with 30 truck drivers. The study provides initial feedback on the drivers' attitudes towards such a system and how it can be further improved.
- Published
- 2021
- Full Text
- View/download PDF
47. Supporting the Onboarding of 3D Printers through Conversational Agents
- Author
-
Shahrier Erfan Harun, Shi Liu, Thomas Ludwig, and Florian Jasche
- Subjects
Scope (project management) ,business.industry ,Computer science ,3D printing ,Onboarding ,computer.software_genre ,Chatbot ,User experience design ,Human–computer interaction ,Human–machine interface ,Dialog system ,business ,Internet of Things ,computer - Abstract
In view of its capacity to create physical objects for a wide range of potential applications, 3D printing has become increasingly popular over the years. However, given its breadth of application, 3D printing can be challenging for newcomers. Novice users often need assistance from experts, who are not always available. Recent interest in the development of conversational agents opens up the possibility of assisting novice users in their interactions with 3D printers, thus improving their experience. In this paper, we illustrate a potential concept for a conversational agent and present a prototype of a Telegram chatbot to improve the user experience of 3D printing.
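The dialog core of such an onboarding agent can be sketched as a keyword-to-intent lookup with a fallback reply. This is a hypothetical illustration only: the intents and answer texts below are invented placeholders, and the paper's actual Telegram prototype is not publicly specified.

```python
def printer_onboarding_reply(message):
    """Minimal rule-based dialog step for a 3D-printer onboarding
    chatbot. Intents and answers are invented for illustration."""
    intents = {
        "level": "Start by levelling the bed: home all axes, then "
                 "adjust each corner screw until a sheet of paper "
                 "drags slightly under the nozzle.",
        "filament": "Preheat the nozzle to the filament's printing "
                    "temperature before loading or unloading.",
        "first layer": "Slow the first layer down and check that the "
                       "extruded lines are slightly squashed.",
    }
    text = message.lower()
    # return the answer for the first matching keyword
    for keyword, answer in intents.items():
        if keyword in text:
            return answer
    return ("Sorry, I don't know that yet. Try asking about bed "
            "levelling, filament, or the first layer.")
```

A production bot would wrap such a handler in a messaging-platform client and typically replace keyword matching with intent classification, but the request-to-canned-answer structure is the same.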
- Published
- 2021
- Full Text
- View/download PDF
48. Inkjet-printed iontronics for human-machine interface applications
- Author
-
Dace Gao
- Subjects
Computer science ,business.industry ,Human–machine interface ,business ,Computer hardware - Published
- 2021
- Full Text
- View/download PDF
49. Designing human-machine interface for unmanned vehicle with account for time for control transfer
- Author
-
Alexey Zabudsky, Andrej Vorob’yov, and Sultan Zhankaziev
- Subjects
050210 logistics & transportation ,Interface (Java) ,Computer science ,business.industry ,05 social sciences ,0211 other engineering and technologies ,Control transfer ,Control engineering ,02 engineering and technology ,Automation ,021105 building & construction ,0502 economics and business ,Human–machine interface ,Interface design ,business - Abstract
The article addresses the concept of human-machine interface operation and the problems to be solved during interface design. We provide a general description of problems related to drivers' readiness as the automation level of vehicles increases. A basic algorithm of human-machine interface operation for an automated vehicle is described.
- Published
- 2020
- Full Text
- View/download PDF
50. An Operator Interface for Autonomous Vehicles
- Author
-
Daishi Watabe, Zhi Wang, Kazuo Ogiwara, Hideyasu Sai, Yukimichi Saito, and Masayoshi Wada
- Subjects
Computer science ,Human–machine interface ,Operator interface ,Simulation - Published
- 2020
- Full Text
- View/download PDF