953 results for "Human-computer interaction"
Search Results
2. A service failure assessment model for smart product consumption experience based on customer perception
- Author
-
Ting Wei and Yuanwu Shi
- Subjects
Customer perception, Human-computer interaction, Failure mode and effect analysis, Fuzzy TOPSIS, Tolerance region, Service failure evaluation, Medicine, Science - Abstract
Abstract Customer perception is an important factor in evaluating the quality of human-computer interaction services. Sustainable user experiences and marketing strategies can be created by analyzing customer perception, and understanding consumer satisfaction with product services in the customer perception area allows appropriate product service failure prevention strategies to be formulated. This study proposes a service failure evaluation model that considers the customer tolerance area in order to accurately evaluate consumers’ behavioral experiences from purchasing to using products. The concept of the tolerance area is introduced, and a combination of the fuzzy Failure Mode and Effect Analysis (FMEA) method and the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is used to construct a human-computer interaction service failure evaluation model. The model accurately evaluates potential service failure factors of smart speakers and ranks these factors within the tolerance area. The research identifies voice misinterpretation and signal connectivity issues as the primary risk factors impacting the quality of human-computer interaction for smart speakers. The application of this method not only enhances the evaluation of smart speaker human-computer interaction service quality but also aids in the precise identification and prioritization of critical failure modes. The proposed service failure prevention strategies can reduce consumer dissatisfaction and provide innovative references for smart product design and marketing. The findings bolster empirical evidence for service failure prevention strategies in smart products and pave the way for novel perspectives on enhancing the quality of human-computer interaction services.
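The TOPSIS step of the model described above can be sketched in a few lines. This is a minimal illustration of plain (non-fuzzy) TOPSIS only; the failure modes, criterion scores, and weights below are invented, and the paper's fuzzy FMEA extension and tolerance-area ranking are not reproduced:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  (n_alternatives, n_criteria) scores
    weights: criterion weights summing to 1
    benefit: True where higher scores are "better" (here: riskier)
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the weights.
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights, dtype=float)
    # Ideal and anti-ideal points depend on the criterion direction.
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)  # closeness: higher = nearer the ideal

# Hypothetical smart-speaker failure modes, scored on severity,
# occurrence, and detection difficulty (all treated as "risk" criteria).
scores = [[8, 6, 4],   # voice misinterpretation
          [7, 7, 5],   # signal connectivity issue
          [3, 2, 6]]   # cosmetic defect
c = topsis(scores, [0.4, 0.35, 0.25], [True, True, True])
```

Failure modes with higher closeness scores sit nearer the ideal (highest-risk) point and would be prioritized first.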
- Published
- 2024
- Full Text
- View/download PDF
3. Overtrust in AI Recommendations About Whether or Not to Kill: Evidence from Two Human-Robot Interaction Studies
- Author
-
Colin Holbrook, Daniel Holman, Joshua Clingo, and Alan R. Wagner
- Subjects
Artificial intelligence, Human–robot interaction, Human–computer interaction, Social robotics, Decision-making, Threat-detection, Medicine, Science - Abstract
Abstract This research explores prospective determinants of trust in the recommendations of artificial agents regarding decisions to kill, using a novel visual challenge paradigm simulating threat-identification (enemy combatants vs. civilians) under uncertainty. In Experiment 1, we compared trust in the advice of a physically embodied versus screen-mediated anthropomorphic robot, observing no effects of embodiment; in Experiment 2, we manipulated the relative anthropomorphism of virtual robots, observing modestly greater trust in the most anthropomorphic agent relative to the least. Across studies, when any version of the agent randomly disagreed, participants reversed their threat-identifications and decisions to kill in the majority of cases, substantially degrading their initial performance. Participants’ subjective confidence in their decisions tracked whether the agent (dis)agreed, while both decision-reversals and confidence were moderated by appraisals of the agent’s intelligence. The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty.
- Published
- 2024
- Full Text
- View/download PDF
4. AUTOMIND: How 'Auto' Is the Automobile? A Review of Human-Autonomous Vehicle Interaction
- Author
-
Zeynep Altan and Uğur Güven Adar
- Subjects
autonomous vehicles, autonomy classification, human-computer interaction, human-machine interaction, levels of autonomy, Technology, Engineering (General). Civil engineering (General), TA1-2040 - Abstract
Human-machine interaction, ongoing since the invention of the wheel, is undergoing a major transformation in today's modern world; advances in artificial intelligence technologies, hardware and software, and environmental sensors are changing the interaction between humans and automobiles. Accordingly, levels of autonomous driving have increased, and examining how the interaction between automobile and human is changing has gained importance. In this context, three main categories are defined: the software and hardware technologies used in automobiles depending on their autonomy level, the interaction between humans and autonomous vehicles, and the relationships between autonomous vehicles and the environment and companies. The effect of changing autonomy levels on human-machine interaction and the sustainability of autonomous vehicles during the transition period are addressed in the context of human-computer interaction, predictions for the future are made, and the effect of digital marketing on autonomous vehicles is evaluated through examples. This study aims to present a comprehensive evaluation of the current applications and promising aspects of autonomous vehicles in the context of the automotive industry and human-computer interaction, and to classify these interactions. The review finds that as autonomy levels change, human-machine interaction turns into a collaboration, drivers come to be regarded as passengers, and the user experience becomes a travel experience.
- Published
- 2024
- Full Text
- View/download PDF
5. Social anthropology 4.0
- Author
-
Balthasar Mandy
- Subjects
collective intelligence, decision making, human-computer interaction, sociotechnical systems, Communication. Mass media, P87-96, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Human-computer interaction as a coordinating element between human and machine is used in many different ways. Due to their digital processes, countless industries depend on an effective intermeshing of humans and machines. This often involves preparatory work or sub-processes being carried out by machines, which humans initiate, take up, continue, finalise or check. Tasks are broken down into sub-steps and completed by humans or machines. Aggregated cooperation conceals the numerous challenges of hybrid cooperation, in which communication and coordination must be mastered in favour of joint decision-making. However, research into human-computer interaction can also be thought of differently than as a mere aggregation of humans and machines. We propose a nature-inspired possibility that has been successfully mastering the complex challenges of joint decision-making, as proof of successful communication and coordination, for millions of years. Collective intelligence and the processes of self-organisation offer biomimetic concepts that can be used to rethink socio-technical systems as a symbiosis in the form of a human-computer organism. For example, effects of self-organisation such as emergence could allow a future social anthropology 4.0 to exceed the result of a mere aggregation of humans and machines many times over.
- Published
- 2024
- Full Text
- View/download PDF
6. Design of Human–Computer Interaction Gesture Recognition System Based on a Flexible Biosensor
- Author
-
Qianhui Chen
- Subjects
Flexible sensor, Human–computer interaction, Gesture recognition system, Template matching, RBF neural network, PID control, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Abstract The continuous development of high-speed Internet technology has made the application of robots increasingly widespread. Current robots and human–computer interaction systems mostly use rigid materials, such as metals and semiconductors, which have limitations in terms of deformability and flexibility. In addition, the biocompatibility and user comfort of these materials are also an issue. Therefore, research into new flexible biosensors is essential to improve the flexibility, comfort, and interactivity of these systems. This research selects a polymer hydrogel as the electrode material and polydimethylsiloxane as the base material to design a resistive flexible biosensor that addresses the poor flexibility of existing sensors. A template-matching method is used to verify the feasibility of gesture recognition with the flexible sensor. The remote control system of the robot finger is designed with a proportional-integral-derivative (PID) controller tuned by a radial basis function (RBF) neural network. The feasibility of the system is verified by simulation and scene experiments. The flexible sensor prepared in this study had a sensitivity of 0.7269, a tensile limit of 300%, and a thickness of 0.16 mm, showing good sensitivity and stability. The recognition accuracy of the designed sensor was 92.8%, which was 8.1% higher than that of the data glove. Compared with traditional PID controllers, the improved controller kept the system error within 10⁻³ rad and had better adaptability and stability. Key information includes the design method of the flexible biosensor, its high sensitivity and stability under repeated stretching, and the proposal and validation of a new RBFNN–PID control model. These results showed that the new sensor and control model significantly improved the control accuracy of mechanical fingers and the effect of gesture recognition.
These results have important implications for the development of more advanced human–computer interaction systems. They not only improve the performance and reliability of the system but also enhance the user's interactive experience. These technologies are particularly promising in the fields of prosthetics for disabled people, advanced game controllers, and remotely controlled robots operating in hazardous environments. The research results are expected to lead to the development of advanced prosthetics, augmented reality devices, advanced game controllers, and automated robots. The main contribution of the research is the design of a resistive flexible biosensor, which improves on the poor flexibility and large size of traditional sensors and improves the ability to sense small changes. Future research may focus on further improving the sensor's long-term stability and performance under a variety of environmental conditions. In addition, commercializing these technologies and making them widely available is also an important direction for the future.
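As a rough illustration of the control loop being tuned, here is a minimal discrete PID controller driving a first-order plant toward a setpoint. All gains and plant constants are invented, and the paper's RBF-neural-network gain adaptation is omitted:

```python
# A sketch of a fixed-gain discrete PID loop on a first-order plant
# (standing in for a robot finger joint). Gains kp/ki/kd and the plant
# time constant tau are illustrative, not taken from the paper.

def pid_step(state, error, dt, kp, ki, kd):
    """One PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

def simulate(setpoint=1.0, steps=1000, dt=0.01, tau=0.1):
    angle = 0.0              # joint angle (rad), first-order plant
    state = (0.0, 0.0)
    for _ in range(steps):
        error = setpoint - angle
        u, state = pid_step(state, error, dt, kp=5.0, ki=2.0, kd=0.01)
        angle += dt * (u - angle) / tau   # d(angle)/dt = (u - angle) / tau
    return angle

final = simulate()   # settles near the 1.0 rad setpoint
```

An RBFNN-tuned variant, as in the paper, would adjust kp, ki, and kd online from the tracking error instead of keeping them fixed.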
- Published
- 2024
- Full Text
- View/download PDF
7. Children and adults produce distinct technology- and human-directed speech
- Author
-
Michelle Cohn, Santiago Barreda, Katharine Graf Estes, Zhou Yu, and Georgia Zellou
- Subjects
Speech adaptation, Human–computer interaction, Anthropomorphism, Children, Medicine, Science - Abstract
Abstract This study compares how English-speaking adults and children from the United States adapt their speech when talking to a real person and a smart speaker (Amazon Alexa) in a psycholinguistic experiment. Overall, participants produced more effortful speech when talking to a device (longer duration and higher pitch). These differences also varied by age: children produced even higher pitch in device-directed speech, suggesting a stronger expectation to be misunderstood by the system. In support of this, we see that after a staged recognition error by the device, children increased pitch even more. Furthermore, both adults and children displayed the same degree of variation in their responses for whether “Alexa seems like a real person or not”, further indicating that children’s conceptualization of the system’s competence shaped their register adjustments, rather than an increased anthropomorphism response. This work speaks to models on the mechanisms underlying speech production, and human–computer interaction frameworks, providing support for routinized theories of spoken interaction with technology.
- Published
- 2024
- Full Text
- View/download PDF
8. Evolution of interaction-free usage in the wake of AI
- Author
-
Herrmann Thomas
- Subjects
human-computer interaction, interaction-free usage, socio-technical systems, conversation, collaboration, Communication. Mass media, P87-96, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Interaction-free usage (IfU) will be one of the quantitatively dominant forms of computer use in the future. In qualitative terms, this form of use will cover a wide range of applications, including software that supports communication and cooperation. Digital twins for cooperation and communication will be employed by individual users to maintain a variety of social networking activities. Generative AI will play a decisive role in this development, autonomously identifying user needs and replacing prompting, the currently predominant form of use, with question-and-answer dialogs. These dialogs will also be used to preconfigure systems for IfU phases. The counterpart to IfU, which will become ever less frequent, is intervening interaction, in which users intervene to explore and adjust the performance of AI-based systems in exceptional situations or to optimize them for future task handling.
- Published
- 2024
- Full Text
- View/download PDF
9. Prediction Models of Collaborative Behaviors in Dyadic Interactions: An Application for Inclusive Teamwork Training in Virtual Environments
- Author
-
Ashwaq Zaini Amat, Abigale Plunk, Deeksha Adiani, D. Mitchell Wilkes, and Nilanjan Sarkar
- Subjects
human behavior recognition, human–computer interaction, probabilistic modeling, collaborative virtual environment, cross-neurotype collaboration, Applied mathematics. Quantitative methods, T57-57.97 - Abstract
Collaborative virtual environment (CVE)-based teamwork training offers a promising avenue for inclusive teamwork training. The incorporation of a feedback mechanism within virtual training environments can enhance the training experience by scaffolding learning and promoting active collaboration. However, an effective feedback mechanism requires a robust prediction model of collaborative behaviors. This paper presents a novel approach using hidden Markov models (HMMs) to predict human behavior in collaborative interactions based on multimodal signals collected from a CVE-based teamwork training simulator. The HMM was trained using k-fold cross-validation, achieving an accuracy of 97.77%. The HMM was evaluated against expert-labeled data and compared against a rule-based prediction model, demonstrating the superior predictive capabilities of the HMM, with the HMM achieving 90.59% accuracy compared to 76.53% for the rule-based model. These results highlight the potential of HMMs to predict collaborative behaviors that could be used in a feedback mechanism to enhance teamwork training experiences despite the complexity of these behaviors. This research contributes to advancing inclusive and supportive virtual learning environments, bridging gaps in cross-neurotype collaborations.
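The core computation behind an HMM-based behavior predictor is the forward algorithm, which scores how likely an observation sequence is under a trained model. A minimal discrete-observation sketch follows; the two hidden states, three observation symbols, and all probabilities are invented stand-ins for the multimodal CVE features the paper uses:

```python
# Forward-algorithm likelihood for a discrete-observation HMM.
# All parameters below are illustrative, not the paper's trained model.

def forward_likelihood(obs, start, trans, emit):
    """Return P(obs sequence) under the HMM via the forward recursion."""
    n = len(start)
    # Initialise with the start distribution weighted by the first emission.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
            for j in range(n)
        ]
    return sum(alpha)

# Two hidden states ("engaged", "disengaged"); three observation symbols,
# e.g. quantised gaze/gesture features from the simulator.
start = [0.6, 0.4]
trans = [[0.8, 0.2], [0.3, 0.7]]          # state transition probabilities
emit = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]  # emission probabilities

p = forward_likelihood([0, 1, 0], start, trans, emit)
```

In a classifier, one such likelihood would be computed per candidate behavior model and the highest-scoring model chosen.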
- Published
- 2024
- Full Text
- View/download PDF
10. Trust in automation and the accuracy of human–algorithm teams performing one-to-one face matching tasks
- Author
-
Daniel J. Carragher, Daniel Sturman, and Peter J. B. Hancock
- Subjects
Identity verification, Human–computer interaction, Face recognition, Human factors, Collaborative decision-making, Consciousness. Cognition, BF309-499 - Abstract
Abstract The human face is commonly used for identity verification. While this task was once exclusively performed by humans, technological advancements have seen automated facial recognition systems (AFRS) integrated into many identification scenarios. Although many state-of-the-art AFRS are exceptionally accurate, they often require human oversight or involvement, such that a human operator actions the final decision. Previously, we have shown that on average, humans assisted by a simulated AFRS (sAFRS) failed to reach the level of accuracy achieved by the same sAFRS alone, due to overturning the system’s correct decisions and/or failing to correct sAFRS errors. The aim of the current study was to investigate whether participants’ trust in automation was related to their performance on a one-to-one face matching task when assisted by a sAFRS. Participants (n = 160) completed a standard face matching task in two phases: an unassisted baseline phase, and an assisted phase where they were shown the identification decision (95% accurate) made by a sAFRS prior to submitting their own decision. While most participants improved with sAFRS assistance, those with greater relative trust in automation achieved larger gains in performance. However, the average aided performance of participants still failed to reach that of the sAFRS alone, regardless of trust status. Nonetheless, further analysis revealed a small sample of participants who achieved 100% accuracy when aided by the sAFRS. Our results speak to the importance of considering individual differences when selecting employees for roles requiring human–algorithm interaction, including identity verification tasks that incorporate facial recognition technologies.
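The trust-performance relationship the study reports can be illustrated with a toy Monte Carlo simulation. All numbers here are invented apart from the 95% algorithm accuracy mentioned in the abstract; the model simply assumes the human adopts the algorithm's decision with probability equal to their trust:

```python
import random

# Toy model of an aided face-matching task: an algorithm that is 95%
# accurate advises a human who is 80% accurate alone. The human follows
# the algorithm's decision with probability equal to their trust level.

def team_accuracy(trust, human_acc=0.80, afrs_acc=0.95,
                  trials=20000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        algo_right = rng.random() < afrs_acc
        human_right = rng.random() < human_acc
        follows = rng.random() < trust
        correct += algo_right if follows else human_right
    return correct / trials

low, high = team_accuracy(trust=0.2), team_accuracy(trust=0.9)
```

Under these assumptions, higher trust lifts team accuracy toward, but never past, the algorithm's own 95%, mirroring the finding that aided participants on average fell short of the sAFRS alone.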
- Published
- 2024
- Full Text
- View/download PDF
11. Comparing human text classification performance and explainability with large language and machine learning models using eye-tracking
- Author
-
Jeevithashree Divya Venkatesh, Aparajita Jaiswal, and Gaurav Nanda
- Subjects
Human-AI alignment, Large language models, Explainable AI, Eye tracking, Cognitive engineering, Human–computer interaction, Medicine, Science - Abstract
Abstract To understand the alignment between the reasoning of humans and artificial intelligence (AI) models, this empirical study compared human text classification performance and explainability with those of a traditional machine learning (ML) model and a large language model (LLM). A domain-specific noisy textual dataset of 204 injury narratives had to be classified into 6 cause-of-injury codes. The narratives varied in complexity and ease of categorization based on the distinctive nature of the cause-of-injury code. The user study involved 51 participants whose eye-tracking data were recorded while they performed the text classification task. While the ML model was trained on 120,000 pre-labelled injury narratives, the LLM and humans did not receive any specialized training. The explainability of the different approaches was compared based on the top words they used for making classification decisions. These words were identified using eye-tracking for humans, the explainable AI approach LIME for the ML model, and prompts for the LLM. The classification performance of the ML model was relatively better than that of the zero-shot LLM and non-expert humans, overall, and particularly for narratives with high complexity and difficult categorization. The top-3 predictive words used by the ML model and the LLM agreed with those of humans to a greater extent than later predictive words did.
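The explainability comparison described above reduces to measuring overlap between ranked word lists. A hypothetical sketch, with invented word lists standing in for the eye-tracking, LIME, and LLM outputs:

```python
# Top-k word agreement between "classifiers". The injury-narrative word
# lists below are invented examples, not data from the study.

def topk_agreement(words_a, words_b, k=3):
    """Fraction of A's top-k words that also appear in B's top-k."""
    a, b = set(words_a[:k]), set(words_b[:k])
    return len(a & b) / k

human = ["ladder", "fell", "roof", "height"]    # most-fixated words (eye-tracking)
ml = ["fell", "ladder", "fracture", "roof"]     # highest LIME feature weights
llm = ["ladder", "roof", "slipped", "fell"]     # words cited in the LLM's answer

agree_ml = topk_agreement(human, ml)    # "ladder" and "fell" shared
agree_llm = topk_agreement(human, llm)  # "ladder" and "roof" shared
```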
- Published
- 2024
- Full Text
- View/download PDF
12. An enhanced speech emotion recognition using vision transformer
- Author
-
Samson Akinpelu, Serestina Viriri, and Adekanmi Adegun
- Subjects
Human–computer interaction, Deep learning, Speech emotion recognition, CNN, Vision transformer, Mel spectrogram, Medicine, Science - Abstract
Abstract In human–computer interaction systems, speech emotion recognition (SER) plays a crucial role because it enables computers to understand and react to users’ emotions. In the past, SER has significantly emphasised acoustic properties extracted from speech signals. The use of visual signals for enhancing SER performance, however, has been made possible by recent developments in deep learning and computer vision. This work utilizes a lightweight Vision Transformer (ViT) model to propose a novel method for improving speech emotion recognition. We leverage the ViT model’s capability to capture spatial dependencies and high-level features, which are adequate indicators of emotional states, from the mel spectrogram input fed into the model. To determine the efficiency of our proposed approach, we conduct a comprehensive experiment on two benchmark speech emotion datasets, the Toronto English Speech Set (TESS) and the Berlin Emotional Database (EMODB). The results of our extensive experiment demonstrate a considerable improvement in speech emotion recognition accuracy, attesting to the method's generalizability: it achieved 98% accuracy on TESS, 91% on EMODB, and 93% on the combined TESS-EMODB set. The comparative experiment shows that the non-overlapping patch-based feature extraction method substantially improves on other state-of-the-art techniques. Our research indicates the potential of integrating vision transformer models into SER systems, opening up fresh opportunities for real-world applications requiring accurate emotion recognition from speech.
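The non-overlapping patch extraction that feeds a ViT can be sketched with a reshape. Sizes below are illustrative, and a real front-end would add a learned linear projection and position embeddings:

```python
import numpy as np

# Split a mel spectrogram into the non-overlapping patches a ViT
# consumes as tokens. Patch size and spectrogram shape are illustrative.

def to_patches(spec, patch=16):
    """Split an (H, W) spectrogram into flattened non-overlapping patches."""
    h, w = spec.shape
    h, w = h - h % patch, w - w % patch   # crop to a multiple of the patch size
    spec = spec[:h, :w]
    patches = (spec.reshape(h // patch, patch, w // patch, patch)
                   .transpose(0, 2, 1, 3)       # (row-block, col-block, p, p)
                   .reshape(-1, patch * patch))  # one flattened token per patch
    return patches

mel = np.random.rand(128, 200)   # 128 mel bands x 200 time frames
tokens = to_patches(mel)         # 8 x 12 = 96 tokens of length 256
```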
- Published
- 2024
- Full Text
- View/download PDF
13. VisCI: A visualization framework for anomaly detection and interactive optimization of composite index
- Author
-
Zhiguang Zhou, Yize Li, Yuna Ni, Weiwen Xu, Guoting Hu, Ying Lai, Peixiong Chen, and Weihua Su
- Subjects
Anomaly detection, Composite index, Human–computer interaction, Visual analysis, Information technology, T58.5-58.64 - Abstract
A composite index is typically derived by weighted aggregation of hierarchical components and is widely used to distill intricate, multidimensional matters in economic and business statistics. However, composite indices inevitably present anomalies at different levels, originating in the calculation and expression of their hierarchical components, thereby impairing the precise depiction of specific economic issues. In this paper, we propose VisCI, a visualization framework for anomaly detection and interactive optimization of composite indices. First, an LSTM autoencoder (LSTM-AE) model detects anomalies from the lower levels to the higher levels of the composite index. Then, a comprehensive array of visual cues is designed to visualize anomalies, such as hierarchy and anomaly visualization. In addition, interactive operations are provided to ensure accurate and efficient index optimization, mitigating the adverse impact of anomalies on index calculation and representation. Finally, we implement the visualization framework with interactive interfaces, facilitating both anomaly detection and intuitive composite index optimization. Case studies based on real-world datasets and expert interviews demonstrate the effectiveness of VisCI in commodity index anomaly exploration and optimization.
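A common way to turn an autoencoder such as an LSTM-AE into an anomaly detector is to threshold its reconstruction error. A sketch with placeholder data follows; the thresholding rule and all numbers are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Flag points whose reconstruction error is far above the typical error.
# The "reconstruction" array stands in for an autoencoder's output.

def flag_anomalies(series, reconstruction, z=3.0):
    """Return indices where |series - reconstruction| exceeds mean + z*std."""
    err = np.abs(series - reconstruction)
    thresh = err.mean() + z * err.std()
    return np.where(err > thresh)[0]

series = np.array([1.0, 1.1, 0.9, 5.0, 1.0, 1.05])   # one index component
recon = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])     # placeholder AE output
anom = flag_anomalies(series, recon, z=2.0)           # index 3 stands out
```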
- Published
- 2024
- Full Text
- View/download PDF
14. Human-computer interactions with farm animals—enhancing welfare through precision livestock farming and artificial intelligence
- Author
-
Suresh Neethirajan, Stacey Scott, Clara Mancini, Xavier Boivin, and Elizabeth Strand
- Subjects
precision livestock farming, animal-computer interaction, artificial intelligence, farm animal welfare, human-computer interaction, one welfare, Veterinary medicine, SF600-1100 - Abstract
While user-centered design approaches stemming from the human-computer interaction (HCI) field have notably improved the welfare of companion, service, and zoo animals, their application in farm animal settings remains limited. This shortfall has catalyzed the emergence of animal-computer interaction (ACI), a discipline extending technology’s reach to a multispecies user base involving both animals and humans. Despite significant strides in other sectors, the adaptation of HCI and ACI (collectively HACI) to farm animal welfare—particularly for dairy cows, swine, and poultry—lags behind. Our paper explores the potential of HACI within precision livestock farming (PLF) and artificial intelligence (AI) to enhance individual animal welfare and address the unique challenges within these settings. It underscores the necessity of transitioning from productivity-focused to animal-centered farming methods, advocating for a paradigm shift that emphasizes welfare as integral to sustainable farming practices. Emphasizing the ‘One Welfare’ approach, this discussion highlights how integrating animal-centered technologies not only benefits farm animal health, productivity, and overall well-being but also aligns with broader societal, environmental, and economic benefits, considering the pressures farmers face. This perspective is based on insights from a one-day workshop held on June 24, 2024, which focused on advancing HACI technologies for farm animal welfare.
- Published
- 2024
- Full Text
- View/download PDF
15. A comprehensive review on NUI, multi-sensory interfaces and UX design for applications and devices for visually impaired users
- Author
-
Lauryn Arora, Akansh Choudhary, Margi Bhatt, Jayakumar Kaliappan, and Kathiravan Srinivasan
- Subjects
user experience, user interface, human–computer interaction, visually impaired, digital health, Public aspects of medicine, RA1-1270 - Abstract
In today’s world, there has been a significant increase in the use of devices, gadgets, and mobile applications in our daily activities. Although this has had a significant impact on the lives of the general public, people who are Partially Visually Impaired (PVI), a category covering a broad range of vision loss from mild to severe impairments, and Completely Visually Impaired (CVI), who have no light perception, still face significant obstacles when trying to access and use these technologies. This review article aims to provide an overview of the NUI, Multi-sensory Interfaces and UX Design (NMUD) of apps and devices specifically tailored to CVI and PVI individuals. The article begins by emphasizing the importance of accessible technology for the visually impaired and the need for a human-centered design approach. It presents a taxonomy of essential design components considered during the development of applications and gadgets for individuals with visual impairments. Furthermore, the article sheds light on the existing challenges that need to be addressed to improve the design of apps and devices for CVI and PVI individuals. These challenges include usability, affordability, and accessibility issues; common problems include battery life, lack of user control, system latency, and limited functionality. Lastly, the article discusses future research directions for the design of accessible apps and devices for visually impaired individuals. It emphasizes the need for more user-centered design approaches, adherence to guidelines such as the Web Content Accessibility Guidelines, the application of e-accessibility principles, the development of more accessible and affordable technologies, and the integration of these technologies into the wider assistive technology ecosystem.
- Published
- 2024
- Full Text
- View/download PDF
16. A Human‐Computer Interaction Strategy for An FPGA Platform Boosted Integrated 'Perception‐Memory' System Based on Electronic Tattoos and Memristors
- Author
-
Yang Li, Zhicheng Qiu, Hao Kan, Yang Yang, Jianwen Liu, Zhaorui Liu, Wenjing Yue, Guiqiang Du, Cong Wang, and Nam‐Young Kim
- Subjects
electronic tattoo, FPGA platform, human-computer interaction, integrated system, memristor, Science - Abstract
Abstract The integrated “perception-memory” system is receiving increasing attention due to its crucial applications in humanoid robots, as well as in the simulation of the human retina and brain. Here, a Field Programmable Gate Array (FPGA) platform-boosted system that enables sensing, recognition, and memory for human-computer interaction is reported, built from the combination of ultra-thin Ag/Al/Paster-based electronic tattoos (AAP) and Tantalum Oxide/Indium Gallium Zinc Oxide (Ta2O5/IGZO)-based memristors. Notably, the AAP demonstrates exceptional capability in accommodating the strain caused by skin deformation, thanks to its unique structural design, which ensures a secure fit to the skin and enables prolonged monitoring of physiological signals. By utilizing Ta2O5/IGZO as the functional layer, a high switching ratio is conferred on the memristor, and together with the AAP an integrated system is constructed for sensing, distinguishing, and storing multiple human physiological signals and using them to control a machine hand. Further, the proposed system implements emergency calls and smart-home control using facial electromyogram signals and utilizes logical entailment to realize control of the music interface. This innovative “perception-memory” integrated system not only serves disabled users, enhancing human-computer interaction, but also provides an alternative avenue to enhance the quality of life and autonomy of individuals with disabilities.
- Published
- 2024
- Full Text
- View/download PDF
17. HYBRIDMINDS—summary and outlook of the 2023 international conference on the ethics and regulation of intelligent neuroprostheses
- Author
-
Maria Buthut, Georg Starke, Tugba Basaran Akmazoglu, Annalisa Colucci, Mareike Vermehren, Amanda van Beinum, Christoph Bublitz, Jennifer Chandler, Marcello Ienca, and Surjo R. Soekadar
- Subjects
neurotechnology, neuroprosthetics, brain-computer interface, human-computer interaction, neuroethics, neurorights, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Neurotechnology and Artificial Intelligence (AI) have developed rapidly in recent years with an increasing number of applications and AI-enabled devices that are about to enter the market. While promising to substantially improve quality of life across various severe medical conditions, there are also concerns that the convergence of these technologies, e.g., in the form of intelligent neuroprostheses, may have undesirable consequences and compromise cognitive liberty, mental integrity, or mental privacy. Therefore, various international organizations, such as the Organization for Economic Cooperation and Development (OECD) or United Nations Educational, Scientific and Cultural Organization (UNESCO), have formed initiatives to tackle such questions and develop recommendations that mitigate risks while fostering innovation. In this context, a first international conference on the ethics and regulation of intelligent neuroprostheses was held in Berlin, Germany, in autumn 2023. The conference gathered leading experts in neuroscience, engineering, ethics, law, philosophy as well as representatives of industry, policy making and the media. Here, we summarize the highlights of the conference, underline the areas in which a broad consensus was found among participants, and provide an outlook on future challenges in development, deployment, and regulation of intelligent neuroprostheses.
- Published
- 2024
- Full Text
- View/download PDF
18. Virtual Reality Shopping-Insights: A data-driven framework to assist the design and development of Virtual Reality shopping environments
- Author
-
Rubén Grande, Javier A. Albusac, David Vallejo, Carlos González-Morcillo, Santiago Sánchez-Sobrino, and José J. Castro-Schez
- Subjects
Virtual reality, VR shopping, Behaviour analysis, Human–computer interaction, Software development, Marketing, Computer software, QA76.75-76.765 - Abstract
In this paper, we present Virtual Reality Shopping Insights (VRSI), a framework that aids the development of Virtual Reality (VR) shopping applications and the collection and analysis of their data. VRSI helps software developers and researchers by abstracting the layers needed to set up VR technology in applications developed with Unity. Moreover, it provides data registration tools that monitor user activity from non-invasive data sources. This data can help marketing analysts understand user behaviour inside V-commerce applications, enabling them to improve the setup and layout of such environments with data-driven decisions. Furthermore, we present an example VR shopping application developed with VRSI.
- Published
- 2024
- Full Text
- View/download PDF
19. Voice accentedness, but not gender, affects social responses to a computer tutor
- Author
-
Allison Jones and Georgia Zellou
- Subjects
voice gender ,accentedness ,human-computer interaction ,social evaluation ,learning ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The current study had two goals: First, we aimed to conduct a conceptual replication and extension of a classic study by Nass et al. (1997), who found that participants display voice-gender bias when completing a tutoring session with a computer. In the present study, we used a more modern paradigm (i.e., app-based tutoring) and commercially available TTS voices. Second, we asked whether participants provided different social evaluations of non-native-accented and native-accented American English-speaking machines. In our study, 85 American participants completed a tutoring session with a system designed to look like a device application (we called it a “TutorBot”). Participants were presented with facts related to two topics: ‘love and relationships’ and ‘computers and technology’. Tutoring was provided either by a female or a male TTS voice. Participants heard either native-English-accented voices or non-native-English-accented (here, Castilian Spanish-accented) voices. Overall, we find no effect of voice gender on any of the dependent measures: listeners recalled facts and rated female and male voices equivalently across topics and conditions. Yet participants rated non-native-accented TTS voices as less competent, less knowledgeable, and less helpful after completing the tutoring session. Finally, when participants were tutored on facts related to ‘love and relationships’, they showed better accuracy at recall and provided higher ratings for app competency, likeability, and helpfulness (and knowledgeable, but only for native-accented voices). These results are relevant for theoretical understandings of human-computer interaction, particularly the extent to which human-based social biases are transferred to machines, as well as for applications to voice-AI system design and usage.
- Published
- 2024
- Full Text
- View/download PDF
20. Gamifying cultural heritage: Exploring the potential of immersive virtual exhibitions
- Author
-
Hanbing Wang, Ze Gao, Xiaolin Zhang, Junyan Du, Yidan Xu, and Ziqi Wang
- Subjects
Cultural Heritage ,Gamification ,Human–computer interaction ,Immersive virtual exhibition ,Review ,Information technology ,T58.5-58.64 ,Telecommunication ,TK5101-6720 - Abstract
This paper reviews the potential of gamified cultural heritage in immersive virtual exhibitions. A systematic literature review following PRISMA guidelines identified 78 relevant papers from ACM and IEEE databases. Gamification and immersive technologies can provide interactive experiences to engage visitors and enhance their understanding of exhibits’ historical and cultural significance. Theoretical frameworks, including gamification theory, heritage interpretation theory, participatory heritage, immersive experience theory, and pedagogy, guide designing compelling experiences. Case studies like “Rome Reborn”, “Sutton House Stories”, and “Assassin’s Creed: Origins” demonstrate the efficacy of gamification in disseminating heritage. Key strategies include integrating augmented/virtual reality, multimodal data and 3D reconstruction, interactive narratives and gameplay, personalized experiences, advanced interfaces, balancing education and entertainment, and ensuring cultural sensitivity. Future work can explore AI-adaptive experiences, AR/VR integration, remote collaboration, educational game elements, and digital creativity models. Gamification and immersion provide innovative preservation and inheritance of cultural heritage. This review promotes digitalization and identifies literature gaps, supporting reflection on engagement’s past, present, and future. It aims to enable a broader appreciation of cultural heritage through technology.
- Published
- 2024
- Full Text
- View/download PDF
21. The role of digital technologies and Human-Computer Interaction for the future of education
- Author
-
Michael Herczeg
- Subjects
future of education ,e-learning ,human-computer interaction ,artificial intelligence ,Communication. Mass media ,P87-96 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Over the last 50 years, education has been transformed by digital technologies. Many efforts have been made to create and apply “digital” teaching and learning methods, tools, and platforms. The last 25 years of computer-based education can be characterized by the availability of digital information sources and the implementation and operation of Internet-based digital learning management platforms. What further meaningful and effective progress the next decades will bring is openly debated. Many expect strong influences and changes from artificial intelligence systems that generate contextualized information from various sources. Some see interactive virtual worlds expanding and partially replacing the physical world. Many believe in the further development of learning management and communication platforms. Others do not expect much valuable change at all, given the slow pace of complex educational systems, with recent studies even showing a decline in the quality of educational outcomes over the last five years as a result of excessive digitalization in education. This paper discusses these positions with an emphasis on the roles of humans and computers and their interfaces, i.e., Human-Computer Interaction, in future learning and teaching with rapidly changing information technologies over the next 25 and 50 years.
- Published
- 2024
- Full Text
- View/download PDF
22. Robust sign language detection for hearing disabled persons by Improved Coyote Optimization Algorithm with deep learning
- Author
-
Mashael M Asiri, Abdelwahed Motwakel, and Suhanda Drar
- Subjects
coyote optimization algorithm ,disabled persons ,human-computer interaction ,machine learning ,sign language detection ,Mathematics ,QA1-939 - Abstract
Sign language (SL) recognition for individuals with hearing disabilities involves leveraging machine learning (ML) and computer vision (CV) approaches for interpreting and understanding SL gestures. By employing cameras and deep learning (DL) approaches, namely convolutional neural networks (CNN) and recurrent neural networks (RNN), these models analyze facial expressions, hand movements, and body gestures connected with SL. The major challenges in SL recognition comprise the diversity of signs, differences in signing styles, and the need to recognize the context in which signs are used. Therefore, this manuscript develops SL detection using an Improved Coyote Optimization Algorithm with DL (SLR-ICOADL) for hearing disabled persons. The goal of the SLR-ICOADL technique is an accurate detection model that enables communication for persons using SL as their primary means of expression. At the initial stage, the SLR-ICOADL technique applies a bilateral filtering (BF) approach for noise elimination. Following this, the SLR-ICOADL technique uses Inception-ResNetv2 for feature extraction. Meanwhile, the ICOA is utilized to select the optimal hyperparameter values of the DL model. At last, an extreme learning machine (ELM) classification model is utilized for the recognition of various kinds of signs. To exhibit the better performance of the SLR-ICOADL approach, a detailed set of experiments is performed. The experimental outcomes emphasize that the SLR-ICOADL technique achieves promising performance in the SL detection process.
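The final stage named in the abstract, the extreme learning machine, is simple enough to sketch in a few lines: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form. The following is a minimal illustrative numpy sketch, not the authors' implementation; the synthetic features standing in for Inception-ResNetv2 outputs and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=32):
    """Fit an extreme learning machine: random hidden layer, linear solve."""
    n_classes = int(y.max()) + 1
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy feature vectors standing in for Inception-ResNetv2 outputs
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # synthetic two-class labels
W, b, beta = elm_train(X, y)
acc = (elm_predict(X, W, b, beta) == y).mean()
print("training accuracy:", round(float(acc), 2))
```

Because only `beta` is learned, and via a single pseudoinverse at that, training amounts to one linear solve, which is what makes ELMs attractive as lightweight classifier heads on top of deep features.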
- Published
- 2024
- Full Text
- View/download PDF
23. Innovative application of artificial intelligence in a multi-dimensional communication research analysis: a critical review
- Author
-
Muhammad Asif and Zhou Gouqing
- Subjects
Artificial intelligence ,Communication ,Human–computer interaction ,Social formations ,Media studies ,Computational linguistics. Natural language processing ,P98-98.5 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Abstract Artificial intelligence (AI) imitates the human brain’s capacity for problem-solving and decision-making by using computers and other devices. People engage with artificial intelligence-enabled products such as virtual agents, social bots, and language-generation software, to name a few. The paradigms of communication theory, which have historically put a significant focus on human-to-human communication, do not easily accommodate these devices. AI in multidimensional communication is the subject of this review article, which provides a comprehensive analysis of the most recent research published in the field of AI, specifically related to communication. Additionally, we considered several theories and models (communication theory, AI-based persuasion theory, social exchange theory, Frames of mind, the neural network model, the L-LDA model, and the routine model) to explain a complex phenomenon and to create a conceptual framework appropriate for this goal: a voluntary relationship between two or more people that lasts for an extended period. Communication and media studies focus on human–machine communication (HMC), a rapidly developing research area. We intend to continue investigating the beneficial and detrimental effects of artificial intelligence on human communication, as well as to identify novel concepts, theories, and challenges as the research process develops.
- Published
- 2024
- Full Text
- View/download PDF
24. Exploring the efficacy of collaborative learning in a remote robotics laboratory: a comparative analysis of performance and pedagogical approaches
- Author
-
Long Teng, Yuk Ming Tang, Raymond P. H. Wu, Gary C. P. Tsui, Yung Po Tsang, and Chak Yin Tang
- Subjects
Remote laboratory ,Collaborative learning ,Robotics ,Human–computer interaction ,Special aspects of education ,LC8-6691 - Abstract
Abstract In today's world, remote-controlled robots are widely used across various industries due to their ability to enhance working efficiency in a wide range of applications. Learning about robot operation and human–computer interaction has recently emerged as a popular topic. Indeed, learning robotics can be challenging for many students, as it requires knowledge of programming, control systems, electronics, etc. Collaborative learning in a physical robotics setting is common in higher education and has received significant attention for its potential to enhance individual learning outcomes. However, the effectiveness of learning robotics in a remote setting is still a matter of debate. In this study, we establish a remote laboratory environment to teach undergraduate students in the engineering discipline. Students are required to utilize a robotic arm to grasp designated objects collaboratively through synchronous interactions online. To compare students’ performance under different pedagogical teaching approaches, students are divided into two groups. Each group performs the task both individually and collaboratively, albeit in a different order. Our study adopts a quantitative method to measure students' learning outcomes based on the assessment of the laboratory tasks and completion time. The results indicate a noteworthy improvement in the individual performance of the group of students who engaged in collaborative work prior to the individual tasks. These findings have implications for other remote laboratory setups and highlight the effectiveness of collaborative learning in higher education.
- Published
- 2024
- Full Text
- View/download PDF
25. A bonus task boosts people's willingness to offload cognition to an algorithm
- Author
-
Basil Wahn and Laura Schmitz
- Subjects
Cognitive offloading ,Human–computer collaboration ,Human–computer interaction ,Social cognition ,Algorithmic aversion ,Algorithmic appreciation ,Consciousness. Cognition ,BF309-499 - Abstract
Abstract With the increased sophistication of technology, humans can offload a variety of tasks to algorithms. Here, we investigated whether the extent to which people are willing to offload an attentionally demanding task to an algorithm is modulated by the availability of a bonus task and by knowledge about the algorithm’s capacity. Participants performed a multiple object tracking (MOT) task which required them to visually track targets on a screen. Participants could offload an unlimited number of targets to a “computer partner”. If participants decided to offload the entire task to the computer, they could instead perform a bonus task which resulted in additional financial gain—however, this gain was conditional on a high performance accuracy in the MOT task. Thus, participants should only offload the entire task if they trusted the computer to perform accurately. We found that participants were significantly more willing to completely offload the task if they were informed beforehand that the computer’s accuracy was flawless (Experiment 1 vs. 2). Participants’ offloading behavior was not significantly affected by whether the bonus task was incentivized or not (Experiment 2 vs. 3). These results, combined with those from our previous study (Wahn et al. in PLoS ONE 18:e0286102, 2023), which did not include a bonus task but was identical otherwise, show that the human willingness to offload an attentionally demanding task to an algorithm is considerably boosted by the availability of a bonus task—even if not incentivized—and by knowledge about the algorithm’s capacity.
- Published
- 2024
- Full Text
- View/download PDF
26. Modeling perceiving of recommendations provided by clinical decision support system based on predictive modeling within dental preventive screening
- Author
-
Alexander N. Soldatov, Ivan K. Soldatov, and Sergey V. Kovalchuk
- Subjects
decision support systems ,human-computer interaction ,predictive modeling ,machine learning ,caries ,preventive screening ,Optics. Light ,QC350-467 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
We present the results of a study of how dentists perceive clinical decision support systems (CDSS) during preventive screening in schools of the Russian Ministry of Defense (cadet corps). Using this scenario, a machine learning-based CDSS prototype was evaluated. To assess perception, a survey was conducted that demonstrated the prototype’s results and measured the perceived characteristics of the provided predictive modeling results. A model based on a Bayesian network was built to evaluate the considered indicators; it demonstrated improved prediction quality for the perceived indicators by taking into account latent states of the operator’s subjective perception. We plan to use the proposed approach in the future to increase the efficiency of doctor-CDSS interaction.
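The abstract's idea of inferring a latent perception state can be illustrated with the smallest possible Bayesian network: one latent node influencing one observed node. This is a toy sketch with made-up probabilities and node names, not the study's model.

```python
# Minimal two-node Bayesian network: latent "trust in the CDSS" (T)
# influences the observed "accepts the recommendation" (A).
# All probabilities are illustrative, not taken from the study.
p_trust = 0.6                        # prior P(T = 1)
p_accept_given = {1: 0.9, 0: 0.3}    # likelihood P(A = 1 | T)

# Posterior over the latent state after observing acceptance (Bayes' rule)
num = p_accept_given[1] * p_trust
den = num + p_accept_given[0] * (1 - p_trust)
posterior = num / den
print("P(T = 1 | A = 1) =", round(posterior, 3))
```

Observing acceptance raises the belief in the latent trust state above its prior (0.6), because acceptance is more likely under trust; a full Bayesian network chains many such updates across nodes.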
- Published
- 2024
- Full Text
- View/download PDF
27. Where scrollbars are clicked, and why
- Author
-
Oliver Herbort, Philipp Raßbach, and Wilfried Kunde
- Subjects
Scrolling ,Anticipation ,Action planning ,Human–computer interaction ,Consciousness. Cognition ,BF309-499 - Abstract
Abstract Scrolling is a widely used means of interacting with visual displays, usually to move content to a certain target location on the display. Understanding how users scroll might identify potentially suboptimal use and allows us to infer users’ intentions. In the present study, we examined where users click on a scrollbar depending on the intended scrolling action. In two online experiments, click positions were systematically adapted to the intended scrolling action. Click position selection could not be explained as strict optimization of the distance traveled with the cursor, memory load, or motor-cognitive factors. By contrast, for identical scrolling actions, click positions strongly depended on the context and on previous scrolls. The behavior of our participants closely resembled behavior observed for the manipulation of other physical devices and suggested a simple heuristic of movement planning. The results have implications for modeling human–computer interaction and may contribute to predicting user behavior.
- Published
- 2024
- Full Text
- View/download PDF
28. Application of adaptive intelligent human-machine interaction in underwater command and control systems
- Author
-
NING Yunhui, CHEN Ke, YOU Yue, ZHOU Yun, JIAO Yuan
- Subjects
intelligent ,adaptive interface ,human-computer interaction ,command and control ,Military Science - Abstract
Intelligent command and control has become a new style of modern warfare command, and the highly dynamic, strongly adversarial, massive-data battlefield environment places higher requirements on command efficiency and task execution ability. The traditional human-computer interaction mode has low interaction efficiency and lacks human-centered interaction design, and therefore cannot fully exploit the effectiveness of underwater command and control. This article draws on intelligent human-machine interaction and principal component analysis to design an interaction module that can generate an adaptive interface for underwater command and control based on user and operational needs. By building an underwater command and control intelligent simulation platform, it is confirmed that adaptive intelligent human-machine interaction can simplify the command and control process, reduce interaction time, and improve combat efficiency, showing strong application value.
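Principal component analysis, which the article uses when deriving the adaptive interface, reduces many interaction measurements to a few dominant components. A minimal numpy sketch under assumed, synthetic data; the feature names in the comments are illustrative, not the article's variables:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows: interaction sessions; columns: hypothetical usage features
# (e.g., click frequency, dwell time, task switches, alert responses).
X = rng.normal(size=(50, 6))
X[:, 0] = 3 * X[:, 1] + rng.normal(scale=0.1, size=50)  # two correlated features

Xc = X - X.mean(axis=0)                  # center each feature
C = (Xc.T @ Xc) / (len(Xc) - 1)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # principal components, largest first
explained = eigvals[order] / eigvals.sum()
scores = Xc @ eigvecs[:, order[:2]]      # project sessions onto top-2 components
print("variance explained by top-2 components:", explained[:2].round(2))
```

An adaptive interface module could then rank or cluster sessions in this low-dimensional score space instead of the raw feature space, which is one common motivation for applying PCA to interaction logs.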
- Published
- 2024
- Full Text
- View/download PDF
29. Concept, Task, and Application of Social Robots in Information Behavior Research
- Author
-
LIU Yang, LYU Shuyue, LI Ruojun
- Subjects
social robot ,information behavior ,human-computer interaction ,development context ,systematic review ,Bibliography. Library science. Information resources ,Agriculture - Abstract
[Purpose/Significance] The emergence of social robots represents a development trend toward closer human-computer interaction. However, the study of the information behavior of social robots faces many challenges that arise from the need to simulate human social behavior. These challenges include technical hurdles such as multi-level understanding of human emotions, extraction of multi-modal information features, and situational awareness, as well as the establishment of long-term user profiling, data privacy, and ethical considerations in personalized interaction. Moreover, existing research tends to focus narrowly on specific applications and lacks a holistic review. This paper attempts to provide a thorough review of both domestic and international studies of social robots in the area of information behavior. It aims to elucidate the theoretical evolution and technological foundations of social robots, thereby enriching our understanding of their role in the landscape of information behavior research. [Method/Process] Using a rigorous literature review methodology, we analyze the current state and prospective trajectory of research on the information behavior of social robots. First, we extract and scrutinize the theoretical foundations and salient research topics within the field. We then delineate the core tasks of social robots, which include data acquisition, language processing, emotion analysis, information retrieval, and intelligent communication. Furthermore, we synthesize research on the information behavior of social robots in application domains such as education, healthcare, and the service sector. We delve into the intricacies of human-computer interaction in these contexts and provide comprehensive insights. Finally, we explore future directions in the field. [Results/Conclusions] Our examination of the information behavior of social robots reveals both promising potential and notable challenges.
This paper provides a fundamental elucidation of the social robot concept, identifies current research foci, and addresses prevailing challenges. Regarding the construction of data resources and related technologies, we systematically delineate the task architecture of social robots and highlight their wide-ranging applications in various domains. Furthermore, we provide an in-depth examination of human-computer interaction scenarios in critical domains such as education, healthcare, and service delivery, offering prescient guidance for future research efforts in social robotics. Nonetheless, our findings underscore the nascent stage of development of social robotics, which requires a concerted focus on advancing interaction quality assessment, enhancing social cognitive capabilities, managing user information disclosure, and refining emotional intelligence. By prioritizing these avenues, we aim to improve the quality of human-robot interaction and provide users with enriched and personalized service experiences, thereby catalyzing the continued evolution and broader integration of social robotics technology.
- Published
- 2024
- Full Text
- View/download PDF
30. Diagnostic decisions of specialist optometrists exposed to ambiguous deep-learning outputs
- Author
-
Josie Carmichael, Enrico Costanza, Ann Blandford, Robbert Struyven, Pearse A. Keane, and Konstantinos Balaskas
- Subjects
Ophthalmology ,OCT ,Retinal disease ,Artificial intelligence ,Human–computer interaction ,Medicine ,Science - Abstract
Abstract Artificial intelligence (AI) has great potential in ophthalmology. We investigated how ambiguous outputs from an AI diagnostic support system (AI-DSS) affected diagnostic responses from optometrists when assessing cases of suspected retinal disease. Thirty optometrists (15 more experienced, 15 less) assessed 30 clinical cases. For ten, participants saw an optical coherence tomography (OCT) scan, basic clinical information and retinal photography (‘no AI’). For another ten, they were also given AI-generated OCT-based probabilistic diagnoses (‘AI diagnosis’); and for ten, both AI-diagnosis and AI-generated OCT segmentations (‘AI diagnosis + segmentation’) were provided. Cases were matched across the three types of presentation and were selected to include 40% ambiguous and 20% incorrect AI outputs. Optometrist diagnostic agreement with the predefined reference standard was lowest for ‘AI diagnosis + segmentation’ (204/300, 68%) compared to ‘AI diagnosis’ (224/300, 75% p = 0.010), and ‘no AI’ (242/300, 81%, p =
- Published
- 2024
- Full Text
- View/download PDF
31. Deep convolutional neural network-based Leveraging Lion Swarm Optimizer for gesture recognition and classification
- Author
-
Mashael Maashi, Mohammed Abdullah Al-Hagery, Mohammed Rizwanullah, and Azza Elneil Osman
- Subjects
human-computer interaction ,swarm intelligence ,cnn model ,gesture recognition ,transfer learning ,Mathematics ,QA1-939 - Abstract
Vision-based human gesture detection is the task of forecasting a gesture, such as clapping, a sign language gesture, or waving hello, from video frames. One attractive feature of gesture detection is that it allows humans to interact with devices and computers without an external input tool such as a remote control or a mouse. Gesture detection from videos has various applications, such as robot learning and the control of consumer electronics, computer games, and mechanical systems. This study leverages the Lion Swarm Optimizer with a deep convolutional neural network (LSO-DCNN) for gesture recognition and classification. The purpose of the LSO-DCNN technique lies in the proper identification and categorization of the various categories of gestures present in the input images. The presented LSO-DCNN model follows a three-step procedure. In the first step, a 1D convolutional neural network (1D-CNN) derives a collection of feature vectors. In the second step, the LSO algorithm optimally chooses the hyperparameter values of the 1D-CNN model. In the final step, an extreme gradient boosting (XGBoost) classifier allocates the proper classes, i.e., it recognizes the gestures efficaciously. To demonstrate the enhanced gesture classification results of the LSO-DCNN approach, a wide range of experiments was conducted. The brief comparative study reported the improvements of the LSO-DCNN technique in the gesture recognition process.
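The middle step, swarm-based hyperparameter selection, can be illustrated with a generic population search in which candidates drift toward the best solution found so far. This is only a loose, illustrative analogue of the Lion Swarm Optimizer, not the paper's algorithm; the fitness surface, parameter names, and ranges are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(lr, depth):
    # Stand-in for validation accuracy of a downstream classifier;
    # this toy surface peaks at lr = 0.1 and depth = 6.
    return -((np.log10(lr) + 1) ** 2 + ((depth - 6) / 4) ** 2)

# Population of candidate hyperparameter sets: (learning rate, depth)
pop = [(10 ** rng.uniform(-4, 0), int(rng.integers(2, 12))) for _ in range(10)]
best = max(pop, key=lambda p: fitness(*p))

for _ in range(30):
    new = []
    for lr, depth in pop:
        # Drift toward the best candidate in log-space, with noise for exploration
        lr = 10 ** (np.log10(lr) + 0.3 * (np.log10(best[0]) - np.log10(lr))
                    + rng.normal(scale=0.1))
        depth = int(np.clip(depth + round(0.5 * (best[1] - depth))
                            + rng.integers(-1, 2), 2, 12))
        new.append((lr, depth))
    pop = new
    cand = max(pop, key=lambda p: fitness(*p))
    if fitness(*cand) > fitness(*best):
        best = cand

print("best hyperparameters found:", best)
```

In the paper's pipeline, `fitness` would be replaced by the 1D-CNN's validation score, so each swarm step requires retraining the model, which is why such searches are usually run with small populations and early stopping.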
- Published
- 2024
- Full Text
- View/download PDF
32. Harnessing the stream: algorithmic imaginary and coping strategies for live-streaming e-commerce entrepreneurs on Douyin
- Author
-
Xinlu Wang and Shule Cao
- Subjects
Algorithmic imaginary ,Live streaming e-commerce ,Human–computer interaction ,Algorithmic visibility ,Douyin ,Social Sciences ,Social sciences and state - Asia (Asian studies only) ,H53 ,Sociology (General) ,HM401-1281 - Abstract
Abstract When an algorithm is embedded into the platform economy as digital infrastructure, it affects the visibility of information, the distribution of interests, and the labor process. In the context of live-streaming e-commerce, entrepreneurs interact with algorithms and consumers in real time to obtain more traffic. Compared with platform users, entrepreneurs are more sensitive to changes in the algorithm when facing the great uncertainty of live-streaming, and have become “algorithm experts”. This paper focuses on the short video platform Douyin and adopts the methods of field research and in-depth interviews. We interviewed 45 live commerce entrepreneurs and explored how entrepreneurs understand and interact with algorithms, particularly how their understanding of algorithms differs from that of users. By examining algorithmic imaginaries and coping practices, the authors hope to further the understanding of the relationships among labor, technology, and economic activity in new social contexts. In our study, we found that unlike platform users, who are often resilient, entrepreneurs take a more active attitude toward learning about and interacting with algorithms. They use words such as “admission tickets” and “like the wind” to describe algorithms. Unlike content creators, entrepreneurs’ understanding of algorithms is influenced by the logic of traditional trade business when faced with traffic uncertainty. To seek certainty from a blurry stream, they adopt methods such as further learning, frequent algorithm tests, more sophisticated division of labor, and resignation to fate. This study enriches the research perspective of the STS field from a social-economic angle and discusses the impact of technology on individuals in the context of e-commerce. At the same time, this study also explores the technological ideology behind precarious work, which reflects individuals’ expectations of social order.
- Published
- 2024
- Full Text
- View/download PDF
33. Privacy-concerned averaged human activeness monitoring and normal pattern recognizing with single passive infrared sensor using one-dimensional modeling
- Author
-
Tajim Md. Niamat Ullah Akhund and Kenbu Teramoto
- Subjects
Human activeness recognition ,Passive infrared sensor ,Privacy-conscious activity monitoring ,Abnormal activeness prediction ,Human–computer interaction ,Human sensor interaction ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Detecting human activity through cameras and machine learning methods raises significant privacy concerns, while alternatives like thermal cameras can be expensive. Passive infrared (PIR) sensors present a cost-effective and privacy-preserving solution, commonly used in home settings for motion detection. This study introduces a system for monitoring human activeness using a single PIR sensor, focusing on privacy preservation. The proposed one-dimensional model, based on the Laplace distribution, emphasizes the role of the parameter μ in defining velocity distributions. Through real-world experiments with a Raspberry Pi and a PIR sensor, the effectiveness of the model in capturing human activeness is validated. The study investigates how different μ values correlate with activity levels and how abnormalities are detected. Additionally, the paper addresses the stochastic nature of human behavior and the impact of μ on predictability and variability, and provides insights into detection thresholds and interval times. The findings highlight the potential for enhancing abnormality detection and suggest a comprehensive understanding of human activeness.
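The one-dimensional Laplace model can be sketched directly: the location parameter μ is estimated from observed velocity-like signals (the sample median is the maximum-likelihood estimator for the Laplace location), and a period is flagged as unusually active when μ crosses a threshold. A toy numpy simulation; all numeric values, including the threshold, are invented for illustration and are not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def laplace_pdf(v, mu, b):
    # Laplace density, used here as a one-dimensional model of
    # occupant movement "velocity" signals derived from PIR events.
    return np.exp(-np.abs(v - mu) / b) / (2 * b)

# Simulated velocity observations for a normal and a high-activity period
normal = rng.laplace(loc=0.2, scale=0.5, size=1000)
active = rng.laplace(loc=1.5, scale=0.5, size=1000)

# The MLE of the Laplace location parameter is the sample median
mu_normal = float(np.median(normal))
mu_active = float(np.median(active))

threshold = 1.0   # hypothetical abnormality threshold on mu
print(round(mu_normal, 2), round(mu_active, 2),
      "abnormal:", mu_active > threshold)
```

Using the median rather than the mean makes the estimate robust to the occasional spurious PIR trigger, which is one practical reason to prefer a Laplace model over a Gaussian here.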
- Published
- 2025
- Full Text
- View/download PDF
34. Design and optimization of an open personalized human-computer interaction system for New Year Painting based on the learner's model
- Author
-
Zaozao Guo, Muhamad Firdaus Ramli, and Wenpeng Zhang
- Subjects
Learner modeling ,Human-computer interaction ,Emotion ,Database ,New Year Painting ,Information technology ,T58.5-58.64 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Abstract: With the rapid development of information technologies such as big data and learning analytics, intelligent systems, a product of the deep integration of technology and education, have emerged. In this paper, a human-computer interaction teaching system for traditional New Year Painting is proposed based on a learner model. First, an attention-based long short-term memory network is used to mine emotion from learners' course-review texts, and an association rule algorithm together with the ID3 algorithm is used to initialize and dynamically update the learner model, constructing a personalized human-computer interaction teaching system centered on the learner. Based on the smart learning model, the functional modules of the human-computer interaction teaching system are analyzed and designed in detail, including online learning, online testing, and educational information. The design of the database of the intelligent teaching system is proposed, and the design process is fully demonstrated in terms of both database relationship design and database table-structure design, taking into account database security. Finally, the learner model and the personalized human-computer interaction system that incorporate emotion are tested for performance. The results show that, on the 2009 dataset, this paper's model outperforms the standard DKT model by about 3% in prediction accuracy and about 3% on the AUC index, and is about 4% lower on the RMSE index. Students who learn through the personalized human-computer interaction system master the traditional art of New Year Painting more thoroughly, and the learning effect is significantly improved.
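The ID3 step mentioned for updating the learner model selects splits by information gain, the reduction in label entropy after splitting on a feature. A minimal numpy sketch with a made-up toy record of quiz-score bands and mastery labels; the feature and label names are illustrative, not the paper's data.

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (bits) of a discrete label array
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels):
    # ID3 criterion: entropy of the labels minus the weighted
    # entropy of the labels within each branch of the split
    total = entropy(labels)
    for value, count in zip(*np.unique(feature, return_counts=True)):
        total -= (count / len(feature)) * entropy(labels[feature == value])
    return total

# Toy learner records: feature = quiz-score band, label = mastered or not
score_band = np.array(["low", "low", "high", "high", "mid", "mid"])
mastered = np.array([0, 0, 1, 1, 0, 1])
print("information gain:", round(information_gain(score_band, mastered), 3))
```

ID3 builds its tree by repeatedly splitting on the feature with the highest information gain, so a routine like this is evaluated once per candidate feature at every node.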
- Published
- 2024
- Full Text
- View/download PDF
35. Challenges in Human-Computer Interaction from a Retrospective Perspective: A Global Reflection with Emphasis on Latin America
- Author
-
Wilson J. Sarmiento, Christian Sturm, and César A. Collazos
- Subjects
human-computer interaction ,computer science ,user interface ,generative artificial intelligence ,software development ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Undeniably, the design and construction of tools to support task execution marked a turning point in our development as a species and society. Some even argue that this point defines our transition to intelligent beings. One of the most significant consequences is that we understand our ability to change and modify the environment for our benefit and comfort. However, it took several thousand years, with the advent of the information and computing era, for us to feel the need to reflect on how we interact with and relate to a particular tool, one that has enabled significant transformations in our environment. This need transformed the human-computer relationship into a subject of interest, study, and research [1], [2].
- Published
- 2024
- Full Text
- View/download PDF
36. Maintaining Meaningful Human Interaction in AI-Enhanced Language Learning Environments: A Systematic Review
- Author
-
نعمات إدريس محمد سعيد عمر
- Subjects
Artificial intelligence ,language learning ,Human-computer interaction ,adaptive learning ,Technology integration ,Oriental languages and literatures ,PJ - Abstract
This systematic review examines strategies for designing AI-enhanced language learning environments anchored in collaborative partnerships between humans and AI. The review involved searching multiple databases for relevant literature published between 2000 and 2023, applying inclusion/exclusion criteria, and coding articles according to a predefined scheme. A total of 10 studies were identified that addressed guidelines for structuring roles, coordinating AI with human priorities, assessing user perceptions, applying AI to personalized learning, or leveraging AI capabilities while maintaining central human involvement. Key findings indicate that the guidelines emphasize delineating roles between humans and AI through frameworks that balance autonomy and expertise. Techniques show potential for aligning AI with human input, though ensuring real-world coordination requires ongoing refinement. Research underscores generally positive user perceptions, depending on individual attributes and initial adoption intentions. Personalized learning through AI modeling emerges as promising when guided by educators. Designing AI to enhance rather than replace teachers emphasizes collaborative problem-solving. The findings offer guidance on thoughtful AI integration that respects human learning relationships as they evolve. Continued investigation into refining coordinated approaches across contexts could help realize equitable AI-augmented models that optimize outcomes through empowered human partnerships as technologies progress, providing direction for responsibly advancing the field and maximizing AI's contributions to language education.
- Published
- 2024
- Full Text
- View/download PDF
37. Application of intelligent internet of things and interaction design in Museum Tour
- Author
-
Yajing Hou
- Subjects
Museum Tour ,Interactive design ,Internet of things ,Human-computer interaction ,Navigation system ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
The rapid development of information technology and the continuous improvement of the human spiritual and cultural level have made people's demand for knowledge and information increasingly urgent. Traditional navigation systems, however, transmit information in a single, passive way. This study aims to improve the user experience and enhance the interactivity of museum visits. It combines interactive technology with the Internet of Things (IoT) to conduct in-depth research on museum tour guide systems: IoT technology is used to optimize the interaction design of the intelligent navigation system and to enhance the convenience and experience of visitors. In the experimental analysis, the study compared tourists' main ticket-purchasing methods, museum staff's knowledge of the collection, the amount of information tourists obtained, and tourist satisfaction. The comparison shows that the proportion of electronic ticket purchases increased by 35%, the staff's understanding of the collection increased by 19%, the amount of information received by tourists increased by 12.7%, and tourist satisfaction increased by 24.3%. The conclusion indicates that a navigation system based on the intelligent Internet of Things and interaction design can effectively meet the needs of different users and provide the public with a richer, deeper cultural experience.
- Published
- 2024
- Full Text
- View/download PDF
38. An Improved Design Guideline for Dyslexia-Friendly Applications Based on Eye-Tracking Data
- Author
-
Husniza Husni, Nurul Ida Syaheera Mohd Nasri, and Mohamed Ali Saip
- Subjects
Interaction design ,eye-tracking usability ,human-computer interaction ,dyslexia ,Information technology ,T58.5-58.64 - Abstract
Addressing dyslexia through digital product design presents distinct challenges and necessitates the development of tailored strategies or guidelines to help children read accurately and efficiently. Hence, a design guideline was formulated by integrating Interaction Design (IxD) principles to enhance comprehension and minimise reading errors while using digital applications. Nonetheless, the existing design guidelines for people with dyslexia have yet to be confirmed and updated to cater to the current five IxD dimensions. Therefore, this paper presents an eye-tracking usability test conducted to identify usability issues in the design guideline, performed on an application called BacaDisleksia that was developed based on the existing guideline. The usability test was conducted with the Tobii eye-tracking tool in an in-person, moderated session with six dyslexic children. The test reveals pertinent design issues through the analysis of heat maps and gaze plots. Based on these findings, this paper proposes a refined design guideline with five IxD dimensions and strategies conducive to dyslexia-friendly application design, incorporating space and time components. The results contribute to the development of a comprehensive design guideline for people with dyslexia, which aligns with UNESCO's objective of utilising technology to promote inclusion for disabled learners. This effort underscores the significance of informed design decisions in digital innovation for better serving individuals with dyslexia and those with similar learning challenges.
- Published
- 2024
39. Editorial: HCI and worker well-being
- Author
-
Eva Geurts, Gustavo Rovelo Ruiz, Kris Luyten, and Philippe Palanque
- Subjects
worker well-being ,Industry 5.0 ,human-computer interaction ,human-centered approaches ,well-being ,Electronic computers. Computer science ,QA75.5-76.95 - Published
- 2024
- Full Text
- View/download PDF
40. A Resource-Efficient Deep Learning: Fast Hand Gestures on Microcontrollers
- Author
-
Tuan Kiet Tran Mach, Khai Nguyen Van, and Minhhuy Le
- Subjects
Human-Computer Interaction ,Microcontroller ,TinyML ,Intelligent system ,Hand gesture recognition ,Embedded ,Computer engineering. Computer hardware ,TK7885-7895 ,Systems engineering ,TA168 - Abstract
Hand gesture recognition using a camera provides an intuitive and promising means of human-computer interaction, allowing operators to execute commands and control machines with simple gestures. Research on hand gesture recognition-based control systems has garnered significant attention, yet deployment on microcontrollers remains relatively unexplored. In this study, we propose a novel approach to micro hand gesture recognition built on micro-bottleneck Residual and micro-bottleneck Conv blocks. Our proposed model, comprising only 42K parameters, is optimized for size to facilitate seamless operation on resource-constrained hardware. Benchmarking on STM32 microcontrollers showcases remarkable efficiency: the model achieves an average prediction time of just 269 ms, 7× faster than the state-of-the-art model. Notably, despite its compact size and enhanced speed, our model maintains competitive performance, achieving an accuracy of 99.6% on the ASL dataset and 92% on the OUHANDS dataset. These findings underscore the potential of deploying advanced control methods on compact, cost-effective devices, presenting promising avenues for future research and industrial applications.
- Published
- 2024
- Full Text
- View/download PDF
41. ChatGPT: perspectives from human–computer interaction and psychology
- Author
-
Jiaxi Liu
- Subjects
ChatGPT ,large language model ,human–computer interaction ,psychology ,society ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The release of GPT-4 has garnered widespread attention across various fields, signaling the impending widespread adoption and application of Large Language Models (LLMs). However, previous research has predominantly focused on the technical principles of ChatGPT and its social impact, overlooking its effects on human–computer interaction and user psychology. This paper explores the multifaceted impacts of ChatGPT on human–computer interaction, psychology, and society through a literature review. The author investigates ChatGPT’s technical foundation, including its Transformer architecture and RLHF (Reinforcement Learning from Human Feedback) process, enabling it to generate human-like responses. In terms of human–computer interaction, the author studies the significant improvements GPT models bring to conversational interfaces. The analysis extends to psychological impacts, weighing the potential of ChatGPT to mimic human empathy and support learning against the risks of reduced interpersonal connections. In the commercial and social domains, the paper discusses the applications of ChatGPT in customer service and social services, highlighting the improvements in efficiency and challenges such as privacy issues. Finally, the author offers predictions and recommendations for ChatGPT’s future development directions and its impact on social relationships.
- Published
- 2024
- Full Text
- View/download PDF
42. A high-performance general computer cursor control scheme based on a hybrid BCI combining motor imagery and eye-tracking
- Author
-
Jiakai Zhang, Yuqi Zhang, Xinlong Zhang, Boyang Xu, Huanqing Zhao, Tinghui Sun, Ju Wang, Shaojie Lu, and Xiaoyan Shen
- Subjects
Neuroscience ,Human-computer interaction ,Science - Abstract
Summary: This study introduces a novel virtual cursor control system designed to empower individuals with neuromuscular disabilities in the digital world. By combining eye-tracking with motor imagery (MI) in a hybrid brain-computer interface (BCI), the system enhances cursor control accuracy and simplicity. Real-time classification accuracy reaches 87.92% (peak of 93.33%), with cursor stability in the gazing state at 96.1%. Integrated into common operating systems, it enables tasks like text entry, online chatting, email, web surfing, and picture dragging, with an average text input rate of 53.2 characters per minute (CPM). This technology facilitates fundamental computing tasks for patients, fostering their integration into the online community and paving the way for future developments in BCI systems.
- Published
- 2024
- Full Text
- View/download PDF
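Entry 42 fuses eye-tracking (which positions the cursor) with motor-imagery (MI) classification (which triggers actions). The paper's actual fusion scheme is not described in the abstract; the sketch below is a hypothetical per-frame loop in which gaze moves the cursor and an MI decision fires a click only after the gaze has dwelled stably, the kind of gating that yields the high cursor stability the study reports. All names and thresholds here are illustrative assumptions:

```python
def update_cursor(state, gaze_xy, mi_label, stable_radius=25, dwell_frames=30):
    """One frame of a hypothetical gaze+MI fusion loop.

    gaze_xy  -- (x, y) from the eye tracker; moves the cursor directly
    mi_label -- motor-imagery classifier output: 'left', 'right', or 'rest'
    A click fires only when gaze has stayed inside stable_radius for
    dwell_frames consecutive frames AND the MI classifier is not 'rest'.
    """
    x, y = gaze_xy
    ax, ay = state.get("anchor", (x, y))
    if (x - ax) ** 2 + (y - ay) ** 2 <= stable_radius ** 2:
        state["dwell"] = state.get("dwell", 0) + 1
    else:
        # gaze moved away: restart the dwell timer at the new anchor
        state["anchor"], state["dwell"] = (x, y), 0
    state["pos"] = (x, y)
    if state["dwell"] >= dwell_frames and mi_label != "rest":
        state["dwell"] = 0  # debounce so one intention yields one click
        return mi_label + "_click"
    return None
```

Gating clicks on both dwell and an active MI class is a common way to suppress the "Midas touch" problem of pure gaze interfaces; the real system likely uses richer confidence scores than this binary label.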
43. Advancements in brain-machine interfaces for application in the metaverse
- Author
-
Yang Liu, Ruibin Liu, Jinnian Ge, and Yue Wang
- Subjects
brain-machine interface ,metaverse ,human-computer interaction ,neurosecurity ,application ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
In recent years, with the shift of focus in metaverse research toward content exchange and social interaction, breaking through the current bottleneck of audio-visual media interaction has become an urgent issue. The use of brain-machine interfaces for sensory simulation is one of the proposed solutions. Currently, brain-machine interfaces have demonstrated irreplaceable potential as physiological signal acquisition tools in various fields within the metaverse. This study explores three application scenarios: generative art in the metaverse, serious gaming for healthcare in metaverse medicine, and brain-machine interface applications for facial expression synthesis in the virtual society of the metaverse. It investigates existing commercial products and patents (such as MindWave Mobile, GVS, and Galea), draws analogies with the development processes of network security and neurosecurity, bioethics and neuroethics, and discusses the challenges and potential issues that may arise when brain-machine interfaces mature and are widely applied. Furthermore, it looks ahead to the diverse possibilities of deep and varied applications of brain-machine interfaces in the metaverse in the future.
- Published
- 2024
- Full Text
- View/download PDF
44. Value sensitive design
- Author
-
Leif Hemming Pedersen
- Subjects
Human-computer interaction ,design ,moral psychology ,participatory design ,value-oriented semi-structured interviews ,value sketch ,Journalism. The periodical press, etc. ,PN4699-5650 - Abstract
In this section, Journalistica puts a spotlight on research methods used in journalism studies and/or journalism practice.
- Published
- 2024
- Full Text
- View/download PDF
45. Introduction to the Special Thematic Issue 'Virtual Reality and Eye Tracking'
- Author
-
Béatrice Hasler and Rudolf Groner
- Subjects
Virtual Reality ,augmented reality ,Eye Tracking ,immersion ,gaze interaction ,human-computer interaction ,Human anatomy ,QM1-695 - Abstract
Technological advancements have made it possible to integrate eye tracking in virtual reality (VR) and augmented reality (AR). Many new VR/AR headsets already include eye tracking as a standard feature. While its application previously has been mostly limited to research, we now see installations of eye tracking into consumer level VR products in entertainment, training, and therapy. The combination of eye tracking and VR creates new opportunities for end users, creators, and researchers alike: The high level of immersion – while shielded from visual distractions of the physical environment – leads to natural behavior inside the virtual environment. This enables researchers to study how humans perceive and interact with three-dimensional environments in experimentally controlled, ecologically valid settings. Simultaneously, eye tracking in VR poses new challenges to gaze analyses and requires the establishment of new tools and best practices in gaze interaction and psychological research from controlling influence factors, such as simulator sickness, to adaptations of algorithms in various situations. This thematic special issue introduces and discusses novel applications, challenges and possibilities of eye tracking and gaze interaction in VR from an interdisciplinary perspective, including contributions from the fields of psychology, human-computer interaction, human factors, engineering, neuroscience, and education. It addresses a variety of issues and topics, such as practical guidelines for VR-based eye tracking technologies, exploring new research avenues, evaluation of gaze-based assessments, and training interventions.
- Published
- 2024
- Full Text
- View/download PDF
46. Intelligent Vehicle Violation Detection System Under Human–Computer Interaction and Computer Vision
- Author
-
Yang Ren
- Subjects
Vehicle violation detection system ,Computer vision ,Human–computer interaction ,Kalman filtering ,Mean filtering ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Abstract To address the low detection accuracy, poor stability, and slow detection speed of current intelligent vehicle violation detection systems, this article applies human–computer interaction and computer vision technology. First, the image data required for the experiments are collected from the BIT-Vehicle dataset and preprocessed with computer vision techniques. Then, Kalman filtering is used to track vehicles, helping to better predict their trajectories within the detection area. Finally, human–computer interaction technology is used to build the system's interactive interface and improve its operability. The violation detection system based on computer vision technology achieves an accuracy of more than 96.86% in detecting the eight types of violations extracted, with an average detection accuracy of 98%. Through computer vision technology, the system can accurately detect and identify vehicle violations in real time, effectively improving the efficiency and safety of traffic management. In addition, the system pays special attention to human–computer interaction design, providing an intuitive, easy-to-use interface that enables traffic managers to monitor and manage traffic conditions with ease. This innovative intelligent vehicle violation detection system is expected to advance traffic management technology in the future.
- Published
- 2024
- Full Text
- View/download PDF
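Entry 46 uses Kalman filtering to predict vehicle trajectories. The abstract does not specify the state model, so here is only a minimal 1-D constant-velocity sketch of the predict/update cycle, with the covariance simplified to a scalar position variance (a real tracker would use the full matrix form over 2-D position and velocity):

```python
def kalman_step(x, v, p, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a simplified 1-D constant-velocity Kalman filter.

    x, v -- current position and velocity estimates
    p    -- scalar position variance (stand-in for the full covariance matrix)
    z    -- new position measurement; q, r -- process / measurement noise
    """
    # predict: advance the state under the constant-velocity motion model
    x_pred = x + v * dt
    p_pred = p + q
    # update: blend prediction and measurement via the Kalman gain
    k = p_pred / (p_pred + r)
    innovation = z - x_pred
    x_new = x_pred + k * innovation
    v_new = v + k * innovation / dt  # crude velocity correction from the residual
    p_new = (1 - k) * p_pred
    return x_new, v_new, p_new
```

Fed a stream of noisy detections, the estimate settles onto the vehicle's track while the variance shrinks, which is what lets the detector keep a stable identity for each vehicle between frames.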
47. A real-time air-writing model to recognize Bengali characters
- Author
-
Mohammed Abdul Kader, Muhammad Ahsan Ullah, Md Saiful Islam, Fermín Ferriol Sánchez, Md Abdus Samad, and Imran Ashraf
- Subjects
air-writing ,bengali character ,human-computer interaction ,hand gestures ,machine learning ,Mathematics ,QA1-939 - Abstract
Air-writing is a widely used technique for writing arbitrary characters or numbers in the air. In this study, a data collection technique was developed to collect hand motion data for Bengali air-writing, and a motion sensor-based data set was prepared. The feature set as then utilized to determine the most effective machine learning (ML) model among the existing well-known supervised machine learning models to classify Bengali characters from air-written data. Our results showed that medium Gaussian SVM had the highest accuracy (96.5%) in the classification of Bengali character from air writing data. In addition, the proposed system achieved over 81% accuracy in real-time classification. The comparison with other studies showed that the existing supervised ML models predicted the created data set more accurately than many other models that have been suggested for other languages.
- Published
- 2024
- Full Text
- View/download PDF
48. Investigation of Social Factor in Conversational Entrainments
- Author
-
Yuning Liu, Di Zhou, Aijun Li, Jianwu Dang, Shogo Okada, and Masashi Unoki
- Subjects
Social factor ,conversational entrainment ,communication accommodation ,human-computer interaction ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Social aspects such as social roles and status play a crucial role in human-to-human conversations, where interlocutors adapt to each other to achieve conversational entrainment and gain approval. Due to their complexity, these aspects are challenging to quantify in current dialogue systems and human-machine interaction systems. To simplify this problem, we assume that social aspects have a consistent effect within the same style of conversational scenario. We therefore define the concept of a “social factor” to measure the quantitative influence of social aspects on conversational entrainment, and propose a method to extract the social factor in different conversation styles. To do so, we designed a Chinese corpus with four conversation scenarios (arguing, comforting, convincing, and sharing happiness) to investigate the importance of the social factor. We also employed an existing English corpus to predict the trajectory of conversational entrainment using the social factor. The importance of the social factor was evaluated by comparing the proposed method with a conventional method using speech features. The accuracy of classifying conversation scenarios in the Chinese corpus was 52.9% using the proposed social factor and 51.7% using conventional speech features. For the English corpus, the accuracy of predicting the trajectory of conversational entrainment was 48.8% using the social factor and 49.0% using conventional speech features. These results indicate that the social factor is as important as the speech features. When speech features and the social factor are combined, accuracy increases by 6.1% on the Chinese corpus and by 2.0% on the English corpus, suggesting that the social factor and speech features carry distinguishable information that can compensate for each other. This study demonstrates that the social factor may be important in quantifying the pragmatic information involved in conversations.
- Published
- 2024
- Full Text
- View/download PDF
49. NeuroSense: A Novel EEG Dataset Utilizing Low-Cost, Sparse Electrode Devices for Emotion Exploration
- Author
-
Tommaso Colafiglio, Angela Lombardi, Paolo Sorino, Elvira Brattico, Domenico Lofu, Danilo Danese, Eugenio Di Sciascio, Tommaso Di Noia, and Fedelucio Narducci
- Subjects
Emotion recognition ,EEG dataset ,low-cost EEG devices ,machine learning ,human-computer interaction ,Russell’s model ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Emotion recognition is crucial in affective computing, aiming to bridge the gap between human emotional states and computer understanding. This study presents NeuroSense, a novel electroencephalography (EEG) dataset utilizing low-cost, sparse electrode devices for emotion exploration. Our dataset comprises EEG signals collected with the portable 4-electrodes device Muse 2 from 30 participants who, thanks to a neurofeedback setting, watch 40 music videos and assess their emotional responses. These assessments use standardized scales gauging arousal, valence, and dominance. Additionally, participants rate their liking for and familiarity with the videos. We develop a comprehensive preprocessing pipeline and employ machine learning algorithms to translate EEG data into meaningful insights about emotional states. We verify the performance of machine learning (ML) models using the NeuroSense dataset. Despite utilizing just 4 electrodes, our models achieve an average accuracy ranging from 75% to 80% across the four quadrants of the dimensional model of emotions. We perform statistical analyses to assess the reliability of the self-reported labels and the classification performance for each participant, identifying potential discrepancies and their implications. We also compare our results with those obtained using other public EEG datasets, highlighting the advantages and limitations of sparse electrode setups in emotion recognition. Our results demonstrate the potential of low-cost EEG devices in emotion recognition, highlighting the effectiveness of ML models in capturing the dynamic nature of emotions. The NeuroSense dataset is publicly available, inviting further research and application in human-computer interaction, mental health monitoring, and beyond.
- Published
- 2024
- Full Text
- View/download PDF
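Entry 49 classifies emotions into the four quadrants of the dimensional (Russell circumplex) model from self-reported valence and arousal. The binning itself is simple enough to sketch directly; the 1–9 rating scale and midpoint below are assumptions based on common EEG emotion datasets, not details given in the abstract:

```python
def russell_quadrant(valence, arousal, midpoint=5.0):
    """Map self-reported valence/arousal ratings (assumed 1-9 scale) to a
    quadrant of Russell's circumplex model of emotion."""
    high_v = valence >= midpoint
    high_a = arousal >= midpoint
    if high_v and high_a:
        return "HVHA"  # high valence, high arousal: e.g. excited, happy
    if high_v:
        return "HVLA"  # high valence, low arousal: e.g. calm, content
    if high_a:
        return "LVHA"  # low valence, high arousal: e.g. angry, afraid
    return "LVLA"      # low valence, low arousal: e.g. sad, bored

# one label per (valence, arousal) self-report, as used for training the models
labels = [russell_quadrant(v, a) for v, a in [(8, 8), (8, 2), (2, 8), (2, 2)]]
```

These quadrant labels are the four-class targets against which the 75–80% accuracies in the abstract would be measured; a per-participant midpoint is sometimes used instead of a fixed one to absorb rating bias.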
50. Expanding Softness and Hardness Sensations in Mid-Air Ultrasonic Haptic Interfaces Combining Amplitude and Spatiotemporal Modulation
- Author
-
Qingyu Sun, Mingxin Zhang, Yasutoshi Makino, and Hiroyuki Shinoda
- Subjects
Mid-air haptics ,ultrasonic haptic displays ,softness perception ,softness rendering ,human-computer interaction ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Ultrasonic mid-air haptic interfaces have limited ability to simulate compliance and stiffness. In our previous research, we introduced spatiotemporal modulation (STM) of the ultrasonic focus to simulate the contact process between a finger and a virtual surface, extending the perceived softness range by varying the rate of contact area change and focal point speed. In this study, we combine STM with amplitude modulation (AM) to further explore the effects of the two aforementioned factors, along with output power variation, on perceived softness during the simulated contact between a palm and a virtual small sphere. Psychophysical experiments revealed that haptic stimuli with a higher contact area change rate, faster focal point speed, and lower output power amplitude were perceived as softer. By adjusting these parameters, we achieved a wide perceptual range, simulating objects perceived as up to 5.3 times softer and 4 times harder than the baseline modulus. Our findings contribute to advancing soft object simulation and enhance the rendering capabilities of ultrasonic haptic displays for VR and AR applications.
- Published
- 2024
- Full Text
- View/download PDF
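Entry 50 combines spatiotemporal modulation (STM), which sweeps the ultrasonic focus along a path, with amplitude modulation (AM), which varies its output power over time. The paper's actual focus trajectories and modulation parameters are not given in the abstract; the function below is a purely illustrative command generator, with a circular STM path and sinusoidal AM, and all rates and units are assumptions:

```python
import math

def focus_command(t, radius_mm=10.0, stm_hz=5.0, am_hz=100.0, power=1.0):
    """Hypothetical command for one ultrasonic focal point at time t (seconds).

    STM: the focus orbits a circle of radius_mm at stm_hz revolutions/second.
    AM:  the output amplitude oscillates sinusoidally at am_hz, scaled by power.
    Returns (x_mm, y_mm, amplitude) with amplitude normalized to [0, power].
    """
    angle = 2 * math.pi * stm_hz * t
    x = radius_mm * math.cos(angle)
    y = radius_mm * math.sin(angle)
    amplitude = power * 0.5 * (1.0 + math.sin(2 * math.pi * am_hz * t))
    return x, y, amplitude
```

Varying the orbit speed (stm_hz) changes the rate of contact-area change while lowering power reduces perceived stiffness, which is the kind of parameter space the study's softness/hardness experiments explore.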
Discovery Service for Jio Institute Digital Library