126 results on '"human-likeness"'
Search Results
2. Embodied, visible, and courteous: exploring robotic social touch with virtual idols.
- Author
-
Onishi, Yuya, Ogawa, Kosuke, Tanaka, Kazuaki, Nakanishi, Hideyuki, and Osawa, Hirotaka
- Subjects
ROBOT hands ,ROBOTICS ,HAPTIC devices ,HANDSHAKING ,INTIMACY (Psychology) - Abstract
In recent years, virtual idols have garnered considerable attention because they can perform activities similar to those of real idols. However, as they are fictitious idols with no physical presence, they cannot perform physical interactions such as a handshake. Combining a robotic hand with a display showing the virtual idol is one method of solving this problem. Although this makes a physical handshake possible, the form of handshake that most effectively induces the desired behavior is unclear. In this study, we adopted a robotic hand as an interface and aimed to imitate the behavior of real idols. To test the effects of this behavior, we conducted a series of step-wise experiments. These experiments revealed that a handshake with the robotic hand increased the feeling of intimacy toward the virtual idol and made it more enjoyable to respond to a request from the virtual idol. In addition, viewing the virtual idol during the handshake increased the feeling of intimacy. Moreover, the handshake style peculiar to idols, in which the robotic hand keeps holding the user's hand after the conversation, further increased intimacy toward the virtual idol. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Examining the roles of social presence and human-likeness on Iranian EFL learners' motivation using artificial intelligence technology: a case of CSIEC chatbot.
- Author
-
Ebadi, Saman and Amini, Asieh
- Subjects
*ENGLISH as a foreign language , *ARTIFICIAL intelligence in education , *ACADEMIC motivation , *CHATBOTS , *FEEDBACK control systems , *STRUCTURAL equation modeling - Abstract
Artificial Intelligence (AI) technology in the educational context, particularly chatbots, has made significant changes to learning English. This mixed-methods study explores university students' attitudes toward the potential role of AI-assisted mobile applications. In addition, the roles of social presence and human-likeness in learner motivation were examined through a chatbot lens. A total of 256 English as a foreign language (EFL) learners interacted with a chatbot known as Computer Simulation in Educational Communication (CSIEC). Participants' audio-recorded practices, transcriptions, three scales of social presence, learner motivation, and human-likeness, along with a semi-structured focus group interview, were used to collect data, and Structural Equation Modeling (SEM) was used for data analysis. Moreover, thematic analysis was adopted to explore the participants' attitudes and perceptions toward using CSIEC. The quantitative results indicated that learner motivation was significantly predicted by social presence and human-likeness. The thematic analysis of qualitative data reflected that the descriptions attributed to the CSIEC teacher enhanced learners' motivation, eagerness, and confidence to learn English. The findings of this study may guide future research in using chatbots outside the classroom as learning companions, and educators can utilize them to tailor assessment and feedback procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Smiling in the Face and Voice of Avatars and Robots: Evidence for a ‘Smiling McGurk Effect’.
- Author
-
Torre, Ilaria, Holk, Simon, Yadollahi, Elmira, Leite, Iolanda, McDonnell, Rachel, and Harte, Naomi
- Abstract
Multisensory integration influences emotional perception, as the McGurk effect demonstrates for communication between humans. Human physiology implicitly links the production of visual features with other modalities such as the audio channel: the face muscles responsible for a smiling face also stretch the vocal cords, resulting in a characteristic smiling voice. For artificial agents capable of multimodal expression, this linkage must be modeled explicitly. In our studies, we observe the influence of the visual and audio channels on the perception of an agent's emotional expression. We created videos of virtual characters and social robots with either matching or mismatching emotional expressions in the audio and visual channels. In two online studies, we measured the agents' perceived valence and arousal. Our results consistently support the ‘emotional McGurk effect’ hypothesis, according to which the face transmits valence information and the voice transmits arousal. For dynamic virtual characters, visual information is enough to convey both valence and arousal, and thus audio expressivity need not be congruent. For robots with fixed facial expressions, however, both visual and audio information need to be present to convey the intended expression. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Matching digital companions with customers: The role of perceived similarity.
- Author
-
Gelbrich, Katja, Kerath, Alina, and Chun, HaeEun Helen
- Subjects
CONSUMERS ,INTELLIGENT personal assistants ,PERCEPTION (Philosophy) ,RESEMBLANCE (Philosophy) ,FRIENDSHIP ,MARKETING - Abstract
Digital companions are an advanced form of digital agents that not only provide advice and support but also accompany people on their day‐to‐day customer journeys. This article sheds light on the psychological processes underlying customers' responses to these digital companions (i.e., virtual friends or co‐consumers). We propose that framing them as matched with customers on goal‐relevant attributes (i.e., attributes related to customers' consumption goals) fosters positive customer outcomes (i.e., consumption enjoyment and positive word‐of‐mouth), mediated by perceived similarity in these attributes. Importantly, in this matching context, humanlikeness serves as a boundary condition for perceived similarity to occur. Furthermore, the effect of perceived similarity on customer outcomes is driven by perceived connectedness. In Study 1, in the context of experiential learning, we identified shared interest and personality as goal‐relevant attributes underlying perceived similarity. Manipulating the match frame and the humanlike versus artificial voice of the digital companion, Study 2 supports our propositions and highlights shared interest, but not personality, as the core driver. We provide recommendations on how to design and market digital companions to foster connection and favorable customer outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Eye Contact Matters for Consumer Trust – Even with Robots.
- Author
-
Kaiser, Carolin, Schallner, René, and Manewitsch, Vladimir
- Subjects
EYE contact ,TRUST ,CONSUMERS ,CONSUMER behavior ,ROBOTS - Abstract
The integration of AI into consumer services is transforming the way people make decisions. AI is becoming more human-like, with chatbots, voice assistants and robots adopting human features and behavior. Consumers react differently to assistants with a human-like appearance than to advice from a web page, and the behavior of AI advisors influences consumer trust and decision-making. The results of an experiment comparing human advisors, robotic advisors with and without eye contact, and text-based services show that human advisors are trusted the most, but robotic advisors are preferred over text-based services. Human-like advisors increase trust and satisfaction. Extended human features like eye contact are essential for establishing trust and positive consumer responses. Companies should consider using humanoid advisors and incorporating eye contact to enhance customer experience. Consumers should be aware of the influence of human-like AI and stay informed about AI developments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Embodied, visible, and courteous: exploring robotic social touch with virtual idols
- Author
-
Yuya Onishi, Kosuke Ogawa, Kazuaki Tanaka, and Hideyuki Nakanishi
- Subjects
handshake ,social touch ,haptic devices ,virtual interaction ,human-likeness ,virtual idol ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In recent years, virtual idols have garnered considerable attention because they can perform activities similar to those of real idols. However, as they are fictitious idols with no physical presence, they cannot perform physical interactions such as a handshake. Combining a robotic hand with a display showing the virtual idol is one method of solving this problem. Although this makes a physical handshake possible, the form of handshake that most effectively induces the desired behavior is unclear. In this study, we adopted a robotic hand as an interface and aimed to imitate the behavior of real idols. To test the effects of this behavior, we conducted a series of step-wise experiments. These experiments revealed that a handshake with the robotic hand increased the feeling of intimacy toward the virtual idol and made it more enjoyable to respond to a request from the virtual idol. In addition, viewing the virtual idol during the handshake increased the feeling of intimacy. Moreover, the handshake style peculiar to idols, in which the robotic hand keeps holding the user’s hand after the conversation, further increased intimacy toward the virtual idol.
- Published
- 2024
- Full Text
- View/download PDF
8. Attitudes toward service robots: analyses of explicit and implicit attitudes based on anthropomorphism and construal level theory
- Author
-
Akdim, Khaoula, Belanche, Daniel, and Flavián, Marta
- Published
- 2023
- Full Text
- View/download PDF
9. The Human Likeness of Government Chatbots – An Empirical Study from Norwegian Municipalities
- Author
-
Følstad, Asbjørn, Larsen, Anna Grøndahl, Bjerkreim-Hanssen, Nina, Lindgren, Ida, editor, Csáki, Csaba, editor, Kalampokis, Evangelos, editor, Janssen, Marijn, editor, Viale Pereira, Gabriela, editor, Virkar, Shefali, editor, Tambouris, Efthimios, editor, and Zuiderwijk, Anneke, editor
- Published
- 2023
- Full Text
- View/download PDF
10. Human-Like Movements of Industrial Robots Positively Impact Observer Perception.
- Author
-
Hostettler, Damian, Mayer, Simon, and Hildebrand, Christian
- Subjects
ROBOT motion ,INDUSTRIAL robots ,HUMAN-robot interaction ,HUMANOID robots ,ROBOTS ,INDUSTRIAL research ,ROBOTICS - Abstract
The number of industrial robots and collaborative robots on manufacturing shopfloors has been rapidly increasing over the past decades. However, research on industrial robot perception and attributions toward them is scarce as related work has predominantly explored the effect of robot appearance, movement patterns, or human-likeness of humanoid robots. The current research specifically examines attributions and perceptions of industrial robots—specifically, articulated collaborative robots—and how the type of movements of such robots impact human perception and preference. We developed and empirically tested a novel model of robot movement behavior and demonstrate how altering the movement behavior of a robotic arm leads to differing attributions of the robot's human-likeness. These findings have important implications for emerging research on the impact of robot movement on worker perception, preferences, and behavior in industrial settings. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Is voice really persuasive? The influence of modality in virtual assistant interactions and two alternative explanations
- Author
-
Ischen, Carolin, Araujo, Theo B., Voorveld, Hilde A.M., Van Noort, Guda, and Smit, Edith G.
- Published
- 2022
- Full Text
- View/download PDF
12. Attributing Intentionality to Artificial Agents: Exposure Versus Interactive Scenarios
- Author
-
Parenti, Lorenzo, Marchesi, Serena, Belkaid, Marwen, Wykowska, Agnieszka, Cavallo, Filippo, editor, Cabibihan, John-John, editor, Fiorini, Laura, editor, Sorrentino, Alessandra, editor, He, Hongsheng, editor, Liu, Xiaorui, editor, Matsumoto, Yoshio, editor, and Ge, Shuzhi Sam, editor
- Published
- 2022
- Full Text
- View/download PDF
13. Identification of Distinctive Behavior Patterns of Bots and Human Teams in Soccer
- Author
-
Bogdan, Georgii Mola, Mozgovoy, Maxim, Sachdeva, Shelly, editor, Watanobe, Yutaka, editor, and Bhalla, Subhash, editor
- Published
- 2022
- Full Text
- View/download PDF
14. Predictors Affecting Effects of Virtual Influencer Advertising among College Students.
- Author
-
Um, Namhyun
- Abstract
Currently, in many realms, such as entertainment and marketing communications, human influencers are being replaced by virtual ones. As a result, marketing researchers are devoting more attention to the use of virtual influencers. The current study investigates predictors affecting the effects of virtual influencer advertising. Specifically, this study examines the effects of parasocial interaction, that is, the relationship between a virtual influencer and its audience. In addition, it delves into the effects of perceived human-likeness, perceived predictability, and perceived authenticity in the evaluation of virtual influencer advertising. A total of 179 college students majoring in advertising and public relations participated in exchange for course credit. To collect data, an online survey was created through Qualtrics. This study found that parasocial interaction with a virtual influencer positively affects attitude toward the virtual influencer. Furthermore, perceived human-likeness, perceived predictability, and perceived authenticity also positively influence attitude toward a virtual influencer. Lastly, the findings suggest that attitude toward a virtual influencer has a positive impact on attitude toward the advertisement. Theoretical as well as practical implications are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Customer comfort during service robot interactions.
- Author
-
Becker, Marc, Mahr, Dominik, and Odekerken-Schröder, Gaby
- Abstract
Customer comfort during service interactions is essential for creating enjoyable customer experiences. However, although service robots are already being used in a number of service industries, it is currently not clear how customer comfort can be ensured during these novel types of service interactions. Based on a 2 × 2 online between-subjects design including 161 respondents using pictorial and text-based scenario descriptions, we empirically demonstrate that human-like (vs machine-like) service robots make customers feel more comfortable because they facilitate rapport building. Social presence does not underlie this relationship. Importantly, we find that these positive effects diminish in the presence of service failures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Emotional intelligence of Large Language Models.
- Author
-
Wang, Xuena, Li, Xueting, Yin, Zi, Wu, Yue, and Liu, Jia
- Subjects
*LANGUAGE models , *EMOTIONAL intelligence , *EMOTIONS , *EMOTION recognition , *PSYCHOMETRICS - Abstract
Large Language Models (LLMs) have demonstrated remarkable abilities across numerous disciplines, primarily assessed through tasks in language generation, knowledge utilization, and complex reasoning. However, their alignment with human emotions and values, which is critical for real-world applications, has not been systematically evaluated. Here, we assessed LLMs' Emotional Intelligence (EI), encompassing emotion recognition, interpretation, and understanding, which is necessary for effective communication and social interactions. Specifically, we first developed a novel psychometric assessment focusing on Emotion Understanding (EU), a core component of EI. This test is an objective, performance-driven, and text-based evaluation, which requires evaluating complex emotions in realistic scenarios, providing a consistent assessment for both human and LLM capabilities. With a reference frame constructed from over 500 adults, we tested a variety of mainstream LLMs. Most achieved above-average Emotional Quotient (EQ) scores, with GPT-4 exceeding 89% of human participants with an EQ of 117. Interestingly, a multivariate pattern analysis revealed that some LLMs apparently did not rely on the human-like mechanism to achieve human-level performance, as their representational patterns were qualitatively distinct from humans. In addition, we discussed the impact of factors such as model size, training method, and architecture on LLMs' EQ. In summary, our study presents one of the first psychometric evaluations of the human-like characteristics of LLMs, which may shed light on the future development of LLMs aiming for both high intellectual and emotional intelligence. Project website: https://emotional-intelligence.github.io/ [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Human-Like Trajectory Planning for Autonomous Vehicles Based on Spatiotemporal Geometric Transformation.
- Author
-
Liu, Zhaolin, Chen, Jiqing, Xia, Hongyang, and Lan, Fengchong
- Abstract
Human-driven vehicles and different levels of autonomous vehicles are expected to coexist on roads in the future. However, autonomous systems behave differently from their human-driver counterparts, and these two behaviors are incompatible with one another, negatively impacting traffic efficiency and safety. Herein, we present the construction of human-like trajectories for use in autonomous vehicles as a possible solution to this issue. We present a trajectory planning method based on the spatiotemporal geometric transformation of driving scenarios to generate human-like trajectories. Speed and safety redundancy data were collected through driving tests to understand human driving behaviors. Self-driving scenarios were abstracted as a Lorentz coordinate system under a three-dimensional Minkowski space-time. A surrounding-manifold tensor equation was established using differential geometry theory to depict the relationship between the trajectory constraints and the geometric spatiotemporal background. A metric tensor field can be solved from the equation to construct the corresponding “volcano space-time,” which is a three-dimensional general Riemannian space for placing the subject vehicle and the surroundings. The geodesics of the volcano space-time are solved using the geodesic equation and are projected back into the three-dimensional Minkowski space-time. Geodesic trajectories were fitted as Bézier curves in this study and corrected according to the vehicle dynamics constraints for trackability. In simulation and real vehicle tests, trajectories generated using the proposed algorithm exhibited collision avoidance and trackability, and the algorithm offered behaviors that were similar to those of human drivers under the same scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
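The final step of the pipeline described in the abstract above, fitting geodesic trajectories as Bézier curves, is a standard technique. The sketch below is a hedged illustration of evaluating a cubic Bézier segment with de Casteljau's algorithm; it is not the authors' code, and the control points are invented for illustration.

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeated
    linear interpolation between consecutive control points."""
    pts = list(points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Invented control points for a planar trajectory segment
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
start = de_casteljau(ctrl, 0.0)  # curve starts at the first control point
end = de_casteljau(ctrl, 1.0)    # and ends at the last
```

A planner would sample such a segment densely and check each point against vehicle-dynamics constraints, as the abstract's trackability correction suggests.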
18. The Transition of Robot Identity from Partner to Competitor and Its Implications for Human–Robot Interaction.
- Author
-
Yang, Danni and He, Xianyou
- Subjects
HUMAN-robot interaction ,ROBOTS ,HUMANOID robots ,ROBOTICS competitions ,SOCIAL robots ,SOCIAL interaction - Abstract
With the rapid development of technology, have humans come to regard robots as their competitors? If so, how has this perception affected human–robot interactions? The present study investigated three questions about this topic. First, do humans in social circumstances spontaneously perceive robots as competitors when there is no obvious conflict of interest? If so, what factors play a role? Finally, does this competitiveness hamper interactions between humans and robots? Experiment 1 assessed the sense of competitiveness by measuring the emotional responses of subjects to a job-seeking robot. As observers, individuals responded positively to the robot's failures and adversely to its successes, revealing a competitive drive of humans toward robots. Experiment 1 further identified that competitiveness increased as a function of the robot's human-like appearance, indicating that robot human-likeness is an influential element. Experiment 2 examined whether human awareness of competition with robots negatively impacted human–robot interaction and, more specifically, whether humans in a directly competitive relationship intentionally sabotage the robot's performance. Results demonstrated that during competition, humans focused on improving their own performance rather than sabotaging the robot's. Comparing Experiment 1 (no direct competition) to Experiment 2 (direct competition) revealed that human preference for the robot decreased significantly, indicating that competition negatively impacts human–robot interaction. This study showed humans' competitive awareness toward robots in social settings and the factors that drive it. In addition, it provides preliminary empirical evidence on how competition affects human–robot interaction in social settings and how humans may behave in the future when competing directly with robots for jobs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Assessment of Human-Likeness and Anthropomorphism of Robots: A Literature Review
- Author
-
Rothstein, Nina, Kounios, John, Ayaz, Hasan, de Visser, Ewart J., Ayaz, Hasan, editor, and Asgher, Umer, editor
- Published
- 2021
- Full Text
- View/download PDF
20. BioPhi: A platform for antibody design, humanization, and humanness evaluation based on natural antibody repertoires and deep learning
- Author
-
David Prihoda, Jad Maamary, Andrew Waight, Veronica Juan, Laurence Fayadat-Dilman, Daniel Svozil, and Danny A. Bitton
- Subjects
antibody humanization ,humanness ,human-likeness ,immunogenicity ,deimmunization ,immune repertoires ,machine learning ,deep learning ,Therapeutics. Pharmacology ,RM1-950 ,Immunologic diseases. Allergy ,RC581-607 - Abstract
Despite recent advances in transgenic animal models and display technologies, humanization of mouse sequences remains one of the main routes for therapeutic antibody development. Traditionally, humanization is manual, laborious, and requires expert knowledge. Although automation efforts are advancing, existing methods are either demonstrated on a small scale or are entirely proprietary. To predict the immunogenicity risk, the human-likeness of sequences can be evaluated using existing humanness scores, but these lack diversity, granularity or interpretability. Meanwhile, immune repertoire sequencing has generated rich antibody libraries such as the Observed Antibody Space (OAS) that offer augmented diversity not yet exploited for antibody engineering. Here we present BioPhi, an open-source platform featuring novel methods for humanization (Sapiens) and humanness evaluation (OASis). Sapiens is a deep learning humanization method trained on the OAS using language modeling. Based on an in silico humanization benchmark of 177 antibodies, Sapiens produced sequences at scale while achieving results comparable to that of human experts. OASis is a granular, interpretable and diverse humanness score based on 9-mer peptide search in the OAS. OASis separated human and non-human sequences with high accuracy, and correlated with clinical immunogenicity. BioPhi thus offers an antibody design interface with automated methods that capture the richness of natural antibody repertoires to produce therapeutics with desired properties and accelerate antibody discovery campaigns. The BioPhi platform is accessible at https://biophi.dichlab.org and https://github.com/Merck/BioPhi.
- Published
- 2022
- Full Text
- View/download PDF
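The OASis idea summarized in the entry above, scoring the humanness of an antibody sequence by how many of its 9-mer peptides occur in a large human repertoire, can be sketched in a few lines. This is a hedged illustration, not the BioPhi implementation: the real OASis queries the Observed Antibody Space database and reports prevalence at several granularities, whereas here a tiny in-memory set of reference peptides stands in for it, and the sequences are fragments chosen only for the example.

```python
def kmers(seq, k=9):
    """Return all overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def humanness_score(seq, reference_peptides, k=9):
    """Fraction of the sequence's k-mers found in a human reference set."""
    peptides = kmers(seq, k)
    if not peptides:
        return 0.0  # sequence shorter than k
    hits = sum(1 for p in peptides if p in reference_peptides)
    return hits / len(peptides)

# Toy reference set; in practice this would be millions of 9-mers
# drawn from human immune-repertoire sequencing data.
reference = set(kmers("EVQLVESGGGLVQPGGSLRLSCAAS", 9))
score = humanness_score("EVQLVESGGGLVQPGG", reference)  # fragment of the reference → 1.0
```

A lower score flags stretches of sequence never observed in human repertoires, which is the intuition behind using such scores as an immunogenicity-risk proxy.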
21. The human antibody sequence space and structural design of the V, J regions, and CDRH3 with Rosetta
- Author
-
Samuel Schmitz, Emily A. Schmitz, James E. Crowe, and Jens Meiler
- Subjects
Human-likeness ,antibody design ,rosetta ,immunome repertoire ,biostatistics ,Humanization ,Therapeutics. Pharmacology ,RM1-950 ,Immunologic diseases. Allergy ,RC581-607 - Abstract
The human adaptive immune response enables the targeting of epitopes on pathogens with high specificity. Infection with a pathogen induces somatic hyper-mutation and B-cell selection processes that govern the shape and diversity of the antibody sequence landscape. To date, even the largest immunome repertoires of adaptive immune receptors acquired by next-generation sequencing cannot fully capture the vast antibody sequence space of a single individual, which is estimated to be at least 10¹² potential sequences. Degeneracy of the genetic code means that the number of possible nucleotide triplets (64) is greater than the number of canonical amino acids (20), resulting in some amino acids being encoded by multiple triplets and different amino acids sharing the same nucleotide in 1 or 2 positions in the triplet. We hypothesize that the degeneracy of the genetic code can be used to statistically model an enlarged space of human antibody amino acid sequences, accommodating for the discrepancy between the observed and the hypothesized antibody sequence space. Facilitated by Bayesian statistics and immunome repertoire clustering, we calculated amino acid probabilities from single nucleotide frequencies to infer a human amino acid sequence space that is used to design human-like antibodies with Rosetta. We show that antibodies designed with our restraints are on average up to 16.6% more human-like in the V and J regions compared to the Rosetta designs produced without constraints. The human-likeness of the heavy-chain CDR3 region (CDRH3) could be increased for 8 of 27 antibodies compared to Rosetta designs with a similar number of mutations and could be successfully applied on Mus musculus antibodies to demonstrate humanization.
- Published
- 2022
- Full Text
- View/download PDF
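The degeneracy argument in the abstract above, collapsing nucleotide-triplet frequencies onto amino-acid probabilities, can be illustrated with a small sketch. This is a simplified toy, not the authors' Bayesian pipeline: the codon table is a truncated excerpt of the standard genetic code, and the frequencies are invented for illustration.

```python
from collections import defaultdict

# Minimal excerpt of the standard genetic code; the full table maps
# all 64 codons (61 sense codons + 3 stops) onto 20 amino acids.
CODON_TABLE = {
    "TTA": "L", "TTG": "L", "CTT": "L", "CTC": "L",  # leucine (6 codons in full code)
    "TCT": "S", "TCC": "S",                          # serine (6 codons in full code)
    "ATG": "M",                                      # methionine (1 codon)
}

def amino_acid_probs(codon_freqs):
    """Pool observed codon frequencies over synonymous codons, then normalize.

    Because of degeneracy, several codons contribute to one amino acid,
    which enlarges the inferred amino-acid space relative to the
    codons actually observed.
    """
    aa_counts = defaultdict(float)
    for codon, freq in codon_freqs.items():
        aa_counts[CODON_TABLE[codon]] += freq
    total = sum(aa_counts.values())
    return {aa: count / total for aa, count in aa_counts.items()}

# Invented frequencies for illustration only: L pools TTA and CTT (3/4), S gets 1/4
probs = amino_acid_probs({"TTA": 2.0, "CTT": 1.0, "TCT": 1.0})
```

The paper's method additionally conditions these probabilities on clustered repertoire positions via Bayesian statistics; this sketch shows only the pooling step that degeneracy makes possible.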
22. Perceived authenticity of virtual characters makes the difference
- Author
-
Junru Huang and Younbo Jung
- Subjects
artificial agents ,virtual characters ,perceived authenticity ,human-machine communication ,human-likeness ,perceived realness ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Conventionally, human-controlled and machine-controlled virtual characters are studied separately under different theoretical frameworks based on the ontological nature of the particular virtual character. In recent years, however, technological advances have made the boundaries between human and machine agency increasingly blurred. This manuscript proposes a theoretical framework that can explain how various virtual characters, regardless of their ontological agency, can be treated as unique social actors, with a focus on perceived authenticity. Specifically, drawing on the authenticity model in computer-mediated communication proposed by Lee (2020) and a typology of virtual characters, a multi-layered perceived authenticity model is proposed to demonstrate how virtual characters need not be perceived as human and yet can be perceived as authentic by their human interactants.
- Published
- 2022
- Full Text
- View/download PDF
23. Emotional Influence of Pupillary Changes of Robots with Different Human-Likeness Levels on Human.
- Author
-
Xue, Junting, Huang, Yanqun, Li, Xu, Li, Jutao, Zhang, Peng, and Kang, Zhiyu
- Subjects
DIGITAL video ,HUMANOID robots ,HUMAN beings ,SOCIAL robots ,EMPATHY ,ROBOTS ,EMOTIONS - Abstract
This study explored the emotional influence of the pupillary change (PC) of robots with different human-likeness levels on people. Images of the eye areas of five agents, including one human and four existing typical humanoid robots with varying human-likeness levels, were edited into five 27-s videos. In the experimental group, we showed five videos with PC applied to the eyes of the agents to 31 participants, and in the control group, five videos without PC were shown to another 31 participants. Afterward, the participants were asked to rate their feelings about the videos. The results showed that PC did not change people's emotions toward the agents on its own. However, PC applied to the eyes of a non-threatening robot agent that may subconsciously evoke empathy enhanced people's positive emotions, while PC applied to the human images increased people's negative emotions and reduced the feeling of familiarity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. A Literature Review of the Research on the Uncanny Valley
- Author
-
Zhang, Jie, Li, Shuo, Zhang, Jing-Yu, Du, Feng, Qi, Yue, Liu, Xun, and Rau, Pei-Luen Patrick, editor
- Published
- 2020
- Full Text
- View/download PDF
25. "I was so scared I quit": Uncanny valley effects of robots' human-likeness on employee fear and industry turnover intentions.
- Author
-
Shum, Cass, Kim, Hyun Jeong, Calhoun, Jennifer R., and Putra, Eka Diraksa
- Subjects
INDUSTRIAL robots ,ROBOTS ,INTENTION ,EMPLOYEE services ,HOSPITALITY industry personnel ,ROBOT industry ,RESEARCH personnel ,TEACHER turnover - Abstract
Because of the increased usage of service robots in the hospitality and tourism industries, researchers and practitioners are interested in learning how to facilitate interactions between employees and service robots. However, there is little information on how service robots' humanlike appearance affects employee emotions and industry turnover intentions. Drawing upon uncanny valley theory, we conducted a quasi-scenario-based experiment using four types of service robots. After watching a video on one of the service robots, participants rated perceived human-likeness, tech-savviness, fear of robots, and industry turnover intentions. This study reports that perceived human-likeness has an inverted-U-shaped nonlinear relationship with employees' fear of robots, moderated by employees' tech-savviness. The results further indicate that fear of robots is positively related to industry turnover intentions. Most research hypotheses lend support to the uncanny valley theory and have practical implications for the design and implementation of service robots in hospitality and tourism workplaces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. How Do Humans Identify Human-Likeness from Online Text-Based Q&A Communication?
- Author
-
Mori, Erika, Takeuchi, Yugo, Tsuchikura, Eiji, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, and Kurosu, Masaaki, editor
- Published
- 2019
- Full Text
- View/download PDF
27. It's a Match: Task Assignment in Human–Robot Collaboration Depends on Mind Perception.
- Author
-
Wiese, Eva, Weis, Patrick P., Bigman, Yochanan, Kapsaskis, Kyra, and Gray, Kurt
- Subjects
SOCIAL robots ,SEX discrimination ,EMOTIONAL state ,HUMAN-robot interaction ,TASKS ,SOCIAL perception - Abstract
Robots are becoming more available for workplace collaboration, but many questions remain. Are people actually willing to assign collaborative tasks to robots? And if so, exactly which tasks will they assign to what kinds of robots? Here we leverage psychological theories on person-job fit and mind perception to investigate task assignment in human–robot collaborative work. We propose that people will assign robots to jobs based on their "perceived mind," and also that people will show predictable social biases in their collaboration decisions. In this study, participants performed an arithmetic (i.e., calculating differences) and a social (i.e., judging emotional states) task, either alone or by collaborating with one of two robots: an emotionally capable robot or an emotionally incapable robot. Collaboration rates (i.e., decisions to assign a robot to generate the answer) were high across all trials, especially for tasks that participants found challenging (i.e., the arithmetic task). Collaboration was predicted by perceived robot-task fit, such that the emotional robot was assigned the social task. Interestingly, the arithmetic task was assigned more to the emotionally incapable robot, despite the emotionally capable robot being equally capable of computation. This is consistent with social biases (e.g., gender bias) in mind perception and person-job fit. The theoretical and practical implications of this work for HRI are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Examining the effects of robots' physical appearance, warmth, and competence in frontline services: The Humanness‐Value‐Loyalty model.
- Author
-
Belanche, Daniel, Casaló, Luis V., Schepers, Jeroen, and Flavián, Carlos
- Subjects
HUMAN-like design of robots ,PHYSICAL characteristics (Human body) ,HUMAN behavior ,LOYALTY ,CUSTOMER services ,CONSUMER attitudes ,VALUE (Economics) ,SOCIAL interaction - Abstract
Because of continuous improvements in their underlying technologies, customers perceive frontline robots as social actors with a high level of humanness, both in appearance and behavior. Moving beyond the mainly theoretical contributions in this field, this article proposes and empirically validates the humanness‐value‐loyalty model (HVL model). This study analyzes to what extent robots' perceived physical human‐likeness, perceived competence, and perceived warmth affect customers' service value expectations and, subsequently, their loyalty intentions. Following two pretests to select the most suitable robots and ensure scenario realism, data were collected by means of a vignette experimental study and analyzed using the partial least squares method. The results reveal that human‐likeness positively affects four dimensions of service value expectations. Perceived competence of the robot mainly influences utilitarian expectations (i.e., functional and monetary value), while perceived warmth influences relational expectations (i.e., emotional value). Interestingly, and contrary to theoretical predictions, the influence of the robot's warmth on service value expectations is more pronounced for customers with a lower need for social interaction. In sum, this study contributes to a better understanding of customers' reactions to artificial intelligence‐enabled technologies with humanized cognitive capabilities and suggests interesting research avenues to advance this emerging field. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. The Valley of non-Distraction: Effect of Robot's Human-likeness on Perception Load.
- Author
-
Ingle, Daisy, Marcus, Nadine, and Johal, Wafa
- Subjects
ROBOT design & construction ,VISUAL perception ,ROBOTS ,ANTHROPOMORPHISM ,STIMULUS & response (Psychology) ,PSYCHOLOGICAL research ,TEXT messages - Abstract
Previous research in psychology has found that human faces can be more distracting than non-face objects under high perceptual load. This project aims to assess the distracting potential of robot faces based on their human-likeness. As a first step, this paper reports our initial findings from an online study. We used a letter search task in which participants had to search for a target letter within a circle of 6 letters while an irrelevant distractor image was also present. The results of our experiment replicated previous results with human faces and non-face objects. Additionally, in the tasks where the irrelevant distractors were images of robot faces, the human-likeness of the robot influenced the response time (RT). Interestingly, the robot Alter produced results significantly different from all other distractor robots. The outcome is a distraction model related to the human-likeness of robots. Our results show the impact of anthropomorphism on distracting potential, which should thus be taken into account when designing robots. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. The Exploration of the Uncanny Valley from the Viewpoint of the Robot's Nonverbal Behaviour.
- Author
-
Thepsoonthorn, Chidchanok, Ogawa, Ken-ichiro, and Miyake, Yoshihiro
- Subjects
ROBOTS ,QUESTIONNAIRES ,HUMAN-robot interaction - Abstract
Many studies have been conducted to find approaches to overcome the Uncanny Valley. However, the focus on the influence of the robot's appearance leaves a large part missing: the influence of the robot's nonverbal behaviour. This impedes a complete exploration of the Uncanny Valley. In this study, we explored the Uncanny Valley from the viewpoint of the robot's nonverbal behaviour with regard to the Uncanny Valley hypothesis. We observed the relationship between the participants' ratings of the human-likeness of the robot's nonverbal behaviour and their affinity toward it, and defined the point where affinity toward the robot's nonverbal behaviour significantly drops as the Uncanny Valley. An experiment on human–robot interaction was conducted in which the participants were asked to interact with a robot expressing different combinations of nonverbal behaviours, ranging from 0 (no nonverbal behaviour, speaking only) to 3 (gaze, head nodding, and gestures), and to rate the perceived human-likeness of and affinity toward the robot's nonverbal behaviour using a questionnaire. Additionally, the participants' fixation duration was measured during the experiment. The results showed a biphasic relationship between the human-likeness and affinity ratings; a curve resembling the Uncanny Valley was found. This result was also supported by the participants' fixation durations: participants fixated longest on the robot when it expressed the nonverbal behaviours that fall into the Uncanny Valley. This exploratory study provides evidence suggesting the existence of the Uncanny Valley from the viewpoint of the robot's nonverbal behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. I Am Looking for Your Mind: Pupil Dilation Predicts Individual Differences in Sensitivity to Hints of Human-Likeness in Robot Behavior
- Author
-
Serena Marchesi, Francesco Bossi, Davide Ghiglino, Davide De Tommaso, and Agnieszka Wykowska
- Subjects
intentional stance ,human–robot interaction ,pupil dilation ,individual differences ,human-likeness ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency to adopt the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
- Published
- 2021
- Full Text
- View/download PDF
32. GETTING USED TO VOICE ASSISTANTS: EXAMINING DRIVERS AND CONSEQUENCES OF AI ENABLED DEVICES.
- Author
-
Darda, Pooja, Pei-Shan Soon, and Gaur, Sanjaya Singh
- Subjects
ARTIFICIAL intelligence ,INTELLIGENT agents - Published
- 2022
33. Attitude Towards Humanoid Robots and the Uncanny Valley Hypothesis
- Author
-
Łupkowski Paweł and Gierszewska Marta
- Subjects
uncanny valley hypothesis ,human-likeness ,computer-generated models ,attitude towards robots ,belief in human nature uniqueness (bhnu) ,negative attitudes toward robots that display human traits (narht) ,hri ,social robotics ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The main aim of the presented study was to check whether well-established measures of attitudes towards humanoid robots are good predictors of the uncanny valley effect. We present a study in which 12 computer-rendered humanoid models were presented to our subjects. Their declared comfort level was cross-referenced with the Belief in Human Nature Uniqueness (BHNU) and the Negative Attitudes toward Robots that Display Human Traits (NARHT) scales. There was no evidence of a statistically significant relationship between these scales and the existence of the uncanny valley phenomenon. However, correlations were found between the expected stress level during human-robot interaction and both the BHNU and NARHT scales. The study also covered the evaluation of the robots' perceived characteristics and the emotional responses to them.
- Published
- 2019
- Full Text
- View/download PDF
34. Survey of How Human Players Divert In-game Actions for Other Purposes: Towards Human-Like Computer Players
- Author
-
Temsiririrkkul, Sila, Sato, Naoyuki, Nakagawa, Kenta, Ikeda, Kokolo, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Munekata, Nagisa, editor, Kunita, Itsuki, editor, and Hoshino, Junichi, editor
- Published
- 2017
- Full Text
- View/download PDF
35. Realism of the face lies in skin and eyes: Evidence from virtual and human agents
- Author
-
Julija Vaitonytė, Pieter A. Blomsma, Maryam Alimardani, and Max M. Louwerse
- Subjects
Intelligent virtual agents ,Face perception ,Corneal reflections ,Skin reflectance ,Human-likeness ,Uncanny valley ,Electronic computers. Computer science ,QA75.5-76.95 ,Psychology ,BF1-990 - Abstract
Despite advancements in computer graphics and artificial intelligence, it remains unclear which aspects of intelligent virtual agents (IVAs) make them identifiable as human-like agents. In three experiments and a computational study, we investigated which specific facial features in static IVAs contribute to judging them human-like. In Experiment 1, participants were presented with facial images of state-of-the-art IVAs and humans and asked to rate these stimuli on human-likeness. The results showed that IVAs were judged less human-like compared to photographic images of humans, which led to the hypothesis that the discrepancy in human-likeness was driven by skin and eye reflectance. A follow-up computational analysis confirmed this hypothesis, showing that the faces of IVAs had smoother skin and fewer corneal reflections than human faces. In Experiment 2, we validated these findings by systematically manipulating the appearance of skin and eyes in a set of human photographs, including both female and male faces as well as four different races. Participants indicated as quickly as possible whether the image depicted a real human face or not. The results showed that smoothening the skin and removing corneal reflections affected the perception of human-likeness when quick perceptual decisions needed to be made. Finally, in Experiment 3, we combined the images of IVA faces and those of humans, unaltered and altered, and asked participants to rate them on human-likeness. The results confirmed the causal role of both features for attributing human-likeness. Skin and eye reflectance worked in tandem in driving judgements of the extent to which a face was perceived as human-like, in both IVAs and humans. These findings are of relevance to computer graphics artists and psychology researchers alike in drawing attention to those facial characteristics that increase realism in IVAs.
- Published
- 2021
- Full Text
- View/download PDF
36. The Eternal Robot: Anchoring Effects in Humans' Mental Models of Robots and Their Self
- Author
-
Daniel Ullrich, Andreas Butz, and Sarah Diefenbach
- Subjects
human-robot-interaction ,mental models ,human-likeness ,robotness ,anchoring effects ,design goals ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Current robot designs often reflect an anthropomorphic approach, apparently aiming to convince users through an ideal system that is maximally similar or even on par with humans. The present paper challenges human-likeness as a design goal and questions whether simulating human appearance and performance adequately fits how humans think about robots in a conceptual sense, i.e., humans' mental models of robots and their self. Independent of technical possibilities and limitations, our paper explores robots' attributed potential to become human-like by means of a thought experiment. Four hundred eighty-one participants were confronted with fictional human-to-robot and robot-to-human transitions, each consisting of 20 subsequent steps. In each step, one part or area of the human (e.g., brain, legs) was replaced with robotic parts providing equal functionalities, and vice versa. After each step, the participants rated the remaining humanness and remaining self of the depicted entity on a scale from 0 to 100%. The starting category (e.g., human, robot) served as an anchor for all subsequent judgments and could hardly be overcome. Even if all body parts had been exchanged, a former robot was not perceived as totally human-like, and a former human not as totally robot-like. Moreover, humanness appeared to be a more sensitive and more easily denied attribute than robotness; i.e., after the objectively same transition and exchange of the same parts, the former human was attributed less remaining humanness and self than the former robot's remaining robotness and self. The participants' qualitative statements about why the robot had not become human-like often concerned the (unnatural) process of production, or simply argued that no matter how many parts are exchanged, the individual keeps its original entity.
Based on such findings, we suggest that instead of designing maximally human-like robots in order to reach acceptance, it might be more promising to understand robots as a "species" of their own and underline their specific characteristics and benefits. Limitations of the present study and implications for future HRI research and practice are discussed.
- Published
- 2020
- Full Text
- View/download PDF
37. The Questioning Turing Test.
- Author
-
Damassino, Nicola
- Subjects
- *
TURING test , *INTELLIGENCE tests , *QUESTIONING - Abstract
The Turing Test (TT) is best regarded as a model to test for intelligence, where an entity's intelligence is inferred from its ability to be attributed with 'human-likeness' during a text-based conversation. The problem with this model, however, is that it does not care if or how well an entity produces a meaningful conversation, as long as its interactions are humanlike enough. As a consequence, the TT attracts projects that concentrate on how best to fool the judges. In light of this, I propose a new version of the TT: the Questioning Turing Test (QTT). Here, the entity has to produce an enquiry rather than a conversation; and it is parametrised along two further dimensions in addition to 'human-likeness': 'correctness', evaluating if the entity accomplishes the enquiry; and 'strategicness', evaluating how well the entity accomplishes the enquiry, in terms of the number of questions asked (the fewer, the better). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. "Can Computer Based Human-Likeness Endanger Humanness?" – A Philosophical and Ethical Perspective on Digital Assistants Expressing Feelings They Can't Have".
- Author
-
Porra, Jaana, Lacity, Mary, and Parks, Michael S.
- Subjects
EMOTIONS ,INTELLIGENT personal assistants ,SOCIAL interaction ,COMPUTERS ,PHILOSOPHY ,USER interfaces - Abstract
Digital assistants engage us in increasingly human-like conversations, including the expression of human emotions with such utterances as "I am sorry...", "I hope you enjoy...", "I am grateful...", or "I regret that...". By 2021, digital assistants will outnumber humans. No one seems to stop to ask if creating more digital companions that appear increasingly human is really beneficial to the future of our species. In this essay, we pose the question: "How human should computer-based human-likeness appear?" We rely on the philosophy of humanness and the theory of speech acts to consider the long-term consequences of living with digital creatures that express human-like feelings. We argue that feelings are the very substance of our humanness and therefore are best reserved for human interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Service robot implementation: a theoretical framework and research agenda.
- Author
-
Belanche, Daniel, Casaló, Luis V., Flavián, Carlos, and Schepers, Jeroen
- Subjects
ROBOTS ,ROBOT design & construction ,MARKETING research ,DEFINITIONS ,ARTIFICIAL intelligence - Abstract
Copyright of Service Industries Journal is the property of Routledge and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
- Full Text
- View/download PDF
40. Dolphins in the Human Mind: What Characteristics Do German Students Attribute to Dolphins, Compared with Apes and Whales? An Exploratory Study.
- Author
-
Stumpf, Eva
- Subjects
- *
HUMAN-animal relationships , *BOTTLENOSE dolphin , *DOLPHINS , *APES , *WHALES , *BRAIN - Abstract
Research on anthrozoology has greatly increased in recent decades, especially with regard to anthropomorphism and attitudes toward animals in general. Nevertheless, these studies have rarely distinguished between different nonhuman species. Previous studies have indicated human preferences for apes, a finding which usually is explained by their strong likeness to humans. Moreover, anthrozoological research has rarely focused on dolphins even though they seem to have a unique position in human–animal relationships, one that has been reported throughout history. This paper presents two studies examining which characteristics humans attribute to dolphins, apes, and whales (Study 1) and whether humans attribute more positive and fewer negative characteristics to dolphins than to apes or whales (Study 2). In study 1, 86 German university students were asked to name characteristics of one species (dolphins, apes, or whales). The participants suggested many more positive than negative attributes and predominantly characterized apes as human-like. In study 2, the important attributes from study 1 were included in a questionnaire comprising items referring to 12 positive and six negative attributes. The participants in study 2 (n = 258 German university students) rated dolphins significantly more positively than apes on six of the 12 positive and on five of the six negative attributes. Furthermore, the participants rated dolphins more positively than whales on most of the attributes. In sum, the results of these two studies confirm the attribution of positive characteristics to dolphins, but provide no evidence of a glorification of dolphins as frequently suggested by anecdotal reports. The findings are discussed with regard to the frequently claimed influence of human-likeness on human preferences for different species. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
41. Let the Machine Decide: When Consumers Trust or Distrust Algorithms.
- Author
-
Castelo, Noah, Bos, Maarten W., and Lehmann, Donald
- Subjects
SUSPICION ,ALGORITHMS ,EMOTIONS ,TRUST ,ARTIFICIAL intelligence - Abstract
Thanks to rapid progress in the field of artificial intelligence, algorithms are able to accomplish an increasingly comprehensive list of tasks, and they often achieve better results than human experts. Nevertheless, many consumers have ambivalent feelings towards algorithms and tend to trust humans more than they trust machines. Especially when tasks are perceived as subjective, consumers often assume that algorithms will be less effective, even though this belief is increasingly inaccurate. To encourage algorithm adoption, managers should provide empirical evidence of the algorithm's superior performance relative to humans. Given that consumers trust the cognitive capabilities of algorithms, another way to increase trust is to demonstrate that these capabilities are relevant for the task in question. Further, explaining that algorithms can detect and understand human emotions can enhance the adoption of algorithms for subjective tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
42. Robots in Frontline Services: The Influence of Human-Likeness, Competence and Warmth on Service Value and Loyalty Intentions.
- Author
-
Schepers, Jeroen, Belanche, Daniel, Flavian, Carlos, and Casalo, Luis
- Subjects
QUALITY of service ,CHOICE (Psychology) ,SOCIAL interaction ,CONSUMER behavior ,TECHNOLOGICAL innovations ,CUSTOMER relationship management - Abstract
The introduction of frontline robots is an innovation that may affect customer choices and change the services industry. However, despite increasing interest, recent works in this emerging field are mainly theoretical. To further advance this topic, this work empirically evaluates customers' perceptions of and reactions toward frontline service robots. Like other technological innovations, frontline robots should enhance service value and the customer-provider relationship. Unlike other technological innovations, robots are perceived as social actors by customers, and the adoption process is therefore likely to feature other factors. This work analyzes to what extent the perceived physical human-likeness of the robot and social cognition cues (i.e., the robot's perceived competence and perceived warmth) affect customers' perceptions of service value and their loyalty intentions toward the service provider. The moderating role of customers' need for social interaction is also evaluated. Results of an experimental design confirmed most of the aforementioned relationships. Need for social interaction moderates the effect of robots' warmth, but contrary to our expectations, warmth is more valued among consumers with a lower need for social (i.e., human) interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2019
43. Improving the Human-Likeness of Game AI’s Moves by Combining Multiple Prediction Models
- Author
-
Ogawa, Tatsuyoshi, Hsueh, Chu-Hsuan, and Ikeda, Kokolo
- Abstract
Strong game AI’s moves are sometimes strange or difficult for humans to understand. To achieve better human-computer interaction, researchers try to create human-like game AI. For chess and Go, supervised learning with deep neural networks is one of the most effective methods for predicting human moves. In this study, we first show that supervised learning is also effective at predicting human moves in Shogi (Japanese chess). We also find that the AlphaZero-based model more accurately predicted the moves of players with higher skill. We then investigate two evaluation metrics for measuring human-likeness: move-matching accuracy, which is often used in existing works, and likelihood (the geometric mean of the probabilities the model assigns to human moves). To create game AI that is more human-like, we propose two methods for combining multiple move prediction models. One, Classifier, selects a suitable prediction model according to the situation; the other, Blend, mixes probabilities from different prediction models, because we observe that each model is good in some situations where the other models cannot predict well. We show that the Classifier method increases move-matching accuracy by 1%-3% but fails to improve the likelihood. The Blend method increases move-matching accuracy by 3%-4% and the likelihood by 2%-5%. (The 15th International Conference on Agents and Artificial Intelligence (ICAART 2023), Lisbon, Portugal.)
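As a rough illustration of the "Blend" idea described in this abstract, mixing amounts to a weighted average of each model's move-probability distribution. This is a minimal sketch, not the paper's implementation; the function, weights, and move names are hypothetical:

```python
def blend(distributions, weights):
    """Mix per-move probability distributions from several prediction models.

    distributions: list of dicts mapping move -> probability (each sums to 1).
    weights: one non-negative weight per model; renormalized to sum to 1.
    Returns a single dict that is again a probability distribution over moves.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    mixed = {}
    for dist, w in zip(distributions, norm):
        for move, p in dist.items():
            # Accumulate the weighted probability mass for each move.
            mixed[move] = mixed.get(move, 0.0) + w * p
    return mixed

# Two toy models disagree on the best move; an equal-weight blend balances them.
model_a = {"P-7f": 0.6, "P-2f": 0.4}
model_b = {"P-7f": 0.2, "P-2f": 0.8}
mixed = blend([model_a, model_b], [1.0, 1.0])
```

Because the blended output is still a valid distribution, both metrics from the abstract apply directly: move-matching accuracy uses its argmax, and likelihood uses the probability it assigns to the human's actual move.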
- Published
- 2023
44. Investigating the Emotional Impact of Social Robots : A Comparative Study on the Influence of Appearance and Application Area on Human Emotions
- Author
-
Wallén, Tyra
- Abstract
The rapid development of social robots, designed to interact with humans, has led to increased research on user acceptance and emotions in human-robot interaction. Social acceptance is an important area to investigate if the development of social robots is to be useful. Investigating how people feel about social robots is one tool for assessing acceptance of them, and research has shown that positive emotions can invoke higher acceptance. Factors that have been shown to affect people's attitudes toward social robots are (1) the human-likeness and appearance of the robot and (2) the application area of the robot. Therefore, this thesis's research questions address the effect of the human-likeness and application areas of social robots on people's emotions. The findings indicate that in the context of companionship, people have varying emotional responses based on the appearance of the social robot. Highly human-like robots evoke more positive emotions, while robots with low human-likeness elicit more negative emotions. This suggests that individuals prefer human-like social robots in intimate interactions like companionship. The results also reveal an effect of application area: people respond more positively to highly human-like robots used for tasks like lecturing students or companionship for older adults. Regarding less human-like social robots, people tend to respond with greater positive emotions when they are used within commerce. This suggests that a simpler-looking robot with low human-likeness is more suitable for commercial applications. Negative emotions expressed in the healthcare condition may reflect mistrust in robots' abilities and the sensitivity of the healthcare area. Developers and designers should consider the emotional responses that might be evoked by the task or appearance of the social robot, to ensure successful integration into society.
- Published
- 2023
45. The media inequality, uncanny mountain, and the singularity is far from near: Iwaa and Sophia robot versus a real human being.
- Author
-
Hoorn, Johan F. and Huang, Ivy S.
- Abstract
• Human-likeness is about intrinsic rather than extrinsic qualities of a humanoid robot. • Human-likeness is not rated higher for human-looking robots than for non-human-looking robots. • Human-looking and non-human-looking robots are not experienced differently (i.e., in involvement, eeriness, or task-related experiences). • There is little evidence for the Media Equation or for uncanny effects, and to date the Singularity between humans and machines seems far off. Design of Artificial Intelligence and robotics habitually assumes that adding more humanlike features improves the user experience, mainly kept in check by suspicion of uncanny effects. Three strands of theorizing are brought together for the first time and empirically put to the test: Media Equation (and in its wake, Computers Are Social Actors), Uncanny Valley theory, and, as an extreme of human-likeness assumptions, the Singularity. We measured the user experience of real-life visitors of a number of seminars who were checked in either by Smart Dynamics' Iwaa, Hanson's Sophia robot, Sophia's on-screen avatar, or a human assistant. Results showed that human-likeness lay not in appearance or behavior but in attributed qualities of being alive. The Media Equation, Singularity, and Uncanny hypotheses were not confirmed. We discuss the imprecision in theorizing about human-likeness and instead opt for machines that 'function adequately.' [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Effects of Perspective Taking on Ratings of Human Likeness and Trust
- Author
-
Reidy, Kaitlyn, Markin, Kristy, Kohn, Spencer, Wiese, Eva, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Tapus, Adriana, editor, André, Elisabeth, editor, Martin, Jean-Claude, editor, Ferland, François, editor, and Ammi, Mehdi, editor
- Published
- 2015
- Full Text
- View/download PDF
47. Agent Appearance Modulates Mind Attribution and Social Attention in Human-Robot Interaction
- Author
-
Martini, Molly C., Buzzell, George A., Wiese, Eva, Tapus, Adriana, editor, André, Elisabeth, editor, Martin, Jean-Claude, editor, Ferland, François, editor, and Ammi, Mehdi, editor
- Published
- 2015
- Full Text
- View/download PDF
48. Predictors Affecting Effects of Virtual Influencer Advertising among College Students
- Author
-
Namhyun Um
- Subjects
Renewable Energy, Sustainability and the Environment ,Geography, Planning and Development ,Building and Construction ,Management, Monitoring, Policy and Law ,virtual influencer ,para-social interaction ,human-likeness ,authenticity ,predictability - Abstract
Currently, in many realms, such as entertainment and marketing communications, human influencers are being replaced by virtual ones. As a result, marketing researchers are devoting more attention to the use of virtual influencers. The current study investigates predictors affecting the effects of virtual influencer advertising. Specifically, it examines the effects of para-social interaction, i.e., the relationship between a virtual influencer and its audience. In addition, the study examines the effects of perceived human-likeness, perceived predictability, and perceived authenticity on the evaluation of virtual influencer advertising. A total of 179 college students majoring in advertising and public relations participated in exchange for course credit. Data were collected via an online survey created through Qualtrics. The study found that para-social interaction with a virtual influencer positively affects attitude toward the virtual influencer. Perceived human-likeness, perceived predictability, and perceived authenticity also positively influence attitude toward a virtual influencer. Lastly, the findings suggest that attitude toward a virtual influencer has a positive impact on attitude toward the advert. Theoretical as well as practical implications are discussed.
- Published
- 2023
- Full Text
- View/download PDF
49. Customer comfort during service robot interactions
- Author
-
Marc Becker, Dominik Mahr, and Gaby Odekerken-Schröder
- Subjects
RAPPORT ,PERCEPTION ,Service robots ,SATISFACTION ,Strategy and Management ,Service failures ,SOCIAL PRESENCE ,Business and International Management ,Human-likeness ,BEHAVIORS ,Customer comfort ,EXPERIENCES ,ANTECEDENTS - Abstract
Customer comfort during service interactions is essential for creating enjoyable customer experiences. However, although service robots are already being used in a number of service industries, it is currently not clear how customer comfort can be ensured during these novel types of service interactions. Based on a 2 × 2 between-subjects online experiment with 161 respondents, using pictorial and text-based scenario descriptions, we empirically demonstrate that human-like (vs. machine-like) service robots make customers feel more comfortable because they facilitate rapport building. Social presence does not underlie this relationship. Importantly, we find that these positive effects diminish in the presence of service failures.
- Published
- 2023
50. What is Human-like?: Decomposing Robots' Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database.
- Author
-
Phillips, Elizabeth, Zhao, Xuan, Ullman, Daniel, and Malle, Bertram F.
- Subjects
HUMANOID robots ,DATABASES ,SOCIAL robots ,PRINCIPAL components analysis ,MANIPULATORS (Machinery) - Abstract
Anthropomorphic robots, or robots with human-like appearance features such as eyes, hands, or faces, have drawn considerable attention in recent years. To date, decisions about what makes a robot appear human-like have been driven by designers' and researchers' intuitions, because a systematic understanding of the range, variety, and relationships among constituent features of anthropomorphic robots is lacking. To fill this gap, we introduce the ABOT (Anthropomorphic roBOT) Database, a collection of 200 images of real-world robots with one or more human-like appearance features (http://www.abotdatabase.info). Harnessing this database, Study 1 uncovered four distinct appearance dimensions (i.e., bundles of features) that characterize a wide spectrum of anthropomorphic robots, and Study 2 identified the dimensions and specific features that were most predictive of robots' perceived human-likeness. With data from both studies, we then created an online estimation tool to help researchers predict how human-like a new robot will be perceived given the presence of various appearance features. The present research sheds new light on what makes a robot look human, and makes publicly accessible a powerful new tool for future research on robots' human-likeness. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF