20 results for "human-machine trust"
Search Results
2. Multi-agent modelling and analysis of the knowledge learning of a human-machine hybrid intelligent organization with human-machine trust.
- Author
-
Xue, Chaogai, Zhang, Haoxiang, and Cao, Haiwang
- Subjects
ORGANIZATIONAL learning, TRUST, LEARNING, BLENDED learning, MACHINE learning
- Abstract
Machine learning (ML) technologies have changed the paradigm of knowledge discovery in organizations and transformed traditional organizational learning into human-machine hybrid intelligent organizational learning. However, the general distrust among humans towards knowledge derived from machine learning has hindered effective knowledge exchange between humans and machines, thereby compromising the efficiency of human-machine hybrid intelligent organizational learning. To explore this issue, we used multi-agent simulation to construct a knowledge learning model of a human-machine hybrid intelligent organization with human-machine trust. The simulation showed that whether human-machine trust has a positive effect on knowledge level depends on the initial input, and the magnitude of the effect depends on the human learning propensity (exploration and exploitation). When humans reconfigure machine learning excessively, whether human-machine trust has a positive effect on the knowledge level depends on human learning propensity (exploration and exploitation). Maintaining appropriate human-machine trust in turbulent environments assists humans in integrating diverse knowledge to meet changing knowledge needs. Our study extends the human-machine hybrid intelligence organizational learning model by modeling human-machine trust. It will assist managers in effectively designing the most economical level of human-machine trust, thereby enhancing the efficiency of human-machine collaboration in human-machine hybrid intelligent organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
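The multi-agent dynamic described in the abstract above can be pictured with a small toy simulation. The sketch below is illustrative only: the trust weight, exploration rate, agent counts, and update rules are assumptions made for exposition, not the authors' model.

    import random

    # Minimal sketch of a human-machine hybrid organizational learning loop.
    # All parameters (trust, exploration rate, agent counts) are illustrative.
    N_HUMANS, STEPS = 20, 100
    TRUST = 0.6        # assumed human trust in machine-derived knowledge (0..1)
    EXPLORATION = 0.2  # assumed propensity to explore new knowledge vs. exploit known

    random.seed(0)
    human_knowledge = [random.random() for _ in range(N_HUMANS)]
    machine_knowledge = 0.8  # knowledge level encoded by the ML system

    for _ in range(STEPS):
        for i in range(N_HUMANS):
            if random.random() < EXPLORATION:
                # exploration: sample new knowledge independently of the machine
                candidate = random.random()
            else:
                # exploitation: adopt machine knowledge in proportion to trust
                candidate = TRUST * machine_knowledge + (1 - TRUST) * human_knowledge[i]
            human_knowledge[i] = max(human_knowledge[i], candidate)
        # machine retrains on the humans' current average knowledge
        machine_knowledge = 0.5 * machine_knowledge + 0.5 * (sum(human_knowledge) / N_HUMANS)

    print(f"mean human knowledge after {STEPS} steps: {sum(human_knowledge) / N_HUMANS:.3f}")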
3. Exploring Nurses' Behavioural Intention to Adopt AI Technology: The Perspectives of Social Influence, Perceived Job Stress and Human–Machine Trust.
- Author
-
Chen, Chin‐Hung and Lee, Wan‐I
- Subjects
-
SUBJECTIVE stress, JOB stress, ARTIFICIAL intelligence, STRUCTURAL equation modeling, SOCIAL influence
- Abstract
Aim: This study examines how social influence, human-machine trust and perceived job stress affect nurses' behavioural intentions towards AI-assisted care technology adoption from a new perspective and framework. It also explores the interrelationships between different types of social influence and job stress dimensions to fill gaps in academic literature.
Design: A quantitative cross-sectional study.
Methods: Five hospitals in Taiwan that had implemented AI solutions were selected using purposive sampling. The scales, adapted from relevant literature, were translated into Chinese and modified for context. Questionnaires were distributed to nurses via snowball sampling from May 15 to June 10, 2023. A total of 283 valid questionnaires were analysed using the partial least squares structural equation modelling method.
Results: Conformity, obedience and human-machine trust were positively correlated with behavioural intention, while compliance was negatively correlated. Perceived job stress did not significantly affect behavioural intention. Compliance was positively associated with all three job stress dimensions: job uncertainty, technophobia and time pressure, while obedience was correlated with job uncertainty.
Conclusion: Social influence and human-machine trust are critical factors in nurses' intentions to adopt AI technology. The lack of significant effects from perceived stress suggests that nurses' personal resources mitigate potential stress associated with AI implementation. The study reveals the complex dynamics regarding different types of social influence, human-machine trust and job stress in the context of AI adoption in healthcare.
Impact: This research extends beyond conventional technology acceptance models by incorporating perspectives on organisational internal stressors and AI-related job stress. It offers insights into the coping mechanisms during the pre-adoption AI process in nursing, highlighting the need for nuanced management approaches. The findings emphasise the importance of considering technological and psychosocial factors in successful AI implementation in healthcare settings.
Patient or Public Contribution: No patient or public contribution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
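The structural paths in the nurse study above were estimated with partial least squares structural equation modelling, which is normally done with dedicated PLS-SEM tooling. As a rough stand-in, the hypothesized paths could be examined with an ordinary regression, as in this sketch; the file name and construct score columns are assumptions, not the study's materials.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey data: one row per nurse, composite score per construct.
    # Column names are assumed for illustration.
    df = pd.read_csv("nurse_survey.csv")  # conformity, obedience, compliance,
                                          # hm_trust, job_stress, intention

    # Structural path: behavioural intention regressed on the social-influence
    # types, human-machine trust, and perceived job stress (OLS stand-in for
    # the PLS path estimates).
    model = smf.ols(
        "intention ~ conformity + obedience + compliance + hm_trust + job_stress",
        data=df,
    ).fit()
    print(model.summary())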
4. Multi-agent modelling and analysis of the knowledge learning of a human-machine hybrid intelligent organization with human-machine trust
- Author
-
Chaogai Xue, Haoxiang Zhang, and Haiwang Cao
- Subjects
Human-machine coordination, human-machine trust, organizational learning, multi-agent simulation, Control engineering systems. Automatic machinery (General), TJ212-225, Systems engineering, TA168
- Abstract
Machine learning (ML) technologies have changed the paradigm of knowledge discovery in organizations and transformed traditional organizational learning into human-machine hybrid intelligent organizational learning. However, the general distrust among humans towards knowledge derived from machine learning has hindered effective knowledge exchange between humans and machines, thereby compromising the efficiency of human-machine hybrid intelligent organizational learning. To explore this issue, we used multi-agent simulation to construct a knowledge learning model of a human-machine hybrid intelligent organization with human-machine trust. The simulation showed that whether human-machine trust has a positive effect on knowledge level depends on the initial input, and the magnitude of the effect depends on the human learning propensity (exploration and exploitation). When humans reconfigure machine learning excessively, whether human-machine trust has a positive effect on the knowledge level depends on human learning propensity (exploration and exploitation). Maintaining appropriate human-machine trust in turbulent environments assists humans in integrating diverse knowledge to meet changing knowledge needs. Our study extends the human-machine hybrid intelligence organizational learning model by modeling human-machine trust. It will assist managers in effectively designing the most economical level of human-machine trust, thereby enhancing the efficiency of human-machine collaboration in human-machine hybrid intelligent organizations.
- Published
- 2024
- Full Text
- View/download PDF
5. A Human-Machine Trust Evaluation Method for High-Speed Train Drivers Based on Multi-Modal Physiological Information.
- Author
-
Li, Huimin, Liang, Mengxuan, Niu, Ke, and Zhang, Yaqiong
- Abstract
With the development of intelligent transportation, it has become mainstream for drivers and automated systems to cooperate to complete train driving tasks. Human-machine trust has become one of the biggest challenges in achieving safe and effective human-machine cooperative driving. Accurate evaluation of human-machine trust is of great significance to calibrate human-machine trust, realize trust management, reduce safety accidents caused by trust bias, and achieve performance and safety goals. Based on typical driving scenarios of high-speed trains, this paper designs a train fault judgment experiment. By adjusting the machine's reliability, the driver's trust is cultivated to form their cognition of the machine. When the driver's cognition is stable, data from the Trust in Automation (TIA) scale and four modes of physiological information, including electrodermal activity (EDA), electrocardiograms (ECG), respiration (RSP), and functional near-infrared spectroscopy (fNIRS), are collected during the fault judgment experiment. Based on analysis of this multi-modal physiological information, a human-machine trust classification model for high-speed train drivers is proposed. The results show that when all four modes of physiological information are used as input, the random forest classification model is most accurate, reaching 93.14%. This indicates that the human-machine trust level of the driver can be accurately represented by physiological information, so that inputting the driver's physiological information into the classification model yields their level of human-machine trust. The human-machine trust classification model of high-speed train drivers built in this paper based on multi-modal physiological information establishes the corresponding relationship between physiological trust and human-machine trust level. Human-machine trust level is characterized by physiological trust monitoring, which provides support for the dynamic management of trust. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
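A minimal sketch of the kind of multi-modal classification pipeline the preceding abstract describes, using scikit-learn. The CSV layout, feature columns, label coding, and split are assumptions; only the overall idea (physiological features in, trust level out, random forest classifier) follows the abstract.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical table: one row per trial, EDA/ECG/RSP/fNIRS features plus a
    # trust label derived from the Trust in Automation (TIA) scale.
    df = pd.read_csv("driver_physiology.csv")
    X = df.drop(columns=["trust_level"])   # multi-modal physiological features
    y = df["trust_level"]                  # e.g., low / medium / high trust

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y)

    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))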
6. Research on interaction and trust theory model for cockpit human-machine fusion intelligence.
- Author
-
Ya Duan, Yandong Cai, Ran Peng, Hua Zhao, Yue Feng, and Xiaolong You
- Subjects
MODEL theory, HUMAN-machine systems, AERONAUTICS equipment, SUPPLY & demand
- Abstract
Based on Boyd's "Observation-Orientation-Decision-Action (OODA)" aerial combat theory and the principles of operational success, an analysis of the operational division patterns for cross-generational human-machine collaboration was conducted. The research proposed three stages in the development of aerial combat human-machine fusion intelligence: "Human-Machine Separation, Functional Coordination," "Human-Machine Trust, Task Coordination," and "Human-Machine Integration, Deep Fusion." Currently, the transition from the first stage to the second stage is underway, posing challenges primarily related to the lack of effective methods guiding experimental research on human-machine fusion interaction and trust. Building upon the principles of decision neuroscience and the theory of supply and demand relationships, the study analyzed the decision-making patterns of human-machine fusion intelligence under different states. By investigating the correlations among aerial combat mission demands, dynamic operational limits of human-machine tasks, and aerial combat mission performance, a theoretical model of human-machine fusion interaction and trust was proposed. This model revealed the mechanistic coupling of human-machine interactions in aerial tasks, aiming to optimize the decision-making processes of human-machine systems to enhance mission performance. It provides methodological support for the design and application of intelligent collaborative interaction modes in aviation equipment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. The Influencing Factors of Human-Machine Trust: A Behavioral Science Perspective
- Author
-
Cai, Hanyu, Wang, Chang, Zhu, Yi, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Wu, Meiping, editor, Niu, Yifeng, editor, Gu, Mancang, editor, and Cheng, Jin, editor
- Published
- 2022
- Full Text
- View/download PDF
8. Empowering human-AI teams via Intentional Behavioral Synchrony
- Author
-
Mohammad Y. M. Naser and Sylvia Bhattacharya
- Subjects
human-AI teaming, Intentional Behavioral Synchrony (IBS), human-machine trust, Human-Machine Interaction (HMI), multimodal fusion, Neurology. Diseases of the nervous system, RC346-429
- Abstract
As Artificial Intelligence (AI) proliferates across various sectors such as healthcare, transportation, energy, and military applications, the collaboration between human-AI teams is becoming increasingly critical. Understanding the interrelationships between system elements - humans and AI - is vital to achieving the best outcomes within individual team members' capabilities. This is also crucial in designing better AI algorithms and finding favored scenarios for joint AI-human missions that capitalize on the unique capabilities of both elements. In this conceptual study, we introduce Intentional Behavioral Synchrony (IBS) as a synchronization mechanism between humans and AI to set up a trusting relationship without compromising mission goals. IBS aims to create a sense of similarity between AI decisions and human expectations, drawing on psychological concepts that can be integrated into AI algorithms. We also discuss the potential of using multimodal fusion to set up a feedback loop between the two partners. Our aim with this work is to start a research trend centered on exploring innovative ways of deploying synchrony between teams of non-human members. Our goal is to foster a better sense of collaboration and trust between humans and AI, resulting in more effective joint missions.
- Published
- 2023
- Full Text
- View/download PDF
9. Committing to interdependence: Implications from game theory for human–robot trust
- Author
-
Razin Yosef S. and Feigh Karen M.
- Subjects
human–machine trust, human–robot interaction, design and human factors, acceptability and trust, modelling and simulating humans, Technology
- Abstract
Human–robot interaction (HRI) and game theory have developed distinct theories of trust for over three decades in relative isolation from one another. HRI has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have grappled to understand over-trust and trust calibration, as well as how to measure trust expectations, risk, and vulnerability. This article presents initial steps in closing the gap between these fields. By using insights and experimental findings from interdependence theory and social psychology, this work starts by analyzing a large game theory competition data set to demonstrate that the strongest predictors for a wide variety of human–human trust interactions are the interdependence-derived variables for commitment and trust that we have developed. It then presents a second study with human subject results for more realistic trust scenarios, involving both human–human and human–machine trust. In both the competition data and our experimental data, we demonstrate that the interdependence metrics better capture social “overtrust” than either rational or normative psychological reasoning, as proposed by game theory. This work further explores how interdependence theory – with its focus on commitment, coercion, and cooperation – addresses many of the proposed underlying constructs and antecedents within human–robot trust, shedding new light on key similarities and differences that arise when robots replace humans in trust interactions.
- Published
- 2021
- Full Text
- View/download PDF
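To make the game-theoretic framing in the preceding abstract concrete, here is a small sketch of a one-shot trust interaction. The payoff numbers and the expected-value rule are generic textbook assumptions for illustration; they are not the interdependence-derived commitment and trust metrics developed in the article.

    # Generic one-shot trust game: a trustor decides whether to rely on a partner
    # (human or machine); the partner may honor or betray that reliance.
    # Payoffs (trustor, partner) are illustrative, not taken from the article.
    PAYOFFS = {
        ("trust", "honor"): (3, 3),
        ("trust", "betray"): (-2, 5),
        ("withhold", "honor"): (1, 1),
        ("withhold", "betray"): (1, 1),
    }

    def expected_trustor_payoff(p_honor: float) -> float:
        """Trustor's expected payoff from trusting, given a belief that the partner honors."""
        return (p_honor * PAYOFFS[("trust", "honor")][0]
                + (1 - p_honor) * PAYOFFS[("trust", "betray")][0])

    # In this toy payoff structure, trusting is rational once the expected payoff
    # exceeds the safe payoff from withholding trust.
    for p in (0.3, 0.6, 0.9):
        decision = ("trust" if expected_trustor_payoff(p) > PAYOFFS[("withhold", "honor")][0]
                    else "withhold")
        print(f"P(honor)={p:.1f} -> expected={expected_trustor_payoff(p):+.2f} -> {decision}")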
10. Machine Learning-Based Surgical State Perception and Collaborative Control for a Vascular Interventional Robot.
- Author
-
Yan, Yonggan, Wang, Hongbo, Yu, Haoyang, Wang, Fuhao, Fang, Junyu, Niu, Jianye, and Guo, Shuxiang
- Abstract
In robot-assisted vascular interventional surgery (VIS), surgeons often need to operate outside the operating room to avoid exposure to X-rays. However, it greatly changes the operating ways of surgeons, which affects judgment and operation safety. In this paper, a novel VIS robot system was developed to predict guidewire insertion states and operate collaboratively. To assist the surgeons in perceiving the insertion state, an insertion multi-states prediction model based on softmax logistic regression was proposed. Combined with the prediction model, a human-machine collaborative control strategy was designed, which allows surgeons to perceive the insertion states based on not only the force feedback constructed by the master side but also the prediction results from the slave side. Moreover, a human-machine trust evaluation model and a master-slave collaborative mapping model were proposed for improving safety and efficiency of surgery. To verify the effectiveness of these models, the evaluation experiments in the blood vessel model were carried out. It was indicated by the experiment results that the guidewire insertion states can be predicted by the prediction model in different environments, and the overall accuracy is 93%. The master-slave mapping ratio can be adjusted by the collaborative control strategy automatically to adapt to different surgical conditions. The experimental results showed the usability of the robot-assisted VIS system with the novel force-based perception method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
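The state-prediction step in the preceding abstract (softmax logistic regression over guidewire insertion states) can be sketched roughly as below with scikit-learn. The feature names, state labels, and data layout are assumptions for illustration, not the authors' sensing pipeline.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical per-sample force/motion features with a labelled insertion state.
    df = pd.read_csv("guidewire_samples.csv")
    X = df[["proximal_force", "insertion_speed", "rotation_angle"]]  # assumed features
    y = df["insertion_state"]  # e.g., free advance / vessel-wall contact / resistance

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Multiclass (softmax) logistic regression over the insertion states.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))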
11. Introducing SMRTT: A Structural Equation Model of Multimodal Real-Time Trust.
- Author
-
Israelsen, Brett, Wu, Peggy, Woodruff, Katharine, Avdic-McIntire, Gianna, Radlbeck, Andrew, McLean, Angus, Highland, Patrick "Dice", Schnell, Thomas "Mach", and Javorsek, Daniel "Animal"
- Subjects
STRUCTURAL equation modeling, PHYSIOLOGICAL models
- Abstract
Advances in autonomous technology have led to an increased interest in human-autonomy interactions. Generally, the success of these interactions is measured by the joint performance of the AI and the human operator. This performance depends, in part, on the operator having appropriate, or calibrated, trust in the autonomy. Optimizing the performance of human-autonomy teams therefore partly relies on the modeling and measuring of human trust. Theories and models have been developed on the factors influencing human trust in order to properly measure it. However, these models often rely on self-report rather than more objective, real-time behavioral and physiological data. This paper seeks to build on theoretical frameworks of trust by adding objective data to create a model capable of finer-grained temporal measures of trust. Presented herein is SMRTT: SEM of Multimodal Real Time Trust. SMRTT leverages Structural Equation Modeling (SEM) techniques to arrive at a real-time model of trust. Variables and factors from previous studies and existing theories are used to create components of SMRTT. The value of adding physiological data to the models to create real-time monitoring is discussed along with future plans to validate this model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Revisiting human-machine trust: a replication study of Muir and Moray (1996) using a simulated pasteurizer plant task.
- Author
-
Lee, Jieun, Yamani, Yusuke, Long, Shelby K., Unverricht, James, and Itoh, Makoto
- Subjects
COMPUTER simulation, PROFESSIONS, ANALYSIS of variance, USER interfaces, REGRESSION analysis, UNDERGRADUATES, ROBOTICS, MATHEMATICAL variables, AUTOMATION, AUTONOMY (Psychology), DESCRIPTIVE statistics, TECHNOLOGY, DATA analysis software, TRUST
- Abstract
This study aimed to replicate Muir and Moray (1996), which demonstrated that operators' trust in automated machines develops from faith, then dependability, and lastly predictability. Following the procedure of Muir and Moray, we asked undergraduate participants to complete a training program in a simulated pasteuriser plant and an experimental program including various errors in the pasteuriser. Results showed that the best predictor of overall trust was not faith but dependability, and that dependability consistently governed trust throughout the interaction with the pasteuriser. Thus, the obtained data patterns were inconsistent with those reported in Muir and Moray. We observed that operators in the current study used automatic control more frequently than manual control and successfully produced performance scores, in contrast to the operators in Muir and Moray. The results imply that dependability is a critical predictor of human-machine trust, which automation designers may focus on. More extensive future research using more modern automated technologies is necessary for understanding what factors control human-autonomy trust in the modern age. Practitioner Summary: The results suggest that dependability is a key factor that shapes human-machine trust across the time course of trust development. This replication study suggests a new perspective for designing effective human-machine systems for untrained users who do not go through extensive training programs on automated systems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
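As a concrete illustration of the "best predictor of overall trust" analysis mentioned in the abstract above, one could regress an overall trust rating on the three components from Muir's trust model. The data layout and column names below are assumptions for illustration, not the study's materials.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical repeated ratings: per participant/block scores for overall trust
    # and the three components of Muir's model (column names assumed).
    ratings = pd.read_csv("trust_ratings.csv")  # overall, predictability,
                                                # dependability, faith

    fit = smf.ols("overall ~ predictability + dependability + faith",
                  data=ratings).fit()
    # Coefficient per component (standardize the predictors beforehand if
    # comparing magnitudes across components).
    print(fit.params)
    print("R-squared:", fit.rsquared)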
13. Do I Trust a Machine? Differences in User Trust Based on System Performance
- Author
-
Yu, Kun, Berkovsky, Shlomo, Conway, Dan, Taib, Ronnie, Zhou, Jianlong, Chen, Fang, Tan, Desney, Editor-in-Chief, Vanderdonckt, Jean, Editor-in-Chief, Zhou, Jianlong, editor, and Chen, Fang, editor
- Published
- 2018
- Full Text
- View/download PDF
14. Initial validation of the trust of automated systems test (TOAST).
- Author
-
Wojton, Heather M., Porter, Daniel, Lane, Stephanie T., Bieber, Chad, and Madhavan, Poornima
- Subjects
-
TEST systems, TRUST, CONFIRMATORY factor analysis
- Abstract
Trust is a key determinant of whether people rely on automated systems in the military and the public. However, there is currently no standard for measuring trust in automated systems. In the present studies, we propose a scale to measure trust in automated systems that is grounded in current research and theory on trust formation, which we refer to as the Trust in Automated Systems Test (TOAST). We evaluated both the reliability of the scale structure and criterion validity using independent, military-affiliated and civilian samples. In both studies we found that the TOAST exhibited a two-factor structure, measuring system understanding and performance (respectively), and that factor scores significantly predicted scores on theoretically related constructs demonstrating clear criterion validity. We discuss the implications of our findings for advancing the empirical literature and in improving interface design. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
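A rough sketch of checking a two-factor structure like the one reported for the TOAST above. This uses a generic exploratory factor analysis rather than the confirmatory analysis a validation study would typically run, and the item file and column names are assumptions.

    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical item-level responses to a trust-in-automation questionnaire.
    items = pd.read_csv("toast_items.csv")   # one column per scale item (assumed)

    fa = FactorAnalysis(n_components=2, random_state=0)
    fa.fit(items)

    # Loadings: rows = factors, columns = items. In a two-factor solution one
    # would look for "understanding" items loading on one factor and
    # "performance" items on the other.
    loadings = pd.DataFrame(fa.components_, columns=items.columns,
                            index=["factor_1", "factor_2"])
    print(loadings.round(2))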
15. Effects of Demographic Characteristics on Trust in Driving Automation.
- Author
-
Lee, Jieun, Abe, Genya, Sato, Kenji, and Itoh, Makoto
- Subjects
-
DEMOGRAPHIC characteristics, AUTOMOBILE drivers, AUTOMOBILE driving, AUTOMATION, STATISTICS, SUPERVISORY control systems
- Abstract
With the successful introduction of advanced driver assistance systems, vehicles with driving automation technologies have begun to be released onto the market. Because the role of human drivers during automated driving may be different from the role of drivers with assistance systems, it is important to determine how general users consider such new technologies. The current study has attempted to consider driver trust, which plays a critical role in forming users' technology acceptance. In a driving simulator experiment, the demographic information of 56 drivers (50% female, 64% student, and 53% daily driver) was analyzed with respect to Lee and Moray's three dimensions of trust: purpose, process, and performance. The statistical results revealed that female drivers were more likely to rate higher levels of trust than males, and non-student drivers exhibited higher levels of trust than student drivers. However, no driving frequency-related difference was observed. The driver ratings of each trust dimension were neutral to moderate, but purpose-related trust was lower than process- and performance-related trust. Additionally, student drivers exhibited a tendency to distrust automation compared to non-student drivers. The findings present a potential perspective of driver acceptability of current automated vehicles. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Human, Machine, or Hybrid? Using Anthropomorphism to Conceptualize Trust in Robots
- Author
-
Bhatti, Samia and Robert, Lionel Jr.
- Subjects
Social Sciences, Robotics, Anthropomorphism, human-machine trust, Humanoid Trust, Trust, Human-Robot Interaction, Humanoid, Robot trust, Uncanny Valley, human-robot relationships, Humanoid Robots, Robots, Information Science
- Abstract
While robots appear to be more and more human-like in form and function, they are still machines. People can hence perceive them as humans or machines. With varying human-like designs and user perceptions, there is much confusion about how to measure trust in human-robot relationships. While some researchers use human-like trusting beliefs to conceptualize trust, others use machine-like trusting beliefs to do the same. In this paper, we present a conceptual model and related research propositions to help researchers determine the correct conceptualization of trust for human-robot interaction. We propose that anthropomorphism, or perceptions of humanness about the robot, can dictate the conceptualization of trust in human-robot relationships.
- Published
- 2023
- Full Text
- View/download PDF
17. A Review on Communicative Mechanisms of External HMIs in Human-Technology Interaction
- Author
-
Thorvald, Peter, Kolbeinsson, Ari, and Fogelberg, Emmie
- Abstract
The Operator 4.0 typology depicts the collaborative operator as one of eight operator working scenarios in Industry 4.0. It signifies collaborative robot applications and the interaction between humans and robots working collaboratively or cooperatively towards a common goal. For this collaboration to run seamlessly and effortlessly, human-robot communication is essential. We briefly discuss what trust, predictability, and intentions are, before investigating the communicative features of both self-driving cars and collaborative robots. We found that although communicative external HMIs could arguably provide some benefits in both domains, an abundance of clues to what an autonomous car or a robot is about to do is easily accessible through the environment or could be created simply by understanding and designing legible motions.
- Published
- 2022
- Full Text
- View/download PDF
18. Battle Management Aids: Leveraging Artificial Intelligence for Tactical Decisions
- Author
-
Johnson, Bonnie W., Green, John M., Kendall, Walter, Miller, Scot A., Godin, Arkady A., Zhao, Ying, Naval Postgraduate School (U.S.), Naval Research Program (NRP), and Systems Engineering
- Subjects
machine learning, automated decision aids, mission planning, ComputerApplications_COMPUTERSINOTHERSYSTEMS, air and missile defense, human-machine trust, human-machine teaming, artificial intelligence, weapons engagements, battle management aids, cognitive laser, laser weapon system
- Abstract
NPS NRP Executive Summary
This study will explore the needs and requirements for battle management aids that leverage artificial intelligence methods to enhance tactical decisions. The Navy has recognized the need for tactical decision aids to support battle management as warfighters become overwhelmed with shorter decision cycles, greater amounts of data, and more technology systems to manage. To date, much emphasis has focused on data acquisition, data fusion, and data analytics for gaining situational awareness in the battle space. However, a new frontier and opportunity exists for using this data to develop decision options and predict the consequences of military courses of action. Tactical decision aids must provide a common operational picture to distributed commands that meets the diverse mission needs as well as bridges the gap between planning and tactical domains. The decision aids must support coordinated C4ISR, active and passive operations, offensive and defensive tactics, and information warfare. The study will develop a conceptual design of a decision aid and architecture based on artificial intelligence, machine learning, predictive analytics, and game theory that produces a common operational picture and recommended tactical courses of action based on predicted performance, outcomes, and effects. The primary objectives of this study will be to understand the diverse needs and requirements for battle management aids and to develop a conceptual design of a decision aid system and architecture based on artificial intelligence, machine learning, game theory, and predictive analytics.
N2/N6 - Information Warfare
This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp
Chief of Naval Operations (CNO)
Approved for public release. Distribution is unlimited.
- Published
- 2021
19. Naval Research Program 2021 Annual Report
- Author
-
Naval Postgraduate School (U.S.) and Naval Research Program (NRP)
- Subjects
operational planning ,GPS ,STORM ,communications ,NOSSA ,Logistics ,military operations in the information environment ,intelligence, surveillance, reconnaissance and targeting ,operational effectiveness analysis ,Shi ,CubeSat form factor ,hypervelocity missile ,MDUSV ,cognitive radio ,LOCE ,EMP ,adoption ,PBT ,sensing ,systems engineering ,geopolitical issues ,response ,social identity ,strategic supply chains ,CIR ,time series analysis ,acoustic vector sensors ,high frequency ,intelligence, surveillance, and reconnaissance ,Predictive Modeling ,communications intelligence ,simulation ,neural networks ,Naval Expeditionary Forces ,AFSI ,MQ-25A ,multilevel networks ,C5ISRT ,nuclear weapons ,symbol error rate ,JP-5 ,TLA ,competence ,UxV networked control system ,tactical maneuver ,SATCOM ,pilot training ,EOD ,Navy Analytic Agenda ,DMO ,SysML ,programs of record (PoRs) ,Joint Task Force Commander ,resource generators ,competency model ,trust ,CLA ,NAE ,item mission essentiality code ,design of experiments ,joint campaign analysis ,DoD ,WIEVLE ,ERA5 ,electronic intelligence ,cognition ,Long Range Unmanned Surface Vessel ,GBASM ,USMC ,information environment operations ,delivery schedule ,networking ,failures ,radio communications ,combat modeling ,naval power ,mobile learning ,Department of Defense ,DOE ,collection ,Arctic Ocean ,coefficient of variance ,Reliability Engineering ,information operations ,FRWQ ,OFDM ,unexploded ordinance detection ,subseasonal to seasonal ,training ,non-lethal weapons ,Special Operations Forces ,3D printing ,pre-positioning ,artificial intelligence ,GPS-denied navigation ,competency ,VSW ,SZ ,alcohol-drug abuse ,renewable ,Optimization ,ISRT ,non-kinetic targeting ,batteries ,Red Cell analysis ,degraded communications ,business intelligence ,EMCON ,storage ,ESG ,salvos ,interdependence analysis ,supply chain ,constant energy modulation ,Information Stream ,zoning ,CWMD ,pandemic ,performance-based training ,littoral operations in contested environments ,Firing Theory ,Great Power War ,MCM ,competency-based education ,Earth Systems Prediction Capability (ESPC) ,virtualization ,DRL ,Assignment Modeling ,cyber-security ,signals intelligence ,hydrodynamics ,transport ,social capital ,Finance ,MDA ,mission planning ,human on the loop control ,EDRAM ,force structure ,transmission control protocol ,Wargaming ,Bonuses ,emissions control ,law ,Navy ESPC ,exponential random graph models (ERGMs) ,laser weapon system ,C-UxS,C-UxS Security ,mobile telephony ,flashbang grenades ,naval tactical grid ,UUV ,High Intensity Conflict ,logistical independence ,situational awareness ,wireless communications ,Hadoop ,tactical operations centers ,battery ,Denied, Disrupted, Intermittent, and Limited (DDIL) ,COMINT ,agent-based simulation ,energy ,Heat Treatment ,Maintenance ,cost model ,unmanned security ,CSG ,standoff ,JADC2 ,PLM ,Budgeting ,NMCS ,OODA ,Domain Awareness ,internet protocol ,Wreck Interior Exploration Vehicle ,resilience ,C2 ,deep reinforcement learning ,subseasonal to seasonal (S2S) ,information warfare ,cyber ,modeling ,humility ,UxV ,and Modeling ,Document relevance ,AI ,fire support coordination ,Commander’s Intelligence Requirements ,UxV NCS ,Multi-Criteria Decision Analysis ,fusion ,combat logistics ,expeditionary advanced base operations ,Great Power Competition ,RRL ,Tribes ,medical ,Machine Learning ,access ,procurement lead time ,Grey Zone Conflict ,lead time reduction ,capabilities assessment ,distributed maritime operations ,MS sensor ,Atlantic Ocean 
,discrete event simulation ,POM ,surf zone ,Model Based Systems Engineering Methodology for Employing Architecture for Systems Analysis ,regulation ,energy optimization ,C5I ,representing situations ,technology adoption ,officer pay ,campaign analysis ,littoral operations in a contested environment ,BFTN ,PPE ,Naval Special Warfare Command ,Intelligence Gathering ,maritime domain awareness ,target tracking ,F-76 ,Compensation ,budget ,EABO ,leadership ,very shallow water ,industrial base ,explosive ordnance disposal ,LRUSV ,security ,data farming ,CONOPS ,RAATM ,Cold Spray ,projected situational awareness ,meteorology and oceanography (METOC) ,automation ,C-UAS ,human machine teaming ,A2/AD ,Program Objective Memorandum ,Apache Webserver ,Reliability Predictions ,distributed operations ,behavioral decision making ,Systems Modeling Language ,MBSE MEASA ,sUAS ,learning management systems ,USVs ,Seabed-to-Space ,event-driven graph data model ,offensive mine warfare ,HF ,peer-to-peer systems ,Battlefield Tactical Network ,causal learning ,cellular telephony ,Integration ,classifications ,infrastructure ,counter-unmanned systems ,context ,orthogonal division multiplexing ,constructive simulation ,automated decision aids ,air- to-air refueling ,counter-detection ,escalation ladder ,federated learning ,ground-based anti-ship missile ,Unmanned Surface Vessels ,addictive behaviors ,High Altitude Platforms ,business model ,maritime strategy ,Tropical cyclones ,AVO ,technology ,counter-proliferation ,PESTONI ,LLA ,decision support systems ,additive manufacturing ,ordnance ,safety ,NATO ,non-mission capable supply ,inventory management ,Salvo Model ,lexical link analysis ,beamforming ,diversity ,anti ,navigational aid ,Aviation Depot Readiness Availability Model ,satellite communications ,acoustic intensity processing ,land use ,Crowds ,LMS ,lighter-than-air gas delivery system ,IP ,trade studies ,command and control (C2) ,Indo-Pacific ,Ready Relevant Learning ,generative adversarial networks ,Portfolio ,LEO, low-Earth orbit, ML ,informal networks ,Naval Expeditionary Combat Forces ,blockchain ,social network analysis ,data analysis ,strike group protection ,cloud based ,sea-control ,model-based systems engineering ,data visualization ,3-Tier Architecture ,offensive mining ,evidence-based training ,Baltic, Deterrence, Distributed Lethality, Enhanced Forward Presence, Fleet Design, Fleet Posture, Grey Zone Conflict, Host Nation Support, NATO, Naval Bases, Naval Operations, Russia ,risk ,mine countermeasures ,NSS ,BZ ,intelligence community ,Advanced Framework for Simulation ,anti-ship missile ,courses of action ,joint all-domain command and control ,Model Based Systems Engineering ,CP ,Retention ,arms race ,UAS ,human-machine teaming ,regulatory ,NTG ,CR ,battle management aids ,mixed integer program ,ELINT ,Navy training ,alcohol and drug management ,UAV ,EBT ,Recruiting ,unmanned aerial systems ,decision making ,C-UAS Security ,Naval Simulation System ,Mechanical Properties ,Synthetic Theater Operations Research Model ,hypervelocity missile ship launch platform ,Chinese Communist Party ,root causes ,countering weapons of mass destruction ,C-UAS interoperability ,COVID-19 ,agent-based modeling ,decision-making ,safety analysis ,naval fuel ,Turnover ,inclusion ,secondary repairable materiel ,data science ,dynamical-statistical forecasting ,gap analysis ,permitting ,control ,Reports ,C-UAS developing technology ,Vertical Launch System ,load balancing ,detection ,unmanned ,PV ,digital twins ,blue 
networks ,Russia ,Video Game ,Observe-Orient-Decide-Act ,SIGINT ,mine ,data maturation ,Readiness optimization ,cognitive laser ,emergent behavior ,Alliance Cohesion ,bit error rate ,solar ,ADRAM ,antenna arrays ,feature extraction and matching ,LTA ,hydrogen fuel cell ,Database Schema Design ,contested environment ,CBE ,simulations ,efficient experimental design ,blended learning strategies ,system safety ,TCP ,CBA ,policy ,unmanned surface vessel (USV) ,Naval Aviation Enterprise ,coercion ,China ,electromagnetic pulse ,CBT ,vision odometry ,psychological functioning ,operations ,development processes ,photovoltaic ,collaborative learning agents ,cycle of research ,SA ,high-quality force ,Markov-Chain ,Total Learning Architecture ,wargames ,Monterey Phoenix ,HMT ,CCP ,tasking ,Data Analytics ,non-kinetic weapons ,(non) permissive environment ,MCDA ,5G NR ,microgrid ,financially restricted work queue ,command and control, communications, computers, cyber, intelligence, surveillance, reconnaissance, and targeting ,beach zone ,aggregation over layers ,Naval Expeditionary Combat Command (NECC) ,processing ,air and missile defense ,internet ,denied environment ,social processes ,C-UAS use constraints ,seabed warfare ,quantum intelligence game ,modeling and simulation ,cost benefit analysis ,clandestine ,unmanned systems ,counter-unmanned aerial systems ,interconnection ,decision science ,legal ,CEM ,acquisition ,SER ,program management ,Resetting Anchor Antenna Tether Mechanism ,soft skills ,Tactics ,deterrence ,decision support ,Dynamic Programming ,cybersecurity ,workflow ,sea-denial ,competency-based training ,unmanned aerial, surface, underwater and ground vehicles ,instructional design ,exploitation and dissemination (TCPED ,supervised learning ,electrolyzer ,contextually adaptive battlespace ,models ,intermittency ,information fusion ,wind ,orchestration ,ISR ,missile defense ,great power competition (GPC) ,BER ,transformation ,social network analysis (SNA) ,fifth generation cellular - new radio ,human-machine trust ,sustainment ,Joint Targeting Folders ,ML ,Problematic video gaming ,casualty report ,weapons engagements ,tactical warfare ,metacognition - Abstract
NPS NRP Annual Report
The Naval Postgraduate School (NPS) Naval Research Program (NRP) is funded by the Chief of Naval Operations and supports research projects for the Navy and Marine Corps. The NPS NRP serves as a launch-point for new initiatives which posture naval forces to meet current and future operational warfighter challenges. NRP research projects are led by individual research teams that conduct research and through which NPS expertise is developed and maintained. The primary mechanism for obtaining NPS NRP support is through participation at NPS Naval Research Working Group (NRWG) meetings that bring together fleet topic sponsors, NPS faculty members, and students to discuss potential research topics and initiatives.
Chief of Naval Operations (CNO)
Approved for public release. Distribution is unlimited.
- Published
- 2021
20. Does automation trust evolve from a leap of faith? An analysis using a reprogrammed pasteurizer simulation task.
- Author
-
Long, Shelby K., Lee, Jieun, Yamani, Yusuke, Unverricht, James, and Itoh, Makoto
- Subjects
-
TRUST, SOCIAL interaction, HUMAN-machine relationship, RELIABILITY (Personality trait), HUMAN-machine systems, COMPUTER simulation, AUTOMATION, TECHNOLOGY
- Abstract
Trust is a critical factor that drives successful human-automation interaction in a myriad of modern professional environments. One seminal work on human-automation trust is Muir and Moray (1996), which showed that human-machine trust evolves from faith, then dependability, and finally predictability in a simulated supervisory control task. However, our recent work failed to replicate the finding of the original study, calling for further replication efforts. Experiment 1 aimed to fully replicate Muir and Moray (1996), in which participants performed a simulated pasteurizer task. Experiment 2 attempted to replicate Experiment 1 using participants majoring in engineering, as in the original study. Both experiments showed that dependability was the best initial predictor of trust, building later to predictability and faith. Both experiments consistently failed to support the hypothesis proposed by Muir and Moray (1996), that trust develops from predictability to dependability to faith, as well as their original finding that trust develops initially from faith. The results of the current experiments challenge this widely cited view of how human-machine trust develops. Modern automation designers should be aware that dependability might control initial trust development for general users and incorporate dependability information into their designs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF