247 results for "HABLI, Ibrahim"
Search Results
202. Guideline for Architectural Safety, Security and Privacy Implementations Using Design Patterns: SECREDAS Approach
- Author
-
Marko, Nadja, Castella Triginer, Joaquim Maria, Striecks, Christoph, Braun, Tobias, Schwarz, Reinhard, Marksteiner, Stefan, Vasenev, Alexandr, Kemmerich, Joerg, Hamazaryan, Hayk, Shan, Lijun, Loiseaux, Claire, Habli, Ibrahim, editor, Sujan, Mark, editor, Gerasimou, Simos, editor, Schoitsch, Erwin, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
203. Safe Interaction of Automated Forklifts and Humans at Blind Corners in a Warehouse with Infrastructure Sensors
- Author
-
Drabek, Christian, Kosmalska, Anna, Weiss, Gereon, Ishigooka, Tasuku, Otsuka, Satoshi, Mizuochi, Mariko, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
204. A Modular Approach to Non-deterministic Dynamic Fault Trees
- Author
-
Müller, Sascha, Jordon, Adeline, Gerndt, Andreas, Noll, Thomas, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
205. A Framework for Automated Quality Assurance and Documentation for Pharma 4.0
- Author
-
Schmidt, Andreas, Frey, Joshua, Hillen, Daniel, Horbelt, Jessica, Schandar, Markus, Schneider, Daniel, Sorokos, Ioannis, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
206. SASSI: Safety Analysis Using Simulation-Based Situation Coverage for Cobot Systems
- Author
-
Lesage, Benjamin, Alexander, Rob, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
207. Machine Learning-Based Fault Injection for Hazard Analysis and Risk Assessment
- Author
-
Oakes, Bentley James, Moradi, Mehrdad, Van Mierlo, Simon, Vangheluwe, Hans, Denil, Joachim, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
208. Safety Case Maintenance: A Systematic Literature Review
- Author
-
Cârlan, Carmen, Gallina, Barbara, Soima, Liana, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
209. Safety Assurance of Machine Learning for Chassis Control Functions
- Author
-
Burton, Simon, Kurzidem, Iwo, Schwaiger, Adrian, Schleiss, Philipp, Unterreiner, Michael, Graeber, Torben, Becker, Philipp, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
210. IT Design for Resiliency Using Extreme Value Analysis
- Author
-
Bozóki, Szilárd, Pataricza, András, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
211. Towards Certified Analysis of Software Product Line Safety Cases
- Author
-
Shahin, Ramy, Kokaly, Sahar, Chechik, Marsha, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
212. DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers
- Author
-
Paterson, Colin, Wu, Haoze, Grese, John, Calinescu, Radu, Păsăreanu, Corina S., Barrett, Clark, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
213. Could We Relieve AI/ML Models of the Responsibility of Providing Dependable Uncertainty Estimates? A Study on Outside-Model Uncertainty Estimates
- Author
-
Jöckel, Lisa, Kläs, Michael, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
214. ISO/SAE 21434-Based Risk Assessment of Security Incidents in Automated Road Vehicles
- Author
-
Püllen, Dominik, Liske, Jonas, Katzenbeisser, Stefan, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
215. Towards Certification of a Reduced Footprint ACAS-Xu System: A Hybrid ML-Based Solution
- Author
-
Damour, Mathieu, De Grancey, Florence, Gabreau, Christophe, Gauffriau, Adrien, Ginestet, Jean-Brice, Hervieu, Alexandre, Huraux, Thomas, Pagetti, Claire, Ponsolle, Ludovic, Clavière, Arthur, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
216. Automating the Assembly of Security Assurance Case Fragments
- Author
-
Meng, Baoluo, Paul, Saswata, Moitra, Abha, Siu, Kit, Durling, Michael, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
217. Evaluation Framework for Performance Limitation of Autonomous Systems Under Sensor Attack
- Author
-
Shimizu, Koichi, Suzuki, Daisuke, Muramatsu, Ryo, Mori, Hisashi, Nagatsuka, Tomoyuki, Matsumoto, Tsutomu, Habli, Ibrahim, editor, Sujan, Mark, editor, and Bitsch, Friedemann, editor
- Published
- 2021
- Full Text
- View/download PDF
218. Automotive mechatronic safety argument framework
- Author
-
Rivett, Roger, Kelly, Tim, and Habli, Ibrahim
- Subjects
- 004
- Abstract
A modern vehicle uses mechanical components under software control, referred to as mechatronic systems, to deliver its features. The software for these, and its supporting hardware, are typically developed according to the functional safety standard ISO 26262:2011. This standard requires that a safety argument is created that demonstrates that the safety requirements for an item are complete and satisfied by evidence. However, this argument only addresses the software and electronic hardware aspects of the mechatronic system, although safety requirements derived for these can also be allocated to the mechanical part of the mechatronic system. The safety requirements allocated to hardware and software also have a value of integrity assigned to them based on an assessment of the unmitigated risk. The concept of risk and integrity is expressed differently in the development of the mechanical components. In this thesis, we address the challenge of extending the safety argument required by ISO 26262 to include the mechanical components being controlled, so creating a safety argument pattern that encompasses the complete mechatronic system. The approach is based on a generic model for engineering which can be applied to the development of the hardware, software and mechanical components. From this, a safety argument pattern has been derived which consequently can be applied to all three engineering disciplines of the mechatronic system. The harmonisation of the concept of integrity is addressed through the use of special characteristics. The result is a model-based assurance approach which allows an argument to be constructed for the mitigation of risk associated with a mechatronic system that encompasses the three engineering disciplines of the system. This approach is evaluated through interview-based case studies and the retrospective application of the approach to an existing four corner air suspension system.
- Published
- 2018
219. Stakeholder perceptions of the safety and assurance of artificial intelligence in healthcare.
- Author
-
Sujan, Mark A., White, Sean, Habli, Ibrahim, and Reynolds, Nick
- Subjects
- ARTIFICIAL intelligence, INTENSIVE care patients, INTENSIVE care units, MEDICAL care, THEMATIC analysis
- Abstract
• Qualitative study of stakeholder perceptions of healthcare AI.
• Diversity of views can support responsible innovation.
• There is a need for a systems approach to AI safety.
• The impact on the patient-clinician relationship needs consideration.
• Ethical, legal and societal aspects need to be addressed.
There is an increasing number of healthcare AI applications in development or already in use. However, the safety impact of using AI in healthcare is largely unknown. In this paper we explore how different stakeholders (patients, hospital staff, technology developers, regulators) think about safety and safety assurance of healthcare AI. Twenty-six interviews were undertaken with patients, hospital staff, technology developers and regulators to explore their perceptions of the safety and safety assurance of AI in healthcare, using the example of an AI-based infusion pump in the intensive care unit. Data were analysed using thematic analysis. Participant perceptions related to: the potential impact of healthcare AI; requirements for human-AI interaction; safety assurance practices and regulatory frameworks for AI and the gaps that exist; and how incidents involving AI should be managed. The description of a diversity of views can support responsible innovation and adoption of such technologies in healthcare. Safety and assurance of healthcare AI need to be based on a systems approach that expands the current technology-centric focus. Lessons can be learned from the experiences with highly automated systems across safety-critical industries, but issues such as the impact of AI on the relationship between patients and their clinicians require greater consideration. Existing standards and best practices for the design and assurance of systems should be followed, but there is a need for greater awareness of these among technology developers. In addition, wider ethical, legal, and societal implications of the use of AI in healthcare need to be addressed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
220. From Pluralistic Normative Principles to Autonomous-Agent Rules.
- Author
-
Townsend, Beverley, Paterson, Colin, Arvind, T. T., Nemirovsky, Gabriel, Calinescu, Radu, Cavalcanti, Ana, Habli, Ibrahim, and Thomas, Alan
- Subjects
- SYSTEMS engineering, ARTIFICIAL intelligence, WELL-being, CONCRETE
- Abstract
With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural ('SLEEC') nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules. This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context premised on a range of underlying relevant normative principles. In the translation and reduction of normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively-relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
221. Service-oriented architectures for safety-critical systems
- Author
-
Al-Humam, Abdulaziz and Habli, Ibrahim
- Subjects
- 004
- Abstract
Many organisations in the safety-critical domain are service-oriented, fundamentally centred on critical services provided by systems and operators. Increasingly, these services rely on software-intensive systems, e.g. medical health informatics and air traffic control, for improving the different aspects of industrial practice, e.g. enhancing efficiency through automation and safety through smart alarm systems. However, many services are categorised as high risk and as such it is vital to analyse the ways in which the software-based systems can contribute to unintentional harm and potentially compromise safety. This thesis defines an approach to modelling and analysing Service-Oriented Architectures (SOAs) used in the safety-critical domain, with emphasis on identifying and classifying potential hazardous behaviour. The approach also provides a systematic and reusable basis for defining how the safety case for these SOAs can be developed in a modular manner. The approach is tool-supported and is evaluated through two case studies, from the healthcare and oil and gas domains, and industrial review.
- Published
- 2015
222. Artificial intelligence explainability: the technical and ethical dimensions.
- Author
-
McDermid, John A., Jia, Yan, Porter, Zoe, and Habli, Ibrahim
- Subjects
- *
ARTIFICIAL intelligence , *INDUSTRIAL safety , *MACHINE learning , *DECISION making - Abstract
In recent years, several new technical methods have been developed to make AI-models more transparent and interpretable. These techniques are often referred to collectively as 'AI explainability' or 'XAI' methods. This paper presents an overview of XAI methods, and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue 'Towards symbiotic autonomous systems'. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
223. Contextual design requirements for decision-support tools involved in weaning patients from mechanical ventilation in intensive care units.
- Author
-
Hughes, Nathan, Jia, Yan, Sujan, Mark, Lawton, Tom, Habli, Ibrahim, and McDermid, John
- Subjects
- INTENSIVE care units, ARTIFICIAL intelligence, COGNITIVE ability, DECISION support systems, INFANT weaning
- Abstract
Weaning patients from ventilation in intensive care units (ICU) is a complex task. There is a growing desire to build decision-support tools to help clinicians during this process, especially those employing Artificial Intelligence (AI). However, tools built for this purpose should fit within and ideally improve the current work environment, to ensure they can successfully integrate into clinical practice. To do so, it is important to identify areas where decision-support tools may aid clinicians, and associated design requirements for such tools. This study analysed the work context surrounding the weaning process from mechanical ventilation in ICU environments, via cognitive task and work domain analyses. In doing so, both what cognitive processes clinicians perform during weaning, and the constraints and affordances of the work environment itself, were described. This study found a number of weaning process tasks where decision-support tools may prove beneficial, and from these a set of contextual design requirements were created. This work benefits researchers interested in creating human-centred decision-support tools for mechanical ventilation that are sensitive to the wider work system.
• An analysis of the work context surrounding weaning ICU patients from mechanical ventilation.
• Identification of several areas where decision-support tools could aid in the weaning process.
• A set of contextual design requirements for decision-support tools. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
224. Using FRAM to explore sources of performance variability in intravenous infusion administration in ICU: A non-normative approach to systems contradictions.
- Author
-
Furniss, Dominic, Nelson, David, Habli, Ibrahim, White, Sean, Elliott, Matthew, Reynolds, Nick, and Sujan, Mark
- Subjects
- INTRAVENOUS therapy, INTENSIVE care units, MEDICAL care, MEDICAL research, IMPLEMENTATION (Social action programs)
- Abstract
Systems contradictions present challenges that need to be effectively managed, e.g. due to conflicting rules and advice, goal conflicts, and mismatches between demand and capacity. We apply FRAM (Functional Resonance Analysis Method) to intravenous infusion practices in an intensive care unit (ICU) to explore how tensions and contradictions are managed by people. A multi-disciplinary team including individuals from nursing, medical, pharmacy, safety, IT and human factors backgrounds contributed to this analysis. A FRAM model investigation resulting in seven functional areas is described. A tabular analysis highlights significant areas of performance variability, e.g. administering medication before a prescription, prioritising drugs, different degrees of double checking and using sites showing early signs of infection for intravenous access. Our FRAM analysis has been non-normative: performance variability is not necessarily wanted or unwanted; it is merely necessary where system contradictions cannot be easily resolved and so adaptive capacity is required to cope. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
225. Effect of Label Noise on Robustness of Deep Neural Network Object Detectors
- Author
-
Bishwo N. Adhikari, Esa Rahtu, Jukka Peltomaki, Saeed Bakhshi Germi, Heikki Huttunen, Habli, Ibrahim, Sujan, Mark, Gerasimou, Simos, Schoitsch, Erwin, and Bitsch, Friedemann
- Subjects
Hyperparameter, Artificial neural network, Computer science, Deep learning, Pattern recognition, Context (language use), 113 Computer and information sciences, Object detection, Noise, Robustness (computer science), Minimum bounding box, Artificial intelligence
- Abstract
Label noise is a primary point of interest for safety concerns in previous works as it affects the robustness of a machine learning system by a considerable amount. This paper studies the sensitivity of object detection loss functions to label noise in bounding box detection tasks. Although label noise has been widely studied in the classification context, less attention is paid to its effect on object detection. We characterize different types of label noise and concentrate on the most common type of annotation error, which is missing labels. We simulate missing labels by deliberately removing bounding boxes at training time and study its effect on different deep learning object detection architectures and their loss functions. Our primary focus is on comparing two particular loss functions: cross-entropy loss and focal loss. We also experiment on the effect of different focal loss hyperparameter values with varying amounts of noise in the datasets and discover that even up to 50% missing labels can be tolerated with an appropriate selection of hyperparameters. The results suggest that focal loss is more sensitive to label noise, but increasing the gamma value can boost its robustness.
- Published
- 2021
226. Prediction of weaning from mechanical ventilation using Convolutional Neural Networks.
- Author
-
Jia, Yan, Kaul, Chaitanya, Lawton, Tom, Murray-Smith, Roderick, and Habli, Ibrahim
- Subjects
- CONVOLUTIONAL neural networks, ARTIFICIAL respiration, RECEIVER operating characteristic curves, CRITICALLY ill patient care, INTENSIVE care patients, INTENSIVE care units, RESEARCH, MECHANICAL ventilators, RESEARCH methodology, MEDICAL cooperation, EVALUATION research, CATASTROPHIC illness, COMPARATIVE studies
- Abstract
Weaning from mechanical ventilation covers the process of liberating the patient from mechanical support and removing the associated endotracheal tube. The management of weaning from mechanical ventilation comprises a significant proportion of the care of critically ill intubated patients in Intensive Care Units (ICUs). Both prolonged dependence on mechanical ventilation and premature extubation expose patients to an increased risk of complications and increased health care costs. This work aims to develop a decision support model using routinely-recorded patient information to predict extubation readiness. In order to do so, we have deployed Convolutional Neural Networks (CNN) to predict the most appropriate treatment action in the next hour for a given patient state, using historical ICU data extracted from MIMIC-III. The model achieved 86% accuracy and 0.94 area under the receiver operating characteristic curve (AUC-ROC). We also performed feature importance analysis for the CNN model and interpreted these features using the DeepLIFT method. The results of the feature importance assessment show that the CNN model makes predictions using clinically meaningful and appropriate features. Finally, we implemented counterfactual explanations for the CNN model. This can help clinicians understand what feature changes for a particular patient would lead to a desirable outcome, i.e. readiness to extubate. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
227. Implementing an artificial intelligence command centre in the NHS: a mixed-methods study.
- Author
-
Johnson OA, McCrorie C, McInerney C, Mebrahtu TF, Granger J, Sheikh N, Lawton T, Habli I, Randell R, and Benn J
- Subjects
- Humans, United Kingdom, COVID-19 epidemiology, Patient Safety, Hospitals, Teaching organization & administration, Qualitative Research, SARS-CoV-2, Interviews as Topic, State Medicine organization & administration, Artificial Intelligence
- Abstract
Background: Hospital 'command centres' use digital technologies to collect, analyse and present real-time information that may improve patient flow and patient safety. Bradford Royal Infirmary has trialled this approach and presents an opportunity to evaluate effectiveness to inform future adoption in the United Kingdom. Objective: To evaluate the impact of the Bradford Command Centre on patient care and organisational processes. Design: A comparative mixed-methods study. Operational data from a study and control site were collected and analysed. The intervention was observed, and staff at both sites were interviewed. Analysis was grounded in a literature review and the results were synthesised to form conclusions about the intervention. Setting: The study site was Bradford Royal Infirmary, a large teaching hospital in the city of Bradford, United Kingdom. The control site was Huddersfield Royal Infirmary in the nearby city of Huddersfield. Participants: Thirty-six staff members were interviewed and/or observed. Intervention: The implementation of a digitally enabled hospital command centre. Main Outcome Measures: Qualitative perspectives on hospital management. Quantitative metrics on patient flow, patient safety and data quality. Data Sources: Anonymised electronic health record data. Ethnographic observations including interviews with hospital staff. Cross-industry review including relevant literature and expert panel interviews. Results: The Command Centre was implemented successfully and improved staff confidence in operational control. Unintended consequences included tensions between localised and centralised decision-making and variable confidence in the quality of data available. The Command Centre supported the hospital through the COVID-19 pandemic, but the direct impact of the Command Centre was difficult to measure as the pandemic forced all hospitals, including the study and control sites, to innovate rapidly. Late in the study we learnt that the control site had visited the study site and replicated some aspects of the command centre themselves; we were unable to explore this in detail. There was no significant difference between pre- and post-intervention periods for the quantitative outcome measures and no conclusive impact on patient flow and data quality. Staff and patients supported the command-centre approaches, but patients expressed concern that individual needs might get lost to 'the system'. Conclusions: Qualitative evidence suggests the Command Centre implementation was successful, but it proved challenging to link quantitative evidence to specific technology interventions. Staff were positive about the benefits and emphasised that these came from the way they adapted to and used the new technology rather than the technology per se. Limitations: The COVID-19 pandemic disrupted care patterns and forced rapid innovation, which reduced our ability to compare study and control sites and data before, during and after the intervention. Future Work: We plan to follow developments at Bradford and in command centres in the National Health Service in order to share learning. Our mixed-methods approach should be of interest to future studies attempting similar evaluation of complex digitally enabled whole-system changes. Study Registration: The study is registered as IRAS No. 285933. Funding: This award was funded by the National Institute for Health and Care Research (NIHR) Health and Social Care Delivery Research programme (NIHR award ref: NIHR129483) and is published in full in Health and Social Care Delivery Research, Vol. 12, No. 41. See the NIHR Funding and Awards website for further award information.
- Published
- 2024
- Full Text
- View/download PDF
228. Moving beyond the AI sales pitch - Empowering clinicians to ask the right questions about clinical AI.
- Author
-
Habli I, Sujan M, and Lawton T
- Abstract
We challenge the dominant technology-centric narrative around clinical AI. To realise the true potential of the technology, clinicians must be empowered to take a whole-system perspective and assess the suitability of AI-supported tasks for their specific complex clinical setting. Key factors include the AI's capacity to augment human capabilities, evidence of clinical safety beyond general performance metrics and equitable clinical decision-making by the human-AI team. Proactively addressing these issues could pave the way for an accountable clinical buy-in and a trustworthy deployment of the technology. Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (© 2024 The Authors. Published by Elsevier Ltd on behalf of Royal College of Physicians.)
- Published
- 2024
- Full Text
- View/download PDF
229. Balancing Acts: Tackling Data Imbalance in Machine Learning for Predicting Myocardial Infarction in Type 2 Diabetes.
- Author
-
Ozturk B, Lawton T, Smith S, and Habli I
- Subjects
- Humans, Bayes Theorem, Support Vector Machine, Diabetes Mellitus, Type 2, Myocardial Infarction, Machine Learning
- Abstract
Type 2 Diabetes (T2D) is a prevalent lifelong health condition. It is predicted that over 500 million adults will be diagnosed with T2D by 2040. T2D can develop at any age, and if it progresses, it may cause serious comorbidities. One of the most critical T2D-related comorbidities is Myocardial Infarction (MI), commonly known as a heart attack. MI is a life-threatening medical emergency, and it is important to predict it and intervene in a timely manner. The use of Machine Learning (ML) for clinical prediction is gaining pace, but the class imbalance in predictive models is a key challenge for establishing a trustworthy deployment of the technology. This may lead to bias and overfitting in the ML models, and it may cause misleading interpretations of the ML outputs. In our study, we showed how systematic use of Class Imbalance Handling (CIH) techniques may improve the performance of the ML models. We used the Connected Bradford dataset, consisting of over one million real-world health records. Three commonly used CIH techniques, Oversampling, Undersampling, and Class Weighting (CW), have been used for Naive Bayes (NB), Neural Network (NN), Random Forest (RF), Support Vector Machine (SVM), and Ensemble models. We report that CW outperforms the other techniques, with the highest Accuracy and F1 values of 0.9948 and 0.9556, respectively. Applying the most appropriate CIH techniques for the ML models using real-world healthcare data provides promising results for helping to reduce the risk of MI in patients with T2D.
- Published
- 2024
- Full Text
- View/download PDF
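The class-weighting technique named in the abstract above can be sketched in a few lines of Python. The "balanced" weighting formula (n_samples / (n_classes × class count)) is an assumption for illustration; the abstract does not state which weighting scheme the study used.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    using the common 'balanced' heuristic: n_samples / (n_classes * count)."""
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {cls: n_samples / (n_classes * cnt) for cls, cnt in counts.items()}

# Toy imbalanced outcome: 8 patients without MI (0) vs 2 with MI (1)
labels = [0] * 8 + [1] * 2
weights = balanced_class_weights(labels)
# The minority (MI) class receives the larger weight: {0: 0.625, 1: 2.5}
assert weights[1] > weights[0]
```

These weights would typically be passed to a learner's loss function (e.g. a `class_weight` parameter) so that minority-class errors are penalised more heavily.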
230. Clinicians risk becoming 'liability sinks' for artificial intelligence.
- Author
-
Lawton T, Morgan P, Porter Z, Hickey S, Cunningham A, Hughes N, Iacovides I, Jia Y, Sharma V, and Habli I
- Abstract
Competing Interests: TL has received an honorarium for a lecture on this topic from Al Sultan United Medical Co and is head of clinical artificial intelligence at Bradford Teaching Hospitals NHS Foundation Trust, and a potential liability sink. All other authors report no conflicts of interest
- Published
- 2024
- Full Text
- View/download PDF
231. Changing the patient safety mindset: can safety cases help?
- Author
-
Sujan M and Habli I
- Subjects
- Humans, Patient Safety
- Abstract
Competing Interests: Competing interests: MS and IH were part of the Safer Clinical Systems team.
- Published
- 2024
- Full Text
- View/download PDF
232. Applying Team Science to Collaborative Digital Health Research: Learnings from the Wearable Clinic.
- Author
-
Peek N, Stockton-Powdrell C, Casson A, Sperrin M, Parsia B, Manca A, Iglesias C, Habli I, Hassan L, Antrobus S, and Machin M
- Subjects
- Interdisciplinary Research, Learning, Ambulatory Care Facilities, Digital Health, Wearable Electronic Devices
- Abstract
Collaboration across disciplinary boundaries is vital to address the complex challenges and opportunities in Digital Health. We present findings and experiences of applying the principles of Team Science to a digital health research project called 'The Wearable Clinic'. Challenges faced were a lack of shared understanding of key terminology and concepts, and differences in publication cultures between disciplines. We also encountered more profound discrepancies, relating to definitions of "success" in a research project. We recommend that collaborative digital health research projects select a formal Team Science methodology from the outset.
- Published
- 2024
- Full Text
- View/download PDF
233. The impact of hospital command centre on patient flow and data quality: findings from the UK National Health Service.
- Author
-
Mebrahtu TF, McInerney CD, Benn J, McCrorie C, Granger J, Lawton T, Sheikh N, Habli I, Randell R, and Johnson O
- Subjects
- Humans, Retrospective Studies, Referral and Consultation, United Kingdom, Emergency Service, Hospital, Length of Stay, State Medicine, Hospitals
- Abstract
In the last 6 years, hospitals in developed countries have been trialling the use of command centres for improving organizational efficiency and patient care. However, the impact of these command centres has not been systematically studied. This was a retrospective, population-based study. Participants were patients who visited the Accident and Emergency (A&E) Department of Bradford Royal Infirmary between 1 January 2018 and 31 August 2021. Outcomes were patient flow (measured as A&E waiting time, length of stay, and clinician seen time) and data quality (measured by the proportion of missing treatment and assessment dates and valid transition between A&E care stages). Interrupted time-series segmented regression and process mining were used for analysis. A&E transition time from patient arrival to assessment by a clinician marginally improved during the intervention period; there was a decrease of 0.9 min [95% confidence interval (CI): 0.35-1.4], 3 min (95% CI: 2.4-3.5), 9.7 min (95% CI: 8.4-11.0), and 3.1 min (95% CI: 2.7-3.5) during 'patient flow program', 'command centre display roll-in', 'command centre activation', and 'hospital wide training program', respectively. However, the transition time from patient treatment until the conclusion of consultation showed an increase of 11.5 min (95% CI: 9.2-13.9), 12.3 min (95% CI: 8.7-15.9), 53.4 min (95% CI: 48.1-58.7), and 50.2 min (95% CI: 47.5-52.9) for the respective four post-intervention periods. Furthermore, the length of stay was not significantly impacted; the change was -8.8 h (95% CI: -17.6 to 0.08), -8.9 h (95% CI: -18.6 to 0.65), -1.67 h (95% CI: -10.3 to 6.9), and -0.54 h (95% CI: -13.9 to 12.8) during the four respective post-intervention periods. The pattern was similar for the waiting and clinician seen times.
Data quality as measured by the proportion of missing dates of records was generally poor (treatment date = 42.7% and clinician seen date = 23.4%) and did not significantly improve during the intervention periods. The findings of the study suggest that a command centre package that includes process change and software technology does not appear to have a consistent positive impact on patient safety and data quality based on the indicators and data we used. Therefore, hospitals considering introducing a command centre should not assume there will be benefits in patient flow and data quality., (© The Author(s) 2023. Published by Oxford University Press on behalf of International Society for Quality in Health Care.)
- Published
- 2023
- Full Text
- View/download PDF
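The interrupted time-series segmented regression used in the study above can be sketched as an ordinary least-squares fit with level and trend terms before and after an intervention. The sketch below uses simulated data, a single intervention point, and illustrative coefficients; none of these values come from the paper.

```python
import numpy as np

# Simulated weekly outcome: baseline level 30 with slope 0.1,
# then a step (level) change of -5 at the intervention at week 50.
rng = np.random.default_rng(0)
t = np.arange(100)
post = (t >= 50).astype(float)
y = 30 + 0.1 * t - 5 * post + rng.normal(0, 0.5, size=t.size)

# Segmented-regression design matrix:
# [intercept, pre-existing trend, level change, slope change after intervention]
X = np.column_stack([np.ones_like(t, dtype=float), t, post, post * (t - 50)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]  # estimated step change at the intervention (~ -5)
```

The coefficient on the post-intervention indicator (`beta[2]`) estimates the immediate level change, while the interaction term (`beta[3]`) estimates the change in trend; a multi-phase study like the one above would add one indicator and one interaction per phase.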
234. Predicting Progression of Type 2 Diabetes Using Primary Care Data with the Help of Machine Learning.
- Author
-
Ozturk B, Lawton T, Smith S, and Habli I
- Subjects
- Adult, Humans, Bayes Theorem, Machine Learning, Primary Health Care, Support Vector Machine, Diabetes Mellitus, Type 2 diagnosis, Diabetes Mellitus, Type 2 epidemiology, Hypertension diagnosis, Hypertension epidemiology
- Abstract
Type 2 diabetes is a life-long health condition, and as it progresses, a range of comorbidities can develop. The prevalence of diabetes has increased gradually, and it is expected that 642 million adults will be living with diabetes by 2040. Early and proper interventions for managing diabetes-related comorbidities are important. In this study, we propose a Machine Learning (ML) model for predicting the risk of developing hypertension for patients who already have Type 2 diabetes. We used the Connected Bradford dataset, consisting of 1.4 million patients, as our main dataset for data analysis and model building. As a result of data analysis, we found that hypertension is the most frequent observation among patients having Type 2 diabetes. Since hypertension is a strong predictor of clinically poor outcomes, such as heart, brain, and kidney disease, it is crucial to make early and accurate predictions of the risk of having hypertension for Type 2 diabetic patients. We used Naïve Bayes (NB), Neural Network (NN), Random Forest (RF), and Support Vector Machine (SVM) to train our model. Then we ensembled these models to see the potential performance improvement. The ensemble method gave the best classification performance, with accuracy and kappa values of 0.9525 and 0.2183, respectively. We concluded that predicting the risk of developing hypertension for Type 2 diabetic patients using ML provides a promising stepping stone for preventing Type 2 diabetes progression.
- Published
- 2023
- Full Text
- View/download PDF
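The ensembling step described in the abstract above can be illustrated with a minimal majority-vote sketch. The base-model predictions below are made up for illustration, and the abstract does not specify which combination rule the study used.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several base models by majority vote.
    `predictions` is a list of per-model prediction lists, one label per patient."""
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical base models (e.g. NB, RF, SVM) predicting
# hypertension (1) or no hypertension (0) for four patients.
nb  = [1, 0, 1, 0]
rf  = [1, 1, 1, 0]
svm = [0, 1, 1, 0]
print(majority_vote([nb, rf, svm]))  # -> [1, 1, 1, 0]
```

Weighted voting or stacking (training a meta-model on the base models' outputs) are common refinements of this basic scheme.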
235. Effect of a hospital command centre on patient safety: an interrupted time series study.
- Author
-
Mebrahtu TF, McInerney CD, Benn J, McCrorie C, Granger J, Lawton T, Sheikh N, Randell R, Habli I, and Johnson OA
- Subjects
- Humans, Interrupted Time Series Analysis, Retrospective Studies, Cohort Studies, Hospitals, Patients
- Abstract
Background: Command centres have been piloted in some hospitals across the developed world in the last few years. Their impact on patient safety, however, has not been systematically studied. Hence, we aimed to investigate this., Methods: This is a retrospective population-based cohort study. Participants were patients who visited Bradford Royal Infirmary Hospital and Calderdale & Huddersfield hospitals between 1 January 2018 and 31 August 2021. A five-phase, interrupted time series, linear regression analysis was used., Results: After introduction of a Command Centre, while mortality and readmissions marginally improved, there was no statistically significant impact on postoperative sepsis. In the intervention hospital, when compared with the preintervention period, mortality decreased by 1.4% (95% CI 0.8% to 1.9%), 1.5% (95% CI 0.9% to 2.1%), 1.3% (95% CI 0.7% to 1.8%) and 2.5% (95% CI 1.7% to 3.4%) during successive phases of the command centre programme, including roll-in and activation of the technology and preparatory quality improvement work. However, in the control site, compared with the baseline, the weekly mortality also decreased by 2.0% (95% CI 0.9 to 3.1), 2.3% (95% CI 1.1 to 3.5), 1.3% (95% CI 0.2 to 2.4), 3.1% (95% CI 1.4 to 4.8) for the respective intervention phases. No impact on any of the indicators was observed when only the software technology part of the Command Centre was considered., Conclusion: Implementation of a hospital Command Centre may have a marginal positive impact on patient safety when implemented as part of a broader hospital-wide improvement programme including colocation of operations and clinical leads in a central location. However, improvement in patient safety indicators was also observed for a comparable period in the control site. 
Further evaluative research into the impact of hospital command centres on a broader range of patient safety and other outcomes is warranted., Competing Interests: Competing interests: None declared., (© Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY. Published by BMJ.)
- Published
- 2023
- Full Text
- View/download PDF
236. Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment.
- Author
-
Festor P, Jia Y, Gordon AC, Faisal AA, Habli I, and Komorowski M
- Subjects
- Artificial Intelligence, Critical Care, Humans, Decision Support Systems, Clinical, Sepsis therapy
- Abstract
Objectives: Establishing confidence in the safety of Artificial Intelligence (AI)-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis., Methods: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions., Results: Using a subset of the Medical Information Mart for Intensive Care (MIMIC-III) database, we demonstrated that our previously published 'AI clinician' recommended fewer hazardous decisions than human clinicians in three out of our four predefined clinical scenarios, while the difference was not statistically significant in the fourth scenario. Then, we modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model shows enhanced safety, without negatively impacting model performance., Discussion: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data were curated to limit the impact of this confounder., Conclusion: These advances provide a use case for the systematic safety assurance of AI-based clinical systems towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies., Competing Interests: Competing interests: The authors declare the following competing interest: MK: editorial board of BMJ Health and Care Informatics, consulting fees (Philips Healthcare) and speaker honoraria (GE Healthcare). 
The other authors declare no competing interest., (© Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY. Published by BMJ.)
- Published
- 2022
- Full Text
- View/download PDF
237. Patient Safety Informatics: Meeting the Challenges of Emerging Digital Health.
- Author
-
McInerney C, Benn J, Dowding D, Habli I, Jenkins DA, McCrorie C, Peek N, Randell R, Williams R, and Johnson OA
- Subjects
- Humans, Interdisciplinary Studies, Medical Informatics, Patient Safety
- Abstract
The fourth industrial revolution is based on cyber-physical systems and the connectivity of devices. It is currently unclear what the consequences are for patient safety as existing digital health technologies become ubiquitous with increasing pace and interact in unforeseen ways. In this paper, we describe the output from a workshop focused on identifying the patient safety challenges associated with emerging digital health technologies. We discuss six challenges identified in the workshop and present recommendations to address the patient safety concerns posed by them. A key implication of considering the challenges and opportunities for Patient Safety Informatics is the interdisciplinary contribution required to study digital health technologies within their embedded context. The principles underlying our recommendations are those of proactive and systems approaches that relate the social, technical and regulatory facets underpinning patient safety informatics theory and practice.
- Published
- 2022
- Full Text
- View/download PDF
238. Assuring safe artificial intelligence in critical ambulance service response: study protocol.
- Author
-
Sujan M, Thimbleby H, Habli I, Cleve A, Maaløe L, and Rees N
- Abstract
Introduction: Early recognition of out-of-hospital cardiac arrest (OHCA) by ambulance service call centre operators is important so that cardiopulmonary resuscitation can be delivered immediately, but around 25% of OHCAs are not picked up by call centre operators. An artificial intelligence (AI) system has been developed to support call centre operators in the detection of OHCA. The study aims to (1) explore ambulance service stakeholder perceptions on the safety of OHCA AI decision support in call centres, and (2) develop a clinical safety case for the OHCA AI decision-support system., Methods and Analysis: The study will be undertaken within the Welsh Ambulance Service. The study is part research and part service evaluation. The research utilises a qualitative study design based on thematic analysis of interview data. The service evaluation consists of the development of a clinical safety case based on document analysis, analysis of the AI model and its development process and informal interviews with the technology developer., Conclusions: AI presents many opportunities for ambulance services, but safety assurance requirements need to be understood. The ASSIST project will continue to explore and build the body of knowledge in this area., (© 2022 The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
239. Evaluating the safety and patient impacts of an artificial intelligence command centre in acute hospital care: a mixed-methods protocol.
- Author
-
McInerney C, McCrorie C, Benn J, Habli I, Lawton T, Mebrahtu TF, Randell R, Sheikh N, and Johnson O
- Subjects
- Hospitals, Humans, Patient Participation, Reproducibility of Results, Artificial Intelligence, State Medicine
- Abstract
Introduction: This paper presents a mixed-methods study protocol that will be used to evaluate a recent implementation of a real-time, centralised hospital command centre in the UK. The command centre represents a complex intervention within a complex adaptive system. It could support better operational decision-making and facilitate identification and mitigation of threats to patient safety. There is, however, limited research on the impact of such complex health information technology on patient safety, reliability and operational efficiency of healthcare delivery and this study aims to help address that gap., Methods and Analysis: We will conduct a longitudinal mixed-method evaluation that will be informed by public-and-patient involvement and engagement. Interviews and ethnographic observations will inform iterations with quantitative analysis that will sensitise further qualitative work. Quantitative work will take an iterative approach to identify relevant outcome measures from both the literature and pragmatically from datasets of routinely collected electronic health records., Ethics and Dissemination: This protocol has been approved by the University of Leeds Engineering and Physical Sciences Research Ethics Committee (#MEEC 20-016) and the National Health Service Health Research Authority (IRAS No.: 285933). Our results will be communicated through peer-reviewed publications in international journals and conferences. We will provide ongoing feedback as part of our engagement work with local trust stakeholders., Competing Interests: Competing interests: None declared., (© Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY. Published by BMJ.)
- Published
- 2022
- Full Text
- View/download PDF
240. Safety cases for digital health innovations: can they work?
- Author
-
Sujan M and Habli I
- Subjects
- Humans, Telemedicine trends
- Abstract
Competing Interests: Competing interests: None declared.
- Published
- 2021
- Full Text
- View/download PDF
241. Classification of Failures in the Perception of Conversational Agents (CAs) and Their Implications on Patient Safety.
- Author
-
Aftab H, Shah SHH, and Habli I
- Subjects
- Delivery of Health Care, Humans, Perception, Communication, Patient Safety
- Abstract
The use of Conversational agents (CAs) in healthcare is an emerging field. These CAs seem to be effective in accomplishing administrative tasks, e.g. providing locations of care facilities and scheduling appointments. Modern CAs use machine learning (ML) to recognize, understand and generate a response. Given the criticality of many healthcare settings, ML and other component errors may result in CA failures and may cause adverse effects on patients. Therefore, in-depth assurance is required before the deployment of ML in critical clinical applications, e.g. management of medication dose or medical diagnosis. CA safety issues could arise due to diverse causes, e.g. related to user interactions, environmental factors and ML errors. In this paper, we classify failures of perception (recognition and understanding) of CAs and their sources. We also present a case study of a CA used for calculating insulin dose for gestational diabetes mellitus (GDM) patients. We then correlate identified perception failures of CAs to potential scenarios that might compromise patient safety.
- Published
- 2021
- Full Text
- View/download PDF
242. Safety-driven design of machine learning for sepsis treatment.
- Author
-
Jia Y, Lawton T, Burden J, McDermid J, and Habli I
- Subjects
- Delivery of Health Care, Humans, Workflow, Machine Learning, Sepsis diagnosis, Sepsis therapy
- Abstract
Machine learning (ML) has the potential to bring significant clinical benefits. However, there are patient safety challenges in introducing ML in complex healthcare settings and in assuring the technology to the satisfaction of the different regulators. The work presented in this paper tackles the urgent problem of proactively assuring ML in its clinical context as a step towards enabling the safe introduction of ML into clinical practice. In particular, the paper considers the use of deep Reinforcement Learning, a type of ML, for sepsis treatment. The methodology starts with the modelling of a clinical workflow that integrates the ML model for sepsis treatment recommendations. Then safety analysis is carried out based on the clinical workflow, identifying hazards and safety requirements for the ML model. In this paper the design of the ML model is enhanced to satisfy the safety requirements for mitigating a major clinical hazard: sudden change of vasopressor dose. A rigorous evaluation is conducted to show how these requirements are met. A safety case is presented, providing a basis for regulators to make a judgement on the acceptability of introducing the ML model into sepsis treatment in a healthcare setting. The overall argument is broad in considering the wider patient safety considerations, but the detailed rationale and supporting evidence presented relate to this specific hazard. Whilst there are no agreed regulatory approaches to introducing ML into healthcare, the work presented in this paper has shown a possible direction for overcoming this barrier and exploiting the benefits of ML without compromising safety., (Copyright © 2021 Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
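One way to encode the "sudden change of vasopressor dose" hazard above as a design-time safety constraint is to bound how far a recommended dose may move from the current dose in a single step, in the spirit of the action-space limiting described in these papers. The sketch below is illustrative only: the function name, units, and `max_step` value are arbitrary assumptions, not taken from the papers.

```python
def constrain_dose(current_dose, recommended_dose, max_step=5.0):
    """Limit a recommended vasopressor dose (e.g. mcg/min) so that it
    never moves more than `max_step` from the current dose in one step."""
    low, high = current_dose - max_step, current_dose + max_step
    return min(max(recommended_dose, low), high)

# The model asks for a large jump; the safety constraint tempers it.
print(constrain_dose(20.0, 60.0))  # -> 25.0 (sudden jump capped)
print(constrain_dose(20.0, 18.0))  # -> 18.0 (within the limit, unchanged)
```

An equivalent effect can be achieved during training by penalising large dose changes in the reward function, which is the route the AI Clinician work describes; the post-hoc clip above is a simpler, complementary runtime guard.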
243. Enhancing COVID-19 decision making by creating an assurance case for epidemiological models.
- Author
-
Habli I, Alexander R, Hawkins R, Sujan M, McDermid J, Picardi C, and Lawton T
- Subjects
- COVID-19, Humans, Pandemics, SARS-CoV-2, Betacoronavirus, Clinical Decision-Making methods, Coronavirus Infections therapy, Evidence-Based Medicine, Health Care Rationing organization & administration, Pneumonia, Viral therapy, Quality Assurance, Health Care organization & administration
- Abstract
Competing Interests: Competing interests: None declared.
- Published
- 2020
- Full Text
- View/download PDF
244. Preliminary Safety Analysis of a Wearable Clinic for the Early Detection of Psychotic Relapse.
- Author
-
Habli I, Stockton-Powdrell C, Machin M, Fraccaro P, Lewis S, and Peek N
- Subjects
- Humans, Mobile Applications, Recurrence, Smartphone, Mental Disorders, Wearable Electronic Devices
- Abstract
We discuss the preliminary safety analysis of a smartphone-based intervention for early detection of psychotic relapse. We briefly describe how we identified patient safety hazards associated with the system and how measures were defined to mitigate these hazards.
- Published
- 2020
- Full Text
- View/download PDF
245. Artificial intelligence in health care: accountability and safety.
- Author
-
Habli I, Lawton T, and Porter Z
- Subjects
- Health Facilities, Artificial Intelligence, Delivery of Health Care, Safety Management, Social Responsibility
- Abstract
The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how the artificial intelligence systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. Meanwhile, none of the actors in the model robustly fulfil the traditional conditions of moral accountability for the decisions of an artificial intelligence system. We should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system before the system has been deployed., ((c) 2020 The authors; licensee World Health Organization.)
- Published
- 2020
- Full Text
- View/download PDF
246. Developing a Safety Case for Electronic Prescribing.
- Author
-
Jia Y, Lawton T, White S, and Habli I
- Subjects
- Hospitals, Teaching, Humans, Medication Errors, Electronic Prescribing, Patient Harm
- Abstract
It is now recognised that Health IT systems can bring benefits to healthcare, but they can also introduce new causes of risks that contribute to patient harm. This paper focuses on approaches to modelling and analysing potential causes of medication errors, particularly those arising from the use of Electronic Prescribing. It sets out a systematic way of analysing hazards, their causes and consequences, drawing on the expertise of a multidisciplinary team. The analysis results are used to support the development of a safety case for a large-scale Health IT system in use in three teaching hospitals. The paper shows how elements of the safety case can be updated dynamically. We show that it is valuable to use the dynamically updated elements to inform clinicians about changes in risk, and thus prompt changes in practice to mitigate the risks.
- Published
- 2019
- Full Text
- View/download PDF
247. YORwalK: Designing a Smartphone Exercise Application for People with Intermittent Claudication.
- Author
-
Shalan A, Abdulrahman A, Habli I, Tew G, and Thompson A
- Subjects
- Humans, Telemedicine, Exercise, Intermittent Claudication therapy, Mobile Applications, Smartphone
- Abstract
Peripheral Arterial Disease (PAD) is a chronic cardiovascular disease that is highly prevalent in older adults. Mobile Health (mHealth) and Telehealth technologies are considered two central digital solutions for enabling patient-centred care, and there is evidence that physical activity apps can improve health outcomes in adults. The aim of this project was to develop a prototype smartphone app, named YORwalK, to promote exercise and track changes in walking ability in patients with PAD. We used a multidisciplinary team combined with a User Centred Design approach. We performed an evaluation survey using a modified System Usability Scale (SUS); the survey, completed by healthcare professionals, assessed the usability of the app. The app was developed around the concept of promoting behaviour change through feedback and lifestyle prompts, and its features incorporate self-monitoring and motivating feedback. The SUS result indicated high usability of the app.
- Published
- 2018