30 results for "Responsible AI"
Search Results
2. Dimensions of data sparseness and their effect on supply chain visibility
- Author
- van Schilt, Isabelle M., Kwakkel, Jan H., Mense, Jelte P., and Verbraeck, Alexander (Sub Responsible AI)
- Published
- 2024
3. Potential-based reward shaping using state–space segmentation for efficiency in reinforcement learning
- Author
- Bal, Melis İlayda, Aydın, Hüseyin, İyigün, Cem, and Polat, Faruk (Sub Responsible AI)
- Published
- 2024
4. Minimality, necessity and sufficiency for argumentation and explanation
- Author
- Borg, AnneMarie, and Bex, Floris (Sub Responsible AI)
- Published
- 2024
5. Human-Centred Explanation of Rule-Based Decision-Making Systems in the Legal Domain: Demonstration
- Author
- Sileno, Giovanni, Spanakis, Jerry, van Dijck, Gijs, Zuurmond, Suzan, Borg, Anne Marie, Van Kempen, Matthijs, and Wieten, Remi (Sub Responsible AI)
- Published
- 2023
6. Data-Driven Revision of Conditional Norms in Multi-Agent Systems (Extended Abstract)
- Author
- Elkind, Edith, Dell'Anna, Davide, Alechina, Natasha, Dalpiaz, Fabiano, Dastani, Mehdi, and Logan, Brian (Sub Responsible AI, Sub Intelligent Systems, Sub Software Production, Sub Organization and Information)
- Published
- 2023
7. Hierarchical a Fortiori Reasoning with Dimensions
- Author
- Sileno, Giovanni, Spanakis, Jerry, Dijck, Gijs van, van Woerkom, Wijnand, Grossi, Davide, Prakken, Henry, and Verheij, Bart (Sub Responsible AI, Afd Informatica Algemeen)
- Published
- 2023
8. Precedent-Based Reasoning with Incomplete Cases
- Author
- Sileno, Giovanni, Spanakis, Jerry, Dijck, Gijs van, Odekerken, Daphne, Bex, Floris, and Prakken, Henry (Sub Responsible AI, Sub Intelligent Systems, Intelligent Systems)
- Published
- 2023
9. Explaining Model Behavior with Global Causal Analysis
- Author
- Longo, Luca, Robeer, Marcel, Bex, Floris, Feelders, Ad, and Prakken, Henry (Intelligent Systems, Sub Responsible AI, Sub Intelligent Systems, Sub Algorithmic Data Analysis)
- Published
- 2023
10. Uncertainty-Aware Personal Assistant for Making Personalized Privacy Decisions
- Author
- Ayci, Gonul, Sensoy, Murat, Ozgur, Arzucan, and Yolum, Pinar (Sub Responsible AI)
- Published
- 2023
11. PACCART: Reinforcing Trust in Multiuser Privacy Agreement Systems
- Author
- Di Scala, Daan, and Yolum, Pinar (Sub Responsible AI, Dep Informatica)
- Published
- 2023
12. Can We Explain Privacy?
- Author
- Ayci, Gonul, Ozgur, Arzucan, Sensoy, Murat K., and Yolum, Pinar P. (Sub Responsible AI)
- Published
- 2023
13. Feasibility Study of the Self-Care Immediate Stabilization Procedure (ISP)® After a Traumatic Experience
- Author
- Oren Asman, Senior Lecturer and Director of the Samueli Initiative for Responsible AI in Medicine
- Published
- 2024
14. Control-Oriented Neural State-Space Models for State-Feedback Linearization and Pole Placement
- Author
- Hache, Alexandre, Thieffry, Maxime, Yagoubi, Mohamed, and Chevrel, Philippe (LS2N équipe CODEx, IMT Atlantique, Nantes Université, CNRS, Inria; project ANR-20-THIA-0019: AI@IMT Responsible AI for Industry and Society)
- Subjects
- [SPI.AUTO] Engineering Sciences [physics]/Automatic
- Abstract
Starting from a data set consisting of input-output measurements of a dynamical process, this paper presents a training procedure for a specifically control-oriented model. The considered dynamic model adopts a particular neural state-space representation: its structure guarantees its linearizability by state feedback. Moreover, the linearizing control law follows trivially from the parameters of the learned model. The method relies on a parameterized continuous-time neural state-space model whose structure is inspired by well-known exact linearization. The feasibility and efficiency of the approach are illustrated on a nonlinear identification benchmark, namely Silverbox. The quality of learning and the linearizing feature of the control design are validated on two nonlinear models by comparing the input-output behavior of each closed loop with its best linear approximation.
- Published
- 2022
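A rough, hedged illustration of the model structure described in entry 14: a control-affine neural state-space model, x_dot = f(x) + g(x)·u, in which the linearizing feedback can be read directly off the learned networks. The network sizes, state dimension, and the guard against division by zero are assumptions made for this sketch, not details from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(params, x):
        # Tiny tanh network: params = (W1, b1, W2, b2).
        W1, b1, W2, b2 = params
        return W2 @ np.tanh(W1 @ x + b1) + b2

    n = 2  # state dimension (assumption)

    def init(out_bias=0.0):
        return (0.1 * rng.normal(size=(8, n)), np.zeros(8),
                0.1 * rng.normal(size=(n, 8)), np.full(n, out_bias))

    f_params = init()              # drift term f(x)
    g_params = init(out_bias=0.5)  # input gain g(x), biased away from zero

    def x_dot(x, u):
        # Control-affine structure: x_dot = f(x) + g(x) * u (scalar input u).
        return mlp(f_params, x) + mlp(g_params, x) * u

    def linearizing_u(x, v, eps=1e-3):
        # With this structure, u = (v - f_1(x)) / g_1(x) makes the first
        # state obey x1_dot = v, as long as g_1(x) stays away from zero.
        f1 = mlp(f_params, x)[0]
        g1 = mlp(g_params, x)[0]
        g1 = g1 if abs(g1) > eps else eps  # crude guard near zero
        return (v - f1) / g1

    x = np.array([0.3, -0.1])
    print(x_dot(x, linearizing_u(x, v=1.0))[0])  # ~1.0: first state tracks v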
15. Generative artificial intelligence and ethical considerations in health care: a scoping review and ethics checklist.
- Author
- Ning Y, Teixayavong S, Shang Y, Savulescu J, Nagaraj V, Miao D, Mertens M, Ting DSW, Ong JCL, Liu M, Cao J, Dunn M, Vaughan R, Ong MEH, Sung JJ, Topol EJ, and Liu N
- Subjects
- Humans, Artificial Intelligence ethics, Checklist, Delivery of Health Care ethics
- Abstract
The widespread use of Chat Generative Pre-trained Transformer (known as ChatGPT) and other emerging technologies powered by generative artificial intelligence (GenAI) has drawn attention to the potential ethical issues they can cause, especially in high-stakes applications such as health care, but ethical discussions have not yet been translated into operationalisable solutions. Furthermore, ongoing ethical discussions often neglect other types of GenAI that have been used to synthesise data (eg, images) for research and practical purposes, which resolve some ethical issues and expose others. We did a scoping review of the ethical discussions on GenAI in health care to comprehensively analyse gaps in the research. To reduce the gaps, we have developed a checklist for comprehensive assessment and evaluation of ethical discussions in GenAI research. The checklist can be integrated into peer review and publication systems to enhance GenAI research and might be useful for ethics-related disclosures for GenAI-powered products and health-care applications of such products and beyond.
- Published
- 2024
- Full Text
- View/download PDF
16. An Ethical Perspective on the Democratization of Mental Health With Generative AI.
- Author
- Elyoseph Z, Gur T, Haber Y, Simon T, Angert T, Navon Y, Tal A, and Asman O
- Subjects
- Humans, Democracy, Artificial Intelligence ethics, Mental Health ethics
- Abstract
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates ethical considerations in using generative artificial intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and most recently, GenAI technologies such as large language models. The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to mental health care, personalized responses, and conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced and ethical approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.
- Published
- 2024
- Full Text
- View/download PDF
17. Regulating AI in Mental Health: Ethics of Care Perspective.
- Author
- Tavory T
- Subjects
- Humans, Mental Health Services ethics, Mental Health Services legislation & jurisprudence, Mental Health ethics, Artificial Intelligence ethics
- Abstract
This article contends that the responsible artificial intelligence (AI) approach, the dominant ethics approach in most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of the accountability and responsibility of companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI's impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article delves into the emergence of the new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. The article highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing the ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.
- Published
- 2024
- Full Text
- View/download PDF
18. Research Into Digital Health Intervention for Mental Health: 25-Year Retrospective on the Ethical and Legal Challenges.
- Author
- Hall CL, Gómez Bergin AD, and Rennick-Egglestone S
- Subjects
- Humans, Retrospective Studies, Mental Health Services legislation & jurisprudence, Mental Health Services ethics, Telemedicine ethics, Telemedicine legislation & jurisprudence, Digital Health, Mental Health
- Abstract
Digital mental health interventions are routinely integrated into mental health services internationally and can contribute to reducing the global mental health treatment gap identified by the World Health Organization. Research teams designing and delivering evaluations frequently invest substantial effort in deliberating on ethical and legal challenges around digital mental health interventions. In this article, we reflect on our own research experience with digital mental health intervention design and evaluation to identify 8 of the most critical challenges that we or others have faced, and that have ethical or legal consequences. These include: (1) harm caused by online recruitment work; (2) monitoring of intervention safety; (3) exclusion of specific demographic or clinical groups; (4) inadequate robustness of effectiveness and cost-effectiveness findings; (5) adequately conceptualizing and supporting engagement and adherence; (6) structural barriers to implementation; (7) data protection and intellectual property; and (8) regulatory ambiguity relating to digital mental health interventions that are medical devices. As we describe these challenges, we have highlighted serious consequences that can or have occurred, such as substantial delays to studies if regulations around Software as a Medical Device (SaMD) are not fully understood, or if regulations change substantially during the study lifecycle. Collectively, the challenges we have identified highlight a substantial body of required knowledge and expertise, either within the team or through access to external experts. Ensuring access to knowledge requires careful planning and adequate financial resources (for example, paying public contributors to engage in debate on critical ethical issues or paying for legal opinions on regulatory issues). Access to such resources can be planned for on a per-study basis and enabled through funding proposals. However, organizations regularly engaged in the development and evaluation of digital mental health interventions should consider creating or supporting structures such as advisory groups that can retain necessary competencies, such as in medical device regulation.
- Published
- 2024
- Full Text
- View/download PDF
19. Variability of joint hypermobility in children: a meta-analytic approach to set cut-off scores.
- Author
- Williams CM, Welch JJ, Scheper M, Tofts L, and Pacey V
- Subjects
- Humans, Child, Adolescent, Female, Male, Prevalence, Child, Preschool, Joint Instability diagnosis, Joint Instability epidemiology
- Abstract
Current international consensus on the appropriate Beighton score cut-off to define whether a child has generalised joint hypermobility is based upon expert opinion. Our aim was to determine the prevalence of Beighton scores of children worldwide to provide a recommendation for establishing the Beighton score cut-off to identify generalised joint hypermobility in children. We used AMED, OVID Medline, Embase and CINAHL to find published articles from inception to April 2024 describing Beighton scores of children up to and including 18 years from the general population. We extracted study demographics including country of publication, total number of participants, summary data about the age and sex of participants, Beighton scores and any cut-off used where authors deemed children hypermobile, and how many children were rated at the corresponding Beighton scores. There were 37 articles reporting on the prevalence or incidence of hypermobility at cut-off scores from 28,868 participants. Using the cut-off of ≥ 6 resulted in a prevalence of 6% for studies reporting male data and 13% for studies reporting female data. Limited data availability precluded further sub-analysis at a Beighton score of ≥ 7, age, pubertal status and ethnicity. Conclusion: The working threshold for identifying generalised joint hypermobility in children should be a Beighton score of 6 or more. Our analysis also suggests a Beighton score of 7 or greater may be appropriate in childhood, particularly for females. What is Known: • The working threshold for identifying generalised joint hypermobility in children was previously set based on expert opinion. What is New: • The threshold to identify hypermobility in children should be at a minimum of ≥ 6 on the Beighton score.
- Published
- 2024
- Full Text
- View/download PDF
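For illustration of the cut-off recommended in entry 19, a minimal sketch encoding a Beighton threshold of >= 6, with >= 7 as the possibly stricter cut-off for females; the function and parameter names are hypothetical, not from the paper.

    def is_hypermobile(beighton_score: int, female: bool = False,
                       use_stricter_female_cutoff: bool = False) -> bool:
        # Beighton scores range 0-9 (one point per positive joint test).
        if not 0 <= beighton_score <= 9:
            raise ValueError("Beighton score must be between 0 and 9")
        cutoff = 7 if (female and use_stricter_female_cutoff) else 6
        return beighton_score >= cutoff

    print(is_hypermobile(6))        # True: meets the recommended >= 6 cut-off
    print(is_hypermobile(6, female=True, use_stricter_female_cutoff=True))
    # False: below the possible >= 7 threshold for females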
20. The Immunopeptidomics Ontology (ImPO).
- Author
- Faria D, Eugénio P, Contreiras Silva M, Balbi L, Bedran G, Kallor AA, Nunes S, Palkowski A, Waleron M, Alfaro JA, and Pesquita C
- Subjects
- Humans, Proteomics methods, Peptides immunology, Databases, Protein, Biological Ontologies
- Abstract
The adaptive immune response plays a vital role in eliminating infected and aberrant cells from the body. This process hinges on the presentation of short peptides by major histocompatibility complex Class I molecules on the cell surface. Immunopeptidomics, the study of peptides displayed on cells, delves into the wide variety of these peptides. Understanding the mechanisms behind antigen processing and presentation is crucial for effectively evaluating cancer immunotherapies. As an emerging domain, immunopeptidomics currently lacks standardization-there is neither an established terminology nor formally defined semantics-a critical concern considering the complexity, heterogeneity, and growing volume of data involved in immunopeptidomics studies. Additionally, there is a disconnection between how the proteomics community delivers the information about antigen presentation and its uptake by the clinical genomics community. Considering the significant relevance of immunopeptidomics in cancer, this shortcoming must be addressed to bridge the gap between research and clinical practice. In this work, we detail the development of the ImmunoPeptidomics Ontology, ImPO, the first effort at standardizing the terminology and semantics in the domain. ImPO aims to encapsulate and systematize data generated by immunopeptidomics experimental processes and bioinformatics analysis. ImPO establishes cross-references to 24 relevant ontologies, including the National Cancer Institute Thesaurus, Mondo Disease Ontology, Logical Observation Identifier Names and Codes and Experimental Factor Ontology. Although ImPO was developed using expert knowledge to characterize a large and representative data collection, it may be readily used to encode other datasets within the domain. Ultimately, ImPO facilitates data integration and analysis, enabling querying, inference and knowledge generation and importantly bridging the gap between the clinical proteomics and genomics communities. As the field of immunogenomics uses protein-level immunopeptidomics data, we expect ImPO to play a key role in supporting a rich and standardized description of the large-scale data that emerging high-throughput technologies are expected to bring in the near future. Ontology URL: https://zenodo.org/record/10237571 Project GitHub: https://github.com/liseda-lab/ImPO/blob/main/ImPO.owl.
- Published
- 2024
- Full Text
- View/download PDF
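Entry 20 publishes ImPO as an OWL file, so any generic RDF toolkit should be able to load and query it. A minimal sketch with rdflib, assuming the raw-file URL inferred from the GitHub link in the abstract and an RDF/XML serialization:

    from rdflib import Graph, RDF, OWL

    g = Graph()
    # Raw-file URL inferred from the project GitHub link (assumption), with
    # the OWL file serialized as RDF/XML (also an assumption).
    g.parse("https://raw.githubusercontent.com/liseda-lab/ImPO/main/ImPO.owl",
            format="xml")

    print(len(g), "triples loaded")
    # Generic OWL query: enumerate the ontology's named classes.
    for cls in g.subjects(RDF.type, OWL.Class):
        print(cls)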
21. The AI ethics of digital COVID-19 diagnosis and their legal, medical, technological, and operational managerial implications.
- Author
- Bartenschlager CC, Gassner UM, Römmele C, Brunner JO, Schlögl-Flierl K, and Ziethmann P
- Subjects
- Humans, SARS-CoV-2, Machine Learning ethics, Diagnosis, Computer-Assisted ethics, Pandemics, COVID-19 diagnosis, Artificial Intelligence ethics
- Abstract
The COVID-19 pandemic has given rise to a broad range of research from fields alongside and beyond the core concerns of infectiology, epidemiology, and immunology. One significant subset of this work centers on machine learning-based approaches to supporting medical decision-making around COVID-19 diagnosis. To date, various challenges, including IT issues, have meant that, notwithstanding this strand of research on digital diagnosis of COVID-19, the actual use of these methods in medical facilities remains incipient at best, despite their potential to relieve pressure on scarce medical resources, prevent instances of infection, and help manage the difficulties and unpredictabilities surrounding the emergence of new mutations. The reasons behind this research-application gap are manifold and may imply an interdisciplinary dimension. We argue that the discipline of AI ethics can provide a framework for interdisciplinary discussion and create a roadmap for the application of digital COVID-19 diagnosis, taking into account all disciplinary stakeholders involved. This article proposes such an ethical framework for the practical use of digital COVID-19 diagnosis, considering legal, medical, operational managerial, and technological aspects of the issue in accordance with our diverse research backgrounds and noting the potential of the approach we set out here to guide future research.
- Published
- 2024
- Full Text
- View/download PDF
22. Multi-Omic Analysis of Esophageal Adenocarcinoma Uncovers Candidate Therapeutic Targets and Cancer-Selective Posttranscriptional Regulation.
- Author
- O'Neill JR, Yébenes Mayordomo M, Mitulović G, Al Shboul S, Bedran G, Faktor J, Hernychova L, Uhrik L, Gómez-Herranz M, Kocikowski M, Save V, Vojtěšek B, Arends MJ, Hupp T, and Alfaro JA
- Subjects
- Humans, Male, Female, RNA Processing, Post-Transcriptional, Proteome metabolism, Multiomics, Esophageal Neoplasms genetics, Esophageal Neoplasms metabolism, Esophageal Neoplasms pathology, Adenocarcinoma genetics, Adenocarcinoma metabolism, Adenocarcinoma pathology, Proteomics methods, Gene Expression Regulation, Neoplastic, Biomarkers, Tumor metabolism, Biomarkers, Tumor genetics
- Abstract
Efforts to address the poor prognosis associated with esophageal adenocarcinoma (EAC) have been hampered by a lack of biomarkers to identify early disease and therapeutic targets. Despite extensive efforts to understand the somatic mutations associated with EAC over the past decade, a gap remains in understanding how the atlas of genomic aberrations in this cancer impacts the proteome and which somatic variants are of importance for the disease phenotype. We performed a quantitative proteomic analysis of 23 EACs and matched adjacent normal esophageal and gastric tissues. We explored the correlation of transcript and protein abundance using tissue-matched RNA-seq and proteomic data from seven patients and further integrated these data with a cohort of EAC RNA-seq data (n = 264 patients), EAC whole-genome sequencing (n = 454 patients), and external published datasets. We quantified protein expression from 5879 genes in EAC and patient-matched normal tissues. Several biomarker candidates with EAC-selective expression were identified, including the transmembrane protein GPA33. We further verified the EAC-enriched expression of GPA33 in an external cohort of 115 patients and confirm this as an attractive diagnostic and therapeutic target. To further extend the insights gained from our proteomic data, an integrated analysis of protein and RNA expression in EAC and normal tissues revealed several genes with poorly correlated protein and RNA abundance, suggesting posttranscriptional regulation of protein expression. These outlier genes, including SLC25A30, TAOK2, and AGMAT, only rarely demonstrated somatic mutation, suggesting post-transcriptional drivers for this EAC-specific phenotype. AGMAT was demonstrated to be overexpressed at the protein level in EAC compared to adjacent normal tissues with an EAC-selective, post-transcriptional mechanism of regulation of protein abundance proposed. Integrated analysis of proteome, transcriptome, and genome in EAC has revealed several genes with tumor-selective, posttranscriptional regulation of protein expression, which may be an exploitable vulnerability.
- Published
- 2024
- Full Text
- View/download PDF
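The transcript-protein comparison described in entry 22 can be illustrated as a per-gene rank correlation across matched samples, flagging poorly correlated genes as candidates for post-transcriptional regulation. The toy data and the 0.2 threshold are assumptions for the sketch, not the study's pipeline:

    import pandas as pd
    from scipy.stats import spearmanr

    # Rows are genes, columns are matched samples (toy numbers only).
    rna = pd.DataFrame({"s1": [1, 9, 5], "s2": [2, 7, 6], "s3": [3, 5, 7],
                        "s4": [4, 3, 8]}, index=["GPA33", "AGMAT", "TAOK2"])
    prot = pd.DataFrame({"s1": [2, 1, 9], "s2": [3, 4, 2], "s3": [5, 6, 8],
                         "s4": [6, 9, 1]}, index=rna.index)

    for gene in rna.index:
        rho, _ = spearmanr(rna.loc[gene], prot.loc[gene])
        if rho < 0.2:  # illustrative cut-off for "poorly correlated"
            print(f"{gene}: rho={rho:+.2f} -> candidate post-transcriptional regulation")
        else:
            print(f"{gene}: rho={rho:+.2f}")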
23. Telesurgery collaborative community working group: insights about the current telesurgery scenario.
- Author
- Patel V, Moschovas MC, Marescaux J, Satava R, Dasgupta P, and Dohler M
- Subjects
- Humans, Robotic Surgical Procedures methods, Telemedicine
- Published
- 2024
- Full Text
- View/download PDF
24. Towards responsible use of artificial intelligence in daily practice: what do physiotherapists need to know, consider and do?
- Author
- Scheper MC, van Velzen M, and van Meeteren NLU
- Subjects
- Humans, Attitude of Health Personnel, Health Knowledge, Attitudes, Practice, Artificial Intelligence, Physical Therapists
- Published
- 2024
- Full Text
- View/download PDF
25. Unreliable LLM Bioethics Assistants: Ethical and Pedagogical Risks.
- Author
- Goetz L, Trengove M, Trotsyuk A, and Federico CA
- Subjects
- Humans, Educational Status, Health Personnel, Delivery of Health Care, Bioethics
- Published
- 2023
- Full Text
- View/download PDF
26. A translational perspective towards clinical AI fairness.
- Author
- Liu M, Ning Y, Teixayavong S, Mertens M, Xu J, Ting DSW, Cheng LT, Ong JCL, Teo ZL, Tan TF, RaviChandran N, Wang F, Celi LA, Ong MEH, and Liu N
- Abstract
Artificial intelligence (AI) has demonstrated the ability to extract insights from data, but the fairness of such data-driven insights remains a concern in high-stakes fields. Despite extensive developments, issues of AI fairness in clinical contexts have not been adequately addressed. A fair model is normally expected to perform equally across subgroups defined by sensitive variables (e.g., age, gender/sex, race/ethnicity, socio-economic status, etc.). Various fairness measurements have been developed to detect differences between subgroups as evidence of bias, and bias mitigation methods are designed to reduce the differences detected. This perspective of fairness, however, is misaligned with some key considerations in clinical contexts. The set of sensitive variables used in healthcare applications must be carefully examined for relevance and justified by clear clinical motivations. In addition, clinical AI fairness should closely investigate the ethical implications of fairness measurements (e.g., potential conflicts between group- and individual-level fairness) to select suitable and objective metrics. Generally defining AI fairness as "equality" is not necessarily reasonable in clinical settings, as differences may have clinical justifications and do not indicate biases. Instead, "equity" would be an appropriate objective of clinical AI fairness. Moreover, clinical feedback is essential to developing fair and well-performing AI models, and efforts should be made to actively involve clinicians in the process. The adaptation of AI fairness towards healthcare is not self-evident due to misalignments between technical developments and clinical considerations. Multidisciplinary collaboration between AI researchers, clinicians, and ethicists is necessary to bridge the gap and translate AI fairness into real-life benefits.
- Published
- 2023
- Full Text
- View/download PDF
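As a concrete instance of the group-level fairness measurements discussed in entry 26, a sketch that computes per-subgroup true-positive rates and reports the largest gap; the data and the choice of metric are illustrative only:

    from collections import defaultdict

    def tpr_by_group(y_true, y_pred, groups):
        # True-positive rate within each subgroup of a sensitive variable.
        hits, positives = defaultdict(int), defaultdict(int)
        for t, p, g in zip(y_true, y_pred, groups):
            if t == 1:
                positives[g] += 1
                hits[g] += int(p == 1)
        return {g: hits[g] / positives[g] for g in positives}

    y_true = [1, 1, 0, 1, 1, 0, 1, 1]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    tprs = tpr_by_group(y_true, y_pred, groups)
    gap = max(tprs.values()) - min(tprs.values())
    print(tprs)                 # {'a': 0.667, 'b': 1.0}
    print("max TPR gap:", gap)  # 0.333: one candidate signal of group bias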
27. AI and machine learning in resuscitation: Ongoing research, new concepts, and key challenges.
- Author
- Okada Y, Mertens M, Liu N, Lam SSW, and Ong MEH
- Abstract
Aim: Artificial intelligence (AI) and machine learning (ML) are important areas of computer science that have recently attracted attention for their application to medicine. However, as techniques continue to advance and become more complex, it is increasingly challenging for clinicians to stay abreast of the latest research. This overview aims to translate research concepts and potential concerns to healthcare professionals interested in applying AI and ML to resuscitation research but who are not experts in the field., Main Text: We present various research including prediction models using structured and unstructured data, exploring treatment heterogeneity, reinforcement learning, language processing, and large-scale language models. These studies potentially offer valuable insights for optimizing treatment strategies and clinical workflows. However, implementing AI and ML in clinical settings presents its own set of challenges. The availability of high-quality and reliable data is crucial for developing accurate ML models. A rigorous validation process and the integration of ML into clinical practice is essential for practical implementation. We furthermore highlight the potential risks associated with self-fulfilling prophecies and feedback loops, emphasizing the importance of transparency, interpretability, and trustworthiness in AI and ML models. These issues need to be addressed in order to establish reliable and trustworthy AI and ML models., Conclusion: In this article, we overview concepts and examples of AI and ML research in the resuscitation field. Moving forward, appropriate understanding of ML and collaboration with relevant experts will be essential for researchers and clinicians to overcome the challenges and harness the full potential of AI and ML in resuscitation.
- Published
- 2023
- Full Text
- View/download PDF
28. An external stability audit framework to test the validity of personality prediction in AI hiring.
- Author
- Rhea AK, Markey K, D'Arinzo L, Schellmann H, Sloane M, Squires P, Arif Khan F, and Stoyanovich J
- Abstract
Automated hiring systems are among the fastest-developing of all high-stakes AI systems. Among these are algorithmic personality tests that use insights from psychometric testing, and promise to surface personality traits indicative of future success based on job seekers' resumes or social media profiles. We interrogate the validity of such systems using the stability of the outputs they produce, noting that reliability is a necessary, but not a sufficient, condition for validity. Crucially, rather than challenging or affirming the assumptions made in psychometric testing (that personality is a meaningful and measurable construct, and that personality traits are indicative of future success on the job), we frame our audit methodology around testing the underlying assumptions made by the vendors of the algorithmic personality tests themselves. Our main contribution is the development of a socio-technical framework for auditing the stability of algorithmic systems. This contribution is supplemented with an open-source software library that implements the technical components of the audit and can be used to conduct similar stability audits of algorithmic systems. We instantiate our framework with the audit of two real-world personality prediction systems, namely, Humantic AI and Crystal. The application of our audit framework demonstrates that both these systems show substantial instability with respect to key facets of measurement, and hence cannot be considered valid testing instruments.
- Published
- 2022
- Full Text
- View/download PDF
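In the spirit of entry 28's stability audit (reliability as a necessary condition for validity), a sketch that scores the same candidates twice, before and after a perturbation that should not matter, and measures rank-order stability. The 0.7 threshold and the data are illustrative; this is not the authors' open-source library:

    from scipy.stats import spearmanr

    # Predicted "openness" scores for the same six candidates, before and
    # after an output-irrelevant change (e.g., the resume re-saved in a
    # different file format).
    original  = [0.71, 0.42, 0.55, 0.90, 0.33, 0.60]
    perturbed = [0.35, 0.80, 0.50, 0.41, 0.77, 0.28]

    rho, _ = spearmanr(original, perturbed)
    print(f"rank-order stability: {rho:.2f}")
    if rho < 0.7:  # illustrative threshold
        print("substantial instability: reliability, and hence validity, is suspect")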
29. A socio-technical framework for digital contact tracing.
- Author
- Vinuesa R, Theodorou A, Battaglini M, and Dignum V
- Abstract
In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a framework to evaluate their suitability in terms of impact on the users, employed technology and governance methods. We illustrate its usage with three applications, and with the European Data Protection Board (EDPB) guidelines, highlighting their limitations.
- Published
- 2020
- Full Text
- View/download PDF
30. The role of artificial intelligence in achieving the Sustainable Development Goals.
- Author
- Vinuesa R, Azizpour H, Leite I, Balaam M, Dignum V, Domisch S, Felländer A, Langhans SD, Tegmark M, and Fuso Nerini F
- Abstract
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.
- Published
- 2020
- Full Text
- View/download PDF