13 results
Search Results
2. Malnutrition risk assessment using a machine learning‐based screening tool: A multicentre retrospective cohort.
- Author
- Parchure, Prathamesh, Besculides, Melanie, Zhan, Serena, Cheng, Fu‐yuan, Timsina, Prem, Cheertirala, Satya Narayana, Kersch, Ilana, Wilson, Sara, Freeman, Robert, Reich, David, Mazumdar, Madhu, and Kia, Arash
- Subjects
- *MALNUTRITION diagnosis, *RISK assessment, *DIETETICS, *MALNUTRITION, *MEDICAL quality control, *HUMAN services programs, *HOSPITAL care, *NUTRITIONAL assessment, *ARTIFICIAL intelligence, *RETROSPECTIVE studies, *DESCRIPTIVE statistics, *LONGITUDINAL method, *PRE-tests & post-tests, *RESEARCH, *METROPOLITAN areas, *MACHINE learning, *QUALITY assurance, *LENGTH of stay in hospitals, *ALGORITHMS, *DISEASE risk factors, ELECTRONIC health record standards
- Abstract
Background: Malnutrition is associated with increased morbidity, mortality, and healthcare costs. Early detection is important for timely intervention. This paper assesses the ability of a machine learning screening tool (MUST‐Plus) implemented in registered dietitian (RD) workflow to identify malnourished patients early in the hospital stay and to improve the diagnosis and documentation rate of malnutrition. Methods: This retrospective cohort study was conducted in a large, urban health system in New York City comprising six hospitals serving a diverse patient population. The study included all patients aged ≥ 18 years, who were not admitted for COVID‐19 and had a length of stay of ≤ 30 days. Results: Of the 7736 hospitalisations that met the inclusion criteria, 1947 (25.2%) were identified as being malnourished by MUST‐Plus‐assisted RD evaluations. The lag between admission and diagnosis improved with MUST‐Plus implementation. The usability of the tool output by RDs exceeded 90%, showing good acceptance by users. When compared pre‐/post‐implementation, the rate of both diagnoses and documentation of malnutrition showed improvement. Conclusion: MUST‐Plus, a machine learning‐based screening tool, shows great promise as a malnutrition screening tool for hospitalised patients when used in conjunction with adequate RD staffing and training about the tool. It performed well across multiple measures and settings. Other health systems can use their electronic health record data to develop, test and implement similar machine learning‐based processes to improve malnutrition screening and facilitate timely intervention. Key points/Highlights: Malnutrition is prevalent among hospitalised patients and frequently goes unrecognised, with the potential for severe sequelae. 
Accurate diagnosis, documentation and treatment of malnutrition have the potential of having a positive impact on morbidity rate, mortality rate, length of inpatient stay, readmission rate and hospital revenue. The tool's successful application highlights its potential to optimise malnutrition screening in healthcare systems, offering potential benefits for patient outcomes and hospital finances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Design and implementation of an AI‐enabled visual report tool as formative assessment to promote learning achievement and self‐regulated learning: An experimental study.
- Author
- Liao, Xiaofang, Zhang, Xuedi, Wang, Zhifeng, and Luo, Heng
- Subjects
- *SELF-regulated learning, *FORMATIVE evaluation, *PSYCHOLOGICAL feedback, *ACADEMIC achievement, *DATA visualization, *DESIGN techniques, *SYNTHETIC biology
- Abstract
Formative assessment is essential for improving teaching and learning, and AI and visualization techniques provide great potential for its design and delivery. Using NLP, cognitive diagnostic and visualization techniques designed to analyse and present students' monthly exam data, we developed an AI‐enabled visual report tool comprising six modules and conducted an empirical study of its effectiveness in a high school biology classroom. A total of 125 students in a ninth‐grade biology course were assigned to a treatment group (n = 63) receiving AI‐enabled visual reports as the intervention and a control group (n = 62) receiving overall oral feedback from the teacher. We present the main statistical results of the within‐subjects design and the between‐subjects design respectively, to better capture the main findings. Repeated measures ANOVA revealed a significant interaction effect of intervention and time on learning achievement, and the paired‐sample Wilcoxon test indicated that the treatment group had experienced increasing learning anxiety (Cohen's d = 0.203, p = 0.046) and self‐efficacy (Cohen's d = 1.793, p = 0.000) over time. Moreover, we conducted a series of non‐parametric tests to compare the effects of AI‐enabled visual reports and teacher feedback, but found no significant differences except for an increased self‐efficacy (Cohen's d = 0.312, p = 0.046). Additionally, we had the students in the treatment group rate their favourite modules in the AI‐enabled visual report and provide evaluative feedback. The study results provide important insights into the design and implementation of effective formative assessment supported by AI and visualization techniques.
Practitioner notes
What is already known about this topic:
- Formative assessment is essential for improving teaching and learning.
- Traditional formative assessment tools lack accurate data‐oriented assessment and usability.
- AI and visualization techniques have great potential for formative assessment.
What this paper adds:
- This study designs and implements an AI‐enabled visual report tool that generates data‐driven, user‐friendly reports.
- The AI‐enabled visual report can not only enhance students' learning achievement and self‐regulated learning over time but also increase their test anxiety.
- The AI‐enabled visual report has a comparable effect with teacher feedback but leads to increased self‐efficacy.
Implications for practice and/or policy:
- We recommend using the AI‐enabled visual report in large‐size classes for its overall positive effects on both learning achievement and self‐regulated learning.
- We recommend using the AI‐enabled visual report over teacher feedback for its capacity to enhance students' self‐efficacy.
- We recommend prioritizing the modules of Performance Ranking, Personal Mastery and Knowledge Alert when designing the AI‐enabled visual report.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
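The study above reports paired-sample effect sizes (Cohen's d) for pre/post changes in anxiety and self‐efficacy. As a minimal sketch of how such a paired effect size is typically computed (mean of the pre/post differences divided by their standard deviation), the scores below are invented for illustration and are not data from the study:

```python
import statistics

def paired_cohens_d(pre, post):
    """Cohen's d for paired samples: mean difference / SD of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

# Illustrative pre/post self-efficacy ratings (hypothetical, not the study's data).
pre = [3.1, 2.8, 3.4, 2.9, 3.0, 3.2, 2.7, 3.3]
post = [3.9, 3.6, 4.1, 3.8, 3.7, 4.0, 3.5, 4.2]
d = paired_cohens_d(pre, post)  # positive d indicates an increase over time
```

A positive d with this convention corresponds to scores rising from pre to post, matching the direction of the gains reported above.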
4. Is Chatgpt a menace for creative writing ability? An experiment.
- Author
- Niloy, Ahnaf Chowdhury, Akter, Salma, Sultana, Nayeema, Sultana, Jakia, and Rahman, Sayed Imran Ur
- Subjects
- *DATA analysis, *ARTIFICIAL intelligence, *CONTENT analysis, *UNIVERSITIES & colleges, *DESCRIPTIVE statistics, *QUANTITATIVE research, *CREATIVE ability, *EXPERIMENTAL design, *PRE-tests & post-tests, *STATISTICS, *CONTENT mining, *COLLEGE students, *STUDENT attitudes, *WRITTEN communication, RESEARCH evaluation
- Abstract
Background: The increasing prevalence of Artificial Intelligence (AI) language models, exemplified by ChatGPT, has sparked inquiries into their influence on creative writing skills in educational contexts. This study aims to quantitatively investigate whether ChatGPT's use negatively affects university students' creative writing abilities, focusing on originality, content presentation, accuracy, and elaboration in essays. The research adopts an experimental approach to shed light on this concern. Objective: This study aims to quantitatively investigate whether the utilization of ChatGPT, an AI chatbot, adversely affects specific dimensions of creative writing skills among university students, with an emphasis on originality, content presentation, accuracy, and elaboration. Method: The experimental study involves 600 students from 10 universities, divided into a control and an experimental group (EGp). The EGp incorporates ChatGPT in their creative writing process as an intervention. The study evaluates originality, content presentation, accuracy, and elaboration, utilizing the Wilcoxon Signed‐Rank Test for analysis. Results and Conclusion: The findings reveal a detrimental association between ChatGPT use and university students' creative writing abilities. Analysing both machine‐based and human‐based assessments substantiates earlier qualitative observations regarding ChatGPT's adverse impact on creative writing. This study highlights the necessity of approaching AI integration, particularly in creative writing disciplines, with caution. While AI tools have merits, their integration should be thoughtful, considering the potential drawbacks. These insights inform future research and educational practices, guiding the effective incorporation of AI while nurturing students' writing skills. 
Lay Description:
What is already known about this topic:
- ChatGPT poses an ethical dilemma regarding its use in the field of academia.
- Qualitative claims and opinions have been raised in prior studies regarding its use in the creative writing process.
- Prior studies have both supported and opposed its use, but with very limited quantitative approaches, while most of the opinions remain qualitative.
- Some prior studies opine in support of ChatGPT's ability as an author.
- Several factors measuring creativity have been identified by previous studies, but a constructive approach in the light of advanced Artificial Intelligence (AI)-based chatbots like ChatGPT is missing in such literature.
What this paper adds:
- An experimental approach providing valid quantitative proof of the qualitative claims about ChatGPT's detrimental effect on creativity in writing, which was absent in prior studies.
- A multifactor‐based formula to measure creativity in a quantitative form.
- A quantitative view of the factors that are affected in either a positive or a negative way in a user by ChatGPT, providing a holistic picture to determine its extent of use.
- A statistical and theoretical understanding of an unexplored topic, creative writing in the light of ChatGPT.
- Quantitative proof of why ChatGPT should not be considered as an author.
Implications for practice and/or policy:
- Educators may change how they assign tasks to students, based on the identified factors that are affected negatively, to ensure ChatGPT does not hinder a student's creativity to a greater extent.
- The use of ChatGPT should be limited to self‐learning, as a positive effect was experienced there through the experiment.
- Policymakers may use the findings of the study to impose strict policies in academia to ensure academic integrity (for example: mandatory use of plagiarism-detecting software for checking scripts, assigning tasks that require more analytical abilities, and providing tasks that are not easily readable by LLMs like ChatGPT, such as image‐based questions and case studies). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
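The study above evaluates pre/post creative-writing scores with the Wilcoxon Signed‐Rank Test. As a minimal sketch of that analysis step using SciPy, the essay scores below are hypothetical and not the study's data:

```python
from scipy.stats import wilcoxon

# Hypothetical pre- and post-intervention essay scores for one group of
# students (illustrative values only, not data from the experiment).
pre_scores = [62, 70, 55, 68, 74, 60, 66, 71, 58, 65]
post_scores = [58, 66, 53, 61, 70, 57, 64, 65, 55, 60]

# Paired, non-parametric test of whether the median pre/post difference is zero.
stat, p_value = wilcoxon(pre_scores, post_scores)
```

Because the test is non-parametric, it only assumes the paired differences are symmetric around their median, which suits rubric-style writing scores that are unlikely to be normally distributed.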
5. Methods for using Bing's AI‐powered search engine for data extraction for a systematic review.
- Author
- Hill, James Edward, Harris, Catherine, and Clegg, Andrew
- Subjects
- *ARTIFICIAL intelligence, *SEARCH engines, *DATA extraction, *NATURAL language processing, *ELECTRONIC data processing
- Abstract
Data extraction is a time‐consuming and resource‐intensive task in the systematic review process. Natural language processing (NLP) artificial intelligence (AI) techniques have the potential to automate data extraction, saving time and resources, accelerating the review process, and enhancing the quality and reliability of extracted data. In this paper, we propose a method for using Bing AI and Microsoft Edge as a second reviewer to verify and enhance data items first extracted by a single human reviewer. We describe a worked example of the steps involved in instructing the Bing AI Chat tool to extract study characteristics as data items from a PDF document into a table so that they can be compared with data extracted manually. We show that this technique may provide an additional verification process for data extraction where there are limited resources available or for novice reviewers. However, it should not be seen as a replacement for already established and validated double independent data extraction methods without further evaluation and verification. Use of AI techniques for data extraction in systematic reviews should be transparently and accurately described in reports. Future research should focus on the accuracy, efficiency, completeness, and user experience of using Bing AI for data extraction compared with traditional methods using two or more reviewers independently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
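The workflow described above ends by comparing the AI‐extracted data items against those extracted manually so a human can adjudicate disagreements. A minimal sketch of that comparison step, assuming both extractions have been typed up as field-to-value dictionaries; the field names and values are hypothetical, not taken from the paper's worked example:

```python
def compare_extractions(manual, ai):
    """Return the fields where the two extractions disagree, for adjudication."""
    disagreements = {}
    for field in manual:
        if ai.get(field) != manual[field]:
            disagreements[field] = (manual[field], ai.get(field))
    return disagreements

# Hypothetical study-characteristics records from the two "reviewers".
manual = {"design": "RCT", "n": "120", "country": "UK", "outcome": "pain score"}
ai = {"design": "RCT", "n": "112", "country": "UK", "outcome": "pain score"}
flagged = compare_extractions(manual, ai)  # only the disputed fields remain
```

Only the flagged fields then need to be checked against the source PDF, which is where the claimed time saving over full double extraction would come from.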
6. Levi's and Lalaland.ai collaboration crisis.
- Author
- Maiolo, Lila
- Subjects
- *ARTIFICIAL intelligence, *CRISES, *CRISIS management
- Abstract
Levi Strauss & Co., a popular fashion label commonly known as Levi's, was involved in a crisis situation in March 2023 as a result of their partnership with Lalaland.ai, an artificial intelligence (AI) company. The partnership was created with the intention of using AI-generated models to show more diversity in Levi's modelling. However, the brand received intense backlash and criticism following the partnership's announcement for cheapening diversity by failing to use real models. In the format of a case study, this paper describes the situation and evaluates Levi's crisis response in this relevant and dynamic dilemma. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. When artificial intelligence comes to the Chinese calligraphic landscape: The coming transformation.
- Author
- Jiang, Zhijie, Yu, Jiayuan, Jin, Yingying, Ginn, Anthony, Chen, Jianjun, and Sun, Guodong
- Subjects
- *ARTIFICIAL intelligence, *DEEP learning, *GEOGRAPHERS, *CULTURAL geography
- Abstract
The rise of Artificial Intelligence (AI) heralds a potentially profound impact on the Chinese calligraphic landscape (CCL). Given AI's increasing agential capacities, the anthropocentric conception of the CCL, which presupposes the priority of human identities, emotion, and creative work, has been challenged. To date, however, geographers have remained quiet about AI‐induced transformations. To fill this research gap, this paper seeks to infuse more‐than‐human geographies into the CCL. By taking a post‐human approach, cultural geographers gain a novel understanding of the human being in the creation of the CCL. This paper first discusses three prominent changes brought by deep learning (DL) to such landscapes: a new ontological actor, transitory, and represented space. Responding to these transformations, the paper reconceptualizes the CCL as a post‐human term and unravels the socio‐spatial practices and diverse more‐than‐human geographies beneath such landscapes through three recent foci, namely robotic approaches to the CCL via DL, modelling experience brought about by AI, and human‐AI collaboration in the creation of the CCL. Ultimately, this paper encourages geographers to comprehend CCLs more profoundly in an era of AI. Through all these attempts, this paper advances insights into CCLs as more‐than‐human‐made. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging.
- Author
- Hadjiiski, Lubomir, Cha, Kenny, Chan, Heang-Ping, Drukker, Karen, Morra, Lia, Näppi, Janne J., Sahiner, Berkman, Yoshida, Hiroyuki, Chen, Quan, Deserno, Thomas M., Greenspan, Hayit, Huisman, Henkjan, Huo, Zhimin, Mazurchuk, Richard, Petrick, Nicholas, Regge, Daniele, Samala, Ravi, Summers, Ronald M., Suzuki, Kenji, and Tourassi, Georgia
- Subjects
- *COMPUTER-aided diagnosis, *CLINICAL decision support systems, *MACHINE learning, *COMPUTER-assisted image analysis (Medicine), *DECISION support systems, *DEEP learning
- Abstract
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support. 
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Bias, Fairness and Accountability with Artificial Intelligence and Machine Learning Algorithms.
- Author
- Zhou, Nengfeng, Zhang, Zach, Nair, Vijayan N., Singhal, Harsh, and Chen, Jie
- Subjects
- *ARTIFICIAL intelligence, *SUPERVISED learning, *MACHINE learning, *FAIRNESS
- Abstract
Summary: The advent of artificial intelligence (AI) and machine learning algorithms has led to opportunities as well as challenges in their use. In this overview paper, we begin with a discussion of bias and fairness issues that arise with the use of AI techniques, with a focus on supervised machine learning algorithms. We then describe the types and sources of data bias and discuss the nature of algorithmic unfairness. In addition, we provide a review of fairness metrics in the literature, discuss their limitations, and describe de‐biasing (or mitigation) techniques in the model life cycle. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
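The review above surveys fairness metrics for supervised learning and their limitations. As a minimal sketch of one of the most common such metrics, the demographic parity difference (the gap in positive-prediction rates between two groups), the predictions and group labels below are illustrative toy data:

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between the two groups present."""
    rates = {}
    for g in sorted(set(groups)):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)  # fraction predicted positive
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Illustrative binary model predictions and protected-group membership.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, groups)  # 0 would mean parity
```

A gap of zero indicates the model issues positive predictions at equal rates across groups; as the review notes, a single metric like this captures only one notion of fairness and can conflict with others.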
10. The potential use of artificial intelligence in the therapy of borderline personality disorder.
- Author
- Szalai, Judit
- Subjects
- *TREATMENT of borderline personality disorder, *SELF-evaluation, *ARTIFICIAL intelligence, *EMOTIONS, *REFLECTION (Philosophy)
- Abstract
This paper explores the possibility of AI‐based addendum therapy for borderline personality disorder, its potential advantages and limitations. Identity disturbance in this condition is strongly connected to self‐narratives, which manifest excessive incoherence, causal gaps, dysfunctional beliefs, and diminished self‐attributions of agency. Different types of therapy aim at boosting self‐knowledge through self‐narratives in BPD. The suggestion of this paper is that human‐to‐human therapy could be complemented by AI assistance holding out the promise of making patients' self‐narratives more coherent through improving the accuracy of their self‐assessments, reflection on their emotions, and understanding their relationships with others. Theoretical and pragmatic arguments are presented in favour of this idea, and certain technical solutions are suggested to implement it. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Exploring the phenomenon and ethical issues of AI paternalism in health apps.
- Subjects
- *PATERNALISM, *MOBILE apps, *PERSUASION (Rhetoric), *ARTIFICIAL intelligence, *PHYSICAL fitness, *MEDICAL care, *AUTONOMY (Psychology), *TELEMEDICINE
- Abstract
Health apps, including consumer‐oriented fitness apps, have two functions. They are supposed to monitor and promote users' health, the latter by way of being an instance of persuasive technology. The use of artificial intelligence (AI) allows for AI health apps, i.e., health apps that act more and more autonomously when it comes to analyzing users' health data and arriving at tailor‐made results on how to improve their health. Consequently, AI health apps seem to gain a paternalistic potential. This is a game‐changer, for corresponding issues of paternalism can then no longer be traced back to human engineers. Instead, the paternalizing party just is the AI system. Hence, AI health apps lead to the novel issue of AI paternalism in health care. In this paper, I explore this novel phenomenon and its ethical implications. Firstly, I discuss from a critical perspective whether the notion of AI paternalism makes (conceptual) sense to begin with. Unsurprisingly, I argue that it does and how so. Secondly, I briefly indicate important ethical issues that AI paternalism in health apps raises and which need to be discussed in more detail in order to judge under which conditions (certain forms of) AI paternalism might be considered acceptable, if at all. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Efficient anomaly detection from medical signals and images with convolutional neural networks for Internet of medical things (IoMT) systems.
- Author
- Khalil, Ali A., Ibrahim, Fatma E., Abbass, Mohamed Y., Haggag, Nehad, Mahrous, Yasser, Sedik, Ahmed, Elsherbeeny, Zeinab, Khalaf, Ashraf A. M., Rihan, Mohamad, El‐Shafai, Walid, El‐Banby, Ghada M., Soltan, Eman, Soliman, Naglaa F., Algarni, Abeer D., Al‐Hanafy, Waleed, El‐Fishawy, Adel S., El‐Rabaie, El‐Sayed M., Al‐Nuaimy, Waleed, Dessouky, Moawad I., and Saleeb, Adel A.
- Subjects
- *DEEP learning, *CONVOLUTIONAL neural networks, *DIAGNOSTIC imaging, *INTERNET of things, *ARTIFICIAL intelligence
- Abstract
Deep learning is one of the most promising machine learning techniques that revolutionized the artificial intelligence field. Traditional neural networks and convolutional neural networks (CNNs) have been utilized in medical pattern recognition applications that depend on deep learning concepts. This is attributed to the importance of anomaly detection (AD) in automatic diagnosis systems. In this paper, AD is performed on medical electroencephalography (EEG) signal spectrograms and medical corneal images for Internet of medical things (IoMT) systems. Deep learning based on CNN models is employed for this task, with training and testing phases. Each input image passes through a series of convolution layers with different kernel filters. For the classification task, pooling and fully‐connected layers are utilized. Computer simulation experiments reveal the success and superiority of the proposed models for automated medical diagnosis in IoMT systems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
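The abstract above describes CNN classifiers in which each input image passes through convolution layers with different kernel filters, followed by pooling and fully‐connected layers. A minimal PyTorch sketch of that general architecture; the channel counts, 64x64 grayscale input, and two-class (normal/anomaly) output are assumptions for illustration, not the paper's exact models:

```python
import torch
import torch.nn as nn

class AnomalyCNN(nn.Module):
    """Convolution -> pooling -> fully-connected, as outlined in the abstract."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)                 # (B, 32, 16, 16)
        return self.classifier(x.flatten(1)) # (B, n_classes) logits

model = AnomalyCNN()
# A batch of 4 single-channel 64x64 images (e.g., EEG spectrogram crops).
logits = model(torch.randn(4, 1, 64, 64))
```

In practice such a model would be trained with a cross-entropy loss on labelled normal/anomalous images before being deployed in an IoMT diagnosis pipeline.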
13. Understanding how African‐American and Latinx youth evaluate their experiences with digital assistants.
- Author
- Yi, Siqi, Slota, Stephen C., Bailey, Jakki O., Watkins, S. Craig, and Fleischmann, Kenneth R.
- Subjects
- *ARTIFICIAL intelligence, *SOCIOECONOMIC factors, *SOCIOCULTURAL factors, *POCKET computers, *AFRICAN American parents, *HISPANIC American parents, *AFRICAN American children, *HISPANIC American children
- Abstract
As artificial intelligence (AI)‐driven devices play an increasingly important role in children's lives, there is a need for research considering how socioeconomic and cultural differences shape children's engagement with digital assistants. This paper reports results from 10 interviews, including five African‐American or Latinx parent/child dyads about how they use and evaluate digital assistants. We identified three key themes resulting from these interviews: usability, privacy, and digital literacy. We conclude that further study is needed to ensure that digital assistants are aligned with the values of children from underrepresented populations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library