358,002 results
Search Results
52. Management of Incidental Adrenal Masses: A White Paper of the ACR Incidental Findings Committee.
- Author
-
Mayo-Smith WW, Song JH, Boland GW, Francis IR, Israel GM, Mazzaglia PJ, Berland LL, and Pandharipande PV
- Subjects
- Abdomen, Adrenal Gland Neoplasms therapy, Humans, Magnetic Resonance Imaging, Radiology, Societies, Medical, Tomography, X-Ray Computed, Adrenal Gland Neoplasms diagnostic imaging, Advisory Committees, Algorithms, Incidental Findings
- Abstract
The ACR Incidental Findings Committee presents recommendations for managing adrenal masses that are incidentally detected on CT or MRI. These recommendations represent an update to the adrenal component of the JACR 2010 white paper on managing incidental findings in the adrenal glands, kidneys, liver, and pancreas. The Adrenal Subcommittee, constituted by abdominal radiologists and an endocrine surgeon, developed this algorithm. The algorithm draws from published evidence coupled with expert subspecialist opinion and was finalized by a process of iterative consensus. Algorithm branches categorize incidental adrenal masses on the basis of patient characteristics and imaging features. For each specified combination, the algorithm concludes with characterization of benignity or indolence (sufficient to discontinue follow-up) and/or a subsequent management recommendation. The algorithm addresses many, but not all, possible pathologies and clinical scenarios. Our goal is to improve the quality of patient care by providing guidance on how to manage incidentally detected adrenal masses., (Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
53. Management of Incidental Pancreatic Cysts: A White Paper of the ACR Incidental Findings Committee.
- Author
-
Megibow AJ, Baker ME, Morgan DE, Kamel IR, Sahani DV, Newman E, Brugge WR, Berland LL, and Pandharipande PV
- Subjects
- Advisory Committees, Humans, Magnetic Resonance Imaging, Pancreatic Cyst therapy, Radiology, Societies, Medical, Tomography, X-Ray Computed, Algorithms, Incidental Findings, Pancreatic Cyst diagnostic imaging
- Abstract
The ACR Incidental Findings Committee (IFC) presents recommendations for managing pancreatic cysts that are incidentally detected on CT or MRI. These recommendations represent an update from the pancreatic component of the JACR 2010 white paper on managing incidental findings in the adrenal glands, kidneys, liver, and pancreas. The Pancreas Subcommittee-which included abdominal radiologists, a gastroenterologist, and a pancreatic surgeon-developed this algorithm. The recommendations draw from published evidence and expert opinion, and were finalized by informal iterative consensus. Algorithm branches successively categorize pancreatic cysts based on patient characteristics and imaging features. They terminate with an ascertainment of benignity and/or indolence (sufficient to discontinue follow-up), or a management recommendation. The algorithm addresses most, but not all, pathologies and clinical scenarios. Our goal is to improve quality of care by providing guidance on how to manage incidentally detected pancreatic cysts., (Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
54. Evaluation of an Algorithmic-Level Left-Corner Parsing Account of Surprisal Effects.
- Author
-
Schuler W and Yue S
- Subjects
- Humans, Reading, Models, Psychological, Algorithms, Memory, Short-Term
- Abstract
This article evaluates the predictions of an algorithmic-level distributed associative memory model as it introduces, propagates, and resolves ambiguity, and compares it to the predictions of computational-level parallel parsing models in which ambiguous analyses are accounted separately in discrete distributions. By superposing activation patterns that serve as cues to other activation patterns, the model is able to maintain multiple syntactically complex analyses superposed in a finite working memory, propagate this ambiguity through multiple intervening words, then resolve this ambiguity in a way that produces a measurable predictor that is proportional to the log conditional probability of the disambiguating word given its context, marginalizing over all remaining analyses. The results are indeed consistent in cases of complex structural ambiguity with computational-level parallel parsing models producing this same probability as a predictor, which have been shown reliably to predict human reading times., (© 2024 The Author(s). Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS).)
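The predictor described in this abstract reduces, at the computational level, to the surprisal of the disambiguating word: the negative log of its conditional probability given the context, marginalized over the analyses held in memory. A minimal sketch with invented probabilities (the two analyses and their weights are hypothetical, not values from the paper):

```python
import math

def surprisal_bits(p):
    """Surprisal in bits: -log2 of a conditional probability."""
    return -math.log2(p)

# Marginalizing over the analyses maintained in memory:
# P(word | context) = sum over analyses of P(analysis) * P(word | analysis).
analyses = [(0.7, 0.10), (0.3, 0.60)]  # (P(analysis), P(word | analysis))
p_word = sum(p_a * p_w for p_a, p_w in analyses)  # 0.25
s = surprisal_bits(p_word)  # 2.0 bits
```

A higher-surprisal word (lower conditional probability) predicts longer reading times in the models the paper compares against.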
- Published
- 2024
- Full Text
- View/download PDF
55. Comparing the Clique Percolation algorithm to other overlapping community detection algorithms in psychological networks: A Monte Carlo simulation study.
- Author
-
Ribeiro Santiago PH, Soares GH, Quintero A, and Jamieson L
- Subjects
- Humans, Computer Simulation, Fuzzy Logic, Algorithms, Monte Carlo Method
- Abstract
In psychological networks, one limitation of the most used community detection algorithms is that they can only assign each node (symptom) to a unique community, without being able to identify overlapping symptoms. Clique percolation (CP) is an algorithm that identifies overlapping symptoms, but its performance has not been evaluated in psychological networks. In this study, we compare the CP with model parameters chosen based on fuzzy modularity (CPMod) with two other alternatives, the ratio of the two largest communities (CPRat), and entropy (CPEnt). We evaluate their performance to: (1) identify the correct number of latent factors (i.e., communities); and (2) identify the observed variables with substantive (and equally sized) cross-loadings (i.e., overlapping symptoms). We carried out simulations under 972 conditions (3x2x2x3x3x3x3): (1) data categories (continuous, polytomous and dichotomous); (2) number of factors (two and four); (3) number of observed variables per factor (four and eight); (4) factor correlations (0.0, 0.5, and 0.7); (5) size of primary factor loadings (0.40, 0.55, and 0.70); (6) proportion of observed variables with substantive cross-loadings (0.0%, 12.5%, and 25.0%); and (7) sample size (300, 500, and 1000). Performance was evaluated through the Omega index, Mean Bias Error (MBE), Mean Absolute Error (MAE), sensitivity, specificity, and mean number of isolated nodes. We also evaluated two other methods, Exploratory Factor Analysis and the Walktrap algorithm modified to consider overlap (EFA-Ov and Walk-Ov, respectively). The Walk-Ov displayed the best performance across most conditions and is the recommended option to identify communities with overlapping symptoms in psychological networks., (© 2024. The Author(s).)
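The core clique-percolation idea, k-cliques merged when they share k-1 nodes, with nodes allowed to sit in several communities, can be illustrated on a toy graph. The edge list below is invented for illustration, and the merge step is simplified (it does not union transitive chains, which a full implementation must):

```python
from itertools import combinations

# Toy "symptom network": two triangles sharing node 2.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)}
nodes = {v for e in edges for v in e}

def connected(a, b):
    return (min(a, b), max(a, b)) in edges

# Step 1: enumerate all k-cliques (k = 3: triangles).
cliques = [set(c) for c in combinations(sorted(nodes), 3)
           if all(connected(a, b) for a, b in combinations(c, 2))]

# Step 2: group cliques that share k-1 = 2 nodes (simplified percolation).
communities = []
for cl in cliques:
    for com in communities:
        if any(len(cl & other) >= 2 for other in com):
            com.append(cl)
            break
    else:
        communities.append([cl])
communities = [set().union(*group) for group in communities]

# Node 2 belongs to both communities: an "overlapping symptom".
overlap = [v for v in sorted(nodes)
           if sum(v in com for com in communities) > 1]
```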
- Published
- 2024
- Full Text
- View/download PDF
56. Bias-reduced neural networks for parameter estimation in quantitative MRI.
- Author
-
Mao A, Flassbeck S, and Assländer J
- Subjects
- Humans, Bias, Neuroimaging methods, Reproducibility of Results, Least-Squares Analysis, Magnetic Resonance Imaging methods, Neural Networks, Computer, Algorithms, Brain diagnostic imaging, Computer Simulation, Image Processing, Computer-Assisted methods
- Abstract
Purpose: To develop neural network (NN)-based quantitative MRI parameter estimators with minimal bias and a variance close to the Cramér-Rao bound., Theory and Methods: We generalize the mean squared error loss to control the bias and variance of the NN's estimates, which involves averaging over multiple noise realizations of the same measurements during training. Bias and variance properties of the resulting NNs are studied for two neuroimaging applications., Results: In simulations, the proposed strategy reduces the estimates' bias throughout parameter space and achieves a variance close to the Cramér-Rao bound. In vivo, we observe good concordance between parameter maps estimated with the proposed NNs and traditional estimators, such as nonlinear least-squares fitting, while state-of-the-art NNs show larger deviations., Conclusion: The proposed NNs have greatly reduced bias compared to those trained using the mean squared error and offer significantly improved computational efficiency over traditional estimators with comparable or better accuracy., (© 2024 International Society for Magnetic Resonance in Medicine.)
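The bias this loss targets is E[estimate] - truth, with the expectation taken over noise realizations of the same measurement, which is exactly what the training strategy averages over. A toy numpy illustration, with a hypothetical shrinking estimator standing in for an MSE-trained network (all numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 2.0

# Many noise realizations of the same underlying measurement.
measurements = truth + rng.normal(0.0, 0.5, size=100_000)

# A hypothetical shrinking estimator (plain-MSE training tends to shrink
# estimates toward the mean) versus an unbiased one.
biased_est = 0.9 * measurements
unbiased_est = measurements

bias_biased = biased_est.mean() - truth      # close to -0.2
bias_unbiased = unbiased_est.mean() - truth  # close to 0
var_biased = biased_est.var()                # lower variance: the usual trade-off
```

The bias/variance trade-off visible here is why the paper adds an explicit bias term to the loss instead of relying on MSE alone.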
- Published
- 2024
- Full Text
- View/download PDF
57. Adaptive complementary neighboring sub-aperture beamforming for thermoacoustic imaging.
- Author
-
Yang Z, Wang F, Peng W, Song L, Luo Y, Zhao Z, and Huang L
- Subjects
- Animals, Swine, Humans, Acoustics, Signal-To-Noise Ratio, Liver diagnostic imaging, Arm diagnostic imaging, Temperature, Phantoms, Imaging, Image Processing, Computer-Assisted methods, Algorithms
- Abstract
Background: When applied to thermoacoustic imaging (TAI), the delay-and-sum (DAS) algorithm produces strong sidelobes due to its disadvantages of uniform aperture weighting. As a result, the quality of TAI images recovered by DAS is often severely degraded by strong non-coherent clutter, which restricts the development and application of TAI., Purpose: To address this issue, we propose an adaptive complementary neighboring sub-aperture (NSA) beamforming algorithm for TAI., Methods: In NSA, we introduce a coordinate system transformation when calculating the normalized cross-correlation (NCC) matrix. This approach enables the computation of the NCC coefficient within the specified kernel without complex coordinate calculations. We first conducted the numerical simulation experiment to validate NSA using a tree branch phantom. In addition, we also conducted phantom (five sauce tubes), ex vivo (ablation needle in ex vivo porcine liver), and in vivo (human arm) TAI experiments using our TAI system with a center frequency of 3 GHz., Results: In the numerical simulation experiment, the structural similarity index (SSIM) value for NSA is increased from 0.37828 for DAS to 0.75492. In the point target phantom TAI experiment, the generalized contrast-to-noise ratio (gCNR) value for NSA is increased from 0.936 for DAS to 0.962. The experimental results show that NSA can recover clearer thermoacoustic images compared to DAS. In the ex vivo TAI experiment, the full width at half maxima (FWHM) of an ablation needle (diameter = 1.5 mm) for coherence factor (CF) weighted DAS and NSA are 0.9 and 1.3 mm, respectively. Furthermore, in the in vivo TAI experiment, CF reduces the signals within the arm compared to NSA. Therefore, compared with CF, NSA can maintain the integrity of target information in TAI while effectively suppressing non-coherent background clutter., Conclusions: NSA can effectively reduce non-coherent background noise while ensuring the completeness of the target information. So, NSA offers the potential to provide high-quality thermoacoustic images and further advance their clinical application., (© 2024 American Association of Physicists in Medicine.)
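The DAS baseline the paper improves on is simple: for each candidate image point, delay each channel by its time of flight and sum, giving every channel the same (uniform) weight, which is what produces the sidelobes. A minimal numpy sketch with an idealized point source; geometry, sampling rate, and sound speed are arbitrary choices for this sketch, not the 3 GHz system described:

```python
import numpy as np

c = 1500.0   # assumed speed of sound, m/s
fs = 20e6    # sampling rate, Hz

sensors = np.stack([np.linspace(-0.02, 0.02, 16), np.zeros(16)], axis=1)
source = np.array([0.004, 0.015])  # true point-source position, m

# Synthesize one ideal impulse per channel at its time of flight.
n_samples = 1024
data = np.zeros((len(sensors), n_samples))
for i, s in enumerate(sensors):
    idx = int(np.linalg.norm(source - s) / c * fs + 0.5)
    data[i, idx] = 1.0

def das(point):
    """Uniformly weighted delay-and-sum focused at a candidate point."""
    total = 0.0
    for i, s in enumerate(sensors):
        idx = int(np.linalg.norm(point - s) / c * fs + 0.5)
        total += data[i, idx]
    return total

on_target = das(source)                      # all 16 channels align
off_target = das(np.array([-0.006, 0.010]))  # misaligned, little energy
```

Adaptive methods such as NSA or CF weighting replace the uniform sum with data-dependent weights to suppress the off-target (non-coherent) contributions.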
- Published
- 2024
- Full Text
- View/download PDF
58. Enhancing quality and speed in database-free neural network reconstructions of undersampled MRI with SCAMPI.
- Author
-
Siedler TM, Jakob PM, and Herold V
- Subjects
- Humans, Artifacts, Brain diagnostic imaging, Data Compression methods, Magnetic Resonance Imaging methods, Neural Networks, Computer, Image Processing, Computer-Assisted methods, Algorithms
- Abstract
Purpose: We present SCAMPI (Sparsity Constrained Application of deep Magnetic resonance Priors for Image reconstruction), an untrained deep Neural Network for MRI reconstruction without previous training on datasets. It expands the Deep Image Prior approach with a multidomain, sparsity-enforcing loss function to achieve higher image quality at a faster convergence speed than previously reported methods., Methods: Two-dimensional MRI data from the FastMRI dataset with Cartesian undersampling in phase-encoding direction were reconstructed for different acceleration rates for single coil and multicoil data., Results: The performance of our architecture was compared to state-of-the-art Compressed Sensing methods and ConvDecoder, another untrained Neural Network for two-dimensional MRI reconstruction. SCAMPI outperforms these by better reducing undersampling artifacts and yielding lower error metrics in multicoil imaging. In comparison to ConvDecoder, the U-Net architecture combined with an elaborated loss-function allows for much faster convergence at higher image quality. SCAMPI can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Moreover, it is a novel tool for reconstructing undersampled single coil k-space data., Conclusion: Our approach avoids overfitting to dataset features, that can occur in Neural Networks trained on databases, because the network parameters are tuned only on the reconstruction data. It allows better results and faster reconstruction than the baseline untrained Neural Network approach., (© 2024 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals LLC on behalf of International Society for Magnetic Resonance in Medicine.)
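The multidomain, sparsity-enforcing idea, data consistency enforced on the sampled k-space positions plus an L1 prior in the image domain, can be sketched as a loss function. This is a simplified single-coil stand-in with a plain pixel-domain L1 term, not the paper's actual loss:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (32, 32)
image = rng.normal(size=shape)                # candidate network output
kspace = np.fft.fft2(rng.normal(size=shape))  # "measured" k-space (toy)
mask = rng.random(shape) < 0.3                # toy undersampling pattern

def scampi_like_loss(img, y, mask, lam=0.01):
    """Data consistency on sampled k-space entries plus an L1
    image-domain sparsity prior (simplified stand-in)."""
    residual = (np.fft.fft2(img) - y) * mask
    data_consistency = np.sum(np.abs(residual) ** 2)
    sparsity = lam * np.sum(np.abs(img))
    return data_consistency + sparsity

val = scampi_like_loss(image, kspace, mask)
```

In the untrained-network setting, this loss is minimized over the network parameters for the one scan being reconstructed, with no training database involved.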
- Published
- 2024
- Full Text
- View/download PDF
59. SPICER: Self-supervised learning for MRI with automatic coil sensitivity estimation and reconstruction.
- Author
-
Hu Y, Gan W, Ying C, Wang T, Eldeniz C, Liu J, Chen Y, An H, and Kamilov US
- Subjects
- Humans, Supervised Machine Learning, Brain diagnostic imaging, Deep Learning, Phantoms, Imaging, Magnetic Resonance Imaging methods, Image Processing, Computer-Assisted methods, Neural Networks, Computer, Algorithms
- Abstract
Purpose: To introduce a novel deep model-based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k-space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation., Methods: SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high-quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA-based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self-supervised learning strategy, SPICER can be efficiently trained without any fully sampled reference data., Results: We validate SPICER on both open-access datasets and experimentally collected data, showing that it can achieve state-of-the-art performance in highly accelerated data acquisition settings (up to 10×). Our results also highlight the importance of different modules of SPICER (the DMBA, the CSM estimation, and the SPICER training loss) on the final performance of the method. Moreover, SPICER can estimate better CSMs than pre-estimation methods especially when the ACS data is limited., Conclusion: Despite being trained on noisy undersampled data, SPICER can reconstruct high-quality images and CSMs in highly undersampled settings, which outperforms other self-supervised learning methods and matches the performance of the well-known E2E-VarNet trained on fully sampled ground-truth data., (© 2024 International Society for Magnetic Resonance in Medicine.)
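Once CSMs are available, the standard way to fold per-coil images back into one image is a Roemer-style weighted combination: conj(S) times each coil image, normalized by the total squared sensitivity. A noise-free sketch; the Gaussian-bump CSMs are invented, and this is the textbook combination, not SPICER's learned module:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((32, 32))  # ground-truth image (toy)

# Invented smooth coil sensitivity maps (CSMs) for four corner coils.
yy, xx = np.mgrid[0:32, 0:32] / 32.0
centers = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
csms = np.stack([np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2))
                 for cy, cx in centers])

coil_images = csms * img  # noise-free forward model per coil

# Roemer-style combination: conj(S)-weighted sum over coils,
# normalized by the total squared sensitivity.
recon = ((np.conj(csms) * coil_images).sum(axis=0)
         / (np.abs(csms) ** 2).sum(axis=0))
err = np.max(np.abs(recon - img))  # exact in this noise-free sketch
```

Errors in the CSMs propagate directly through this combination, which is why the paper emphasizes CSM quality when ACS data are limited.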
- Published
- 2024
- Full Text
- View/download PDF
60. Paper Perfect: Robert Lang and the Science of Origami
- Author
-
Foer, Joshua
- Published
- 2014
61. Educational Data Mining in Prediction of Students’ Learning Performance: A Scoping Review
- Author
-
Li, Chunping, Li, Mingxi, Huang, Chuan-Liang, Tseng, Yi-Tong, Kim, Soo-Hyung, Yeom, Soonja, Keane, Therese, editor, Lewin, Cathy, editor, Brinda, Torsten, editor, and Bottino, Rosa, editor
- Published
- 2023
- Full Text
- View/download PDF
62. Reconstructing Graphs from Connected Triples
- Author
-
Bastide, Paul, Cook, Linda, Erickson, Jeff, Groenland, Carla, Kreveld, Marc van, Mannens, Isja, Vermeulen, Jordi L., Paulusma, Daniël, editor, and Ries, Bernard, editor
- Published
- 2023
- Full Text
- View/download PDF
63. Automatic Test Paper Generation Technology for Mandarin Based on Hilbert Huang Algorithm.
- Author
-
Wang, Lei
- Subjects
- ARTIFICIAL neural networks, ALGORITHMS, COMPUTER engineering, EMPLOYEE rights, HUMAN resources departments
- Abstract
With the development of computer technology, automatic test paper generation systems have gradually become an effective tool for detecting and maintaining national machine security and protecting the rights and interests of workers. This article achieved multi-level oral scores for different types of questions through online scoring using artificial neural networks in recent years. Based on its specific situation and evaluation index requirements, an analysis module that is reasonable, efficient, and in line with the hierarchical structure and module requirements of national conditions has been designed to complete the research on automatic test paper generation technology, in order to help better manage and allocate human resources and improve production efficiency. Afterwards, this article conducted functional testing on the technical module. The test results showed that the scalability of the system was over 82%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
64. GPTZero vs. Text Tampering: The Battle That GPTZero Wins
- Author
-
David W. Brown and Dean Jensen
- Abstract
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated towards the ability of these bots to make learning more interactive, others have grave concerns that student created essays, long used as a means of assessing the subject comprehension of students, may be at risk. The bot's ability to quickly create high quality papers, sometimes complete with reference material, has led to concern that these programs will make students too reliant on their ability and not develop the critical thinking skills necessary to succeed. The rise in these applications has led to the need for the development of detection programs that are able to read the students submitted work and return an accurate estimation of if the paper is human or computer created. These detection programs use natural language processing's (NLP) ideas of perplexity, or randomness of the text, and burstiness, or the tendency for certain words and phrases to appear together, plus sophisticated algorithms to compare the essays to preexisting literature to generate an accurate estimation on the likely author of the paper. The use of these systems has been found to be highly effective in reducing plagiarism among students, however concerns have been raised about the limitations of these systems. False positives, false negatives, and cross language identification are three areas of concern amongst faculty and have led to reduced usage of the detection engines. Despite the limitations however, these systems are a valuable tool for educational institutions to maintain academic integrity and ensure that students are submitting original work. [For the full proceedings, see ED656038.]
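Perplexity, the "randomness" measure named in this abstract, is the exponential of the average negative log-probability per token under some language model. A minimal unigram illustration; the six-word corpus is a toy, and real detectors score text with large language models, not unigram counts:

```python
import math
from collections import Counter

corpus = "the cat sat on the mat".split()
counts = Counter(corpus)
total = sum(counts.values())

def perplexity(tokens):
    """exp of the average negative log-probability per token under a
    unigram model estimated from `counts`."""
    nll = -sum(math.log(counts[w] / total) for w in tokens) / len(tokens)
    return math.exp(nll)

ppl = perplexity(corpus)  # (3**2 * 6**4) ** (1/6) = 108 ** (1/3), about 4.76
```

Low perplexity (text the model finds very predictable) and low burstiness are the signals such detectors associate with machine-generated prose.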
- Published
- 2023
65. Cooperative Multiobjective Decision Support for the Paper Industry
- Author
-
Murthy, Sesh, Akkiraju, Rama, Goodwin, Richard, Keskinocak, Pinar, Rachlin, John, Wu, Frederick, Yeh, James, Fuhrer, Robert, Kumaran, Santhosh, Aggarwal, Alok, Sturzenbecker, Martin, Jayaraman, Ranga, and Daigle, Robert
- Published
- 1999
66. Cardiac Abnormality Prediction Using Multiple Machine Learning Approaches
- Author
-
Rana, Jahid Hasan, Farhin, Moniara, Turzo, Saif Ahamed, Roy, Sagar Chandra, Nabil, Rashidul Hasan, Rupai, Aneem-Al-Ahsan, Islam, A. K. M. Muzahidul, editor, Uddin, Jia, editor, Mansoor, Nafees, editor, Rahman, Shahriar, editor, and Al Masud, Shah Murtaza Rashid, editor
- Published
- 2022
- Full Text
- View/download PDF
67. Models for Detecting Frauds in Medical Insurance
- Author
-
Mitrova, Hristina, Madevska Bogdanova, Ana, Antovski, Ljupcho, editor, and Armenski, Goce, editor
- Published
- 2022
- Full Text
- View/download PDF
68. Path Optimization for Multi-material 3D Printing Using Self-organizing Maps
- Author
-
Pinochet, Diego, Tsamis, Alexandros, Gerber, David, editor, Pantazis, Evangelos, editor, Bogosian, Biayna, editor, Nahmad, Alicia, editor, and Miltiadis, Constantinos, editor
- Published
- 2022
- Full Text
- View/download PDF
69. Ten Simple Rules for Writing a Reply Paper.
- Author
-
Simmons MP
- Subjects
- Algorithms, Information Dissemination methods, Medical Writing, Peer Review, Research methods, Periodicals as Topic, Publishing organization & administration
- Published
- 2015
- Full Text
- View/download PDF
70. Reference intervals data mining: getting the right paper--the author's reply.
- Author
-
Katayev A, Fleming JK, Luo D, Fisher AH, and Sharp TM
- Subjects
- Female, Humans, Male, Algorithms, Data Mining, Databases, Factual, Probability
- Published
- 2015
71. Reference intervals data mining: getting the right paper.
- Author
-
Jones G, Horowitz G, Katayev A, Fleming JK, Luo D, Fisher AH, and Sharp TM
- Subjects
- Female, Humans, Male, Algorithms, Data Mining, Databases, Factual, Probability
- Published
- 2015
- Full Text
- View/download PDF
72. Reference intervals data mining: no longer a probability paper method.
- Author
-
Katayev A, Fleming JK, Luo D, Fisher AH, and Sharp TM
- Subjects
- Female, Humans, Laboratories, Male, Reference Values, Algorithms, Data Mining methods, Databases, Factual, Probability
- Abstract
Objectives: To describe the application of a data-mining statistical algorithm for calculation of clinical laboratory tests reference intervals., Methods: Reference intervals for eight different analytes and different age and sex groups (a total of 11 separate reference intervals) for tests that are unlikely to be ordered during routine screening of disease-free populations were calculated using the modified algorithm for data mining of test results stored in the laboratory database and compared with published peer-reviewed studies that used direct sampling. The selection of analytes was based on the predefined criteria that include comparability of analytical methods with a statistically significant number of observations., Results: Of the 11 calculated reference intervals, having upper and lower limits for each, 21 of 22 reference interval limits were not statistically different from the reference studies., Conclusions: The presented statistical algorithm is shown to be an accurate and practical tool for reference interval calculations., (Copyright© by the American Society for Clinical Pathology.)
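The usual nonparametric definition behind such calculations is the central 95% of the observed distribution, i.e. the 2.5th and 97.5th percentiles. A sketch on simulated values; the analyte mean and spread are invented, and the paper's algorithm additionally filters the mined database before computing the interval:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated stored results for one analyte and one age/sex partition
# (mean and spread invented for illustration).
results = rng.normal(loc=140.0, scale=3.0, size=50_000)

# Nonparametric reference interval: the central 95% of the distribution.
lower, upper = np.percentile(results, [2.5, 97.5])
# lower is near 140 - 1.96*3 = 134.1; upper near 145.9
```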
- Published
- 2015
- Full Text
- View/download PDF
73. Accelerated motion correction with deep generative diffusion models.
- Author
-
Levac B, Kumar S, Jalal A, and Tamir JI
- Subjects
- Humans, Brain diagnostic imaging, Computer Simulation, Magnetic Resonance Imaging methods, Artifacts, Retrospective Studies, Diffusion Magnetic Resonance Imaging, Image Processing, Computer-Assisted methods, Motion, Algorithms, Bayes Theorem
- Abstract
Purpose: The aim of this work is to develop a method to solve the ill-posed inverse problem of accelerated image reconstruction while correcting forward model imperfections in the context of subject motion during MRI examinations., Methods: The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion estimates from subsampled and motion-corrupt two-dimensional (2D) k-space data., Results: We demonstrate the ability to reconstruct motion-free images from accelerated two-dimensional (2D) Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data., Conclusion: We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward model corruptions., (© 2024 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals LLC on behalf of International Society for Magnetic Resonance in Medicine.)
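For the rigid-motion forward model, the key fact is that an in-plane translation shows up in k-space as a linear phase ramp (the Fourier shift theorem), which is what makes motion parameters estimable jointly with the image. A numpy check on a random image; the image and shift are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
dy, dx = 3, 5  # rigid in-plane translation in pixels

# Fourier shift theorem: translating the image multiplies its k-space
# by exp(-2*pi*i*(ky*dy + kx*dx)).
k = np.fft.fft2(img)
ky = np.fft.fftfreq(64)[:, None]
kx = np.fft.fftfreq(64)[None, :]
shifted = np.fft.ifft2(k * np.exp(-2j * np.pi * (ky * dy + kx * dx))).real

# Matches a circular shift of the original image.
err = np.max(np.abs(shifted - np.roll(img, (dy, dx), axis=(0, 1))))
```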
- Published
- 2024
- Full Text
- View/download PDF
74. Unsupervised motion artifact correction of turbo spin-echo MRI using deep image prior.
- Author
-
Lee J, Seo H, Lee W, and Park H
- Subjects
- Humans, Brain diagnostic imaging, Neural Networks, Computer, Computer Simulation, Artifacts, Magnetic Resonance Imaging methods, Image Processing, Computer-Assisted methods, Deep Learning, Motion, Algorithms
- Abstract
Purpose: In MRI, motion artifacts can significantly degrade image quality. Motion artifact correction methods using deep neural networks usually required extensive training on large datasets, making them time-consuming and resource-intensive. In this paper, an unsupervised deep learning-based motion artifact correction method for turbo-spin echo MRI is proposed using the deep image prior framework., Theory and Methods: The proposed approach takes advantage of the high impedance to motion artifacts offered by the neural network parameterization to remove motion artifacts in MR images. The framework consists of parameterization of MR image, automatic spatial transformation, and motion simulation model. The proposed method synthesizes motion-corrupted images from the motion-corrected images generated by the convolutional neural network, where an optimization process minimizes the objective function between the synthesized images and the acquired images., Results: In the simulation study of 280 slices from 14 subjects, the proposed method showed a significant increase in the averaged structural similarity index measure by 0.2737 in individual coil images and by 0.4550 in the root-sum-of-square images. In addition, the ablation study demonstrated the effectiveness of each proposed component in correcting motion artifacts compared to the corrected images produced by the baseline method. The experiments on real motion dataset has shown its clinical potential., Conclusion: The proposed method exhibited significant quantitative and qualitative improvements in correcting rigid and in-plane motion artifacts in MR images acquired using turbo spin-echo sequence., (© 2024 International Society for Magnetic Resonance in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
75. Stop moving: MR motion correction as an opportunity for artificial intelligence.
- Author
-
Zhou Z, Hu P, and Qi H
- Subjects
- Humans, Brain diagnostic imaging, Movement, Magnetic Resonance Imaging methods, Artifacts, Motion, Image Processing, Computer-Assisted methods, Deep Learning, Artificial Intelligence, Neural Networks, Computer, Algorithms
- Abstract
Subject motion is a long-standing problem of magnetic resonance imaging (MRI), which can seriously deteriorate the image quality. Various prospective and retrospective methods have been proposed for MRI motion correction, among which deep learning approaches have achieved state-of-the-art motion correction performance. This survey paper aims to provide a comprehensive review of deep learning-based MRI motion correction methods. Neural networks used for motion artifacts reduction and motion estimation in the image domain or frequency domain are detailed. Furthermore, besides motion-corrected MRI reconstruction, how estimated motion is applied in other downstream tasks is briefly introduced, aiming to strengthen the interaction between different research areas. Finally, we identify current limitations and point out future directions of deep learning-based MRI motion correction., (© 2024. The Author(s), under exclusive licence to European Society for Magnetic Resonance in Medicine and Biology (ESMRMB).)
- Published
- 2024
- Full Text
- View/download PDF
76. Order selection for heterogeneous semiparametric hidden Markov models.
- Author
-
Zou Y, Song X, and Zhao Q
- Subjects
- Humans, Models, Statistical, Longitudinal Studies, Neuroimaging statistics & numerical data, Markov Chains, Alzheimer Disease, Bayes Theorem, Monte Carlo Method, Computer Simulation, Algorithms
- Abstract
Hidden Markov models (HMMs), which can characterize dynamic heterogeneity, are valuable tools for analyzing longitudinal data. The order of HMMs (ie, the number of hidden states) is typically assumed to be known or predetermined by some model selection criterion in conventional analysis. As prior information about the order frequently lacks, pairwise comparisons under criterion-based methods become computationally expensive with the model space growing. A few studies have conducted order selection and parameter estimation simultaneously, but they only considered homogeneous parametric instances. This study proposes a Bayesian double penalization (BDP) procedure for simultaneous order selection and parameter estimation of heterogeneous semiparametric HMMs. To overcome the difficulties in updating the order, we create a brand-new Markov chain Monte Carlo algorithm coupled with an effective adjust-bound reversible jump strategy. Simulation results reveal that the proposed BDP procedure performs well in estimation and works noticeably better than the conventional criterion-based approaches. Application of the suggested method to the Alzheimer's Disease Neuroimaging Initiative research further supports its usefulness., (© 2024 John Wiley & Sons Ltd.)
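The criterion-based baseline the paper improves on scores each candidate order's log-likelihood (computed with the forward algorithm) against its parameter count, e.g. via BIC. A discrete toy with hand-set rather than estimated parameters and an illustrative free-parameter count, not the paper's Bayesian procedure:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence via the
    scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.1, 0.9]])  # persistent 2-state chain
B_true = np.array([[0.9, 0.1], [0.1, 0.9]])  # informative emissions
obs, state = [], 0
for _ in range(500):
    obs.append(rng.choice(2, p=B_true[state]))
    state = rng.choice(2, p=A_true[state])
obs = np.array(obs)

def bic(loglik, n_params, n):
    return -2.0 * loglik + n_params * np.log(n)

# Candidate orders 1 and 2, with hand-set parameters.
ll1 = forward_loglik(np.array([1.0]), np.array([[1.0]]),
                     np.array([[0.5, 0.5]]), obs)
ll2 = forward_loglik(np.array([0.5, 0.5]), A_true, B_true, obs)
bic1, bic2 = bic(ll1, 1, len(obs)), bic(ll2, 5, len(obs))
```

As the abstract notes, fitting every candidate order separately like this becomes expensive as the model space grows, which motivates selecting the order within a single MCMC run.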
- Published
- 2024
- Full Text
- View/download PDF
77. Creating a universal cell segmentation algorithm.
- Subjects
- Humans, Image Processing, Computer-Assisted methods, Animals, Algorithms
- Published
- 2024
- Full Text
- View/download PDF
78. Fair Multivariate Adaptive Regression Splines for Ensuring Equity and Transparency
- Author
-
Parian Haghighat, Denisa Gandara, Lulu Kang, and Hadis Anahideh
- Abstract
Predictive analytics is widely used in various domains, including education, to inform decision-making and improve outcomes. However, many predictive models are proprietary and inaccessible for evaluation or modification by researchers and practitioners, limiting their accountability and ethical design. Moreover, predictive models are often opaque and incomprehensible to the officials who use them, reducing their trust and utility. Furthermore, predictive models may introduce or exacerbate bias and inequity, as they have done in many sectors of society. Therefore, there is a need for transparent, interpretable, and fair predictive models that can be easily adopted and adapted by different stakeholders. In this paper, we propose a fair predictive model based on multivariate adaptive regression splines (MARS) that incorporates fairness measures in the learning process. MARS is a non-parametric regression model that performs feature selection, handles non-linear relationships, generates interpretable decision rules, and derives optimal splitting criteria on the variables. Specifically, we integrate fairness into the knot optimization algorithm and provide theoretical and empirical evidence of how it results in a fair knot placement. We apply our "fair" MARS model to real-world data and demonstrate its effectiveness in terms of accuracy and equity. Our paper contributes to the advancement of responsible and ethical predictive analytics for social good. [This paper was presented at an Association for the Advancement of Artificial Intelligence conference.]
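A MARS basis is built from hinge pairs max(0, x-t) and max(0, t-x), and knot optimization picks the knot t minimizing a fit criterion; folding a fairness term into that criterion is the paper's general idea. A one-knot sketch with an invented group-gap penalty, not the authors' actual fairness measure:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, 400)
group = rng.integers(0, 2, 400)  # protected attribute (invented)
y = 3.0 * np.maximum(0.0, x - 0.6) + rng.normal(0.0, 0.1, 400)

def knot_cost(t, lam=1.0):
    """SSE of a single hinge pair max(0, x-t), max(0, t-x), plus an
    invented fairness penalty: the gap between group mean residuals."""
    X = np.column_stack([np.ones_like(x),
                         np.maximum(0.0, x - t),
                         np.maximum(0.0, t - x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    gap = abs(resid[group == 0].mean() - resid[group == 1].mean())
    return resid @ resid + lam * gap

knots = np.linspace(0.05, 0.95, 19)
best = min(knots, key=knot_cost)  # lands near the true knot at 0.6
```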
- Published
- 2024
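The abstract describes integrating a fairness measure into MARS knot optimization but does not state the criterion. The sketch below illustrates the general idea under an assumed penalty: each candidate knot of a single hinge term is scored by its least-squares error plus a weighted gap between two groups' mean residuals. All function names, the penalty form, and the data are this editor's invention, not the paper's method:

```python
def fit_hinge(x, y, knot):
    """Least-squares fit of y = a + b*max(0, x - knot); returns (a, b, residuals)."""
    h = [max(0.0, xi - knot) for xi in x]
    n, sh = len(x), sum(h)
    shh = sum(v * v for v in h)
    sy, shy = sum(y), sum(v * w for v, w in zip(h, y))
    det = n * shh - sh * sh
    if det == 0:  # hinge inactive everywhere: fit the mean
        return sy / n, 0.0, [yi - sy / n for yi in y]
    b = (n * shy - sh * sy) / det
    a = (sy - b * sh) / n
    return a, b, [yi - (a + b * hi) for yi, hi in zip(y, h)]

def fair_knot(x, y, group, knots, lam=1.0):
    """Pick the knot minimizing SSE plus lam times the gap between the
    two groups' mean residuals (a stand-in fairness penalty)."""
    def score(t):
        _, _, r = fit_hinge(x, y, t)
        sse = sum(v * v for v in r)
        g0 = [v for v, g in zip(r, group) if g == 0]
        g1 = [v for v, g in zip(r, group) if g == 1]
        return sse + lam * abs(sum(g0) / len(g0) - sum(g1) / len(g1))
    return min(knots, key=score)

# Noise-free example: the true knot is at x = 5.
xs = list(range(10))
ys = [1.0 + 2.0 * max(0, xi - 5) for xi in xs]
grp = [xi % 2 for xi in xs]
```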
79. Socio‐technical issues in the platform‐mediated gig economy: A systematic literature review: An Annual Review of Information Science and Technology (ARIST) paper.
- Author
-
Dedema, Meredith and Rosenbaum, Howard
- Subjects
- INFORMATION science, TECHNOLOGY, CORPORATE culture, ALGORITHMS, ECONOMICS
- Abstract
The gig economy and gig work have grown quickly in recent years and have drawn much attention from researchers in different fields. Because the platform mediated gig economy is a relatively new phenomenon, studies have produced a range of interesting findings; of interest here are the socio‐technical issues that this work has surfaced. This systematic literature review (SLR) provides a snapshot of a range of socio‐technical issues raised in the last 12 years of literature focused on the platform mediated gig economy. Based on a sample of 515 papers gathered from nine databases in multiple disciplines, 132 were coded that specifically studied the gig economy, gig work, and gig workers. Three main socio‐technical themes were identified: (1) the digital workplace, which includes information infrastructure and digital labor that are related to the nature of gig work and the user agency; (2) algorithmic management, which includes platform governance, performance management, information asymmetry, power asymmetry, and system manipulation, relying on a diverse set of technological tools including algorithms and big data analytics; (3) ethical design, as a relevant value set that gig workers expect from the platform, which includes trust, fairness, equality, privacy, and transparency. A social informatics perspective is used to rethink the relationship between gig workers and platforms, extract the socio‐technical issues noted in prior research, and discuss the underexplored aspects of the platform mediated gig economy. The results draw attention to understudied yet critically important socio‐technical issues in the gig economy that suggest short‐ and long‐term opportunities for future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
80. Cultures, Intersections, Networks. The Role of Algorithms in Defining Power Relations Based on Gender, Race, Class, Disability
- Author
-
De Castro, Martina, Zona, Umberto, Bocci, Fabio, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Ranieri, Maria, editor, Pellegrini, Marta, editor, Menichetti, Laura, editor, Roffi, Alice, editor, and Luzzi, Damiana, editor
- Published
- 2022
- Full Text
- View/download PDF
81. Artificial Intelligence Project Success Factors—Beyond the Ethical Principles
- Author
-
Miller, Gloria J., van der Aalst, Wil, Series Editor, Mylopoulos, John, Series Editor, Ram, Sudha, Series Editor, Rosemann, Michael, Series Editor, Szyperski, Clemens, Series Editor, Ziemba, Ewa, editor, and Chmielarz, Witold, editor
- Published
- 2022
- Full Text
- View/download PDF
82. Performance Evaluation of the Extractive Methods in Automatic Text Summarization Using Medical Papers.
- Author
-
Kus, Anil and Aci, Cigdem Inan
- Subjects
- PERFORMANCE evaluation, TEXT summarization, MEDICAL sciences, ALGORITHMS, SEMANTICS
- Abstract
The rapid development of technology has resulted in a surge in the volume of digital data available. This situation creates a problem for users who need assistance in locating specific information within this massive collection of data, resulting in a time-consuming process. Automatic Text Summarization systems have been developed as a more effective solution than traditional summarization techniques to address this problem and improve user access to relevant information. It is well known that researchers in the health sciences find it difficult to keep up with the latest literature due to their busy schedules. This study aims to produce comprehensive abstracts of Turkish-language scientific papers in the field of health sciences. Although abstracts of scientific papers are already available, more thorough summaries are still needed. To the best of our knowledge, no previous attempt has been made to automatically summarize Turkish language health science papers. For this purpose, a dataset of 105 Turkish papers was collected from DergiPark. Term Frequency, Term Frequency-Inverse Document Frequency, Latent Semantic Analysis, TextRank, and Latent Dirichlet Allocation algorithms were chosen as extractive text summarization methods due to their frequent use in this field. The performance of the text summarization models was evaluated using recall, precision, and F-score metrics, and the algorithms gave satisfactory results for Turkish. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
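Of the extractive methods the abstract lists, plain Term Frequency is the simplest to illustrate: score each sentence by the average corpus frequency of its content words and keep the top-k sentences in document order. A minimal English-language sketch (the stopword list, function names, and example text are illustrative, not from the study):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "was", "is", "of", "to", "in"}

def summarize(text, k=1):
    """Score each sentence by the average corpus frequency of its
    content words, then return the top-k sentences in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = lambda s: [w for w in re.findall(r"[a-z]+", s.lower())
                       if w not in STOPWORDS]
    freq = Counter(w for s in sentences for w in words(s))
    def score(s):
        ws = words(s)
        return sum(freq[w] for w in ws) / len(ws) if ws else 0.0
    ranked = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in ranked]

doc = ("Text summarization reduces long text. The weather was pleasant. "
       "Summarization systems select important text.")
```

TF-IDF, LSA, TextRank, and LDA refine the same pipeline by changing only the sentence-scoring step.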
83. Spreadsheet Simulation of Priority Queues
- Author
-
Thin-Yin Leong and Nang-Laik Ma
- Abstract
This paper develops a spreadsheet simulation methodology for teaching simulation and performance analysis of priority queues with multiple servers, without resorting to macros, add-ins, or array formulas. The approach is made possible by a "single overtaking" simplifying assumption under which any lower-priority customer may be passed in line by at most one higher-priority customer. By increasing the number of overtaking customers, one at a time, the simulation model is extended to the "multiovertaking" case. These simplifying assumptions make such spreadsheet simulations (of more complex queuing networks) accessible to students, and so the paper includes teaching and learning strategies for the classroom. Performance analysis of single-overtaking versus multiovertaking policies is included.
- Published
- 2024
- Full Text
- View/download PDF
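Outside a spreadsheet, the non-preemptive priority discipline the paper simulates can be sketched as a small discrete-event loop, with overtaking visible whenever a higher-priority arrival is served before an earlier lower-priority one. This is illustrative only (single server, fixed service time); the paper's contribution is doing the equivalent with plain spreadsheet formulas:

```python
import heapq

def simulate(arrivals, service_time):
    """Single-server, non-preemptive priority queue (lower number =
    higher priority). arrivals: list of (arrival_time, priority, name).
    Returns (name, start, finish) tuples in order of service."""
    pending = sorted(arrivals)            # by arrival time
    waiting, log, t, i = [], [], 0.0, 0
    while i < len(pending) or waiting:
        if not waiting and pending[i][0] > t:
            t = pending[i][0]             # server idles until next arrival
        while i < len(pending) and pending[i][0] <= t:
            arr, pri, name = pending[i]
            i += 1
            heapq.heappush(waiting, (pri, arr, name))
        pri, arr, name = heapq.heappop(waiting)
        log.append((name, t, t + service_time))
        t += service_time
    return log

# C arrives before B is served, but B's higher priority lets it overtake.
log = simulate([(0, 2, "A"), (1, 1, "B"), (2, 2, "C")], service_time=5)
```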
84. Special Issue Paper: Robust Solutions and Risk Measures for a Supply Chain Planning Problem under Uncertainty
- Author
-
Poojari, C. A., Lucas, C., and Mitra, G.
- Published
- 2008
85. EAACI position paper: irritant-induced asthma.
- Author
-
Vandenplas O, Wiszniewska M, Raulf M, de Blay F, Gerth van Wijk R, Moscato G, Nemery B, Pala G, Quirce S, Sastre J, Schlünssen V, Sigsgaard T, Siracusa A, Tarlo SM, van Kampen V, Zock JP, and Walusiak-Skorupa J
- Subjects
- Humans, Irritants adverse effects, Occupational Exposure adverse effects, Algorithms, Asthma, Occupational classification, Asthma, Occupational diagnosis
- Abstract
The term irritant-induced (occupational) asthma (IIA) has been used to denote various clinical forms of asthma related to irritant exposure at work. The causal relationship between irritant exposure(s) and the development of asthma can be substantiated by the temporal association between the onset of asthma symptoms and a single or multiple high-level exposure(s) to irritants, whereas this relationship can only be inferred from epidemiological data for workers chronically exposed to moderate levels of irritants. Accordingly, the following clinical phenotypes should be distinguished within the wide spectrum of irritant-related asthma: (i) definite IIA, that is acute-onset IIA characterized by the rapid onset of asthma within a few hours after a single exposure to very high levels of irritant substances; (ii) probable IIA, that is asthma that develops in workers with multiple symptomatic high-level exposures to irritants; and (iii) possible IIA, that is asthma occurring with a delayed-onset after chronic exposure to moderate levels of irritants. This document prepared by a panel of experts summarizes our current knowledge on the diagnostic approach, epidemiology, pathophysiology, and management of the various phenotypes of IIA., (© 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
86. [Blood pressure measurement--do not sweat the small stuff and it is all small stuff?! Position paper of the Croatian national referral center for hypertension, center of excellence of the European Society of Hypertension].
- Author
-
Vrdoljak A, Vrkić TZ, Kos J, Vitale K, Premuzić V, Laganović M, and Jelaković B
- Subjects
- Blood Pressure Determination standards, Cardiovascular Diseases diagnosis, Humans, Hypertension diagnosis, International Cooperation, Practice Guidelines as Topic, Societies, Medical, Algorithms, Blood Pressure Determination instrumentation, Blood Pressure Determination methods
- Abstract
Office blood pressure measurement using a mercury sphygmomanometer is the gold standard for diagnosing hypertension, evaluating cardiovascular risk, and assessing the control achieved in treated hypertensive patients. The vast majority of epidemiologic data are based on this method. However, owing to the importance of blood pressure variability and the white coat effect, as well as the availability of simple devices, home and ambulatory blood pressure measurement have become routine parts of clinical work. As mercury will soon be banned from clinical use, such devices and methods will become even more important. In everyday clinical practice all three techniques should be implemented, and in this paper the advantages and drawbacks of each are discussed. Finally, based on recent data and the recommendations of international societies, a diagnostic algorithm is proposed. Additionally, we describe the technique of non-invasive central blood pressure measurement, determination of pulse wave velocity, and calculation of the augmentation index, which are newly proposed risk factors.
- Published
- 2014
87. Fractal features of a crumpling network in randomly folded thin matter and mechanics of sheet crushing.
- Author
-
Balankin AS, Horta Rangel A, García Pérez G, Gayosso Martinez F, Sanchez Chavez H, and Martínez-González CL
- Subjects
- Compressive Strength, Computer Simulation, Algorithms, Fractals, Membranes, Artificial, Models, Theoretical, Paper
- Abstract
We study the static and dynamic properties of networks of crumpled creases formed in hand crushed sheets of paper. The fractal dimensionalities of crumpling networks in the unfolded (flat) and folded configurations are determined. Some other noteworthy features of crumpling networks are established. The physical implications of these findings are discussed. Specifically, we state that self-avoiding interactions introduce a characteristic length scale of sheet crumpling. A framework to model the crumpling phenomena is suggested. Mechanics of sheet crushing under external confinement is developed. The effect of compaction geometry on the crushing mechanics is revealed.
- Published
- 2013
- Full Text
- View/download PDF
88. Rise of the Machines: Navigating the Opportunities and Challenges of AI-Assisted Research and Learning
- Author
-
Justin K. Dimmel and Izge Bayyurt
- Abstract
This commentary was written by ChatGPT, an artificial intelligence language model developed by OpenAI. It was conceived by the first author as a test for how the advent of predictive language modeling will create opportunities and challenges for researchers and teachers in mathematics education. The paper consists of a commentary that was written by ChatGPT, followed by a reflection written by the authors that explains how the model was prompted to generate the text and how we worked with ChatGPT to validate and edit the text that was produced. We consider the implications of models like ChatGPT on the future of academic work. [For the complete proceedings, see ED658295.]
- Published
- 2023
89. Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning
- Author
-
Vassoyan, Jean and Vie, Jill-Jênn
- Abstract
Adaptive learning is an area of educational technology that consists in delivering personalized learning experiences to address the unique needs of each learner. An important subfield of adaptive learning is learning path personalization: it aims at designing systems that recommend sequences of educational activities to maximize students' learning outcomes. Many machine learning approaches have already demonstrated significant results in a variety of contexts related to learning path personalization. However, most of them were designed for very specific settings and are not very reusable. This is accentuated by the fact that they often rely on non-scalable models, which are unable to integrate new elements after being trained on a specific set of educational resources. In this paper, we introduce a flexible and scalable approach towards the problem of learning path personalization, which we formalize as a reinforcement learning problem. Our model is a sequential recommender system based on a graph neural network, which we evaluate on a population of simulated learners. Our results demonstrate that it can learn to make good recommendations in the small-data regime. [For the complete proceedings, see ED630829.]
- Published
- 2023
90. To Speak or Not to Speak, and What to Speak, When Doing Task Actions Collaboratively
- Author
-
Nasir, Jauwairia, Kothiyal, Aditi, Sheng, Haoyu, and Dillenbourg, Pierre
- Abstract
Transactive discussion during collaborative learning is crucial for building on each other's reasoning and developing problem solving strategies. In a tabletop collaborative learning activity, student actions on the interface can drive their thinking and be used to ground discussions, thus affecting their problem-solving performance and learning. However, it is not clear how the interplay of actions and discussions, for instance, how students performing actions or pausing actions while discussing, is related to their learning. In this paper, we seek to understand how the transactivity of actions and discussions is associated with learning. Specifically, we ask what is the relationship between discussion and actions, and how it is different between those who learn (gainers) and those who do not (non-gainers). We present a combined differential sequence mining and content analysis approach to examine this relationship, which we applied on the data from 32 teams collaborating on a problem designed to help them learn concepts of minimum spanning trees. We found that discussion and action occur concurrently more frequently among gainers than non-gainers. Further, we find that gainers tend to do more reflective actions along with discussion, such as looking at their previous solutions, than non-gainers. Finally, gainers' discussion consists more of goal clarification, reflection on past solutions, and agreement on future actions than non-gainers', who do not share their ideas and cannot agree on next steps. Thus, this approach helps us identify how the interplay of actions and discussion could lead to learning, and the findings offer guidelines to teachers and instructional designers regarding indicators of productive collaborative learning and when and how they should intervene to improve learning. Concretely, the results suggest that teachers should support elaborative, reflective, and planning discussions along with reflective actions. [For the complete proceedings, see ED630829.]
- Published
- 2023
91. Machine Learning Based Financial Management Mobile Application to Enhance College Students' Financial Literacy
- Author
-
Mohsina Kamarudeen and K. Vijayalakshmi
- Abstract
This paper presents a mobile application aimed at enhancing the financial literacy of college students by monitoring their spending patterns and promoting better decision-making. The application is developed using the agile methodology, with Android Studio and Flutter as development tools and Firebase as the database. The app is divided into sub-applications, with the home page serving as the program's integration point, displaying a summary of the user's financial progress. The app generates valuable insights into the user's current and future financial standing, utilizing data analytics and machine learning to provide detailed and summary views of the user's financial progress. The machine-learning algorithm used in this app is linear regression, which predicts the user's income and expenses for the upcoming month based on their historical spending data. In addition, the app highlights deals and student discounts in the user's vicinity and links to financial articles that promote better financial planning and decision-making. By promoting responsible spending habits and providing valuable financial insights, this mobile application aims to help students become financially literate and make informed financial decisions for the future. [For the full proceedings, see ED654100.]
- Published
- 2023
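The abstract's use of linear regression to forecast next month's spending from historical data can be written in closed form: fit y = a + b·t over past months and evaluate at the next month index. An illustrative sketch (the app's actual features and data are not public; the spending figures below are invented):

```python
def forecast_next(values):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1 and
    predict the value at t = n (the upcoming month)."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    sxx = sum((t - t_mean) ** 2 for t in range(n))
    sxy = sum((t - t_mean) * (y - y_mean)
              for t, y in zip(range(n), values))
    b = sxy / sxx                  # slope: average monthly change
    a = y_mean - b * t_mean        # intercept
    return a + b * n

# Six months of hypothetical spending with a steady upward trend.
spend = [200.0, 210.0, 220.0, 230.0, 240.0, 250.0]
```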
92. Automated Information Supply of Worker Guidance Systems in Smart Assembly Environment
- Author
-
Reisinger, Gerhard, Hold, Philipp, Sihn, Wilfried, Rannenberg, Kai, Editor-in-Chief, Soares Barbosa, Luís, Editorial Board Member, Goedicke, Michael, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Stiller, Burkhard, Editorial Board Member, Tröltzsch, Fredi, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, Kreps, David, Editorial Board Member, Reis, Ricardo, Editorial Board Member, Furnell, Steven, Editorial Board Member, Mercier-Laurent, Eunika, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, and Ratchev, Svetan, editor
- Published
- 2021
- Full Text
- View/download PDF
93. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
-
Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
- DEEP learning, ELECTROCARDIOGRAPHY, ELECTRONIC paper, ATRIAL fibrillation, ALGORITHMS, HEART failure, HEART rate monitors
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs are currently only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm automatically segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, average correlation ranged from 90 to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70%, and the average correlation of 3-by-1 ECGs reached 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, it does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs to allow them to be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
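The validation metric quoted throughout this abstract is the correlation between a digitised signal and its ground truth, i.e. the Pearson coefficient. A minimal sketch (the signal values below are invented; a digitised trace that is an affine rescaling of the truth correlates perfectly):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals, e.g. a
    digitised ECG lead and its ground-truth recording."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A toy "ground truth" lead and a digitised copy with gain/offset error.
truth = [0, 1, 2, 1, 0, -1, -2, -1]
digitised = [0.05 + 0.9 * v for v in truth]
```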
94. Iterative Construction of Complete Lyapunov Functions: Analysis of Algorithm Efficiency
- Author
-
Argáez, Carlos, Giesl, Peter, Hafstein, Sigurdur, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Obaidat, Mohammad S., editor, Ören, Tuncer, editor, and Rango, Floriano De, editor
- Published
- 2020
- Full Text
- View/download PDF
95. Federated Learning Evolution Based on Multi-Objective Optimization.
- Author
-
胡智勇, 于千城, 王之赐, and 张丽丝
- Subjects
- FEDERATED learning, ALGORITHMS, PRIVACY
- Abstract
Traditional federated learning faces challenges such as high communication costs, structural heterogeneity, and insufficient privacy protection. To address these issues, this paper proposes a federated learning evolutionary algorithm that applies a sparse evolutionary training algorithm to reduce communication costs and integrates local differential privacy to protect participants' privacy. Additionally, it utilizes the NSGA-Ⅲ algorithm to optimize the network structure and sparsity of the global federated learning model, adjusting the trade-off between data availability and privacy protection. This achieves a balance among the effectiveness, communication costs, and privacy of the global federated learning model. Experimental results under unstable communication environments demonstrate that, on the MNIST and CIFAR-10 datasets, compared to the lowest-error-rate solution of the FNSGA-Ⅲ algorithm, the proposed algorithm improves communication efficiency by 57.19% and 52.17%, respectively. The participants also achieved (3.46, 10⁻⁴)- and (6.52, 10⁻⁴)-local differential privacy. This algorithm can effectively reduce communication costs and protect participant privacy without significantly compromising the accuracy of the global model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
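The (ε, δ)-local differential privacy figures in the abstract are typically obtained by clipping each participant's model update and adding calibrated Gaussian noise before it leaves the device. A sketch of that standard mechanism (not necessarily this paper's exact construction; the classic noise calibration below is valid for ε < 1, and all names are illustrative):

```python
import math
import random

def privatize(update, clip_norm, eps, delta, rng=None):
    """Clip a local model update to L2 norm `clip_norm`, then add
    Gaussian noise calibrated for (eps, delta)-differential privacy
    (standard Gaussian mechanism, sigma = C*sqrt(2*ln(1.25/delta))/eps)."""
    rng = rng or random
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]       # bounds the sensitivity
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return [v + rng.gauss(0.0, sigma) for v in clipped]
```

Each participant would apply this to its sparse update before upload; the server aggregates only the noised vectors.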
96. Enhancing recall in automated record screening: A resampling algorithm.
- Author
-
Hou Z and Tipton E
- Subjects
- Automation, Information Storage and Retrieval methods, Meta-Analysis as Topic, Models, Statistical, Probability, Reproducibility of Results, Review Literature as Topic, Systematic Reviews as Topic methods, Algorithms
- Abstract
Literature screening is the process of identifying all relevant records from a pool of candidate records in systematic review, meta-analysis, and other research synthesis tasks. This process is time consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while screening only a proportion of high-priority candidate records. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years; however, there is a lack of methods with reliable performance. Our objective is to develop a screening prioritization algorithm with reliable performance for practical use, for example, one that guarantees an 80% chance of identifying at least 80% of the relevant records. Based on a target-based method proposed in Cormack and Grossman, we propose a screening prioritization algorithm using sampling with replacement. The algorithm is a wrapper that can work with any current screening prioritization algorithm to guarantee the performance. We prove, using mathematics and probability theory, that the algorithm guarantees the performance, and we run numeric experiments to test it in practice. The numeric experiment results show the algorithm achieves reliable performance under different circumstances. The proposed screening prioritization algorithm can be reliably used in real-world research synthesis tasks., (© 2024 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.)
- Published
- 2024
- Full Text
- View/download PDF
97. Mixtures of t factor analysers with censored responses and external covariates: An application to educational data from Peru.
- Author
-
Wang WL, Castro LM, Li HJ, and Lin TI
- Subjects
- Humans, Likelihood Functions, Peru, Multivariate Analysis, Computer Simulation, Quality of Life, Algorithms
- Abstract
Analysing data from educational tests allows governments to make decisions for improving the quality of life of individuals in a society. One of the key responsibilities of statisticians is to develop models that provide decision-makers with pertinent information about the latent process that educational tests seek to represent. Mixtures of t factor analysers (MtFA) have emerged as a powerful device for model-based clustering and classification of high-dimensional data containing one or several groups of observations with fatter tails or anomalous outliers. This paper considers an extension of MtFA for robust clustering of censored data, referred to as the MtFAC model, by incorporating external covariates. The enhanced flexibility of including covariates in MtFAC enables cluster-specific multivariate regression analysis of dependent variables with censored responses arising from upper and/or lower detection limits of experimental equipment. An alternating expectation conditional maximization (AECM) algorithm is developed for maximum likelihood estimation of the proposed model. Two simulation experiments are conducted to examine the effectiveness of the techniques presented. Furthermore, the proposed methodology is applied to Peruvian data from the 2007 Early Grade Reading Assessment, and the results obtained from the analysis provide new insights regarding the reading skills of Peruvian students., (© 2023 British Psychological Society.)
- Published
- 2024
- Full Text
- View/download PDF
98. iNucRes-ASSH: Identifying nucleic acid-binding residues in proteins by using self-attention-based structure-sequence hybrid neural network.
- Author
-
Zhang J, Chen Q, and Liu B
- Subjects
- Proteins chemistry, Neural Networks, Computer, Algorithms, Nucleic Acids
- Abstract
Interaction between proteins and nucleic acids is crucial to many cellular activities. Accurately detecting nucleic acid-binding residues (NABRs) in proteins can help researchers better understand the interaction mechanism between proteins and nucleic acids. Structure-based methods can generally make more accurate predictions than sequence-based methods. However, the existing structure-based methods are sensitive to protein conformational changes, causing limited generalizability. More effective and robust approaches should be further explored. In this study, we propose iNucRes-ASSH to identify nucleic acid-binding residues with a self-attention-based structure-sequence hybrid neural network. It improves the generalizability and robustness of NABR prediction from two levels: residue representation and prediction model. Experimental results show that iNucRes-ASSH can predict the nucleic acid-binding residues even when the experimentally validated structures are unavailable and outperforms five competing methods on a recent benchmark dataset and a widely used test dataset., (© 2023 Wiley Periodicals LLC.)
- Published
- 2024
- Full Text
- View/download PDF
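The self-attention component named in the title is, in its standard form, scaled dot-product attention, softmax(QKᵀ/√dk)V, applied here over residue representations. A dependency-free sketch of that standard mechanism (not the paper's trained architecture; the identity weight matrices in the example are illustrative):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of residue
    feature vectors X (n x d): softmax(Q K^T / sqrt(dk)) V."""
    matmul = lambda A, B: [[sum(a * b for a, b in zip(row, col))
                            for col in zip(*B)] for row in A]
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    dk = len(K[0])
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dk)
               for k in K] for q in Q]
    weights = [softmax(row) for row in scores]   # rows sum to 1
    return matmul(weights, V)

# Two one-hot "residues" with identity projections: each position
# attends mostly to itself.
I2 = [[1.0, 0.0], [0.0, 1.0]]
out = self_attention(I2, I2, I2, I2)
```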
99. White paper of the Society of Abdominal Radiology hepatocellular carcinoma diagnosis disease-focused panel on LI-RADS v2018 for CT and MRI
- Author
-
Elsayes, Khaled M, Kielar, Ania Z, Elmohr, Mohab M, Chernyak, Victoria, Masch, William R, Furlan, Alessandro, Marks, Robert M, Cruite, Irene, Fowler, Kathryn J, Tang, An, Bashir, Mustafa R, Hecht, Elizabeth M, Kamaya, Aya, Jambhekar, Kedar, Kamath, Amita, Arora, Sandeep, Bijan, Bijan, Ash, Ryan, Kassam, Zahra, Chaudhry, Humaira, McGahan, John P, Yacoub, Joseph H, McInnes, Matthew, Fung, Alice W, Shanbhogue, Krishna, Lee, James, Deshmukh, Sandeep, Horvat, Natally, Mitchell, Donald G, Do, Richard KG, Surabhi, Venkateswar R, Szklaruk, Janio, and Sirlin, Claude B
- Subjects
- Digestive Diseases, Liver Cancer, Cancer, Biomedical Imaging, Rare Diseases, Liver Disease, Good Health and Well Being, Algorithms, Carcinoma, Hepatocellular, Diagnosis, Differential, Humans, Liver Neoplasms, Magnetic Resonance Imaging, Societies, Medical, Tomography, X-Ray Computed, United States, LI-RADS, v2018, CT, MRI, HCC
- Abstract
The Liver Imaging and Reporting Data System (LI-RADS) is a comprehensive system for standardizing the terminology, technique, interpretation, reporting, and data collection of liver imaging with the overarching goal of improving communication, clinical care, education, and research relating to patients at risk for or diagnosed with hepatocellular carcinoma (HCC). In 2018, the American Association for the Study of Liver Diseases (AASLD) integrated LI-RADS into its clinical practice guidance for the imaging-based diagnosis of HCC. The harmonization between the AASLD and LI-RADS diagnostic imaging criteria required minor modifications to the recently released LI-RADS v2017 guidelines, necessitating a LI-RADS v2018 update. This article provides an overview of the key changes included in LI-RADS v2018 as well as a look at the LI-RADS v2018 diagnostic algorithm and criteria, technical recommendations, and management suggestions. Substantive changes in LI-RADS v2018 are the removal of the requirement for visibility on antecedent surveillance ultrasound for LI-RADS 5 (LR-5) categorization of 10-19 mm observations with nonrim arterial phase hyper-enhancement and nonperipheral "washout", and adoption of the Organ Procurement and Transplantation Network definition of threshold growth (≥ 50% size increase of a mass in ≤ 6 months). Nomenclatural changes in LI-RADS v2018 are the removal of -us and -g as LR-5 qualifiers.
- Published
- 2018
100. Evaluation of nursing adherence to a paper-based graduated continuous intravenous regular human insulin infusion algorithm.
- Author
-
Dickerson RN, Johnson JL, Maish GO 3rd, Minard G, and Brown RO
- Subjects
- Adult, Aged, Blood Glucose, Glucose Metabolism Disorders epidemiology, Humans, Hyperglycemia prevention & control, Hypoglycemia epidemiology, Hypoglycemia prevention & control, Incidence, Infusions, Intravenous nursing, Infusions, Intravenous standards, Insulin therapeutic use, Intensive Care Units, Middle Aged, Nurses, Patient Safety, Practice Guidelines as Topic, Retrospective Studies, Algorithms, Clinical Competence, Clinical Protocols, Critical Illness nursing, Glucose Metabolism Disorders prevention & control, Guideline Adherence, Insulin administration & dosage
- Abstract
Objective: The use of continuous intravenous regular human insulin (RHI) infusion is often necessary to achieve glycemic control in critically ill patients. Because insulin is a high-risk medication owing to the potential for severe hypoglycemia, it is imperative that insulin infusion algorithms are designed to be safe, effective, and instructionally clear. The safety and efficacy of our intravenous RHI infusion algorithm protocol has been previously established (Nutrition 2008;24:536-45); however, the protocol violations by nursing personnel were not examined. The objective of this study was to assess nursing adherence to our RHI infusion algorithm., Methods: Continuous RHI infusion algorithm violations were retrospectively evaluated in adult patients admitted to a trauma intensive care unit who received concurrent continuous enteral and/or parenteral nutrition therapy and our algorithm for at least 3 d. Blood glucose (BG) monitoring was done every 1 to 2 h with the target BG at 70 to 149 mg/dL (3.9 to 8.3 mmol/L). Nursing adherence to the RHI infusion protocol was evaluated for each patient by comparing the adjustments in insulin infusion rates documented by the nursing personnel with the prescribed adjustments per our graduated continuous intravenous RHI infusion algorithm., Results: A total of 4150 BG measurements necessitating the determination of the appropriate RHI dosage rate by nursing personnel in 40 patients occurred during the observational period. The target BG was achieved for a mean of 20 h/d and none of the patients had an episode of severe hypoglycemia (BG <40 mg/dL or 2.2 mmol/L). The overall rate of algorithm violations was 12.1%. The algorithm violations accounted for a single episode of mild to moderate hypoglycemia (BG 40 to 60 mg/dL or 2.2 to 3.3 mmol/L) in 4 patients and 65 total episodes of hyperglycemia (BG ≥150 mg/dL or 8.3 mmol/L) in 18 patients., Conclusion: An adherence rate of nearly 90% is indicative of excellent nursing adherence compared with other published paper-based algorithms that examined protocol adherence. These data, combined with our previously published glycemic control data, indicate that this RHI infusion algorithm is an effective one for hyperglycemic trauma patients receiving continuous enteral and/or parenteral nutritional therapy., (Copyright © 2012 Elsevier Inc. All rights reserved.)
- Published
- 2012
- Full Text
- View/download PDF
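The glycemic bands used in the study's outcome counts (target 70-149 mg/dL, mild/moderate hypoglycemia 40-60, severe hypoglycemia <40, hyperglycemia ≥150) can be tallied directly from a BG log. A small sketch (the function name and the example readings are invented; readings of 61-69 mg/dL fall between the study's defined bands):

```python
def classify_bg(readings_mg_dl):
    """Tally glucose readings against the study's thresholds."""
    bands = {"severe_hypo": 0, "hypo": 0, "target": 0,
             "hyper": 0, "other": 0}
    for bg in readings_mg_dl:
        if bg < 40:
            bands["severe_hypo"] += 1     # severe hypoglycemia
        elif bg <= 60:
            bands["hypo"] += 1            # mild/moderate hypoglycemia
        elif 70 <= bg <= 149:
            bands["target"] += 1          # in-target
        elif bg >= 150:
            bands["hyper"] += 1           # hyperglycemia
        else:
            bands["other"] += 1           # 61-69: below target, undefined band
    return bands

bands = classify_bg([35, 50, 100, 160, 65])
```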