53,175 results
Search Results
152. Design Analytics for Mobile Learning: Scaling up the Classification of Learning Designs Based on Cognitive and Contextual Elements
- Author
Pishtari, Gerti, Prieto, Luis P., Rodriguez-Triana, Maria Jesus, and Martinez-Maldonado, Roberto
- Abstract
This research was triggered by the identified need in the literature for large-scale studies of the kinds of designs that teachers create for mobile learning (m-learning). These studies require analyses of large datasets of learning designs. The common approach followed by researchers when analyzing designs has been to manually classify them following high-level pedagogically guided coding strategies, which demands extensive work. Therefore, the first goal of this paper is to explore the use of supervised machine learning (SML) to automatically classify the textual content of m-learning designs using pedagogically relevant classifications, such as the cognitive level demanded of students to carry out specific designed tasks, the phases of inquiry learning represented in the designs, or the role that the situated environment has in the designs. Because not all SML models are transparent, but researchers often need to understand their behaviour, the second goal of this paper is to consider the trade-off between models' performance and interpretability in the context of design analytics for m-learning. To achieve these goals, we compiled a dataset of designs deployed using two tools, Avastusrada and Smartzoos. With this dataset, we trained and compared different models and feature-extraction techniques. We further optimized and compared the best-performing and most interpretable algorithms (EstBERT and Logistic Regression) to consider the second goal with an illustrative case. We found that SML can reliably classify designs, with accuracy > 0.86 and Cohen's kappa > 0.69.
- Published
- 2022
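- Code sketch
The classification pipeline itself is not reproduced in this record; as a minimal sketch of the Logistic Regression branch the abstract mentions, the following assumes scikit-learn and invents toy design texts and cognitive-level labels, reporting accuracy and Cohen's kappa as the abstract does.
```python
# Illustrative only: TF-IDF + Logistic Regression text classifier; the
# m-learning design texts and labels below are invented, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "List three bird species you observe at the pond",
    "Explain why the water level changes between seasons",
    "Photograph a leaf and name the tree it belongs to",
    "Compare your hypothesis with the measurements you took",
    "Count the benches along the trail and record the total",
    "Argue whether the park design helps local wildlife",
]
labels = ["remember", "understand", "remember", "evaluate", "remember", "evaluate"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, pred))
```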
153. A Systematic Study of the Literature on Career Guidance Expert Systems for Students: Implications for ODL
- Author
Gunwant, Shilpa, Pande, Jeetendra, and Bisht, Raj Kishor
- Abstract
The continual evolution of employment opportunities in the present industrial era has raised the need for career-long expert advice. As in other fields, technology has thankfully come to our rescue in the area of career guidance as well. This paper presents a systematic review of Expert Systems (ES) developed for career guidance, course selection, and evaluation of students in the past ten years. The popular research databases Google Scholar and Science Direct were used to obtain the relevant research papers through broad keywords. The keywords were refined to identify articles related to rule-based, case-based, and fuzzy logic-based ES used for career guidance. A total of twenty-five peer-reviewed relevant articles with full text available online were selected for the final study. In order to avoid duplication, technical reports and unreferenced literature were excluded. The review identifies the relatively high weight given by researchers to rule-based systems owing to their simplicity and broad applicability. However, the relative merits and demerits of rule-based, case-based, and fuzzy logic-based ES are highly dependent on the field of application. Nevertheless, ES find wide applications in the area of career guidance and have the potential to enhance the accessibility of career guidance for the most remote students.
- Published
- 2022
154. Evaluating Gaming Detector Model Robustness over Time
- Author
Levin, Nathan, Baker, Ryan S., Nasiar, Nidhi, Fancsali, Stephen, and Hutt, Stephen
- Abstract
Research into "gaming the system" behavior in intelligent tutoring systems (ITS) has been around for almost two decades, and detectors have been developed for many ITSs. Machine learning models can detect this behavior both in real time and in historical data. However, intelligent tutoring system designs often change over time, in terms of the design of the student interface, assessment models, and data collection log schemas. Can gaming detectors still be trusted, a decade or more after they are developed? In this research, we evaluate the robustness/degradation of gaming detectors when trained on older data logs and evaluated on current data logs. We demonstrate that some machine learning models developed using past data are still able to predict gaming behavior from student data collected 16 years later, but that there is considerable variance in how well different algorithms perform over time. We demonstrate that a classic decision tree algorithm maintained its performance while more contemporary algorithms struggled to transfer to new data, even though they exhibited better performance on unseen students in both the New and Old data sets by themselves. Examining the feature importance values provides some explanation for the differences in performance between models, and offers some insight into how we might safeguard against detector rot over time. [For the full proceedings, see ED623995.]
- Published
- 2022
155. Adversarial Bandits for Drawing Generalizable Conclusions in Non-Adversarial Experiments: An Empirical Study
- Author
Yang, Zhi-Han, Zhang, Shiyue, and Rafferty, Anna N.
- Abstract
Online educational technologies facilitate pedagogical experimentation, but typical experimental designs assign a fixed proportion of students to each condition, even if early results suggest some are ineffective. Experimental designs using multi-armed bandit (MAB) algorithms vary the probability of condition assignment for a new student based on prior results, placing more students in more effective conditions. While stochastic MAB algorithms have been used for educational experiments, they collect data that decreases power and increases false positive rates [22]. Instead, we propose using adversarial MAB algorithms, which are less exploitative and thus may exhibit more robustness. Through simulations involving data from 20+ educational experiments [29], we show data collected using adversarial MAB algorithms does not have the statistical downsides of that from stochastic MAB algorithms. Further, we explore how differences in condition variability (e.g., performance gaps between students being narrowed by an intervention) impact MAB versus uniform experimental design. Data from stochastic MAB algorithms systematically reduce power when the better arm is less variable, while increasing it when the better arm is more variable; data from the adversarial MAB algorithms results in the same statistical power as uniform assignment. Overall, these results demonstrate that adversarial MAB algorithms are a viable "off-the-shelf" solution for researchers who want to preserve the statistical power of standard experimental designs while also benefiting student participants. [For the full proceedings, see ED623995.]
- Published
- 2022
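- Code sketch
The abstract argues for adversarial rather than stochastic MAB assignment but names no specific algorithm in this record; EXP3 is the classic adversarial bandit, so this sketch (with an invented two-condition experiment and toy success rates) shows how its mixed assignment probabilities keep every condition partly explored.
```python
# Illustrative only: minimal EXP3 with exponential weights, gamma-mixed
# exploration, and importance-weighted reward estimates in [0, 1].
import math
import random

def exp3(n_arms, reward_fn, horizon, gamma=0.1):
    weights = [1.0] * n_arms
    for _ in range(horizon):
        total = sum(weights)
        # Every arm keeps at least gamma / n_arms assignment probability.
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = reward_fn(arm)          # e.g., 1 if the student succeeds
        estimate = reward / probs[arm]   # unbiased importance-weighted estimate
        weights[arm] *= math.exp(gamma * estimate / n_arms)
    return weights

# Hypothetical experiment: condition 1 is slightly more effective.
random.seed(0)
final_weights = exp3(2, lambda a: float(random.random() < 0.5 + 0.1 * a), 2000)
print(final_weights)   # condition 1 should accumulate more weight
```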
156. Invented Strategies Changing Teachers' Pedagogical Content Knowledge
- Author
Lunt, Jana
- Abstract
This study investigates how utilizing student-invented strategies in the classroom can inform teachers' pedagogical content knowledge. Two elementary school teachers participated in professional development discussing the benefits of invented strategies. Data was then gathered as the participants implemented this practice in their classrooms. Data was analyzed qualitatively to show the ways in which invented strategies can be useful in a teacher's development of their pedagogical content knowledge, including their Knowledge of Content and Students, Knowledge of Content and Teaching, as well as Knowledge of Content and Curriculum. [For the complete proceedings, see ED630210.]
- Published
- 2022
157. An Approach to Semantic Educational Content Mining Using NLP
- Author
Aisha Abdulmohsin Al Abdulqader, Amenah Ahmed Al Mulla, Gaida Abdalaziz Al Moheish, Michael Jovellanos Pinero, Conrado Vizcarra, Abdulelah Al Gosaibi, and Abdulaziz Saad Albarrak
- Abstract
The COVID-19 pandemic caused one of the most significant disruptions to the global education system. Many educational institutions faced sudden pressure to switch from face-to-face to online delivery of courses. Conventional classes are no longer the primary means of delivery; instead, online education and resources have become the prominent approach. With the increasing demand for supplementary course materials to fulfill the needs of each area of study, students began to use search engines and online resources that contain discussions, practical demonstrations, and tutorial videos to aid their studies and coursework. This study addresses the underlying challenges of retrieving relevant online educational materials by introducing an intelligent agent for semantic data mining. It works as middleware infrastructure that allows context-aware data processing and mining. YouTube was used to assess the consistency of the proposed model since it returns a large number of results in its search pool. The results showed that, using the topic-extraction method, the similarity scores of the proposed model were favorable. Furthermore, an improvement in video ranking and sorting was realized. According to the findings, this method provided users with a more productive and reliable study experience. [For the full proceedings, see ED639633.]
- Published
- 2022
158. Tachyum Unveils Air Defense Superiority Using Prodigy AI White Paper
- Subjects
Air defenses, Algorithms, Antiairborne warfare, Algorithm, Business, Business, international
- Abstract
LAS VEGAS -- Tachyum™ today released details of Prodigy®, the world's first universal processor, for advanced military and defense avionics applications in a new white paper 'Air Dominance Powered by [...]
- Published
- 2023
159. MIRSIG position paper: the use of image registration and fusion algorithms in radiotherapy
- Author
Lowther, Nicholas, Louwe, Rob, Yuen, Johnson, Hardcastle, Nicholas, Yeo, Adam, and Jameson, Michael
- Subjects
Radiological and Ultrasound Technology, Radiotherapy Planning, Computer-Assisted, Image Processing, Computer-Assisted, Biomedical Engineering, Biophysics, Humans, Radiotherapy Dosage, Radiology, Nuclear Medicine and imaging, Instrumentation, Algorithms, Biotechnology
- Abstract
The report of the American Association of Physicists in Medicine (AAPM) Task Group No. 132 published in 2017 reviewed rigid image registration and deformable image registration (DIR) approaches and solutions to provide recommendations for quality assurance and quality control of clinical image registration and fusion techniques in radiotherapy. However, that report did not include the use of DIR for advanced applications such as dose warping or warping of other matrices of interest. Considering that DIR warping tools are now readily available, discussions were hosted by the Medical Image Registration Special Interest Group (MIRSIG) of the Australasian College of Physical Scientists & Engineers in Medicine in 2018 to form a consensus on best practice guidelines. This position statement authored by MIRSIG endorses the recommendations of the report of AAPM task group 132 and expands on the best practice advice from the ‘Deforming to Best Practice’ MIRSIG publication to provide guidelines on the use of DIR for advanced applications.
- Published
- 2022
160. Personalization in Australian K-12 Classrooms: How Might Digital Teaching and Learning Tools Produce Intangible Consequences for Teachers' Workplace Conditions?
- Author
Arantes, Janine Aldous
- Abstract
Recent negotiations of 'data' in schools place focus on student assessment and NAPLAN. However, with the rise in artificial intelligence (AI) underpinning educational technology, there is a need to shift focus towards the value of teachers' digital data. By doing so, the broader debate surrounding the implications of these technologies and rights within the classroom as a workplace becomes more apparent to practitioners and educational researchers. Drawing on the Australian Human Rights Commission's "Human Rights and Technology final report," this conceptual paper focusses on teachers' rights alongside emerging technologies that use or provide predictive analytics or artificial intelligence, also called 'personalisation'. The lens of Postdigital positionality guides the discussion. Three potential consequences are presented as provocations: (1) What might happen if emerging technology uses teachers' digital data that represent current societal inequality? (2) What might happen if insights provided by such technology are inaccurate, insufficient, or unrepresentative of our teachers? (3) What might happen if the design of the AI system itself is discriminatory? This conceptual paper argues for increased discourse about technologies that use or provide predictive analytics complemented by considering potential consequences associated with algorithmic bias.
- Published
- 2023
- Full Text
- View/download PDF
161. Computerized automated algorithm-based analyses of digitized paper ECGs in Brugada syndrome
- Author
Antoine Leenhardt, Pierre Maison-Blanche, Isabelle Denjoy, Fabio Badilini, Pierre-Léo Laporte, Fabrice Extramiana, and Martino Vaglio
- Subjects
Adult, Male, Acute effects, Sudden death, Electrocardiography, QRS complex, Internal medicine, Humans, Repolarization, cardiovascular diseases, Brugada Syndrome, Class I antiarrhythmic drug, Middle Aged, Increased risk, Automated algorithm, Cardiology, Female, Cardiology and Cardiovascular Medicine, Anti-Arrhythmia Agents, Algorithms, Software
- Abstract
Background: Brugada syndrome is a rare inherited arrhythmic syndrome with a coved type 1 ST-segment elevation on ECG and an increased risk of sudden death. Many studies have evaluated risk stratification performance based on ECG-derived parameters. However, since historical Brugada patient cohorts included mostly paper ECGs, most studies have been based on manual ECG parameter measurements. We hypothesized that it would be possible to run automated algorithm-based analysis of paper ECGs. We aimed: 1) to validate the digitization process for paper ECGs in Brugada patients; and 2) to quantify the acute class I antiarrhythmic drug effect on relevant ECG parameters in Brugada syndrome.
Methods: A total of 176 patients (30% female, 43 ± 13 years old) with induced type 1 Brugada syndrome ECG were included in the study. All of the patients had paper ECGs before and during class I antiarrhythmic drug challenge. Twenty patients also had a digital ECG, in whom printouts were used to validate the digitization process. Paper ECGs were scanned and then digitized using ECGScan software, version 3.4.0 (AMPS, LLC, New York, NY, USA) to obtain FDA HL7 XML format ECGs. Measurements were automatically performed using the Bravo (AMPS, LLC, New York, NY, USA) and Glasgow algorithms.
Results: ECG parameters obtained from digital and digitized ECGs were closely correlated (r = 0.96 ± 0.07, R² = 0.93 ± 0.12). Class I antiarrhythmic drugs significantly increased the global QRS duration (from 113 ± 20 to 138 ± 23, p
Conclusions: Automated algorithm-based measurements of depolarization and repolarization parameters from digitized paper ECGs are reliable and could quantify the acute effects of class 1 antiarrhythmic drug challenge in Brugada patients. Our results support using computerized automated algorithm-based analyses from digitized paper ECGs to establish risk stratification decision trees in Brugada syndrome.
- Published
- 2021
162. A fully-automated paper ECG digitisation algorithm using deep learning
- Author
Huiyi Wu, Kiran Haresh Kumar Patel, Xinyang Li, Bowen Zhang, Christoforos Galazis, Nikesh Bajaj, Arunashis Sau, Xili Shi, Lin Sun, Yanda Tao, Harith Al-Qaysi, Lawrence Tarusan, Najira Yasmin, Natasha Grewal, Gaurika Kapoor, Jonathan W. Waks, Daniel B. Kramer, Nicholas S. Peters, and Fu Siong Ng
- Subjects
Electrocardiography, Deep Learning, Multidisciplinary, Atrial Fibrillation, Humans, Neural Networks, Computer, Algorithms
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs are currently only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm automatically segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned, and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation ranged from 90 to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70%, and the average correlation of 3-by-1 ECGs reached 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, this does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs, allowing them to be used for deep learning projects.
- Published
- 2022
163. Selected Papers of the 31st International Workshop on Combinatorial Algorithms, IWOCA 2020.
- Author
Gąsieniec, Leszek, Klasing, Ralf, and Radzik, Tomasz
- Subjects
- *MATHEMATICAL proofs, *ALGORITHMS, *CHARTS, diagrams, etc., *POLYNOMIAL time algorithms, *ONLINE algorithms, *GRAPH labelings, *HAMMING distance
- Published
- 2022
- Full Text
- View/download PDF
164. Heterogeneity and predictors of the effects of AI assistance on radiologists.
- Author
Yu F, Moehring A, Banerjee O, Salz T, Agarwal N, and Rajpurkar P
- Subjects
- Humans, Radiologists, Artificial Intelligence, Algorithms
- Abstract
The integration of artificial intelligence (AI) in medical image interpretation requires effective collaboration between clinicians and AI algorithms. Although previous studies demonstrated the potential of AI assistance in improving overall clinician performance, the individual impact on clinicians remains unclear. This large-scale study examined the heterogeneous effects of AI assistance on 140 radiologists across 15 chest X-ray diagnostic tasks and identified predictors of these effects. Surprisingly, conventional experience-based factors, such as years of experience, subspecialty and familiarity with AI tools, fail to reliably predict the impact of AI assistance. Additionally, lower-performing radiologists do not consistently benefit more from AI assistance, challenging prevailing assumptions. Instead, we found that the occurrence of AI errors strongly influences treatment outcomes, with inaccurate AI predictions adversely affecting radiologist performance on the aggregate of all pathologies and on half of the individual pathologies investigated. Our findings highlight the importance of personalized approaches to clinician-AI collaboration and the importance of accurate AI models. By understanding the factors that shape the effectiveness of AI assistance, this study provides valuable insights for targeted implementation of AI, enabling maximum benefits for individual clinicians in clinical practice.
- Published
- 2024
- Full Text
- View/download PDF
165. Craniofacial Soft-Tissue Anthropomorphic Database with Magnetic Resonance Imaging and Unbiased Diffeomorphic Registration.
- Author
Villavisanis DF, Khandelwal P, Zapatero ZD, Wagner CS, Blum JD, Cho DY, Swanson JW, Taylor JA, Yushkevich PA, and Bartlett SP
- Subjects
- Humans, Child, Male, Female, Child, Preschool, Retrospective Studies, Cephalometry methods, Magnetic Resonance Imaging methods, Imaging, Three-Dimensional methods, Image Processing, Computer-Assisted methods, Algorithms
- Abstract
Background: Objective assessment of craniofacial surgery outcomes in a pediatric population is challenging because of the complexity of patient presentations, diversity of procedures performed, and rapid craniofacial growth. There is a paucity of robust methods to quantify anatomical measurements by age and objectively compare craniofacial dysmorphology and postoperative outcomes. Here, the authors present data in developing a racially and ethnically sensitive anthropomorphic database, providing plastic and craniofacial surgeons with "normal" three-dimensional anatomical parameters with which to appraise and optimize aesthetic and reconstructive outcomes. Methods: Patients with normal craniofacial anatomy undergoing head magnetic resonance imaging (MRI) scans from 2008 to 2021 were included in this retrospective study. Images were used to construct composite (template) images with a diffeomorphic image registration method using the Advanced Normalization Tools package. Composites were thresholded to generate binary three-dimensional segmentations used for anatomical measurements in Materialise Mimics. Results: High-resolution MRI scans from 130 patients generated 12 composites from an average of 10 MRI sequences each: four 3-year-olds, four 4-year-olds, and four 5-year-olds (two male, two female, two Black, and two White). The average head circumference of the 3-, 4-, and 5-year-old composites was 50.3, 51.5, and 51.7 cm, respectively, comparable to normative data published by the World Health Organization. Conclusions: Application of a diffeomorphic registration-based image template algorithm to MRI is effective in creating composite templates to represent "normal" three-dimensional craniofacial and soft-tissue anatomy. Future research will focus on development of automated computational tools to characterize anatomical normality, generation of indices to grade preoperative severity, and quantification of postoperative results to reduce subjectivity bias.
- Published
- 2024
- Full Text
- View/download PDF
166. Boosting quantification accuracy of chemical exchange saturation transfer MRI with a spatial-spectral redundancy-based denoising method.
- Author
Chen X, Wu J, Yang Y, Chen H, Zhou Y, Lin L, Wei Z, Xu J, Chen Z, and Chen L
- Subjects
- Reproducibility of Results, Signal-To-Noise Ratio, Magnetic Resonance Imaging methods, Algorithms
- Abstract
Chemical exchange saturation transfer (CEST) is a versatile technique that enables noninvasive detections of endogenous metabolites present in low concentrations in living tissue. However, CEST imaging suffers from an inherently low signal-to-noise ratio (SNR) due to the decreased water signal caused by the transfer of saturated spins. This limitation challenges the accuracy and reliability of quantification in CEST imaging. In this study, a novel spatial-spectral denoising method, called BOOST (suBspace denoising with nOnlocal lOw-rank constraint and Spectral local-smooThness regularization), was proposed to enhance the SNR of CEST images and boost quantification accuracy. More precisely, our method initially decomposes the noisy CEST images into a low-dimensional subspace by leveraging the global spectral low-rank prior. Subsequently, a spatial nonlocal self-similarity prior is applied to the subspace-based images. Simultaneously, the spectral local-smoothness property of Z-spectra is incorporated by imposing a weighted spectral total variation constraint. The efficiency and robustness of BOOST were validated in various scenarios, including numerical simulations and preclinical and clinical conditions, spanning magnetic field strengths from 3.0 to 11.7 T. The results demonstrated that BOOST outperforms state-of-the-art algorithms in terms of noise elimination. As a cost-effective and widely available post-processing method, BOOST can be easily integrated into existing CEST protocols, consequently promoting accuracy and reliability in detecting subtle CEST effects.
- Published
- 2024
- Full Text
- View/download PDF
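- Code sketch
Only the first ingredient of BOOST, the global spectral low-rank (subspace) projection, lends itself to a few lines; this sketch applies a truncated SVD to a toy Z-spectra stack and deliberately omits the nonlocal self-similarity and spectral total-variation terms of the full method.
```python
# Illustrative only: rank-truncated SVD as a spectral subspace projection.
import numpy as np

def lowrank_subspace_denoise(cest, rank=3):
    """Project a (H, W, n_offsets) stack of noisy Z-spectra onto the
    low-dimensional subspace spanned by the leading singular vectors."""
    h, w, n = cest.shape
    casorati = cest.reshape(h * w, n)               # pixels x frequency offsets
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]   # keep leading components
    return approx.reshape(h, w, n)

rng = np.random.default_rng(0)
clean = np.ones((32, 32, 40)) * np.linspace(1.0, 0.6, 40)   # toy Z-spectra
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = lowrank_subspace_denoise(noisy)
print("noisy MSE:   ", float(np.mean((noisy - clean) ** 2)))
print("denoised MSE:", float(np.mean((denoised - clean) ** 2)))
```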
167. Alchemical Free Energy Workflows for the Computation of Protein-Ligand Binding Affinities.
- Author
Herz AM, Kellici T, Morao I, and Michel J
- Subjects
- Ligands, Workflow, Drug Discovery, Algorithms, Running
- Abstract
Alchemical free energy methods can be used for the efficient computation of relative binding free energies during preclinical drug discovery stages. In recent years, this has been facilitated further by the implementation of workflows that enable non-experts to quickly and consistently set up the required simulations. Given the correct input structures, workflows handle the difficult aspects of setting up perturbations, including consistently defining the perturbable molecule, its atom mapping and topology generation, perturbation network generation, running of the simulations via different sampling methods, and analysis of the results. Different academic and commercial workflows are discussed, including FEW, FESetup, FEPrepare, CHARMM-GUI, Transformato, PMX, QLigFEP, TIES, ProFESSA, PyAutoFEP, BioSimSpace, FEP+, Flare, and Orion. These workflows differ in various aspects, such as mapping algorithms or enhanced sampling methods. Some workflows can accommodate more than one molecular dynamics (MD) engine and use external libraries for tasks. Differences between workflows can present advantages for different use cases; however, a lack of interoperability of the workflows' components hinders systematic comparisons.
- Published
- 2024
- Full Text
- View/download PDF
168. Classification of forensic hyperspectral paper data using hybrid spectral similarity algorithms.
- Author
Devassy, Binu Melit, George, Sony, Nussbaum, Peter, and Thomas, Tessamma
- Subjects
- *SPECTRAL imaging, *FORGERY, *ALGORITHMS, *FORENSIC sciences, *CLASSIFICATION, *CONFIDENCE intervals, *CLASSIFICATION algorithms
- Abstract
Document forgeries that involve modification of the materials used, such as ink and paper, provide evidence of any malpractices being performed. Forensic specialists use different techniques to identify and classify these samples; however, the preferred methods are nondestructive, to avoid any potential damage to the original specimen under investigation. Hyperspectral imaging has already been explored in several application domains and used as a powerful method in forensic investigations to extract information about the materials under examination. To precisely classify the material information and exploit the potential of the hyperspectral imaging technique, we probed several hybrid spectral similarity measures for classifying commonly used paper samples. A comparison of these methods is quantitatively presented in this article. Hybrid spectral similarity algorithms are tested on forensic analysis of paper data. We compared the classification capabilities of various hybrid spectral similarity algorithms on hyperspectral data of 40 different paper samples. The overall accuracy (OA), kappa (K̂), the Z-score of kappa (Z(K̂)), and the 95% confidence interval of kappa (CI(K̂)) are used for comparison. SID-SAM and SID-SCA produced overall accuracies of 88% and 87%, respectively, the highest among the hybrid spectral similarity measures tested.
- Published
- 2022
- Full Text
- View/download PDF
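- Code sketch
A minimal sketch of the SID-SAM hybrid measure the abstract reports as most accurate, taken here as spectral information divergence multiplied by the tangent of the spectral angle; the reflectance spectra below are invented.
```python
# Illustrative only: SID x tan(SAM) hybrid spectral similarity.
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral information divergence between two nonnegative spectra."""
    p = x / (x.sum() + eps) + eps   # treat spectra as probability vectors
    q = y / (y.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sam(x, y):
    """Spectral angle mapper: angle between the two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sid_sam(x, y):
    return sid(x, y) * np.tan(sam(x, y))   # small value = similar spectra

a = np.array([0.30, 0.42, 0.55, 0.61, 0.58])   # hypothetical paper spectra
b = np.array([0.31, 0.44, 0.53, 0.62, 0.57])
c = np.array([0.60, 0.40, 0.30, 0.25, 0.22])
print(sid_sam(a, b), sid_sam(a, c))             # a-b should score lower
```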
169. Risk-Adapted Early Detection of Prostate Cancer 2.0: Position Paper of the German Society of Urology 2024 [Risikoadaptierte Prostatakarzinomfrüherkennung 2.0 – Positionspapier der Deutschen Gesellschaft für Urologie 2024].
- Author
Michel, Maurice Stephan, Gschwend, Jürgen E., Wullich, Bernd, Krege, Susanne, Bolenz, Christian, Merseburger, Axel S., Krabbe, Laura-Maria, Schultz-Lampel, Daniela, König, Frank, Haferkamp, Axel, and Hadaschik, Boris
- Subjects
MORTALITY prevention, RISK assessment, BIOPSY, PROSTATE-specific antigen, EARLY detection of cancer, PROSTATE tumors, MAGNETIC resonance imaging, ALGORITHMS
- Published
- 2024
- Full Text
- View/download PDF
170. Society of Skeletal Radiology white paper. Guidelines for the diagnostic management of incidental solitary bone lesions on CT and MRI in adults: bone reporting and data system (Bone-RADS)
- Author
Connie Y. Chang, Hillary W. Garner, Shivani Ahlawat, Behrang Amini, Matthew D. Bucknor, Jonathan A. Flug, Iman Khodarahmi, Michael E. Mulligan, Jeffrey J. Peterson, Geoffrey M. Riley, Mohammad Samim, Santiago A. Lozano-Calderon, and Jim S. Wu
- Subjects
Adult ,Humans ,Radiology, Nuclear Medicine and imaging ,Radiology ,Tomography, X-Ray Computed ,Magnetic Resonance Imaging ,Algorithms - Abstract
The purpose of this article is to present algorithms for the diagnostic management of solitary bone lesions incidentally encountered on computed tomography (CT) and magnetic resonance (MRI) in adults. Based on review of the current literature and expert opinion, the Practice Guidelines and Technical Standards Committee of the Society of Skeletal Radiology (SSR) proposes a bone reporting and data system (Bone-RADS) for incidentally encountered solitary bone lesions on CT and MRI with four possible diagnostic management recommendations (Bone-RADS1, leave alone; Bone-RADS2, perform different imaging modality; Bone-RADS3, perform follow-up imaging; Bone-RADS4, biopsy and/or oncologic referral). Two algorithms for CT based on lesion density (lucent or sclerotic/mixed) and two for MRI allow the user to arrive at a specific Bone-RADS management recommendation. Representative cases are provided to illustrate the usability of the algorithms.
- Published
- 2022
171. A cellular segmentation algorithm with fast customization.
- Subjects
- Algorithms, Image Processing, Computer-Assisted
- Published
- 2022
- Full Text
- View/download PDF
172. Single-cell-specific drug activities are revealed by a tensor imputation algorithm.
- Subjects
- Oligonucleotide Array Sequence Analysis, Algorithms
- Published
- 2022
- Full Text
- View/download PDF
173. Prediction of black carbon in marine engines and correlation analysis of model characteristics based on multiple machine learning algorithms.
- Author
Sun Y, Lü L, Cai YK, and Lee P
- Subjects
- Support Vector Machine, Soot, Carbon, Machine Learning, Algorithms
- Abstract
Ship black carbon emissions have caused great harm to the ecological environment. In order to estimate black carbon emissions, thereby reducing the cost of black carbon experiments, we introduced four machine learning algorithms: lasso regression, support vector machine (SVM), extreme gradient boosting (XGB), and artificial neural network (ANN). The prediction models were established using datasets acquired from similar marine engines under various steady-state conditions. The results show that SVM, XGB, and ANN have higher prediction accuracy than lasso regression, and the adjusted R² values of the models are 0.9810, 0.9850, 0.9885, and 0.6088. Although ANN shows the best prediction performance, it is inferior to SVM and XGB in terms of model stability and training cost. Then, in order to simplify the optimization of hyperparameters and at the same time improve the prediction accuracy of the model, we used three different swarm intelligence algorithms to automatically optimize the hyperparameters of SVM and XGB. In addition, we applied mutual information to measure the correlation between the features of the prediction models and black carbon concentration, and found that features related to in-cylinder combustion have a strong correlation with black carbon concentration. The findings in this paper prove the feasibility of machine learning for ship black carbon emission prediction and could provide references for reducing ship black carbon emissions and for the formulation of emission regulations.
- Published
- 2022
- Full Text
- View/download PDF
174. Learning RNA structure prediction from crowd-designed RNAs.
- Subjects
- Learning, Algorithms, RNA chemistry, RNA genetics
- Published
- 2022
- Full Text
- View/download PDF
175. Automated analysis of pen-on-paper spirals for tremor detection, quantification, and differentiation.
- Author
Rajan, Roopa, Anandapadmanabhan, Reghu, Nageswaran, Sharmila, Radhakrishnan, Vineeth, Saini, Arti, Krishnan, Syam, Gupta, Anu, Vishnu, Venugopalan Y., Pandit, Awadh K., Singh, Rajesh Kumar, Radhakrishnan, Divya M, Singh, Mamta Bhushan, Bhatia, Rohit, Srivastava, Achal, Kishore, Asha, and Padma Srivastava, M. V.
- Subjects
STATISTICS, RESEARCH, CONFIDENCE intervals, ANALYSIS of variance, TASK performance, HANDWRITING, ACCELEROMETERS, DYSTONIA, MOVEMENT disorders, TREMOR, DRAWING, DESCRIPTIVE statistics, PARKINSON'S disease, SENSITIVITY & specificity (Statistics), DATA analysis, RECEIVER operating characteristic curves, DATA analysis software, ALGORITHMS
- Abstract
OBJECTIVE: To develop an automated algorithm to detect, quantify, and differentiate between tremors using pen-on-paper spirals. METHODS: Patients with essential tremor (n = 25), dystonic tremor (n = 25), Parkinson's disease (n = 25), and healthy volunteers (HV, n = 25) drew free-hand spirals. The algorithm derived the mean deviation (MD) and tremor variability from scanned images. MD and tremor variability were compared with 1) the Bain and Findley scale, 2) the Fahn–Tolosa–Marin tremor rating scale (FTM–TRS), and 3) the peak power and total power of the accelerometer spectra. Inter- and intra-loop widths were computed to differentiate between the tremors. RESULTS: MD was higher in the tremor group (48.9±26.3) than in HV (26.4±5.3; p < 0.001). The cut-off value of 30.3 had 80.9% sensitivity and 76.0% specificity for the detection of tremor [area under the curve: 0.83; 95% confidence interval (CI): 0.75, 0.91; p < 0.001]. MD correlated with the Bain and Findley ratings (rho = 0.491, p < 0.001), FTM–TRS part B (rho = 0.260, p = 0.032), and accelerometric measures of postural tremor (total power, rho = 0.366, p < 0.001; peak power, rho = 0.402, p < 0.001). The minimum detectable change was 19.9%. Inter-loop width distinguished Parkinson's disease spirals from dystonic tremor (p < 0.001, 95% CI: 54.6, 211.1), essential tremor (p = 0.003, 95% CI: 28.5, 184.9), and HV (p = 0.036, 95% CI: -160.4, -3.9). CONCLUSION: The automated analysis of pen-on-paper spirals generated robust variables to quantify tremor and putative variables to distinguish tremors from each other. SIGNIFICANCE: This technique may be useful for epidemiological surveys and follow-up studies on tremor.
- Published
- 2023
- Full Text
- View/download PDF
176. Digitalized Control Algorithm of Bridgeless Totem-Pole PFC with a Simple Control Structure Based on the Phase Angle.
- Author
Lee, Gi-Young, Park, Hae-Chan, Ji, Min-Woo, and Kim, Rae-Young
- Subjects
ELECTRIC current rectifiers, ELECTRONIC paper, PHASE-locked loops, ALGORITHMS, ANGLES, VOLTAGE
- Abstract
Compared to the conventional boost power factor correction (PFC) converter, a totem-pole bridgeless PFC has high efficiency because it does not have an input diode rectifier stage, but a current spike may occur when the polarity of the grid voltage changes. This paper proposes a digital control algorithm for bridgeless totem-pole PFC with a simple control structure based on the phase angle of grid voltage. The proposed algorithm has a PI-based double-loop control structure and performs DC-link voltage and input inductor current control. Rectifying switches operate based on the proposed rectification algorithm using phase angle information calculated through a single-phase phase-locked loop (PLL) to prevent current spikes. The feed-forward duty ratio value is calculated according to the polarity of the grid voltage and added to the double-loop controller to perform appropriate power factor control. The performance and feasibility of the proposed control algorithm are verified through a 3 kW hardware prototype.
- Published
- 2023
- Full Text
- View/download PDF
177. A BPNN Model-Based AdaBoost Algorithm for Estimating Inside Moisture of Oil–Paper Insulation of Power Transformer.
- Author
Liu, Jiefeng, Ding, Zheshi, Fan, Xianhao, Geng, Chuhan, Song, Boshu, Wang, Qingyin, and Zhang, Yiyi
- Subjects
- *POWER transformers, *TRANSFORMER insulation, *MOISTURE, *ALGORITHMS, *MACHINE learning, *CLASSIFICATION algorithms
- Abstract
The traditional method for transformer moisture diagnosis is to establish empirical equations between feature parameters extracted from frequency domain spectroscopy (FDS) and the transformer's moisture content. However, an established empirical equation may not be applicable to a novel testing environment, resulting in an unreliable evaluation result. In this regard, it is acknowledged that FDS combined with machine learning is more suitable for estimating moisture content in a variety of test environments. Nonetheless, the accuracy of the estimation results obtained using the existing method is limited by the algorithm's inability to generalize. To address this issue, we propose an AdaBoost algorithm-enhanced back-propagation neural network (BP_AdaBoost). This study creates a database by extracting feature parameters from the FDS that characterize the insulation states of the prepared samples. Then, using the BP_AdaBoost algorithm and the newly constructed database, the moisture estimation models are trained. Finally, the estimation results are discussed for laboratory and field transformers. By comparing the proposed BP_AdaBoost algorithm to other intelligent algorithms, it is demonstrated that BP_AdaBoost not only generalizes better, but also maintains a high level of accuracy.
- Published
- 2022
- Full Text
- View/download PDF
178. Simulated Learners in Educational Technology: A Systematic Literature Review and a Turing-Like Test
- Author
Tanja Käser and Giora Alexandron
- Abstract
Simulation is a powerful approach that plays a significant role in science and technology. Computational models that simulate learner interactions and data hold great promise for educational technology as well. Amongst others, simulated learners can be used for teacher training, for generating and evaluating hypotheses on human learning, for developing adaptive learning algorithms, for building virtual worlds in which students can practice collaboration skills with simulated pals, and for testing learning environments. This paper provides the first systematic literature review on simulated learners in the broad area of artificial intelligence in education and related fields, focusing on the decade 2010-19. We analyze the trends regarding the use of simulated learners in educational technology within this decade, the purposes for which simulated learners are being used, and how the validity of the simulated learners is assessed. We find that simulated learner models tend to represent only narrow aspects of student learning. And, surprisingly, we also find that almost half of the studies using simulated learners do not provide "any" evidence that their modeling addresses the most fundamental question in simulation design -- is the model valid? This poses a threat to the reliability of results that are based on these models. Based on our findings, we propose that future research should focus on developing more complete simulated learner models. To validate these models, we suggest a standard and universal criterion, which is based on the lasting idea of Turing's Test. We discuss the properties of this test and its potential to move the field of simulated learners forward.
- Published
- 2024
- Full Text
- View/download PDF
179. Learning the Meanings of Function Words from Grounded Language Using a Visual Question Answering Model
- Author
Eva Portelance, Michael C. Frank, and Dan Jurafsky
- Abstract
Interpreting a seemingly simple function word like "or," "behind," or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learned by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives "and" and "or" without any prior knowledge of logical reasoning, as well as early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word learning difficulty is dependent on the frequency of models' input. Our findings offer proof-of-concept evidence that it is possible to learn the nuanced interpretations of function words in a visually grounded context by using non-symbolic general statistical learning algorithms, without any prior knowledge of linguistic meaning.
- Published
- 2024
- Full Text
- View/download PDF
180. Enhancing Recall in Automated Record Screening: A Resampling Algorithm
- Author
Zhipeng Hou and Elizabeth Tipton
- Abstract
Literature screening is the process of identifying all relevant records from a pool of candidate paper records in systematic review, meta-analysis, and other research synthesis tasks. This process is time-consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while screening only a proportion of candidate records with high priority. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years. However, there is a lack of screening prioritization methods with reliable performance. Our objective is to develop a screening prioritization algorithm with reliable performance for practical use, for example, an algorithm that guarantees an 80% chance of identifying at least 80% of the relevant records. Based on a target-based method proposed in Cormack and Grossman, we propose a screening prioritization algorithm using sampling with replacement. The algorithm is a wrapper algorithm that can work with any current screening prioritization algorithm to guarantee the performance. We prove, with mathematics and probability theory, that the algorithm guarantees the performance. We also run numeric experiments to test the performance of our algorithm when applied in practice. The numeric experiment results show this algorithm achieves reliable performance under different circumstances. The proposed screening prioritization algorithm can be reliably used in real-world research synthesis tasks.
- Published
- 2024
- Full Text
- View/download PDF
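- Code sketch
The paper's wrapper and its probabilistic guarantee are not specified in this record; purely as an illustration of the sampling-with-replacement idea, this sketch retrains a toy prioritizer on bootstrap resamples of the already-screened records and averages candidate scores across resamples. It is not the Hou and Tipton algorithm itself.
```python
# Illustrative only: bootstrap resampling around an arbitrary prioritizer.
import random

def resampled_priorities(screened, candidates, fit_ranker, n_resamples=50):
    """screened: list of (record, is_relevant) pairs already labeled.
    fit_ranker: callable mapping a training sample to a scoring function.
    Returns candidates sorted by average score across bootstrap resamples."""
    totals = {rec: 0.0 for rec in candidates}
    for _ in range(n_resamples):
        sample = random.choices(screened, k=len(screened))  # with replacement
        score = fit_ranker(sample)
        for rec in candidates:
            totals[rec] += score(rec) / n_resamples
    return sorted(candidates, key=lambda rec: totals[rec], reverse=True)

# Toy ranker: score = count of words shared with relevant titles.
def fit_ranker(sample):
    vocab = set(w for rec, rel in sample if rel for w in rec.split())
    return lambda rec: len(vocab & set(rec.split()))

random.seed(0)
screened = [("bandit algorithms in education", True),
            ("soil chemistry field study", False)]
candidates = ["adaptive education algorithms", "marine soil survey"]
print(resampled_priorities(screened, candidates, fit_ranker))
```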
181. Assessment of Learning Parameters for Students' Adaptability in Online Education Using Machine Learning and Explainable AI
- Author
Sadhu Prasad Kar, Amit Kumar Das, Rajeev Chatterjee, and Jyotsna Kumar Mandal
- Abstract
Technology Enabled Learning (TEL) has a major impact on the learning adaptability of learners. During the COVID-19 pandemic, there was a drastic change in learning methodology. The adaptability of learners from various domains, levels, and ages has been a significant component of research in the context of education. In this paper, the authors propose a machine learning and explainable AI based solution to identify critical learning parameters for students' adaptability level in online education. In this research the authors employ various explainable AI (XAI) algorithms, namely Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and the FEature iMportance based eXplainable AI algorithm (FAMeX), to identify the critical learning parameters that decide the adaptability level of a student. To test the efficacy of the solution, a dataset of students at several education levels in Bangladesh, collected from both online and offline surveys, was used. The results revealed are quite interesting and counterintuitive.
- Published
- 2024
- Full Text
- View/download PDF
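- Code sketch
FAMeX is not publicly documented in this record, so this sketch illustrates only the SHAP side of the abstract's toolkit: a model-agnostic explainer ranking invented learner features by mean absolute SHAP value. The dataset and feature semantics here are assumptions.
```python
# Illustrative only: global feature ranking via SHAP on synthetic data.
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                  # hypothetical learner features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy adaptability label

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.Explainer(model.predict, X)   # model-agnostic explainer
explanation = explainer(X[:100])

# Mean absolute SHAP value per feature = global importance ranking.
importance = np.abs(explanation.values).mean(axis=0)
print(importance)   # features 0 and 2 should dominate
```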
182. Optimization of Texture Rendering of 3D Building Model Based on Vertex Importance.
- Author
Shen, Wenfei, Huo, Liang, Shen, Tao, Zhang, Miao, and Li, Yucai
- Subjects
TEXTURE mapping, DATA modeling, CURVATURE, ALGORITHMS
- Abstract
In 3D building models, a large number of texture maps of different sizes increases the number of model data loads and drawing batches, which greatly reduces the drawing efficiency of the model. Therefore, this paper proposes a texture-set mapping method based on vertex importance. First, based on a 2D space boxing algorithm, the texture maps are merged and a series of Mipmap texture maps is generated. Then the vertex curvature, texture variability, and location information of each vertex are calculated, normalized, and weighted to obtain the importance of each vertex. Finally, textures of different Mipmap levels are remapped according to the importance of the vertices. The experiments show that the algorithm reduces the amount of texture data while avoiding the rendering pressure of the still-large merged data, thereby improving the rendering efficiency of the model.
- Published
- 2024
- Full Text
- View/download PDF
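- Code sketch
A minimal sketch of the weighted vertex-importance score the abstract describes (normalized curvature, texture variability, and location, weighted and summed) and a toy mapping from importance to Mipmap level; the weights and the binning rule are assumptions, not the paper's values.
```python
# Illustrative only: weighted vertex importance and Mipmap level selection.
import numpy as np

def vertex_importance(curvature, texture_var, distance, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of normalized per-vertex terms; nearer vertices score higher."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (np.ptp(v) + 1e-12)
    w1, w2, w3 = weights
    return w1 * norm(curvature) + w2 * norm(texture_var) + w3 * (1.0 - norm(distance))

def mipmap_level(importance, n_levels=4):
    """Map importance in [0, 1] to a Mipmap level; 0 = full-resolution texture."""
    return np.round((1.0 - importance) * (n_levels - 1)).astype(int)

imp = vertex_importance([0.1, 0.9, 0.4], [0.2, 0.8, 0.1], [5.0, 1.0, 9.0])
print(imp, mipmap_level(imp))
```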
183. Discussion paper: implications for the further development of the AUD2IT algorithm successfully implemented in emergency medicine.
- Author
Przestrzelski, Christopher, Jakob, Antonina, Jakob, Clemens, and Hoffmann, Felix R.
- Subjects
DOCUMENTATION, CURRICULUM, HUMAN services programs, EMERGENCY medicine, EXPERIENCE, MEDICAL records, ELECTRONIC publications, ALGORITHMS, PATIENTS' attitudes
- Abstract
The AUD2IT algorithm is a tool for structuring the data collected during emergency treatment. The goal is, on the one hand, to structure the documentation of the data and, on the other hand, to provide a standardised data structure for the report during handover of an emergency patient. The AUD2IT algorithm was developed to provide residents with a documentation aid that helps to structure medical reports without getting lost in unimportant details or forgetting important information. The sequence of anamnesis, clinical examination, consideration of differential diagnoses, technical diagnostics, interpretation, and therapy is an academic classification rather than a description of the real workflow; in a real setting, most of these steps take place simultaneously. Therefore, the application of the AUD2IT algorithm should also follow the real processes. A big advantage of the AUD2IT algorithm is that it can be used as a structure for the entire treatment process and is also usable as a handover protocol within this process, to make sure that the existing state of knowledge is secured at each team time-out. The PR-E-(AUD2IT) algorithm makes it possible to document a treatment process that, in principle, does not have to be limited to the field of emergency medicine. The PR-E-(AUD2IT) algorithm could also be used and developed further in outpatient treatment; one example could be the preparation and allocation of needed resources at the general practitioner. The algorithm is a standardised tool that can be used by healthcare professionals at any level of training and gives the user a sense of security in their daily work.
- Published
- 2024
- Full Text
- View/download PDF
184. Investigators from Midwest Orthopaedics at Rush Target Machine Learning (Paper 19: Evidence-based Machine Learning Algorithm To Predict Failure Following Cartilage Preservation Procedures In the Knee)
- Subjects
Papermaking machinery, Machine learning, Data mining, Algorithms, Data warehousing/data mining, Algorithm, Health, Health care industry
- Abstract
2023 MAY 28 (NewsRx) -- By a News Reporter-Staff News Editor at Medical Devices & Surgical Technology Week -- Fresh data on Machine Learning are presented in a new report. [...]
- Published
- 2023
185. Performance Comparison of Tree-Based Algorithms for Wheel-Spinning Behavior Prediction
- Author
González-Esparza, Lydia Marion, Jin, Hao-Yue, Lu, Chang, and Cutumisu, Maria
- Abstract
Detecting wheel-spinning behaviors of students who interact with an Intelligent Tutoring System (ITS) is important for generating pertinent and effective feedback and developing more enriching learning experiences. This analysis compares decision tree and bagged tree models of student productive persistence (i.e., mastering a skill) using the ASSISTment 2009-2010 dataset for n = 4,217 middle-school students in the United States to predict whether a student is wheel-spinning. Although both models yielded high predictive accuracy, bagged trees significantly outperformed decision trees. Results show that (1) a tree-based model is effective at accurately predicting wheel-spinning and (2) students are taking more than the average amount of attempts to master a skill.
- Published
- 2022
- Full Text
- View/download PDF
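- Code sketch
A minimal sketch of the comparison the abstract reports, a single decision tree versus bagged trees, using scikit-learn on synthetic imbalanced data rather than the ASSISTment logs.
```python
# Illustrative only: decision tree vs. bagged trees on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Imbalanced stand-in for a wheel-spinning label (not the ASSISTments data).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.8], random_state=0)

tree = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(DecisionTreeClassifier(),
                           n_estimators=100, random_state=0)

print("decision tree:", cross_val_score(tree, X, y, cv=5).mean())
print("bagged trees: ", cross_val_score(bagged, X, y, cv=5).mean())
```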
186. Interrogating Algorithmic Bias: From Speculative Fiction to Liberatory Design
- Author
Gaskins, Nettrice
- Abstract
This paper reviews algorithmic or artificial intelligence (AI) bias in education technology, especially through the lenses of speculative fiction and speculative and liberatory design. It discusses the causes of the bias and reviews literature on various ways that algorithmic/AI bias manifests in education and in communities that are underrepresented in EdTech software development. While other recent work has responded to mainstream or private sector technology development, this review looks elsewhere, to where practitioners, artists, and activists engage underrepresented communities in brainstorming processes to identify and solve tough challenges. Their creative work includes films, toolkits, applications, prototypes and other physical artifacts, and other future-facing ideas that can provide guideposts for private sector development. Acknowledging the gaps in what has been studied, this paper proposes a different approach that includes speculative and liberatory design thinking, which can help developers better understand the educational and personal contexts of underrepresented groups. Early efforts to advocate for fairness and equity in AI and EdTech by groups such as the Algorithmic Justice League, the EdTech Equity Project, and the EdSAFE AI Alliance are also explored.
- Published
- 2023
- Full Text
- View/download PDF
187. Hybrid Methods of Bibliographic Coupling and Text Similarity Measurement for Biomedical Paper Recommendation
- Author
Guo, Hongmei, Shen, Zhesi, Zeng, Jianxun, and Hong, Na
- Subjects
Cluster Analysis, Algorithms
The amount of available scientific literature is increasing, and studies have proposed various methods for evaluating document-document similarity in order to cluster or classify documents for science mapping and knowledge discovery. In this paper, we propose hybrid methods combining bibliographic coupling (BC) with linear evaluation of text or content similarity: we combined BC with BM25, Cosine, and PMRA to compare their performance with single methods in paper recommendation tasks using TREC Genomics Track 2005 datasets. For paper recommendation, BC and text-based methods complement each other, and hybrid methods were better than single methods. The combinations of BC with BM25 and BC with Cosine performed better than BC with PMRA. Performance was best when the weights of BM25, Cosine, and PMRA were 0.025, 0.2, and 0.2, respectively, in the hybrid methods. For paper recommendation, combinations of BC with text-based methods were better than BC or text-based methods used alone. The choice of method should depend on the actual data and research needs. In the future, the underlying reasons for the differences in performance, and the specific part or type of information these methods complement in text clustering or recommendation, need to be examined.
- Published
- 2022
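- Code sketch
A minimal sketch of the hybrid scoring idea: a bibliographic-coupling count plus a weighted text-similarity term, with 0.025 as the BM25 weight the abstract reports as best. How the two terms are scaled against each other is an assumption, and the reference lists and BM25 value are invented.
```python
# Illustrative only: BC count plus weighted text similarity between two papers.
def bibliographic_coupling(refs_a, refs_b):
    return len(set(refs_a) & set(refs_b))   # number of shared references

def hybrid_score(bc, text_similarity, weight=0.025):
    return bc + weight * text_similarity

refs_a = ["pmid1", "pmid2", "pmid3", "pmid7"]
refs_b = ["pmid2", "pmid3", "pmid9"]
bm25_ab = 14.2   # hypothetical BM25 similarity between the two documents
print(hybrid_score(bibliographic_coupling(refs_a, refs_b), bm25_ab))
```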
188. Machine learning reveals how complex molecules bind to catalyst surfaces.
- Subjects
- Machine Learning, Algorithms
- Published
- 2022
- Full Text
- View/download PDF
189. New White Paper Centers AI as a Critical Component of 5G Telecoms Network Success
- Subjects
Algorithms, Algorithm, Telecommunications industry
- Abstract
(GlobeNewswire) - Citing artificial intelligence (AI) and 5G as cornerstone technologies of this decade, a new whitepaper released today by InterDigital, Inc. (NASDAQ: IDCC) and written by ABI Research details [...]
- Published
- 2022
190. SDP-Based Bounds for the Quadratic Cycle Cover Problem via Cutting-Plane Augmented Lagrangian Methods and Reinforcement Learning: INFORMS Journal on Computing Meritorious Paper Awardee.
- Author
de Meijer, Frank and Sotirov, Renata
- Subjects
- *REINFORCEMENT learning, *COMBINATORIAL optimization, *TRAVELING salesman problem, *ALGORITHMS, *SEMIDEFINITE programming, *MACHINE learning, *DIRECTED graphs
- Abstract
We study the quadratic cycle cover problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting-plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting-planes. Computational results show that our SDP bounds and efficient cutting-plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which is a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment. Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. Besides this, the problem has a high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is considered to be among the most difficult combinatorial optimization problems nowadays. After removing the subtour elimination constraints, the QTSP boils down to the QCCP. Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard to solve by any SDP solver because of the large number of involved cutting-planes. Because of that, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra's projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations.
- Published
- 2021
- Full Text
- View/download PDF
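The cutting-plane augmented Lagrangian method in the entry above relies on Dykstra's projection algorithm to project onto an intersection of convex sets. Below is a minimal NumPy sketch of that projection step only, using two illustrative sets (the nonnegative orthant and a halfspace) as stand-ins for the paper's actual SDP cutting planes:

import numpy as np

def dykstra(x0, projections, n_iter=500):
    # Project x0 onto the intersection of convex sets, given the Euclidean
    # projection onto each set. The per-set correction terms distinguish
    # Dykstra's method from plain alternating projections and make it
    # converge to the true nearest point in the intersection.
    x = x0.copy()
    corrections = [np.zeros_like(x0) for _ in projections]
    for _ in range(n_iter):
        for i, project in enumerate(projections):
            y = project(x + corrections[i])
            corrections[i] = x + corrections[i] - y
            x = y
    return x

def project_orthant(v):
    return np.maximum(v, 0.0)            # nearest nonnegative point

def project_halfspace(v):
    a, b = np.ones_like(v), 1.0          # halfspace: sum(v) <= 1
    excess = a @ v - b
    return v if excess <= 0 else v - excess * a / (a @ a)

x = dykstra(np.array([0.9, 0.8, -0.3]), [project_orthant, project_halfspace])
print(x)  # ~[0.55, 0.45, 0.], the projection onto {v >= 0, sum(v) <= 1}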
191. Reply to 'Describing center of pressure movement in stabilometry by ellipse area approximation' from Agnieszka Gołąb concerning the paper 'A Review of Center of Pressure (COP) Variables to Quantify Standing Balance in Elderly People: Algorithms and Open Access Code'
- Author
-
Quijoux, Flavien and Nicolaï, Alice
- Subjects
Access to Information, Review Literature as Topic, Movement, Humans, Postural Balance, Algorithms, Aged
- Abstract
Letter to the Editor concerning "Describing center of pressure movement in stabilometry by ellipse area approximation" from Agnieszka Gołąb. (A sketch of the ellipse-area computation follows this entry.)
- Published
- 2022
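The exchange above turns on how the sway ellipse area is computed from center-of-pressure (COP) data. One common definition, among the variants debated in this literature, is the 95% prediction ellipse derived from the covariance of the two COP coordinates; a minimal sketch, where the Gaussian toy data are purely illustrative:

import numpy as np
from scipy.stats import chi2

def ellipse_area_95(cop_ap, cop_ml):
    # Area of the 95% prediction ellipse of a 2-D COP trajectory:
    # pi * chi2(0.95, df=2) * sqrt(product of covariance eigenvalues).
    cov = np.cov(np.vstack([cop_ap, cop_ml]))   # 2x2 sample covariance
    eigvals = np.linalg.eigvalsh(cov)
    return np.pi * chi2.ppf(0.95, df=2) * np.sqrt(eigvals.prod())

rng = np.random.default_rng(0)
ap = rng.normal(0, 2.0, 2000)   # anteroposterior sway, std 2.0
ml = rng.normal(0, 1.0, 2000)   # mediolateral sway, std 1.0
print(ellipse_area_95(ap, ml))  # ~pi * 5.99 * 2.0 = ~37.6 (squared units)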
192. Contemporaneous causality among office property prices of major Chinese cities with vector error correction modeling and directed acyclic graphs
- Author
-
Xu, Xiaojie and Zhang, Yun
- Published
- 2024
- Full Text
- View/download PDF
193. Managerial decision-making: exploration strategies in dynamic environments
- Author
-
Wan, Claire K. and Chih, Mingchang
- Published
- 2024
- Full Text
- View/download PDF
194. Letter on the results of the BASiNET method in the paper 'A systematic evaluation of computational tools for lncRNA identification'
- Author
-
Lopes, Fabrício Martins and Pimenta-Zanon, Matheus H
- Subjects
Computational Biology, RNA, Long Noncoding, Molecular Biology, Algorithms, Information Systems
- Abstract
This letter points out a conceptual error made by the authors of a published paper that presents a review and evaluation of computational methods for lncRNA identification. The error was made in the execution of the BASiNET method: the review used an example file (a toy model) that the BASiNET authors had made available only to show how a classification model could be stored in a file for later use. In this letter, the error is contextualized, the correct use of the BASiNET method is described, and the results of its correct execution on one of the datasets used in the review article are presented. The results clearly show the misuse of the method and demonstrate its correct use, so that it can be fairly compared with other methods in the literature and so that the misuse is not replicated by new studies.
- Published
- 2022
195. 'Re-Materialized' Medical Data: Paper-Based Transmission of Structured Medical Data Using QR-Code, for Medical Imaging Reports
- Author
-
Lauriot Dit Prevost, Arthur, Bentegeac, Raphaël, Dequesnes, Audrey, Billiau, Adrien, Baudelet, Emmanuel, Legleye, Rémi, Hubaut, Marc-Antoine, Cassagnou, Michel, Puech, Philippe, Besson, Rémi, and Chazard, Emmanuel
- Subjects
Diagnostic Imaging, Radiography, Information Storage and Retrieval, Smartphone, Algorithms
- Abstract
Although paper-based transmission of medical information might seem outdated, it has proven efficient and remains structurally safe from massive data leaks. As part of the ICIPEMIR project for improving medical imaging reports, we explored the idea of structured data storage within a medical report by embedding the data themselves in a QR-Code (rather than a URL to the data). Three different datasets from ICIPEMIR were serialized, then encoded in a QR-Code. We compared 4 compression algorithms to reduce file size before QR-encoding. YAML was the most concise (character-sparing) format and allowed a 2633-character serialized file to be embedded within a QR-Code. The best compression rate was obtained with gzip, with a compression ratio of 2.32 in 15.7 ms. Data were easily extracted and decompressed from a digital QR-Code using a simple command line. The YAML file was also successfully recovered from the printed QR-Code with both Android and iOS smartphones. The minimal detected size was 3 × 3 cm. (A sketch of this pipeline follows this entry.)
- Published
- 2022
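A minimal sketch of the pipeline the entry above describes: serialize a record to YAML, gzip-compress it, and embed the bytes themselves in a QR code. The record fields, the base64 step (used here to keep the payload in text mode), and the output file name are illustrative assumptions, not the ICIPEMIR datasets:

import base64, gzip
import qrcode                # pip install qrcode[pil]
import yaml                  # pip install pyyaml

record = {                   # hypothetical report fields, not ICIPEMIR data
    "patient_id": "ANON-0001",
    "modality": "CT",
    "findings": "No acute abnormality.",
}

raw = yaml.safe_dump(record).encode("utf-8")   # YAML: most concise format tested
packed = gzip.compress(raw)                    # gzip gave the paper's best ratio
print(f"compression ratio: {len(raw) / len(packed):.2f}")
# (a tiny toy record barely compresses; the 2.32 ratio is for full reports)

payload = base64.b64encode(packed).decode("ascii")
qrcode.make(payload).save("report_qr.png")     # data in the code itself, no URL

# Decoding side, once a scanner has returned `payload` from the printed code:
# record = yaml.safe_load(gzip.decompress(base64.b64decode(payload)))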
196. Selected Papers of the 32nd International Workshop on Combinatorial Algorithms, IWOCA 2021.
- Author
-
Flocchini, Paola and Moura, Lucia
- Subjects
- *EULERIAN graphs, *ALGORITHMS, *APPROXIMATION algorithms, *WEB hosting
- Abstract
IWOCA (International Workshop on Combinatorial Algorithms) is an annual conference series covering all aspects of combinatorial algorithms. Among the selected papers, the authors give fixed-parameter tractable algorithms for a problem parameterized by various structural parameters, a greedy loop-free algorithm for exhaustive generation, and a successor algorithm that runs in constant amortized time, among other algorithms, as well as results for the fixed-spin generalization of this problem. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
197. A review paper of optimal resource allocation algorithm in cloud environment.
- Author
-
Patadiya, Namrata and Bhatt, Nirav
- Subjects
- *RESOURCE allocation, *LITERATURE reviews, *SERVICE level agreements, *ALGORITHMS, *ELECTRONIC data processing, *CLOUD computing
- Abstract
Cloud computing has become a popular approach for processing data and running computationally expensive services on a pay-as-you-go basis. Due to the ever-increasing demand for cloud-based applications, appropriately allocating resources according to user requests while meeting the service-level agreements between customers and service providers has become increasingly complex. An efficient and versatile resource allocation method is required to deploy these resources properly and meet user needs. The task of distributing resources has grown more arduous as user demand has increased, and designing optimal solutions to this problem is a key area of research. This paper presents a literature review of proposed dynamic resource allocation approaches. [ABSTRACT FROM AUTHOR] (A baseline placement sketch follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
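The survey above covers families of allocation strategies rather than one algorithm; as a concrete point of reference, first-fit-decreasing placement of requests onto hosts is a common baseline in this literature. A toy sketch, with assumed CPU-unit capacities and request sizes:

def first_fit_decreasing(requests, capacities):
    # Assign each request (in CPU units) to the first host with enough room,
    # trying larger requests first to reduce fragmentation. Returns a dict
    # {request_index: host_index}; requests that fit nowhere map to None.
    free = list(capacities)
    placement = {}
    for idx in sorted(range(len(requests)), key=lambda i: -requests[i]):
        placement[idx] = None
        for host, room in enumerate(free):
            if requests[idx] <= room:
                free[host] -= requests[idx]
                placement[idx] = host
                break
    return placement

print(first_fit_decreasing([4, 2, 7, 3], capacities=[8, 8]))
# {2: 0, 0: 1, 3: 1, 1: None}: request 1 (2 units) exceeds all remaining room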
198. Insights into UK investment firms’ efforts to comply with MiFID II RTS 6 that governs the conduct of algorithmic trading
- Author
-
Culley, Alexander Conrad
- Published
- 2023
- Full Text
- View/download PDF
199. Rank selection for non-negative matrix factorization.
- Author
-
Cai Y, Gu H, and Kenney T
- Subjects
- Humans, Research Design, Algorithms, Microbiota
- Abstract
Non-Negative Matrix Factorization (NMF) is a widely used dimension reduction method that factorizes a non-negative data matrix into two lower-dimensional non-negative matrices: one is the basis or feature matrix, which consists of the variables, and the other is the coefficient matrix, which contains the projections of the data points onto the new basis. The features can be interpreted as sub-structures of the data. The number of sub-structures in the feature matrix is also called the rank. This parameter controls the model complexity and is the only tuning parameter of the NMF model. An appropriate rank will extract the key latent features while minimizing the noise from the original data. However, due to the large amount of optimization error always present in NMF computation, rank selection has been a difficult problem. We develop a novel rank selection method based on hypothesis testing, using a deconvolved bootstrap distribution to assess the significance level accurately. Through simulations, we compare our method with a rank selection method based on hypothesis testing using a bootstrap distribution without deconvolution, and with a method based on cross-validation; we demonstrate that our method is not only accurate at estimating the true ranks for NMF, especially when the features are hard to distinguish, but also computationally efficient. When applied to real microbiome data (e.g., OTU data and functional metagenomic data), our method also shows the ability to extract interpretable subcommunities in the data. (© 2023 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.) (A baseline rank-scan sketch follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
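The entry above proposes a deconvolved-bootstrap hypothesis test for choosing the NMF rank; that test is not reproduced here. The sketch below shows the simpler baseline it improves on, scanning candidate ranks and inspecting the reconstruction-error curve for an elbow, on synthetic data with a known rank of 4:

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
true_rank = 4
W = rng.gamma(2.0, 1.0, size=(200, true_rank))    # synthetic basis matrix
H = rng.gamma(2.0, 1.0, size=(true_rank, 50))     # synthetic coefficient matrix
X = W @ H + rng.gamma(1.0, 0.05, size=(200, 50))  # nonnegative noise

errors = {}
for rank in range(1, 9):
    model = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=0)
    model.fit(X)
    errors[rank] = model.reconstruction_err_

for rank, err in errors.items():
    # The error should drop sharply up to the true rank (4), then flatten;
    # the elbow of this curve is the classic heuristic rank estimate.
    print(rank, round(err, 2))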
200. Resampling reduces bias amplification in experimental social networks.
- Author
-
Hardy MD, Thompson BD, Krafft PM, and Griffiths TL
- Subjects
- Humans, Bayes Theorem, Bias, Motivation, Social Networking, Algorithms
- Abstract
Large-scale social networks are thought to contribute to polarization by amplifying people's biases. However, the complexity of these technologies makes it difficult to identify the mechanisms responsible and evaluate mitigation strategies. Here we show under controlled laboratory conditions that transmission through social networks amplifies motivational biases on a simple artificial decision-making task. Participants in a large behavioural experiment showed increased rates of biased decision-making when part of a social network, relative to asocial participants, in 40 independently evolving populations. Drawing on ideas from Bayesian statistics, we identify a simple adjustment to content-selection algorithms that is predicted to mitigate bias amplification by generating samples of perspectives from within an individual's network that are more representative of the wider population. In two large experiments, this strategy was effective at reducing bias amplification while maintaining the benefits of information sharing. Simulations show that this algorithm can also be effective in more complex networks. (© 2023. The Author(s), under exclusive licence to Springer Nature Limited.) (A resampling sketch follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
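The mitigation the entry above proposes is, at its core, making the sample of perspectives drawn from one's network representative of the wider population. A minimal importance-resampling sketch of that idea; the labels, rates, and weighting scheme are illustrative assumptions, not the authors' exact algorithm:

import random
from collections import Counter

random.seed(1)
population_share = {"A": 0.5, "B": 0.5}   # the wider population is balanced
network_feed = ["A"] * 80 + ["B"] * 20    # a biased local network over-samples "A"

counts = Counter(network_feed)
# Weight each item by (population share of its view) / (its share in the
# network), so under-represented perspectives are up-weighted before sampling.
weights = [population_share[x] / (counts[x] / len(network_feed))
           for x in network_feed]

resampled = random.choices(network_feed, weights=weights, k=1000)
print(Counter(resampled))  # roughly {'A': 500, 'B': 500}: amplification damped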