Search Results (6)
2. Proceedings of the International Association for Development of the Information Society (IADIS) International Conference on Cognition and Exploratory Learning in Digital Age (CELDA) (Madrid, Spain, October 19-21, 2012)
- Author
- International Association for Development of the Information Society (IADIS)
- Abstract
The IADIS CELDA 2012 Conference aimed to address the main issues concerning evolving learning processes and the supporting pedagogies and applications in the digital age. Advances in both cognitive psychology and computing have affected the educational arena. The convergence of these two disciplines is increasing at a fast pace and affecting academia and professional practice in many ways. Paradigms such as just-in-time learning, constructivism, student-centered learning, and collaborative approaches have emerged and are being supported by technological advancements such as simulations, virtual reality, and multi-agent systems. These developments have created both opportunities and areas of serious concern. The conference aimed to cover both the technological and the pedagogical issues related to these developments. The IADIS CELDA 2012 Conference received 98 submissions from more than 24 countries; of these, 29 were accepted as full papers. In addition to the presentation of full papers, short papers, and reflection papers, the conference also included keynote presentations from internationally distinguished researchers. Individual papers contain figures, tables, and references.
- Published
- 2012
3. A comparison of machine learning algorithms for the surveillance of autism spectrum disorder.
- Author
- Lee, Scott H., Maenner, Matthew J., and Heilig, Charles M.
- Subjects
AUTISM spectrum disorders, MACHINE learning, SUPPORT vector machines, DEEP learning, CLASSIFICATION algorithms, SUPERVISED learning
- Abstract
Objective: The Centers for Disease Control and Prevention (CDC) coordinates a labor-intensive process to measure the prevalence of autism spectrum disorder (ASD) among children in the United States. Random forest methods have shown promise in speeding up this process, but they lag behind human classification accuracy by about 5%. We explore whether more recently available document classification algorithms can close this gap. Materials and methods: Using data gathered from a single surveillance site, we applied 8 supervised learning algorithms to predict whether children meet the case definition for ASD based solely on the words in their evaluations. We compared the algorithms' performance across 10 random train-test splits of the data, using classification accuracy, F1 score, and number of positive calls to evaluate their potential use for surveillance. Results: Across the 10 train-test cycles, the random forest and the support vector machine with Naive Bayes features (NB-SVM) each achieved slightly more than 87% mean accuracy. The NB-SVM produced significantly more false negatives than false positives (P = 0.027), but the random forest did not, making its prevalence estimates very close to the true prevalence in the data. The best-performing neural network performed similarly to the random forest on both measures. Discussion: The random forest performed as well as more recently available models like the NB-SVM and the neural network, and it also produced good prevalence estimates. The NB-SVM may not be a good candidate for use in a fully automated surveillance workflow due to increased false negatives. More sophisticated algorithms, like hierarchical convolutional neural networks, may not be feasible to train due to characteristics of the data. Current algorithms might perform better if the data are abstracted and processed differently and if they take into account information about the children in addition to their evaluations. Conclusion: Deep learning models performed similarly to traditional machine learning methods at predicting the clinician-assigned case status for CDC's autism surveillance system. While deep learning methods had limited benefit in this task, they may have applications in other surveillance systems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
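The NB-SVM named in this abstract combines Naive Bayes log-count ratios with a linear SVM (in the style of Wang & Manning's baseline). A minimal sketch on invented toy documents — none of this is the study's data or code:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy stand-ins for evaluation texts (illustrative only)
docs = ["meets asd criteria", "typical development noted",
        "asd behaviors observed", "no concerns noted"]
y = np.array([1, 0, 1, 0])  # 1 = meets ASD case definition

vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs).toarray().astype(float)

# Naive Bayes log-count ratios with add-one smoothing:
# r_w = log( P(w | positive) / P(w | negative) )
p = 1.0 + X[y == 1].sum(axis=0)
q = 1.0 + X[y == 0].sum(axis=0)
r = np.log((p / p.sum()) / (q / q.sum()))

# Linear SVM trained on NB-scaled features: this is the "SVM with
# Naive Bayes features" trick
clf = LinearSVC(C=1.0).fit(X * r, y)

x_new = vec.transform(["asd criteria observed"]).toarray().astype(float)
pred = clf.predict(x_new * r)
```

Scaling each word count by its log-count ratio lets the SVM start from the generative NB evidence while still learning a discriminative margin.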
4. Downscaling satellite soil moisture using geomorphometry and machine learning.
- Author
- Guevara, Mario and Vargas, Rodrigo
- Subjects
SOIL moisture, MICROWAVE remote sensing, SATELLITE-based remote sensing, MACHINE learning, DOWNSCALING (Climatology)
- Abstract
Annual soil moisture estimates are useful to characterize trends in the climate system, in the capacity of soils to retain water, and for predicting land-atmosphere interactions. The main source of soil moisture spatial information across large areas (e.g., continents) is satellite-based microwave remote sensing. However, satellite soil moisture datasets have coarse spatial resolution (e.g., 25–50 km grids), and large areas from regional to global scales have spatial information gaps. We provide an alternative approach to predict soil moisture spatial patterns (and associated uncertainty) with higher spatial resolution across areas where no information is otherwise available. This approach relies on geomorphometry-derived terrain parameters and machine learning models to improve the statistical accuracy and the spatial resolution (from 27 km to 1 km grids) of satellite soil moisture information across the conterminous United States on an annual basis (1991–2016). We derived 15 primary and secondary terrain parameters from a digital elevation model. We trained a machine learning algorithm (i.e., kernel weighted nearest neighbors) for each year. Terrain parameters were used as predictors, and annual satellite soil moisture estimates were used to train the models. The explained variance for all model-years was >70% (10-fold cross-validation). The 1 km soil moisture grids (compared to the original satellite soil moisture estimates) had higher correlations (improving from r² = 0.1 to r² = 0.46) and lower bias (improving from 0.062 to 0.057 m³/m³) with field soil moisture observations from the North American Soil Moisture Database (n = 668 locations with available data between 1991–2013; 0–5 cm depth). We conclude that the fusion of geomorphometry methods and satellite soil moisture estimates is useful to increase the spatial resolution and accuracy of satellite-derived soil moisture. This approach can be applied to other satellite-derived soil moisture estimates and regions across the world. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
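The kernel weighted nearest neighbors regression the abstract describes can be sketched in a few lines: predict soil moisture at a fine-grid cell as a kernel-weighted average of the targets at the k most similar training cells in terrain-parameter space. The data and bandwidth below are synthetic placeholders, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: two terrain parameters (e.g., elevation, slope)
# at coarse-grid cells, with satellite soil moisture (m³/m³) as target
X_train = rng.uniform(0, 1, size=(200, 2))
y_train = 0.3 - 0.2 * X_train[:, 0] + 0.05 * rng.normal(size=200)

def kknn_predict(X_new, X, y, k=10, bandwidth=0.1):
    """Kernel-weighted k-nearest-neighbors regression (Gaussian kernel)."""
    preds = []
    for x in X_new:
        d = np.linalg.norm(X - x, axis=1)       # distance in predictor space
        idx = np.argsort(d)[:k]                 # k nearest training cells
        w = np.exp(-((d[idx] / bandwidth) ** 2))  # closer cells weigh more
        preds.append(np.sum(w * y[idx]) / np.sum(w))
    return np.array(preds)

# Predict soil moisture at hypothetical fine-grid cells
X_fine = rng.uniform(0, 1, size=(5, 2))
preds = kknn_predict(X_fine, X_train, y_train)
print(preds)
```

In the paper's setting the predictors would be the 15 DEM-derived terrain parameters and the training targets the 27 km satellite estimates, with one model fit per year.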
5. Personalized survival predictions via Trees of Predictors: An application to cardiac transplantation.
- Author
- Yoon, Jinsung, Zame, William R., Banerjee, Amitava, Cadeiras, Martin, Alaa, Ahmed M., and van der Schaar, Mihaela
- Subjects
HEART transplantation complications, COMPLICATIONS from organ transplantation, HEART transplant recipients, MACHINE learning, PUBLIC health
- Abstract
Background: Risk prediction is crucial in many areas of medical practice, such as cardiac transplantation, but existing clinical risk-scoring methods have suboptimal performance. We develop a novel risk prediction algorithm and test its performance on the database of all patients who were registered for cardiac transplantation in the United States during 1985–2015. Methods and findings: We develop a new, interpretable methodology (ToPs: Trees of Predictors) built on the principle that specific predictive (survival) models should be used for specific clusters within the patient population. ToPs discovers these specific clusters and the specific predictive model that performs best for each cluster. In comparison with existing clinical risk-scoring methods and state-of-the-art machine learning methods, our method provides significant improvements in survival predictions, both post- and pre-cardiac transplantation. For instance, in terms of 3-month survival post-transplantation, our method achieves an AUC of 0.660; the best clinical risk-scoring method (RSS) achieves 0.587. In terms of 3-year survival/mortality predictions post-transplantation (in comparison to RSS), holding specificity at 80.0%, our algorithm correctly predicts survival for 2,442 (14.0%) more patients (of 17,441 who actually survived); holding sensitivity at 80.0%, our algorithm correctly predicts mortality for 694 (13.0%) more patients (of 5,339 who did not survive). ToPs achieves similar improvements for other time horizons and for predictions pre-transplantation. ToPs discovers the most relevant features (covariates), uses available features to best advantage, and can adapt to changes in clinical practice. Conclusions: We show that, in comparison with existing clinical risk-scoring methods and other machine learning methods, ToPs significantly improves survival predictions both post- and pre-cardiac transplantation. ToPs provides a more accurate, personalized approach to survival prediction that can benefit patients, clinicians, and policymakers in making clinical decisions and setting clinical policy. Because survival prediction is widely used in clinical decision-making across diseases and clinical specialties, the implications of our methods are far-reaching. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
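The core ToPs principle — fit a specific predictor to each discovered cluster, and accept a split only if it beats a single global model on held-out data — can be illustrated in miniature. This is a toy sketch of that idea, not the authors' algorithm; the data are synthetic and the "predictor" is just a class rate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort with two survival regimes depending on one feature
n = 400
X = rng.uniform(0, 1, size=(n, 1))
p_true = np.where(X[:, 0] < 0.5, 0.85, 0.15)   # survival probability
y = (rng.uniform(size=n) < p_true).astype(float)

train, val = slice(0, 300), slice(300, n)

def brier(pred, y):
    """Squared-error (Brier) loss for probabilistic predictions."""
    return float(np.mean((pred - y) ** 2))

# One global predictor vs. per-cluster predictors under the split x < 0.5
global_rate = y[train].mean()
left = X[train, 0] < 0.5
rate_l, rate_r = y[train][left].mean(), y[train][~left].mean()

val_left = X[val, 0] < 0.5
split_pred = np.where(val_left, rate_l, rate_r)
loss_global = brier(np.full(100, global_rate), y[val])
loss_split = brier(split_pred, y[val])
print(loss_global, loss_split)   # the split wins when regimes differ
```

ToPs generalizes this recursively, searching over features and thresholds and over a library of candidate survival models for each resulting cluster.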
6. Data-Driven Decisions for Reducing Readmissions for Heart Failure: General Methodology and Case Study.
- Author
- Bayati, Mohsen, Braverman, Mark, Gillam, Michael, Mack, Karen M., Ruiz, George, Smith, Mark S., and Horvitz, Eric
- Subjects
PATIENT readmissions, CONGESTIVE heart failure, HOSPITAL admission & discharge, MEDICAL decision making, COST effectiveness, PREVENTION
- Abstract
Background: Several studies have focused on stratifying patients according to their level of readmission risk, fueled in part by incentive programs in the U.S. that link readmission rates to the annual payment update by Medicare. Patient-specific predictions about readmission have not seen widespread use because of their limited accuracy and questions about the efficacy of using measures of risk to guide clinical decisions. We construct a predictive model for readmissions for congestive heart failure (CHF) and study how its predictions can be used to perform patient-specific interventions. We assess the cost-effectiveness of a methodology that combines prediction and decision making to allocate interventions. The results highlight the importance of combining predictions with decision analysis. Methods: From a retrospective database of 793 hospital visits for heart failure, we construct a statistical classifier that predicts the likelihood that patients will be rehospitalized within 30 days of discharge. We introduce a decision analysis that uses the predictions to guide decisions about post-discharge interventions. We perform a cost-effectiveness analysis of 379 additional hospital visits that were not included in either the formulation of the classifiers or the decision analysis. We report the performance of the methodology and show the overall expected value of employing a real-time decision system. Findings: For the cohort studied, readmissions are associated with a mean cost of $13,679 with a standard error of $1,214. Given a post-discharge plan that costs $1,300 and that reduces 30-day rehospitalizations by 35%, use of the proposed methods would provide an 18.2% reduction in rehospitalizations and save 3.8% of costs.
Such analyses are especially valuable in the common situation where it is not economically feasible to provide programs to all patients. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
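The abstract's cost figures imply a simple expected-value rule for when the post-discharge program pays for itself. The threshold below is our back-of-envelope derivation from those figures, not a number stated in the paper:

```python
# Expected-value intervention rule implied by the abstract's figures:
# intervene when the expected savings from a 35% relative reduction in
# readmission risk exceed the $1,300 intervention cost.
readmit_cost = 13_679      # mean cost of a readmission (from the abstract)
intervention_cost = 1_300  # cost of the post-discharge program
risk_reduction = 0.35      # relative reduction in 30-day rehospitalizations

def should_intervene(p_readmit):
    """True when expected savings exceed the intervention cost."""
    expected_savings = p_readmit * risk_reduction * readmit_cost
    return expected_savings > intervention_cost

# Break-even predicted-risk threshold: c / (reduction * readmit_cost)
threshold = intervention_cost / (risk_reduction * readmit_cost)
print(round(threshold, 3))  # ≈ 0.272
```

So under these figures, the classifier's predictions become actionable around a 27% predicted readmission risk — which is why the paper pairs prediction with decision analysis rather than reporting accuracy alone.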
Discovery Service for Jio Institute Digital Library