352,763 results
Search Results
102. News Diversity and Recommendation Systems: Setting the Interdisciplinary Scene
- Author
-
Joris, Glen, Colruyt, Camiel, Vermeulen, Judith, Vercoutere, Stefaan, De Grove, Frederik, Van Damme, Kristin, De Clercq, Orphée, Van Hee, Cynthia, De Marez, Lieven, Hoste, Veronique, Lievens, Eva, De Pessemier, Toon, Martens, Luc, Friedewald, Michael, editor, Önen, Melek, editor, Lievens, Eva, editor, Krenn, Stephan, editor, and Fricker, Samuel, editor
- Published
- 2020
- Full Text
- View/download PDF
103. Algorithm Selection and Model Evaluation in Application Design Using Machine Learning
- Author
-
Bethu, Srikanth, Sankara Babu, B., Madhavi, K., Gopala Krishna, P., Boumerdassi, Selma, editor, Renault, Éric, editor, and Mühlethaler, Paul, editor
- Published
- 2020
- Full Text
- View/download PDF
104. Four-state rock-paper-scissors games in constrained Newman-Watts networks.
- Author
-
Zhang GY, Chen Y, Qi WK, and Qing SM
- Subjects
- Computer Simulation, Algorithms, Biological Evolution, Ecosystem, Food Chain, Game Theory, Models, Biological, Population Dynamics
- Abstract
We study the cyclic dominance of three species in two-dimensional constrained Newman-Watts networks with a four-state variant of the rock-paper-scissors game. By limiting the maximal connection distance Rmax in Newman-Watts networks with the long-range connection probability p, we depict more realistically the stochastic interactions among species within ecosystems. When we fix mobility and vary the value of p or Rmax, the Monte Carlo simulations show that the spiral waves grow in size, and the system becomes unstable and biodiversity is lost with increasing p or Rmax. These results are similar to recent results of Reichenbach et al. [Nature (London) 448, 1046 (2007)], in which they increase the mobility only without including long-range interactions. We compared extinctions with and without long-range connections and computed spatial correlation functions and correlation length. We conclude that long-range connections could improve the mobility of species, drastically changing their crossover to extinction and making the system more unstable.
- Published
- 2009
- Full Text
- View/download PDF
105. Generating Multiple Choice Questions from a Textbook: LLMs Match Human Performance on Most Metrics
- Author
-
Olney, Andrew M.
- Abstract
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled human evaluation of three conditions: a fine-tuned, augmented version of Macaw, instruction-tuned Bing Chat with zero-shot prompting, and human-authored questions from a college science textbook. Our results indicate that on six of seven measures tested, both LLMs' performance was not significantly different from human performance. Analysis of LLM errors further suggests that Macaw and Bing Chat have different failure modes for this task: Macaw tends to repeat answer options whereas Bing Chat tends to not include the specified answer in the answer options. For Macaw, removing error items from analysis results in performance on par with humans for all metrics; for Bing Chat, removing error items improves performance but does not reach human-level performance. [This paper was published in the "CEUR Workshop Proceedings," 2023.]
- Published
- 2023
106. Real-Time AI-Driven Assessment & Scaffolding That Improves Students' Mathematical Modeling during Science Inquiry
- Author
-
Adair, Amy, Segan, Ellie, Gobert, Janice, and Sao Pedro, Michael
- Abstract
Developing models and using mathematics are two key practices in internationally recognized science education standards, such as the Next Generation Science Standards (NGSS). However, students often struggle with these two intersecting practices, particularly when developing mathematical models about scientific phenomena. Formative performance-based assessments designed to elicit fine-grained data about students' competencies on these practices can be leveraged to develop embedded AI scaffolds to support students' learning. In this paper, we present the design and initial classroom test of virtual labs that automatically assess fine-grained sub-components of students' mathematical modeling competencies based on their actions within the learning environment. We describe how we leveraged underlying machine-learned and knowledge-engineered algorithms to trigger scaffolds, delivered proactively by a pedagogical agent, that address students' individual difficulties as they work. Results show that the students who received automated scaffolds for a given practice on their first virtual lab improved on that practice for the next virtual lab on the same science topic in a different scenario (a near-transfer task). These findings suggest that real-time automated scaffolds based on fine-grained assessment can foster students' mathematical modeling competencies related to the NGSS.
- Published
- 2023
107. A Bandit You Can Trust
- Author
-
Ethan Prihar, Adam Sales, and Neil Heffernan
- Abstract
This work proposes Dynamic Linear Epsilon-Greedy, a novel contextual multi-armed bandit algorithm that can adaptively assign personalized content to users while enabling unbiased statistical analysis. Traditional A/B testing and reinforcement learning approaches have trade-offs between empirical investigation and maximal impact on users. Our algorithm seeks to balance these objectives, allowing platforms to personalize content effectively while still gathering valuable data. Dynamic Linear Epsilon-Greedy was evaluated via simulation and an empirical study in the ASSISTments online learning platform. In simulation, Dynamic Linear Epsilon-Greedy performed comparably to existing algorithms and in ASSISTments, slightly increased students' learning compared to A/B testing. Data collected from its recommendations allowed for the identification of qualitative interactions, which showed high and low knowledge students benefited from different content. Dynamic Linear Epsilon-Greedy holds promise as a method to balance personalization with unbiased statistical analysis. All the data collected during the simulation and empirical study are publicly available at https://osf.io/zuwf7/. [This paper was published in: "UMAP '23: Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization," June 26-29, 2023.]
- Published
- 2023
- Full Text
- View/download PDF
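The record above describes a contextual multi-armed bandit that balances personalization with unbiased analysis. The listing does not include the authors' implementation, so the following is only a generic sketch of a linear epsilon-greedy contextual bandit; the decaying epsilon schedule and per-arm ridge-regression estimates are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

class LinearEpsilonGreedy:
    """Contextual epsilon-greedy bandit with per-arm ridge-regression
    reward estimates. The epsilon schedule (1/sqrt(t)) is an assumed,
    illustrative choice, not the paper's exact 'dynamic' rule."""

    def __init__(self, n_arms, n_features, reg=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_arms = n_arms
        # Per-arm ridge statistics: A = X^T X + reg*I, b = X^T y
        self.A = [reg * np.eye(n_features) for _ in range(n_arms)]
        self.b = [np.zeros(n_features) for _ in range(n_arms)]
        self.t = 0

    def select(self, context):
        self.t += 1
        eps = 1.0 / np.sqrt(self.t)  # decaying exploration rate (assumed)
        if self.rng.random() < eps:
            # Explore uniformly at random; these uniformly assigned steps
            # are what keep downstream statistical comparisons unbiased.
            return int(self.rng.integers(self.n_arms))
        # Exploit: pick the arm with the highest estimated reward.
        est = [context @ np.linalg.solve(self.A[a], self.b[a])
               for a in range(self.n_arms)]
        return int(np.argmax(est))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

The uniform-random exploration steps form an embedded randomized experiment, which is one way such a design can support the unbiased statistical analysis the abstract emphasizes.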
108. Anniversary paper: image processing and manipulation through the pages of Medical Physics.
- Author
-
Armato SG 3rd and van Ginneken B
- Subjects
- Algorithms, Diagnostic Imaging trends, Forecasting, Health Physics trends, Image Enhancement methods, Image Interpretation, Computer-Assisted methods, Periodicals as Topic trends
- Abstract
The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.
- Published
- 2008
- Full Text
- View/download PDF
109. Investigators from Midwest Orthopaedics at Rush Target Machine Learning (Paper 19: Evidence-based Machine Learning Algorithm To Predict Failure Following Cartilage Preservation Procedures In the Knee)
- Subjects
- Data warehousing/data mining, Algorithm, Data mining, Paper machines, Algorithms, Machine learning, Papermaking machinery
- Abstract
2023 MAY 28 (NewsRx) -- By a News Reporter-Staff News Editor at Medical Devices & Surgical Technology Week -- Fresh data on Machine Learning are presented in a new report. [...]
- Published
- 2023
110. Paper-based dosing algorithms for maintenance of warfarin anticoagulation.
- Author
-
Wilson SE, Costantini L, and Crowther MA
- Subjects
- Adult, Aged, Aged, 80 and over, Drug Administration Schedule, Drug Monitoring methods, Female, Follow-Up Studies, Humans, International Normalized Ratio, Male, Middle Aged, Treatment Outcome, Algorithms, Anticoagulants administration & dosage, Warfarin administration & dosage
- Abstract
We examined the quality of anticoagulation produced by two paper-based warfarin dosing algorithms in a randomized clinical trial of warfarin therapy. Fifty-eight patients were randomized to receive warfarin at a target international normalized ratio (INR) range of 2.1-3.0 and were followed for an average of 2.7 years. As a proportion of total patient-time, the percentage of time spent above, within, and below the therapeutic range was 11%, 71%, and 19% respectively. Fifty-six patients were randomized to receive warfarin at a higher target INR range (3.1-4.0) and had INRs within the therapeutic range for 40% of total patient time. We conclude that the performance, minimal cost, and ease-of-use of these algorithms make them well-suited for patient management within primary-care and research settings.
- Published
- 2007
- Full Text
- View/download PDF
111. Proceedings of Selected Research Paper Presentations at the 1980 Convention of the Association for Educational Communications and Technology and Sponsored by the Research and Theory Division (Denver, CO, April 21-24, 1980).
- Author
-
Simonson, Michael R. and Rohner, Daniel
- Abstract
The 31 papers in this collection represent approximately 35 percent of the manuscripts which were submitted for consideration to the Research and Theory Division of the Association for Educational Communications and Technology (AECT) for presentation at the 1980 AECT convention. All papers were subjected to a blind reviewing process and the ones finally selected represent some of the most current thinking in educational communications and technology. A listing of selected titles indicates the scope of the research paper presentations: "The Cognitive Effect in Bilingual Learners Given Different Pictorial Elaboration and Memory Tasks," "The Relationship of Communication Apprehension Level and Media Competency," "Implications of a Gestalt Approach to Research in Visual Communications," "Research on Pictures and Instructional Texts: Difficulties and Directions," "Imagery--A Return to Empirical Investigation," "A Meta-Analytic Study of Pictorial Stimulus Complexity," "Learner Interest and Instructional Design: A Conceptual Model," "The Organizing Function of Behavioral Objectives," and "Algorithmic Training for a Complex Perceptual-Motor Task." (LLS)
- Published
- 1980
112. FDA RELEASES TWO DISCUSSION PAPERS TO SPUR CONVERSATION ABOUT ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING IN DRUG DEVELOPMENT AND MANUFACTURING
- Subjects
- United States. Food and Drug Administration, Artificial intelligence, Machine learning, Computer science, Algorithms, Pharmaceutical industry, News, opinion and commentary, Algorithm
- Abstract
SILVER SPRING, MD -- The following information was released by the U.S. Food and Drug Administration (FDA): By: Patrizia Cavazzoni, M.D., Director of the Center for Drug Evaluation and Research [...]
- Published
- 2023
113. A selection of papers from MICCAI 2004: the marriage of data and prior information.
- Author
-
Haynor DR, Barillot C, and Hellier P
- Subjects
- Congresses as Topic, Diagnostic Imaging trends, Publications, Subtraction Technique, Surgery, Computer-Assisted trends, United States, Algorithms, Artificial Intelligence, Diagnostic Imaging methods, Image Enhancement methods, Image Interpretation, Computer-Assisted methods, Models, Biological, Surgery, Computer-Assisted methods
- Published
- 2005
- Full Text
- View/download PDF
114. Recently, there are quite a few papers discussing delayed dynamical system with time-varying delays.
- Author
-
Chen T
- Subjects
- Nonlinear Dynamics, Time Factors, Algorithms, Neural Networks, Computer
- Published
- 2005
- Full Text
- View/download PDF
115. Answer to the comments of K. Dobbin, J. Shih and R. Simon on the paper 'Evaluation of the gene-specific dye-bias in cDNA microarray experiments'.
- Author
-
Martin-Magniette ML, Aubert J, Cabannes E, and Daudin JJ
- Subjects
- Computer Simulation, Fluorescent Dyes, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity, Algorithms, Gene Expression Profiling methods, In Situ Hybridization, Fluorescence methods, Microscopy, Fluorescence methods, Models, Genetic, Oligonucleotide Array Sequence Analysis methods
- Published
- 2005
- Full Text
- View/download PDF
116. Tissue deformation and shape models in image-guided interventions: a discussion paper.
- Author
-
Hawkes DJ, Barratt D, Blackall JM, Chan C, Edwards PJ, Rhode K, Penney GP, McClelland J, and Hill DL
- Subjects
- Computer Simulation, Connective Tissue pathology, Elasticity, Movement, Algorithms, Connective Tissue physiopathology, Connective Tissue surgery, Image Enhancement methods, Image Interpretation, Computer-Assisted methods, Models, Biological, Subtraction Technique, Surgery, Computer-Assisted methods
- Abstract
This paper promotes the concept of active models in image-guided interventions. We outline the limitations of the rigid body assumption in image-guided interventions and describe how intraoperative imaging provides a rich source of information on spatial location of anatomical structures and therapy devices, allowing a preoperative plan to be updated during an intervention. Soft tissue deformation and variation from an atlas to a particular individual can both be determined using non-rigid registration. Established methods using free-form deformations have a very large number of degrees of freedom. Three examples of deformable models--motion models, biomechanical models and statistical shape models--are used to illustrate how prior information can be used to restrict the number of degrees of freedom of the registration algorithm and thus provide active models for image-guided interventions. We provide preliminary results from applications for each type of model.
- Published
- 2005
- Full Text
- View/download PDF
117. Reply to Nespolo's paper entitled "New invariants and dimensionless numbers: futile renaissance of old fallacies?".
- Author
-
Günther B and Morgado E
- Subjects
- Algorithms, Models, Biological
- Published
- 2005
- Full Text
- View/download PDF
118. Current concepts on bibliometrics: a brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics
- Author
-
Roldan-Valadez, Ernesto, Salazar-Ruiz, Shirley Yoselin, Ibarra-Contreras, Rafael, and Rios, Camilo
- Published
- 2019
- Full Text
- View/download PDF
119. Impact factor correlations with Scimago Journal Rank, Source Normalized Impact per Paper, Eigenfactor Score, and the CiteScore in Radiology, Nuclear Medicine & Medical Imaging journals
- Author
-
Villaseñor-Almaraz, Moises, Islas-Serrano, Juan, Murata, Chiharu, and Roldan-Valadez, Ernesto
- Published
- 2019
- Full Text
- View/download PDF
120. Exploring Semi-Supervised Learning for Audio-Based Automated Classroom Observations
- Author
-
International Association for Development of the Information Society (IADIS), Chanchal, Akchunya, and Zualkernan, Imran
- Abstract
Systematic classroom observation is often used in evaluating and enhancing the quality of classroom instruction. However, classroom observation can potentially suffer from human bias. In addition, the traditional classroom observation is too expensive for resource-constrained environments (e.g., Sub-Saharan Africa, South and Central Asia). A cost-effective automation of classroom observation could potentially enhance both quality and resolution of feedback to the teacher, and hence potentially result in enhancing quality of instruction. Audio-based automatic classroom observation using supervised deep learning techniques has yielded good results in limited contexts. However, one challenge when using supervised techniques is the high cost of collecting and labelling the classroom audio data. One solution for such data-starved scenarios is to use semi-supervised learning (SSL), which requires significantly less data and fewer labels. This paper explores an audio-adaptation of the state-of-the-art SSL FixMatch algorithm to automate classroom observation. An adaptation of the FixMatch algorithm was proposed to automate the coding for the Stallings class observation system. The proposed system was trained on classroom audio data collected in the wild. The supervised approach had an F1-score of 0.83 on 100% labeled data. The proposed FixMatch adaptation achieved an impressive F1-score of 0.81 on 20% labeled data, 0.79 on 15% labeled data, 0.76 on 10% labeled data, and 0.72 using only 5% of labeled data. This suggests that algorithms like FixMatch that use consistency regularization and pseudo-labeling have great potential for being used to automate classroom observation using a small labelled set of audio snippets.
- Published
- 2022
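The abstract above hinges on FixMatch's two ingredients: confidence-gated pseudo-labeling and consistency between weakly and strongly augmented views of the same input. A rough, framework-agnostic sketch of that unlabeled-loss computation follows; the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def fixmatch_unlabeled_loss(p_weak, p_strong, threshold=0.95):
    """Core of a FixMatch-style unlabeled objective.

    p_weak:   (N, C) class probabilities predicted on weakly augmented inputs
    p_strong: (N, C) class probabilities predicted on strongly augmented inputs

    Samples whose weak-view confidence clears `threshold` receive a hard
    pseudo-label (the weak-view argmax); the strong-view prediction is then
    penalized by cross-entropy against that pseudo-label. The loss is
    averaged over the full batch, so low-confidence samples contribute zero."""
    conf = p_weak.max(axis=1)                 # per-sample confidence
    pseudo = p_weak.argmax(axis=1)            # hard pseudo-labels
    mask = (conf >= threshold).astype(float)  # confidence gate
    ce = -np.log(p_strong[np.arange(len(pseudo)), pseudo] + 1e-12)
    return float((mask * ce).mean())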
121. Cost Optimal Production-Scheduling Model Based on VNS-NSGA-II Hybrid Algorithm—Study on Tissue Paper Mill.
- Author
-
Zhang, Huanhuan, Li, Jigeng, Hong, Mengna, Man, Yi, and He, Zhenglei
- Subjects
- PAPER mills, FLOW shop scheduling, PRODUCTION scheduling, INDUSTRIAL costs, ALGORITHMS
- Abstract
With the development of the customization concept, small-batch and multi-variety production will become one of the major production modes, especially for fast-moving consumer goods. However, this production mode has two issues: high production costs and long manufacturing periods. To address these issues, this study proposes a multi-objective optimization model for the flexible flow-shop (FFS) to optimize the production scheduling, which would maximize the production efficiency by minimizing the production cost and makespan. The model is designed based on hybrid algorithms, which combine a fast non-dominated sorting genetic algorithm (NSGA-II) and a variable neighborhood search algorithm (VNS). In this model, NSGA-II is the major algorithm to calculate the optimal solutions, and VNS improves the quality of the solutions obtained by NSGA-II. The model is verified on an example of a real-world typical FFS, a tissue papermaking mill. The results show that the scheduling model can reduce production costs by 4.2% and makespan by 6.8% compared with manual scheduling. The hybrid VNS-NSGA-II model also shows better performance than NSGA-II alone, both in production cost and makespan. Hybrid algorithms are a good solution for multi-objective optimization issues in flexible flow-shop production scheduling. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
122. Extractive Summarization Using Cohesion Network Analysis and Submodular Set Functions
- Author
-
Cioaca, Valentin Sergiu, Dascalu, Mihai, and McNamara, Danielle S.
- Abstract
Numerous approaches have been introduced to automate the process of text summarization, but only a few can be easily adapted to multiple languages. This paper introduces a multilingual text processing pipeline integrated in the open-source "ReaderBench" framework, which can be retrofitted to cover more than 50 languages. While considering the extensibility of the approach and the problem of missing labeled data for training in various languages besides English, an unsupervised algorithm was preferred to perform extractive summarization (i.e., select the most representative sentences from the original document). Specifically, two different approaches relying on text cohesion were implemented: (1) a graph-based text representation derived from Cohesion Network Analysis that extends TextRank; and (2) a class of submodular set functions. Evaluations were performed on the DUC dataset, using the Gensim implementation of TextRank as a baseline. Our results using the submodular set functions outperform the baseline. In addition, two use cases on English and Romanian languages are presented, with corresponding graphical representations for the two methods. [This paper was published in: 22nd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) Proceedings, 2020, pp. 161-168 (ISBN 978-1-7281-7628-4).]
- Published
- 2021
- Full Text
- View/download PDF
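The abstract above mentions a class of submodular set functions for extractive summarization but does not spell out the objective. A common choice in this line of work is a saturated-coverage function maximized greedily; the objective and the alpha parameter below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def greedy_summarize(sim, k, alpha=0.5):
    """Greedy extractive summarization under a monotone submodular
    saturated-coverage objective (an illustrative stand-in, not the
    paper's exact function).

    sim: (n, n) symmetric sentence-similarity matrix
    k:   number of sentences to select

    f(S) = sum_i min(cover_i(S), alpha * cover_i(V)) rewards covering every
    sentence i, but saturates so that re-covering an already well-covered
    sentence yields diminishing gains -- the hallmark of submodularity."""
    n = sim.shape[0]
    cap = alpha * sim.sum(axis=1)  # per-sentence saturation cap

    def f(S):
        if not S:
            return 0.0
        return float(np.minimum(sim[:, S].sum(axis=1), cap).sum())

    selected = []
    while len(selected) < k:
        # Pick the sentence with the largest marginal gain.
        gains = [(f(selected + [j]) - f(selected), j)
                 for j in range(n) if j not in selected]
        _, best = max(gains)
        selected.append(best)
    return selected
```

Because the objective is monotone submodular, this greedy sweep is guaranteed to reach at least a (1 - 1/e) fraction of the optimal k-sentence objective value, which is what makes submodular formulations attractive for sentence selection.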
123. An Approach to Automatic Reconstruction of Apictorial Hand Torn Paper Document.
- Author
-
Lotus, Rayappan, Varghese, Justin, and Saudia, Subash
- Subjects
- AUTOMATION, PAPER, ARCHAEOLOGY, FORENSIC sciences, ALGORITHMS
- Abstract
Digital automation in the reconstruction of apictorial hand-torn paper documents increases efficacy and reduces human effort. Reconstruction of torn documents is important in fields such as archaeology, art conservation, and forensic science. The devised novel technique for hand-torn paper documents consists of pre-processing, feature-extraction, and reconstruction phases. Torn fragments' boundaries are simplified as polygons using the Douglas-Peucker polyline simplification algorithm. Features such as the Euclidean distance and the number of sudden changes in contour orientation are extracted. Our matching criteria identify the matching counterparts. The proposed features curtail ambiguity and enrich reconstruction efficacy. Reconstruction results on hand-torn paper documents favour the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2016
124. Variational Temporal IRT: Fast, Accurate, and Explainable Inference of Dynamic Learner Proficiency
- Author
-
Kim, Yunsung, Sreechan, Piech, Chris, and Thille, Candace
- Abstract
Dynamic Item Response Models extend the standard Item Response Theory (IRT) to capture temporal dynamics in learner ability. While these models have the potential to allow instructional systems to actively monitor the evolution of learner proficiency in real time, existing dynamic item response models rely on expensive inference algorithms that scale poorly to massive datasets. In this work, we propose Variational Temporal IRT (VTIRT) for fast and accurate inference of dynamic learner proficiency. VTIRT offers orders of magnitude speedup in inference runtime while still providing accurate inference. Moreover, the proposed algorithm is intrinsically interpretable by virtue of its modular design. When applied to 9 real student datasets, VTIRT consistently yields improvements in predicting future learner performance over other learner proficiency models. [For the complete proceedings, see ED630829.]
- Published
- 2023
125. Optimizing Parameters for Accurate Position Data Mining in Diverse Classrooms Layouts
- Author
-
Shou, Tianze, Borchers, Conrad, Karumbaiah, Shamya, and Aleven, Vincent
- Abstract
Spatial analytics receive increased attention in educational data mining. A critical issue in stop detection (i.e., the automatic extraction of timestamped and located stops in the movement of individuals) is a lack of validation of stop accuracy to represent phenomena of interest. Next to a radius that an actor does not exceed for a certain duration to establish a stop, this study presents a reproducible procedure to optimize a range parameter for K-12 classrooms where students sitting within a certain vicinity of an inferred stop are tagged as being visited. This extension is motivated by adapting parameters to infer teacher visits (i.e., on-task and off-task conversations between the teacher and one or more students) in an intelligent tutoring system classroom with a dense layout. We evaluate the accuracy of our algorithm and highlight a tradeoff between precision and recall in teacher visit detection, which favors recall. We recommend that future research adjust their parameter search based on stop detection precision thresholds. This adjustment led to better cross-validation accuracy than maximizing parameters for an average of precision and recall (F1 = 0.18 compared to 0.09). As stop sample size shrinks with higher precision cutoffs, thresholds can be informed by ensuring sufficient statistical power in offline analyses. We share avenues for future research to refine our procedure further. Detecting teacher visits may benefit from additional spatial features (e.g., teacher movement trajectory) and can facilitate studying the interplay of teacher behavior and student learning. [For the complete proceedings, see ED630829.]
- Published
- 2023
126. Is Your Model 'MADD'? A Novel Metric to Evaluate Algorithmic Fairness for Predictive Student Models
- Author
-
Verger, Mélina, Lallé, Sébastien, Bouchet, François, and Luengo, Vanda
- Abstract
Predictive student models are increasingly used in learning environments due to their ability to enhance educational outcomes and support stakeholders in making informed decisions. However, predictive models can be biased and produce unfair outcomes, leading to potential discrimination against some students and possible harmful long-term implications. This has prompted research on fairness metrics meant to capture and quantify such biases. Nonetheless, so far, existing fairness metrics used in education are predictive performance-oriented, focusing on assessing biased outcomes across groups of students, without considering either the behaviors of the models or the severity of the biases in the outcomes. Therefore, we propose a novel metric, the Model Absolute Density Distance (MADD), to analyze models' discriminatory behaviors independently from their predictive performance. We also provide a complementary visualization-based analysis to enable fine-grained human assessment of how the models discriminate between groups of students. We evaluate our approach on the common task of predicting student success in online courses, using several common predictive classification models on an open educational dataset. We also compare our metric to the only predictive performance-oriented fairness metric developed in education, ABROCA. Results on this dataset show that: (1) fair predictive performance does not guarantee fair models' behaviors and thus fair outcomes; (2) there is no direct relationship between data bias and predictive performance bias nor discriminatory behaviors bias; and (3) models trained on the same data exhibit different discriminatory behaviors, which also vary across sensitive features. We thus recommend using the MADD on models that show satisfying predictive performance, to gain a finer-grained understanding of how they behave and toward whom, and to refine model selection and usage. Altogether, this work contributes to advancing the research on fair student models in education. Source code and data are in open access at https://github.com/melinaverger/MADD. [For the complete proceedings, see ED630829.]
- Published
- 2023
127. Clustering to Define Interview Participants for Analyzing Student Feedback: A Case of Legends of Learning
- Author
-
Karimov, Ayaz, Saarela, Mirka, and Kärkkäinen, Tommi
- Abstract
Within the last decade, different educational data mining techniques, particularly quantitative methods such as clustering and regression analysis, have been widely used to analyze data from educational games. In this research, we applied a quantitative data mining technique (clustering) to further investigate students' feedback. Students played educational games for a week on the educational games platform Legends of Learning, and afterwards we asked them to complete a feedback survey about their experience with the platform. To analyze the collected data, we first built clusters and selected the one prototype student closest to the centroid of each cluster to interview. Interviews were held to further explain the clusters; due to time and resource limitations, we were unable to interview all (N=60) students, so only the most representative students were interviewed. In addition to the students, we also interviewed the teacher to get her detailed feedback and observations on the use of educational games. We also asked students to take an exam before and after the research to gauge the impact of the games on their grades. Our results show that although educational games can increase students' motivation, they may negatively impact some students' grades. And even though students found playing the games interesting and fun, they would not like to play them on a daily basis. Hence, using educational games for limited durations, such as subject-revision weeks, may positively influence students' grades and motivation. [For the complete proceedings, see ED630829.]
- Published
- 2023
128. Proceedings of the International Conference on Educational Data Mining (EDM) (16th, Bengaluru, India, July 11-14, 2023)
- Author
-
International Educational Data Mining Society, Feng, Mingyu, Käser, Tanja, and Talukdar, Partha
- Abstract
The Indian Institute of Science is proud to host the fully in-person sixteenth iteration of the International Conference on Educational Data Mining (EDM) during July 11-14, 2023. EDM is the annual flagship conference of the International Educational Data Mining Society. The theme of this year's conference is "Educational data mining for amplifying human potential." Not all students or seekers of knowledge receive the education necessary to help them realize their full potential, be it due to a lack of resources or lack of access to high quality teaching. The dearth in high-quality educational content, teaching aids, and methodologies, and non-availability of objective feedback on how they could become better teachers, deprive our teachers from achieving their full potential. The administrators and policy makers lack tools for making optimal decisions such as optimal class sizes, class composition, and course sequencing. All these handicap the nations, particularly the economically emergent ones, who recognize the centrality of education for their growth. EDM-2023 has striven to focus on concepts, principles, and techniques mined from educational data for amplifying the potential of all the stakeholders in the education system. The spotlights of EDM-2023 include: (1) Five keynote talks by outstanding researchers of eminence; (2) A plenary Test of Time award talk and a Banquet talk; (3) Five tutorials (foundational as well as advanced); (4) Four thought provoking panels on contemporary themes; (5) Peer reviewed technical paper and poster presentations; (6) Doctoral students consortium; and (7) An enchanting cultural programme. [Individual papers are indexed in ERIC.]
- Published
- 2023
129. Development and Preliminary Testing of the Algopaint Unplugged Computational Thinking Assessment for Preschool Education
- Author
-
Zsoldos-Marchiș, Iuliana and Bálint-Svella, Éva
- Abstract
The concept, development and assessment of computational thinking have increasingly become the focus of research in recent years. Most of this research focuses on older children or adults. Preschool age is a sensitive period when many skills develop intensively, so the development of computational thinking skills can already begin at this age. The increased interest in this field requires the development of appropriate assessments. Currently, there are only a limited number of computational thinking assessments for preschool children. To address this shortcoming, an assessment tool, named the AlgoPaint Unplugged Computational Thinking Assessment for Preschool, was created for children aged 4-7. It is a paper-and-pencil test that examines the following computational thinking domains: algorithms and debugging. Regarding computational concepts, simple instructions, simple and nested loops, and conditionals are included in the test. For the preliminary testing, the AlgoPaint test was administered by 11 preschool teachers to 56 preschool-age children. The test was also evaluated by 6 experts in algorithmic thinking working at universities. Based on the feedback given by the teachers and the experts, and the results of the children, the AlgoPaint Computational Thinking Test was revised and finalized. The revised version of the test is included in the appendix of the paper.
- Published
- 2023
130. Mining, Analyzing, and Modeling the Cognitive Strategies Students Use to Construct Higher Quality Causal Maps
- Author
-
Allan Jeong and Hyoung Seok-Shin
- Abstract
The Jeong (2020) study found that greater use of backward and depth-first processing was associated with higher scores on students' argument maps and that analysis of only the first five nodes students placed in their maps predicted map scores. This study utilized the jMAP tool and algorithms developed in the Jeong (2020) study to determine if the same processes produce higher-quality causal maps. This study analyzed the first five nodes that students (n = 37) placed in their causal maps to reveal that: 1) use of backward, forward, breadth-first, and depth-first processing produced maps of similar quality; and 2) backward processing had three times more impact on map scores than depth-first processing, suggesting that linking events into chains using backward chaining is one approach to constructing higher-quality causal maps. These findings are compared with prior research findings and discussed in terms of noted differences in the task demands of constructing argument versus causal maps to gain insights into why, how, and when specific processes/strategies can be applied to create higher-quality causal maps and argument maps. These insights provide guidance on ways to develop diagramming and analytic tools that automate, analyze, and provide real-time support to improve the quality of students' maps, learning, understanding, and problem-solving skills. [For the full proceedings, see ED636095.]
- Published
- 2023
131. The Effects of Age and Learning with Educational Robotic Devices on Children's Algorithmic Thinking
- Author
-
Angeli, Charoula, Diakou, Panayiota, and Anastasiou, Vaso
- Abstract
Educational Robotics is increasingly used in elementary-school classrooms to develop students' algorithmic thinking and programming skills. However, most research appears descriptive and lacks experimental evidence on the effects of teaching interventions using robotics to develop algorithmic thinking. Using the robots Dash and Dot, this study examined algorithmic thinking development in groups of children aged 6, 9, and 12. The results showed a statistically significant main effect between the age of students and algorithmic thinking skills and a statistically significant main effect between intervention and algorithmic thinking. In conclusion, the findings underscore the necessity of providing learners with structured, scaffolded activities tailored to their age to effectively nurture algorithmic thinking skills when engaging in Dash and Dot activities. [For the full proceedings, see ED636095.]
- Published
- 2023
132. Multidimensional Scaling: Review and Geographical Applications, Technical Paper No. 10.
- Author
-
Association of American Geographers, Washington, DC. Commission on College Geography., Golledge, R. G., and Rushton, Gerard
- Abstract
The purpose of this monograph is to show that sufficient achievements in scaling applications have been made to justify serious study of scaling methodologies, particularly multidimensional scaling (MDS) as a tool for geographers. To be useful research, it was felt that the common methodological and technical problems that specialized researchers share with other scholars should be indicated by review of the applications, and that an adequate statement on the mathematics and heuristics of scaling algorithms is necessary. As a review of applications, subroutines in scaling programs are "dissected" in order to understand how certain critical parameters are defined and used. This research work is presented in three parts relating to 1) basic fundamentals of scaling, data requirements, and algorithm constructions and problems; 2) two step-by-step examples of the non-metric section of a multidimensional scaling algorithm; and 3) a review of geographical applications of the approach in a variety of problem areas. The position of this paper is that MDS provides a useful and constructive methodology for examining the problems of preference and choice for researchers in geography. In conclusion, some problems of using MDS are mentioned and its potential uses in geography given. (Author/ND)
- Published
- 1972
133. Noninvasive cardiac output monitor algorithms are more sophisticated and perform better than indicated in modeling paper.
- Author
-
Orr JA, Kück K, and Brewer LM
- Subjects
- Computer Simulation, Humans, Algorithms, Carbon Dioxide, Cardiac Output physiology, Monitoring, Physiologic instrumentation, Pulmonary Circulation physiology
- Published
- 2003
- Full Text
- View/download PDF
134. Introducing a Vector Space Model to Perform a Proactive Credit Scoring
- Author
-
Saia, Roberto, Carta, Salvatore, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Ghosh, Ashish, Series Editor, Fred, Ana, editor, Dietz, Jan, editor, Aveiro, David, editor, Liu, Kecheng, editor, and Bernardino, Jorge, editor
- Published
- 2019
- Full Text
- View/download PDF
135. Using Group Performance to Interpret Individual Responses to Criterion-Referenced Tests. Professional Paper 25.
- Author
-
Southwest Regional Laboratory for Educational Research and Development, Los Alamitos, CA. and Besel, Ronald
- Abstract
The contention is made that group performance data are useful in the construction and interpretation of criterion-referenced tests. The Mastery Learning Test Model, which was developed for analyzing criterion-referenced test data, is described. An estimate of the proportion of students in an instructional group having achieved the referent objectives is usable as a prior probability in interpreting individual responses. Considering instructional group performance enhances estimates of individual performance. Correlational data from a set of test items and a representative population of students are used to estimate the required item parameters. (Author)
- Published
- 1973
136. Operations Research as a Metaphor for Evaluation. Research on Evaluation Program Paper and Report Series.
- Author
-
Northwest Regional Educational Lab., Portland, OR. and Page, Ellis B.
- Abstract
One of a series of research reports examining objective principles successfully used in other fields which can lend integrity and legitimacy to evaluation, this report presents an overview of operations research (OR) as a potential source of evaluation methodology. The nature of the methods common to this discipline is summarized and the possibility of employing these methods in educational evaluation (EE) is investigated. The author points to the idea that OR is the science of evaluation since it directly addresses decision making. In demonstrating that EE is targeted on decision making, feasible alternatives, outcome predictions and probabilities for each alternative, and estimation of costs and benefits are identified as prerequisites of EE. Decision Analysis, a subfield of OR, is then provided as a model wherein all the concerns of EE are made explicit, and an optimal solution located by organizing them into a single algorithmic structure known as a decision tree. Other transportation, network and simulation models are also applied to hypothetical EE problems. After reviewing some of the literature and the current situation in the fields of OR and EE, the author concludes that EE is one of the few fields not to have taken advantage of OR's potential. (AEF)
- Published
- 1979
137. Development of a stand-alone precalculated Monte Carlo code to calculate the dose by alpha and beta emitters from the Ra-224 decay chain.
- Author
-
Hoseini-Ghahfarokhi M, Kamio Y, Mondor J, Jabbari K, and Carrier JF
- Subjects
- Humans, Phantoms, Imaging, Alpha Particles therapeutic use, Benchmarking, Protons, Algorithms
- Abstract
Background: Recent developments in alpha and beta emitting radionuclide therapy highlight the importance of developing efficient methods for patient-specific dosimetry. Traditional tabulated methods such as Medical Internal Radiation Dose (MIRD) estimate the dose at the organ level while more recent numerical methods based on Monte Carlo (MC) simulations are able to calculate dose at the voxel level. A precalculated MC (PMC) approach was developed in this work as an alternative to time-consuming fully simulated MC. Once the spatial distribution of alpha and beta emitters is determined using imaging and/or numerical methods, the PMC code can be used to achieve an accurate voxelized 3D distribution of the deposited energy without relying on full MC calculations., Purpose: To implement the PMC method to calculate energy deposited by alpha and beta particles emitted from the Ra-224 decay chain., Methods: The GEANT4 (version 10.7) MC toolkit was used to generate databases of precalculated tracks to be integrated in the PMC code as well as to benchmark its output. In this regard, energy spectra of alpha and beta particles emitted by the Ra-224 decay chain were generated using GAMOS (version 6.2.0) and imported into GEANT4 macro files. Either alpha or beta emitting sources were defined at the center of a homogeneous phantom filled with various materials such as soft tissue, bone, and lung where particles were emitted either mono-directionally (for database generation) or isotropically (for benchmarking). Two heterogeneous phantoms were used to demonstrate PMC code compatibility with boundary crossing events. Each precalculated database was generated step-by-step by storing particle track information from GEANT4 simulations followed by its integration in a PMC code developed in MATLAB. For a user-defined number of histories, one of the tracks in a given database was selected randomly and rotated randomly to reflect an isotropic emission. 
Afterward, deposited energy was divided between voxels based on step length in each voxel using a ray-tracing approach. The radial distribution of deposited energy was benchmarked against fully simulated MC calculations using GEANT4. The effect of the GEANT4 parameter StepMax on the accuracy and speed of the code was also investigated., Results: In the case of alpha decay, primary alpha particles show the highest contribution (>99%) in deposited energy compared to their secondary particles. In most cases, protons act as the main secondary particles in the deposition of energy. However, for a lung phantom, using a range cutoff parameter of 10 µm on primary alpha particles yields a higher contribution of secondary electrons than protons. Differences between deposited energy calculated by PMC and fully simulated MC are within 2% for all alpha and beta emitters in homogeneous and heterogeneous phantoms. Additionally, statistical uncertainties are less than 1% for voxels with doses higher than 5% of the maximum dose. Moreover, optimization of the parameter StepMax is necessary to achieve the best tradeoff between code accuracy and speed., Conclusions: The PMC code shows good performance for dose calculations deposited by alpha and beta emitters. As a stand-alone algorithm, it is suitable to be integrated into clinical treatment planning systems., (© 2023 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2023
- Full Text
- View/download PDF
138. Image collection and annotation platforms to establish a multi-source database of oral lesions.
- Author
-
Rajendran S, Lim JH, Yogalingam K, Kallarakkal TG, Zain RB, Jayasinghe RD, Rimal J, Kerr AR, Amtha R, Patil K, Welikala RA, Lim YZ, Remagnino P, Gibson J, Tilakaratne WM, Liew CS, Yang YH, Barman SA, Chan CS, and Cheong SC
- Subjects
- Humans, Algorithms, Mouth Neoplasms
- Abstract
Objective: To describe the development of a platform for image collection and annotation that resulted in a multi-sourced international image dataset of oral lesions to facilitate the development of automated lesion classification algorithms., Materials and Methods: We developed a web-interface, hosted on a web server to collect oral lesions images from international partners. Further, we developed a customised annotation tool, also a web-interface for systematic annotation of images to build a rich clinically labelled dataset. We evaluated the sensitivities comparing referral decisions through the annotation process with the clinical diagnosis of the lesions., Results: The image repository hosts 2474 images of oral lesions consisting of oral cancer, oral potentially malignant disorders and other oral lesions that were collected through MeMoSA® UPLOAD. Eight hundred images were annotated by seven oral medicine specialists on MeMoSA® ANNOTATE, to mark the lesion and to collect clinical labels. The sensitivity in referral decision for all lesions that required a referral for cancer management/surveillance was moderate to high depending on the type of lesion (64.3%-100%)., Conclusion: This is the first description of a database with clinically labelled oral lesions. This database could accelerate the improvement of AI algorithms that can promote the early detection of high-risk oral lesions., (© 2022 Wiley Periodicals LLC.)
- Published
- 2023
- Full Text
- View/download PDF
139. Explainability and white box in drug discovery.
- Author
-
Kırboğa KK, Abbasi S, and Küçüksille EU
- Subjects
- Drug Delivery Systems, Drug Discovery, Machine Learning, Artificial Intelligence, Algorithms
- Abstract
Recently, artificial intelligence (AI) techniques have been increasingly used to overcome the challenges in drug discovery. Although traditional AI techniques generally have high accuracy rates, their decision processes and patterns can be difficult to explain, which makes it hard to understand and interpret the outputs of algorithms used in drug discovery. To address this issue, explainable artificial intelligence (XAI) emerged as a process and method that transparently captures the results and outputs of machine learning (ML) and deep learning (DL) algorithms, so that the causes and consequences of the decision process are better understood. This can help further improve the drug discovery process and support sound decisions. Using techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) has made the drug-targeting phase clearer and more understandable. XAI methods are expected to reduce time and cost in future computational drug discovery studies. This review provides a comprehensive overview of XAI-based drug discovery and development prediction, and examines XAI mechanisms that increase confidence in AI and modeling methods. The limitations and future directions of XAI in drug discovery are also discussed., (© 2023 John Wiley & Sons Ltd.)
- Published
- 2023
- Full Text
- View/download PDF
140. Machine learning toward high-performance electrochemical sensors.
- Author
-
Giordano GF, Ferreira LF, Bezerra ÍRS, Barbosa JA, Costa JNY, Pimentel GJC, and Lima RS
- Subjects
- Reproducibility of Results, Machine Learning, Supervised Machine Learning, Artificial Intelligence, Algorithms
- Abstract
The so-coined fourth paradigm in science has reached the sensing area, with the use of machine learning (ML) toward data-driven improvements in sensitivity, reproducibility, and accuracy, along with the determination of multiple targets from a single measurement using multi-output regression models. Particularly, the use of supervised ML models trained on large data sets produced by electrical and electrochemical bio/sensors has emerged as an impacting trend in the literature by allowing accurate analyses even in the presence of usual issues such as electrode fouling, poor signal-to-noise ratio, chemical interferences, and matrix effects. In this trend article, apart from an outlook for the coming years, we present examples from the literature that demonstrate how helpful ML algorithms can be for dispensing the adoption of experimental methods to address the aforesaid interfering issues, ultimately contributing to translate testing technologies into on-site, practical, and daily applications., (© 2023. Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2023
- Full Text
- View/download PDF
141. Applied machine learning in hematopathology.
- Author
-
Dehkharghanian T, Mu Y, Tizhoosh HR, and Campbell CJV
- Subjects
- Humans, Pathologists, Workflow, Machine Learning, Algorithms
- Abstract
An increasing number of machine learning applications are being developed and applied to digital pathology, including hematopathology. The goal of these modern computerized tools is often to support diagnostic workflows by extracting and summarizing information from multiple data sources, including digital images of human tissue. Hematopathology is inherently multimodal and can serve as an ideal case study for machine learning applications. However, hematopathology also poses unique challenges compared to other pathology subspecialities when applying machine learning approaches. By modeling the pathologist workflow and thinking process, machine learning algorithms may be designed to address practical and tangible problems in hematopathology. In this article, we discuss the current trends in machine learning in hematopathology. We review currently available machine learning enabled medical devices supporting hematopathology workflows. We then explore current machine learning research trends of the field with a focus on bone marrow cytology and histopathology, and how adoption of new machine learning tools may be enabled through the transition to digital pathology., (© 2023 The Authors. International Journal of Laboratory Hematology published by John Wiley & Sons Ltd.)
- Published
- 2023
- Full Text
- View/download PDF
142. A cellular reference atlas of the human lung.
- Subjects
- Humans, Thorax, Lung, Algorithms
- Published
- 2023
- Full Text
- View/download PDF
143. Obesity and labour market outcomes in Italy: a dynamic panel data evidence with correlated random effects.
- Author
-
Pacifico A
- Subjects
- Humans, Bayes Theorem, Cross-Sectional Studies, Markov Chains, Monte Carlo Method, Italy, Algorithms, Obesity epidemiology
- Abstract
This paper investigates the effects of obesity, socio-economic variables, and individual-specific factors on work productivity across Italian regions. A dynamic panel data model with correlated random effects is used to jointly deal with incidental parameters, endogeneity issues, and functional forms of misspecification. Methodologically, a hierarchical semiparametric Bayesian approach is employed to shrink high-dimensional model classes and then obtain a subset of potential predictors affecting outcomes. Monte Carlo designs are addressed to construct exact posterior distributions and then perform accurate forecasts. Cross-sectional heterogeneity is modelled nonparametrically, allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors. Prevention policies and strategies to handle health and labour market prospects are also discussed., (© 2022. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2023
- Full Text
- View/download PDF
144. The spectrum of semantic and syntactic labour
- Author
-
Warner, Julian
- Published
- 2024
- Full Text
- View/download PDF
145. The Synthesis of the Switching Systems Optimal Parameters Search Algorithms
- Author
-
Druzhinina, Olga, Masina, Olga, Petrov, Alexey, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Ghosh, Ashish, Series Editor, Evtushenko, Yury, editor, Jaćimović, Milojica, editor, Khachay, Michael, editor, Kochetov, Yury, editor, Malkova, Vlasta, editor, and Posypkin, Mikhail, editor
- Published
- 2019
- Full Text
- View/download PDF
146. Automatic Categorization of Tweets on the Political Electoral Theme Using Supervised Classification Algorithms
- Author
-
Cumbicus-Pineda, Oscar M., Ordoñez-Ordoñez, Pablo F., Neyra-Romero, Lisset A., Figueroa-Diaz, Roberth, Barbosa, Simone Diniz Junqueira, Editorial Board Member, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Washio, Takashi, Editorial Board Member, Yuan, Junsong, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Botto-Tobar, Miguel, editor, Pizarro, Guillermo, editor, Zúñiga-Prieto, Miguel, editor, D’Armas, Mayra, editor, and Zúñiga Sánchez, Miguel, editor
- Published
- 2019
- Full Text
- View/download PDF
147. Cable Dynamic Modeling and Applications in Three-Dimensional Space
- Author
-
Gomes, Sebastião C. P., Zanela, Elisane B., Pereira, Adriana E. L., Fleury, Agenor de T., editor, Rade, Domingos A., editor, and Kurka, Paulo R. G., editor
- Published
- 2019
- Full Text
- View/download PDF
148. Penalized weighted smoothed quantile regression for high-dimensional longitudinal data.
- Author
-
Song Y, Han H, Fu L, and Wang T
- Subjects
- Humans, Computer Simulation, Linear Models, Sample Size, Models, Statistical, Algorithms
- Abstract
Quantile regression, known as a robust alternative to linear regression, has been widely used in statistical modeling and inference. In this paper, we propose a penalized weighted convolution-type smoothed method for variable selection and robust parameter estimation of the quantile regression with high dimensional longitudinal data. The proposed method utilizes a twice-differentiable and smoothed loss function instead of the check function in quantile regression without penalty, and can select the important covariates consistently using the efficient gradient-based iterative algorithms when the dimension of covariates is larger than the sample size. Moreover, the proposed method can circumvent the influence of outliers in the response variable and/or the covariates. To incorporate the correlation within each subject and enhance the accuracy of the parameter estimation, a two-step weighted estimation method is also established. Furthermore, we prove the oracle properties of the proposed method under some regularity conditions. Finally, the performance of the proposed method is demonstrated by simulation studies and two real examples., (© 2024 John Wiley & Sons Ltd.)
- Published
- 2024
- Full Text
- View/download PDF
149. Pros and cons of artificial intelligence implementation in diagnostic pathology.
- Author
-
van Diest PJ, Flach RN, van Dooijeweert C, Makineli S, Breimer GE, Stathonikos N, Pham P, Nguyen TQ, and Veta M
- Subjects
- Humans, Workflow, Artificial Intelligence, Algorithms
- Abstract
The rapid introduction of digital pathology has greatly facilitated development of artificial intelligence (AI) models in pathology that have shown great promise in assisting morphological diagnostics and quantitation of therapeutic targets. We are now at a tipping point where companies have started to bring algorithms to the market, and questions arise whether the pathology community is ready to implement AI in routine workflow. However, concerns also arise about the use of AI in pathology. This article reviews the pros and cons of introducing AI in diagnostic pathology., (© 2024 The Authors. Histopathology published by John Wiley & Sons Ltd.)
- Published
- 2024
- Full Text
- View/download PDF
150. Self-attention-based generative adversarial network optimized with color harmony algorithm for brain tumor classification.
- Author
-
S SP, A S, T K, and S D
- Subjects
- Humans, Color, Neural Networks, Computer, Deep Learning, Brain Neoplasms diagnostic imaging, Brain Neoplasms classification, Brain Neoplasms pathology, Algorithms, Magnetic Resonance Imaging, Image Processing, Computer-Assisted methods
- Abstract
This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a SAGAN optimized with a Color Harmony Algorithm. Brain cancer, with its high fatality rate worldwide, especially in the case of brain tumors, necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) for extracting radiomic features. These features are fed into the SAGAN, which is optimized with a Color Harmony Algorithm to categorize the brain images into different tumor types, including gliomas, meningioma, and pituitary tumors. This innovative approach shows promise in enhancing the precision and efficiency of brain tumor classification, holding potential for improved diagnostic outcomes in the field of medical imaging. The accuracy achieved by the proposed method for brain tumor identification is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09%, and 7.34% higher accuracy and 67.92%, 54.04%, and 59.08% less computation time when compared to the existing models: brain tumor diagnosis utilizing deep learning convolutional neural network with transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet: multi model brain tumor categorization under metaheuristic deep neural network features optimization (BTC-CNN-DEMFOA-MRI); and efficient method depending upon hierarchical deep learning neural network classifier for brain tumour categorization (BTC-Hie DNN-MRI), respectively.
- Published
- 2024
- Full Text
- View/download PDF