44,154 results
Search Results
2. Proceedings of the International Conference on Educational Data Mining (EDM) (16th, Bengaluru, India, July 11-14, 2023)
- Author
International Educational Data Mining Society, Feng, Mingyu, Käser, Tanja, and Talukdar, Partha
- Abstract
The Indian Institute of Science is proud to host the fully in-person sixteenth iteration of the International Conference on Educational Data Mining (EDM) during July 11-14, 2023. EDM is the annual flagship conference of the International Educational Data Mining Society. The theme of this year's conference is "Educational data mining for amplifying human potential." Not all students or seekers of knowledge receive the education necessary to help them realize their full potential, be it due to a lack of resources or a lack of access to high-quality teaching. The dearth of high-quality educational content, teaching aids, and methodologies, and the non-availability of objective feedback on how they could become better teachers, deprive our teachers of achieving their full potential. Administrators and policy makers lack tools for making optimal decisions, such as on class sizes, class composition, and course sequencing. All of these handicap nations, particularly economically emergent ones, that recognize the centrality of education to their growth. EDM-2023 has striven to focus on concepts, principles, and techniques mined from educational data for amplifying the potential of all stakeholders in the education system. The spotlights of EDM-2023 include: (1) Five keynote talks by eminent researchers; (2) A plenary Test of Time award talk and a Banquet talk; (3) Five tutorials (foundational as well as advanced); (4) Four thought-provoking panels on contemporary themes; (5) Peer-reviewed technical paper and poster presentations; (6) A doctoral students consortium; and (7) An enchanting cultural programme. [Individual papers are indexed in ERIC.]
- Published
- 2023
3. ChatGPT and Bard in Education: A Comparative Review
- Author
Gustavo Simas da Silva and Vânia Ribas Ulbricht
- Abstract
ChatGPT and Bard, two chatbots powered by Large Language Models (LLMs), are propelling the educational sector towards a new era of instructional innovation. Within this educational paradigm, the present investigation conducts a comparative analysis of these groundbreaking chatbots, scrutinizing their distinct operational characteristics and applications as depicted in current scholarly discourse. ChatGPT emerges as an exemplary tool in task-oriented textual interactions, while Bard brandishes unique features such as Text-To-Speech (TTS) functionality, which enhances accessibility and inclusive education, as well as integration with Google Workspace applications. This research critically examines their utilization in various spheres such as pedagogy, academic research, Massive Open Online Courses (MOOCs), mathematics, and software programming. Findings accentuate ChatGPT's superior efficacy in content drafting, code generation, language translation, and providing clinically precise responses, notwithstanding Bard's significant potential encapsulated in its exclusive features. Furthermore, the study traverses crucial ethical aspects, including privacy concerns and inherent bias, underscoring the profound implications of these Artificial Intelligence (AI) technologies on literature and advocating against indiscriminate reliance on such models. [For the full proceedings, see ED636095.]
- Published
- 2023
4. Yet Another Predictive Model? Fair Predictions of Students' Learning Outcomes in an Online Math Learning Platform
- Author
Li, Chenglu, Xing, Wanli, and Leite, Walter
- Abstract
To support online learners at a large scale, extensive studies have adopted machine learning (ML) techniques to analyze students' artifacts and predict their learning outcomes automatically. However, limited attention has been paid to the fairness of prediction with ML in educational settings. This study intends to fill the gap by introducing a generic algorithm that can orchestrate with existing ML algorithms while yielding fairer results. Specifically, we have implemented logistic regression with the Seldonian algorithm and compared the fairness-aware model with fairness-unaware ML models. The results show that the Seldonian algorithm can achieve comparable predictive performance while producing notably higher fairness. [This paper was published in: "LAK21: 11th International Learning Analytics and Knowledge Conference (LAK21), April 12-16, 2021, Irvine, CA, USA," ACM, 2021.]
- Published
- 2021
- Full Text
- View/download PDF
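The fairness comparison described above hinges on a measurable notion of group fairness. As an illustrative sketch (not the paper's Seldonian implementation), the snippet below computes a demographic parity gap, the difference in positive-prediction rates between two student groups, for a classifier's outputs. The predictions and group labels are invented for demonstration:

```python
# Illustrative sketch: measuring demographic parity for classifier outputs
# on two student groups. Data and group names are hypothetical; the
# Seldonian approach in the paper additionally enforces such constraints
# during training rather than just auditing them afterwards.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    Assumes exactly two distinct group labels.
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(preds_g) / len(preds_g)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical predictions (1 = predicted to pass) for students in groups A/B.
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
```

A fairness-aware learner would treat a bound on this gap as a constraint while optimizing predictive accuracy.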
5. Modeling Consistency Using Engagement Patterns in Online Courses
- Author
Zhou, Jianing and Bhat, Suma
- Abstract
Consistency of learning behaviors is known to play an important role in learners' engagement in a course and impact their learning outcomes. Despite significant advances in the area of learning analytics (LA) in measuring various self-regulated learning behaviors, using LA to measure consistency of online course engagement patterns remains largely unexplored. This study focuses on modeling consistency of learners in online courses to address this research gap. Toward this, we propose a novel unsupervised algorithm that combines sequence pattern mining and ideas from information retrieval with a clustering algorithm to first extract engagement patterns of learners, represent learners in a vector space of these patterns and finally group them into groups with similar consistency levels. Using clickstream data recorded in a popular learning management system over two offerings of a STEM course, we validate our proposed approach to detect learners that are inconsistent in their behaviors. We find that our method not only groups learners by consistency levels, but also provides reliable instructor support at an early stage in a course. [This paper was published in: "LAK21: 11th International Learning Analytics and Knowledge Conference (LAK21), April 12-16, 2021, Irvine, CA, USA." ACM, 2021, pp. 226-236.]
- Published
- 2021
- Full Text
- View/download PDF
6. Generating Response-Specific Elaborated Feedback Using Long-Form Neural Question Answering
- Author
Olney, Andrew M.
- Abstract
In contrast to simple feedback, which provides students with the correct answer, elaborated feedback provides an explanation of the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI in education systems because it requires dynamic explanations, which traditionally require logical reasoning and knowledge engineering to generate. This study presents an alternative approach that formulates elaborated feedback in terms of long-form question answering (LFQA). An off-the-shelf LFQA system was evaluated by human raters in a 2x2x2x2 ablation design that manipulated the context documents given to the LFQA model and the post-processing of model output. Results indicate that context manipulations improve performance but that postprocessing can have detrimental results. [This paper was published in: "Proceedings of the Eighth ACM Conference on Learning @ Scale," 2021, pp. 27-36.]
- Published
- 2021
- Full Text
- View/download PDF
7. Recommendation Systems on E-Learning and Social Learning: A Systematic Review
- Author
Souabi, Sonia, Retbi, Asmaâ, Idrissi, Mohammed Khalidi, and Bennani, Samir
- Abstract
E-learning is renowned as one of the highly effective modalities of learning. Social learning, in turn, is considered to be of major importance as it promotes collaboration between learners. For properly managing learning resources, recommender systems have been implemented in e-learning to enhance learners' experience. Whilst recommender systems are of widespread concern in online learning, it is still unclear to educators how recommender systems can improve the learning process and have a positive impact on learning. This paper seeks to provide an overview of the recommender systems proposed in e-learning between 2007 and the first part of 2021. Out of 100 initially identified publications for this period, 51 articles were included for final synthesis, according to specific criteria. The descriptive results show that most of the disciplines involved in educational recommender systems papers have approached e-learning in a general way without putting as much emphasis on social learning, and that recommender systems based on explicit feedback and ratings were the most frequently used in empirical studies. The synthesis of results presents several recommender system types in e-learning: (1) content-based recommender systems; (2) collaborative-filtering recommender systems; (3) hybrid recommender systems; and (4) recommender systems based on supervised and unsupervised algorithms. The conclusions reflect on the near-absence of critical reflection on the importance of addressing recommender systems in social learning and social educational networks in particular (especially as social learning has particular requirements), the small database sizes used in some research work, the importance of acknowledging the strengths and weaknesses of each type of recommender system in an educational context, and the need for further exploration of implicit feedback, more than explicit learner feedback, for more accurate recommendations.
- Published
- 2021
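The first category in the review's typology, content-based recommendation, can be sketched in a few lines: represent each course as a keyword-count vector and rank courses by cosine similarity to a learner profile built from previously liked items. The course names, keyword dimensions, and profile below are invented for illustration:

```python
# Minimal content-based recommender sketch: rank courses by cosine
# similarity between their keyword vectors and a learner profile.
# All courses, keywords, and weights here are hypothetical.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Keyword dimensions: [python, statistics, pedagogy]
courses = {
    "Intro to Data Mining": [3, 2, 0],
    "Classroom Assessment": [0, 1, 3],
    "Statistical Methods":  [1, 3, 0],
}
learner_profile = [2, 2, 0]  # aggregated from the learner's liked courses

ranked = sorted(courses, key=lambda c: cosine(courses[c], learner_profile),
                reverse=True)
print(ranked[0])  # the course whose keywords best match the profile
```

Collaborative filtering, by contrast, would replace the keyword vectors with vectors of other learners' ratings, and hybrid systems combine both signals.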
8. Say What? Automatic Modeling of Collaborative Problem Solving Skills from Student Speech in the Wild
- Author
Pugh, Samuel L., Subburaj, Shree Krishna, Rao, Arjun Ramesh, Stewart, Angela E. B., Andrews-Todd, Jessica, and D'Mello, Sidney K.
- Abstract
We investigated the feasibility of using automatic speech recognition (ASR) and natural language processing (NLP) to classify collaborative problem solving (CPS) skills from recorded speech in noisy environments. We analyzed data from 44 dyads of middle and high school students who used videoconferencing to collaboratively solve physics and math problems (35 and 9 dyads in school and lab environments, respectively). Trained coders identified seven cognitive and social CPS skills (e.g., sharing information) in 8,660 utterances. We used a state-of-the-art deep transfer learning approach for NLP, Bidirectional Encoder Representations from Transformers (BERT), with a special input representation enabling the model to analyze adjacent utterances for contextual cues. We achieved a micro-average AUROC score (across seven CPS skills) of 0.80 using ASR transcripts, compared to 0.91 for human transcripts, indicating a decrease in performance attributable to ASR error. We found that the noisy school setting introduced additional ASR error, which reduced model performance (micro-average AUROC of 0.78) compared to the lab (AUROC = 0.83). We discuss implications for real-time CPS assessment and support in schools. [For the full proceedings, see ED615472.]
- Published
- 2021
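The "special input representation" mentioned above packs adjacent utterances alongside the target utterance so the transformer can use conversational context. A hedged sketch of that packing step (the separator tokens follow standard BERT conventions; the dialogue and window size are invented, and the actual model's input format may differ):

```python
# Sketch of a context-window input representation for utterance
# classification with a BERT-style model: the target utterance is joined
# with its neighbours using [CLS]/[SEP] markers. Utterances are invented.

def build_input(utterances, target_idx, window=1):
    """Pack the target utterance with up to `window` neighbours on each
    side into one sequence for a BERT-style classifier."""
    lo = max(0, target_idx - window)
    hi = min(len(utterances), target_idx + window + 1)
    context = utterances[lo:hi]
    return "[CLS] " + " [SEP] ".join(context) + " [SEP]"

dialogue = [
    "what did you get for the velocity",
    "i got 4.2 meters per second",
    "ok let me check mine",
]
print(build_input(dialogue, target_idx=1))
```

In practice this string would then be tokenized and fed to the model, with the classification head reading the `[CLS]` position.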
9. Fair-Capacitated Clustering
- Author
Quy, Tai Le, Roy, Arjun, Friege, Gunnar, and Ntoutsi, Eirini
- Abstract
Traditionally, clustering algorithms focus on partitioning the data into groups of similar instances. The similarity objective, however, is not sufficient in applications where a "fair-representation" of the groups in terms of protected attributes like gender or race, is required for each cluster. Moreover, in many applications, to make the clusters useful for the end-user, a "balanced cardinality" among the clusters is required. Our motivation comes from the education domain where studies indicate that students might learn better in diverse student groups and of course groups of similar cardinality are more practical e.g., for group assignments. To this end, we introduce the "fair-capacitated clustering problem" that partitions the data into clusters of similar instances while ensuring cluster fairness and balancing cluster cardinalities. We propose a two-step solution to the problem: (1) we rely on fairlets to generate minimal sets that satisfy the fair constraint; and (2) we propose two approaches, namely hierarchical clustering and partitioning-based clustering, to obtain the fair-capacitated clustering. Our experiments on three educational datasets show that our approaches deliver well-balanced clusters in terms of both fairness and cardinality while maintaining a good clustering quality. [For the full proceedings, see ED615472.]
- Published
- 2021
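The "balanced cardinality" constraint above can be illustrated with a toy greedy assignment: each point goes to its nearest centroid, but no cluster may exceed ceil(n / k) members. This is a simplified stand-in for the paper's hierarchical and partitioning approaches (and it ignores the fairlet step); the 1-D points and centroids are invented:

```python
# Toy capacitated assignment sketch: nearest-centroid assignment with a
# hard cap of ceil(n / k) points per cluster. A simplified illustration,
# not the paper's algorithm; data and centroids are hypothetical.
import math

def capacitated_assign(points, centroids):
    cap = math.ceil(len(points) / len(centroids))
    counts = [0] * len(centroids)
    assignment = {}
    # Greedy heuristic: handle points closest to some centroid first.
    order = sorted(range(len(points)),
                   key=lambda i: min(abs(points[i] - c) for c in centroids))
    for i in order:
        # Pick the nearest centroid that still has capacity.
        choices = sorted(range(len(centroids)),
                         key=lambda j: abs(points[i] - centroids[j]))
        for j in choices:
            if counts[j] < cap:
                assignment[i] = j
                counts[j] += 1
                break
    return assignment, counts

points = [1.0, 1.2, 1.4, 9.0, 9.1, 1.1]
assignment, counts = capacitated_assign(points, centroids=[1.0, 9.0])
print(counts)  # each cluster holds at most ceil(6/2) = 3 points
```

Note how the fourth point near 1.0 overflows into the second cluster once the first reaches its cap, trading some similarity for balanced group sizes, exactly the tension the paper studies.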
10. MI Theory: Past, Current and Future--A Review of MI Theory in the Past 50 Years
- Author
Zhang, Weiwen
- Abstract
Recently, Prof. Howard Gardner, a world-renowned psychologist, gave an interview to Dr. Weiwen Zhang in which he discussed a wide range of topics concerning MI theory and related fields, mainly its core ideas, current situation, and future development, as well as its application to some current hot issues, offering important insights for the relevant fields.
- Published
- 2020
11. Towards Fair Educational Data Mining: A Case Study on Detecting At-Risk Students
- Author
Hu, Qian and Rangwala, Huzefa
- Abstract
Over the past decade, machine learning has become an integral part of educational technologies. With more and more applications such as students' performance prediction, course recommendation, dropout prediction and knowledge tracing relying upon machine learning models, there is increasing evidence and concerns about bias and unfairness of these models. Unfair models can lead to inequitable outcomes for some groups of students and negatively impact their learning. We show by real-world examples that educational data has embedded bias that leads to biased student modeling, which urges the development of fairness formalizations and fair algorithms for educational applications. Several formalizations of fairness have been proposed that can be classified into two types: (i) group fairness and (ii) individual fairness. Group fairness guarantees that groups are treated fairly as a whole, which might not be fair to some individuals. Thus individual fairness has been proposed to make sure fairness is achieved on individual level. In this work, we focus on developing an individually fair model for identifying students at-risk of underperforming. We propose a model which is based on the idea that the prediction for a student (identifying at-risk students) should not be influenced by his/her sensitive attributes. The proposed model is shown to effectively remove bias from these predictions and hence, making them useful in aiding all students. [For the full proceedings, see ED607784.]
- Published
- 2020
12. Combining Machine Learning and Natural Language Processing to Assess Literary Text Comprehension
- Author
Balyan, Renu, McCarthy, Kathryn S., and McNamara, Danielle S.
- Abstract
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about literary works. Three types of NLP feature sets: unigrams (single content words), elaborative (new) n-grams, and linguistic features were used to classify idea units (paraphrase, text-based inference, interpretive inference). The most accurate classifications emerged using all three NLP features sets in combination, with accuracy ranging from 0.61 to 0.94 (F = 0.18 to 0.81). Random Forests, which employs multiple decision trees and a bagging approach, was the most accurate classifier for these data. In contrast, the single classifier, Trees, which tends to "overfit" the data during training, was the least accurate. Ensemble classifiers were generally more accurate than single classifiers. However, Support Vector Machines accuracy was comparable to that of the ensemble classifiers. This is likely due to Support Vector Machines' unique ability to support high dimension feature spaces. The findings suggest that combining the power of NLP and machine learning is an effective means of automating literary text comprehension assessment. [This paper was published in: A. Hershkovitz & L. Paquette (Eds.), "Proceedings of the 10th International Conference on Educational Data Mining" (pp. 244-249), Wuhan, China: International Educational Data Mining Society.]
- Published
- 2017
13. Assessing Question Quality Using NLP
- Author
Kopp, Kristopher J., Johnson, Amy M., Crossley, Scott A., and McNamara, Danielle S.
- Abstract
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict question quality. NLP indices related to lexical sophistication modestly predicted question type. Accuracies improved when predicting two levels (shallow versus deep). [This paper was published in: E. Andre, R. Baker, X. Hu, M. M. T. Rodrigo, & B. du Boulay (Eds.), "Proceedings of the 18th International Conference on Artificial Intelligence in Education" (pp. 523-527). Wuhan, China: Springer.]
- Published
- 2017
14. Toward the Automatic Labeling of Course Questions for Ensuring Their Alignment with Learning Outcomes
- Author
Supraja, S., Hartman, Kevin, Tatinati, Sivanagaraja, and Khong, Andy W. H.
- Abstract
Expertise in a domain of knowledge is characterized by a greater fluency for solving problems within that domain and a greater facility for transferring the structure of that knowledge to other domains. Deliberate practice and the feedback that takes place during practice activities serve as gateways for developing domain expertise. However, there is a difficulty in consistently aligning feedback about a learner's practice performance with the intended learning outcomes of those activities -- especially in situations where the person providing feedback is unfamiliar with the intention of those activities. To address this problem, we propose an intelligent model to automatically label opportunities for practice (assessment questions) according to the learning outcomes intended by the course designers. As a proof of concept, we used a reduced version of Bloom's Taxonomy to define the intended learning outcomes. Using a factorial design, we employed term frequency-inverse document frequency (TF-IDF) and latent Dirichlet allocation (LDA) to transform questions from text to word weightages with support vector machine (SVM) and extreme learning machine (ELM) to train and automatically label the questions. We trained our models with 120 questions labeled by the subject matter expert of an undergraduate engineering course. Compared to existing works, which create models based on a self-generated dataset, our proposed approach uses 30 untrained questions from online/textbook sources to validate the performance of our models. Exhaustive comparison analysis of the testing set showed that TF-IDF with ELM outperformed the other combinations by yielding 0.86 reliability (F1 measure) with the subject matter expert. [For the full proceedings, see ED596512.]
- Published
- 2017
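The TF-IDF vectorization step described above (turning question text into word weightages before training SVM/ELM classifiers) can be sketched in plain Python. This uses the basic tf * log(N / df) weighting, one of several standard variants, and the example questions are invented:

```python
# Minimal TF-IDF sketch of question vectorization: term frequency within
# each question, damped by how many questions contain the term. Uses the
# plain tf * log(N/df) variant; the questions are hypothetical.
import math
from collections import Counter

def tfidf(docs):
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many docs does each term appear?
    df = Counter(term for toks in tokenized for term in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

questions = [
    "define the first law of thermodynamics",
    "apply the first law to a closed system",
    "compare entropy and enthalpy",
]
vecs = tfidf(questions)
# A term appearing in every question would get weight 0; rarer terms
# (like "entropy", which appears in only one question) weigh more.
print(round(vecs[2]["entropy"], 3))
```

The resulting sparse vectors are what a downstream classifier (SVM or ELM in the paper's setup) would consume, typically after mapping them onto a shared vocabulary.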
15. Mining Innovative Augmented Graph Grammars for Argument Diagrams through Novelty Selection
- Author
Xue, Linting, Lynch, Collin F., and Chi, Min
- Abstract
Augmented Graph Grammars are a graph-based rule formalism that supports rich relational structures. They can be used to represent complex social networks, chemical structures, and student-produced argument diagrams for automated analysis or grading. In prior work we have shown that Evolutionary Computation (EC) can be applied to induce empirically-valid grammars for student-produced argument diagrams based upon fitness selection. However this research has shown that while the traditional EC algorithm does converge to an optimal fitness, premature convergence can lead to it getting stuck in local maxima, which may lead to undiscovered rules. In this work, we augmented the standard EC algorithm to induce more heterogeneous Augmented Graph Grammars by replacing the fitness selection with a novelty-based selection mechanism every ten generations. Our results show that this novelty selection increases the diversity of the population and produces better, and more heterogeneous, grammars. [For the full proceedings, see ED596512.]
- Published
- 2017
16. Combining Machine Learning and Natural Language Processing to Assess Literary Text Comprehension
- Author
Balyan, Renu, McCarthy, Kathryn S., and McNamara, Danielle S.
- Abstract
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about literary works. Three types of NLP feature sets: unigrams (single content words), elaborative (new) n-grams, and linguistic features were used to classify idea units (paraphrase, text-based inference, interpretive inference). The most accurate classifications emerged using all three NLP features sets in combination, with accuracy ranging from 0.61 to 0.94 (F=0.18 to 0.81). Random Forests, which employs multiple decision trees and a bagging approach, was the most accurate classifier for these data. In contrast, the single classifier, Trees, which tends to "overfit" the data during training, was the least accurate. Ensemble classifiers were generally more accurate than single classifiers. However, Support Vector Machines accuracy was comparable to that of the ensemble classifiers. This is likely due to Support Vector Machines' unique ability to support high dimension feature spaces. The findings suggest that combining the power of NLP and machine learning is an effective means of automating literary text comprehension assessment. [For the full proceedings, see ED596512. For the corresponding grantee submission, see ED577127.]
- Published
- 2017
17. Proceedings of the International Conference on Educational Data Mining (EDM) (10th, Wuhan, China, June 25-28, 2017)
- Author
International Educational Data Mining Society, Hu, Xiangen, Barnes, Tiffany, Hershkovitz, Arnon, and Paquette, Luc
- Abstract
The 10th International Conference on Educational Data Mining (EDM 2017) is held under the auspices of the International Educational Data Mining Society at the Optics Valley Kingdom Plaza Hotel, Wuhan, Hubei Province, in China. This year's conference features two invited talks by: Dr. Jie Tang, Associate Professor with the Department of Computer Science and Technology at Tsinghua University; and Dr. Ron Cole, President of Boulder Learning Inc. The main conference invited contributions to the Research Track and Industry Track. 122 submissions were received (71 full, 47 short, 4 industry). 18 full papers were accepted (25% acceptance rate) and 32 short papers for oral presentation (42% acceptance rate), plus an additional 39 for poster presentations and 3 demonstrations. The industry track includes all 4 submitted industry papers and 1 paper initially submitted as a full paper. The EDM conference provides opportunities for young researchers, and particularly Ph.D. students, to present their research ideas and receive feedback from peers and more senior researchers. This year, the Doctoral Consortium features 6 such presentations. In addition to the main program, the conference includes 3 workshops: (1) Graph-based Educational Data Mining (G-EDM 2017); (2) Sharing and Reusing Data & Analytics Methods with LearnSphere; and (3) Deep Learning with Educational Data; and 2 tutorials: (1) Why Data Standards are Critical for EDM and AIED; and (2) Principal Stratification for EDM Experiments. [For the 2016 proceedings, see ED592609.]
- Published
- 2017
18. Towards Interpretable Automated Machine Learning for STEM Career Prediction
- Author
Liu, Ruitao and Tan, Aixin
- Abstract
In this paper, we describe our solution to predict student STEM career choices during the 2017 ASSISTments Datamining Competition. We built a machine learning system that automatically reformats the data set, generates new features and prunes redundant ones, and performs model and feature selection. We designed the system to automatically find a model that optimizes prediction performance, yet the final model is a simple logistic regression that allows researchers to discover important features and study their effects on STEM career choices. We also compared our method to other methods, which revealed that the key to good prediction is proper feature enrichment in the beginning stage of the data analysis, while feature selection in a later stage allows a simpler final model.
- Published
- 2020
19. Teaching AI Search Algorithms in a Web-Based Educational System
- Author
Grivokostopoulou, Foteini and Hatzilygeroudis, Ioannis
- Abstract
In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice exercises, which are interactive exercises where immediate feedback is given when a student makes an error, but further help is optional, are also used. So, the student can try by him/herself to correct the error or ask for help from the system. Finally, the student can take a test consisting of assessment exercises, which are interactive, but no help is provided. The result of the test determines the student's knowledge level. Evaluation of the system through a pre-test/post-test and experimental/control group method gave very promising results about learning capabilities of the method. Also, results of a questionnaire show that the majority of the students liked the system very much. [For the full proceedings see ED562127.]
- Published
- 2013
20. E-Learning Software for Improving Student's Music Performance Using Comparisons
- Author
Delgado, M., Fajardo, W., and Molina-Solana, M.
- Abstract
In the last decades there have been several attempts to use computers in Music Education. New pedagogical trends encourage incorporating technology tools into the process of learning music. Among them, systems based on Artificial Intelligence are the most promising, as they can derive new information from the inputs and visualize it in several meaningful ways. This paper presents an application of machine learning to music performance that is able to discover the similarities and differences between a given performance and those of other musicians. Such a system would help students better learn how to perform a certain piece of music, allowing them to compare themselves with other students or master performers. [For the full proceedings, see ED562127.]
- Published
- 2013
21. Proceedings of the International Conference on Educational Data Mining (EDM) (9th, Raleigh, North Carolina, June 29-July 2, 2016)
- Author
International Educational Data Mining Society, Barnes, Tiffany, Chi, Min, and Feng, Mingyu
- Abstract
The 9th International Conference on Educational Data Mining (EDM 2016) is held under the auspices of the International Educational Data Mining Society at the Sheraton Raleigh Hotel, in downtown Raleigh, North Carolina, in the USA. The conference, held June 29-July 2, 2016, follows the eight previous editions (Madrid 2015, London 2014, Memphis 2013, Chania 2012, Eindhoven 2011, Pittsburgh 2010, Cordoba 2009 and Montreal 2008). The EDM conference is the leading international forum for high-quality research that leverages educational data, learning analytics, and machine learning to answer research questions that shed light on the learning processes. This year's conference features three invited talks by: Rakesh Agrawal, President and Founder of Data Insights Laboratories; Marcia C. Linn, Professor of the University of California at Berkeley; and Judy Kay, Professor of the University of Sydney. Judy Kay's invited paper entitled "Enabling people to harness and control EDM for lifelong, life-wide learning" is also presented in the proceedings. Together with the "Journal of Educational Data Mining" ("JEDM"), the EDM 2016 conference supports a "JEDM" Track that provides researchers a venue to deliver more substantial mature work than is possible in a conference proceedings and to present their work to a live audience. The papers submitted to this track followed the "JEDM" peer review process; three papers have been accepted to the track and were presented at the conference. The abstracts of the invited talks, panels and accepted "JEDM" Track papers can be found in these proceedings. [For the 2015 proceedings, see ED560503.]
- Published
- 2016
22. Student Profile Modeling Using Boosting Algorithms
- Author
Hamim, Touria, Benabbou, Faouzia, and Sael, Nawal
- Abstract
The student profile has become an important component of education systems. Many system objectives, such as e-recommendation, e-orientation, e-recruitment, and dropout prediction, are essentially based on the profile for decision support. Machine learning plays an important role in this context, and several studies have been carried out for classification, prediction, or clustering purposes. In this paper, the authors present a comparative study between different boosting algorithms, which have been used successfully in many fields and for many purposes. In addition, the authors applied the feature selection methods Fisher Score and Information Gain, combined with Recursive Feature Elimination, to enhance the preprocessing task and the models' performances. Using a multi-label dataset to predict the class of student performance in mathematics, the results show that the Light Gradient Boosting Machine (LightGBM) algorithm achieved the best performance when using Information Gain with the Recursive Feature Elimination method, compared to the other boosting algorithms.
- Published
- 2022
- Full Text
- View/download PDF
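The Information Gain scoring used for feature selection above ranks a feature by how much knowing its value reduces uncertainty about the label: gain = H(labels) - weighted H(labels | feature value). A self-contained sketch on a made-up toy dataset (the feature and labels are invented; real pipelines would score every feature and then prune with Recursive Feature Elimination):

```python
# Information Gain sketch for feature selection: how much does splitting
# on a feature reduce label entropy? Toy student data, invented values.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    base = entropy(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Toy feature: did the student submit homework (1/0)?
homework = [1, 1, 1, 0, 0, 0]
passed   = ["pass", "pass", "pass", "fail", "fail", "pass"]
print(round(information_gain(homework, passed), 3))
```

Features would be ranked by this score, with the lowest-scoring ones dropped before (or during) the recursive elimination rounds.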
23. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor
- Author
International Working Group on Educational Data Mining, Rus, Vasile, Lintean, Mihai, and Azevedo, Roger
- Abstract
This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge activation, a self-regulatory process. We describe two major categories of methods and combine each method with various machine learning algorithms. A detailed comparison among the methods and across all algorithms is also provided. The evaluation of the proposed methods is performed by comparing the prediction of the methods with human judgments on a set of 309 prior knowledge activation paragraphs collected from previous experiments with MetaTutor on college students. According to our experiments, a content-based method with word-weighting and the Bayes Nets algorithm is the most accurate. (Contains 1 figure and 2 tables.) [For the complete proceedings, "Proceedings of the International Conference on Educational Data Mining (EDM) (2nd, Cordoba, Spain, July 1-3, 2009)," see ED539041.]
- Published
- 2009
24. Do We Betray Errors Beforehand? The Use of Eye Tracking, Automated Face Recognition and Computer Algorithms to Analyse Learning from Errors
- Author
Harteis, Christian, Fischer, Christoph, Töniges, Torben, and Wrede, Britta
- Abstract
Preventing humans from committing errors is a crucial aspect of man-machine interaction and systems of computer assistance. A basic implication is that such systems need to recognise errors before they occur. This paper reports an exploratory study that utilises eye-tracking technology and automated face recognition to analyse test persons' emotional reactions and cognitive load during a computer game and learning through trial and error. Computer algorithms based on machine learning and big data were tested that identify particular patterns of test persons' gaze behaviour and facial expressions that precede errors in a computer game. The results show that emotions and learning from errors are positively correlated and that gaze behaviour and facial expressions provide information about the errors that follow. However, the algorithms still need to be improved through further studies to be suitable for daily use. This research is innovative in its use of mathematical formulae to operationalise learning through errors and of computer algorithms to predict errors in human behaviour in trial-and-error situations.
- Published
- 2018
25. A Review Paper on Deep Learning Approach for Crop Yield Prediction Assessment
- Author
-
Richa Verma Ayushi
- Subjects
business.industry ,Deep learning ,Crop yield ,Agricultural engineering ,Artificial intelligence ,business ,Mathematics - Abstract
Precise estimation of crop yield is a difficult field of work. Hardware and software platforms for predicting crop yield depend on various factors such as weather, soil fertility, genotype, and other interacting dependencies. The task is complex owing to the volume of data that must be gathered to understand crop yield through wireless sensor networks and remote sensing. This paper reviews the past 15 years of research on estimating crop yield using deep learning algorithms. Examining these advances in deep learning techniques will aid decision-making for predicting crop yield. The hybrid combination of deep learning with remote sensing and wireless sensor networks can deliver precision agriculture in the future.
- Published
- 2021
26. Deep Learning Forwarding in NDN with a Case Study of Ethernet LAN
- Author
-
Ayadi, Mohamed Issam, Maizate, Abderrahim, Ouzzif, Mohamm, and Mahmoudi, Charif
- Abstract
In this paper, the authors propose a novel forwarding strategy based on deep learning that can adaptively route interest/data packets through Ethernet links without relying on the FIB table. The experiment was conducted as a proof of concept. They developed an approach and an algorithm that leverage existing intelligent forwarding approaches in order to build an NDN forwarder that can reduce forwarding cost in terms of prefix-name lookup and memory requirement in the FIB. Simulation results showed that the approach is promising in terms of cross-validation score and prediction in an Ethernet LAN scenario.
- Published
- 2021
27. Proceedings of the International Association for Development of the Information Society (IADIS) International Conference on Cognition and Exploratory Learning in Digital Age (CELDA) (Madrid, Spain, October 19-21, 2012)
- Author
-
International Association for Development of the Information Society (IADIS)
- Abstract
The IADIS CELDA 2012 Conference intention was to address the main issues concerned with evolving learning processes and supporting pedagogies and applications in the digital age. There had been advances in both cognitive psychology and computing that have affected the educational arena. The convergence of these two disciplines is increasing at a fast pace and affecting academia and professional practice in many ways. Paradigms such as just-in-time learning, constructivism, student-centered learning and collaborative approaches have emerged and are being supported by technological advancements such as simulations, virtual reality and multi-agents systems. These developments have created both opportunities and areas of serious concerns. This conference aimed to cover both technological as well as pedagogical issues related to these developments. The IADIS CELDA 2012 Conference received 98 submissions from more than 24 countries. Out of the papers submitted, 29 were accepted as full papers. In addition to the presentation of full papers, short papers and reflection papers, the conference also includes a keynote presentation from internationally distinguished researchers. Individual papers contain figures, tables, and references.
- Published
- 2012
28. Classification option for Korean traditional paper based on type of raw materials, using near-infrared spectroscopy and multivariate statistical methods
- Author
-
Seon Hwa Jeong, Kyung Ju Jang, and Tae Young Heo
- Subjects
Environmental Engineering ,business.industry ,Near-infrared spectroscopy ,Bioengineering ,Pattern recognition ,Raw material ,Linear discriminant analysis ,Random forest ,Support vector machine ,Statistical classification ,Partial least squares regression ,Artificial intelligence ,Multivariate statistical ,business ,Waste Management and Disposal ,Mathematics - Abstract
Depending on the different types of raw materials used to produce hanji, a Korean traditional handmade paper, there can be significant differences in the durability and mechanical properties of the final product. In this study, near-infrared spectroscopy (NIR) combined with multivariate statistical methods was used to assess whether hanji can be classified based on the various types of raw materials. The hanji papers were prepared from paper mulberry trees, cooking agents, and mucilage. A total of 60 hanji spectra were collected by NIR. The 60 spectra were then grouped into four categories based on the types of raw materials contained in the hanji: the control, paper mulberry, cooking agent, and mucilage types. Three different classification algorithms – partial least squares discriminant analysis (PLS-DA), support vector machines (SVM), and random forest (RF) – were used to classify the hanji types. The best classification performance was obtained when the hanji samples were classified according to paper mulberry type, wherein the prediction accuracies of PLS-DA, SVM, and RF were 100%, 100%, and 98%, respectively. These results suggest that NIR in combination with multivariate statistical methods can be used for hanji material classification.
- Published
- 2020
29. Proceedings of the International Conference on Educational Data Mining (EDM) (2nd, Cordoba, Spain, July 1-3, 2009)
- Author
-
International Working Group on Educational Data Mining, Barnes, Tiffany, Desmarais, Michel, Romero, Cristobal, and Ventura, Sebastian
- Abstract
The Second International Conference on Educational Data Mining (EDM2009) was held at the University of Cordoba, Spain, on July 1-3, 2009. EDM brings together researchers from computer science, education, psychology, psychometrics, and statistics to analyze large data sets to answer educational research questions. The increase in instrumented educational software and databases of student test scores, has created large repositories of data reflecting how students learn. The EDM conference focuses on computational approaches for using those data to address important educational questions. The broad collection of research disciplines ensures cross fertilization of ideas, with the central questions of educational research serving as a unifying focus. This publication presents the following papers: (1) A Comparison of Student Skill Knowledge Estimates (Elizabeth Ayers, Rebecca Nugent, Nema Dean); (2) Differences Between Intelligent Tutor Lessons, and the Choice to Go Off-Task (Ryan S.J.d. Baker); (3) A User-Driven and Data-Driven Approach for Supporting Teachers in Reflection and Adaptation of Adaptive Tutorials (Dror Ben-Naim, Michael Bain, and Nadine Marcus); (4) Detecting Symptoms of Low Performance Using Production Rules (Javier Bravo and Alvaro Ortigosa); (5) Predicting Students Drop Out: A Case Study (Gerben W. Dekker, Mykola Pechenizkiy and Jan M. Vleeshouwers); (6) Using Learning Decomposition and Bootstrapping with Randomization to Compare the Impact of Different Educational Interventions on Learning (Mingyu Feng, Joseph E. Beck and Neil T. Heffernan); (7) Does Self-Discipline impact students' knowledge and learning? (Yue Gong, Dovan Rai, Joseph E. Beck, and Neil T. Heffernan); (8) Consistency of Students' Pace in Online Learning (Arnon Hershkovitz and Rafi Nachmias); (9) Student Consistency and Implications for Feedback in Online Assessment Systems (Tara M. 
Madhyastha and Steven Tanimoto); (10) Edu-mining for Book Recommendation for Pupils (Ryo Nagata, Keigo Takeda, Koji Suda, Junichi Kakegawa, and Koichiro Morihiro); (11) Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students (Rebecca Nugent, Elizabeth Ayers, and Nema Dean); (12) Determining the Significance of Item Order In Randomized Problem Sets (Zachary A. Pardos and Neil T. Heffernan); (13) Learning Factors Transfer Analysis: Using Learning Curve Analysis to Automatically Generate Domain Models (Philip I. Pavlik Jr., Hao Cen, Kenneth R. Koedinger); (14) Detecting and Understanding the Impact of Cognitive and Interpersonal Conflict in Computer Supported Collaborative Learning Environments (David Nadler Prata, Ryan S.J.d. Baker, Evandro d.B. Costa, Carolyn P. Rose, Yue Cui, Adriana M.J.B. de Carvalho); (15) Using Dirichlet priors to improve model parameter plausibility (Dovan Rai, Yue Gong, Joseph E. Beck); (16) Reducing the Knowledge Tracing Space (Steven Ritter, Thomas K. Harris, Tristan Nixon, Daniel Dickison, R. Charles Murray, and Brendon Towle); (17) Automatic Detection of Student Mental Models During Prior Knowledge Activation in MetaTutor (Vasile Rus, Mihai Lintean, and Roger Azevedo); (18) Automatic Concept Relationships Discovery for an Adaptive E-course (Marian Simko, Maria Bielikova); (19) Unsupervised MDP Value Selection for Automating ITS Capabilities (John Stamper and Tiffany Barnes); (20) Recommendation in Higher Education Using Data Mining Techniques (Cesar Vialardi, Javier Bravo Agapito, Leila Shafti, Alvaro and Ortigosa); (21) Developing an Argument Learning Environment Using Agent-Based ITS (ALES) (Safia Abbas and Hajime Sawamura); (22) A Data Mining Approach to Reveal Representative Collaboration Indicators in Open Collaboration Frameworks (Antonio R. Anaya and Jesus G. 
Boticario); (23) Dimensions of Difficulty in Translating Natural Language into First-Order Logic (Dave Barker-Plummer, Richard Cox, and Robert Dale); (24) Predicting Correctness of Problem Solving from Low-level Log Data in Intelligent Tutoring Systems (Suleyman Cetintas, Luo Si, Yan Ping Xin, and Casey Hord); (25) Back to the future: a non-automated method of constructing transfer models (Ming Feng and Joseph Beck); (26) How do Students Organize Personal Information Spaces? (Sharon Hardof-Jaffe, Arnon Hershkovitz, Hama Abu-Kishk, Ofer Bergman, and Rafi Nachmias); (27) Improving Student Question Classification (Cecily Heiner and Joseph L. Zachary); (28) Why, What, and How to Log? Lessons from LISTEN (Jack Mostow and Joseph E. Beck); (29) Process Mining Online Assessment Data (Mykola Pechenizkiy, Nikola Trcka, Ekaterina Vasilyeva, Wil van der Aalst, and Paul De Bra); (30) Obtaining Rubric Weights For Assessments By More Than One Lecturer Using A Pairwise Learning Model (J. R. Quevedo and E. Montanes); (31) Collaborative Data Mining Tool for Education (Enrique Garcia, Cristobal Romero, Sebastian Ventura, Miguel Gea, and Carlos de Castro); (32) Predicting Student Grades in Learning Management Systems with Multiple Instance Genetic Programming (Amelia Zafra and Sebastian Ventura); and (33) Visualization of Differences in Data Measuring Mathematical Skills (Lukas Zoubek and Michal Burda). Individual papers contain tables, figures, footnotes, references and appendices.
- Published
- 2009
30. Proceeding of the International Scientific Colloquium: MATHEMATICS AND CHILDREN (How to Teach and Learn Mathematics) (Osijek, Croatia, April 13, 2007)
- Author
-
Pavlekovic, Margita
- Abstract
The main aim of the Organisational Committee of the international scientific colloquium Mathematics and Children is to encourage additional scientific research in the field of mathematics teaching in Croatia. The development of science and education is a part of a long-term Education Sector Development Plan 2005-2010. Following the example of Europe and the rest of the world, special attention in the field of education is given to mathematical literacy of children (PISA programme) as well as to mathematics teacher training (quality insurance in higher education). Mathematics teaching in Croatia faces modified strategic, organizational, social and technical conditions. Introducing one-shift classes in primary schools, including children with special needs (talented ones and those with difficulties) in regular classes, extended day program for all students, two teachers per class, greater mobility of children and teachers in schools and new teaching technologies demand changes in the methodology of mathematical education of both children and future teachers of mathematics. It is important to develop a life-long learning programme for teachers of mathematics that includes doctoral studies. Research in the field of mathematics teaching implies multi- and interdisciplinarity. Therefore a cooperation with scientists outside the field of mathematics (psychologists, special-ed teachers, educators) is an imperative, although we strongly believe that improvements in mathematics teaching should be encouraged within the field of mathematics. A precondition for developing new approaches and methodologies in mathematics teaching in Croatia is a first-hand experience with the results of international research and standards in mathematics teaching and defining doctoral studies within the same field. 
We believe that the lectures, discussions and experience exchange between Croatian and international participants of the Mathematics and Children meeting will initiate and intensify scientific cooperation in the field of mathematics teaching on the international level. We would also like for this event to initiate the start of doctoral studies in the field of mathematics teaching in Croatia following the examples from Europe and worldwide. We are very grateful to numerous Croatian and international scientists who have recognized the importance of this event and managed to find the time to attend this gathering. We would also like to thank the heads and entrepreneurs of the local community who financed this event for the most part. Papers include: (1) An Overview of the Authorised Curriculum in Teaching Mathematics Harmonised with the Bologna Declaration at the Department of Mathematics, University of Sarajevo (Sefket Arslanagic); (2) Role of Different Representations of Mathematical Concepts for Learning with Understanding (Tatjana Hodnik-Cadez); (3) The Scientific Frameworks of Teaching Mathematics (Zdravko Kurnik); (4) An Evergreen Problem (Emil Molnar); (5) Mathematically Gifted Children: What Can We Teach Them and What Can We Learn? (Vesna Vlahovic-Stetic); (6) Difficulties in Teaching Mathematics in the Second Grade of Primary School (Josip Cindric and Maja Cindric); (7) Children and Simple Combinatorial Situations (Maja Cotic and Darjo Felda); (8) National Curriculum Framework for Primary Mathematics Education--European Experiences and Trends (Aleksandra Cizmesija); (9) Dynamic Mathematics Class and the Smart Board (Sasa Duka and Damir Tomic); (10) The Dyscalculic Child, Mathematics and Teacher Study Students (Lidija Goljevacki and Aleksandra Krampac-Grljusic); (11) Is the Language of Mathematics Difficult? 
(The level of technical language use among teacher training college students) (Eva Kopasz); (12) Assessment and Evaluation in Mathematics Education (Zeljka Milin-Sipus); (13) Origami and Mathematics (Franka Miriam-Bruckler); (14) Attitudes of the Students of Teaching Studies towards Mathematics (Irena Misurac-Zorica); (15) Partnership among Faculties, Schools and Families for the Improvement of Mathematics Education of the Gifted Children (Ksenija Mogus and Silvija Mihaljevic); (16) Expert System for Detecting a Child's Gift in Mathematics (Margita Pavlekovic, Marijana Zekic-Susac, and Ivana Durdevic); (17) Boris Pavkovic (portrait of a distinguished methodologist and popularizer of mathematics) (Mirko Polonijo); (18) Mathematics in Play and Leisure Activities--LEGO Building Bricks (Tomislav Rudec); (19) Basic Knowledge of Mathematics and Teacher Training (Sanja Rukavina); (20) Solving Linear Equations Using Computer's Drawing Tools (Miljenko Stanic); (21) Developing the Problem-Solving Skills of Children Suffering from Dyscalculia through Mathematical Tasks with a Text (Aniko Straubingerne Kemler); (22) The Concept of the Square and the Rectangle at the Age 10-11 (Ibolya Szilagyne Szinger); (23) The Use of Computers in Teaching Mathematics (Sanja Varosanec); and (24) From Active Experimenting to Abstract Notion Concept (Amalija Zakelj and Aco Cankar). (Individual papers contain tables, graphs, and references.) [Papers are presented in both English and Croatian. These proceedings were published by the University Josip Juraj Strossmayer in Osijek, Faculty of Philosophy in Osijek. Abstract was modified to meet ERIC guidelines.]
- Published
- 2007
31. Physics driven behavioural clustering of free-falling paper shapes
- Author
-
Fumiya Iida, Toby Howison, Josie Hughes, Fabio Giardina, Howison, Toby [0000-0001-8548-5550], Iida, Fumiya [0000-0001-9246-7190], and Apollo - University of Cambridge Repository
- Subjects
Inertia ,Physiology ,Physical system ,Social Sciences ,computer.software_genre ,Systems Science ,01 natural sciences ,010305 fluids & plasmas ,Physical Phenomena ,Physical phenomena ,Medicine and Health Sciences ,Psychology ,Cluster Analysis ,Moment of Inertia ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,theoretical model ,article ,Classical Mechanics ,Dynamical Systems ,Variety (cybernetics) ,Free falling ,machine learning ,Physical Sciences ,Medicine ,physics ,Algorithms ,Research Article ,Paper ,Computer and Information Sciences ,Reynolds Number ,Science ,Fluid Mechanics ,Research and Analysis Methods ,Machine learning ,Continuum Mechanics ,Motion ,Machine Learning Algorithms ,Artificial Intelligence ,0103 physical sciences ,010306 general physics ,Set (psychology) ,Cluster analysis ,Behavior ,Biological Locomotion ,business.industry ,Biology and Life Sciences ,Fluid Dynamics ,Models, Theoretical ,Nonlinear Dynamics ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. © 2019 Howison et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
- Published
- 2019
32. Physics driven behavioural clustering of free-falling paper shapes.
- Author
-
Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
- *
PHYSICS , *SET functions , *MACHINE learning , *PHENOMENOLOGICAL theory (Physics) , *CONTINUUM mechanics - Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
33. A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ.
- Author
-
Bartlet, Tom
- Subjects
ARTIFICIAL intelligence ,CHATGPT ,UNDERGRADUATES ,ELECTRICAL engineering ,MATHEMATICS - Abstract
The article discusses a study suggesting that the artificial intelligence (AI) chatbot ChatGPT could successfully complete the Massachusetts Institute of Technology's (MIT) undergraduate curriculum in mathematics, computer science, and electrical engineering with 100-percent accuracy. The study's assertion raised questions and intrigued experts, given recent advancements in chatbot capabilities.
- Published
- 2023
34. Effects of bending curvature and ambient illuminance on the visual performance of young and elderly participants using simulated electronic paper displays
- Author
-
An-Hsiang Wang, Hui-Tzu Kuo, and Su-Lun Hwang
- Subjects
business.industry ,Acoustics ,Illuminance ,Bending ,Curvature ,law.invention ,Human-Computer Interaction ,Hardware and Architecture ,law ,Computer vision ,Artificial intelligence ,Electronic paper ,Electrical and Electronic Engineering ,Set (psychology) ,business ,Mathematics - Abstract
This study investigated the effects of bending curvature of simulated e-paper displays (−10 cm, plane and 10 cm) and ambient illuminance (50, 500, 6000 and 12,000 lx) on the visual performance of young and elderly users for simulated electronic-paper displays. In total, 24 people comprising 12 young and 12 elderly users participated in this study. Young users demonstrated significantly enhanced visual performance using the simulated electrophoretic ink (E ink) e-paper display with ambient illuminance set at 50 and 500 lx; however, participants did not demonstrate significantly different visual performance using the 2 simulated e-paper displays with ambient illuminance set at 6000 and 12,000 lx. The bending curvature did not exhibit significant effects on the visual performance of young participants under various ambient illuminances. The elderly participants exhibited significantly enhanced visual performance using the simulated E ink e-paper display only when ambient illuminance was set at 50 lx; however, participants did not exhibit significantly different visual performance using the two simulated e-paper displays with ambient illuminance set at other levels. Elderly users exhibited significantly enhanced visual performance when the bending curvature was set at 10 cm, with ambient illuminance set at 50 lx. Elderly users also demonstrated significantly enhanced visual performance under bending curvature settings of 10 cm and plane, with ambient illuminance set at 500 lx. However, the bending curvature of the simulated e-paper did not exhibit significant effects on the visual performance of elderly participants under ambient illuminance settings of 6000 and 12,000 lx.
- Published
- 2012
35. A Method for Rational Provision of Learning Syllabus
- Author
-
Drasute, Vida, Drasutis, Sigitas, and Baziuke, Dalia
- Abstract
Distance and e-education, supported by technological development, have enhanced educational processes by combining achievements from various scientific fields. This techno-educational enhancement has happened within the last decade. Virtual learning has grown in popularity as a new way of learning that favours learning at convenient times and places, using specially prepared learning materials. The progression of technological developments implies rising user demands and a search for flexible but high-quality training and learning methods. One of the outcomes is adaptive and intelligent learning environments, which are becoming a common tool for virtual learning. This paper focuses on the key feature elements of adaptive learning environments (ALE), also examining intelligent agents and their behaviour in ALEs. A method to provide an individual learning syllabus in a rational way, according to student knowledge, is proposed. It is based on a latent semantic indexing algorithm and features the ability to update the syllabus according to the individual learning outcomes of the student. This is yet another tool in a teacher's tool box. A description of the results obtained during the exploration of the proposed method is presented as well.
- Published
- 2011
36. Cooperative object detection in road traffic (The research for this paper was financially supported by the Hollósi Ferenc Tudástámogató Alapítvány)
- Author
-
Olivér Törő, Tamás Bécsi, Szilárd Aradi, and Péter Gáspár
- Subjects
020301 aerospace & aeronautics ,business.industry ,Gaussian ,Recursion (computer science) ,020206 networking & telecommunications ,02 engineering and technology ,Object detection ,Nonlinear system ,Noise ,symbols.namesake ,0203 mechanical engineering ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,State space ,Computer vision ,Artificial intelligence ,Affine transformation ,business ,Particle filter ,Mathematics - Abstract
Multi-sensor object detection and tracking on a highway scene with radar measurements is presented. The estimation algorithm is the random-finite-set-based Bernoulli filter, working in the Bayesian framework. The recursion for calculating the Bayes estimate is implemented as a particle filter. A method is presented for calculating the likelihoods, suitable for particle filtering performed with moving sensors, assuming additive Gaussian measurement noise. In our approach, for calculating the posterior estimate of the object state, the measurement likelihoods are computed in the state space, instead of the measurement space, by mapping each measurement to the global coordinate system. The map consists of a nonlinear and an affine part. While the affine transformation trivially preserves the Gaussian nature, the nonlinear part can be well approximated as affine too. This approach allows the particles to be drawn directly from the state space; hence the evaluation of the measurement model is not needed, which saves computational power.
- Published
- 2017
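The likelihood construction described in the abstract (a nonlinear polar-to-Cartesian map composed with an affine sensor-to-global transform, after which a Gaussian is evaluated directly in the state space) can be sketched as follows. The isotropic noise model, the 2-D planar frames, and all numbers are simplifying assumptions, not the paper's exact setup:

```python
import math

def polar_to_cartesian(rng, bearing):
    """Nonlinear part of the map: radar (range, bearing) -> sensor-frame x, y."""
    return (rng * math.cos(bearing), rng * math.sin(bearing))

def sensor_to_global(point, sensor_xy, sensor_yaw):
    """Affine part: rotate by the sensor's yaw, then translate to its position."""
    x, y = point
    c, s = math.cos(sensor_yaw), math.sin(sensor_yaw)
    return (sensor_xy[0] + c * x - s * y, sensor_xy[1] + s * x + c * y)

def state_space_likelihood(particle_xy, measurement_xy, sigma):
    """Isotropic Gaussian likelihood of a particle, evaluated in global coordinates."""
    dx = particle_xy[0] - measurement_xy[0]
    dy = particle_xy[1] - measurement_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

# A radar return at range 5, bearing 0, seen by a sensor at (1, 2) with zero yaw,
# maps to the global point (6, 2); particles are then weighted there directly.
z = sensor_to_global(polar_to_cartesian(5.0, 0.0), (1.0, 2.0), 0.0)
print(z)  # → (6.0, 2.0)
```

Weighting particles against the mapped measurement in this way avoids evaluating the forward measurement model per particle, which is the computational saving the abstract refers to.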
37. Effect of character size and lighting on legibility of electronic papers
- Author
-
Der-Song Lee, Kong-King Shieh, Shie-Chang Jeng, and I-Hsuan Shen
- Subjects
Liquid-crystal display ,Cholesteric liquid crystal ,business.industry ,Illuminance ,Legibility ,law.invention ,Human-Computer Interaction ,Light source ,Character (mathematics) ,Optics ,Hardware and Architecture ,law ,Computer vision ,Electronic paper ,Artificial intelligence ,Electrical and Electronic Engineering ,Visual angle ,business ,Mathematics - Abstract
Effects of character size, ambient illuminance, and light source on the legibility of electronic paper displays (electrophoretic display and cholesteric liquid crystal display) were studied and compared with paper. Sixty subjects participated in a letter-search task in the experiment. The results showed that search speed depends on the illuminance but not on the light source. Search speed increased as the illuminance increased from 300 to 700 to 1500 lx. Search speed also increased with character size, from a character height of 1.4 mm (9.6 min visual angle) through 2.2 mm (15.1 min visual angle) to 3.3 mm (22.7 min visual angle), and the increase leveled off at 4.3 mm (29.6 min visual angle). The effect of character size on accuracy was also significant: accuracy increased with character size. However, the effects of illuminance and light source on accuracy were not statistically significant. Based on the results of this study, it seems that e-paper displays may need greater illumination (700 lx or higher) and greater character size (3.3 mm, or 22.7 min of visual angle).
- Published
- 2008
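The character heights and visual angles quoted in the abstract are mutually consistent with a viewing distance of about 500 mm; that distance is an inference, not something the abstract states. The conversion itself is one line of trigonometry:

```python
import math

def visual_angle_arcmin(height_mm, distance_mm):
    """Visual angle (minutes of arc) subtended by a character at a viewing distance."""
    return math.degrees(math.atan(height_mm / distance_mm)) * 60

# Assuming a ~500 mm viewing distance reproduces the abstract's figures:
# 1.4 mm -> 9.6 arcmin, 2.2 -> 15.1, 3.3 -> 22.7, 4.3 -> 29.6
for height in (1.4, 2.2, 3.3, 4.3):
    print(height, "mm ->", round(visual_angle_arcmin(height, 500), 1), "arcmin")
```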
38. PHOTOGRAPHIC PAPER TEXTURE CLASSIFICATION USING MODEL DEVIATION OF LOCAL VISUAL DESCRIPTORS
- Author
-
David Picard, Inbar Fijalkow, Ngoc-Son Vu, Multimedia Indexation and Data Integration (MIDI), Equipes Traitement de l'Information et Systèmes (ETIS - UMR 8051), Ecole Nationale Supérieure de l'Electronique et de ses Applications (ENSEA)-Centre National de la Recherche Scientifique (CNRS)-CY Cergy Paris Université (CY)-Ecole Nationale Supérieure de l'Electronique et de ses Applications (ENSEA)-Centre National de la Recherche Scientifique (CNRS)-CY Cergy Paris Université (CY), ICI, and Picard, David
- Subjects
Texture compression ,Contextual image classification ,[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing ,business.industry ,Image classification ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Texture (geology) ,Image texture analysis ,ComputingMethodologies_PATTERNRECOGNITION ,Categorization ,Image texture ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,Texture filtering ,Computer vision ,Artificial intelligence ,business ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,[SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing ,ComputingMethodologies_COMPUTERGRAPHICS ,Feature detection (computer vision) ,Photographic paper ,Mathematics - Abstract
This paper investigates the classification of photographic paper textures using visual descriptors. Such classification is called fine grain due to the very low inter-class variability. We propose a novel image representation for photographic paper texture categorization, relying on the incorporation of a powerful local descriptor into an efficient higher-order model deviation where texture is represented by computing statistics on the occurrences of specific local visual patterns. We perform an evaluation on two different challenging datasets of photographic paper textures and show that such advanced methods indeed outperform existing descriptors.
- Published
- 2014
39. A Note on the Paper 'Fuzzy Analytic Hierarchy Process: Fallacy of the Popular Methods'
- Author
-
Jana Krejčí and Michele Fedrizzi
- Subjects
Fallacy ,Fuzzy analytic hierarchy process ,business.industry ,Analytic network process ,Analytic hierarchy process ,Type-2 fuzzy sets and systems ,Fuzzy logic ,Artificial Intelligence ,Control and Systems Engineering ,Reciprocity (social psychology) ,Artificial intelligence ,business ,Mathematical economics ,Software ,Fuzzy ahp ,Information Systems ,Mathematics - Abstract
In the last 30 years, several distinguished researchers have proposed and discussed different fuzzy versions of Saaty's well-known Analytic Hierarchy Process (AHP). The paper recently published by K. Zhü in the European Journal of Operational Research heavily criticizes the fuzzy approaches to AHP, claiming the fallacy of all of them. It therefore seems necessary to clarify whether the criticisms are well-founded or not. The present paper aims to rebut Zhü's claims by showing that the evidence and reasoning in his paper are weak and far from proving the fallacy of fuzzy AHP.
- Published
- 2015
40. Development of a Descriptive Paper Test Item and a Counting Formula for Evaluating Elementary School Students' Scientific Hypothesis Generating Ability
- Author
-
Dong Hoon Shin and Eun Byul Jo
- Subjects
Structure (mathematical logic) ,Test item ,business.industry ,Mathematics education ,Artificial intelligence ,Hypothesis ,business ,Quotient ,Mathematics - Abstract
The purpose of this study is to develop a descriptive paper test item that can evaluate elementary school students' HGA (scientific Hypothesis Generating Ability) and to propose a counting formula that can easily assess students' HGA objectively and quantitatively. So that the test item can evaluate all students from 3rd to 6th grade, the 'rabbit's ear' item was developed. The developed test item was distributed to four different elementary schools in Seoul, and a total of 280 students in the 6th grade solved it. All the students' responses to the item were analyzed. Based on the analyzed data, evaluation factors and evaluation criteria were extracted to design a Hypothesis Generating ability Quotient (HGQ). As a result, 'Explican's Degree of Likeness' and 'Hypothesis' Degree of Explanation' were chosen as the evaluation factors, and the precedent evaluation criteria were renewed. First, the Explican's Degree of Likeness criterion was reduced from four levels to three, and the content of each level was modified. Second, a new evaluation factor, 'Hypothesis' Degree of Explanation', was developed by combining three different evaluation criteria: 'level of explican', 'number of explican', and 'structure of explican'. This factor was designed to assess how elaborately the suggested hypothesis can explain the cause of a phenomenon. The newly designed evaluation factors and criteria can assess HGA in more detail and reduce scoring discordance among markers. Lastly, the developed counting formula is much simpler than the precedent Kwon's equation for evaluating the Hypothesis Explanation Quotient, so it can help easily distinguish a student's scientific hypothesis generating ability. Key words: scientific hypothesis generating ability, evaluation factor, evaluation criteria, hypothesis generating ability quotient, counting formula
- Published
- 2016
41. A position paper on the agenda for soft decision analysis
- Author
-
Robert Fullér and Christer Carlsson
- Subjects
Scrutiny ,Operations research ,Artificial Intelligence ,Logic ,Management science ,Bullwhip effect ,sort ,Fuzzy number ,Position paper ,Bullwhip ,Decision problem ,Mathematics ,Decision analysis - Abstract
In this position paper, we undertake a critical scrutiny of the decision analysis (DA) paradigm on the basis of experience we gained from working on the bullwhip effect. We had some success in sorting out the complexities of the bullwhip by using fuzzy numbers in the bullwhip models, and we found that this points the way to a more viable approach to sorting out complex decision problems. As a positive criticism of the DA paradigm, we have collected the insights we have gained into an agenda for soft decision analysis (SDA), which, we hasten to add, is not complete but, hopefully, an opening for focused research and continuing development.
- Published
- 2002
42. Why are papers about filters on residuated structures (usually) trivial?
- Author
-
Martin Víta
- Subjects
Pure mathematics ,Information Systems and Management ,Property (philosophy) ,Generalization ,Extension (predicate logic) ,Computer Science Applications ,Theoretical Computer Science ,Algebra ,Artificial Intelligence ,Control and Systems Engineering ,Simple (abstract algebra) ,Filter (mathematics) ,Residuated lattice ,Software ,Quotient ,Mathematics - Abstract
In this paper we introduce a notion of a t-filter on residuated lattices which is a generalization of several special types of filters. We provide some basic properties of t-filters and show how particular results about special types of filters (e.g. Extension property, Triple of equivalent characteristics, and Quotient characteristics) are uniformly covered by this simple general framework.
- Published
- 2014
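As a toy illustration of the kind of filter property the abstract generalizes, the sketch below checks the deductive-filter condition on a three-element Gödel chain, one of the simplest residuated lattices. The encoding is hypothetical and not taken from the paper; it only makes the closure condition concrete.

```python
from itertools import product

L = [0.0, 0.5, 1.0]                        # three-element Goedel chain
imp = lambda a, b: 1.0 if a <= b else b    # residuum of the minimum t-norm

def is_deductive_filter(F):
    """F contains the top element and is closed under modus ponens:
    x in F and (x -> y) in F imply y in F."""
    if 1.0 not in F:
        return False
    return all(y in F
               for x, y in product(L, repeat=2)
               if x in F and imp(x, y) in F)
```

On this chain, {1.0} and {0.5, 1.0} pass the check, while {0.0, 1.0} fails because 0.0 in F and imp(0.0, 0.5) = 1.0 in F would force 0.5 into F. Special filter types (implicative, Boolean, and so on) add further closure conditions of the same shape, which is what a uniform t-filter framework exploits.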
43. An enhanced memetic differential evolution in filter design for defect detection in paper production
- Author
-
Ville Tirronen, Kirsi Majava, Ferrante Neri, Tommi Kärkkäinen, and Tuomo Rossi
- Subjects
Paper ,Quality Control ,Mathematical optimization ,Population ,Evolutionary algorithm ,multimeme algorithms ,digital filter design ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,FIR filter ,Humans ,Industry ,Local search (optimization) ,Computer Simulation ,memetic algorithms ,education ,Metaheuristic ,Mathematics ,Probability ,edge detection ,education.field_of_study ,Electronic Data Processing ,Stochastic Processes ,Models, Statistical ,business.industry ,differential evolution ,paper production ,Models, Theoretical ,Computational Mathematics ,Filter design ,Differential evolution ,Simulated annealing ,Memetic algorithm ,business ,Algorithms ,Software - Abstract
This article proposes an Enhanced Memetic Differential Evolution (EMDE) for designing digital filters that aim at detecting defects in paper produced during an industrial process. Defect detection is handled by means of two Gabor filters, and their design is performed by the EMDE. The EMDE is a novel adaptive evolutionary algorithm which combines the powerful explorative features of Differential Evolution with the exploitative features of three local search algorithms employing different pivot rules and neighborhood generating functions: the Hooke Jeeves Algorithm, a Stochastic Local Search, and Simulated Annealing. The local search algorithms are adaptively coordinated by means of a control parameter that measures fitness distribution among individuals of the population and a novel probabilistic scheme. Numerical results confirm that Differential Evolution is an efficient evolutionary framework for the image processing problem under investigation and show that the EMDE performs well: its application leads to the design of an efficiently tailored filter. A comparison with various popular metaheuristics proves the effectiveness of the EMDE in terms of convergence speed, stagnation prevention, and capability of detecting high-performance solutions.
- Published
- 2008
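The EMDE itself is not reproduced in the abstract. The following is a minimal memetic differential evolution sketch under simplifying assumptions: a single Hooke-Jeeves-style coordinate probe stands in for the paper's three coordinated local searchers, and a fixed refinement schedule replaces the adaptive control parameter. It is demonstrated on the sphere function rather than a filter-design objective.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def local_search(x, f, step=0.1):
    # single coordinate probe in the spirit of Hooke-Jeeves
    best = list(x)
    for i in range(len(best)):
        for d in (step, -step):
            cand = list(best)
            cand[i] += d
            if f(cand) < f(best):
                best = cand
    return best

def memetic_de(f, dim=5, pop_size=20, gens=100, F=0.7, CR=0.9, seed=1):
    """DE/rand/1/bin with a periodic local refinement of the best
    individual (the memetic step)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for g in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jr = rng.randrange(dim)                      # forced crossover index
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):                    # greedy DE selection
                pop[i] = trial
        if g % 10 == 0:                                  # periodic memetic refinement
            k = min(range(pop_size), key=lambda i: f(pop[i]))
            pop[k] = local_search(pop[k], f)
    return min(pop, key=f)

best = memetic_de(sphere)
```

In the real EMDE, which local searcher to apply and when is chosen adaptively from the population's fitness distribution; this sketch only shows the exploration/exploitation split.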
44. Development of Four-Square Fiducial Markers for Analysis of Paper Analytical Devices
- Author
-
Jenna Wilson, Ewa Misiolek, Ian Bentley, and Tabitha Ricketts
- Subjects
business.industry ,Square (unit) ,Development (differential geometry) ,Computer vision ,General Medicine ,Artificial intelligence ,Fiducial marker ,business ,Mathematics - Abstract
Fiducial markers are used in image processing to determine locations of interest based on fixed points of reference. There are a number of applications for these markers across various fields ranging from advertising to radiation therapy. The four-square fiducial markers discussed in this manuscript allow for the determination of locations of interest on digital images. These new markers are easily detected, provide information about image orientation, and allow for local color sampling. The markers are intended for use in a pharmaceutical assessment process in which images of colorimetric chemical tests are taken by a smart-phone in the field, uploaded to a database, and analyzed to collect quantitative information about the colors resulting from the tests. KEYWORDS: Image Processing; Digital Imaging; Fiducial Markers; Image Thresholding; Color Calibration; MATLAB; Paper Analytical Devices
- Published
- 2016
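As a toy version of locating corner fiducials on a captured image, the sketch below thresholds the image and returns the dark-pixel centroid in each quadrant, assuming one marker per corner. The real four-square markers additionally encode orientation and local color-sampling regions, which this simplification omits.

```python
import numpy as np

def corner_marker_centroids(img, thresh=128):
    """Threshold the image and return the centroid (row, col) of the
    dark pixels in each quadrant, assuming one marker per corner."""
    dark = img < thresh
    h, w = img.shape
    quadrants = [(slice(0, h // 2), slice(0, w // 2)),
                 (slice(0, h // 2), slice(w // 2, w)),
                 (slice(h // 2, h), slice(0, w // 2)),
                 (slice(h // 2, h), slice(w // 2, w))]
    cents = []
    for rs, cs in quadrants:
        ys, xs = np.nonzero(dark[rs, cs])
        cents.append((ys.mean() + rs.start, xs.mean() + cs.start))
    return cents

demo = np.full((100, 100), 255)
for r, c in [(5, 5), (5, 85), (85, 5), (85, 85)]:
    demo[r:r + 10, c:c + 10] = 0           # four 10x10 dark squares
cents = corner_marker_centroids(demo)
```

Once the four centroids are known, the perspective transform between them and their nominal positions lets the analysis software rectify the image and sample colors at fixed device coordinates.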
45. Genetic algorithms for wavenumber selection in forensic differentiation of paper by linear discriminant analysis
- Author
-
Choong Yeun Liong, Abdul Aziz Jemain, Khairul Osman, and Loong Chuen Lee
- Subjects
business.industry ,Wavenumber ,Pattern recognition ,Artificial intelligence ,business ,Linear discriminant analysis ,Selection (genetic algorithm) ,Mathematics - Abstract
Selection of the most significant variables, i.e. the wavenumbers, from an infrared (IR) spectrum is always difficult to achieve. In this preliminary paper, the feasibility of genetic algorithms (GA) in identifying the most informative wavenumbers from 150 IR spectra of paper was investigated. The list of selected wavenumbers was then employed in Linear Discriminant Analysis (LDA). The GA procedure was repeated 30 times to obtain different lists of variables, and the performances of the LDA models were estimated via leave-one-out cross-validation. A total of six to eight wavenumbers were identified as valuable variables by the GA procedures. All 30 LDA models achieved correct classification rates between 97.3% and 100.0%. Therefore, the GA-LDA model could be a suitable tool for differentiating white papers that appear highly similar in their IR fingerprints.
- Published
- 2016
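The following is an illustrative sketch of GA-based variable selection under simplified assumptions: a bit-string chromosome marks the selected variables, and a Fisher-ratio separability score with a size penalty stands in for the paper's leave-one-out cross-validated LDA fitness, which would be costlier to evaluate. All names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_score(X, y, mask):
    """Sum over the selected variables of between-class variance
    divided by within-class variance (a separability surrogate)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    mu = Xs.mean(axis=0)
    between = sum(np.sum(y == c) * (Xs[y == c].mean(axis=0) - mu) ** 2
                  for c in np.unique(y))
    within = sum(((Xs[y == c] - Xs[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in np.unique(y))
    return float((between / (within + 1e-12)).sum())

def ga_select(X, y, pop=30, gens=40, penalty=0.05):
    """Bit-string GA: truncation selection, one-point crossover,
    bit-flip mutation; the penalty discourages large variable sets."""
    n_feat = X.shape[1]
    fitness = lambda m: fisher_score(X, y, m) - penalty * m.sum()
    P = rng.random((pop, n_feat)) < 0.2
    for _ in range(gens):
        fit = np.array([fitness(m) for m in P])
        keep = P[np.argsort(fit)[-(pop // 2):]]
        children = []
        while len(keep) + len(children) < pop:
            a, b = keep[rng.integers(len(keep), size=2)]
            cut = int(rng.integers(1, n_feat))
            child = np.concatenate([a[:cut], b[cut:]])
            children.append(child ^ (rng.random(n_feat) < 1.0 / n_feat))
        P = np.vstack([keep, np.array(children)])
    fit = np.array([fitness(m) for m in P])
    return P[fit.argmax()]

data_rng = np.random.default_rng(1)
X = data_rng.normal(size=(120, 20))
y = np.array([0] * 60 + [1] * 60)
X[y == 1, 0] += 3.0     # only the first two variables carry class information
X[y == 1, 1] += 3.0
mask = ga_select(X, y)
```

With cross-validated LDA accuracy as the fitness, as in the paper, the structure is the same; only the evaluation of each chromosome changes.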
46. Five Notes on the Application of Proof Theory to Computer Science.
- Author
-
Stanford Univ., CA. Inst. for Mathematical Studies in Social Science. and Kreisel, Georg
- Abstract
The primary aim of these five technical papers is to indicate aspects of proof theory which may be of use in the study of non-numerical computing. The three main papers are entitled "Checking of Computer Programs," "Consistency Proofs and Programs for Translators," and "Experiments with Computers on the Complexity of Non-numerical Computations." The author shows that many theorems on computability in traditional metamathematics are of little use to the computer scientist because they do not lead to feasible algorithms. He also suggests alternative approaches to proof theory which would be of greater applicability. (MM)
- Published
- 1971
47. Remarks on the paper by A. De Visscher, 'what does the g-index really measure?'
- Author
-
Leo Egghe
- Subjects
Human-Computer Interaction ,Informetrics ,Square root ,Artificial Intelligence ,Computer Networks and Communications ,Assertion ,g-index ,Measure (mathematics) ,Mathematical economics ,Software ,Information Systems ,Mathematics - Abstract
The author presents a different view on properties of impact measures than given in the paper of De Visscher (2011). He argues that a good impact measure works better when citations are concentrated rather than spread out over articles. The author also presents theoretical evidence that the g-index and the R-index can be close to the square root of the total number of citations, whereas this is not the case for the A-index. Here the author confirms an assertion of De Visscher. © 2012 Wiley Periodicals, Inc.
- Published
- 2012
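The g-index discussed above is the largest g such that the g most-cited papers jointly received at least g squared citations. A quick computation illustrates the author's point that the g-index approaches the square root of the total citation count when citations are concentrated, but stays well below it when they are spread out.

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together
    received at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

# concentrated record: 100 citations on a single paper, nine uncited
concentrated = g_index([100] + [0] * 9)
# spread-out record: 100 citations spread over 100 papers
spread = g_index([1] * 100)
```

Both records have 100 citations in total (square root 10); the concentrated record reaches g = 10, while the spread-out one only reaches g = 1, matching the claim that the g-index rewards concentration.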
48. Decision Making through the Game of Scissors-Paper-Stone and Simulation
- Author
-
Dae-Hyeon Cho
- Subjects
Non-cooperative game ,Sequential game ,business.industry ,Example of a game without a value ,Simulations and games in economics education ,Repeated game ,Combinatorial game theory ,Artificial intelligence ,business ,Game tree ,Video game design ,Mathematical economics ,Mathematics - Abstract
In many sports games, a coin toss or the game of scissors-paper-stone is used to decide which side will begin first. We consider the game of scissors-paper-stone when each of two teams is composed of N players. The game of scissors-paper-stone is repeated until the winner between the two teams is decided. Using sample spaces and enumerating their elements, we calculated the mean number of games for N = 1, N = 2, and N = 3. For N = 1 and N = 2, we simulated the game and found the mean and variance for repetition numbers n = 20, 30, 50, 100, 150, and 200.
- Published
- 2010
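For the N = 1 case above, each round between two players is decisive with probability 2/3 (a tie occurs with probability 1/3), so the number of rounds until a winner is geometric with mean 3/2. A quick Monte Carlo check, offered as an illustration rather than the authors' own simulation, confirms this.

```python
import random

def rounds_until_winner(rng):
    """Play scissors-paper-stone between two players until the
    throws differ; return the number of rounds played."""
    rounds = 0
    while True:
        rounds += 1
        if rng.randrange(3) != rng.randrange(3):   # not a tie
            return rounds

rng = random.Random(42)
n = 100_000
mean_rounds = sum(rounds_until_winner(rng) for _ in range(n)) / n
```

Over 100,000 trials the sample mean lands very close to the theoretical 1.5; the team cases N = 2 and N = 3 add combinatorics over who plays whom but follow the same simulation pattern.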
49. Oil-paper aging evaluation by fuzzy clustering and factor analysis to statistical parameters of partial discharges
- Author
-
Stanislaw Grzybowski, Jian Li, Lijun Yang, and Ruijin Liao
- Subjects
Fuzzy clustering ,business.industry ,Fuzzy set ,Statistical parameter ,Pattern recognition ,Thermal aging ,Fuzzy logic ,Kernel (statistics) ,Partial discharge ,Electronic engineering ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Cluster analysis ,Mathematics - Abstract
A thermal aging experiment was conducted on oil-paper insulation to evaluate the insulation's response under thermal stress. Oil-papers at different aging stages were used to create oil-paper bound gas cavity specimens, which were used to collect data on degrees of polymerization (DP) and partial discharge (PD). Evaluation of oil-paper aging was based on statistical operators of PD, and factor analysis was used to extract the principal parameters of PD. Three types of fuzzy clustering approaches were used to classify PD of aged oil-paper: fuzzy c-means, kernel fuzzy c-means, and possibilistic fuzzy c-means. The clustering results showed that possibilistic fuzzy c-means clustering was capable of classifying PD of oil-paper bound gas cavity specimens. The factor analysis method was also verified to be helpful in fuzzy clustering of PD data samples by reducing the number of parameters.
- Published
- 2010
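Of the three clustering variants compared above, plain fuzzy c-means is the baseline. The sketch below is a minimal numpy implementation applied to synthetic 2-D data rather than PD statistics; it alternates the standard membership and weighted-center updates.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate the membership update and the
    weighted-center update for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    e = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m                                     # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-e)
        U = inv / inv.sum(axis=1, keepdims=True)       # membership update
    return U, centers

blob_rng = np.random.default_rng(1)
X = np.vstack([blob_rng.normal(0.0, 0.5, (30, 2)),
               blob_rng.normal(8.0, 0.5, (30, 2))])
U, centers = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
```

The kernel and possibilistic variants used in the paper change only the distance term and the membership constraint, respectively; the possibilistic form relaxes the requirement that memberships sum to one, which makes it more robust to outliers such as noisy PD pulses.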
50. Recycled paper visual indexing for quality control
- Author
-
José Orlando Maldonado and Manuel Graña
- Subjects
Discrete wavelet transform ,business.industry ,Search engine indexing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Process (computing) ,Wavelet transform ,Pattern recognition ,Linear discriminant analysis ,Computer Science Applications ,Gabor filter ,Artificial Intelligence ,Pattern recognition (psychology) ,Point (geometry) ,Computer vision ,Artificial intelligence ,business ,Mathematics - Abstract
In this paper, we describe the development of a system for evaluating a specific quality characteristic of recycled paper sheets using techniques of image analysis and pattern recognition. We call the phenomenon of interest Bumpiness, which is new in the literature on paper quality. This phenomenon is characterized by the appearance of macroscopic undulations on the paper sheet surface that may emerge shortly or some time after its production. We explore the detection and measurement of this defect by means of computer vision and statistical pattern recognition techniques that may allow early detection at the production site. Our goal is to give a scalar continuous measure of Bumpiness. We propose features computed from Gabor filter banks (GFB) and discrete wavelet transforms (DWT) for the characterization of paper sheet surface Bumpiness in recycled paper images. The starting point is to state the problem as a classification of the paper sheet images into two classes: low and high Bumpiness. In this setting we obtain, with both proposed texture modelling approaches (GFB and DWT), classification accuracies comparable to the agreement between human observers. The best performance is obtained using DWT features. Finally, we propose as the scalar index of Bumpiness the Fisher discriminant analysis (FDA) function defined on the space of the best features for the classification task. We perform an innovative validation process of this Bumpiness index, based on the ordering of random pairs of images, obtaining very high agreement with the human observers.
- Published
- 2009
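The pipeline above, wavelet texture features followed by an FDA projection used as a scalar index, can be mirrored in miniature. The sketch below is an illustrative toy, not the authors' feature set: one-level Haar subband energies stand in for the full DWT features, and synthetic smooth versus noisy images stand in for low versus high Bumpiness sheets.

```python
import numpy as np

def haar_energies(img):
    """One-level Haar subband energies (LL, LH, HL, HH) as a
    4-dimensional texture feature vector."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    bands = ((a + b + c + d) / 4, (a + b - c - d) / 4,
             (a - b + c - d) / 4, (a - b - c + d) / 4)
    return np.array([(s ** 2).mean() for s in bands])

def fisher_index(F0, F1):
    """Fisher discriminant direction between two feature sets;
    returns a function mapping a feature vector to a scalar index
    that grows toward the second class."""
    m0, m1 = F0.mean(axis=0), F1.mean(axis=0)
    Sw = np.cov(F0.T) + np.cov(F1.T) + 1e-6 * np.eye(F0.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    return lambda f: float(f @ w)

img_rng = np.random.default_rng(0)
ramp = np.add.outer(np.arange(16.0), np.arange(16.0))
smooth = [ramp + img_rng.normal(0.0, 0.1, (16, 16)) for _ in range(10)]
noisy = [img_rng.normal(0.0, 5.0, (16, 16)) for _ in range(10)]
F0 = np.array([haar_energies(i) for i in smooth])      # "low Bumpiness" class
F1 = np.array([haar_energies(i) for i in noisy])       # "high Bumpiness" class
index = fisher_index(F0, F1)
```

Training the discriminant on the two labeled classes and then reading off the continuous projection, rather than the class decision, is exactly the move that turns a binary classifier into a scalar quality index.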