9,490 results
Search Results
2. Resetting Targets: Examining Large Effect Sizes and Disappointing Benchmark Progress. Occasional Paper. RTI Press Publication OP-0060-1904
- Author
-
RTI International, Stern, Jonathan M. B., and Piper, Benjamin
- Abstract
This paper uses recent evidence from international early grade reading programs to provide guidance about how best to create appropriate targets and more effectively identify improved program outcomes. Recent results show that World Bank and US Agency for International Development-funded large-scale international education interventions in low- and middle-income countries tend to produce larger impacts than do interventions in the United States, as measured by effect sizes. However, these effect sizes rarely translate into large gains in mean oral reading fluency scores and are associated with only small increases in the proportion of students meeting country-level reading benchmarks. The limited impact of these low- and middle-income countries' reading programs on the proportion of students meeting reading benchmarks is in large part caused by right-skewed distributions of student reading scores. In other words, modest impacts on the proportion of students meeting benchmarks are caused by low mean scores and large proportions of nonreaders at baseline. It is essential to take these factors into consideration when setting program targets for reading fluency and comprehension. We recommend that program designers in lower-performing countries use baseline assessment data to develop benchmarks based on multiple performance categories that allow for more ambitious targets focused on reducing nonreaders and increasing beginning readers, with more modest targets aimed at improving oral reading fluency scores and increasing the percentage of proficient readers.
- Published
- 2019
3. How to Write a Research Paper
- Author
-
Borràs, Eulàlia
- Abstract
Generally speaking, when researchers write about their research, they are making a contribution to the scientific community and disseminating the results of their findings in scientific articles. This means that other researchers have access to the research produced and can examine the subjects raised in greater depth to advance scientific knowledge. This paper discusses the format of papers that are strictly academic. The specific structure of the text will be determined by whether it is for a master's dissertation, a doctoral thesis, a chapter of a specialist book or an article for a scientific journal. In the case of qualitative research, it is necessary, when writing the text, to bear in mind a series of processes that will be explained in this handbook, such as: (1) the justification for the research in terms of its social and educational interest, and in theoretical terms; (2) the gathering of information or data; (3) the treatment and organization of the data; (4) the adoption of a theoretical and methodological framework; (5) data analysis; (6) the interpretation of data in an original and/or creative way, and obtaining the findings; (7) setting out a discussion on the relevance of the results; and (8) setting out the conclusions. Differences between a master's dissertation and a thesis are also described, as are differences between articles in scientific journals and chapters of a book. [A Catalan version of this chapter is also included in the book. The transcription symbols used in this chapter are based on conventions developed by the GREIP group (see Moore & Llompart, this volume) and are included in the annex.]
- Published
- 2017
4. Discussion paper: implications for the further development of the successfully in emergency medicine implemented AUD2IT-algorithm
- Author
-
Christopher Przestrzelski, Antonina Jakob, Clemens Jakob, and Felix R. Hoffmann
- Subjects
handover, emergency medicine, process management, interoperability (IoP), data, Medicine, Public aspects of medicine, RA1-1270, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The AUD2IT-algorithm is a tool for structuring the data collected during emergency treatment. Its goal is, on the one hand, to structure the documentation of these data and, on the other hand, to provide a standardised data structure for the report given when an emergency patient is handed over. The AUD2IT-algorithm was developed to give residents a documentation aid that helps them structure medical reports without getting lost in unimportant details or forgetting important information. The sequence of anamnesis, clinical examination, consideration of differential diagnoses, technical diagnostics, interpretation and therapy is an academic classification rather than a description of the real workflow; in a real setting, most of these steps take place simultaneously. The application of the AUD2IT-algorithm should therefore also follow the real processes. A major advantage of the AUD2IT-algorithm is that it can serve as a structure for the entire treatment process and can also be used as a handover protocol within that process, ensuring that the current state of knowledge is available at each team time-out. The PR-E-(AUD2IT)-algorithm makes it possible to document a treatment process that, in principle, need not be limited to the field of emergency medicine. It could also be used and further developed in outpatient treatment, for example for the preparation and allocation of the resources needed at a general practice. The algorithm is a standardised tool that can be used by healthcare professionals at any level of training and gives users a sense of security in their daily work.
- Published
- 2024
- Full Text
- View/download PDF
5. Easing the Transition from Paper to Screen: An Evaluatory Framework for CAA Migration
- Author
-
McAlpine, Mhairi
- Abstract
Computer assisted assessment is becoming more and more common throughout further and higher education. There is some debate about how easy it will be to migrate current assessment practice to a computer enhanced format and how items which are currently re-used for formative purposes may be adapted to be presented online. This paper proposes an evaluatory framework to assess and enhance the practicability of large-scale CAA migration for existing items and assessments. The framework can also be used as a tool for exposing compromises between delivery mechanism and validity--exposing the limits of validity of modified paper-based assessments and highlighting the crucial areas for transformative assessments. (Contains 1 note, 5 tables, and 1 figure.)
- Published
- 2004
- Full Text
- View/download PDF
6. HS2 Phase One: Heritage GIS Digital Archive (Data paper)
- Author
-
Fred Farshid Aryankhesal
- Subjects
archaeology, data, archive, historic environment, hs2, Archaeology, CC1-960
- Abstract
High Speed 2 (HS2) will be the largest programme of historic environment investigation and recording works ever undertaken in the UK. It is certain that the creation of HS2's historic environment physical and digital archive (High Speed Two Ltd. 2023) is an integral part of the lasting legacy of the programme, which presents an unprecedented opportunity for significant knowledge creation. The HS2 historic environment works undertaken for Phase One of HS2, between London and the West Midlands, have resulted in a substantial digital archive, including Geographic Information Systems (GIS) data. This data paper highlights the GIS spatial datasets generated from the HS2 Phase One historic environment fieldwork programme. It explains the technical components of the datasets, which are deposited with the Archaeology Data Service (ADS).
- Published
- 2023
- Full Text
- View/download PDF
7. Dear Diary, Is Plastic Better than Paper? I Can't Remember: Comment on Green, Rafaeli, Bolger, Shrout, and Reis (2006)
- Author
-
Takarangi, Melanie K. T., Garry, Maryanne, and Loftus, Elizabeth F.
- Abstract
In this commentary, the authors discuss the implications of A. S. Green, E. Rafaeli, N. Bolger, P. E. Shrout, and H. T. Reis's (2006) diary studies with respect to memory. Researchers must take 2 issues into account when determining whether paper-and-pencil or handheld electronic diaries gather more trustworthy data. The first issue is a matter of prospective memory, and the second is a matter of reconstructive memory. The authors review the research on these issues and conclude that regardless of the type of diary researchers use, several factors can conspire to produce prompt--but inaccurate--data.
- Published
- 2006
8. Metalurgija Journal 1962-2022 y – List of Published Papers
- Author
-
I. Mamuzić
- Subjects
metallurgy, journal, articles, list, data, Mining engineering. Metallurgy, TN1-997
- Abstract
Over the interval 1962-2022, during 60 years of continuous publication, authors from 40 countries on all five continents, from Mexico to China, have published in the Metalurgija Journal. The goal of this article is to give the list of papers published in this interval: 199 issues (238 numbers) containing 2721 scientific and technical papers and 287 contributions (3008 papers in total) by authors whose investigation results and ideas have been examined and found a place on the pages of this Journal. Thanks to all.
- Published
- 2022
9. A Shared Lens around Sensemaking in Learning Analytics: What Activity Theory, Definition of a Situation and Affordances Can Offer
- Author
-
Oleksandra Poquet
- Abstract
The paper argues that learning analytics as a research field can benefit from a theory-informed shared language to describe sensemaking of learning and teaching data. To make the case for such shared language, first, I critically review prominent sensemaking theories to then demonstrate how studies in learning analytics do not use coherent descriptions of sensemaking, eclectically combining the paradigms that have underlying differences. I then propose a conceptualization of sensemaking that overcomes the differences between these theories and explains how the concepts of "activity system," the "definition of the situation" and "affordances" can be used to capture individual differences in sensemaking. The paper concludes with a preliminary framework and examples demonstrating its utility in raising new theoretical questions, informing design principles and providing shared language for researchers in learning analytics.
- Published
- 2024
- Full Text
- View/download PDF
10. Can't Inflate Data? Let the Models Unite and Vote: Data-Agnostic Method to Avoid Overfit with Small Data
- Author
-
Shimmei, Machi and Matsuda, Noboru
- Abstract
We propose an innovative, effective, and data-agnostic method to train a deep neural network model with an extremely small training dataset, called VELR (Voting-based Ensemble Learning with Rejection). In educational research and practice, providing valid labels for a sufficient amount of data to be used for supervised learning can be very costly and often impractical. The shortage of training data often results in deep neural networks overfitting. There are many methods to avoid overfitting, such as data augmentation and regularization. However, data augmentation is considerably data dependent and does not usually work well for natural language processing tasks, and regularization is often quite task specific and costly. To address this issue, we propose an ensemble of overfitting models with uncertainty-based rejection. We hypothesize that misclassification can be identified by estimating the distribution of the class-posterior probability P(y|x) as a random variable. The proposed VELR method is data independent, and it does not require changes to the model structure or the re-training of the model. Empirical studies demonstrated that VELR achieved classification accuracy of 0.7 with only 200 samples per class on the CIFAR-10 dataset, but 75% of input samples were rejected. VELR was also applied to a question generation task using a BERT language model with only 350 training data points, which resulted in generating questions that are indistinguishable from human-generated questions. The paper concludes that VELR has potential applications to a broad range of real-world problems where misclassification is very costly, which is quite common in the educational domain. [For the complete proceedings, see ED630829.]
- Published
- 2023
11. Cleaning up the paper trail – our clinical notes in open view
- Author
-
Lambe, Gerard, Linnane, Niall, Callanan, Ian, and Butler, Marcus W.
- Published
- 2018
- Full Text
- View/download PDF
12. Manuscripts Submitted for Publication in the Information Profession in Africa: A Comparative Analysis of Characteristics of Rejected and Accepted Papers.
- Author
-
Aina, L. O. and Mabawonku, I. M.
- Abstract
Examines the characteristics of rejected manuscripts submitted to the "African Journal of Library, Archives and Information Science." Most of the papers were rejected because they contributed nothing new to knowledge (65.5%), used unreliable data (13.1%) and lacked focus (13.1%). There were no remarkable differences with regard to status and affiliations between authors of rejected and accepted papers. (Author/AEF)
- Published
- 1998
13. "Paper More Precious Than Blood": Chinese Exclusion Era Identity Documentation Processes and Racialization of Identity Data.
- Author
-
Nham, Kai
- Subjects
RACIALIZATION, CHINESE Exclusion Act of 1882
- Abstract
This project interrogates the United States' national fixation on the answer to the question: Who are you? In this article, it is posed that identity documentation practices arising out of the Chinese Exclusion Act era cast identity as an empirical and immutable phenomenon, specifically in response to the racialization of American-born Chinese settlers as duplicitous, through the mechanisms by which information is collected, the actual information itself, and the cross-references or connections created between cases. Through tracing this lineage, racialized identification data is identified and theorized as part of hegemonic data regimes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Sanctity of Digital Privacy and Personal Data during COVID-19: Are Youths Enough Digitally Literate to Deal with It?
- Author
-
Ghosh, Swagata, Chauhan, Gajendra Singh, and Kotwal, Renu
- Abstract
The COVID-19 pandemic has fast-tracked the development of digital applications and inspired everyone to adapt to the technologies to curb the spread of the outbreak. As this crisis intensifies, the rapid usage of digital devices and apps has echoed serious concerns about civil liberties, privacy, and data protection. Considering the situation, this research aimed to explore the internet usage habits of the youths of West Bengal, a state in eastern India, during COVID-19. The paper also explored their experiences of using various digital applications, their fundamental digital literacy, and how safely they protect their data from breaches. The paper presents the results of an online survey conducted among youths in West Bengal. The results, from 215 participants, highlighted that the increased use of these digital applications has not been matched by digital privacy literacy among the young generation of the state. While the pandemic has raised their concerns over digital privacy and data protection, they do not undertake any strong protection mechanisms to safeguard themselves digitally. Finally, the paper suggests suitable plans to raise awareness among this generation and to foster healthy digital citizenship within a proper regulatory framework, as this is the need of the hour.
- Published
- 2023
15. Psychological Testing at Entrance Exam at 'Dunarea de Jos' University of Galati, Romania
- Author
-
Andrei, Mihaela and Pricopie-Filip, Alina
- Abstract
The university admission test comes after the high school graduation exam - the baccalaureate. The baccalaureate results of each candidate must be known by the university admissions committee. They provide information on the degree of intelligence and the skills acquired to date, but also on the presence of inclinations and skills indispensable to the fulfillment of professional aspirations. The university entrance exam should not be focused only on the quantity and quality of knowledge. Beyond that, one of the objectives of this exam must be to test the candidate's interest in completing the programme of study for which they have opted, as well as the skills that will carry them productively and satisfyingly through the entire cycle of higher education and towards future achievements. To achieve this, three psychological investigation tools (tests) can be used to highlight: (1) the candidate's personality profile; (2) the interest and motivational profile; and (3) the aptitude profile. The paper proposes a new admission methodology: the data collected through the proposed tests, correlated with the high school graduation data, can be used either to admit candidates to the profile they opted for or to redirect them towards a more suitable path. [For the full proceedings, see ED630948.]
- Published
- 2022
16. Towards Real Interpretability of Student Success Prediction Combining Methods of XAI and Social Science
- Author
-
Cohausz, Lea
- Abstract
Despite calls to increase the focus on explainability and interpretability in EDM and, in particular, student success prediction, so that it becomes useful for personalized intervention systems, only a few efforts have been undertaken in that direction so far. In this paper, we argue that this is mainly due to the limitations of current Explainable Artificial Intelligence (XAI) approaches regarding interpretability. We further argue that the issue, thus, calls for a combination of AI and social science methods utilizing the strengths of both. For this, we introduce a step-wise model of interpretability where the first step consists of knowing important features, the second step of understanding counterfactuals regarding a particular person's prediction, and the third step of uncovering causal relations relevant for a set of similar students. We show that LIME, a current XAI method, reaches the first but not subsequent steps. To reach step two, we propose an extension to LIME, Minimal Counterfactual-LIME, finding the smallest number of changes necessary to change a prediction. Reaching step three, however, is more involved and additionally requires theoretical and causal reasoning - to this end, we construct an easily applicable framework. Using artificial data, we showcase that our methods can recover connections among features; additionally, we demonstrate its applicability on real-life data. Limitations of our methods are discussed and collaborations with social scientists encouraged. [For the full proceedings, see ED623995.]
- Published
- 2022
17. Can Population-Based Engagement Improve Personalisation? A Novel Dataset and Experiments
- Author
-
Bulathwela, Sahan, Verma, Meghana, Pérez-Ortiz, María, Yilmaz, Emine, and Shawe-Taylor, John
- Abstract
This work explores how population-based engagement prediction can address cold-start at scale in large learning resource collections. The paper introduces: (1) VLE, a novel dataset that consists of content and video based features extracted from publicly available scientific video lectures coupled with implicit and explicit signals related to learner engagement; (2) two standard tasks related to predicting and ranking context-agnostic engagement in video lectures with preliminary baselines; and (3) a set of experiments that validate the usefulness of the proposed dataset. Our experimental results indicate that the newly proposed VLE dataset leads to building context-agnostic engagement prediction models that are significantly more performant than ones based on previous datasets, mainly owing to the increase in training examples. The VLE dataset's suitability for building models for Computer Science/Artificial Intelligence education focused on e-learning/MOOC use cases is also evidenced. Further experiments in combining the built model with a personalising algorithm show promising improvements in addressing the cold-start problem encountered in educational recommenders. This is the largest and most diverse publicly available dataset to our knowledge that deals with learner engagement prediction tasks. The dataset, helper tools, descriptive statistics and example code snippets are available publicly. [For the full proceedings, see ED623995.]
- Published
- 2022
18. DerSql, Generating SQL from an Entity-Relation Diagram
- Author
-
Andrea Domínguez-Lara and Wulfrano Arturo Luna-Ramírez
- Abstract
Automatic code generation is the process of generating source code snippets from a program, i.e., code that generates code. Its importance lies in facilitating software development; it is particularly useful for implementing software designs such as engineering diagrams, where automatic code generation copes with the problem of how to obtain code from a graphical representation, for instance a UML diagram or a Relational Diagram. Some advantages of automatic code generation are: a) obtaining the source code more quickly and with lower margins of error; and b) its promise in teaching contexts, since it provides instructors with a teaching tool and the expected results of assignments can be assessed by comparing students' results with the automatically generated code. Furthermore, one of the most frequent tasks in classrooms when teaching relational databases is the design of Entity-Relationship Diagrams, which eventually become SQL code. The manual transition from an Entity-Relationship Diagram to SQL code is a time-consuming process and requires a skilled eye to be performed successfully. In this paper, we present "DerSql," an extension of the DIA Diagrammer, a well-known free software engineering tool, to automatically generate SQL code from Entity-Relationship Diagrams. The results are tested for relationships of 1 -- 1 and 1 -- n arities. We consider that "DerSql" represents a remarkable tool for teaching, while it is a promising advance in developing DIA as a 4th Generation software engineering application. [For the full proceedings, see ED638044.]
- Published
- 2022
19. Skill up Tennessee: Job Training That Works
- Author
-
Sneed, Christopher T., Upendram, Sreedhar, Cummings, Clint, and Fox, Janet E.
- Abstract
Employment and training services offered through Extension are part of and continue a long tradition of policy-focused employment and job training. This paper chronicles the successes of UT Extension's work as a third-party partner in the delivery of workforce development programming geared toward individuals receiving Supplemental Nutrition Assistance Program (SNAP) benefits. The paper begins with an overview of the federal program and a discussion of how Tennessee forged a state-level partnership for the delivery of workforce services. Data showing program success, including the number of participants served, supportive services offered, and economic impact, are highlighted. Finally, lessons learned are outlined.
- Published
- 2023
20. Privacy Harm and Non-Compliance from a Legal Perspective
- Author
-
Suvineetha Herath, Haywood Gelman, and Lisa Mckee
- Abstract
In today's data-sharing paradigm, personal data has become a valuable resource that intensifies the risk of unauthorized access and data breach. Increased data mining techniques used to analyze big data have posed significant risks to data security and privacy. Consequently, data breaches are a significant threat to individual privacy. Privacy is a multifaceted concept covering many areas, including the rights to access, erase, and rectify personal data. This paper explores the legal aspects of privacy harm and how they transform into legal action. Privacy harm is the negative impact to an individual as a result of the unauthorized release, gathering, distillation, or expropriation of personal information. Privacy Enhancing Technologies (PETs) emerged as a solution to address data privacy issues and minimize the risk of privacy harm. It is essential to implement privacy enhancement mechanisms to protect Personally Identifiable Information (PII) from unlawful use or access. FIPPs (Fair Information Practice Principles), based on the 1973 Code of Fair Information Practice (CFIP) and the Organization for Economic Cooperation and Development (OECD), are a collection of widely accepted, influential US codes that agencies use when evaluating information systems, processes, programs, and activities affecting individual privacy. Regulatory compliance places a responsibility on organizations to follow best practices to ensure the protection of individual data privacy rights. This paper will focus on the FIPPs, their relevance to US state privacy laws, their influence on the OECD, and their reference to the EU General Data Protection Regulation (GDPR).
- Published
- 2023
21. Classroom Equity Data Inquiry for Racial Equity
- Author
-
Rebekah Sidman-Taveau
- Abstract
Longstanding inequities exist in community colleges across the United States. To address these inequities, California Community Colleges educators have engaged in a variety of practices including the writing of equity plans and participation in equity data inquiry. However, there is an urgent need for greater focus on racial equity and for more faculty involvement in equity work at the classroom level. This paper presents a teacher case study exploring Classroom Equity Data Inquiry (CEDI), a tool for faculty professional learning focused on equitable student outcomes. In CEDI, professors examine their disaggregated classroom data, reflect on their class equity gaps, and pursue relevant professional development. They implement targeted interventions and then assess those interventions. This paper describes the author's sustained CEDI utilizing six years of equity data in her English as a Second Language classes at a small northern California community college. First, it provides a definition and rationale for CEDI. Second, it details the author's CEDI process and challenges. Third, it shares the author's changes in thinking and practice including high impact interventions the author implemented to reduce equity gaps for men of color in her classes. Fourth, the article describes positive qualitative student data and increased success and retention rates for Hispanic and multi-race males following the interventions. The article concludes that CEDI requires training, support, and time, but that the approach merits further research. More research is needed on CEDI methods and their possible impact on racial equity in the classroom.
- Published
- 2024
- Full Text
- View/download PDF
22. Toward Redefining Library Research Support Services in Australia and Aotearoa New Zealand: An Evidence-Based Practice Approach
- Author
-
Alisa Howlett, Eleanor Colla, and Rebecca Joyce
- Abstract
An increasingly complex and demanding research landscape has seen university libraries rapidly evolve their services. While research data management, bibliometrics, and research impact services have predominantly featured in the literature to date, the full scope of support that libraries are currently providing to their institutions is unknown. This paper aims to present an up-to-date view of the scope and extent of research support services provided by university libraries across Australia and Aotearoa New Zealand. A coding process analyzed content data from university library websites. Eleven research support areas were identified. Service delivery is split between synchronous and asynchronous modes. This paper describes a lived experience of an evidence-based library and information practice approach to improving research support services at two Australian university libraries, and while it highlights the continued maturation of research support services, more research is needed to better understand influences on service development.
- Published
- 2024
- Full Text
- View/download PDF
23. Law Case Teaching Combining Big Data Environment with SPSS Statistics
- Author
-
Zhao Wang
- Abstract
This paper proposes an online learning platform learner DM method based on an improved fuzzy C-means clustering (FCM) algorithm, constructs a learner feature database, and combines clustering analysis with SPSS statistical methods to statistically summarize big data in law, thereby addressing the deficiencies of the static and absolute classification of students in the student model. In the experimental section of the paper, the improved algorithm is implemented and the experimental data are analyzed. The results show that the learner behavior feature extraction model in this paper has fewer errors and a higher recall rate: compared with the traditional CF algorithm, the error rate is reduced by 19.64% and the recall rate is increased by 22.85%. This study provides better targeted teaching programs and case resources for legal case teaching and promotes innovation in the legal case teaching mode.
- Published
- 2024
- Full Text
- View/download PDF
24. The Data Awareness Framework as Part of Data Literacies in K-12 Education
- Author
-
Lukas Höper and Carsten Schulte
- Abstract
Purpose: In today's digital world, data-driven digital artefacts pose challenges for education, as many students lack an understanding of data and feel powerless when interacting with them. This paper aims to address these challenges and introduces the data awareness framework. It focuses on understanding data-driven technologies and reflecting on the role of data in everyday life. The paper also presents an empirical study on young school students' data awareness. Design/methodology/approach: The study involves a teaching unit on data awareness framed by a pre- and post-test design using a questionnaire on students' awareness and understanding of and reflection on data practices of data-driven digital artefacts. Findings: The study's findings indicate that the data awareness framework supports students in understanding data practices of data-driven digital artefacts. The findings also suggest that the framework encourages students to reflect on these data practices and think about their daily behaviour. Originality/value: Students learn a model about interactions with data-driven digital artefacts and use it to analyse data-driven applications. This approach appears to enable students to understand these artefacts from everyday life and reflect on these interactions. The work contributes to research on data and artificial intelligence literacies and suggests a way to support students in developing self-determination and agency during interactions with data-driven digital artefacts.
- Published
- 2024
- Full Text
- View/download PDF
25. Semi-Supervised Learning Method for Adjusting Biased Item Difficulty Estimates Caused by Nonignorable Missingness under 2PL-IRT Model
- Author
-
Xue, Kang, Huggins-Manley, Anne Corinne, and Leite, Walter
- Abstract
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of nonignorable missing data in the VLE log file data, and this is expected to negatively impact IRT item parameter estimation accuracy, which then negatively impacts any future ability estimates utilized in the VLE. In the psychometric literature, methods for handling missing data are mostly centered around conditions in which the data and the amount of missing data are not as large as those that come from VLEs. In this paper, we introduce a semi-supervised learning method to deal with a large proportion of missingness contained in VLE data from which one needs to obtain unbiased item parameter estimates. The proposed framework showed its potential for obtaining unbiased item parameter estimates that can then be fixed in the VLE in order to obtain ongoing ability estimates for operational purposes. [This paper was published in: V. Cavalli-Sforza, A. N. Rafferty, C. Romero, & J. Whitehill (Eds.), "Proceedings of The 13th International Conference on Educational Data Mining (EDM 2020)," (pp. 715-719).]
- Published
- 2020
26. Choosing American Colleges from Afar: Chinese Students' Perspectives
- Author
-
Yefei Xue, Siguo Li, and Liang Ding
- Abstract
Chinese students studying abroad have been increasing rapidly in the past decades and have become a significant financial contribution to receiving countries. Accordingly, understanding their enrollment choice is essential to facilitate college marketing and admission strategies. Though the decision process is believed to be different from that of domestic students, empirical analysis of Chinese students' enrollment choices is still lacking. This paper fills the void by examining the influential factors of Chinese students' enrollment choice with novel student-level data. We find that in addition to factors domestic students typically consider, such as financial aid and academic quality, Chinese students particularly emphasize college ranking, reputation, and location in their decision process. Furthermore, unlike domestic students who usually prefer colleges with proximity to home, Chinese students' location preference is linked to job prosperity. We also find that the impact of the factors varies for students from different regions of China, which can be attributed to uneven economic development within the country.
- Published
- 2024
27. Data Papers as a New Form of Knowledge Organization in the Field of Research Data.
- Author
-
Schöpfel, Joachim, Farace, Dominic, Prost, Hélène, and Zane, Antonella
- Subjects
KNOWLEDGE management, BUSINESS models, METADATA, SCHOLARLY publishing, DESCRIPTIVE statistics
- Abstract
Data papers have been defined as scholarly journal publications whose primary purpose is to describe research data. Our survey provides more insights about the environment of data papers, i.e., disciplines, publishers and business models, and about their structure, length, formats, metadata, and licensing. Data papers are a product of the emerging ecosystem of data-driven open science. They contribute to the FAIR principles for research data management. However, the boundaries with other categories of academic publishing are partly blurred. Data papers are (can be) generated automatically and are potentially machine-readable. Data papers are essentially information, i.e., description of data, but also partly contribute to the generation of knowledge and data on its own. Part of the new ecosystem of open and data-driven science, data papers and data journals are an interesting and relevant object for the assessment and understanding of the transition of the former system of academic publishing. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Effect of hydrocarbon type on reactivity of exhaust gases. Paper 650524
- Author
-
Fleming, R
- Published
- 2020
29. Understanding Privacy and Data Protection Issues in Learning Analytics Using a Systematic Review
- Author
-
Liu, Qinyi and Khalil, Mohammad
- Abstract
The field of learning analytics has advanced from infancy stages into a more practical domain, where tangible solutions are being implemented. Nevertheless, the field has encountered numerous privacy and data protection issues that have garnered significant and growing attention. In this systematic review, four databases were searched concerning privacy and data protection issues of learning analytics. A final corpus of 47 papers published in top educational technology journals was selected after running an eligibility check. An analysis of the final corpus was carried out to answer the following three research questions: (1) What are the privacy and data protection issues in learning analytics? (2) What are the similarities and differences between the views of stakeholders from different backgrounds on privacy and data protection issues in learning analytics? (3) How have previous approaches attempted to address privacy and data protection issues? The results of the systematic review show that there are eight distinct, intertwined privacy and data protection issues that cut across the learning analytics cycle. There are both cross-regional similarities and three sets of differences in stakeholder perceptions towards privacy and data protection in learning analytics. With regard to previous attempts to approach privacy and data protection issues in learning analytics, there is a notable dearth of applied evidence, which impedes the assessment of their effectiveness. The findings of our paper suggest that privacy and data protection issues should not be relaxed at any point in the implementation of learning analytics, as these issues persist throughout the learning analytics development cycle. One key implication of this review suggests that solutions to privacy and data protection issues in learning analytics should be more evidence-based, thereby increasing the trustworthiness of learning analytics and its usefulness.
- Published
- 2023
- Full Text
- View/download PDF
30. Implement Adaptation in a Case Based ITS
- Author
-
Graf von Malotky, Nikolaj Troels and Martens, Alke
- Abstract
ITSs are required to be adaptive to the student using AI. The classical ITS architecture defines three components to split the data and to keep it flexible and thus adaptive. However, there is a lack of abstract descriptions of how to put adaptive behavior into practice. This paper defines how you can structure your data for case-based systems in a way that makes adaptivity easier to achieve, while maintaining the classical splitting of the system and reducing the data footprint. Building a case-based system from a collection of exchangeable steps is also possible with this approach. Two variants of adaptivity based on the data structure are explored, and both can be used in conjunction. [For the full proceedings, see ED621108.]
- Published
- 2021
31. Persona Journey Mapping to Drive Equity during an LMS Transition
- Author
-
Kam Moi Lee, Megan Mcfarland, and Kari Goin Kono
- Abstract
One way to achieve equitable design is to directly include users who will be impacted the most in the planning and facilitation of a project. Common financial, logistical, and/or temporal constraints reveal that direct inclusion of the people most impacted is not always possible. If this barrier arises, one promising alternative is the creation and use of personas. Using a vignette and case study qualitative methodological approach, three researchers at a large urban university in the Pacific Northwest detail personas and journey mapping as an equitable design practice during an LMS migration on a rapid development timeline. This paper details how personas were created using empirical data, how journey mapping impacted various teams, and how centering equity better prepared staff to support instructors throughout the migration while addressing the student learning impact.
- Published
- 2023
32. A Review Paper on Big Data and Data Mining Concepts and Techniques
- Author
-
Prasdika Prasdika and Bambang Sugiantoro
- Subjects
data, big data, data mining, Electronic computers. Computer science, QA75.5-76.95, Economic growth, development, planning, HD72-88
- Abstract
In the digital era, the growth of data in databases is very rapid; everything related to technology contributes substantially to data growth, including social media, financial technology, and scientific data. Therefore, topics such as big data and data mining are often discussed. Data mining is a method of extracting information from big data to produce an information pattern or reveal a data anomaly.
- Published
- 2018
- Full Text
- View/download PDF
33. 2022 BenchCouncil International Symposium on benchmarking, measuring and optimizing (Bench 2022) call for papers.
- Author
-
Chunjie Luo and Wanling Gao
- Subjects
BENCHMARKING (Management), DATA management, HARDWARE, COMPUTER software, DATA
- Published
- 2022
- Full Text
- View/download PDF
34. Using Markup Languages for Accessible Scientific, Technical, and Scholarly Document Creation
- Author
-
White, Jason J. G.
- Abstract
In using software to write a scientific, technical, or other scholarly document, authors have essentially two options. They can either write it in a 'what you see is what you get' (WYSIWYG) editor such as a word processor, or write it in a text editor using a markup language such as HTML, LaTeX, Markdown, or AsciiDoc. This paper gives an overview of the latter approach, focusing on both the non-visual accessibility of the writing process, and that of the documents produced. Currently popular markup languages and established tools associated with them are introduced. Support for mathematical notation is considered. In addition, domain-specific programming languages for constructing various types of diagrams can be well integrated into the document production process. These languages offer interesting potential to facilitate the non-visual creation of graphical content, while raising insufficiently explored research questions. The flexibility with which documents written in current markup languages can be converted to different output formats is emphasized. These formats include HTML, EPUB, and PDF, as well as file formats used by contemporary word processors. Such conversion facilities can serve as means of enhancing the accessibility of a document both for the author (during the editing and proofreading process) and for those among the document's recipients who use assistive technologies, such as screen readers and screen magnifiers. Current developments associated with markup languages and the accessibility of scientific or technical documents are described. The paper concludes with general commentary, together with a summary of opportunities for further research and software development.
- Published
- 2022
35. Using Community-Based Problems to Increase Motivation in a Data Science Virtual Internship
- Author
-
Johnson, Jillian C. and Olney, Andrew M.
- Abstract
Typical data science instruction uses generic datasets like survival rates on the Titanic, which may not be motivating for students. Will introducing real-life data science problems fill this motivational deficit? To analyze this question, we contrasted learning with generic datasets and artificial problems (Phase 1) with a community-sourced dataset and authentic problems (Phase 2) in the context of an 8-week virtual internship. Retrospective survey questions indicated interns experienced increased motivation in Phase 2. Additionally, analysis of intern discourse using Linguistic Inquiry and Word Count (LIWC) indicated a significant difference in linguistic measures between the two phases. Phase 1 had significantly greater measures of pronouns with a small-medium effect size, 2nd person words with a medium-large effect size, positive emotion with a medium effect size, interrogations with a medium-large effect size, question marks with a medium-large effect size, risk with a medium-large effect size, and causal words with a medium effect size. These results in conjunction with a retrospective survey suggest that Phase 1 had more questions asked, more causal relationships defined, and included linguistic features of success and failure. Results from Phase 2 indicated that community-sourced data and problems may increase motivation for learning data science. [For the full proceedings, see ED623995.]
- Published
- 2022
36. Summary of Papers on Predicting Aggregated-Scale Coastal Evolution
- Published
- 2003
37. Synergistic competencies of business graduates for the digital age: directions for higher education
- Author
-
Butcher, Luke, Sung, Billy, and Cheah, Isaac
- Published
- 2024
- Full Text
- View/download PDF
38. Efficiency assessment of Indian paper mills through fuzzy DEA.
- Author
-
Singh, Natthan and Pant, Millie
- Subjects
PAPER mills, BIOCHEMICAL oxygen demand, DATA envelopment analysis, CHEMICAL oxygen demand, WATER consumption, RAW materials
- Abstract
The present study proposes a Fuzzy Data Envelopment Analysis (FDEA) approach for analyzing the performance of 8 selected paper mills of India. The proposed approach named FDEA considers the use of fuzzy weights in the objective function and makes use of alpha cut to decide the fuzzy interval. The efficiency of paper mills is evaluated based on 3 input parameters (raw material, energy consumption, and water consumption) and 4 output parameters (paper production, Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD) and Greenhouse Gas (GHG) emissions). Further, this paper also analyzes the effect of negative outputs, like BOD, COD, and GHG on the efficiency of paper mills. The study indicates that FDEA can be used efficiently for evaluating the performance of a particular sector under similar conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Paper-Based Computing
- Author
-
Hannon, Charles
- Abstract
Faculty have a great deal of control over their lectures, lecture notes, and slides. This article discusses a coming wave of recording devices and other classroom technologies--this time wielded by the students--which will test this control and force serious conversations about how to best help students learn, what it means to own an idea, and what is meant when talking about developing a community of learners on campus. The harbinger of this wave is the Livescribe Pulse smart pen, created by an MIT engineer and initially aimed directly at the college student market. The smart pen points a tiny camera at specially marked paper, captures what is written, and converts the writing to PDF files and plain text in what is being called paper-based computing. The pen comes with microphones that capture audio and software that synchronizes it with the written notes. A student can replay an entire lecture at a later time, either by interacting with the written notes or through a computer. The pen's software also makes it easy to share recorded class sessions with other students at the Livescribe website or through Facebook. (Contains 4 endnotes.)
- Published
- 2008
40. Enhancing Teaching and Learning for Pupils with Dyslexia: A Comprehensive Review of Technological and Non-Technological Interventions
- Author
-
Salman Jav, Manoranjitham Muniandy, Chen Kang Lee, and Husniza Husni
- Abstract
Dyslexia is the most prevalent learning disorder in the world, causing difficulties with reading, writing, and spelling. Pupils with dyslexia also show difficulties with their cognitive skills. Various interventions have already been introduced for their treatment, but dyslexia remains a widely studied disorder, and the interventions available for these pupils' learning open up research into the current state of the art of learning interventions for pupils with dyslexia. The results of this Systematic Literature Review show the trending interventions, the sensory approaches utilized, and the difficulties pupils with dyslexia face in learning. Papers published over a period of 5 years were reviewed and their data were collected using a rigorous systematic process. Based on the gathered data, several analyses were conducted. The search shows that technology-based interventions, specifically apps and games, are currently trending, while haptics technology is still at a very early stage. The most predominant sensory approaches were visual and auditory, followed by kinesthetic and tactile, used in both non-technological and technological interventions. There are still many open issues and research opportunities in the field of learning interventions for pupils with dyslexia, as most researchers have utilized visual and auditory approaches for the feedback and guidance of these pupils, while kinesthetic and tactile approaches remain underused.
- Published
- 2024
- Full Text
- View/download PDF
41. Discussion paper: implications for the further development of the successfully in emergency medicine implemented AUD2IT-algorithm.
- Author
-
Przestrzelski, Christopher, Jakob, Antonina, Jakob, Clemens, and Hoffmann, Felix R.
- Subjects
DOCUMENTATION, CURRICULUM, HUMAN services programs, EMERGENCY medicine, EXPERIENCE, MEDICAL records, ELECTRONIC publications, ALGORITHMS, PATIENTS' attitudes
- Abstract
The AUD2IT-algorithm is a tool for structuring the data collected during emergency treatment. Its goal is, on the one hand, to structure the documentation of these data and, on the other hand, to provide a standardised data structure for the report given when an emergency patient is handed over. The AUD2IT-algorithm was developed to give residents a documentation aid that helps them structure medical reports without getting lost in unimportant details or forgetting important information. The sequence of anamnesis, clinical examination, consideration of differential diagnoses, technical diagnostics, interpretation and therapy is an academic classification rather than a description of the real workflow; in a real setting, most of these steps take place simultaneously. The application of the AUD2IT-algorithm should therefore also follow the real processes. A major advantage of the AUD2IT-algorithm is that it can serve as a structure for the entire treatment process and can also be used as a handover protocol within that process, ensuring that the current state of knowledge is available at each team time-out. The PR-E-(AUD2IT)-algorithm makes it possible to document a treatment process that, in principle, need not be limited to the field of emergency medicine. It could also be used and further developed in outpatient treatment, for example for the preparation and allocation of the resources needed at a general practice. The algorithm is a standardised tool that can be used by healthcare professionals at any level of training and gives users a sense of security in their daily work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Organizing visions for data-centric management: how Norwegian policy documents construe the use of data in health organizations
- Author
-
Solberg, Mads, Kirchhoff, Ralf, Oksavik, Jannike Dyb, and Wessel, Lauri
- Published
- 2024
- Full Text
- View/download PDF
43. Creating the baseline: data relations and frictions of UK City of Culture evaluation
- Author
-
Ashton, Daniel, Gowland-Pryde, Ronda, Roth, Silke, and Sturt, Fraser
- Published
- 2024
- Full Text
- View/download PDF
44. Technology enablement of the skills ecosystem
- Author
-
Boyer, Naomi Rose and Griffith, Margo Leanne
- Published
- 2023
- Full Text
- View/download PDF
45. A Basic Paper Handling Task Experiment Using Tri-axial Tactile Data.
- Author
-
Sugiman, Kenji, Ohka, Masahiro, and Jusoh, Mohammad Azzeim bin Mat
- Subjects
TACTILE sensors, DETECTORS, DATA, STATISTICS, STIFFNESS (Engineering)
- Abstract
In advanced missions, handling thin and soft membranes is difficult for robots. This task is composed of several sub-tasks, including turning and removing a sheet from a pile of papers on a table, folding paper, and sticking two pieces of paper together. In this study, a hand robot turns and removes a sheet from a pile of papers, and ensures that only one sheet is grasped. To perform this task, two robotic fingers compress the piled papers in the normal direction of the table with a specific force as they close. We therefore incorporated position-based force control into our system; for the force controller, we adopted stiffness control that calculates position modification proportionally to the difference between the target force and an external force measured by the tactile sensor. With this controller, the robot was able to remove a sheet from a pile of papers and ensure that only one sheet was removed. Using this discrimination, the maximum-minimum shearing forces are better classified than with normal force distribution deviation. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
46. Just a Few Expert Constraints Can Help: Humanizing Data-Driven Subgoal Detection for Novice Programming
- Author
-
Marwan, Samiha, Shi, Yang, Menezes, Ian, Chi, Min, Barnes, Tiffany, and Price, Thomas W.
- Abstract
Feedback on how students progress through completing subgoals can improve students' learning and motivation in programming. Detecting subgoal completion is a challenging task, and most learning environments do so either with "expert-authored" models or with "data-driven" models. Both models have advantages that are complementary -- expert models encode domain knowledge and achieve reliable detection but require "extensive authoring efforts" and often cannot capture all students' possible solution strategies, while data-driven models can be easily scaled but may be less accurate and interpretable. In this paper, we take a step towards achieving the best of both worlds -- utilizing a data-driven model that can intelligently detect subgoals in students' correct solutions, while benefiting from human expertise in editing these data-driven subgoal rules to provide more accurate feedback to students. We compared our hybrid "humanized" subgoal detectors, built from data-driven subgoals modified with expert input, against an existing data-driven approach and baseline supervised learning models. Our results showed that the hybrid model outperformed all other models in terms of overall accuracy and F1-score. Our work advances the challenging task of automated subgoal detection during programming, while laying the groundwork for future hybrid expert-authored/data-driven systems. [For the full proceedings, see ED615472.]
- Published
- 2021
47. The Topologies of Data Practices: A Methodological Introduction
- Author
-
Decuypere, Mathias
- Abstract
This paper offers a methodological framework to research data practices in education critically. Data practices are understood in the generic sense of the word here, i.e., as the actions, performances, and the resulting consequences, of introducing data-producing technologies in everyday educational situations. The paper first distinguishes between data infrastructures, datafication and data points as three distinct, yet interrelated, phenomena. In order to investigate their concrete doings and specificities, the paper proposes a topological methodology that allows disentangling the relational nature and interwovenness of data practices. Based on this methodology, the paper proceeds with outlining a methodical toolbox that can be employed in studying data practices. Starting from nascent work on digital education platforms as a worked example, the toolbox allows researchers to investigate data practices as consisting of four unique topological dimensions: the Interface of a data practice, its actual Usage, its concrete Design, and its Ecological embeddedness - IUDE.
- Published
- 2021
48. Missing Data: An Update on the State of the Art
- Author
-
Enders, Craig K.
- Abstract
The year 2022 is the 20th anniversary of Joseph Schafer and John Graham's paper titled "Missing data: Our view of the state of the art," currently the most highly cited paper in the history of "Psychological Methods." Much has changed since 2002, as missing data methodologies have continually evolved and improved; the range of applications that are possible with modern missing data techniques has increased dramatically, and software options are light years ahead of where they were. This article provides an update on the state of the art that catalogs important innovations from the past two decades of missing data research. The paper addresses topics described in the original paper, including developments related to missing data theory, full information maximum likelihood, Bayesian estimation, multiple imputation, and models for missing not at random processes. The paper also describes newer factored regression specifications and missing data handling for multilevel models, both of which have been a focus of recent research. The paper concludes with a summary of the current software landscape and a discussion of several practical issues. [This paper will be published in "Psychological Methods."]
- Published
- 2023
- Full Text
- View/download PDF
49. Mapping Representations in Qualitative Case Studies: Can We Adapt Boisot's I-Space Model?
- Author
-
Spinuzzi, Clay
- Abstract
Purpose: This paper aims to consider ways to visually model data generated by qualitative case studies, pointing out a need for visualizations that depict both synchronic relations across representations and how those relations change diachronically. To develop an appropriate modeling approach, the paper critically examines Max Boisot's I-Space model, a conceptual model for understanding the interplay among knowledge assets used by a population. I-Space maps information in three dimensions (abstraction, codification and diffusion). It is not directly adoptable for case study methodology due to three fundamental disjunctures: in theory, methodology and unit of analysis. However, it can be adapted for qualitative research by substituting analogues for abstraction, codification and diffusion. Design/methodology/approach: Using an example from early-stage technology entrepreneurship, this paper first reviews network, flow and matrix models used to systematically visualize case study data. It then presents Boisot's I-Space model and critiques it from the perspective of qualitative workplace studies. Finally, it adapts the model using measures that have been used in qualitative case studies. Findings: This paper notes three limitations of the I-Space model when applied to empirical cases of workplace learning. Its theory of information does not account well for how people use representations synchronically for learning. It is a conceptual framework, and the tentative attempts to use it for mapping representations have been used in workshops, not for systematically collected data. It does not adequately bound a case for analysis. Thus, it can be applied analogically but not directly for mapping representations in qualitative case studies. Practical implications: This paper identifies a possible way to develop I-Space for strategically mapping representations in qualitative case studies, using measures analogous to the I-Space axes to reflect observable behavior. Originality/value: In providing a methodological critique for one model of knowledge management, this paper also develops criteria for appropriate modeling of meaningful artifacts in the context of qualitative studies of workplaces.
- Published
- 2023
- Full Text
- View/download PDF
50. Is Household Income a Reliable Measure When Assessing Educational Outcomes? A Jigsaw of Two Datasets (Next Steps and National Pupil Database) for Understanding Indicators of Disadvantage
- Author
-
Siddiqui, Nadia and Gorard, Stephen
- Abstract
Robust indicators are important for identifying disadvantaged pupils in education, and for ensuring that they are rightly receiving relevant state-funded assistance. This paper compares the quality and completeness of data from England on student eligibility for free school meals (FSM) based on an administrative census, with more all-encompassing household income measures, from a smaller sample of young people. The first measure comes from the National Pupil Database (NPD), and the second from Next Steps (NS). The two datasets are linked at the individual student level. In this restricted group, FSM data is more complete (97%) than household income (47%). The bias created by missing data on income in NS calls into question its more general usefulness for analysts. FSM cannot be read neatly from income, such as referring to an income below a certain level, and vice versa. Many reportedly low-income children are not listed as FSM-eligible. However, the two values are linked, while each also provides unique information. Both measures predict attainment at school, to some extent. The paper concludes that FSM is the more practical measure at present, but also considers how access to limited income data could be made more widespread while maintaining individual data rights.
- Published
- 2023
- Full Text
- View/download PDF