695 results
Search Results
2. Factors associating with or predicting more cited or higher quality journal articles: An Annual Review of Information Science and Technology (ARIST) paper.
- Author
- Kousha, Kayvan and Thelwall, Mike
- Subjects
- ABSTRACTING; PUBLISHING; READABILITY (Literary style); SERIAL publications; METADATA; BIBLIOGRAPHY; CONFERENCES & conventions; REGRESSION analysis; MACHINE learning; CITATION analysis; INFORMATION science; BIBLIOGRAPHICAL citations; INTERPROFESSIONAL relations; PERIODICAL articles; IMPACT factor (Citation analysis); INFORMATION technology; ABSTRACTING & indexing services; MEDICAL research
- Abstract
Identifying factors that associate with more cited or higher quality research may be useful to improve science or to support research evaluation. This article reviews evidence for the existence of such factors in article text and metadata. It also reviews studies attempting to estimate article quality or predict long-term citation counts using statistical regression or machine learning for journal articles or conference papers. Although the primary focus is on document-level evidence, the related task of estimating the average quality scores of entire departments from bibliometric information is also considered. The review lists a huge range of factors that associate with higher quality or more cited research in some contexts (fields, years, journals), but the strength and direction of association often depend on the set of papers examined, with little systematic pattern and rarely any cause-and-effect evidence. The strongest patterns found include the near universal usefulness of journal citation rates, author numbers, reference properties, and international collaboration in predicting (or associating with) higher citation counts, and the greater usefulness of citation-related information for predicting article quality in the medical, health, and physical sciences than in engineering, social sciences, arts, and humanities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
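The regression studies surveyed in the abstract above can be illustrated with a toy model. This is a sketch only: the data are synthetic, and the feature names (journal citation rate, author count, reference count, international collaboration) are borrowed from factors the review mentions, not taken from any real dataset; the coefficients and the linear rule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Hypothetical article-level predictors, named after factors the review lists.
journal_citation_rate = rng.gamma(2.0, 2.0, n)
author_count = rng.poisson(3, n) + 1
reference_count = rng.poisson(40, n)
international = rng.integers(0, 2, n)          # 1 = international collaboration

# Synthetic log-citation counts generated from an invented linear rule.
log_citations = (0.5 * journal_citation_rate + 0.1 * author_count
                 + 0.01 * reference_count + 0.3 * international
                 + rng.normal(0.0, 0.5, n))

# Ordinary least squares, the simplest of the regression approaches reviewed.
X = np.column_stack([np.ones(n), journal_citation_rate, author_count,
                     reference_count, international])
beta, *_ = np.linalg.lstsq(X, log_citations, rcond=None)
print(np.round(beta[1:], 2))   # recovered coefficients for the four factors
```

With enough synthetic articles the fit recovers the coefficients of the generating rule; real prediction studies face the field, year, and journal effects the review describes.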
3. Analysis of handmade paper by Raman spectroscopy combined with machine learning.
- Author
- Yan, Chunsheng, Cheng, Zhongyi, Luo, Si, Huang, Chen, Han, Songtao, Han, Xiuli, Du, Yuandong, and Ying, Chaonan
- Subjects
- MACHINE learning; RAMAN spectroscopy; SUPPORT vector machines; K-nearest neighbor classification; PRINCIPAL components analysis; RANDOM forest algorithms; SPECTRAL imaging; MULTISPECTRAL imaging
- Abstract
Handmade paper is a major carrier and restoration material of traditional Chinese ancient books, calligraphy, and paintings. In this study, we carried out a Raman spectroscopy analysis of 18 types of handmade paper samples. According to the wavenumbers and Raman vibration assignments, the main components of the handmade paper were cellulose and lignin. We divided its Raman spectrum into eight subbands. Five machine learning models were employed: principal component analysis (PCA), partial least squares (PLS), support vector machine (SVM), k-nearest neighbors (KNN), and random forest (RF). The Raman spectral data were normalized, and the fluorescence envelope was subtracted using the airPLS algorithm, yielding four types of data: raw, normalized, defluorescence, and fluorescence. An RF variable importance analysis showed that data normalization eliminated the intensity differences of fluorescence signals caused by lignin, which carry important information about raw materials and papermaking technology, and defluorescence removed even more of this information. The data processing also reduced the importance of the average variables in almost all spectral bands. Nevertheless, the data processing is worthwhile because it significantly improves the accuracy of machine learning, and the information loss does not affect the prediction. Using PCA, PLS, and SVM combined with linear regression (LR), together with KNN and RF, the classification and prediction of handmade paper samples were realized. For almost all processed data, including the fluorescence data, PCA-LR had the highest classification and prediction accuracy (R2 = 1) in almost all spectral bands. PLS-LR and SVM-LR had the second-highest accuracies (R2 = 0.4–0.9), whereas KNN and RF had the lowest accuracies (R2 = 0.1–0.4) for full-band spectral data.
Our results suggest that the abundant information contained in Raman spectroscopy combined with powerful machine learning models could inspire further studies on handmade paper and related cultural relics. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
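The processing chain described in the abstract above (normalization, subtraction of the fluorescence envelope, PCA, then linear regression, i.e. "PCA-LR") can be sketched on simulated spectra. This is an illustrative stand-in, not the paper's code: the airPLS baseline correction is replaced here by a simple cubic polynomial fit, and the spectra, class labels, and peak positions are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 30 samples x 200 wavenumbers, two classes with
# different peak positions plus a broad fluorescence-like baseline.
x = np.linspace(0, 1, 200)
labels = np.repeat([0.0, 1.0], 15)                     # numeric class label
peaks = np.where(labels[:, None] == 0, 0.3, 0.7)
spectra = (np.exp(-((x - peaks) / 0.03) ** 2)          # Raman-like peak
           + 2.0 * np.exp(-x)                          # fluorescence envelope
           + 0.05 * rng.standard_normal((30, 200)))

# 1) Baseline subtraction (stand-in for airPLS): fit and remove a cubic trend.
coeffs = np.polynomial.polynomial.polyfit(x, spectra.T, 3)
baseline = np.polynomial.polynomial.polyval(x, coeffs)
defluo = spectra - baseline

# 2) Normalization: zero mean, unit norm per spectrum.
norm = defluo - defluo.mean(axis=1, keepdims=True)
norm /= np.linalg.norm(norm, axis=1, keepdims=True)

# 3) PCA via SVD: project onto the first 5 principal components.
centered = norm - norm.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:5].T

# 4) Linear regression of the class label on PCA scores ("PCA-LR").
design = np.column_stack([np.ones(len(scores)), scores])
beta, *_ = np.linalg.lstsq(design, labels, rcond=None)
pred = design @ beta
r2 = 1 - np.sum((labels - pred) ** 2) / np.sum((labels - labels.mean()) ** 2)
print(round(r2, 2))
```

On these clean synthetic spectra the retained components separate the two classes almost perfectly; real handmade-paper spectra are far noisier, which is where the authors' comparison of preprocessing variants matters.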
4. Editorial: Best papers of the 14th International Conference on Software and System Processes (ICSSP 2020) and 15th International Conference on Global Software Engineering (ICGSE 2020).
- Author
- Steinmacher, Igor, Clarke, Paul, Tuzun, Eray, and Britto, Ricardo
- Subjects
- SOFTWARE engineering; SOFTWARE engineers; SYSTEMS software; CONFERENCES & conventions; SOFTWARE maintenance; COMPUTER software development
- Abstract
Today's software industry is global, virtual, and depends more than ever on strong and reliable processes. Stakeholders and infrastructure are distributed across the globe, posing challenges that go beyond those of co-located teams and servers. Software Engineering continues to be a complex undertaking, with projects challenged to meet expectations, especially regarding costs. We know that Software Engineering is an ever-changing discipline, with the result that firms and their employees must regularly embrace new methods, tools, technologies, and processes. In 2020, the International Conference on Global Software Engineering (ICGSE) and the International Conference on Systems and Software Processes (ICSSP) joined forces, aiming to create a holistic understanding of the software landscape from the perspective of human and infrastructure distribution as well as the processes that support software development. Unfortunately, these challenges became even more personal to many more people in 2020 due to the disruption introduced by the COVID-19 pandemic, which forced both conferences to be held virtually. As an outcome of the joint event, we selected a set of the best papers from the two conferences, which were invited to submit extended versions to this Special Issue in the Journal of Software: Maintenance and Evolution. Dedicated committees were established to identify the best papers. Eight papers were invited and, ultimately, seven of them made it into this Special Issue. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Spotlights are papers selected by editors published in peer‐reviewed journals that may be more regionally specific or appearing in languages other than English.
- Subjects
- ELECTRONIC journals; MACHINE learning; ENGLISH language; HUMAN fingerprints
- Abstract
This document highlights two studies published in the Asian Journal of Ecotoxicology. The first study focuses on the development of machine learning models to screen chemicals with hepatotoxicity, or liver toxicity. The models were trained using a dataset of 4014 chemicals and achieved good performance in predicting hepatotoxicity. The second study explores the use of machine learning methods to screen chemicals that induce autonomic dysfunction, a condition affecting the autonomic nervous system. The study developed a model using a dataset of 466 positive and 427 negative samples and identified structural alerts associated with autonomic dysfunction. Both studies provide valuable tools for screening and evaluating toxic chemicals. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
6. Introduction to the virtual collection of papers on Artificial neural networks: applications in X‐ray photon science and crystallography.
- Author
- Ekeberg, Tomas
- Subjects
- ARTIFICIAL neural networks; DEEP learning; CRYSTALLOGRAPHY; ARTIFICIAL intelligence; MACHINE learning; PHOTONS
- Abstract
Artificial intelligence is more present than ever, both in our society in general and in science. At the center of this development has been the concept of deep learning, the use of artificial neural networks that are many layers deep and can often reproduce human‐like behavior much better than other machine‐learning techniques. The articles in this collection are some recent examples of its application for X‐ray photon science and crystallography that have been published in Journal of Applied Crystallography. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Celebrating our success over the holiday period: 40 papers to read over the holiday.
- Author
- Eltaybani, Sameh
- Subjects
- RESEARCH; INTENSIVE care nursing; NURSING; CLINICAL trials; SERIAL publications; ARTIFICIAL intelligence; MACHINE learning; NURSING practice; MEDICAL research; SUCCESS
- Abstract
An editorial is presented on the journal's success, marked by an impact factor of 3.0, and offers readers an extensive collection of publications in critical care nursing research; it discusses the capabilities and limitations of large language models (LLMs) in research, presents various research articles with diverse topics and methodologies, emphasizes the use of LLMs in healthcare research, and highlights the importance of domain-specific expertise when utilizing LLMs.
- Published
- 2023
- Full Text
- View/download PDF
8. Large language models and artificial intelligence: the coming storm for academia.
- Author
- Murali, Mayur and Wiles, Matthew D.
- Subjects
- GENERATIVE artificial intelligence; NATURAL language processing; LANGUAGE models; MACHINE learning; ARTIFICIAL intelligence
- Abstract
The article explores the use of large language models (LLMs) and artificial intelligence (AI) in academia and research. LLMs are AI programs that can generate content in response to human language questions and have various applications in research. While LLMs offer benefits such as improved productivity, they also pose risks, including the potential for fraudulent research papers. Journals have implemented policies to address the use of AI in manuscripts, and efforts are being made to detect papers with unacknowledged LLM use. However, detecting AI-generated text can be challenging for human reviewers, and current software programs designed to detect AI-generated text have variable performance. The academic publishing industry is working on developing its own AI detectors, but their availability is limited. Ensuring the appropriate use of AI in research is crucial for maintaining the integrity of healthcare research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
9. Synthetic Aperture Radar for Geosciences.
- Author
- Meng, Lingsheng, Yan, Chi, Lv, Suna, Sun, Haiyang, Xue, Sihan, Li, Quankun, Zhou, Lingfeng, Edwing, Deanna, Edwing, Kelsea, Geng, Xupu, Wang, Yiren, and Yan, Xiao-Hai
- Subjects
- SYNTHETIC aperture radar; MACHINE learning; SURFACE of the earth; ENVIRONMENTAL sciences; DEEP learning
- Abstract
Synthetic Aperture Radar (SAR) has emerged as a pivotal technology in geosciences, offering unparalleled insights into Earth's surface. Its ability to provide high-resolution, all-weather, and day-night imaging has revolutionized our understanding of various geophysical processes. Recent advancements in SAR technology, such as new satellite missions, enhanced signal processing techniques, and the integration of machine learning algorithms, have significantly broadened the scope and depth of SAR applications in geosciences. It is therefore essential to summarize SAR's comprehensive applications for geosciences, with emphasis on recent advancements in SAR technologies and applications. Moreover, existing SAR-related review papers have primarily focused on SAR technology or on SAR imaging and data processing techniques; a review that integrates SAR technology with geophysical features is needed to highlight the significance of SAR in addressing challenges in geosciences and to explore SAR's potential in solving complex geoscience problems. Spurred by these requirements, this review comprehensively examines SAR applications for geosciences, broadly covering air-sea dynamics, oceanography, geography, disaster and hazard monitoring, climate change, and geosciences data fusion. For each applied field, the scientific advancements enabled by SAR are demonstrated by combining SAR techniques with characteristics of geophysical phenomena and processes. Further outlooks are also explored, such as integrating SAR data with other geophysical data and conducting interdisciplinary research to offer comprehensive insights into geosciences. With the support of deep learning, this synergy will enhance the capability to model, simulate, and forecast geophysical phenomena with greater accuracy and reliability.
Plain Language Summary: Synthetic aperture radar (SAR) uses microwaves to remotely see the Earth's surface under all weather conditions, day and night. SAR has been providing high-resolution images for many decades, and they have been applied to many fields in geosciences. Several SAR sensors have been launched in recent years, significantly increasing the SAR data volume and leading to great developments in SAR technology, thereby improving our understanding of geophysical phenomena and processes. This work comprehensively overviews the application of SAR in geosciences, including oceanography, geography, geodesy, climatology, seismology, meteorology, and environmental science. Moreover, this review paper highlights the significance of SAR in various aspects of geosciences, summarizes recent advancements in SAR technology, and demonstrates unique insights and important contributions of SAR in understanding and solving geophysical questions. Future directions and outlooks include integrating SAR with other geophysical data and interdisciplinary applications for complex questions. This review serves as an up-to-date guide to the cutting-edge uses of SAR technology in comprehensive geophysical studies. It is aimed at researchers and practitioners in geosciences, as well as policymakers and stakeholders interested in leveraging SAR for geosciences. Key Points:
- Synthetic Aperture Radar (SAR) for geosciences is comprehensively reviewed, broadly including oceanography, geography, hazards, and climate change
- Scientific advances contributed by SAR techniques for each topic are overviewed in depth, with recent developments and frontiers highlighted
- Data, techniques, and scientific insights of SAR are summarized and prospected, highlighting the role of machine learning [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Comprehensive review of hydrothermal liquefaction data for use in machine‐learning models.
- Author
- Haarlemmer, Geert, Matricon, Lucie, and Roubaud, Anne
- Subjects
- BIOMASS liquefaction; SCIENTIFIC literature; HYDROTHERMAL deposits; SEWAGE sludge; DATABASES; ORGANIC wastes; LIGNOCELLULOSE
- Abstract
Hydrothermal liquefaction is a new, sustainable pathway to generate biogenic liquids from organic resources. The technology is compatible with a wide variety of feedstocks such as lignocellulosic resources, organic waste, algae, and sewage sludge. The chemistry is complex, and predictions of yields are notoriously difficult. Understanding and modeling of hydrothermal liquefaction are currently based mostly on simplified biochemical analysis and product yield data. This paper presents a large dataset of 2439 batch-reactor experiments extracted from 171 publications in the scientific literature. The data include biochemical composition data such as fiber content and composition, proteins, lipids, carbohydrates, and ash. The experimental conditions are recorded for each experiment, as well as the reported yields. The objective of this paper is to make a large database available to the scientific community. This database is analyzed with machine-learning tools. The results show that there is no consensus on the analysis techniques, experimental procedures, and reported data; there are many inconsistencies across the literature that the scientific community should address. Machine-learning tools with a large dataset allow the generation of reliable yield prediction tools with a broad field of application. Given the accuracy of the data, the overall precision of prediction in an extrapolation to new results can be expected to be around 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
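As a toy illustration of the yield-modeling idea in the abstract above, the sketch below fits an ordinary least-squares model to synthetic hydrothermal liquefaction data. The feature names (lipids, proteins, carbohydrates, temperature) follow the biochemical-composition inputs the paper describes, but the data and the generating rule are invented, and OLS stands in for the paper's machine-learning tools.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical features: biochemical composition (wt%) and reaction temperature.
lipids = rng.uniform(0, 40, n)
proteins = rng.uniform(0, 40, n)
carbohydrates = rng.uniform(0, 60, n)
temperature = rng.uniform(250, 370, n)        # degrees C

# Invented "true" yield rule: lipids convert best, plus a mild temperature effect.
oil_yield = (0.9 * lipids + 0.4 * proteins + 0.2 * carbohydrates
             + 0.05 * (temperature - 300) + rng.normal(0, 3, n))

# Ordinary least squares as a stand-in for the paper's ML models.
X = np.column_stack([np.ones(n), lipids, proteins, carbohydrates, temperature])
beta, *_ = np.linalg.lstsq(X, oil_yield, rcond=None)

# In-sample mean absolute error (a real study would evaluate on held-out data).
pred = X @ beta
mae = np.mean(np.abs(pred - oil_yield))
print(round(mae, 1))
```

The residual error here reflects only the injected noise; with real literature data, the inconsistencies the authors report would dominate, which is why they estimate an extrapolation precision of around 10%.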
11. Data‐driven plasma science: A new perspective on modeling, diagnostics, and applications through machine learning.
- Author
- He, Mengbing, Bai, Ruihang, Tan, Shihao, Liu, Dawei, and Zhang, Yuantao
- Subjects
- ATMOSPHERIC pressure plasmas; PLASMA diagnostics; PLASMA dynamics; PLASMA confinement; MACHINE learning
- Abstract
This paper comprehensively explores the integration of machine learning (ML) with atmospheric pressure plasma, highlighting its transformative impact in areas, such as modeling, diagnostics, and applications. The paper delves into the application of neural networks and deep learning models in simulating complex plasma dynamics, enhancing prediction accuracy, and reducing computational demands. We also examine the application of ML in plasma diagnostics, including real‐time data analysis and process optimization, demonstrating advancements in monitoring and controlling plasma systems. The article discusses the challenges encountered in this integration process, such as data quality, computational resources, and model interpretability. Finally, we outline future development directions, emphasizing the potential of ML in revolutionizing plasma research, improving operational efficiency, and opening new avenues in plasma technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Statistical inference for generative adversarial networks and other minimax problems.
- Author
- Meitz, Mika
- Subjects
- GENERATIVE adversarial networks; MACHINE learning; INFERENTIAL statistics; PROBABILISTIC generative models
- Abstract
This paper studies generative adversarial networks (GANs) from the perspective of statistical inference. A GAN is a popular machine learning method in which the parameters of two neural networks, a generator and a discriminator, are estimated to solve a particular minimax problem. This minimax problem typically has a multitude of solutions, and the focus of this paper is the statistical properties of these solutions. We address two key statistical issues for the generator and discriminator network parameters: consistent estimation and confidence sets. We first show that the set of solutions to the sample GAN problem is a (Hausdorff) consistent estimator of the set of solutions to the corresponding population GAN problem. We then devise a computationally intensive procedure to form confidence sets and show that these sets contain the population GAN solutions with the desired coverage probability. Small numerical experiments and a Monte Carlo study illustrate our results and verify our theoretical findings. We also show that our results apply in general minimax problems that may be nonconvex, nonconcave, and have multiple solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
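The consistency result in the abstract above is set-valued: the set of sample GAN solutions converges to the set of population solutions in Hausdorff distance. The snippet below only illustrates the Hausdorff distance itself on stylized finite point sets; it is not the paper's estimator, and the 1/√n "estimation error" is an invented stand-in.

```python
import numpy as np

def hausdorff(a, b):
    """Hausdorff distance between two finite point sets (rows of a and b)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Population solution set: two distinct minimax solutions.
population = np.array([[1.0, 0.0], [-1.0, 0.0]])

# Stylized "sample" solution sets drift toward the population set as n grows.
dists = []
for n in (10, 100, 10000):
    sample = population + 1.0 / np.sqrt(n)     # invented estimation error
    dists.append(hausdorff(sample, population))
print([round(d, 3) for d in dists])            # → [0.447, 0.141, 0.014]
```

The shrinking distance is what "(Hausdorff) consistent" means: every sample solution lies close to some population solution, and vice versa.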
13. Unlocking the potential: A review of artificial intelligence applications in wind energy.
- Author
- Dörterler, Safa, Arslan, Seyfullah, and Özdemir, Durmuş
- Subjects
- ARTIFICIAL neural networks; ARTIFICIAL intelligence; WIND power; ENERGY industries; RENEWABLE energy sources
- Abstract
This paper presents a comprehensive review of the most recent papers and research trends in the fields of wind energy and artificial intelligence. Our study aims to guide future research by identifying the potential application and research areas of artificial intelligence and machine learning techniques in the wind energy sector and the knowledge gaps in this field. Artificial intelligence techniques offer significant benefits in many sub-areas, such as increasing the efficiency of wind energy facilities, estimating energy production, optimizing operation and maintenance, providing security and control, and supporting data analysis and management. Our research focuses on studies indexed in the Web of Science library on wind energy between 2000 and 2023 that use sub-branches of artificial intelligence such as artificial neural networks, other machine learning methods, data mining, fuzzy logic, meta-heuristics, and statistical methods. In this way, current methods and techniques in the literature are examined to produce more efficient, sustainable, and reliable wind energy, and the findings are discussed for future studies. This comprehensive evaluation is designed to be helpful to academics and specialists seeking a current and broad perspective on the uses of artificial intelligence in wind energy and on the research subjects needed in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. The PIM.3 process improvement process—Part of the iNTACS certified process expert training.
- Author
- Messnarz, Richard, Djordjevic, Vesna, Grémen, Viktor, Menezes, Winifred, Alborae, Ahmed, Dreves, Rainer, Norimatsu, So, Wegner, Thomas, and Sechser, Bernhard
- Subjects
- PROCESS capability; MACHINE learning; APPRAISERS; INTERNET security; ENGINEERING
- Abstract
This paper documents the results of the PIM.3 (Process Improvement Management) working group in INTACS (International Assessor Certification Schema), supported by the VDA-QMC (Verband der Deutschen Automobilindustrie/German Automotive Association–Quality Management Center). INTACS promotes Automotive SPICE, an international standard that allows process capability assessment of projects that implement systems integrating mechanics, electronics, and software, optionally including cybersecurity, functional safety, and machine learning. The paper outlines that, for the first time in more than 20 years, INTACS and the VDA-QMC included a process like PIM.3 Process Improvement Management in the scope of the assessor training. Before that, assessments focused on the management, engineering, and support processes of series projects, while improvement management was not trained or assessed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Applying machine learning to primate bioacoustics: Review and perspectives.
- Author
- Cauzinille, Jules, Favre, Benoit, Marxer, Ricard, and Rey, Arnaud
- Subjects
- BIG data; ANIMAL communication; COMPARATIVE linguistics; BIOACOUSTICS; SIGNAL processing; DEEP learning; MACHINE learning
- Abstract
This paper provides a comprehensive review of the use of computational bioacoustics as well as signal and speech processing techniques in the analysis of primate vocal communication. We explore the potential implications of machine learning and deep learning methods, from the use of simple supervised algorithms to more recent self‐supervised models, for processing and analyzing large data sets obtained within the emergence of passive acoustic monitoring approaches. In addition, we discuss the importance of automated primate vocalization analysis in tackling essential questions on animal communication and highlighting the role of comparative linguistics in bioacoustic research. We also examine the challenges associated with data collection and annotation and provide insights into potential solutions. Overall, this review paper runs through a set of common or innovative perspectives and applications of machine learning for primate vocal communication analysis and outlines opportunities for future research in this rapidly developing field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Composite learning control for strict feedback systems with neural network based on selective memory.
- Author
- Hu, Zhiyu, Fei, Yiming, Li, Jiangang, and Li, Yanan
- Subjects
- MACHINE learning; RADIAL basis functions; FEEDBACK control systems; NONLINEAR equations; INFORMATION storage & retrieval systems; ITERATIVE learning control
- Abstract
This paper addresses the high‐precision control problem for nonlinear strict feedback systems with external time‐varying disturbances and proposes a novel composite learning control algorithm. Unlike previous research that only uses tracking errors for neural network updates, this paper prioritizes the accuracy of neural network learning. The article uses a selective memory recursive least squares algorithm to construct system information prediction errors, which are combined with tracking errors to update the neural network weights. A new composite learning control algorithm is developed to design dynamic surface control and neural network disturbance observers, which achieves high‐precision control of nonlinear strict feedback systems under external time‐varying disturbance conditions. Lyapunov's method demonstrates the stability of the closed‐loop system and the boundedness of errors. The simulation results show that the proposed control algorithm can effectively estimate system nonlinearity and suppress the impact of disturbances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
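The composite update in the abstract above builds on recursive least squares. The paper's selective-memory variant and its neural-network disturbance observers are not reproduced here; this sketch shows only the classical exponentially forgetting RLS update, applied to a hypothetical linear system for illustration.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step with exponential forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    err = y - float(phi.T @ theta)              # prediction error
    gain = (P @ phi) / (lam + float(phi.T @ P @ phi))
    theta = theta + gain.ravel() * err          # parameter update
    P = (P - gain @ (phi.T @ P)) / lam          # covariance update
    return theta, P

# Identify a hypothetical linear system y = phi . w_true from noisy samples.
rng = np.random.default_rng(2)
w_true = np.array([2.0, -1.0, 0.5])
theta = np.zeros(3)
P = 1000.0 * np.eye(3)
for _ in range(500):
    phi = rng.standard_normal(3)
    y = float(phi @ w_true) + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(np.round(theta, 2))
```

The forgetting factor lam < 1 discounts old samples so the estimator can track slowly varying parameters; the paper's selective-memory scheme instead modifies how past data are retained for the update.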
17. Scoping review on natural language processing applications in counselling and psychotherapy.
- Author
- Laricheva, Maria, Liu, Yan, Shi, Edward, and Wu, Amery
- Subjects
- NATURAL language processing; COUNSELING; PSYCHOTHERAPY; LINGUISTICS; MACHINE learning
- Abstract
Recent years have witnessed some rapid and tremendous progress in natural language processing (NLP) techniques that are used to analyse text data. This study endeavours to offer an up‐to‐date review of NLP applications by examining their use in counselling and psychotherapy from 1990 to 2021. The purpose of this scoping review is to identify trends, advancements, challenges and limitations of these applications. Among the 41 papers included in this review, 4 primary study purposes were identified: (1) developing automated coding; (2) predicting outcomes; (3) monitoring counselling sessions; and (4) investigating language patterns. Our findings showed a growing trend in the number of papers utilizing advanced machine learning methods, particularly neural networks. Unfortunately, only a third of the articles addressed the issues of bias and generalizability. Our findings provided a timely systematic update, shedding light on concerns related to bias, generalizability and validity in the context of NLP applications in counselling and psychotherapy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics.
- Author
- Wong, Lung-Hsiang, Park, Hyejin, and Looi, Chee-Kit
- Subjects
- GENERATIVE artificial intelligence; SERIAL publications; EDUCATIONAL outcomes; TEACHING methods; INFORMATION resources; DESCRIPTIVE statistics; CITATION analysis; ALTMETRICS; BIBLIOMETRICS; MEDICAL research; LITERATURE reviews; COMPUTER assisted instruction; MACHINE learning
- Abstract
Background: The emergence of ChatGPT in the education literature represents a transformative phase in educational technology research, marked by a surge in publications driven by initial research interest in new topics and media hype. While these publications highlight ChatGPT's potential in education, concerns arise regarding their quality, methodology, and uniqueness.
Objective: Our study employs unconventional methods by combining altmetrics and bibliometrics to explore ChatGPT in education comprehensively.
Methods: Two scholarly databases, Web of Science and Altmetric, were adopted to retrieve publications with citations and those mentioned on social media, respectively. We used the search query "ChatGPT" and set the publication date between November 30th, 2022, and August 31st, 2023. Both datasets were within education-related domains. Through a filtering process, we identified three publication categories: 49 papers with both altmetrics and citations, 60 with altmetrics only, and 66 with citations only. Descriptive statistical analysis was conducted on all three lists of papers, further dividing the entire collection into three distinct periods. All the selected papers underwent detailed coding regarding open access, paper types, subject domains, and learner levels. Furthermore, we analysed the keywords occurring and visualized clusters of the co-occurring keywords.
Results and Conclusions: An intriguing finding is the significant correlation between media/social media mentions and academic citations in ChatGPT in education papers, underscoring the transformative potential of ChatGPT and the urgency of its incorporation into practice. Our keyword analysis also reveals distinctions between the themes of the papers that received both mentions and citations and those that received only citations but no mentions.
Additionally, we noticed a limitation: authors' choice of keywords might be influenced by individual subjective judgements, potentially skewing thematic analyses based solely on author-assigned keywords, such as keyword co-occurrence analysis. We therefore advocate for developing a standardized keyword taxonomy in the educational technology field and for integrating Large Language Models to enhance keyword analysis in altmetric and bibliometric tools. This study reveals that ChatGPT in education literature is evolving from rapid publication to rigorous research.
Lay Description: What is currently known about this topic?
- ChatGPT in education has seen a surge in publications driven by media hype.
- Early publications tend to lack rigour and reiterate known advantages/disadvantages.
- Literature reviews on ChatGPT in education have limitations in scope and depth.
- Some studies have explored altmetrics and bibliometrics in other fields.
What does this paper add?
- Combines altmetrics and bibliometrics to analyse publications of ChatGPT in education.
- Addresses challenges in the discourse by offering unconventional analysis methods.
- Identifies publication trends and investigates the relationship between media attention and citations.
- Determines key themes in the literature through keyword co-occurrence analysis.
Implications for practice or policy:
- Expectations of continued growth in ChatGPT literature but with evolving publication trends.
- Distinctive characteristics of ChatGPT in education challenge keyword analysis.
- Proposes the development of a unified keyword taxonomy for clarity in the field.
- Suggests enhancing altmetrics and bibliometrics tools using Large Language Models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
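The keyword co-occurrence analysis mentioned in the abstract above can be sketched with the Python standard library alone. The keyword lists here are toy examples, not the study's data.

```python
from collections import Counter
from itertools import combinations

# Toy author-assigned keyword lists for four hypothetical papers.
papers = [
    ["chatgpt", "higher education", "assessment"],
    ["chatgpt", "academic integrity", "assessment"],
    ["large language models", "chatgpt", "higher education"],
    ["large language models", "academic integrity"],
]

# Count unordered keyword pairs that co-occur within a paper.
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

for pair, count in cooc.most_common(3):
    print(count, pair)
```

Pair counts like these are the raw input to the co-occurrence clustering and visualization the authors describe; the taxonomy problem they raise is visible even here, since "chatgpt" and "large language models" are counted as unrelated terms.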
19. Review on machine learning techniques for the assessment of the fatigue response of additively manufactured metal parts.
- Author
- Centola, Alessio, Tridello, Andrea, Ciampaglia, Alberto, Berto, Filippo, and Paolino, Davide Salvatore
- Subjects
- ALLOY fatigue; MACHINE learning; ARTIFICIAL neural networks; ALLOYS; SELECTIVE laser melting
- Abstract
The present review paper addresses the increasing interest in the application of machine learning (ML) algorithms in the assessment of the fatigue response of additively manufactured (AM) metal alloys. This review aims to systematically collect, categorize, and analyze relevant research papers in this domain. The most commonly used ML algorithms are presented, discussing their specific relevance to the fatigue modeling of AM metal alloys. A detailed analysis of the most relevant input features used in the literature to predict the main parameters related to the fatigue response is provided. Each work has been analyzed to highlight its strengths and peculiarities, thereby offering insights into novel methodologies and approaches for addressing critical challenges within this field. Particular attention is dedicated to the role of defects and the related size‐effect, as they strongly influence the fatigue response. In conclusion, this review not only synthesizes existing knowledge but also offers forward‐looking recommendations for future research directions, providing a valuable resource for researchers in the domain of ML‐assisted fatigue assessment for AM metal alloys. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. Quantifying social capital creation in post‐disaster recovery aid in Indonesia: methodological innovation by an AI‐based language model.
- Author
-
Marutschke, Daniel Moritz, Nurdin, Muhammad Riza, and Hirono, Miwa
- Subjects
- *
LANGUAGE models , *ARTIFICIAL intelligence , *SOCIAL capital , *NATURAL language processing , *DISASTER relief , *ETHNOLOGY research , *DISASTER resilience - Abstract
Smooth interaction with a disaster‐affected community can create and strengthen its social capital, leading to greater effectiveness in the provision of successful post‐disaster recovery aid. To understand the relationship between the types of interaction, the strength of social capital generated, and the provision of successful post‐disaster recovery aid, intricate ethnographic qualitative research is required, but it is likely to remain illustrative because it is based, at least to some degree, on the researcher's intuition. This paper thus offers an innovative research method employing a quantitative artificial intelligence (AI)‐based language model, which allows researchers to re‐examine data, thereby validating the findings of the qualitative research, and to glean additional insights that might otherwise have been missed. This paper argues that well‐connected personnel and religiously‐based communal activities help to enhance social capital by bonding within a community and linking to outside agencies and that mixed methods, based on the AI‐based language model, effectively strengthen text‐based qualitative research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. How well do collaboration quality estimation models generalize across authentic school contexts?
- Author
-
Chejara, Pankaj, Kasepalu, Reet, Prieto, Luis P., Rodríguez‐Triana, María Jesús, Ruiz Calleja, Adolfo, and Schneider, Bertrand
- Subjects
- *
VIDEO blogs , *COLLABORATIVE learning , *CLASSROOM activities , *ART exhibitions , *RESEARCH personnel , *VIDEO compression - Abstract
Multimodal learning analytics (MMLA) research has made significant progress in modelling collaboration quality for the purpose of understanding collaboration behaviour and building automated collaboration estimation models. Deploying these automated models in authentic classroom scenarios, however, remains a challenge. This paper presents findings from an evaluation of collaboration quality estimation models. We collected audio, video and log data from two different Estonian schools. These data were used in different combinations to build collaboration estimation models and then assessed across different subjects, different types of activities (collaborative‐writing, group‐discussion) and different schools. Our results suggest that the automated collaboration model can generalize to the context of different schools but with a 25% degradation in balanced accuracy (from 82% to 57%). Moreover, the results also indicate that multimodality brings more performance improvement in the case of group‐discussion‐based activities than collaborative‐writing‐based activities. Further, our results suggest that the video data could be an alternative for understanding collaboration in authentic settings where higher‐quality audio data cannot be collected due to contextual factors. The findings have implications for building automated collaboration estimation systems to assist teachers with monitoring their collaborative classrooms. 
Practitioner notes
What is already known about this topic:
- Multimodal learning analytics researchers have established several features as potential indicators for collaboration quality, e.g., speaking time or joint visual attention.
- The current state of the art has shown the feasibility of building automated collaboration quality models.
- Recent research has provided preliminary evidence of the generalizability of developed automated models across contexts that differ in terms of the given task and subject.
What does this paper add:
- This paper offers collaboration indicators for different types of collaborative learning activities in authentic classroom settings.
- The paper includes a systematic investigation into the generalizability of an automated collaboration quality model across different tasks, types of tasks and schools.
- This paper also offers a comparison between different modalities' potential to estimate collaboration quality in authentic settings.
Implications for practice:
- The findings inform the development of automated collaboration monitoring systems for authentic classroom settings.
- This paper provides evidence on the across-school generalizability of collaboration quality estimation models. [ABSTRACT FROM AUTHOR]
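Balanced accuracy, the metric behind the reported 82% to 57% degradation, is the mean of per-class recalls, which makes it robust when collaboration-quality classes are imbalanced. A minimal stdlib sketch; the labels are invented for illustration:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally,
    regardless of how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Hypothetical per-window collaboration-quality labels for one group
y_true = ["high", "high", "high", "high", "low", "low"]
y_pred = ["high", "high", "high", "low", "low", "high"]
print(balanced_accuracy(y_true, y_pred))  # (0.75 + 0.5) / 2 = 0.625
```

With heavily imbalanced classes, plain accuracy here would look better than 0.625, which is why balanced accuracy is the fairer cross-school comparison.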
- Published
- 2024
22. Supervised machine learning for the prediction of post‐operative clinical outcomes of hip and knee replacements: a review.
- Author
-
Ghadirinejad, Khashayar, Milimonfared, Roohollah, Taylor, Mark, Solomon, Lucian B., Graves, Stephen, Pratt, Nicole, de Steiger, Richard, and Hashemi, Reza
- Subjects
- *
TOTAL knee replacement , *TOTAL hip replacement , *SUPERVISED learning , *MACHINE learning , *DATA analytics - Abstract
Prediction models are being increasingly used in the medical field to identify risk factors and possible outcomes. Some of these are presently being used to develop guidelines for improving clinical practice. The application of machine learning (ML), comprising a powerful set of computational tools for analysing data, has been clearly expanding in the role of predictive modelling. This paper reviews the latest developments of supervised ML techniques that have been used to analyse data related to post-operative total hip and knee replacements. The aim was to review the most recent findings of relevant published studies by outlining the methodologies employed (most widely used supervised ML techniques), data sources, domains, limitations of predictive analytics and the quality of predictions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
23. Increasing the Reproducibility and Replicability of Supervised AI/ML in the Earth Systems Science by Leveraging Social Science Methods.
- Author
-
Wirz, Christopher D., Sutter, Carly, Demuth, Julie L., Mayer, Kirsten J., Chapman, William E., Cains, Mariana Goodall, Radford, Jacob, Przybylo, Vanessa, Evans, Aaron, Martin, Thomas, Gaudet, Lauriana C., Sulia, Kara, Bostrom, Ann, Gagne, David John, Bassill, Nick, Schumacher, Andrea, and Thorncroft, Christopher
- Subjects
- *
EARTH system science , *SUPERVISED learning , *ARTIFICIAL intelligence , *SOCIAL science research , *ARTIFICIAL hands - Abstract
Artificial intelligence (AI) and machine learning (ML) pose a challenge for achieving science that is both reproducible and replicable. The challenge is compounded in supervised models that depend on manually labeled training data, as they introduce additional decision-making and processes that require thorough documentation and reporting. We address these limitations by providing an approach to hand labeling training data for supervised ML that integrates quantitative content analysis (QCA), a method from social science research. The QCA approach provides a rigorous and well-documented hand labeling procedure to improve the replicability and reproducibility of supervised ML applications in Earth systems science (ESS), as well as the ability to evaluate them. Specifically, the approach requires (a) the articulation and documentation of the exact decision-making process used for assigning hand labels in a "codebook" and (b) an empirical evaluation of the "reliability" of the hand labelers. In this paper, we outline the contributions of QCA to the field, along with an overview of the general approach. We then provide a case study to further demonstrate how this framework has been and can be applied when developing supervised ML models for applications in ESS. With this approach, we provide an actionable path forward for addressing ethical considerations and goals outlined by recent AGU work on ML ethics in ESS. Plain Language Summary: Artificial intelligence and machine learning can make it hard to do science in a way that can be repeated. This can mean redoing a study in the exact same way to see if you can get the same or similar results (reproducibility) or trying to use the same study design on a new problem to see if the results are the same or similar (replicability). This kind of scientific repetition is important for developing robust knowledge, but it is hard to do with certain types of machine learning that rely on data that were categorized by researchers. 
The researchers have to make decisions and categorize their data, which the machine learning algorithm then uses as a guide to make its own decisions. Generally, there is not enough information shared by the researchers about how these decisions were made to repeat the science or evaluate how good it is. In this paper, we provide a way to address these shortcomings. The approach and example we offer illustrates how to (a) create a rulebook that can be shared for how to make decisions and (b) quantitatively measure how consistent the researchers are at using that rulebook to make their decisions. Key Points: 
- We provide a rigorous hand labeling procedure to improve the replicability and reproducibility of supervised machine learning (ML)
- Our case study and step-by-step guide clearly outline how the procedure can be applied
- The procedure is an actionable path forward for addressing ethical considerations and goals for ML development in Earth systems science [ABSTRACT FROM AUTHOR]
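The abstract does not name a specific reliability coefficient, so as one common choice for two hand labelers, Cohen's kappa (chance-corrected agreement) is sketched below; the coder labels are hypothetical:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two hand labelers:
    (observed - expected) / (1 - expected)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Expected agreement if both coders labeled at random with
    # their own marginal class frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codebook labels assigned independently by two coders
coder_1 = ["storm", "storm", "clear", "clear", "storm", "clear"]
coder_2 = ["storm", "storm", "clear", "storm", "storm", "clear"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # → 0.667
```

A low kappa signals that the codebook's decision rules are ambiguous and should be revised before the labels are used to train a supervised model.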
- Published
- 2024
24. From leaf to harvest: achieving sustainable agriculture through advanced disease prediction with DBN‐EKELM.
- Author
-
Rajasekar, Deepa, Moorthy, Vaishnavi, Rajadurai, Priscilla, and Ravikumar, Sethuraman
- Subjects
- *
SUSTAINABLE agriculture , *OPTIMIZATION algorithms , *MACHINE learning , *DEEP learning , *PLANT identification , *SELF-tuning controllers , *DATA mining - Abstract
Background: In the agricultural sector, the early identification of plant diseases presents a pressing challenge. Throughout the growing season, plants remain vulnerable to an array of diseases. Failure to detect these diseases at their early stages can significantly compromise the overall yield, thereby reducing profitability for farmers. To address this issue, several researchers have introduced standard methods that leverage machine learning and deep learning techniques. However, many of these methods offer limited classification accuracy and often necessitate extensive training parameter adjustments. Method: The objective of this study is to develop a new deep learning-based technique for detecting and classifying plant diseases at earlier stages. Thus, this paper introduces a novel technique known as the deep belief network-based enhanced kernel extreme learning machine (DBN-EKELM) that identifies a disease automatically and performs effective classification. The initial phase involves data preprocessing to enhance the quality of plant leaf images, facilitating the extraction of critical information. With the goal of achieving superior classification accuracy, this paper proposes the use of the DBN-EKELM technique for optimal plant leaf disease detection. Given that KELM parameters are highly sensitive to minor variations, proper parameter tuning is essential, so the paper also introduces a novel binary gaining sharing knowledge-based optimization algorithm (NBGSK). Result: The efficacy of the proposed DBN-EKELM method is evaluated by comparing its performance with other conventional methods on various measures, including accuracy, precision, specificity, sensitivity and F-measure. Conclusion: Experimental analyses demonstrate that the DBN-EKELM technique achieves approximately 98.2%, 97%, 98.1%, 97.4% and 97.8% on these measures, surpassing other standard methods. © 2024 Society of Chemical Industry. [ABSTRACT FROM AUTHOR]
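The five evaluation measures named in the abstract can all be derived from a binary confusion matrix. A small sketch with invented counts (not the paper's data):

```python
def classification_measures(tp, fp, tn, fn):
    """Accuracy, precision, specificity, sensitivity and F-measure
    from binary confusion-matrix counts (diseased vs. healthy)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)  # recall of the diseased class
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, specificity, sensitivity, f_measure

# Hypothetical counts for a diseased-vs-healthy leaf classifier
acc, prec, spec, sens, f1 = classification_measures(tp=90, fp=5, tn=95, fn=10)
print(acc, prec, spec, sens, f1)
```

Reporting all five together, as the paper does, guards against a classifier that scores well on accuracy alone by favouring the majority class.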
- Published
- 2024
25. A review of asset management using artificial intelligence‐based machine learning models: Applications for the electric power and energy system.
- Author
-
Rajora, Gopal Lal, Sanz‐Bobi, Miguel A., Tjernberg, Lina Bertling, and Urrea Cabus, José Eduardo
- Subjects
- *
ARTIFICIAL intelligence , *ASSET management , *ASSET protection , *MACHINE learning , *DEEP learning , *SUSTAINABILITY - Abstract
Power system protection and asset management present persistent technical challenges, particularly in the context of the smart grid and renewable energy sectors. This paper aims to address these challenges by providing a comprehensive assessment of machine learning applications for effective asset management in power systems. The study focuses on the increasing demand for energy production while maintaining environmental sustainability and efficiency. By harnessing the power of modern technologies such as artificial intelligence (AI), machine learning (ML), and deep learning (DL), this research explores how ML techniques can be leveraged as powerful tools for the power industry. By showcasing practical applications and success stories, this paper demonstrates the growing acceptance of machine learning as a significant technology for current and future business needs in the power sector. Additionally, the study examines the barriers and difficulties of large‐scale ML deployment in practical settings while exploring potential opportunities for these tactics. Through this overview, insights into the transformative potential of ML in shaping the future of power system asset management are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
26. Environment‐aware intelligent numerology control approach for 5G and beyond systems.
- Author
-
Sazak, Halenur and Yazar, Ahmet
- Subjects
- *
INTELLIGENT control systems , *MACHINE learning , *5G networks , *SMART cities , *WIRELESS channels - Abstract
Summary: 5G and beyond (5GB) systems are flexible enough to meet different users' communication requirements in parallel. Multi-numerology waveform design is an important part of this human-centric flexibility; however, it requires multi-numerology planning. In this paper, the integrated sensing and communications (ISAC) concept is used with machine learning (ML) techniques to develop an intelligent numerology control mechanism for 5GB. Thanks to the ISAC capabilities, an environment-aware system is designed, and several items of useful sensing information (USI) that affect the wireless channel characteristics are utilized. It is assumed that the USI is obtained through 5GB smart city networks. At that point, synthetic data generation is performed to form a new dataset with USI-based features. Additionally, a novel feature control algorithm is proposed in the paper. Lastly, simulation results are presented for different types of ML models, including several ensemble methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
27. Novel statistical method for data drift detection in satellite telemetry.
- Author
-
Praveen, M. V. Ramachandra, kuchhal, Piyush, and Choudhury, Sushabhan
- Subjects
- *
SATELLITE telemetry , *LOW earth orbit satellites , *MACHINE learning , *SPACE environment , *ELECTRIC power , *STATISTICS - Abstract
Summary: Autonomy is becoming a prime requirement for satellite mission control operations. Data-driven methods such as machine learning models are playing a key role in bringing in autonomy, and health-keeping data from satellite telemetry is a key ingredient in these methods. In real-world satellite operations, the health-keeping telemetry data gradually drifts due to adverse space weather effects and the wear and tear of electronic and mechanical components. The key question that arises is how to detect and quantify this data drift, which is generally a gradual phenomenon. This paper discusses a novel statistical method for detecting data drift occurring in satellite telemetry. For the experimental work in this paper, an actual telemetry data set of the BUS CURRENT sensor, part of the Electrical Power System of a Low Earth Orbit satellite, was considered. Data drift detection was carried out on this sensor data with both the developed novel statistical method and the Kolmogorov–Smirnov test, a probabilistic method, and the two sets of results are analysed and compared. Thereafter the novel statistical method was applied to a synthetic data set with induced drift to check its efficacy. [ABSTRACT FROM AUTHOR]
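The Kolmogorov–Smirnov comparison mentioned in the abstract is usually run with `scipy.stats.ks_2samp`; the stdlib sketch below computes only the two-sample KS statistic itself, on synthetic readings rather than the actual BUS CURRENT data:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs (0 = identical, 1 = disjoint)."""
    a = sorted(sample_a)
    b = sorted(sample_b)
    max_dist = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(x <= v for x in a) / len(a)
        cdf_b = sum(x <= v for x in b) / len(b)
        max_dist = max(max_dist, abs(cdf_a - cdf_b))
    return max_dist

# Synthetic bus-current readings: a baseline window vs. a drifted window
baseline = [5.0, 5.1, 5.0, 4.9, 5.2, 5.1, 5.0, 4.8]
drifted = [5.4, 5.5, 5.3, 5.6, 5.4, 5.5, 5.6, 5.3]
print(ks_statistic(baseline, drifted))  # 1.0: the windows do not overlap
```

Because the drift is gradual, a deployed detector would slide such windows over the telemetry stream and alarm when the statistic exceeds a threshold for several consecutive windows.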
- Published
- 2024
28. Explainable artificial intelligence for medical imaging: Review and experiments with infrared breast images.
- Author
-
Raghavan, Kaushik, Balasubramanian, Sivaselvan, and Veezhinathan, Kamakoti
- Subjects
- *
BREAST , *ARTIFICIAL intelligence , *COMPUTER-assisted image analysis (Medicine) , *INFRARED imaging , *MACHINE learning , *BREAST imaging , *DEEP learning , *ASSISTIVE technology - Abstract
There is a growing trend of using artificial intelligence, particularly deep learning algorithms, in medical diagnostics, revolutionizing healthcare by improving efficiency, accuracy, and patient outcomes. However, the use of artificial intelligence in medical diagnostics comes with the critical need to explain the reasoning behind artificial intelligence-based predictions and ensure transparency in decision-making. Explainable artificial intelligence has emerged as a crucial research area to address the need for transparency and interpretability in medical diagnostics. Explainable artificial intelligence techniques aim to provide insights into the decision-making process of artificial intelligence systems, enabling clinicians to understand the factors the algorithms consider in reaching their predictions. This paper presents a detailed review of saliency-based (visual) methods, such as class activation methods, which have gained popularity in medical imaging as they provide visual explanations by highlighting the regions of an image most influential in the artificial intelligence's decision. We also present the literature on non-visual methods, but the focus is on visual methods. We also use the existing literature to experiment with infrared breast images for detecting breast cancer. Towards the end of this paper, we propose an "attention guided Grad-CAM" that enhances the visualizations for explainable artificial intelligence. The existing literature shows that explainable artificial intelligence techniques have not been explored in the context of infrared medical images, which opens up a wide range of opportunities for further research to make clinical thermography an assistive technology for the medical community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
29. Nowcasting Earthquakes With Stochastic Simulations: Information Entropy of Earthquake Catalogs.
- Author
-
Rundle, John B., Baughman, Ian, and Zhang, Tianjian
- Subjects
- *
EARTHQUAKES , *EARTHQUAKE aftershocks , *ENTROPY (Information theory) , *MACHINE learning , *EARTHQUAKE hazard analysis , *RECEIVER operating characteristic curves , *CATALOGS , *ENTROPY - Abstract
Earthquake nowcasting has been proposed as a means of tracking the change in large earthquake potential in a seismically active area. The method was developed using observable seismic data, in which probabilities of future large earthquakes can be computed using Receiver Operating Characteristic methods. Furthermore, analysis of the Shannon information content of the earthquake catalogs has been used to show that there is information contained in the catalogs, and that it can vary in time. So an important question remains, where does the information originate? In this paper, we examine this question using stochastic simulations of earthquake catalogs. Our catalog simulations are computed using an Earthquake Rescaled Aftershock Seismicity ("ERAS") stochastic model. This model is similar in many ways to other stochastic seismicity simulations, but has the advantage that the model has only 2 free parameters to be set, one for the aftershock (Omori‐Utsu) time decay, and one for the aftershock spatial migration away from the epicenter. Generating a simulation catalog and fitting the two parameters to the observed catalog such as California takes only a few minutes of wall clock time. While clustering can arise from random, Poisson statistics, we show that significant information in the simulation catalogs arises from the "non‐Poisson" power‐law aftershock clustering, implying that the practice of de‐clustering observed catalogs may remove information that would otherwise be useful in forecasting and nowcasting. We also show that the nowcasting method provides similar results with the ERAS model as it does with observed seismicity. Plain Language Summary: Earthquake nowcasting was proposed as a means of tracking the change in the potential for large earthquakes in a seismically active area, using the record of small earthquakes. 
The method was developed using observed seismic data, in which probabilities of future large earthquakes can be computed using machine learning methods that were originally developed with the advent of radar in the 1940s. These methods are now being used in the development of machine learning and artificial intelligence models in a variety of applications. In recent times, methods to simulate earthquakes using the observed statistical laws of earthquake seismicity have been developed. One of the advantages of these stochastic models is that they can be used to analyze the various assumptions that are inherent in the analysis of seismic catalogs of earthquakes. In this paper, we analyze the importance of the space-time clustering that is often observed in earthquake seismicity. We find that the clustering is the origin of information that makes the earthquake nowcasting methods possible. We also find that a common practice of "aftershock de-clustering", often used in the analysis of these catalogs, removes information about future large earthquakes. Key Points: 
- Earthquake nowcasting tracks the change in the potential for large earthquakes, using information contained in seismic catalogs
- We analyze the information contained in the space-time clustering that is observed in earthquake seismicity
- We find that "aftershock de-clustering" of catalogs removes information about future large earthquakes that the nowcasting method uses [ABSTRACT FROM AUTHOR]
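The Shannon information content of a catalog can be illustrated on binned event counts. This is a toy illustration, not the authors' computation: a strongly clustered catalog has lower entropy (is more predictable) than the same number of events spread near-uniformly over the bins, which is one way to see how clustering carries the information that nowcasting uses:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a distribution of event counts,
    e.g. earthquakes per space-time bin of a catalog."""
    total = sum(counts)
    entropy = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            entropy -= p * math.log2(p)
    return entropy

# Synthetic catalogs with 16 events each, binned into 5 space-time bins
clustered = [12, 1, 1, 1, 1]   # strong aftershock clustering in one bin
uniform = [4, 3, 3, 3, 3]      # near-uniform background seismicity
print(shannon_entropy(clustered), shannon_entropy(uniform))
```

De-clustering pushes the first distribution toward the second, raising entropy and, on this toy view, discarding exactly the non-Poisson structure the abstract argues is informative.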
- Published
- 2024
30. Assessing changes in reliability methods over time: An unsupervised text mining approach.
- Author
-
Brown, Charles K. and Cameron, Bruce G.
- Abstract
Reliability engineering faces many of the same challenges today that it did at its inception in the 1950s. The fundamental issue remains uncertainty in system representation, specifically related to performance model structure and parameterization. Details of a design are unavailable early in the development process and therefore performance models must either account for the range of possibilities or be wrong. Increasing system complexity has compounded this uncertainty. In this work, we seek to understand how the reliability engineering literature has shifted over time. We execute a systematic literature review of 30,543 reliability engineering papers (covering roughly a third of the reliability papers indexed by Elsevier's Engineering Village). Topic modeling was performed on the abstracts of those papers to identify 279 topics. The hierarchical topic reduction resulted in the identification of eight top-level method topics (prognostics, statistics, maintenance, quality control, management, physics of failure, modeling, and risk assessment) as well as three domain-specific topics (nuclear, infrastructure, and software). We found that topics more associated with later phases in the development process (such as prognostics, maintenance, and quality control) have increased in popularity over time relative to other topics. We propose that this is a response to the challenges posed by model uncertainty and increasing complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
31. Prediction of mechanical properties of high‐performance concrete and ultrahigh‐performance concrete using soft computing techniques: A critical review.
- Author
-
Kumar, Rakesh, Rai, Baboo, and Samui, Pijush
- Abstract
A cement-based material that meets the general goals of mechanical properties, workability, and durability as well as the ever-increasing demands of environmental sustainability is produced by varying the type and quantity of individual constituents in high-performance concrete (HPC) and ultrahigh-performance concrete (UHPC). Expensive and time-consuming laboratory experiments can be used to estimate the properties of concrete mixtures and elements. As an alternative, these attributes can be approximated by means of predictive models created through the application of artificial intelligence (AI) methodologies. AI approaches are among the most effective ways to solve engineering problems due to their capacity for pattern recognition and knowledge processing. Machine learning (ML) and deep learning (DL) are subfields of AI that are gaining popularity across many scientific domains as a result of their many benefits over statistical and experimental models. These include, but are not limited to, better accuracy, faster performance, greater responsiveness in complex environments, and lower economic costs. In order to assess the critical features of the literature, a comprehensive review of ML and DL applications for HPC and UHPC was conducted in this study. This paper offers a thorough explanation of the fundamental terms and ideas of ML and DL algorithms that are frequently used to predict mechanical properties of HPC and UHPC. Engineers and researchers working with construction materials will find this paper useful in helping them choose accurate and appropriate methods for their needs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
32. The analysis of green advertisement communication strategy based on deep factorization machine deep learning model under supply chain management.
- Author
-
Yu, Xue, Zhu, Yunfei, Jia, Congcong, Lu, Wanqiu, and Xu, Hao
- Subjects
- *
DEEP learning , *MACHINE learning , *SUPPLY chain management , *COMMUNICATION strategies , *FACTORIZATION , *PERCEPTION (Philosophy) - Abstract
Artificial intelligence (AI) technology has brought new reconstruction opportunities for the intelligence of the advertisement industry through AI technologies such as machine learning and deep learning. First, the relationship between AI and the attractiveness of green advertisements is investigated, and the influence of different AI technologies in green advertisements on consumers' perception of that attractiveness is summarized. Second, based on the green advertisement dissemination rate data set, data visualization exploration is carried out, and data deletion and coding are performed for the different characteristic variables. Finally, to address the problems existing in current green advertisement communication and the high-dimensional, sparse characteristics of the dissemination rate data set, this paper builds on Deep FM (Factorization Machine), adds Gradient Boost Decision Tree (GBDT) to assist the experiment, and tests the prediction performance for green advertising communication. The results are as follows. (1) Different AI expressions in green advertisements affect consumers' perception of the attractiveness of green advertisements. (2) The prediction ability of the Deep FM model after feature engineering is better than with data cleaning alone; the prediction effect of the model is obviously improved. The purpose of this paper is to integrate green advertising media communication into the ecological concept of harmonious coexistence between man and nature, strengthen the political belief of ecological civilization construction, and conform to the communication trend of today's severe ecological situation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
33. Research on the improvement path of international competitiveness of China's agricultural product supply chain from the perspective of machine learning.
- Author
-
Lei, Juan
- Subjects
- *
FARM produce , *FARM supplies , *SUPPLY chains , *SUPPLY chain management , *AGRICULTURAL processing , *MACHINE learning , *FARM mechanization - Abstract
In order to explore the international competitiveness of China's agricultural product supply chain, this paper combines machine learning algorithms to optimize the data processing of the agricultural product supply chain, and proposes a data processing method based on intra-column and inter-column rules. In view of the problems of China's agricultural product supply chain, the paper establishes a new supply chain management model built on the various modules of the supply chain, and constructs a corresponding intelligent analysis model. Moreover, the paper applies machine learning algorithms to study the path for improving the international competitiveness of China's agricultural product supply chain, and builds an intelligent model for this purpose. Finally, after constructing the model, the paper obtains the path to improve the international competitiveness of China's agricultural products through simulation runs of the model, which also indirectly verifies the effectiveness of the proposed intelligent algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
34. Cognitive decline assessment using semantic linguistic content and transformer deep learning architecture.
- Author
-
PL, Rini and KS, Gayathri
- Subjects
- *
DIAGNOSIS of dementia , *COGNITION disorders diagnosis , *SPEECH evaluation , *CROSS-sectional method , *PREDICTION models , *TASK performance , *DESCRIPTIVE statistics , *NATURAL language processing , *LINGUISTICS , *EXPERIMENTAL design , *DEEP learning , *COMPUTER-aided diagnosis , *LATENT semantic analysis , *NEUROPSYCHOLOGICAL tests , *RESEARCH , *SEMANTIC memory , *EARLY diagnosis , *COMPARATIVE studies , *MACHINE learning , *FACTOR analysis , *ALGORITHMS , *DEMENTIA patients - Abstract
Background: Dementia is a cognitive decline that leads to the progressive deterioration of an individual's ability to perform daily activities independently. As a result, a considerable amount of time and resources are spent on caretaking. Early detection of dementia can significantly reduce the effort and resources needed for caretaking. Aims: This research proposes an approach for assessing cognitive decline by analysing speech data, specifically focusing on speech relevance as a crucial indicator for memory recall. Methods & Procedures: This is a cross-sectional, online, self-administered study. The proposed method used a deep learning architecture based on transformers, with BERT (Bidirectional Encoder Representations from Transformers) and Sentence-Transformer to derive encoded representations of speech transcripts. These representations provide contextually descriptive information that is used to analyse the relevance of sentences in their respective contexts. The encoded information is then compared using cosine similarity metrics to measure the relevance of uttered sequences of sentences. The study uses the Pitt Corpus Dementia dataset for experimentation, which consists of speech data from individuals with and without dementia. The accuracy of the proposed multi-QA-MPNet (Multi-Query Maximum Inner Product Search Pretraining) model is compared with other pretrained transformer models of Sentence-Transformer. Outcomes & Results: The results show that the proposed approach outperforms the other models in capturing context-level information, particularly semantic memory. Additionally, the study explores the suitability of different similarity measures to evaluate the relevance of uttered sequences of sentences. The experimentation reveals that cosine similarity is the most appropriate measure for this task. 
Conclusions & Implications: This finding has significant implications for the early warning signs of dementia, as it suggests that cosine similarity metrics can effectively capture the semantic relevance of spoken language. Persistent cognitive decline over time is one of the indicators of dementia. Additionally, early dementia could be recognised by analysing other modalities such as speech and brain images. WHAT THIS PAPER ADDS: What is already known on this subject: It is already known that speech‐ and language‐based detection methods can be useful for dementia diagnosis, as language difficulties are often early signs of the disease. Additionally, deep learning algorithms have shown promise in detecting and diagnosing dementia through analysing large datasets, particularly in speech‐ and language‐based detection methods. However, further research is needed to validate the performance of these algorithms on larger and more diverse datasets and to address potential biases and limitations. What this paper adds to existing knowledge: This study presents a unique and effective approach for cognitive decline assessment through analysing speech data. The study provides valuable insights into the importance of context and semantic memory in accurately detecting potential dementia and demonstrates the applicability of deep learning models for this purpose. The findings of this study have important clinical implications and can inform future research and development in the field of dementia detection and care. What are the potential or actual clinical implications of this work?: The proposed approach for cognitive decline assessment using speech data and deep learning models has significant clinical implications. It has the potential to improve the accuracy and efficiency of dementia diagnosis, leading to earlier detection and more effective treatments, which can improve patient outcomes and quality of life. [ABSTRACT FROM AUTHOR]
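The relevance scoring described in this abstract reduces to cosine similarity between encoded sentence vectors. A minimal sketch with hand-made toy vectors (standing in for the BERT/Sentence-Transformer embeddings, which are not reproduced here):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings": an utterance that stays on topic vs. one that drifts.
prev_sentence = [0.9, 0.1, 0.3]
on_topic      = [0.8, 0.2, 0.4]
off_topic     = [0.1, 0.9, -0.5]

# The on-topic continuation scores higher, i.e. is judged more relevant.
print(cosine_similarity(prev_sentence, on_topic) >
      cosine_similarity(prev_sentence, off_topic))  # True
```

In the study itself such scores are computed over real transcript embeddings; the toy vectors here only illustrate the metric.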
- Published
- 2024
- Full Text
- View/download PDF
35. Predicting redox potentials by graph‐based machine learning methods.
- Author
-
Jia, Linlin, Brémond, Éric, Zaida, Larissa, Gaüzère, Benoit, Tognetti, Vincent, and Joubert, Laurent
- Subjects
- *
GRAPH neural networks , *MACHINE learning , *OXIDATION-reduction reaction , *PHYSICAL & theoretical chemistry , *DENSITY functional theory - Abstract
The evaluation of oxidation and reduction potentials is a pivotal task in various chemical fields. However, their accurate prediction by theoretical computations, which is a complementary task and sometimes the only alternative to experimental measurement, may often be resource‐intensive and time‐consuming. This paper addresses this challenge through the application of machine learning techniques, with a particular focus on graph‐based methods (such as graph edit distances, graph kernels, and graph neural networks) that are reviewed to enlighten their deep links with theoretical chemistry. To this aim, we establish the ORedOx159 database, a comprehensive, homogeneous (with reference values stemming from density functional theory calculations), and reliable resource containing 318 one‐electron reduction and oxidation reactions and featuring 159 large organic compounds. Subsequently, we provide an instructive overview of good practice in machine learning and of commonly utilized machine learning models. We then assess their predictive performances on the ORedOx159 dataset through extensive analyses. Our simulations using descriptors that are computed in an almost instantaneous way result in a notable improvement in prediction accuracy, with mean absolute error (MAE) values equal to 5.6 kcal mol−1 for reduction and 7.2 kcal mol−1 for oxidation potentials, which paves the way toward efficient in silico design of new electrochemical systems. [ABSTRACT FROM AUTHOR]
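The errors reported in this abstract are mean absolute errors (MAE) between predicted and DFT-reference potentials. The metric itself is simple; a sketch with hypothetical values (not the ORedOx159 data):

```python
def mean_absolute_error(y_true, y_pred):
    """Average unsigned deviation between reference and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical DFT-reference vs. ML-predicted potentials (kcal/mol).
reference = [-30.0, 12.5, -5.0, 40.0]
predicted = [-24.0, 10.5, -9.0, 46.0]
print(mean_absolute_error(reference, predicted))  # 4.5
```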
- Published
- 2024
- Full Text
- View/download PDF
36. Artificial intelligence in cosmetic dermatology.
- Author
-
Kania, Barbara, Montecinos, Karen, and Goldberg, David J.
- Subjects
- *
COSMETIC dermatology , *ARTIFICIAL intelligence , *PATIENT experience , *PATIENT satisfaction , *PERSONAL beauty - Abstract
Background: Cosmetic dermatology is a growing field as more patients are seeking treatments for esthetic concerns. Traditionally, practitioners and patients utilize their own perceptions, current beauty standards, and manual observation to determine their satisfaction with cosmetic interventions. Artificial intelligence (AI) can be introduced into cosmetic dermatology to provide objective data‐driven recommendations to both dermatologists and patients. Objective: The purpose of this paper is to compose a unified review that illustrates the various facets of artificial intelligence and formulate a hypothesis regarding the new implications of artificial intelligence in cosmetic dermatology specifically. Methods: A comprehensive search on PubMed was conducted to identify the available information related to AI in cosmetic dermatology. The search was conducted using a combination of keywords including "cosmetic dermatology" and "artificial intelligence." Results: The current literature indicates that AI models offer personalized, efficient, and result‐driven outputs that can enhance cosmetic outcomes, patient satisfaction, and overall experience. Conclusion: Artificial intelligence integration in cosmetic dermatology shows a promising future, offering the ability to analyze vast data sets and deliver a tailored patient experience. By incorporating AI into cosmetic dermatology, there is an opportunity to balance evidence‐based decision‐making with the artistic human touch of cosmetic dermatologists. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Image‐based crop disease detection using machine learning.
- Author
-
Dolatabadian, Aria, Neik, Ting Xiang, Danilevicz, Monica F., Upadhyaya, Shriprabha R., Batley, Jacqueline, and Edwards, David
- Subjects
- *
MACHINE learning , *PLANT diseases , *ARTIFICIAL intelligence , *DATA analytics , *AGRICULTURE - Abstract
Crop disease detection is important due to its significant impact on agricultural productivity and global food security. Traditional disease detection methods often rely on labour‐intensive field surveys and manual inspection, which are time‐consuming and prone to human error. In recent years, the advent of imaging technologies coupled with machine learning (ML) algorithms has offered a promising solution to this problem, enabling rapid and accurate identification of crop diseases. Previous studies have demonstrated the potential of image‐based techniques in detecting various crop diseases, showcasing their ability to capture subtle visual cues indicative of pathogen infection or physiological stress. However, the field is rapidly evolving, with advancements in sensor technology, data analytics and artificial intelligence (AI) algorithms continually expanding the capabilities of these systems. This review paper consolidates the existing literature on image‐based crop disease detection using ML, providing a comprehensive overview of cutting‐edge techniques and methodologies. Synthesizing findings from diverse studies offers insights into the effectiveness of different imaging platforms, contextual data integration and the applicability of ML algorithms across various crop types and environmental conditions. The importance of this review lies in its ability to bridge the gap between research and practice, offering valuable guidance to researchers and agricultural practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Prediction of stock prices with automated reinforced learning algorithms.
- Author
-
Yasin, Said, Paschke, Adrian, and Al Qundus, Jamal
- Subjects
- *
STOCK price forecasting , *DEEP reinforcement learning , *MACHINE learning , *STOCK prices , *STOCKS (Finance) - Abstract
Predicting stock price movements remains a major challenge in time series analysis. Despite extensive research on various machine learning techniques, few models have consistently achieved success in automated stock trading. One of the main challenges in stock price forecasting is that the optimal model changes over time due to market dynamics. This paper aims to predict stock prices using automated reinforcement learning algorithms and to analyse their efficiency compared with conventional methods. We automate DQN models and their variants, known for their adaptability, by continuously retraining them using recent data to capture market dynamics. We demonstrate that our dynamic models improve the accuracy of predicting the directions of various DAX stocks from 50.00% to approximately 60.00%, compared with conventional methods. Additionally, we conclude that dynamic models should be updated in response to shifts rather than at fixed intervals. [ABSTRACT FROM AUTHOR]
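The headline result here is directional accuracy: the fraction of moves whose up/down direction the model called correctly. A minimal sketch of that metric on toy prices (illustrative values, not DAX data):

```python
def directional_accuracy(prices, predicted_up):
    """Fraction of periods where the predicted direction matched the realised move.

    prices: closing prices of length n+1; predicted_up: n booleans, one per move.
    """
    hits = 0
    for i, up in enumerate(predicted_up):
        actual_up = prices[i + 1] > prices[i]
        hits += (up == actual_up)
    return hits / len(predicted_up)

prices = [100.0, 101.0, 100.5, 102.0, 101.0]   # realised moves: up, down, up, down
predicted = [True, True, True, False]          # a model's directional calls
print(directional_accuracy(prices, predicted))  # 0.75
```

In the paper's setting this score is what rises from about 50% to about 60% as the DQN variants are retrained on recent data.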
- Published
- 2024
- Full Text
- View/download PDF
39. Resource allocation strategy of space cloud network based on resource clustering.
- Author
-
Liu, Jun, Wang, Yufei, Dai, Fucheng, and Wang, Chuang
- Subjects
- *
MACHINE learning , *REINFORCEMENT learning , *EXTRATERRESTRIAL resources , *RESOURCE allocation , *INFORMATION networks - Abstract
Summary: Due to its dynamic and heterogeneous nature, a space information network (SIN) struggles to fully utilize its limited on‐board resources, while traditional resource management methods cannot adapt to increasingly diverse task requirements. Space cloud network architecture is an effective technology for reducing the pressure on satellite resources. To effectively manage space cloud network resources, we design a resource allocation strategy based on resource clustering. Firstly, we propose the space cloud network architecture. Then, we propose a genetic algorithm to cluster the space cloud resources. Finally, we propose a dynamic resource allocation method based on reinforcement learning for the dynamic characteristics of space cloud resources. The method improves the reinforcement learning algorithm through dynamic objective optimization to complete the optimization of multiple objectives in the process of space cloud resource allocation. The simulation results show that the algorithm proposed in this paper reduces the task execution delay by an average of 10.5% compared with the original DQN algorithm and increases the execution success rate by 2.17%. [ABSTRACT FROM AUTHOR]
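The dynamic allocation method builds on value-based reinforcement learning (the baseline compared against is DQN). The core temporal-difference update common to this family can be sketched in tabular form; the states, actions, and reward here are illustrative placeholders, not the paper's design:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD update: move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy table: two network states, two allocation actions.
Q = {"idle": {"allocate": 0.0, "defer": 0.0},
     "busy": {"allocate": 0.0, "defer": 0.0}}

# A successful allocation from the idle state earns reward 1.0.
q_update(Q, "idle", "allocate", reward=1.0, next_state="busy")
print(round(Q["idle"]["allocate"], 3))  # 0.1
```

A deep variant such as DQN replaces the table with a neural network, and the paper further modifies the objective for multi-objective optimization.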
- Published
- 2024
- Full Text
- View/download PDF
40. Enhanced Hybrid Algorithms for Segmentation and Reconstruction of Granular Grains From X‐Ray Micro Computed‐Tomography Images.
- Author
-
Li, Ruidong, Zhang, Pin, Yin, Zhen‐Yu, and Sheil, Brian
- Subjects
- *
SOIL testing , *MACHINE learning , *SOIL granularity , *SOIL sampling , *IMAGE analysis , *RICE quality - Abstract
ABSTRACT Accurate three‐dimensional (3D) reconstruction of granular grains from x‐ray micro‐computed tomography (µCT) images is a long‐standing challenge, particularly for dense soil samples. This study develops a machine learning (ML) enhanced approach to automatically reconstruct granular grains from µCT images. The novel academic contributions of this paper include (a) a hierarchical strategy based on parameter‐independent polygonal approximation, area, and concavity analysis, for the first time, to identify and eliminate both intergranular and intragranular voids; (b) incorporation of a recursive segmentation scheme and ML‐based grain classifier to avoid over‐segmentation; (c) novel modifications on the determination of splitting paths to enhance segmentation accuracy; and (d) an effective approach of assigning initial level set functions for reconstructing granular grains automatically. The hybrid ML algorithm is applied to µCT images of dense Mojave Mars Simulant. The results indicate that the proposed method can accurately segment grain clumps with unclear boundaries. The new automatic reconstruction algorithm eliminates ineffective operations and achieves a three‐fold increase in computational speed over previous methods documented in the literature. Ninety‐one percent of grains with distinct boundaries can be reconstructed, and the reconstruction ratio reaches 81% even for grains without distinct boundaries. The overall reconstruction ratio of grains increases by 20% compared with previous methods, achieving a step‐change improvement for one‐to‐one mapping of real soil samples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. STP‐CNN: Selection of transfer parameters in convolutional neural networks.
- Author
-
Mallouk, Otmane, Joudar, Nour‐Eddine, and Ettaouil, Mohamed
- Subjects
- *
MACHINE learning , *CONVOLUTIONAL neural networks , *DEEP learning , *TRANSFER of training - Abstract
Nowadays, transfer learning has shown promising results in many applications. However, most deep transfer learning methods such as parameter sharing and fine‐tuning still suffer from the lack of a parameter transmission strategy. In this paper, we propose a new optimization model for parameter‐based transfer learning in convolutional neural networks named STP‐CNN. Indeed, we propose a Lasso transfer model supported by a regularization term that controls transferability. Moreover, we opt for the proximal gradient descent method to solve the proposed model. The suggested technique makes it possible, under certain conditions, to control exactly which parameters in each convolutional layer of the source network are used directly or adjusted in the target network. Several experiments prove the performance of our model in locating the transferable parameters as well as improving the data classification. [ABSTRACT FROM AUTHOR]
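Proximal gradient descent on a Lasso-style objective hinges on the soft-thresholding operator, which is what drives small parameters exactly to zero (i.e., excludes them from transfer). A minimal sketch of that operator, as an illustration of the optimization machinery rather than the STP-CNN implementation:

```python
def soft_threshold(w, lam):
    """Proximal operator of lam * |w|: shrink w toward zero, clipping at zero."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

# Applying the operator after a gradient step zeroes out small weights
# while only shrinking the large, informative ones.
weights = [0.25, -1.5, 2.0, -0.1]
print([soft_threshold(w, 0.5) for w in weights])  # [0.0, -1.0, 1.5, 0.0]
```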
- Published
- 2024
- Full Text
- View/download PDF
42. Machine Learning Algorithm Guides Catalyst Choices for Magnesium‐Catalyzed Asymmetric Reactions.
- Author
-
Baczewska, Paulina, Kulczykowski, Michał, Zambroń, Bartosz, Jaszczewska‐Adamczak, Joanna, Pakulski, Zbigniew, Roszak, Rafał, Grzybowski, Bartosz A., and Mlynarski, Jacek
- Subjects
- *
MACHINE learning , *CATALYSIS , *MAGNESIUM , *CATALYSTS , *DEMAND forecasting - Abstract
Organic‐chemical literature encompasses large numbers of catalysts and reactions they can effect. Many of these examples are published merely to document the catalysts' scope but do not necessarily guarantee that a given catalyst is "optimal"—in terms of yield or enantiomeric excess—for a particular reaction. This paper describes a Machine Learning model that aims to improve such catalyst‐reaction assignments based on carefully curated literature data. As we show here for the case of asymmetric magnesium catalysis, this model achieves relatively high accuracy and offers out‐of‐the‐box predictions successfully validated by experiment, e.g. in synthetically demanding asymmetric reductions or Michael additions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Interpretable Machine Learning for Investigating the Molecular Mechanisms Governing the Transparency of Colorless Transparent Polyimide for OLED Cover Windows.
- Author
-
Zhang, Songyang, He, Xiaojie, Xiao, Peng, Xia, Xuejian, Zheng, Feng, Xiang, Shuangfei, and Lu, Qinghua
- Subjects
- *
MACHINE learning , *POLYIMIDE films , *FLEXIBLE display systems , *BAND gaps , *DENSITY functional theory - Abstract
With the rapid development of flexible displays and wearable electronics, there is a substantial demand for colorless transparent polyimide (CPI) films with different properties. Traditional trial‐and‐error experimental methods are time‐consuming and costly, and density functional theory‐based prediction of HOMO‐LUMO gap energy also takes time and is prone to varying degrees of error. Inspired by machine learning (ML) applications in molecular and materials science, this paper proposes a data‐driven ML strategy to study the correlation between microscopic molecular mechanisms and macroscopic optical properties. Based on the varying degrees of impact of various molecular features on the cutoff wavelength (λcutoff), the ML algorithm is first used to quickly and accurately predict the λcutoff of CPI. Several new CPI films are then designed and prepared based on the key molecular features, and the predicted values of their λcutoff are effectively verified within the experimental error range. The interpretability provided by the model makes it possible to establish correlations between the nine key descriptors identified and their physicochemical meanings. Their contributions to the transparency of polyimide films are also analyzed, thereby giving insight into the molecular mechanisms underlying transparency modulation for CPIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. A Dual Cascaded Deep Theoretic Learning Approach for the Segmentation of the Brain Tumors in MRI Scans.
- Author
-
Sreedhar, Jinka, Dara, Suresh, Srinivasulu, C. H., Katari, Butchi Raju, Alkhayyat, Ahmed, Vidyarthi, Ankit, and Alsulami, Mashael M.
- Subjects
- *
MAGNETIC resonance imaging , *COMPUTER-assisted image analysis (Medicine) , *BRAIN tumors , *DIAGNOSTIC imaging , *BRAIN imaging , *DEEP learning - Abstract
Accurate segmentation of brain tumors from magnetic resonance imaging (MRI) is crucial for diagnosis, treatment planning, and monitoring of patients with neurological disorders. This paper proposes an approach for brain tumor segmentation employing a cascaded architecture integrating L‐Net and W‐Net deep learning models. The proposed cascaded model leverages the strengths of U‐Net as a baseline model to enhance the precision and robustness of the segmentation process. In the proposed framework, the L‐Net excels in capturing the mask, while the W‐Net focuses on fine‐grained features and spatial information to discern complex tumor boundaries. The cascaded configuration allows for a seamless integration of these complementary models, enhancing the overall segmentation performance. To evaluate the proposed approach, extensive experiments were conducted on the datasets of BraTs and SMS Medical College comprising multi‐modal MRI images. The experimental results demonstrate that the cascaded L‐Net and W‐Net model consistently outperforms individual models and other state‐of‐the‐art segmentation methods. The performance metrics such as the achieved Dice Similarity Coefficient values indicate high segmentation accuracy, while the Sensitivity and Specificity metrics showcase the model's ability to correctly identify tumor regions and exclude healthy tissues. Moreover, the low Hausdorff Distance values confirm the model's capability to accurately delineate tumor boundaries. The proposed cascaded scheme leverages the strengths of each network, leading to superior performance compared to existing methods in the literature. [ABSTRACT FROM AUTHOR]
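The Dice Similarity Coefficient used to evaluate segmentation quality measures overlap between predicted and ground-truth tumor masks. A minimal sketch on toy binary masks (standing in for flattened MRI segmentations):

```python
def dice_coefficient(pred, truth):
    """2 * |P ∩ T| / (|P| + |T|) for flat binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D masks: 3 of 4 predicted tumor pixels overlap the 4 true ones.
predicted    = [1, 1, 1, 0, 0, 1]
ground_truth = [1, 1, 0, 0, 1, 1]
print(dice_coefficient(predicted, ground_truth))  # 0.75
```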
- Published
- 2024
- Full Text
- View/download PDF
46. Deep SVBRDF Acquisition and Modelling: A Survey.
- Author
-
Kavoosighafi, Behnaz, Hajisharif, Saghi, Miandji, Ehsan, Baravdish, Gabriel, Cao, Wen, and Unger, Jonas
- Subjects
- *
GENERATIVE artificial intelligence , *MACHINE learning , *REFLECTANCE measurement , *RESEARCH & development , *REFLECTANCE , *DEEP learning - Abstract
Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine‐learning‐driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high‐quality measurements of bi‐directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi‐directional Reflectance Distribution Functions (SVBRDFs). Learning‐based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State‐of‐the‐Art Report (STAR) presents an in‐depth overview of the state‐of‐the‐art in machine‐learning‐driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real‐world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Detection of circulating tumor cells in blood using two‐step random forest.
- Author
-
Wei, Hua, Natori, Takahiro, Tanaka, Tomohiro, Aoki, Shin, Kuriyama, Sho, Yamada, Takeshi, and Aikawa, Naoyuki
- Subjects
- *
CELL morphology , *RANDOM forest algorithms , *INSPECTION & review , *IMAGE processing , *BLOOD cells - Abstract
Cancer has been the leading cause of death among Japanese since 1981, and many people die from it every year worldwide. While various measures have been taken to reduce the mortality rate of cancer, circulating tumor cells (CTCs) in the blood have been attracting attention in recent years. In the past, CTCs were detected by visual inspection by a physician or by an expensive machine, but these methods required much effort from the physician and could detect only EpCAM‐expressing cells. In addition, detection by image processing has been used, but it has the problem that the region of interest covers only part of the image and there are many false positives. In this paper, we propose a two‐step classification method that focuses on the shape and surface of cells. In the proposed method, multiple shape and surface features are obtained for four types of cells in blood images: Clusters, CTCs, Normal Cells, and Vertical Cells. Based on the features, cells are classified using a two‐step Random Forest and their accuracy is evaluated. Furthermore, the effectiveness of the proposed method is demonstrated by comparing it with conventional methods. [ABSTRACT FROM AUTHOR]
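The two-step idea — first screen out Clusters and Vertical Cells by shape, then separate CTCs from Normal Cells among the remaining round single cells — can be sketched as a cascade. The threshold rules below are hypothetical placeholders standing in for the two trained Random Forests, and the feature names and cutoffs are invented for illustration:

```python
def classify_cell(area, circularity, texture_roughness):
    """Two-stage cascade: shape screening, then surface-based CTC decision."""
    # Stage 1: shape-based screening (placeholder thresholds, not trained forests).
    if area > 500:
        return "Cluster"
    if circularity < 0.4:
        return "Vertical Cell"
    # Stage 2: size/surface decision among the remaining single round cells.
    if area > 200 and texture_roughness > 0.6:
        return "CTC"
    return "Normal Cell"

print(classify_cell(area=800, circularity=0.9, texture_roughness=0.2))  # Cluster
print(classify_cell(area=300, circularity=0.8, texture_roughness=0.8))  # CTC
```

The benefit of the cascade is that each stage only has to solve a simpler two- or three-way decision on features suited to it, which is the structure the paper's two Random Forests exploit.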
- Published
- 2024
- Full Text
- View/download PDF
48. BPNN‐based flow classification and admission control for software defined IIoT.
- Author
-
Wang, Cheng, Xue, Hai, and Huan, Zhan
- Subjects
- *
SOFTWARE-defined networking , *INTELLIGENT networks , *BACK propagation , *MACHINE learning , *INTERNET of things - Abstract
Flow admission control (FAC) aims to efficiently manage service requests while maximizing network utilization. With multiple connection requests, access delay or even service interruption may occur. This paper proposes a novel FAC approach to reduce the contention between the end nodes and ensure high utilization of the networking resources for software defined IIoT. First, incoming flows are classified into different priorities using a back propagation neural network based on selected features representing the current network status. Second, with the designed flow admission policies, bandwidth and buffer size are estimated with a stochastic network calculus model. Finally, the thresholds of the proposed FAC scheme are dynamically decided based on the above two parameters. Various flows are admitted or rejected via the proposed FAC to maintain real‐time processing. Unlike traditional FAC schemes, which rely on static priority systems, the proposed scheme leverages a machine learning technique for dynamic flow prioritization and a stochastic network calculus model for precise estimation. Computer simulation reveals that the proposed scheme accurately classifies the flows, substantially decreases the transmission delay, and improves the network utilization compared to the existing FAC schemes. This highlights the superiority of the proposed scheme in meeting the demands of software defined IIoT. [ABSTRACT FROM AUTHOR]
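Once a flow has been classified and its resource demand estimated, the admission decision reduces to comparing that demand against the per-priority thresholds. A minimal sketch of that final step (the priorities, demand figures, and thresholds are illustrative placeholders, not outputs of the paper's BPNN or network-calculus model):

```python
def admit_flow(priority, bandwidth_demand, buffer_demand, thresholds):
    """Admit a classified flow only if its demands fit its priority's thresholds."""
    bw_limit, buf_limit = thresholds[priority]
    return bandwidth_demand <= bw_limit and buffer_demand <= buf_limit

# Per-priority (bandwidth, buffer) thresholds, dynamically chosen in the paper
# from the estimated bandwidth and buffer size.
thresholds = {"high": (100.0, 64), "low": (40.0, 16)}

print(admit_flow("high", 80.0, 32, thresholds))  # True
print(admit_flow("low", 80.0, 32, thresholds))   # False: exceeds low-priority limits
```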
- Published
- 2024
- Full Text
- View/download PDF
49. Exploring influential factors in peer upvoting within social annotation.
- Author
-
Li, Shan, Huang, Xiaoshan, Lin, Lijia, and Chen, Fu
- Subjects
- *
COMMUNITY of inquiry , *WORD frequency , *SOCIAL interaction , *TEXT mining , *COLLABORATIVE learning - Abstract
Upvotes serve important purposes in online social annotation environments. However, limited studies have explored the influential factors affecting peer upvoting in online collaborative learning. In this study, we analysed the factors influencing students' upvotes received from their peers as 91 participants utilized Perusall, an online social annotation system, for collaborative reading. The participants were asked to collaboratively annotate 29 reading materials in a semester. We collected student reading behaviours and analysed their annotations with a text‐mining tool of Linguistic Inquiry and Word Count (LIWC). Moreover, conditional inference tree was used to determine the relative importance of explanatory factors to the upvotes students received. The results showed that the high‐upvote group made significantly more annotations, posted more responses to others' annotations and displayed fewer negative emotions in annotations than those who did not receive upvotes. The two groups of students had no significant differences in the upvotes given to others, as well as cognitive activities and positive emotions involved in annotations. Moreover, the number of annotations was the determining factor in predicting the upvotes that one could receive in social annotation activities. 
This study has significant practical implications regarding providing interventions in social annotation‐based collaborative reading.
Practitioner notes
What is already known about this topic: Social annotations enhance students' reading experience, facilitate knowledge sharing and collaboration, promote high‐quality learning interactions and ultimately lead to improved performance. In social annotation environments, receiving upvotes from peers is not only a type of feedback but also a form of motivation, social interaction and social validation. No study has explored the influential factors in peer upvoting within social annotation‐based learning.
What this paper adds: This study was the first to examine social annotations through the lens of the community of inquiry framework. We investigated the relationships between students' cognitive and social presence in their annotations and the upvotes they received in an online social annotation environment. Our study revealed the strategies for obtaining upvotes from peers in social annotation‐based learning environments.
Implications for practice and/or policy: The high‐upvote group made significantly more annotations, posted more responses to others' annotations and displayed fewer negative emotions in annotations compared to the low‐upvote group. The two groups of students did not show significant differences in the upvotes they gave to others or in the cognitive activities and positive emotions involved in annotations. The number of annotations was the primary factor predicting the number of upvotes received in the collaborative reading. This study could inform the design of future online social annotation systems to better support collaborative learning and peer interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Assessing implicit computational thinking in game‐based learning: A logical puzzle game study.
- Author
-
Liu, Tongxi
- Subjects
- *
IMPLICIT learning , *SEQUENTIAL analysis , *LEARNING , *MACHINE learning , *RESEARCH personnel , *KNOWLEDGE acquisition (Expert systems) - Abstract
To date, extensive work has been devoted to incorporating computational thinking in K‐12 education. Recognizing students' computational thinking stages in game‐based learning environments is essential to capture unproductive learning and provide appropriate scaffolding. However, few reliable and valid computational thinking measures have been developed, especially in games, where computational knowledge acquisition and computational skill construction are implicit. This study introduced an innovative approach to explore students' implicit computational thinking through various explicit factors in game‐based learning, with a specific focus on Zoombinis, a logical puzzle‐based game designed to enhance students' computational thinking skills. Our results showed that factors such as duration, accuracy, number of actions and puzzle difficulty were significantly related to students' computational thinking stages, while gender and grade level were not. Besides, findings indicated gameplay performance has the potential to reveal students' computational thinking stages and skills. Effective performance (shorter duration, fewer actions and higher accuracy) indicated practical problem‐solving strategies and systematic computational thinking stages (eg, Algorithm Design). This work helps simplify the process of implicit computational thinking assessment in games by observing the explicit factors and gameplay performance. These insights will serve to enhance the application of gamification in K‐12 computational thinking education, offering a more efficient method to understanding and fostering students' computational thinking skills. 
Practitioner notes
What is already known about this topic: Game‐based learning is a pedagogical framework for developing computational thinking in K‐12 education. Computational thinking assessment in games faces difficulties because students' knowledge acquisition and skill construction are implicit. Qualitative methods have widely been used to measure students' computational thinking skills in game‐based learning environments.
What this paper adds: Categorize students' computational thinking experiences into distinct stages and analyse recurrent patterns employed at each stage through sequential analysis; this approach serves as inspiration for advancing the assessment of stage‐based implicit learning with machine learning methods. Gameplay performance and puzzle difficulty significantly relate to students' computational thinking skills; researchers and instructors can assess students' implicit computational thinking by observing their real‐time gameplay actions. High‐performing students can develop practical problem‐solving strategies and exhibit systematic computational thinking stages, while low‐performing students may need appropriate interventions to enhance their computational thinking practices.
Implications for practice and/or policy: Introduce a practical method with the potential for generalization across various game‐based learning environments to better understand learning processes by analysing significant correlations between certain gameplay variables and implicit learning stages. Allow unproductive learning detection and timely intervention by modelling the reflection of gameplay variables in students' implicit learning processes, helping improve knowledge mastery and skill construction in games. Further investigations on the causal relationship between gameplay performance and implicit learning skills, with careful consideration of more performance factors, are expected. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library