20 results
Search Results
2. A STATE-OF-THE-ART REVIEW OF THE BWM METHOD AND FUTURE RESEARCH AGENDA.
- Author
- ECER, Fatih
- Subjects
- *EVIDENCE gaps, *DATABASES, *COMPUTER science, *MULTIPLE criteria decision making, *BIBLIOMETRICS
- Abstract
Although the best-worst method (BWM) is a well-known multi-criteria decision-making (MCDM) method that has been successfully utilized in almost all scientific areas to solve challenging real-life problems, no research has comprehensively examined the state of the art in this regard. The superiority of BWM over other weighting methods for obtaining attribute weights is that it achieves high-confidence results with a reasonable number of pairwise comparisons. The present study depicts a detailed overview of publications concerned with BWM during the period 2015–2022. Based on information obtained from the Scopus database, this work presents a big picture of current research on BWM. In other words, this paper analyzes the existing literature on BWM and identifies thematic contexts, application areas, emerging trends, and remaining research gaps to shed light on future research agendas aligned with those gaps. Further, the most recent BWM research is analyzed in the top ten scientific areas, from engineering to materials science. “Engineering”, “computer science”, and “business, management, and accounting” are the hottest fields of BWM research. China is the most active country in “engineering” and “computer science”, whereas India is the leader in “business, management, and accounting”. The study also reveals that there are still many research gaps in BWM research. The big picture taken in this study will not only showcase the current situation of BWM research but will also positively impact the direction and quality of new research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Emerging opportunities of using large language models for translation between drug molecules and indications.
- Author
- Oniani, David, Hilsman, Jordan, Zang, Chengxi, Wang, Junmei, Cai, Lianjin, Zawala, Jan, and Wang, Yanshan
- Subjects
- *LANGUAGE models, *GENERATIVE artificial intelligence, *DRUG discovery, *MOLECULES, *EVIDENCE gaps
- Abstract
A drug molecule is a substance that changes an organism's mental or physical state. Every approved drug has an indication, which refers to the therapeutic use of that drug for treating a particular medical condition. While Large Language Models (LLMs), a generative Artificial Intelligence (AI) technique, have recently demonstrated effectiveness in translating between molecules and their textual descriptions, there remains a gap in research regarding their application to translation between drug molecules and indications (the disease, condition, or symptoms for which a drug is used), or vice versa. Addressing this challenge could greatly benefit the drug discovery process. The capability of generating a drug from a given indication would allow for the discovery of drugs targeting specific diseases or targets and would ultimately provide patients with better treatments. In this paper, we first propose a new task, the translation between drug molecules and corresponding indications, and then test existing LLMs on it. Specifically, we consider nine variations of the T5 LLM and evaluate them on two public datasets obtained from ChEMBL and DrugBank. Our experiments show the early results of using LLMs for this task and provide a perspective on the state of the art. We also emphasize the current limitations and discuss future work that has the potential to improve performance on this task. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field of drug discovery in the era of generative AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. REFORMS: Consensus-based Recommendations for Machine-learning-based Science.
- Author
- Kapoor, Sayash, Cantrell, Emily M., Kenny Peng, Thanh Hien Pham, Bail, Christopher A., Gundersen, Odd Erik, Hofman, Jake M., Hullman, Jessica, Lones, Michael A., Malik, Momin M., Nanayakkara, Priyanka, Poldrack, Russell A., Raji, Inioluwa Deborah, Roberts, Michael, Salganik, Matthew J., Serra-Garcia, Marta, Stewart, Brandon M., Vandewiele, Gilles, and Narayanan, Arvind
- Subjects
- *SCIENCE journalism, *REFORMS, *MEDICAL sciences, *RESEARCH personnel, *MAXIMA & minima, *DATA science, *COMPUTER science
- Abstract
Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear recommendations for conducting and reporting ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (recommendations for machine-learning-based science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed on the basis of a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Interpolation Once Binary Search over a Sorted List.
- Author
- Lin, Jun-Lin
- Subjects
- *INTERPOLATION, *TIME complexity, *COMPUTER science
- Abstract
Searching over a sorted list is a classical problem in computer science. Binary Search takes at most log₂ n + 1 tries to find an item in a sorted list of size n. Interpolation Search achieves an average time complexity of O(log log n) for uniformly distributed data. Hybrids of Binary Search and Interpolation Search are also available to handle data with unknown distributions. This paper analyzes the computation cost of these methods and shows that interpolation can significantly affect their performance; accordingly, a new method, Interpolation Once Binary Search (IOBS), is proposed. The experimental results show that IOBS outperforms the hybrids of Binary Search and Interpolation Search for nonuniformly distributed data. [ABSTRACT FROM AUTHOR]
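The abstract does not give the algorithm itself, but the method's name suggests its shape: one interpolation step to choose the initial probe, then plain Binary Search. The sketch below is a hedged reconstruction of that idea, not the author's reference implementation.

```python
def iobs(arr, target):
    """Illustrative sketch of "interpolation once" searching.

    A single interpolation step estimates the first probe position
    from the target value; the search then continues as ordinary
    binary search. The paper's exact algorithm may differ.
    """
    lo, hi = 0, len(arr) - 1
    if not arr or target < arr[lo] or target > arr[hi]:
        return -1
    # One interpolation step: estimate position from the value range.
    if arr[hi] != arr[lo]:
        mid = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
    else:
        mid = lo
    # Fall back to plain binary search from the interpolated probe.
    while lo <= hi:
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
        mid = (lo + hi) // 2
    return -1
```

For near-uniform data the first probe often lands on or near the target, so the subsequent binary search starts with a much smaller effective range.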
- Published
- 2024
- Full Text
- View/download PDF
6. South African research contributions to Lecture Notes in Computer Science, 1973-2022.
- Author
- Naudé, Filistéa and Kroeze, Jan H.
- Subjects
- *COMPUTER science, *ARTIFICIAL intelligence, *RESEARCH personnel, *AUTHORSHIP collaboration, *PERIODICAL articles
- Abstract
Lecture Notes in Computer Science (LNCS) is a globally recognised publication outlet for the field of Computer Science, including in South Africa. In this study, spanning from 1973 to 2022, we investigated the research participation of South African based authors in LNCS. The publication output and citation impact of these authors were compared to the global Computer Science and LNCS output. The authorship patterns and collaborative behaviour of South African LNCS papers were explored, and a keyword or topic analysis was also conducted. Of the total of 518 662 LNCS papers published globally between 1973 and 2022, South African based researchers contributed 1150 papers (0.22%). The LNCS papers from South Africa exhibit a strong collaborative publication culture, with 1043 (91%) co-authored and 107 (9%) single-authored works. Local LNCS researchers prefer institutional collaboration (43%), followed by international (37%) and national collaboration (11%). Europe emerged as the most significant collaboration partner for LNCS researchers in South Africa. Of the 1150 papers, 836 (73%) had received citations, while 314 (27%) had not. On average, papers published by South African based authors received 6.05 citations, compared to the global LNCS average of 9.49 citations per paper. A keyword analysis revealed that the majority of papers by South African authors focus on artificial intelligence. The results indicate that, although LNCS serves as a reputable dissemination platform for Computer Science research output both globally and locally, South African authors should consider publishing more journal articles to build and improve their researcher profiles. Significance:
* The study shows that LNCS is the most frequent publication outlet for Computer Science researchers, globally and in South Africa.
* The study offers insight into the publication output, authorship patterns, collaborative behaviour and citation impact of South African based Computer Science researchers.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. A Comprehensive Review of Behavior Change Techniques in Wearables and IoT: Implications for Health and Well-Being.
- Author
- Del-Valle-Soto, Carolina, López-Pimentel, Juan Carlos, Vázquez-Castillo, Javier, Nolazco-Flores, Juan Arturo, Velázquez, Ramiro, Varela-Aldás, José, and Visconti, Paolo
- Subjects
- *INTERNET of things, *DATABASES, *WELL-being, *COMPUTER science, *STATISTICAL measurement
- Abstract
This research paper delves into the effectiveness and impact of behavior change techniques fostered by information technologies, particularly wearables and Internet of Things (IoT) devices, within the realms of engineering and computer science. By conducting a comprehensive review of the relevant literature sourced from the Scopus database, this study aims to elucidate the mechanisms and strategies employed by these technologies to facilitate behavior change and their potential benefits to individuals and society. Through statistical measurements and related works, our work explores the trends over a span of two decades, from 2000 to 2023, to understand the evolving landscape of behavior change techniques in wearable and IoT technologies. A specific focus is placed on a case study examining the application of behavior change techniques (BCTs) for monitoring vital signs using wearables, underscoring the relevance and urgency of further investigation in this critical intersection of technology and human behavior. The findings shed light on the promising role of wearables and IoT devices in promoting positive behavior modifications and improving individuals' overall well-being, and highlight the need for continued research and development in this area to harness the full potential of technology for societal benefit. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A survey on IoT trust model frameworks.
- Author
- Ferraris, Davide, Fernandez-Gago, Carmen, Roman, Rodrigo, and Lopez, Javier
- Subjects
- *TRUST, *INTERNET of things, *COMPUTER science
- Abstract
Trust is a multidisciplinary concept: it is strongly related to context and falls within different fields such as Philosophy, Psychology, and Computer Science. Trust is fundamental in every relationship, because without it, an entity will not interact with other entities. This aspect is especially important in the Internet of Things (IoT), where many entities, produced by different vendors and created for different purposes, have to interact with each other over the internet, often under uncertainty. Trust can overcome this uncertainty, creating a strong basis to ease the process of interaction among these entities. We believe that considering trust in the IoT is fundamental, and in order to implement it in any IoT entity, it must be considered throughout the whole System Development Life Cycle. In this paper, we propose an analysis of different works that consider trust for the IoT. We focus especially on the analysis of frameworks that have been developed to include trust in the IoT. We classify them according to a set of parameters that we believe are fundamental for properly considering trust in the IoT. Thus, we identify important aspects to be taken into consideration when developing frameworks that implement trust in the IoT, finding gaps and proposing possible solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Bibliometric Review of the Ordered Weighted Averaging Operator.
- Author
- Figuerola-Wischke, Anton, Merigó, José M., Gil-Lafuente, Anna M., and Boria-Reverter, Josefa
- Subjects
- *BIBLIOMETRICS, *DATABASES, *COMPUTER science, *AGGREGATION operators, *DATA visualization, *SOFTWARE measurement
- Abstract
The ordered weighted averaging (OWA) operator was proposed by Yager back in 1988 and constitutes a parameterized family of aggregation functions between the minimum and the maximum. The purpose of this paper is to perform a bibliometric review of this aggregation operator during the last 35 years through the Web of Science (WoS) Core Collection database and the Visualization of Similarities (VOS) viewer software. The results show that the OWA operator is an increasingly popular aggregation operator, especially in Computer Science. The results also allow the assertion that Yager, as expected, is still the most influential and productive author. Moreover, the study reveals that institutions from over 80 countries have contributed to OWA research, highlighting the high presence of Chinese universities and the emergence of Pakistani ones. Other interesting findings are presented to provide a comprehensive and up-to-date analysis of the OWA operator literature. [ABSTRACT FROM AUTHOR]
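The OWA operator mentioned above has a compact standard definition: sort the arguments in descending order and take a weighted sum, with the weights attached to positions in the ordering rather than to particular arguments. A minimal sketch (the function name is ours):

```python
def owa(values, weights):
    """Ordered weighted averaging (Yager, 1988).

    Sorts the arguments in descending order, then takes the weighted
    sum: weights attach to positions in the ordering, not to the
    arguments themselves.
    """
    if len(values) != len(weights):
        raise ValueError("values and weights must have equal length")
    if abs(sum(weights) - 1.0) > 1e-9 or any(w < 0 for w in weights):
        raise ValueError("weights must be nonnegative and sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

vals = [3.0, 9.0, 6.0]
# owa(vals, [1, 0, 0]) -> 9.0 (maximum)
# owa(vals, [0, 0, 1]) -> 3.0 (minimum)
# owa(vals, [1/3, 1/3, 1/3]) -> the plain mean, 6.0 (up to float rounding)
```

Choosing weights between these extremes yields the parameterized family of aggregations between the minimum and the maximum that the abstract describes.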
- Published
- 2024
- Full Text
- View/download PDF
10. Multi-Dimensional Data Analysis Platform (MuDAP): A Cognitive Science Data Toolbox.
- Author
- Li, Xinlin, Wang, Yiming, Bi, Xiaoyu, Xu, Yalu, Ying, Haojiang, and Chen, Yiyang
- Subjects
- *COGNITIVE science, *DATA science, *ARTIFICIAL intelligence, *PRINCIPAL components analysis, *RESEARCH personnel
- Abstract
Researchers in cognitive science have long been interested in modeling human perception using statistical methods. This requires careful maneuvering because such multidimensional data are always intertwined with complex inner structures. Previous studies in the cognitive sciences commonly applied principal component analysis (PCA) to truncate data dimensions when dealing with multidimensional data. This is not necessarily because of its merit in terms of mathematical algorithm, but partly because it is easy to conduct with commonly accessible statistical software. On the other hand, dimension reduction might not be the best analysis when modeling data with no more than 20 dimensions. Using state-of-the-art techniques, researchers in various disciplines (e.g., computer vision) have classified data with hundreds of dimensions using neural networks and revealed the inner structure of the data. Therefore, it might be more appropriate to process human perception data directly with neural networks. In this paper, we introduce the multi-dimensional data analysis platform (MuDAP), a powerful toolbox for data analysis in cognitive science. It utilizes artificial intelligence as well as network analysis, an analysis method that takes advantage of data symmetry. With its graphical user interface, a researcher, with or without previous experience, can analyze multidimensional data with great ease. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Automated Quality Evaluation of Large-Scale Benchmark Datasets for Vision-Language Tasks.
- Author
- Zhao, Ruibin, Xie, Zhiwei, Zhuang, Yipeng, and L. H. Yu, Philip
- Subjects
- *COMPUTER science, *ARTIFICIAL intelligence, *DEEP learning, *UNIVERSITY research, *BENCHMARK problems (Computer science)
- Abstract
Large-scale benchmark datasets are crucial in advancing research within the computer science communities. They enable the development of more sophisticated AI models and serve as "golden" benchmarks for evaluating their performance. Thus, ensuring the quality of these datasets is of utmost importance for academic research and the progress of AI systems. For the emerging vision-language tasks, some datasets have been created and are frequently used, such as Flickr30k, COCO, and NoCaps, which typically contain a large number of images paired with their ground-truth textual descriptions. In this paper, an automatic method is proposed to assess the quality of large-scale benchmark datasets designed for vision-language tasks. In particular, a new cross-modal matching model is developed, which is capable of automatically scoring the textual descriptions of visual images. Subsequently, this model is employed to evaluate the quality of vision-language datasets by automatically assigning a score to each "ground-truth" description for every image. With good agreement between manual and automated scoring results on the datasets, our findings reveal significant disparities in the quality of the ground-truth descriptions included in the benchmark datasets. Even more surprising, it is evident that a small portion of the descriptions are unsuitable for serving as reliable ground-truth references. These discoveries emphasize the need for careful utilization of these publicly accessible benchmark datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Fractal feature selection model for enhancing high-dimensional biological problems.
- Author
- Alsaeedi, Ali Hakem, Al-Mahmood, Haider Hameed R., Alnaseri, Zainab Fahad, Aziz, Mohammad R., Al-Shammary, Dhiah, Ibaida, Ayman, and Ahmed, Khandakar
- Subjects
- *FEATURE selection, *ARTIFICIAL intelligence, *MACHINE learning, *STANDARD deviations, *COMPUTER science
- Abstract
The integration of biology, computer science, and statistics has given rise to the interdisciplinary field of bioinformatics, which aims to decode biological intricacies. It produces extensive and diverse features, presenting an enormous challenge in classifying bioinformatic problems. Therefore, an intelligent bioinformatics classification system must select the most relevant features to enhance machine learning performance. This paper proposes a feature selection model based on the fractal concept to improve the performance of intelligent systems in classifying high-dimensional biological problems. The proposed fractal feature selection (FFS) model divides features into blocks, measures the similarity between blocks using root mean square error (RMSE), and determines the importance of features based on low RMSE. The proposed FFS is tested and evaluated on ten high-dimensional bioinformatics datasets. The experimental results showed that the model significantly improved machine learning accuracy: the average accuracy rate was 79% with full features, while FFS delivered promising results with an accuracy rate of 94%. [ABSTRACT FROM AUTHOR]
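The block-and-RMSE mechanism the abstract describes can be sketched in plain Python. This is an illustrative reconstruction from the abstract only: the consecutive block layout, the use of the overall mean profile as the comparison reference, and the threshold rule are our assumptions, not the authors' published algorithm.

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def select_feature_blocks(X, block_size, threshold):
    """Sketch of block-wise feature selection (assumed variant of FFS).

    Splits the feature columns of X (rows = samples) into consecutive
    blocks, compares each block's per-sample mean profile to the mean
    profile over all features via RMSE, and keeps the column indices
    of blocks whose RMSE falls below the threshold.
    """
    n_cols = len(X[0])
    overall = [sum(row) / n_cols for row in X]  # mean over all features
    kept = []
    for start in range(0, n_cols, block_size):
        cols = list(range(start, min(start + block_size, n_cols)))
        profile = [sum(row[c] for c in cols) / len(cols) for row in X]
        if rmse(profile, overall) <= threshold:
            kept.extend(cols)
    return kept
```

The selected column indices would then feed a downstream classifier in place of the full feature set.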
- Published
- 2024
- Full Text
- View/download PDF
13. Progressive Collapse Analysis of the Champlain Towers South in Surfside, Florida.
- Author
- Pellecchia, Cosimo, Cardoni, Alessandro, Cimellaro, Gian Paolo, Domaneschi, Marco, Ansari, Farhad, and Khalil, Ahmed Amir
- Subjects
- *BUILDING failures, *PROGRESSIVE collapse, *STRUCTURAL failures, *COLUMNS, *CIVIL engineering, *COMPUTER science
- Abstract
Since the Ronan Point collapse in the UK in 1968, the progressive collapse analysis of residential buildings has gradually drawn the attention of civil engineers and the scientific community. Recent advances in computer science and the development of new numerical methodologies allow us to perform high-fidelity collapse simulations. This paper assesses different scenarios that could have hypothetically caused the collapse of the Champlain Tower South Condo in Surfside, Florida, in 2021, one of the most catastrophic progressive collapse events that has ever occurred. The collapse analysis was performed using the latest developments in the Applied Element Method (AEM). A high-fidelity numerical model of the building was developed according to the actual structural drawings. Several different collapse hypotheses were examined, considering both column failures and degradation scenarios. The analyses showed that the failure of deep beams at the pool deck level, directly connected to the perimeter columns of the building, could have led to the columns' failure and the subsequent collapse of the eastern wing of the building. The simulated scenario highlights the different stages of the collapse sequence and appears to be consistent with what can be observed in the footage of the actual collapse. To improve the performance of the structure against progressive collapse, two modifications to the original design of the building were introduced. From the analyses, it was found that disconnecting the pool deck beam from the perimeter columns could have been effective in preventing the local collapse of the pool deck slab from propagating to the rest of the building. Moreover, these analyses indicate that enhancing the torsional strength and stiffness of the core could have prevented the collapse of the eastern part of the building, given the assumptions and initiation scenarios considered. Catastrophic building collapses can cause significant loss of life and economic damage.
Poor design and maintenance, combined with aging, are likely to increase the number of buildings vulnerable to collapse in the coming years, whether due to seismic, accidental, or degradation actions. This research focuses on the analysis of the Champlain Tower South condo collapse, which occurred in Surfside, Florida, in 2021. Different hypothetical collapse scenarios were simulated, comparing the analysis results with the actual evidence of the collapse. The analyses have shown that the degradation of the pool deck slab, due to corrosion, may have contributed to the collapse of the building. Finally, two different minor revisions of the original design of the building were analyzed to reduce the risk of failure and understand how the collapse of similar residential buildings could be prevented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Exploring the evolution of machine scheduling through a computational approach.
- Author
- Yazdani, Maziar and Haghani, Milad
- Subjects
- *OPERATIONS research, *SCHEDULING, *MACHINERY, *COMPUTER science, *FLOW shops
- Abstract
Since 2000, the field of machine scheduling—an integral part of computer science and operations research—has seen significant advancements. This paper explores the dynamic progression of machine scheduling, offering a detailed overview of its past advancements, current practices, and future directions. Anchoring the research in robust data analysis and statistical methodologies, the paper reveals the subtle yet impactful changes that have characterized the field in the last two decades. It examines the prominence of various scheduling problems, identifies leading research journals, and highlights international contributions and collaborations, thereby offering a thorough guide to the machine scheduling ecosystem. The study delves into specific problem characteristics and assesses performance criteria and solution methods to provide an in-depth view of the field's multifaceted nature. Ultimately, this paper captures the essence of machine scheduling's evolution and suggests new paths for exploration. The insights gained contribute significantly to academic discussions and equip practitioners with a comprehensive understanding of the dynamic landscape of machine scheduling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Special Issue on Control and Applications of Multi-Agent Systems.
- Author
- Osuka, Koichi, Tsunoda, Yusuke, Imahayashi, Wataru, and Aotani, Takumi
- Subjects
- *MULTIAGENT systems, *COMPUTER science, *SOCIAL systems, *MACHINE learning, *GRAPH theory, *REINFORCEMENT learning
- Abstract
"Multi-agent systems (MAS)" have been extensively studied across various fields, including robotics, economics, biology, and computer science. A distinctive feature of these systems is the ability of multiple agents, each with different characteristics, to perform system-wide tasks through local bottom-up interactions. Furthermore, design and control methods for system networks based on graph theory are being developed. Recent applications of these methods include autonomous driving technology, smart grids, and understanding social systems. This special issue aims to deepen the understanding of MAS, focusing on their control and applications. It features 16 papers, including one review paper. The accepted papers cover a wide range of topics, including reinforcement learning, autonomous mobility systems, and machine learning, presenting the latest research findings on MAS. These studies provide valuable insights into various aspects and potential applications of MAS. We hope that this issue will be beneficial to our readers and contribute to the advancement of future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. ArZiGo: A recommendation system for scientific articles.
- Author
- Pinedo, Iratxe, Larrañaga, Mikel, and Arruarte, Ana
- Subjects
- *RECOMMENDER systems, *HYBRID systems, *OPEN scholarship, *COMPUTER science, *INTERNET searching
- Abstract
The large number of scientific publications around the world is increasing at a rate of approximately 4%–5% per year. This fact has resulted in the need for tools that deal with relevant and high-quality publications. To address this necessity, search and reference management tools that include some recommendation algorithms have been developed. However, many of these solutions are proprietary tools, and the full potential of recommender systems is rarely exploited. Some solutions provide recommendations for specific domains by using ad-hoc resources, and some other systems do not consider any personalization strategy to generate the recommendations. This paper presents ArZiGo, a web-based full prototype system for the search, management, and recommendation of scientific articles, which feeds on the Semantic Scholar Open Research Corpus, a corpus that is growing continually, with more than 190M papers from all fields of science so far. ArZiGo combines different recommendation approaches within a hybrid system, in a configurable way, to recommend those papers that best suit the preferences of the users. A group of 30 human experts participated in the evaluation of 500 recommendations in 10 research areas, 7 of which belong to the area of Computer Science and 3 to the area of Medicine, obtaining quite satisfactory results. Besides the appropriateness of the articles recommended, the execution time of the implemented algorithms has also been analyzed.
• A web system for the search, management, and recommendation of scientific articles.
• A hybrid and multidisciplinary scientific article recommendation system.
• A modular and scalable recommendation system.
• Use of the Semantic Scholar Open Research Corpus as a corpus of articles.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Cube query interestingness: Novelty, relevance, peculiarity and surprise.
- Author
- Gkitsakis, Dimos, Kaloudis, Spyridon, Mouselli, Eirini, Peralta, Veronika, Marcel, Patrick, and Vassiliadis, Panos
- Subjects
- *MULTIDIMENSIONAL databases, *HUMAN behavior, *COMPUTER science, *HUMAN experimentation, *ALGORITHMS
- Abstract
In this paper, we discuss methods to assess the interestingness of a query in an environment of data cubes. We assume a hierarchical multidimensional database, storing data cubes and level hierarchies. We start with a comprehensive review of related work in the fields of human behavior studies and computer science. We define the interestingness of a query as a vector of scores along different aspects, like novelty, relevance, surprise and peculiarity and complement this definition with a taxonomy of the information that can be used to assess each of these aspects of interestingness. We provide both syntactic (result-independent) and extensional (result-dependent) checks, measures and algorithms for assessing the different aspects of interestingness in a quantitative fashion. We also report our findings from a user study that we conducted, analyzing the significance of each aspect, its evolution over time and the behavior of the study's participants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. A data-driven analysis to discover research hotspots and trends of technologies for PFAS removal.
- Author
- Fang, Xiaoya, Jin, Lili, Sun, Xiangzhou, Huang, Hui, Wang, Yanru, and Ren, Hongqiang
- Subjects
- *FLUOROALKYL compounds, *TECHNOLOGICAL innovations, *CONSTRUCTED wetlands, *METAL-organic frameworks, *COMPUTER science, *MOLECULAR switches
- Abstract
The frequent detection of persistent per- and polyfluoroalkyl substances (PFAS) in organisms and the environment, coupled with surging evidence of potential detrimental impacts, has attracted widespread attention throughout the world. In order to reveal research hotspots and trends of technologies for PFAS removal, herein, we performed a data-driven analysis of 3975 papers and 436 patents from the Web of Science Core Collection and Derwent Innovation Index databases up to 2023. The results showed that China and the USA led the way in research on PFAS removal, with outstanding contributions to publications. The progression generally transitioned from accidental discovery of decomposition, to experimentation with the removal effects and mechanisms of existing methods, and finally to enhanced defluorination and mechanism-driven design approaches. The keywords co-occurrence network and technology classification together revealed the main knowledge framework, which was constructed and correlated through contaminants, substrates, materials, processes and properties. Moreover, adsorption was demonstrated to be the dominant removal process among the current studies. Subsequently, we summarized the principles, advances and drawbacks of enrichment and separation, biological methods, and advanced oxidation and reduction processes. Further exploration indicated hotspots such as alternatives and precursors for PFAS ("genx": 1.258, "f-53b": 0.337), degradable mineralization technologies ("photocatalytic degrad": 0.529, "hydrated electron": 0.374), environment-friendly remediation technologies ("phytoremedi": 0.939, "constructed wetland": 0.462) and combination with novel materials ("metal-organic framework": 1.115, "layered double hydroxid": 0.559) as well as computer science ("molecular dynamics simul": 0.559, "machine learn").
Furthermore, the future direction of technological innovation might lie in high-performance processes that minimize secondary pollution, the development of recyclable and renewable treatment agents, and collaborative control strategies for multiple pollutants. Overall, this study offers a comprehensive and objective review for researchers and industry professionals in this field, enabling rapid access to knowledge guidance and insights into research frontiers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. A Transformer based approach to electricity load forecasting.
- Author
- Chan, Jun Wei and Yeo, Chai Kiat
- Subjects
- *TRANSFORMER models, *RECURRENT neural networks, *DEEP learning, *CONVOLUTIONAL neural networks, *ELECTRICITY, *NATURAL language processing, *FORECASTING, *PHOTOVOLTAIC power generation
- Abstract
In natural language processing (NLP), transformer-based models have surpassed recurrent neural networks (RNNs) as the state of the art, having been introduced specifically to address the limitations arising from RNNs' sequential nature. Since time series prediction is a similar sequence modeling problem, transformer methods can be readily adapted to it with deep learning. This paper proposes a sparse-transformer-based approach for electricity load prediction. The layers of a transformer address the shortcomings of RNNs and CNNs by applying the attention mechanism to the entire time series, allowing any data point in the input to influence any location in the layer's output. This allows transformers to incorporate information from the entire sequence in a single layer. Attention computations can also be parallelized. Thus, transformers can achieve faster speeds, or trade this speed for more layers and increased complexity. In experiments on public datasets, the sparse transformer attained accuracy comparable to an RNN-based state-of-the-art method (Liu et al., 2022) while being up to 5× faster during inference. Moreover, the proposed model is general enough to forecast load from individual households to city levels, as shown in the extensive experiments conducted. [ABSTRACT FROM AUTHOR]
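The attention computation this abstract refers to, in which any input position can influence any output position within a single layer, can be sketched in a few lines of plain Python. This is generic scaled dot-product attention, not the paper's sparse variant; the function name and the list-based matrix representation are illustrative assumptions.

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention on plain nested lists.

    Each output row is a weighted average of the rows of V, with
    weights given by a softmax over the query-key dot products.
    Every input position can therefore contribute to every output
    position in one layer.
    """
    d = len(K[0])  # key dimension, used to scale the dot products
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        m = max(scores)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]  # softmax weights, sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because each query's scores are independent, the loop over queries (and in practice the whole score matrix) can be computed in parallel, which is the speed advantage over sequential RNN updates that the abstract mentions.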
- Published
- 2024
- Full Text
- View/download PDF
20. A survey of approaches for event sequence analysis and visualization.
- Author
- Yeshchenko, Anton and Mendling, Jan
- Subjects
- *DATA visualization, *SEQUENCE analysis, *PROCESS mining, *COMPUTER science, *TRANSACTION records
- Abstract
Event sequence data is increasingly available. Many business operations are supported by information systems that record transactions, events, state changes, message exchanges, and similar elements. This observation also applies to various industries, including production, logistics, healthcare, financial services, and education. The variety of application areas explains why techniques for event sequence data analysis have been developed rather independently in different fields of computer science. Most prominent are contributions from information visualization and from process mining. So far, the contributions from these two fields have neither been compared nor mapped to an integrated framework. Such opacity is problematic since it bears the risk that opportunities for integration are missed and that concepts established in one field are independently reinvented in the other. In this paper, we develop the Event Sequence Visualization framework (ESeVis) that gives due credit to the traditions of both fields. Our mapping study provides an integrated perspective on both fields and identifies potential synergies for future research.
• We propose the ESeVis framework to categorize visualizations of event sequence data.
• We review the literature on event sequence data visualizations.
• Instance representations of event sequences come mostly from information visualization.
• Process mining focuses on generating formal models.
• There is potential for synergy between process mining and information visualization.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF