113,559 results for "Electronic data processing"
Search Results
2. Enhancing ferroelectric characterization at nanoscale: A comprehensive approach for data processing in spectroscopic piezoresponse force microscopy.
- Author
Valloire, H., Quéméré, P., Vaxelaire, N., Kuentz, H., Le Rhun, G., and Borowik, Ł.
- Subjects
*PIEZORESPONSE force microscopy, *NANOMECHANICS, *ELECTRONIC data processing, *FERROELECTRIC thin films, *POTASSIUM niobate, *MACHINE learning, *HYSTERESIS loop
- Abstract
Switching Spectroscopy Piezoresponse Force Microscopy (SSPFM) stands out as a powerful method for probing ferroelectric properties within materials subjected to incremental polarization induced by an external electric field. However, the dense data processing linked to this technique is a critical factor influencing the quality of obtained results. Furthermore, meticulous exploration of various artifacts, such as electrostatics, which may considerably influence the signal, is a key factor in obtaining quantitative results. In this paper, we present a global methodology for SSPFM data processing, available as an open-source, user-friendly Python application called PySSPFM. A ferroelectric thin-film sample of potassium sodium niobate has been probed to illustrate the different aspects of our methodology. Our approach enables the reconstruction of hysteresis nano-loops by determining the piezoresponse (PR) as a function of the applied electric field. These hysteresis loops are then fitted to extract characteristic parameters that serve as measures of the ferroelectric properties of the sample. Various artifact decorrelation methods are employed to enhance measurement accuracy, and additional material properties can be assessed. Performing this procedure on a grid of points across the surface of the sample enables the creation of spatial maps. Furthermore, different techniques have been proposed to facilitate post-treatment analysis, incorporating algorithms for machine learning (K-means), phase separation, and mapping cross-correlation, among others. Additionally, PySSPFM enables a more in-depth investigation of the material by studying the nanomechanical properties during poling, through the measurement of the resonance properties of the cantilever–tip–sample surface system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
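A minimal sketch of the hysteresis-fitting step described in the abstract above, assuming a tanh branch model; the model and all parameter names are illustrative, not the actual PySSPFM implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def pr_branch(e, pr_sat, e_c, width, offset):
    """One hysteresis branch: saturated PR, coercive field, slope width, offset."""
    return pr_sat * np.tanh((e - e_c) / width) + offset

# Synthetic "measured" branch: sweep of applied field (arbitrary units)
e_field = np.linspace(-10, 10, 200)
rng = np.random.default_rng(0)
pr_meas = pr_branch(e_field, 1.0, 2.0, 1.5, 0.1) + 0.05 * rng.normal(size=e_field.size)

popt, pcov = curve_fit(pr_branch, e_field, pr_meas, p0=[1.0, 0.0, 1.0, 0.0])
pr_sat, e_c, width, offset = popt
print(f"saturated PR={pr_sat:.3f}, coercive field={e_c:.3f}, "
      f"width={width:.3f}, vertical offset={offset:.3f}")
```

Repeating such a fit on every grid point is what yields the spatial maps of ferroelectric parameters mentioned in the abstract.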
3. Unmanned Aerial Vehicle (UAV) path planning and control assisted by Augmented Reality (AR): the case of indoor drones.
- Author
Mourtzis, Dimitris, Angelopoulos, John, and Panopoulos, Nikos
- Subjects
DRONE aircraft, AUGMENTED reality, ENGINEERING design, INDUSTRY 4.0, ELECTRONIC data processing
- Abstract
Following the recent advances in Industry 4.0 and the upcoming Industry 5.0, the use of multiple UAVs for indoor tasks has risen, particularly in real-time remote monitoring, wireless coverage, and remote sensing. As a result, UAVs can be viewed as proactive problem solvers and can support Internet of Things (IoT) platforms by collecting and monitoring data cost-effectively and efficiently, leading to better decision-making. Moreover, sophisticated drone operations require specialised software and data processing abilities. However, the utilisation of drones has been mainly focused on outdoor environments, thus creating a literature gap regarding indoor navigation and operation. Therefore, the design and development of a method for the remote planning and control of drones based on the utilisation of AR is presented in this paper. The proposed method is based on the utilisation of drones for remote monitoring. The suggested approach involves engineers designing a sequence of actions and transmitting them wirelessly to the drone, eliminating the need for human intervention. Thus, the proposed method enables engineers to visualise the drone path with the use of Augmented Reality and provides the flexibility of adding multiple waypoints. The applicability of the developed framework is tested in a laboratory-based machine shop. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Deep learning-based data processing method for transient thermoreflectance measurements.
- Author
Mao, Yali, Zhou, Shaojie, Tang, Weiyuan, Wu, Mei, Zhang, Haochen, Sun, Haiding, and Yuan, Chao
- Subjects
*DEEP learning, *ELECTRONIC data processing, *THERMOPHYSICAL properties, *GLOBAL optimization
- Abstract
Pump–probe thermoreflectance has been commonly applied for characterizing the thermal properties of materials. Generally, a reliable and efficient non-linear fitting process is implemented to extract unknown thermal parameters during pump–probe thermoreflectance characterizations. However, when it comes to processing large amounts of data acquired from structurally similar samples, the non-linear fitting process becomes very time-consuming and labor-intensive, as the best fit must be searched for every test curve. Herein, we propose to apply a deep learning (DL) approach to the nanosecond transient thermoreflectance technique for high-throughput experimental data processing. We first investigated the effect of training-set parameters (density and bounds) on the predictive performance of the DL model, providing guidance for optimizing the DL model. The DL model was then further verified in measurements of bulk sapphire, SiC, and diamond samples and of GaN-based multilayer structures, demonstrating its capability of analyzing the results with high accuracy. Compared to the conventional non-linear fitting method (such as global optimization), the computation time of the new model is 1000 times lower. Such a data-driven DL model enables faster inference and stronger fitting capabilities and is particularly efficient and effective in processing data acquired from wafer-level measurements of similar material structures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
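A toy sketch of the approach described above: train a network on simulated curves so that per-curve fitting is replaced by a single forward pass. The single-exponential "thermal decay" and the parameter bounds are stand-ins for a real thermoreflectance model, not the authors' setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0, 1e-6, 128)                 # nanosecond-scale time axis (s)

def simulate(tau):
    """Toy normalized thermoreflectance decay for thermal time constant tau."""
    return np.exp(-t / tau)

# Training set: curves sampled within assumed parameter bounds (the density
# and bounds of this set are exactly the knobs the paper investigates).
taus = rng.uniform(5e-8, 5e-7, size=5000)
X = np.stack([simulate(tau) for tau in taus])
X += 0.01 * rng.normal(size=X.shape)          # measurement noise

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, taus * 1e7)                      # scale target for stable training

tau_true = 2e-7
curve = simulate(tau_true)[None, :] + 0.01 * rng.normal(size=t.size)
pred = model.predict(curve) / 1e7
print(f"true tau={tau_true:.2e} s, predicted tau={pred[0]:.2e} s")
```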
5. Co-Developing Programs and Their Proof of Correctness.
- Author
Chapman, Roderick, Dross, Claire, Matthews, Stuart, and Moy, Yannick
- Subjects
*SPARK (Computer program language), *PROGRAMMING languages, *ELECTRONIC data processing, *COMPUTER programming, *COMPUTER software development, *COMPUTER software correctness
- Abstract
The article focuses on the auto-active approach for co-developing programs and their proof of correctness, specifically the open source SPARK technology. The authors discuss the key design and technological choices for SPARK, which made it successful within the industry, and explore the possible future of SPARK and other analyzers of the same family.
- Published
- 2024
- Full Text
- View/download PDF
6. Empirical issues concerning studies of firm entry.
- Author
Coad, Alex, Kato, Masatoshi, and Srhoj, Stjepan
- Subjects
VALUE (Economics), ENDOWMENTS, ELECTRONIC data processing
- Abstract
We discuss how entry can be considered at various levels of analysis: the entrepreneur level, the firm level, and higher levels of aggregation such as the industry and country levels. We also formulate a list of six challenges for econometric studies of firm entry, highlighting the data sources, typical empirical setups, potential sources of bias, and appropriate econometric techniques. While progress can be made with sophisticated econometric estimators, a pressing need for entry studies concerns detailed data on the gestation process, entry modes, and the value of resource endowments and knowledge endowments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Trends in Data Curation: Competencies and Tools.
- Author
Murillo, Angela P. and Yoon, Ayoung
- Subjects
*DATA curation, *INFORMATION science, *LIBRARY science, *TEACHING aids, *ELECTRONIC data processing
- Abstract
Library and information science has led data curation research and education for the last two decades, providing data curation education, professional development programs, and robust professional opportunities. To keep current with the latest trends in the competencies and tools needed to conduct data curation activities, this research conducted a systematic review of the literature capturing those competencies and tools, in order to build a framework of current trends in data curation work that educators can utilize to ensure up-to-date educational materials for the next generation of data curation professionals. This poster presents the preliminary findings of this analysis of data curation competencies and tools. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Thematic Trends in Data Curation Literature.
- Author
Murillo, Angela P. and Yoon, Ayoung
- Subjects
*DATA curation, *TECHNOLOGICAL innovations, *BIBLIOMETRICS, *DATA analysis, *ELECTRONIC data processing
- Abstract
The field of data curation is rapidly changing due to new developments in technologies and techniques for conducting data work. As the field of data curation evolves, researchers, practitioners, and educators need to be able to respond to these developments. One way to understand trends in a field is by examining published literature. This study first gathered data curation literature through a modified systematic literature review with the framing question, 'What competencies, skill sets, and proficiencies are needed to conduct data curation activities?'. This literature was then analyzed using bibliometric analysis, visual analysis of the citation data, and topic modeling to understand trends in the data curation field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Investigating Co‐Authorship Networks of Academic and Industry Researchers in Artificial Intelligence.
- Author
Liang, Lizhen
- Subjects
*ARTIFICIAL intelligence, *AUTHORSHIP collaboration, *ELECTRONIC data processing, *SOCIAL capital, *EIGENVECTORS, *SOCIAL network analysis
- Abstract
Research teams from industry, especially big technology companies, have been pushing impactful research work in the field of artificial intelligence (AI), changing the prospects of the field and the careers of many researchers. Research teams from big technology companies usually possess more data, bigger computing infrastructure, and more research talent, granting them advantages in advancing AI research. While most previous work focuses on investigating the advantages industry has in the field of AI, and on how industry research publications differ from those published by academic teams, little research has been done to investigate whether working as an industry researcher is beneficial at the individual level. In this work, by analyzing co-authorship networks of researchers published in AI conferences, we investigate whether working in industry gives researchers advantages in "intangible" forms, such as social capital, represented by the collaborative relationships they gained or maintained. Our results show that the many advantages industry researchers possess correlate with their social capital, measured by degree centrality, eigenvector centrality, betweenness centrality, and effective size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
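A minimal sketch of the four social-capital measures named in the abstract above, computed with networkx on an invented toy co-authorship graph:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ind_A", "acad_B"), ("ind_A", "acad_C"), ("ind_A", "ind_D"),
    ("acad_B", "acad_C"), ("acad_C", "acad_E"), ("ind_D", "acad_E"),
])

degree = nx.degree_centrality(G)
eigen = nx.eigenvector_centrality(G, max_iter=1000)
between = nx.betweenness_centrality(G)
eff_size = nx.effective_size(G)   # Burt's structural-holes measure

for node in sorted(G):
    print(f"{node}: degree={degree[node]:.2f}, eigenvector={eigen[node]:.2f}, "
          f"betweenness={between[node]:.2f}, effective size={eff_size[node]:.2f}")
```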
10. Discursive AI Infrastructures: Envisioned and Overlooked Museum Futures.
- Author
Kist, Cassandra
- Subjects
*ARTIFICIAL intelligence, *MUSEUMS, *CRITICAL thinking, *ELECTRONIC data processing, *DISCOURSE
- Abstract
Prompted by recent innovations, artificial intelligence (AI) is increasingly being discussed across the museum sector regarding its implications for institutional roles and practices. However, AI is a particularly ambiguous term, a 'black box' capable of containing and reflecting numerous values and ideals (Crawford, 2021). This paper positions discourse around AI as a 'discursive' infrastructure, capable not only of embodying ideals but also of shaping and justifying certain institutional practices and roles. This paper thematically analyses 115 pieces of grey literature produced and shared by professional governance bodies in the museum sector from 1995–2023, mainly across Canada, the United Kingdom, and the United States. In doing so, it identifies four preliminary themes encompassing shifts in discourse over time which give shape to a contemporary discursive infrastructure. This prompts timely critical reflections by museum professionals and stakeholders on both imagined and overlooked public roles, responsibilities, and practices of the museum in relation to AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Exploratory Search in Digital Humanities: A Study of Visual Keyword/Result Linking.
- Author
Hoeber, Orland, Harvey, Morgan, Momeni, Milad, Pirmoradi, Abbas, and Gleeson, David
- Subjects
*DIGITAL humanities, *DIGITAL libraries, *INFORMATION retrieval, *ACADEMIC libraries, *ELECTRONIC data processing
- Abstract
While searching within digital humanities collections is an important aspect of digital humanities research, the search features provided are usually more suited to lookup search than exploratory search. This limits the ability of digital humanities scholars to undertake complex search scenarios. Drawing upon recent studies on supporting exploratory search in academic digital libraries, we implemented two visual keyword/result linking approaches for searching within the Europeana collection; one that keeps the keywords linked to the search results and another that aggregates the keywords over the search result set. Using a controlled laboratory study, we assessed these approaches in comparison to the existing Europeana search mechanisms. We found that both visual keyword/result linking approaches were improvements over the baseline, with some differences between the new approaches that were dependent on the stage of the exploratory search process. This work illustrates the value of providing advanced search functionality within digital humanities collections to support exploratory search processes, and the need for further design and study of digital humanities search tools that support complex search scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. TRANSPARENCY AND CONSENT AS THE MAIN PRINCIPLES OF PERSONAL DATA PROTECTION IN UKRAINE.
- Author
Holovatskyi, N. T.
- Subjects
GENERAL Data Protection Regulation, 2016; DATA protection; PERSONALLY identifiable information; ELECTRONIC data processing; TRUST; INFORMATION resources management
- Abstract
The rapid digitization of the modern world has led to an increase in the collection and processing of personal data. The principles of transparency and consent have become crucial for citizens to understand how their data is being used and to be able to give their voluntary consent to its processing. Ukraine is actively adapting its legislation to EU standards in the field of personal data protection. In particular, the General Data Protection Regulation (GDPR) affects approaches to data collection and processing. Examining transparency and consent in the context of these changes can help identify which aspects of the law need special attention. Citizens are becoming increasingly cautious about who they trust with their data. This article analyzes and evaluates the role and impact of the principles of transparency and consent on the effective mechanism of personal data protection in Ukraine, taking into account modern technological and legislative challenges. Studying these principles demonstrates their importance and relevance in today’s digital society. The principle of transparency determines the need for accessible and understandable information for data subjects regarding the processing of their data. Openness is a fundamental requirement for maintaining trust between organizations and citizens. Providing the ability to control and regulate their data allows data subjects to play a more active role in the processing and use of their personal data. The principle of consent, in turn, emphasizes the voluntary and informed nature of consent to data processing. The consent of the data subject determines the basis for the legal processing of personal data, ensuring harmony between the rights of citizens and the interests of organizations. As technology advances and legislation changes, transparency and consent become even more important aspects. They help build trust between organizations and citizens, ensure responsible use of personal data, and empower individuals to control their own information. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Automatic seismic first‐break picking based on multi‐view feature fusion network.
- Author
Wu, Yinghe, Pan, Shulin, Lan, Haiqiang, Badal, José, Wei, Ze, and Chen, Yaojie
- Subjects
*ARTIFICIAL intelligence, *WORK design, *ELECTRONIC data processing, *GENERALIZATION, *ALGORITHMS
- Abstract
Automatic first-break picking is a basic step in seismic data processing, so much so that the quality of the picking largely determines the effect of subsequent processing. To a certain extent, artificial intelligence technology has solved the shortcomings of traditional first-break picking algorithms, such as poor applicability and low efficiency. However, some problems still remain for seismic data with a low signal-to-noise ratio and large first-break changes, leading to inaccurate picking and poor generalization of the network. In order to improve the accuracy of automatic first-break picking on such seismic data, we propose a multi-view automatic first-break picking method driven by multiple networks. First, we analysed the single-trace boundary characteristics and the two-dimensional boundary characteristics of the first break. Based on these two characteristics, we used a Long Short-Term Memory (LSTM) network and a residual attention gate UNet network to extract the characteristics of the first arrival and its location from the seismic data, respectively. Then, we introduced the idea of multi-network learning into the first-break picking task and designed a feature fusion network. Finally, the multi-view first-break features extracted by the LSTM and residual attention gate UNet networks are fused, which effectively improves the picking accuracy. The results obtained after applying the method to field seismic data show that the accuracy of the first break detected by the feature fusion network is higher than that given by either of the above networks alone, with good applicability and resistance to noise. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
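A schematic PyTorch sketch of the multi-view fusion idea described above: one branch reads each trace as a sequence (LSTM), another reads the 2D gather (a small conv encoder standing in for the residual attention gate UNet), and a fusion head merges both into a per-sample first-break probability. Layer sizes and the encoder are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class MultiViewPicker(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True,
                            bidirectional=True)                    # single-trace view
        self.unet_stub = nn.Sequential(                            # 2D gather view
            nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Conv1d(2 * hidden + hidden, 1, kernel_size=1)

    def forward(self, gather):                 # gather: (batch, n_traces, n_samples)
        b, n, t = gather.shape
        seq_feat, _ = self.lstm(gather.reshape(b * n, t, 1))       # (b*n, t, 2h)
        seq_feat = seq_feat.reshape(b, n, t, -1)
        img_feat = self.unet_stub(gather.unsqueeze(1))             # (b, h, n, t)
        img_feat = img_feat.permute(0, 2, 3, 1)                    # (b, n, t, h)
        fused = torch.cat([seq_feat, img_feat], dim=-1)            # (b, n, t, 3h)
        fused = fused.reshape(b * n, t, -1).permute(0, 2, 1)       # (b*n, 3h, t)
        logits = self.fuse(fused).squeeze(1).reshape(b, n, t)
        return torch.sigmoid(logits)           # per-sample first-break probability

probs = MultiViewPicker()(torch.randn(2, 16, 256))
print(probs.shape)                             # torch.Size([2, 16, 256])
```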
14. Self-supervised modal optimization transformer for image captioning.
- Author
Wang, Ye, Li, Daitianxia, Liu, Qun, Liu, Li, and Wang, Guoyin
- Subjects
*ELECTRONIC data processing, *GENERALIZATION
- Abstract
In multimodal data processing of image captioning, data from different modalities usually exhibit distinct feature distributions. The gap in unimodal representation makes capturing cross-modal mappings in multimodal learning challenging. Current image captioning models transform images into captions directly. However, this approach results in large data requirements and limited performance on small quantities of multimodal data. In this paper, we introduce a novel self-supervised modal optimization transformer (SMOT) for image captioning. Specifically, we leverage self-supervised learning to propose a cross-modal feature optimizer. This optimizer aims to optimize the distribution of semantic information in images by leveraging raw images and their corresponding paired captions, ultimately approaching the semantic object of the caption. The optimized image features inherit information from both modalities, reducing the disparity in feature distribution between modalities and decreasing reliance on extensive training data. Furthermore, we fuse the features with image grid features and text features, using their complementary information to bridge the differences between features, providing more comprehensive semantic guidance for image captioning. Experimental results demonstrate that our proposed SMOT outperforms state-of-the-art models when trained on limited data, showing efficient learning and good generalization capabilities on small training datasets. Additionally, it also exhibits competitive performance on the MSCOCO dataset, further highlighting its efficacy and potential in the field of image captioning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Exploitation of healthcare IoT–fog-based smart e-health gateways: a resource optimization approach.
- Author
Wen, Bo, Li, Shanzhi, and Motevalli, Hooman
- Subjects
*PARTICLE swarm optimization, *INTERNET of things, *ELECTRONIC data processing, *ENERGY consumption, *RESOURCE allocation
- Abstract
In the domains of health and medicine, current technology facilitates the quicker identification of effective solutions. Smart electronic health networks based on IoT–fog are one of these technologies. They combine the Internet of Things with computing in a fog environment to enable fast and accurate health data processing, transfer, and collection from patient devices and sensors for caregivers. To reduce the fog computing burden and enhance resource allocation, the concept of combining fog computing with the Internet of Things (IoT) has been put forward. This research provides a novel method that applies inertia-weighted multi-objective particle swarm optimization to optimize simulated e-health smart networks; the term "IoT–fog SEH" (Smart E-Health) refers to this technique. In IoT–fog SEH, the inertia weight makes it easier to adjust the dimensions of the search space and converge on the best solution. The IoT–fog SEH approach is compared with the Cloud-HMS, Throttled, and HGWDE algorithms. In terms of reaction time, IoT–fog SEH beats the Cloud-HMS, Throttled, and HGWDE algorithms, with improvements of 52.86, 81.02, and 80.44 ms, respectively. In processing time, IoT–fog SEH beats Cloud-HMS, Throttled, and HGWDE by 51.87, 80.12, and 79.64 ms, respectively. The HGWDE algorithm performs better in terms of cost efficiency than the IoT–fog SEH method, although the difference between the two approaches is not statistically significant. The investigated approach was evaluated with the iFogSim simulator, and the results were contrasted with those obtained with the current methodology. Experimental results show a significant reduction in latency, energy consumption, and network bandwidth use when comparing this study's methodology to previous research endeavors. Specifically, the recommended method leads to a 25% reduction in network bandwidth usage, a 37% reduction in energy consumption, and a 45% reduction in delay. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
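A minimal inertia-weighted particle swarm optimization sketch in the spirit of the abstract above; the objective is a placeholder sphere function, since the paper's actual multi-objective e-health cost is not given in the abstract:

```python
import numpy as np

def pso(objective, dim=4, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(42)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # The inertia weight w scales the previous velocity, balancing
        # exploration of the search space against convergence speed.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda z: float(np.sum(z**2)))
print(best_x, best_f)
```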
16. Leveraging Local Density Decision Labeling and Fuzzy Dependency for Semi-supervised Feature Selection.
- Author
Zhang, Gangqiang, Hu, Jingjing, and Zhang, Pengfei
- Subjects
FEATURE selection, ROUGH sets, FUZZY sets, ELECTRONIC data processing, BIG data
- Abstract
In real-world scenarios, datasets often lack full supervision due to the high cost associated with acquiring decision labels. Completing datasets by filling in missing labels is essential for preserving the valuable feature information of individual samples. Furthermore, in the era of big data, datasets tend to exhibit high dimensionality, which adds complexity to subsequent data processing. In this study, a new semi-supervised feature selection technique is introduced. Firstly, a fully supervised dataset is created by utilizing a local density decision-labeling algorithm to fill in missing decision labels within the semi-supervised dataset. Next, a fuzzy dependency-based feature selection approach is presented to find and keep the most pertinent characteristics for the finished datasets. Finally, the effectiveness and reliability of our proposed method are validated through a series of rigorous experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. The Encrypted Truth is Already Out There.
- Author
GOLDMAN, ERIC H.
- Subjects
PUBLIC key cryptography; INFORMATION technology auditing; ELECTRONIC data processing; GENERAL Data Protection Regulation, 2016; INFORMATION technology security; INTERNET forums; QUANTUM computers
- Abstract
The article discusses the impact of quantum computing on current cryptographic systems and the need for organizations to prepare for the transition to quantum-safe cryptography. It highlights the risks of impersonation and forgery that may arise with quantum computing and emphasizes the importance of early awareness and education at all levels of an organization. The article also addresses the need for thorough planning, risk evaluation, and contingency plans to address potential data exposure or impersonation due to migration gaps. Additionally, it mentions the importance of understanding the implications of quantum computing on cybersecurity insurance and the need for organizations to work with insurers to ensure adequate coverage. [Extracted from the article]
- Published
- 2024
18. DATA MINING TECHNOLOGY FOR SMART CAMPUS IN BEHAVIOR ASSOCIATION ANALYSIS OF COLLEGE STUDENTS.
- Author
JUN ZHANG, YUNXIN KUANG, and JIAN ZHOU
- Subjects
ASSOCIATION rule mining, DATA mining, ELECTRONIC data processing, CLUSTER analysis (Statistics), PSYCHOLOGY of students
- Abstract
The data in smart campuses is complex and massive, with insufficient utilization, and existing data processing methods have many limitations. Therefore, in order to improve the efficiency of data processing in universities and assist in student management, a data processing method integrating cluster analysis and association rule mining is proposed. The proposed method is divided into two parts. Firstly, an improved K-Means model based on information entropy and density optimization is constructed for clustering analysis of student consumption, learning, and other data. Secondly, an improved Mapping Apriori is used to obtain the correlations between student grades, consumption records, and learning behavior. The clustering results on student consumption data show that the average accuracy of ED-K-Means clustering is 97.41%, which is 12.8%, 8.5% and 4.0% higher than the comparison algorithms. The correlation between consumption level and achievement shows that when consumption is below 1,500 yuan, student achievement is directly proportional to the amount consumed. Therefore, the proposed method can effectively mine and analyze student behavior data, which has important practical significance for intelligent management in universities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
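A small support/confidence pass illustrating the association-rule step described above; the transactions and thresholds are invented, and the paper's improved Mapping Apriori adds optimizations this toy pairwise scan omits:

```python
from itertools import combinations

transactions = [
    {"high_consumption", "library_often", "grade_A"},
    {"high_consumption", "grade_A"},
    {"low_consumption", "library_rarely", "grade_C"},
    {"high_consumption", "library_often", "grade_A"},
    {"low_consumption", "grade_C"},
]
min_support, min_conf = 0.4, 0.8
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / n

items = {i for t in transactions for i in t}
for a, b in combinations(sorted(items), 2):
    s = support({a, b})
    if s >= min_support and support({a}) > 0:
        conf = s / support({a})          # confidence of the rule a -> b
        if conf >= min_conf:
            print(f"{a} -> {b}: support={s:.2f}, confidence={conf:.2f}")
```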
19. OPTIMIZING HADOOP DATA LOCALITY: PERFORMANCE ENHANCEMENT STRATEGIES IN HETEROGENEOUS COMPUTING ENVIRONMENTS.
- Author
SI-YEONG KIM and TAI-HOON KIM
- Subjects
MACHINE learning, HETEROGENEOUS distributed computing, HETEROGENEOUS computing, DISTRIBUTED computing, ELECTRONIC data processing
- Abstract
As organizations increasingly harness big data for analytics and decision-making, the efficient processing of massive datasets becomes paramount. Hadoop, a widely adopted distributed computing framework, excels at processing large-scale data. However, its performance is contingent on effective data locality, which becomes challenging in heterogeneous computing environments comprising diverse hardware resources. This research addresses the imperative of enhancing Hadoop's data locality performance in heterogeneous computing environments. The study explores strategies to optimize data placement and task scheduling, considering the diverse characteristics of nodes within the infrastructure. Through a comprehensive analysis of Hadoop's data locality algorithms and their impact on performance, this work proposes novel approaches to mitigate challenges associated with disparate hardware capabilities. The proposed work uses a Weighted Extreme Learning Machine (WELM) combined with the Firefly Algorithm (WELM-FF); this integration holds promise for enhancing machine learning models in the context of large-scale data processing. The research employs a combination of theoretical analysis and practical experiments to evaluate the effectiveness of the proposed enhancements. Factors such as network latency, disk I/O, and CPU capabilities are taken into account to develop a holistic framework for improving data locality and, consequently, overall Hadoop performance. The findings presented in this study contribute valuable insights to the field of distributed computing, offering practical recommendations for organizations seeking to maximize the efficiency of their Hadoop deployments in heterogeneous computing environments. By addressing the intricacies of data locality, this research strives to enhance the scalability and performance of Hadoop clusters, thereby facilitating more effective utilization of big data resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Digital transformation and governance heterogeneity as determinants of CSR disclosure: insights from Chinese A-share companies.
- Author
Jin, Xiaoyan, Mirza, Sultan Sikandar, Huang, Chengming, and Zhang, Chengwei
- Subjects
SOCIAL accounting, DIGITAL transformation, ELECTRONIC data processing, SOCIAL responsibility, GOVERNMENT business enterprises, SOCIAL responsibility of business
- Abstract
Purpose: In this fast-changing world, digitization has become crucial to organizations, allowing decision-makers to alter corporate processes. Companies with a higher corporate social responsibility (CSR) level not only encourage employees to focus on their goals but also show that they take their social responsibility seriously, which is increasingly important in today's digital economy. This study therefore examines the relationship between digital transformation and the CSR disclosure of Chinese A-share companies. Furthermore, this research investigates the moderating impact of governance heterogeneity, including CEO power and corporate internal control (INT) mechanisms. Design/methodology/approach: This study used fixed-effect estimation with robust standard errors to examine the relationship between digital transformation and CSR disclosure and the moderating effect of governance heterogeneity among Chinese A-share companies from 2010 to 2020. The whole sample consists of 17,266 firm records, including 5,038 state-owned enterprise (SOE) records and 12,228 non-SOE records. The sample data are collected from the China Stock Market and Accounting Research, Chinese Research Data Services, and WIND databases. Findings: The regression results lead us to three conclusions after classifying the sample into non-SOE and SOE groups. First, Chinese A-share businesses with greater levels of digitalization have lower CSR disclosures; this holds for both SOEs and non-SOEs. Second, increasing CEO authority creates a more centralized company decision-making structure (Breuer et al., 2022; Freire, 2019), which strengthens the negative association between digitalization and CSR disclosure; these conclusions also apply to non-SOEs. Finally, INT reinforces the association between corporate digitization and CSR disclosure, which is especially obvious in SOEs. These findings are robust to an alternative HEXUN CSR disclosure index. Heterogeneity analysis shows that the negative relationship between corporate digitalization and CSR disclosure is more pronounced in bigger, highly levered and highly financialized firms. Originality/value: Digitalization and CSR disclosure are well studied, but few studies have examined their interactions from a governance heterogeneity perspective in China. Practitioners and policymakers may use these insights to help business owners implement suitable digital policies for firm development from diverse business perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A Ramanujan subspace and dynamic time warping and adaptive singular value decomposition combined denoising method for low signal‐to‐noise ratio surface microseismic monitoring data in hydraulic fracturing.
- Author
Wang, Xu-Lin, Zhang, Jian-Zhong, and Huang, Zhong-Lai
- Subjects
*HILBERT-Huang transform, *HYDRAULIC fracturing, *BANDPASS filters, *ELECTRONIC data processing, *DATA quality, *RANDOM noise theory, *SINGULAR value decomposition
- Abstract
Surface microseismic monitoring is widely used in hydraulic fracturing. Real‐time monitoring data collected during fracturing can be used to perform surface‐microseismic localization, which aids in assessing the effects of fracturing and provides guidance for the process. The accuracy of localization critically depends on the quality of monitoring data. However, the signal‐to‐noise ratio of the data is often low due to strong coherent and random noise, making denoising essential for processing surface monitoring data. To suppress noise more effectively, this paper introduces a novel denoising method that integrates the Ramanujan subspace with dynamic time warping and adaptive singular value decomposition. The new method consists of two steps: First, a Ramanujan subspace is constructed to suppress periodic noise. Then, dynamic time warping and adaptive singular value decomposition are applied to eliminate remaining coherent and random noise. The method has been evaluated using both synthetic and field data, and its performance is compared with traditional microseismic denoising techniques, including bandpass filtering and empirical mode decomposition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
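A minimal sketch of the singular-value-decomposition stage of the denoising chain described above: keep the few largest singular values of a trace matrix (coherent signal) and discard the rest (incoherent noise). The Ramanujan-subspace and dynamic-time-warping stages are not reproduced, and the rank here is fixed rather than adaptive:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 30 * t)                    # shared coherent wavelet
data = np.stack([signal * a for a in np.linspace(0.5, 1.5, 40)])  # 40 traces
noisy = data + 0.8 * rng.normal(size=data.shape)

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank = 2                                               # assumed signal rank
denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank]

snr = lambda clean, est: 10 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))
print(f"SNR before: {snr(data, noisy):.1f} dB, after: {snr(data, denoised):.1f} dB")
```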
22. Design and development of STCF offline software.
- Author
Ai, Xiaocong, Huang, Xingtao, Li, Teng, Qi, Binbin, and Qin, Xiaoshuai
- Subjects
*ELECTRONIC data processing, *COLLIDERS (Nuclear physics), *DIGITIZATION, *DATA analysis, *LUMINOSITY
- Abstract
The Super Tau-Charm Facility (STCF) is a next-generation positron–electron collider experiment, designed to study various physics topics in the tau-charm energy region. The designed peak luminosity of STCF is 0.5×10³⁵ cm⁻² s⁻¹, almost two orders of magnitude higher than that of the current tau-charm factory, BESIII. To implement the offline data processing tasks, as well as to tackle the great challenge posed by the huge data volume, the offline software of STCF (OSCAR) is designed to implement the detector simulation, digitization, calibration and reconstruction, as well as to provide a common platform for data analysis. This paper presents the status and progress of the OSCAR system, including the design, implementation and preliminary performance of the core software, detector simulation, digitization and background mixing, and reconstruction algorithms for all sub-detectors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. An optimized method for dose–effect prediction of traditional Chinese medicine based on 1D-ResCNN-PLS.
- Author
Xiong, Wangping, Pan, Jiasong, Liu, Zhaoyang, Du, Jianqiang, Zhu, Yimin, Luo, Jigen, Yang, Ming, and Zhou, Xian
- Subjects
*CONVOLUTIONAL neural networks, *CHINESE medicine, *LEAST squares, *ELECTRONIC data processing
- Abstract
We introduce a one-dimensional (1D) residual convolutional neural network with Partial Least Squares (1D-ResCNN-PLS) to solve the covariance and nonlinearity problems in traditional Chinese medicine dose–effect relationship data. The model combines a 1D convolutional layer with a residual block to extract nonlinear features and employs PLS for prediction. Tested on the Ma Xing Shi Gan Decoction datasets, the model significantly outperformed conventional models, achieving high accuracies, sensitivities, specificities, and AUC values, with considerable reductions in mean square error. Our results confirm its effectiveness in nonlinear data processing and demonstrate potential for broader application across public datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
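A schematic sketch of the 1D-ResCNN-PLS idea described above: a small (here untrained) 1D residual block embeds each dose-effect curve, and Partial Least Squares regresses the effect from the embedding. Shapes, sizes, and the random data are placeholders, not the authors' Ma Xing Shi Gan Decoction setup:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cross_decomposition import PLSRegression

class ResBlock1D(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual connection

embed = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), ResBlock1D(8),
                      nn.AdaptiveAvgPool1d(4), nn.Flatten())

rng = np.random.default_rng(3)
doses = rng.random((200, 1, 32)).astype(np.float32)        # 200 curves, 32 points
effect = doses.mean(axis=(1, 2)) ** 2                      # toy nonlinear target

with torch.no_grad():                                      # feature extraction only
    feats = embed(torch.from_numpy(doses)).numpy()         # (200, 32) features

pls = PLSRegression(n_components=5).fit(feats, effect)
print("R^2 on training data:", pls.score(feats, effect))
```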
24. Blockchain for Edge Computing in Smart Environments: Use Cases, Issues, and Challenges.
- Author
Prabadevi, B., Deepa, N., Sudhagara Rajan, S., and Srivastava, Gautam
- Subjects
*DIGITAL transformation, *ELECTRONIC data processing, *EDGE computing, *CENOZOIC Era, *DIGITAL technology, *BLOCKCHAINS
- Abstract
The Cenozoic era is the digital age, where people, things, and any device with network capabilities can communicate with each other, and the Internet of Things (IoT) paves the way for it. Almost all domains are adopting IoT, from smart home appliances to smart healthcare, smart transportation, Industrial IoT and many others. As the adoption of IoT increases, the accretion of data also grows. Furthermore, digital transformations have led to more security vulnerabilities, resulting in data breaches and cyber-attacks. One of the most prominent issues in smart environments is delay in data processing, since IoT smart environments store their data in the cloud and retrieve it for every transaction. With increased data accumulation in the cloud, most smart applications face unprecedented delays. Thus, data security and low-latency response time are mandatory for deploying a robust IoT-based smart environment. Blockchain, a decentralized and immutable distributed ledger technology, is an essential candidate for ensuring secured data transactions, but it has a variety of challenges in accommodating resource-constrained IoT devices. Edge computing brings data storage and computation closer to the network's edge and can be integrated with blockchain for low-latency data processing. Integrating blockchain with edge computing will ensure faster and more secure data transactions, thus reducing the computational and communication overhead of resource allocation, data transactions and decision-making. This paper discusses the seamless integration of blockchain and edge computing in IoT environments, various use cases, notable blockchain-enabled edge computing architectures in the literature, secured data transaction frameworks, opportunities, research challenges, and future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Enhancing the data processing speed of a deep-learning-based three-dimensional single molecule localization algorithm (FD-DeepLoc) with a combination of feature compression and pipeline programming.
- Author
Guo, Shuhao, Lin, Jiaxun, Zhang, Yingjun, and Huang, Zhen-Li
- Subjects
*REAL-time computing, *IMAGE processing, *ONLINE algorithms, *ELECTRONIC data processing, *SINGLE molecules, *DEEP learning, *SADDLEPOINT approximations
- Abstract
Three-dimensional (3D) single molecule localization microscopy (SMLM) plays an important role in biomedical applications, but its data processing is very complicated. Deep learning is a potential tool for solving this problem. FD-DeepLoc, the recently reported state-of-the-art deep-learning-based 3D super-resolution localization algorithm, still falls short of the goal of online image processing, even though it has greatly improved data processing throughput. In this paper, a new algorithm, Lite-FD-DeepLoc, is developed on the basis of the FD-DeepLoc algorithm to meet the online image processing requirements of 3D SMLM. This new algorithm uses a feature compression method to reduce the parameters of the model and combines it with pipeline programming to accelerate the inference process of the deep learning model. The simulated data processing results show that the image processing speed of Lite-FD-DeepLoc is about twice as fast as that of FD-DeepLoc with a slight decrease in localization accuracy, enabling real-time processing of 256×256 pixel images. The results of biological experimental data processing imply that Lite-FD-DeepLoc can successfully analyze data based on astigmatism and saddle-point engineering, and the global resolution of the reconstructed image is equivalent to or even better than that of the FD-DeepLoc algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. A unified longitudinal trajectory dataset for automated vehicle.
- Author
Zhou, Hang, Ma, Ke, Liang, Shixiao, Li, Xiaopeng, and Qu, Xiaobo
- Subjects
DATA scrubbing, AUTONOMOUS vehicles, MOTOR vehicle driving, RESEARCH personnel, ELECTRONIC data processing
- Abstract
Automated Vehicles (AVs) promise significant advances in transportation. Critical to these improvements is understanding AVs' longitudinal behavior, which relies heavily on real-world trajectory data. Existing open-source AV trajectory datasets, however, often fall short in refinement, reliability, and completeness, hindering effective performance-metric analysis and model development. This study addresses these challenges by creating a Unified longitudinal trajectory dataset for AVs (Ultra-AV) to analyze their microscopic longitudinal driving behaviors. This dataset compiles data from 14 distinct sources, encompassing various AV types, test sites, and experiment scenarios. We established a three-step data processing pipeline: 1. extraction of longitudinal trajectory data, 2. general data cleaning, and 3. data-specific cleaning, to obtain the longitudinal trajectory data and car-following trajectory data. The validity of the processed data is affirmed through performance evaluations across safety, mobility, stability, and sustainability, along with an analysis of the relationships between variables in car-following models. Our work not only furnishes researchers with standardized data and metrics for longitudinal AV behavior studies but also sets guidelines for data collection and model development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
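A condensed pandas sketch of the three-step processing described above; the column names and the 10 Hz sampling are assumptions, not the Ultra-AV schema:

```python
import numpy as np
import pandas as pd

dt = 0.1                                          # assumed 10 Hz sampling
df = pd.DataFrame({
    "time": np.arange(0, 5, dt),
    "follower_pos": np.cumsum(np.full(50, 1.5)),  # m
    "leader_pos": 20 + np.cumsum(np.full(50, 1.4)),
})

# Step 1: extract longitudinal kinematics by finite differences.
df["speed"] = df["follower_pos"].diff() / dt
df["accel"] = df["speed"].diff() / dt
df["spacing"] = df["leader_pos"] - df["follower_pos"]
df["rel_speed"] = df["leader_pos"].diff() / dt - df["speed"]

# Step 2: general cleaning -- drop rows violating loose physical bounds.
clean = df[(df["speed"].between(0, 50)) & (df["accel"].abs() <= 8)].dropna()

# Step 3: keep the car-following subset (leader present within 120 m).
cf = clean[clean["spacing"].between(0, 120)]
print(cf[["time", "speed", "accel", "spacing", "rel_speed"]].head())
```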
27. Speedy Component Resolution Using Spatially Encoded Diffusion NMR Data.
- Author
Lorandel, Benjamin, Rocha, Hugo, Cazimajou, Oksana, Mishra, Rituraj, Bernard, Aurélie, Bowyer, Paul, Nilsson, Mathias, and Dumez, Jean-Nicolas
- Subjects
*MULTIVARIATE analysis, *LEAST squares, *DIFFUSION coefficients, *ELECTRONIC data processing, *DATA analysis
- Abstract
Diffusion-ordered NMR spectroscopy (DOSY) is a powerful tool for analysing mixtures. Spatially encoded (SPEN) DOSY enables recording a full DOSY dataset in just one scan by performing spatial parallelisation of the gradient dimension. The simplest and most widely used approach to processing DOSY data is to fit each peak in the spectrum with a single or multiple exponential decay. However, when there is peak overlap, and/or when the diffusion decays of the contributing components are too similar, this method has limitations. Multivariate analysis of DOSY data, which is an attractive alternative, consists of decomposing the experimental data into compound-specific diffusion decays and 1D NMR spectra. Multivariate analysis has been used very successfully for conventional DOSY data, but its use for SPEN DOSY data has only recently been reported. Here, we present a comparison, for SPEN DOSY data, of two widely used algorithms, SCORE and OUTSCORE, which aim to unmix the spectra of overlapped species through a least-squares fit or a cross-talk minimisation, respectively. Data processing was performed with the General NMR Analysis Toolbox (GNAT), with custom-written code elements that expand its capabilities and make it possible to import and process SPEN DOSY data. This comparison is demonstrated on three different two-component mixtures, each with different characteristics in terms of signal overlap, diffusion coefficient similarity, and component concentration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
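A minimal sketch of the per-peak exponential-decay fit that the abstract above describes as the simplest DOSY processing approach; all numbers are synthetic, and the gradient weighting b is a generic Stejskal-Tanner-style variable:

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.linspace(0, 5e9, 16)                       # gradient weighting, s m^-2

def decay(b, amp, D):
    """Single-component DOSY decay: amplitude times exp(-D * b)."""
    return amp * np.exp(-D * b)

rng = np.random.default_rng(5)
true_D = 4.0e-10                                  # m^2 s^-1, small-molecule scale
peak = decay(b, 1.0, true_D) * (1 + 0.02 * rng.normal(size=b.size))

(amp, D_fit), _ = curve_fit(decay, b, peak, p0=[1.0, 1e-10])
print(f"fitted D = {D_fit:.2e} m^2/s (true {true_D:.2e})")
```

When two components overlap in one peak, this single-exponential fit breaks down, which is exactly the regime where the multivariate SCORE/OUTSCORE decompositions are useful.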
28. Enhancing the implementation and integration of mHealth interventions in resource-limited settings: a scoping review.
- Author
Tumuhimbise, Wilson, Theuring, Stefanie, Kaggwa, Fred, Atukunda, Esther C., Rubaihayo, John, Atwine, Daniel, Sekandi, Juliet N., and Musiimenta, Angella
- Subjects
*RESOURCE-limited settings, *MONETARY incentives, *MOBILE health, *UNFUNDED mandates, *ELECTRONIC data processing
- Abstract
Background: Although mobile health (mHealth) interventions have shown promise in improving health outcomes, most of them rarely translate to scale. Prevailing mHealth studies are largely small-sized, short-term and donor-funded pilot studies with limited evidence on their effectiveness. To facilitate scale-up, several frameworks have been proposed to enhance the generic implementation of health interventions. However, there is a lack of a specific focus on the implementation and integration of mHealth interventions in routine care in low-resource settings. Our scoping review aimed to synthesize and develop a framework that could guide the implementation and integration of mHealth interventions. Methods: We searched the PubMed, Google Scholar, and ScienceDirect databases for published theories, models, and frameworks related to the implementation and integration of clinical interventions from 1st January 2000 to 31st December 2023. The data processing was guided by the scoping review methodology proposed by Arksey and O'Malley. Studies were included if they were i) peer-reviewed and published between 2000 and 2023, ii) explicitly described a framework for clinical intervention implementation and integration, or iii) available in full text and published in English. We integrated different domains and constructs from the reviewed frameworks to develop a new framework for implementing and integrating mHealth interventions. Results: We identified eight eligible papers with eight frameworks composed of 102 implementation domains. None of the identified frameworks were specific to the integration of mHealth interventions in low-resource settings. The identified constructs included two (skill impartation and intervention awareness) related to the training domain, four (technical and logistical support, identifying committed staff, supervision, and redesigning) from the restructuring domain, two (monetary incentives and nonmonetary incentives) from the incentivize domain, two (organizational mandates and government mandates) from the mandate domain, and two (collaboration and routine workflows) from the integrate domain. A new framework therefore emerged that outlines five main domains—train, restructure, incentivize, mandate, and integrate (TRIMI)—in relation to the integration and implementation of mHealth interventions in low-resource settings. Conclusion: The TRIMI framework presents a realistic and realizable solution for the implementation and integration deficits of mHealth interventions in low-resource settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Earthquake prediction using satellite data: Advances and ahead challenges.
- Author
Akhoondzadeh, Mehdi
- Subjects
*ARTIFICIAL intelligence, *ELECTRONIC data processing, *EMERGENCY management, *SEISMIC event location, *CHEMICAL precursors
- Abstract
• Using remote sensing data, statistical studies related to earthquake precursors have increased significantly.
• Most of them emphasize the existence of abnormal behaviors in precursors.
• An attempt is made here to address the advances and challenges on the way to earthquake prediction.

Due to the considerable loss of life and damage caused by powerful earthquakes, along with the strengthening of structures and disaster management systems, much research is being conducted on the possibility of creating earthquake warning systems. However, no reliable scientific report on successful earthquake prediction has been published so far. Although a couple of published scientific papers indicate the detection of unusual changes in physical and chemical parameters obtained from ground stations, the limitations of in-situ measurement in terms of number, location, time and cost meant there was no noticeable progress in earthquake warning. After the advent of remote sensing satellites, the number of statistical studies related to earthquake precursors increased significantly, and most of them emphasized the existence of abnormal behaviors in physical and chemical precursors around the time (about 1–30 days before) and locations of powerful earthquakes. In this study, an attempt has been made to address the advances and challenges in the way of earthquake prediction using satellite data and to provide a perspective on the future of these researches. It should be noted that the increase in the number of satellites designed and launched specially for earthquake studies, the variety of available earthquake precursors (multi-precursor analysis), the development of classic and intelligent anomaly detection and prediction algorithms (multi-method analysis), the provision of intelligent systems for the integration of various precursors (fusion and decision systems), the creation of cloud data storage and processing services (Google Earth Engine, Giovanni, etc.) and the development of intelligent user interfaces for the public have made researchers more hopeful for the appearance of low-uncertainty earthquake warning systems in the near future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. DC_OCEAN: an open-source algorithm for identification of duplicates in ocean databases.
- Author
Xinyi Song, Zhetao Tan, Locarnini, Ricardo, Simoncelli, Simona, Cowley, Rebecca, Kizu, Shoichi, Boyer, Tim, Reseghetti, Franco, Castelao, Guilherme, Gouretski, Viktor, and Lijing Cheng
- Subjects
ROBUST statistics, PRINCIPAL components analysis, DATABASES, DATA management, ELECTRONIC data processing, METADATA
- Abstract
A high-quality hydrographic observational database is essential for ocean and climate studies and operational applications. Because there are numerous global and regional ocean databases, duplicate data continues to be an issue in data management, data processing and database merging, posing a challenge to the effective and accurate use of oceanographic data to derive robust statistics and reliable data products. This study aims to provide algorithms that identify duplicates and assign labels to them. We propose, first, a set of criteria to define duplicate data and, second, an open-source, semi-automatic system to detect duplicate data and erroneous metadata. This system includes several algorithms for automatic checks using statistical methods (such as Principal Component Analysis and entropy weighting) and an additional expert (manual) check. The robustness of the system is then evaluated with a subset of the World Ocean Database (WOD18) with over 600,000 in-situ temperature and salinity profiles. The system is released as an open-source Python package (named DC_OCEAN), allowing users to apply it effectively and to customize their settings. The result of applying it to the WOD18 subset also forms a benchmark dataset, which is available to support future studies on duplicate checks, metadata error identification, and machine learning applications. This duplicate checking system will be incorporated into the International Quality-controlled Ocean Database (IQuOD) data quality control system to guarantee the uniqueness of ocean observation data in this product. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
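A toy version of the duplicate-screening idea described above: group casts by rounded metadata, then confirm near-duplicates by profile similarity. The tolerances and threshold are invented; DC_OCEAN's real checks add PCA- and entropy-weighted scores plus an expert review step:

```python
import numpy as np

profiles = [
    {"lat": 35.001, "lon": 139.700, "time": "2001-06-01", "temp": [20.1, 18.0, 15.2]},
    {"lat": 35.002, "lon": 139.701, "time": "2001-06-01", "temp": [20.1, 18.0, 15.2]},
    {"lat": -10.50, "lon": 142.100, "time": "2001-06-02", "temp": [28.3, 27.9, 26.0]},
]

def meta_key(p):
    """Coarse metadata key: profiles agreeing to ~1 km and same date collide."""
    return (round(p["lat"], 2), round(p["lon"], 2), p["time"])

groups = {}
for i, p in enumerate(profiles):
    groups.setdefault(meta_key(p), []).append(i)

for key, idx in groups.items():
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            ta = np.array(profiles[idx[a]]["temp"])
            tb = np.array(profiles[idx[b]]["temp"])
            rmse = float(np.sqrt(np.mean((ta - tb) ** 2)))
            if rmse < 0.05:                     # assumed similarity threshold
                print(f"possible duplicate pair {idx[a]}-{idx[b]} at {key}, RMSE={rmse:.3f}")
```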
31. Mining and evaluation of adverse event signals for capmatinib based on the FAERS database.
- Author
Xinnan Chen, Ying Jiang, Haohao Zhu, and Man Tian
- Subjects
DATABASES, DRUG labeling, VOCAL cords, DATABASE searching, ELECTRONIC data processing, INNER ear
- Abstract
Objective: To conduct a comprehensive data analysis based on the FDA's Adverse Event Reporting System (FAERS) to mine possible adverse event (AE) signals of Capmatinib, providing valuable references for its clinical application. Methods: Capmatinib was set as the primary suspected drug in a search of the FAERS database from the second quarter of 2020 to the fourth quarter of 2023. Data processing, screening, and classification were performed using methods such as the Reporting Odds Ratio (ROR), Proportional Reporting Ratio (PRR), Bayesian Confidence Propagation Neural Network (BCPNN), and Multi-item Gamma Poisson Shrinker (MGPS). Results: A total of 1,991 AE reports directly related to Capmatinib were screened, identifying 269 Preferred Terms (PTs) involving 26 System Organ Classes (SOCs). Besides the AEs recorded on the drug label (such as edema, nausea, fatigue, and dyspnea), the study unearthed other high-risk AEs not listed on the label, including renal and urinary disorders, vocal cord paralysis, and ear and labyrinth disorders. Among these, renal and urinary disorders and ear and labyrinth disorders had a higher frequency and signal intensity, suggesting that their mechanisms of occurrence could be a future research direction. Conclusion: This study uncovered new potential AEs of Capmatinib based on the FAERS database, providing a reference for its safe clinical use. Special attention should be given to the occurrence of ear and labyrinth disorders and renal and urinary disorders, primarily presenting as pseudo-acute kidney injury, during treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
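A compact sketch of the two frequentist disproportionality metrics named in the abstract above (ROR and PRR), computed from a 2x2 contingency table; the counts are invented:

```python
import math

# a = reports with Capmatinib and the AE, b = Capmatinib without the AE,
# c = other drugs with the AE,          d = other drugs without the AE.
a, b, c, d = 40, 1951, 5000, 3_000_000

ror = (a / b) / (c / d)
se_ln_ror = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(ror) - 1.96 * se_ln_ror)
ci_high = math.exp(math.log(ror) + 1.96 * se_ln_ror)

prr = (a / (a + b)) / (c / (c + d))

print(f"ROR = {ror:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), PRR = {prr:.2f}")
# A common signal criterion: lower bound of the ROR 95% CI > 1 with >= 3 reports.
```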
32. One‐dimensional deep learning inversion of marine controlled‐source electromagnetic data.
- Author
Li, Pan, Du, Zhijun, Li, Yuguo, and Wang, Jianhua
- Subjects
*CONVOLUTIONAL neural networks, *RECURRENT neural networks, *MACHINE learning, *ELECTRONIC data processing, *ELECTRICAL resistivity, *DEEP learning
- Abstract
This paper explores the application of machine learning techniques, specifically deep learning, to the inverse problem of marine controlled‐source electromagnetic data. A novel approach is proposed that combines the convolutional neural network and recurrent neural network architectures to reconstruct layered electrical resistivity variation beneath the seafloor from marine controlled‐source electromagnetic data. The approach leverages the strengths of both convolutional neural network and recurrent neural network, where convolutional neural network is used for recognizing and classifying features in the data, and recurrent neural network is used to capture the contextual information in the sequential data. We have built a large synthetic dataset based on one‐dimensional forward modelling of a large number of resistivity models with different levels of electromagnetic structural complexity. The combined learning of convolutional neural network and recurrent neural network is used to construct the mapping relationship between the marine controlled‐source electromagnetic data and the resistivity model. The trained network is then used to predict the distribution of resistivity in the model by feeding it with marine controlled‐source electromagnetic responses. The accuracy of the proposed approach is examined using several synthetic scenarios and applied to a field dataset. We explore the sensitivity of deep learning inversion to different electromagnetic responses produced by resistive targets distributed at different depths and with varying levels of noise. Results from both numerical simulations and field data processing consistently demonstrate that deep learning inversions reliably reconstruct the subsurface resistivity structures. Moreover, the proposed method significantly improves the efficiency of electromagnetic inversion and offers significant performance advantages over traditional electromagnetic inversion methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. A hybrid transformer with domain adaptation using interpretability techniques for the application to the detection of risk situations.
- Author
Mallick, Rupayan, Benois-Pineau, Jenny, Zemmari, Akka, Guerda, Kamel, Mansencal, Boris, Amieva, Helene, and Middleton, Laura
- Subjects
ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, ELECTRONIC data processing, TIME series analysis
- Abstract
Multimedia approaches are strongly required in multi-modal data processing for the detection and recognition of specific events in the data. Hybrid architectures with time series and image/video inputs in the framework of twin CNNs have shown increased performances compared to mono-modal approaches. Pre-trained models have been used in transfer learning to fine-tune the last few layers in the network. This often leads to distribution shifts in the domain. In a real-world scenario, the distribution shifts between the source and target domains can yield poor classification results. With interpretable techniques used in deep neural networks, important features can be highlighted not only for trained models but also reinforced in the training process. Hence the initialization of the target domain model can be performed with improved weights. During data transfer between datasets, the dimensions of the data are also different. We propose a method for model transfer with the adaptation of data dimension and improved initialization with interpretability approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
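To make the idea in entry 33 concrete, here is a toy sketch of interpretability-guided initialization: an attribution score computed on the source model is used to re-weight the target model's first layer. Plain gradient saliency stands in for the paper's interpretability techniques, and all layer sizes are invented.

```python
# Toy sketch: use a crude saliency score from a source model to
# re-weight the first layer of a target model at initialization.
import copy
import torch
import torch.nn as nn

def channel_saliency(model, x, y, loss_fn=nn.CrossEntropyLoss()):
    """Mean |d loss / d input| per input channel: a crude importance score."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return x.grad.abs().mean(dim=(0, 2))     # (channels,) for (B, C, T) input

source_model = nn.Sequential(nn.Conv1d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                             nn.Linear(8, 4))
xb, yb = torch.randn(16, 3, 50), torch.randint(0, 4, (16,))
importance = channel_saliency(source_model, xb, yb)

# Initialize the target model from the source, scaling the first
# convolution so input channels the source found important start stronger.
target_model = copy.deepcopy(source_model)
with torch.no_grad():
    scale = importance / importance.mean()          # normalized importance
    target_model[0].weight *= scale.view(1, -1, 1)  # per-input-channel scale
```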
34. Deprivation and NHS General Ophthalmic Service sight testing activity in England in 2022–2023.
- Author
-
Harper, Robert A., Hooper, Jeremy, Parkins, David J., Fenerty, Cecilia H., Roach, James, and Bowen, Michael
- Subjects
- *
ELECTRONIC data processing , *VISION testing , *ODDS ratio , *PRIMARY care , *POSTAL codes - Abstract
Purpose: Socioeconomic deprivation is associated with an increased incidence of sight‐loss. To inform potential developments in eyecare, General Ophthalmic Service (GOS) sight‐testing activity was explored in relation to deprivation for GOS contractors submitting National Health Service (NHS) claims in England. Methods: Data on NHS sight‐test claims for the financial year 2022–2023 were sought from NHS England (NHSE), including the number of sight‐tests by GOS contractors, their unique Organisation Data Service codes and postcodes, and the age‐bands of patients accessing sight‐testing. Deprivation scores were assigned to contractor practices using the Index of Multiple Deprivation (IMD) and the average number of sight‐tests for all contractors within each IMD decile calculated, allowing the rate of sight‐testing per 1000 population per decile of deprivation to be estimated using Office of National Statistics (ONS) Lower Layer Super Output Area mid‐year population estimates. Inequality was examined using the odds ratio (OR) and the slope and relative indices of inequality (SII and RII). Results: Overall, 12.94 million NHS sight‐tests were provided by 5622 GOS contractors in England in 2022–2023. GOS contractors in the most affluent decile undertook an average of ~2200 NHS sight‐tests, while in the most deprived decile the average per contractor was ~1100. The rate of sight‐testing per 1000 population in the most deprived decile was one quarter of that in the most affluent, with an OR of 5.29 (95% CI 5.27–5.30), indicating those in the most affluent areas were ~five times more likely to access NHS sight‐tests. Overall, the SII and RII were 333.5 (95% CI 333.52–333.53) and 6.4 (95% CI 6.39–6.40), respectively, findings reflective of substantial inequality in uptake. Conclusion: There remains substantial unwarranted variation in uptake of NHS sight‐testing, with those in more affluent areas accessing sight‐testing substantially more than those in more deprived areas. Strategies are required to facilitate primary care optometry to provide more equitable access to eyecare. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
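The odds-ratio statistic in entry 34 is easy to reproduce. The sketch below shows the standard 2x2 calculation with a Wald 95% confidence interval; the counts are invented for illustration and are not the study's data.

```python
# Worked example of an odds-ratio calculation with a 95% CI
# (hypothetical counts, not the figures from entry 34).
from math import exp, log, sqrt

# accessed / did-not-access NHS sight tests, per deprivation decile
a, b = 2200, 7800     # most affluent decile: events, non-events (hypothetical)
c, d = 1100, 18900    # most deprived decile: events, non-events (hypothetical)

odds_ratio = (a / b) / (c / d)
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo = exp(log(odds_ratio) - 1.96 * se_log_or)     # 95% CI lower bound
hi = exp(log(odds_ratio) + 1.96 * se_log_or)     # 95% CI upper bound
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```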
35. CoGTEx: Unscaled system-level coexpression estimation from GTEx data forecast novel functional gene partners.
- Author
-
Cortes-Guzman, Miguel-Angel and Treviño, Víctor
- Subjects
- *
GENE expression , *COMPUTER workstation clusters , *GENETIC transcription regulation , *HUMAN genes , *ELECTRONIC data processing - Abstract
Motivation: Coexpression estimations are helpful for analysis of pathways, cofactors, regulators, targets, and human health and disease. Ideally, coexpression estimations should consider as many diverse cell types as possible and account for the fact that available data are not uniform across tissues. Importantly, the coexpression estimations accessible today are performed at the "tissue level", which is based on cell type standardized formulations; little or no attention is paid to overall gene expression levels. The tissue-level estimation assumes that variance in expression levels is more important than mean expression levels. Here, we challenge this assumption by estimating coexpression at the "system level", computed without standardization by tissue, and show that it provides valuable information. We have made available a resource to view, download, and analyze both tissue- and system-level coexpression estimations from GTEx human data. Methods: GTEx v8 expression data was globally normalized, batch-processed, and filtered. Then, PCA, clustering, and tSNE stringent procedures were applied to generate 42 distinct and curated tissue clusters. Coexpression was estimated from these 42 tissue clusters by computing the correlation of 33,445 genes, sampling 70 samples per tissue cluster to avoid tissue overrepresentation. This process was repeated 20 times, with the minimum value extracted as a robust estimation. Three metrics were calculated (Pearson, Spearman, and G-statistic) in two data processing modes: at the system level (TPM scale) and the tissue level (z-score scale). Results: We first validate our tissue-level estimations against other databases. Then, through specific analyses of several examples and literature validation of predictions, we show that system-level coexpression estimation differs from tissue-level estimation and that both contain valuable information reflected in biological pathways. We also show that coexpression estimations are associated with transcriptional regulation. Finally, we present CoGTEx, a valuable resource for viewing and analyzing coexpressed genes in human adult tissues from GTEx v8 data, and introduce our web resource to list, view, and explore coexpressed genes from GTEx data. Conclusion: We conclude that system-level coexpression is a novel and interesting coexpression metric capable of generating plausible predictions and biological hypotheses, and that CoGTEx is a valuable resource to view, compare, and download system- and tissue-level coexpression estimations from GTEx data. Availability: The web resource is available at http://bioinformatics.mx/cogtex. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
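The resampling scheme in entry 35 (equal samples per tissue cluster, repeated correlation, element-wise minimum) can be sketched directly. The toy sizes below stand in for the paper's 33,445 genes and 42 clusters.

```python
# Sketch of the repeated-subsampling coexpression estimate from entry 35:
# sample an equal number of samples per tissue cluster, correlate genes,
# repeat, and keep the element-wise minimum as a robust estimate.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 100
clusters = {f"tissue_{i}": rng.normal(size=(300, n_genes))
            for i in range(5)}                      # toy expression matrices

def system_level_coexpression(clusters, n_per_cluster=70, n_repeats=20):
    min_corr = None
    for _ in range(n_repeats):
        # equal sampling avoids over-representing well-sampled tissues
        pooled = np.vstack([c[rng.choice(len(c), n_per_cluster, replace=False)]
                            for c in clusters.values()])
        corr = np.corrcoef(pooled, rowvar=False)    # gene-by-gene Pearson
        min_corr = corr if min_corr is None else np.minimum(min_corr, corr)
    return min_corr    # minimum across repeats = conservative estimate

C = system_level_coexpression(clusters)
print(C.shape)    # (100, 100)
```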
36. Optimal experiment design for inverse problems via selective normalization and zero-shift times.
- Author
-
Chassain, Clément, Kusiak, Andrzej, Krause, Kevin, and Battaglia, Jean-Luc
- Subjects
- *
INVERSE problems , *PARAMETER identification , *PARAMETER estimation , *SIGNAL processing , *ELECTRONIC data processing , *TIKHONOV regularization - Abstract
Inverse problems are commonly used in many fields as they enable the estimation of parameters that cannot be experimentally measured. However, the complex nature of inverse problems requires a strong background in data and signal processing. Moreover, ill-posed problems yield parameters with strong linear dependence between them, and this ill-posedness leads to numerical errors that can make parameter identification nearly impossible. In this paper, a new data processing tool is proposed to maximize the sensitivity of the model to the parameters of interest while reducing the correlation between them. The effectiveness of the tool is demonstrated on an inverse problem example using Periodically Pulsed Photothermal Radiometry (PPTR). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
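Entry 36's concern, parameter sensitivity and correlation, is illustrated below with a generic finite-difference sensitivity analysis. The exponential model is a stand-in for the PPTR model, which is not specified here.

```python
# Sketch of a sensitivity/correlation check like the one motivating
# entry 36: finite-difference sensitivities of a model output to each
# parameter, plus the correlation between sensitivity vectors
# (correlation near +/-1 means the pair is hard to identify together).
import numpy as np

def model(t, p):                       # toy stand-in for the PPTR model
    amplitude, rate = p
    return amplitude * np.exp(-rate * t)

def sensitivity_matrix(t, p, rel_step=1e-6):
    base = model(t, p)
    S = np.empty((len(t), len(p)))
    for j, pj in enumerate(p):
        dp = np.array(p, float)
        dp[j] += rel_step * pj
        # normalized sensitivity: p_j * d(output)/d(p_j)
        S[:, j] = pj * (model(t, dp) - base) / (rel_step * pj)
    return S

t = np.linspace(0.01, 5, 200)
S = sensitivity_matrix(t, [1.0, 0.8])
corr = np.corrcoef(S, rowvar=False)    # parameter-parameter correlation
print(corr[0, 1])
```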
37. Signal reconstruction with infrared thermographic data in the application of rail defect detection.
- Author
-
Ranting Cui, Chaojun Wei, Keping Zhang, Yuning Wu, and Xuan Zhu
- Subjects
- *
SIGNAL reconstruction , *NONDESTRUCTIVE testing , *IRON & steel plates , *THERMOGRAPHY , *ELECTRONIC data processing - Abstract
Thermographic techniques are widely used to characterise material damage, porosity and moisture in structures, and the thermographic signal reconstruction (TSR) technique has enabled significant progress in active thermography. In this study, the TSR technique was applied to identify a rubber layer mimicking an internal defect, attached to the back of a steel plate, after a 1.0 min thermal stimulation. The peaks of the first- and second-derivative logarithmic curves correspond to the time points that reveal the position of the attached flaw. Furthermore, a new post-processing technique for infrared thermographic data was proposed that exploits the polynomial orders of the fitting curves to produce a synthetic image, which can highlight the flaw with distinctive contrast. Finally, the upgraded TSR technique was applied to image a natural shelling defect in a rail sample with excellent contrast. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
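The core TSR step behind entry 37 is a polynomial fit to the log-log cooling curve followed by analytic differentiation. A minimal sketch with a synthetic decay (illustrative only, not the study's data):

```python
# TSR-style processing: fit a low-order polynomial to log(T) vs log(t),
# then take analytic derivatives and locate their peaks, which flag when
# a buried flaw disturbs the one-dimensional cooling behaviour.
import numpy as np

t = np.logspace(-1, 2, 400)                       # time after the heat pulse
temperature = 5.0 * t**-0.5 + 0.3 * np.exp(-((np.log(t) - 1.5)**2))  # toy data

log_t, log_T = np.log10(t), np.log10(temperature)
coeffs = np.polyfit(log_t, log_T, deg=7)          # low polynomial degree
first = np.polyval(np.polyder(coeffs, 1), log_t)    # d logT / d logt
second = np.polyval(np.polyder(coeffs, 2), log_t)   # second log derivative

print("1st-derivative peak at t =", t[np.argmax(first)])
print("2nd-derivative peak at t =", t[np.argmax(second)])
```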
38. How structural biology has changed our understanding of icosahedral viruses.
- Author
-
Comas-Garcia, Mauricio
- Subjects
- *
HEPATITIS B , *ARTIFICIAL intelligence , *VIRION , *ELECTRONIC data processing , *ALPHAVIRUSES - Abstract
Cryo-electron microscopy and tomography have allowed us to unveil the remarkable structure of icosahedral viruses. However, in the past few years it has become clear that the idea that these viruses must have perfectly symmetric virions might not always hold true. This has opened the door to challenging paradigms in structural virology and raised new questions about the biological implications of "unusual" or "defective" symmetries and structures. Also, the continual improvement of these technologies, coupled with more rigorous sample purification protocols, improvements in data processing, and the use of artificial intelligence, has made it possible to solve the structure of sub-viral particles in highly heterogeneous samples and to find novel symmetries or structural defects. In this review, I initially analyze the case of the symmetry and composition of hepatitis B virus-produced spherical sub-viral particles. Then, I focus on Alphaviruses as an example of "imperfect" icosahedrons and analyze how structural biology has changed our understanding of Alphavirus assembly and some biological implications arising from these discoveries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Fee reduction for a contracted physician lacking connection to the telematics infrastructure.
- Subjects
- *
ELECTRONIC data processing , *TELEMATICS , *GENERAL Data Protection Regulation, 2016 , *PHYSICIANS , *CONTRACTS - Abstract
The article deals with the fee reduction imposed on a contracted physician due to a lack of connection to the telematics infrastructure (TI). The obligation to connect to the TI is considered appropriate and not a disproportionate restriction of professional medical freedom. The data processing carried out during the comparison of insured persons' master data by contracted physicians complies with the requirements of the GDPR. An appeal against the fee reduction was dismissed as unfounded, and the differential cost regulation under § 106b para. 2a SGB V applies only to uneconomical prescriptions. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
40. A two-stage seismic data denoising network based on deep learning.
- Author
-
Zhang, Yan, Zhang, Chi, and Song, Liwei
- Subjects
- *
CONVOLUTIONAL neural networks , *SIGNAL-to-noise ratio , *ELECTRONIC data processing , *PROBLEM solving , *NOISE - Abstract
Seismic data with a high signal-to-noise ratio is beneficial for inversion and interpretation, making denoising an indispensable step in seismic data processing. Traditional denoising methods based on prior knowledge are susceptible to the influence of the hypothesis model and its parameters. In contrast, deep learning-based denoising methods can extract deep features from the data autonomously and generate a sophisticated denoising model through adaptive learning. However, these methods generally learn a specific model for each noise level, which results in poor representation ability and suboptimal denoising efficacy when applied to seismic data with different noise levels. To address this issue, we propose a denoising method based on a two-stage convolutional neural network (TSCNN). The TSCNN comprises an estimation subnet (ES) and a denoising subnet (DS). The ES employs a multilayer CNN to estimate noise levels, and the DS performs noise suppression on noisy seismic data based on the ES estimate of the noise distribution. In addition, attention mechanisms are implemented in the proposed network to efficiently extract noise information hidden in complex backgrounds. The TSCNN also adopts the L1 loss function to enhance the generalization ability and denoising outcome of the model, and a residual learning scheme is utilized to solve the problem of network degradation. Experimental results demonstrate that the proposed method preserves event features more accurately and outperforms existing methods in terms of signal-to-noise ratio and generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
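The two-stage layout in entry 40 (noise-level estimation feeding a residual denoiser trained with an L1 loss) can be sketched compactly. This is a minimal stand-in, not the paper's TSCNN; channel counts and depths are invented, and the attention modules are omitted.

```python
# Minimal two-stage denoiser: subnet 1 estimates a noise-level map,
# subnet 2 predicts the noise residual from (data, noise map) and
# subtracts it (residual learning).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class TwoStageDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.estimator = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                       nn.Conv2d(32, 1, 3, padding=1))
        self.denoiser = nn.Sequential(conv_block(2, 64), conv_block(64, 64),
                                      nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, noisy):
        sigma_map = self.estimator(noisy)                  # stage 1: noise level
        residual = self.denoiser(torch.cat([noisy, sigma_map], dim=1))
        return noisy - residual                            # stage 2: denoise

model = TwoStageDenoiser()
noisy = torch.randn(4, 1, 64, 64)                # toy seismic patches
clean_hat = model(noisy)
loss = nn.L1Loss()(clean_hat, torch.zeros_like(noisy))   # L1 loss, as in paper
```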
41. Sangerbox 2: Enhanced functionalities and update for a comprehensive clinical bioinformatics data analysis platform.
- Author
-
Chen, Di, Xu, Lixia, Xing, Huiwu, Shen, Weitao, Song, Ziguang, Li, Hongjiang, Zhu, Xuqiang, Li, Xueyuan, Wu, Lixin, Jiao, Henan, Li, Shuang, Yan, Jing, He, Yuting, and Yan, Dongming
- Subjects
- *
PATTERN recognition systems , *SUPPORT vector machines , *COMPUTER performance , *RANDOM forest algorithms , *ELECTRONIC data processing - Abstract
In recent years, development in high‐throughput sequencing technologies has brought an increasing application of statistics, pattern recognition, and machine learning to bioinformatics analyses. SangerBox is a widely used tool among many researchers, which encourages us to continuously improve the platform to meet different scientific demands. The new version, SangerBox 2 (http://vip.sangerbox.com), extends and optimizes the functions of interactive graphics and analysis of clinical bioinformatics data. We introduced novel analytical tools such as random forests and support vector machines, as well as corresponding plotting functions. At the same time, we also optimized the performance of the platform and fixed known problems to allow users to perform data analyses more quickly and efficiently. SangerBox 2 improves the speed of analysis, reduces the computational resources required, and provides more analysis methods, greatly promoting research efficiency. Highlights: SangerBox 2.0 has expanded its features by adding new machine learning tools, including random forest and support vector machine (SVM), with improved plotting capabilities. The platform's performance and visualization tools have been significantly optimized, including the introduction of interactive adjustments for heatmaps. SangerBox 2.0 stands out in bioinformatics analysis due to its enhanced multifunctionality, user‐friendliness, and superior performance compared to other platforms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. An Iteratively Reweighted Importance Kernel Bayesian Filtering Approach for High-Dimensional Data Processing.
- Author
-
Liu, Xin
- Subjects
- *
BAYESIAN field theory , *ELECTRONIC data processing - Abstract
This paper proposes an iteratively re-weighted importance kernel Bayes filter (IRe-KBF) method for handling high-dimensional or complex data in Bayesian filtering problems. This innovative approach incorporates importance weights and an iterative re-weighting scheme inspired by iteratively re-weighted least squares (IRLS) to enhance the robustness and accuracy of Bayesian inference. The proposed method does not require explicit specification of prior and likelihood distributions; instead, it learns the kernel mean representations from training data. Experimental results demonstrate the superior performance of this method over traditional KBF methods on high-dimensional datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
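Entry 42 cites IRLS as its inspiration. The sketch below shows that reweighting idea in its plainest form, robust linear regression, not the paper's kernel Bayes filter: points with large residuals are down-weighted on each pass.

```python
# Plain IRLS (the scheme entry 42 draws on): repeatedly solve a weighted
# least-squares problem, down-weighting points with large residuals.
import numpy as np

rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.uniform(-3, 3, 100)]
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0, 0.3, 100)
y[:5] += 15                                       # a few gross outliers

def irls(X, y, n_iter=20, eps=1e-6):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # L1-style reweighting
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

print(irls(X, y))    # close to [1, 2] despite the outliers
```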
43. Returnees and innovation: evidence from Chinese publicly listed firms.
- Author
-
Qiao, Yibo, Ascani, Andrea, Breschi, Stefano, and Morrison, Andrea
- Subjects
- *
ELECTRONIC data processing , *SKILLED labor , *INNOVATIONS in business , *EMPLOYMENT in foreign countries ,ECONOMIC conditions in China - Abstract
As the Chinese economy shifts from factor-driven to innovation-driven growth, Chinese firms increasingly lack highly skilled talent. In this context, attracting high-skill returnees might represent an effective strategy to access knowledge. In this paper, we investigate the relationship between high-skill returnees and the innovation of Chinese publicly listed firms. We construct a unique dataset of 2,499 firms over the period 2002–16 by combining three different data sources (i.e. Chinese Research Data Services Platform, China Stock Market & Accounting Research Database, and LinkedIn). Our results show that different typologies of returnees (employees, technologists, and managers) with different experiences abroad (work vs study) may bring back different skills and affect firm innovation differently. Our main findings show that (1) returnee employees and technologists are positively associated with a firm's patenting; (2) returnees' overseas work experience matters more than study experience; and (3) the positive role of returnees is subject to contingencies related to firm characteristics such as ownership, location, and size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. PointCloud-At: Point Cloud Convolutional Neural Networks with Attention for 3D Data Processing.
- Author
-
Umar, Saidu and Taherkhani, Aboozar
- Subjects
- *
MACHINE learning , *CONVOLUTIONAL neural networks , *POINT cloud , *ELECTRONIC data processing , *POINT processes - Abstract
The rapid growth in technologies for 3D sensors has made point cloud data increasingly available in applications such as autonomous driving, robotics, and virtual and augmented reality. This raises a growing need for deep learning methods to process the data. Point clouds are difficult to use directly as inputs in several deep learning techniques; the difficulty arises from the unstructured and unordered nature of point cloud data, so machine learning models built for images or videos cannot be applied to it directly. Although research on point clouds has gained considerable attention and different methods have been developed over the decade, very few works operate directly on point cloud data, and most convert the point clouds into 2D images or voxels through pre-processing that causes information loss. Methods that work directly on point clouds are at an early stage, which affects the performance and accuracy of the models. Advanced techniques from classical convolutional neural networks, such as the attention mechanism, need to be transferred to methods working directly with point clouds. In this research, an attention mechanism is proposed for deep convolutional neural networks that process point clouds directly. The attention module is based on specific pooling operations designed to be applied directly to point clouds to extract vital information from them. Segmentation of the ShapeNet dataset was performed to evaluate the method. The mean intersection over union (mIoU) score of the proposed framework increased after applying the attention method compared to a base state-of-the-art framework without the attention mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
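Attention-based pooling over per-point features, the general mechanism entry 44 adds, can be sketched in a few lines. The exact module design in the paper differs; the sizes here are invented.

```python
# Attention pooling for point clouds: learn a score per point, softmax
# across points, and take the weighted sum. Unlike max pooling, every
# point contributes, and the result stays permutation-invariant.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, feat_dim // 2), nn.ReLU(),
                                   nn.Linear(feat_dim // 2, 1))

    def forward(self, feats):           # feats: (batch, n_points, feat_dim)
        attn = torch.softmax(self.score(feats), dim=1)   # (B, N, 1)
        return (attn * feats).sum(dim=1)                 # (B, feat_dim)

pool = AttentionPool(feat_dim=64)
point_feats = torch.randn(2, 1024, 64)   # per-point features from a backbone
global_feat = pool(point_feats)          # global descriptor per cloud
print(global_feat.shape)                 # torch.Size([2, 64])
```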
45. DDE KG Editor: A data service system for knowledge graph construction in geoscience.
- Author
-
Hou, Chengbin, Liu, Kaichuang, Wang, Tianheng, Shi, Shunzhong, Li, Yan, Zhu, Yunqiang, Hu, Xiumian, Wang, Chengshan, Zhou, Chenghu, and Lv, Hairong
- Subjects
- *
KNOWLEDGE graphs , *ELECTRONIC data processing , *DATA mining , *COMPUTER science , *EARTH scientists - Abstract
Deep‐time Digital Earth (DDE) is an innovative international big science program, focusing on scientific propositions of earth evolution, changing Earth Science by coordinating global geoscience data, and sharing global geoscience knowledge. To facilitate the DDE program with recent advances in computer science, the geoscience knowledge graph plays a key role in organizing the data and knowledge of multiple geoscience subjects into Knowledge Graphs (KGs), which enables the calculation and inference over geoscience KGs for data mining and knowledge discovery. However, the construction of geoscience KGs is challenging. Though there have been some construction tools, they commonly lack collaborative editing and peer review for building high‐quality large‐scale geoscience professional KGs. To this end, a data service system or tool, DDE KG Editor, is developed to construct geoscience KGs. Specifically, it comes with several distinctive features such as collaborative editing, peer review, contribution records, intelligent assistance, and discussion forums. Currently, global geoscientists have contributed over 60,000 ontologies for 22 subjects. The stability, scalability, and intelligence of the system are regularly improving as a public online platform to better serve the DDE program. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. CAMELS‐SE: Long‐term hydroclimatic observations (1961–2020) across 50 catchments in Sweden as a resource for modelling, education, and collaboration.
- Author
-
Teutschbein, Claudia
- Subjects
- *
ELECTRONIC data processing , *WATER management , *STREAMFLOW , *GEOLOGICAL surveys , *RESEARCH personnel , *WATERSHEDS - Abstract
This paper introduces a community‐accessible dataset comprising daily hydroclimatic variables (precipitation, temperature, and streamflow) observed in 50 catchments in Sweden (median size of 1019 km²). The dataset covers a 60‐year period (1961–2020) and includes information on geographical location, landcover, soil classes, hydrologic signatures, and regulation for each catchment. Data were collected from various sources, such as the Swedish Meteorological and Hydrological Institute, the Swedish Geological Survey, and several Copernicus products provided by the European Environment Agency. The compiled, spatially‐matched, and processed data are publicly available online through the Swedish National Data Service (https://snd.se/en), contributing a new region to the collection of existing CAMELS (Catchment Attributes and Meteorology for Large‐sample Studies) datasets. The CAMELS‐SE dataset spans a wide range of hydroclimatic, topographic, and environmental catchment properties, making it a valuable resource for researchers and practitioners to study hydrological processes, climate dynamics, environmental impacts, and sustainable water management strategies in Nordic regions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Online data service for geologic formations (Lexicons) of China, India, Vietnam and Thailand with one‐click visualizations onto East Asia plate reconstructions.
- Author
-
Du, Wen, Mishra, Suyash, Ogg, James G., Qian, Yuzheng, Chang, Sabrina, Oberoi, Karan, Ault, Aaron, Zahirovic, Sabin, Hou, Hongfei, Raju, D. S. N., Mamallapalli, O'Neil, Ogg, Gabriele M., Li, Haipeng, Scotese, Christopher R., and Dong, Bui
- Subjects
- *
GEOLOGICAL formations , *ELECTRONIC data processing , *GEOLOGICAL time scales , *MAP projection , *PLATE tectonics ,GONDWANA (Continent) - Abstract
Paleogeography is the merger of sediment and volcanic facies, depositional settings, tectonic plate movements, topography, climate patterns and ecosystems through time. The construction of paleogeographic maps on tectonic plate reconstruction models requires a team effort to compile databases, data sharing standards and map projection methods. Two goals of the Paleogeography Working Group of the International Union of Geological Sciences (IUGS) program for Deep‐Time Digital Earth (DDE) are: (1) to interlink online national lexicons for all sedimentary and volcanic formations, and develop online ones for nations that currently lack these; (2) to target specific regions and intervals for testing/showcasing paleogeography output from the merger of these databases. Following those goals, we developed and applied new cloud‐based lexicon data services and interactive visualization techniques to regions in East Asia. This has been a successful collaboration among computer engineers and plate modellers and has involved stratigraphers in India (ONGC), China (Chengdu Univ. Tech., and Chinese Acad. Geol. Sci.), United States (Paleomap Project, and Purdue Univ.), Australia (GPlates visualization team) and Vietnam (Vietnam Nat. Univ.). Independent online lexicons with map‐based and stratigraphy‐based user interfaces have been developed (as of the date of this submission in March 2022) for all Proterozoic to Quaternary formations on the Indian Plate (over 800) and Vietnam (over 200), the majority of the Devonian through Neogene of China (ca. 2000) and partially for Thailand. A multi‐database search system returns all geologic formations of a desired geologic time from these four independent databases. With one click, users can plot the regional extent of one or of all of those regional formations on different plate reconstruction models of that desired age, and these polygons are filled with the lithologic facies pattern. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. High-Precision Instance Segmentation Detection of Micrometer-Scale Primary Carbonitrides in Nickel-Based Superalloys for Industrial Applications.
- Author
-
Zhang, Jie, Zheng, Haibin, Zeng, Chengwei, and Gu, Changlong
- Subjects
- *
HEAT resistant alloys , *DEEP learning , *ELECTRONIC data processing , *ALLOYS , *HOMOGENEITY - Abstract
In industrial production, the identification and characterization of micron-sized second phases, such as carbonitrides in alloys, hold significant importance for optimizing alloy compositions and processes. However, conventional methods based on threshold segmentation suffer from drawbacks, including low accuracy, inefficiency, and subjectivity. Addressing these limitations, this study introduced a carbonitride instance segmentation model tailored for various nickel-based superalloys. The model enhanced the YOLOv8n network structure by integrating the SPDConv module and the P2 small target detection layer, thereby augmenting feature fusion capability and small target detection performance. Experimental findings demonstrated notable improvements: the mAP50 (Box) value increased from 0.676 to 0.828, and the mAP50 (Mask) value from 0.471 to 0.644 for the enhanced YOLOv8n model. The proposed model for carbonitride detection surpassed traditional threshold segmentation methods, meeting requirements for precise, rapid, and batch-automated detection in industrial settings. Furthermore, to assess the carbonitride distribution homogeneity, a method for quantifying dispersion uniformity was proposed and integrated into a data processing framework for seamless automation from prediction to analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
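The SPDConv idea in entry 48, downsampling by moving spatial pixels into channels instead of striding, is easy to illustrate. Below is a stand-in SPD-style block, not the paper's exact module; channel counts and the SiLU activation are assumptions.

```python
# SPD-style (space-to-depth) convolution block: pixel_unshuffle keeps all
# pixel information while halving resolution, then a stride-1 convolution
# mixes the enlarged channel dimension.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPDConv(nn.Module):
    def __init__(self, c_in, c_out, scale=2):
        super().__init__()
        self.scale = scale
        # space-to-depth multiplies channels by scale**2
        self.conv = nn.Conv2d(c_in * scale**2, c_out, 3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x):
        x = F.pixel_unshuffle(x, self.scale)   # (B, C*s*s, H/s, W/s)
        return self.act(self.conv(x))

block = SPDConv(64, 128)
feat = torch.randn(1, 64, 80, 80)
print(block(feat).shape)    # torch.Size([1, 128, 40, 40])
```

Because no pixels are discarded, small targets such as micrometer-scale carbonitrides retain more signal through the downsampling path than with strided convolution.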
49. On the Capabilities of the IREA-CNR Airborne SAR Infrastructure.
- Author
-
Esposito, Carmen, Natale, Antonio, Lanari, Riccardo, Berardino, Paolo, and Perna, Stefano
- Subjects
- *
INFORMATION technology , *DATA warehousing , *ELECTRONIC data processing , *EMERGENCY management , *PRODUCT attributes - Abstract
In this work, the airborne Synthetic Aperture Radar (SAR) infrastructure developed at the Institute for Electromagnetic Sensing of the Environment (IREA) of the National Research Council of Italy (CNR) is described. This infrastructure allows IREA-CNR to plan and execute airborne SAR campaigns and to process the acquired data with a twofold aim: on the one hand, to develop research activities; on the other, to support the emergency prevention and management activities of the Department of Civil Protection of the Italian Presidency of the Council of Ministers, for which IREA-CNR serves as National Centre of Competence. The infrastructure consists of a flight segment and a ground segment that include a multi-frequency airborne SAR sensor based on Frequency-Modulated Continuous Wave (FMCW) technology and operating in the X- and L-bands, an Information Technology (IT) platform for data storage and processing, and an airborne SAR data processing chain. The technical aspects of the flight and ground segments of the infrastructure are presented. Moreover, the response times and characteristics of the final products achievable with the infrastructure are discussed, with the aim of showing its capability to support the monitoring activities required in a possible emergency scenario. In particular, as a case study, the acquisition and subsequent interferometric processing of airborne SAR data over the Stromboli volcanic area in the Sicily region, southern Italy, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. NDP-FD6: A multi-class detection framework for NDP flooding behaviors in IPv6 networks.
- Author
-
夏文豪, 张连成, 郭毅, 张宏涛, and 林斌
- Subjects
- *
MACHINE learning , *TRANSFORMER models , *ELECTRONIC data processing , *ACQUISITION of data , *DEEP learning , *DENIAL of service attacks - Abstract
Current research on NDP flooding behavior detection mainly focuses on detecting RA flooding and NS flooding, with insufficient coverage of flooding via the other messages of the NDP protocol. Moreover, traditional threshold-rule detection methods suffer from poor adaptability and low accuracy, while most AI-based detection methods can only perform binary classification; multi-class detection remains challenging. In short, multi-class flooding detection covering all NDP message types is lacking. Therefore, this paper proposes a multi-class detection framework for NDP protocol flooding behaviors, together with a flooding detection method for the NDP protocol based on time-interval features. The framework constructs the first multi-class dataset for NDP flooding detection through traffic collection and data processing, and compares five machine learning and five deep learning algorithms for training the detection model. The experimental results show that the detection accuracy of the XGBoost algorithm in machine learning reaches 99.18%, and the detection accuracy of the Transformer algorithm in deep learning reaches 98.45%, higher than existing detection methods. Meanwhile, the detection framework can detect nine types of flooding behaviors across all five message types of the NDP protocol and classify the flooding behaviors into multiple types. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
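In the spirit of entry 50, the sketch below trains an XGBoost multi-class classifier on timing-style features. The synthetic features and the four class labels are invented stand-ins for the paper's NDP traffic dataset.

```python
# Multi-class flooding detection sketch: XGBoost over toy inter-arrival
# statistics (synthetic data, not the paper's NDP dataset).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_classes = 300, 4          # e.g. normal + three flooding types
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, 6))
               for k in range(n_classes)])   # toy time-interval features
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# XGBClassifier selects a multi-class objective automatically from the labels.
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```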