4,316 results
Search Results
202. Metadata Application Profiles in U.S. Academic Libraries: A Document Analysis.
- Author
-
Green, Ashlea M.
- Subjects
DIGITAL maps, INSTITUTIONAL repositories, ACADEMIC libraries, METADATA, DATA management, DATA libraries, ACQUISITION of data - Abstract
This paper describes a document analysis of 24 metadata application profiles (MAPs) used by academic libraries in the United States. The MAPs under study were collected from (a) the DLF AIG Metadata Application Profile Clearinghouse and (b) a Google search of .edu domains. Data collection and analysis took place between December 2020 and February 2021. While most of the MAPs under review provided metadata guidelines for digital collections, a small number were intended for institutional repositories or research data management. The study's findings reveal MAP features and content, usage of controlled vocabularies and standards, and other characteristics pertaining to MAP document scope, contents and format in this context. In addition to its discussion of the literature, the paper's findings should help metadata specialists and others involved in digital collection management gain insights useful in the development or revision of their own metadata documentation. Further, these findings offer a current glimpse of metadata application practices among U.S. academic libraries generally. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
203. The role of cultural heritage in wellbeing perceptions: a web-based software analysis in two Italian provinces.
- Author
-
Albanese, Valentina Erminia and Graziano, Teresa
- Subjects
CULTURAL property, SENTIMENT analysis, DATA libraries, INFORMATION retrieval, INTANGIBLE property, CLASSIFICATION - Abstract
Copyright of Il Capitale Culturale: Studies on the Value of Cultural Heritage is the property of Il Capitale Culturale Studies on the Value of Cultural Heritage and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2021
- Full Text
- View/download PDF
204. Finite automata model for leaf disease classification.
- Author
-
Krishnaprasath, V. T. and Preethi, J.
- Subjects
FINITE state machines, NOSOLOGY, PLANT diseases, DATA libraries, AGRICULTURAL equipment, COOPERATIVE societies - Abstract
In this modern era, the detection of plant disease plays a vital role in sustaining the agricultural ecosystem. Although India ranks second in the world in farming, timely crop-related information is still hard to obtain; the Indian Government's farmer portal provides information on pesticides, fertilisers, and farm machinery. To alleviate this problem, the paper describes a model that validates leaf images, predicts leaf disease, and notifies the farmer of impending harvest failure in an effective way so as to stabilise farming income. To support validation, a data set library with predefined, uniformly scaled, regular image patterns of leaf disease is maintained. The research suggests that farmers utilising the model can predict the outbreak of leaf disease early enough to secure the full (100%) yield. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
205. Google to convert Finnish paper mill to data hub.
- Author
-
Kho, Nancy Davis
- Subjects
SALE of business enterprises, FACTORIES, DATA libraries - Abstract
The article reports that Stora Enso will sell its Summa mill in Hamina, Finland to Google Inc. in 2009. It mentions that Google expects to use the site as a new data centre. It states that the sale, which is valued at €40 million and covers all plants, is expected to be completed by the end of the first quarter of 2009. It notes that Google already has 40 data centres around the world to support its computing activities.
- Published
- 2009
206. An intensive healthcare monitoring paradigm by using IoT based machine learning strategies.
- Author
-
Kondaka, Lakshmi Sudha, Thenmozhi, M., Vijayakumar, K., and Kohli, Rashi
- Subjects
DEEP learning, MACHINE learning, LEARNING strategies, DATA libraries, HEALTH care industry, INTERNET of things - Abstract
The Internet of Things (IoT), in association with cloud technologies, is a rising star of the information technology industry, providing innovative gadgets that let several industries automate their needs and monitor events properly without human intervention. One of the most common and pressing needs today is the development of new gadgets that support the healthcare industry, rectify medical flaws, and save human lives. On one side, booming technologies such as the Internet of Things and cloud platforms are available; on the other, there is a drastic need for an intelligent medical gadget that can save lives in critical situations. This paper aims to bridge the two by introducing a new device that combines healthcare with recent technological developments. The proposed approach introduces a new algorithm called iCloud Assisted Intensive Deep Learning (iCAIDL), derived from deep learning norms, which supports both healthcare providers and patients by applying an intelligent cloud system together with machine learning strategies. The iCAIDL algorithm begins by collecting existing health records from the data repository and training the system with deep learning principles. Once the training phase ends, the algorithm receives live data from the patient; this data is treated as testing data, processed using intensive deep learning principles, and the resulting summary is stored in the cloud repository through the IoT features associated with iCAIDL. The process is transparent to both doctors and patients, who can monitor health records in an intelligent manner. A Smart Medical Gadget is designed to collect health records from patients, including heart rate, pressure level, and blood flow, and maintain them in the medical repository for the testing phase. The IoT logic connects with the machine learning process as follows: data accumulated by the Smart Medical Gadget is sent to the server for machine-learning-based processing, where the received data is treated as testing data and results are emulated accordingly; once emulated, those results join the training data for subsequent testing data, so data arriving from the gadget serves first as testing data and afterwards as training data for further medical summaries. The performance of the proposed approach is evaluated on metrics such as the data transfer ratio from the Smart Medical Gadget to the server, storage accuracy, and communication efficiency. Empirical results, obtained through simulation, show a drastic improvement in healthcare parameters when the proposed iCloud Assisted Intensive Deep Learning algorithm is applied. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
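The iCAIDL workflow above boils down to a train/score/store/retrain loop. The sketch below is only a reading of record 206's description; `model`, `stream`, and `cloud` are hypothetical placeholders, and the paper's actual deep learning model and IoT stack are not shown.

```python
# Minimal sketch of the iCAIDL loop as described in record 206 (an
# interpretation, not the authors' code): train on archived health records,
# treat each live gadget reading as test data, store the summary in the
# cloud, then fold the scored reading back into the training set.
def icaidl_loop(history, stream, model, cloud):
    model.fit(history)                      # initial training on the data repository
    for vitals in stream:                   # heart rate, pressure, blood flow readings
        summary = model.predict(vitals)     # live reading treated as testing data
        cloud.store(vitals, summary)        # IoT-side upload of the result summary
        history.append((vitals, summary))   # scored reading becomes training data
        model.fit(history)                  # retrain for upcoming readings
```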
207. How can we capture what is important?: A Case Study on the Appraisal of Government Emails in Austrian Public Archives.
- Author
-
Benauer, Maria
- Subjects
ARCHIVES, STATE government archives, DIGITAL libraries, MUNICIPAL archives, GOVERNMENT archives, LIFE cycles (Biology), CONCEPTUALISM, DATA libraries - Abstract
Although recent national research has drawn attention to the critical function of emails in Austrian administration, Austrian public archives have not yet approached email archiving. This paper aims to provide the Austrian archival community with a starting point for the implementation of email archiving by investigating the process of capture as a basis for successful digital archiving. Using the Upper Austrian State Archive as a case study and employing a qualitative approach, it examines and evaluates current practice in the appraisal and selection of government emails in order to identify conceptual characteristics specific to the Austrian context and to reflect on practical issues that must be considered across the life cycle of Austrian government emails. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
208. An Effective Model of Confidentiality Management of Digital Archives in a Cloud Environment.
- Author
-
Xie, Jian, Xuan, Shaolong, You, Weijun, Wu, Zongda, and Chen, Huiling
- Subjects
DIGITAL libraries, ARCHIVES collection management, CLOUD storage security measures, DATA libraries, EMAIL security, OPTICAL disks, CONFIDENTIAL communications - Abstract
Aiming at the problem of confidentiality management of digital archives on the cloud, this paper presents an effective solution. The basic idea is to deploy a local server between the cloud and each client of an archive system to run a confidentiality management model of digital archives on the cloud, which includes an archive release model and an archive search model. (1) The archive release model is used to strictly encrypt each archive file and the archive data released by an administrator, generate feature data for the archive data, and then submit them to the cloud for storage, ensuring the security of archive-sensitive data. (2) The archive search model is used to transform each query operation defined on the archive data submitted by a searcher so that it can be correctly executed on the feature data in the cloud, ensuring the accuracy and efficiency of archive search. Finally, both theoretical analysis and experimental evaluation demonstrate the good performance of the proposed solution. The results show that, compared with others, our solution has better overall performance in terms of confidentiality, accuracy, efficiency and availability, and can improve the security of archive-sensitive data on an untrusted cloud without compromising the performance of an existing archive management system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
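Record 208's two-model split can be made concrete with a toy sketch. The salted keyword hashing below is an assumed stand-in for the paper's unspecified feature-data scheme, and the ciphertext is a placeholder; it illustrates only the release/search division of labour.

```python
# Toy sketch of the release/search split in record 208: the local server
# uploads only ciphertext plus "feature data" (here, salted keyword hashes),
# and each query is transformed so the cloud can match it without plaintext.
# The hashing scheme is an assumption, not the paper's actual construction.
import hashlib
import os

SALT = os.urandom(16)  # secret kept on the local server, never sent to the cloud

def feature(term):
    return hashlib.sha256(SALT + term.lower().encode()).hexdigest()

def release(doc_id, terms):
    return {"id": doc_id,
            "blob": b"<ciphertext>",                  # strictly encrypted archive file
            "features": {feature(t) for t in terms}}  # searchable feature data

def search(query_term, cloud_records):
    probe = feature(query_term)                       # transformed query
    return [r["id"] for r in cloud_records if probe in r["features"]]

cloud = [release("arch-001", ["budget", "2022"]), release("arch-002", ["personnel"])]
print(search("budget", cloud))  # ['arch-001']
```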
209. Logarithmic heavy traffic error bounds in generalized switch and load balancing systems.
- Author
-
Hurtado-Lange, Daniela, Varma, Sushil Mahavir, and Maguluri, Siva Theja
- Subjects
CLOUD computing, DATA libraries, LOAD balancing (Computer networks), TELECOMMUNICATION, ALGORITHMS - Abstract
Motivated by applications to wireless networks, cloud computing, data centers, etc., stochastic processing networks have been studied in the literature under various asymptotic regimes. In the heavy traffic regime, the steady-state mean queue length is proved to be $\Theta({1}/{\epsilon})$, where $\epsilon$ is the heavy traffic parameter (which goes to zero in the limit). The focus of this paper is on obtaining queue length bounds on pre-limit systems, thus establishing the rate of convergence to heavy traffic. For the generalized switch, operating under the MaxWeight algorithm, we show that the mean queue length is within $\textrm{O}({\log}({1}/{\epsilon}))$ of its heavy traffic limit. This result holds regardless of the complete resource pooling (CRP) condition being satisfied. Furthermore, when the CRP condition is satisfied, we show that the mean queue length under the MaxWeight algorithm is within $\textrm{O}({\log}({1}/{\epsilon}))$ of the optimal scheduling policy. Finally, we obtain similar results for the rate of convergence to heavy traffic of the total queue length in load balancing systems operating under the 'join the shortest queue' routeing algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
210. Extraction and Semantic Representation of Domain-Specific Relations in Spanish Labour Law.
- Author
-
Revenko, Artem and Martín-Chozas, Patricia
- Subjects
KNOWLEDGE graphs, LABOR laws, SPANISH language, DATA libraries, LEGAL literature - Abstract
Copyright of Procesamiento del Lenguaje Natural is the property of Sociedad Espanola para el Procesamiento del Lenguaje Natural and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
211. Demonstration and Understanding of Nano-RAM Novel One-Time Programmable Memory Application.
- Author
-
Ning, Sheyang and Luo, Jia
- Subjects
RECORDS management, ELECTRON tunneling, RF values (Chromatography), MEMORY, DATA libraries, DYNAMIC random access memory - Abstract
In prior research, carbon nanotube (CNT)-based nano-random-access memory (NRAM) uses reset initialization, which obtains >10¹¹ write endurance for the large-time programmable (LTP) application. In this paper, for the first time, an NRAM one-time programmable (OTP) application is proposed, using set initialization for data archiving. Specifically, virgin NRAM cells are all in the high resistance state (HRS), storing bit "0." In contrast, set initialization uses a reversed-polarity voltage to obtain the low resistance state (LRS) for bit "1." Furthermore, physical models of set initialization and retention current degradation are proposed for the first time. The current increment during set initialization can be attributed to electron tunneling and CNT deformation. Retention current degradation may be due to variation of paralleled CNT contacts in the bottleneck zone. As for OTP performance, the median NRAM bit and the 1% tail bit demonstrate data retention of more than 1 billion years and 15 years, respectively, at 150 °C. The tail bit activation energy is 2.41 eV. Finally, no LRS read disturb is found after 10 s of 0.5-V stress on both polarities. The virgin NRAM cells in HRS should be more stable than set-initialized NRAM cells in LRS in terms of both retention time and read disturb. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
212. Network Function Virtualization-Aware Orchestrator for Service Function Chaining Placement in the Cloud.
- Author
-
Hawilo, Hassan, Jammal, Manar, and Shami, Abdallah
- Subjects
CLOUD computing, RATE of return, DATA libraries - Abstract
Network function virtualization (NFV) has been introduced by network service providers to overcome various challenges that hinder them from satisfying the growing demand for networking services with a higher return on investment. The association of NFV with the leading technologies of information technology virtualization and software-defined networking is paving the way for flexible and dynamic orchestration of VNFs, but various challenges still need to be addressed. The VNF instantiation and placement problems on data center (DC) servers are key enablers of the desired flexible and dynamic NFV applications. In this paper, we address the VNF placement problem by providing a novel mixed integer linear programming (MILP) optimization model and a novel heuristic solution, Betweenness centrality Algorithm for Component Orchestration of NFV platform (BACON), for small- and large-scale DC networks. The proposed solution addresses VNF placement while taking into consideration the carrier-grade nature of NFV applications and, at the same time, minimizing the intra- and end-to-end delays of the service function chain (SFC). The proposed approach also enhances the reliability and quality of service (QoS) of the SFC by maximizing the count of functional group members. To evaluate the performance of the proposed solution, this paper conducts a comparative analysis with an NFV-agnostic algorithm and a greedy-k-NFV approach proposed in the literature. This paper also defines the complexity and order of magnitude of the MILP model and BACON. BACON outperforms the greedy algorithms, especially the greedy-k-NFV solution, and has a lower complexity, which is calculated as $O((n^{3}-n^{2})/2)$. The simulation results show that finding an optimized VNF placement can achieve minimal SFC delays and enhance QoS accordingly. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
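Record 212 names betweenness centrality as the measure behind BACON. Purely as an illustration of that measure, the sketch below ranks nodes of an invented topology with networkx; the paper's MILP, delay terms, and carrier-grade constraints are not modeled here.

```python
# Illustration of the graph measure named in record 212: rank candidate
# placement nodes by betweenness centrality and fill the service function
# chain from the most central nodes outward. The topology and placement rule
# are invented for this sketch; this is not the BACON algorithm itself.
import networkx as nx

# Toy graph of candidate placement nodes; edges stand for low-delay adjacency.
g = nx.Graph([("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s2", "s4"), ("s4", "s5")])

centrality = nx.betweenness_centrality(g)
placement_order = sorted(g, key=centrality.get, reverse=True)
print(placement_order)  # s2 and s4 first: they lie on the most shortest paths
```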
213. One store has all? - the backend story of managing geospatial information toward an easy discovery.
- Author
-
Kong, Ningning Nicole
- Subjects
GEOSPATIAL data, METADATA, DATA libraries, HISTORICAL maps, ACQUISITION of data, INFORMATION resources management - Abstract
Geospatial data comes in many formats, varying from historical paper maps to digital information collected by various sensors. Many libraries have started efforts to build a geospatial data portal to connect users with this varied information. For example, GeoBlacklight and OpenGeoportal are two open-source projects that originated at academic institutions and have been adopted by many universities and libraries for geospatial data discovery. While several recent studies have focused on the metadata, usability, and data collection perspectives of geospatial data portals, few have explored the backend stories about the data management that supports the data discovery platform. The objective of this paper is to provide a summary of the geospatial data management strategies involved in geospatial data portal development by reviewing related projects. These strategies include managing historical paper maps, scanned maps, aerial photos, research-generated geospatial information, and web map services. The paper focuses on the data organization, storage, cyberinfrastructure configuration, preservation, and sharing perspectives of these efforts, with the goal of providing a range of options or best management practices for information managers curating geospatial data in their own institutions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
214. Secure archiving system: Integrating object information with document images using mathematical coding techniques.
- Author
-
Kadhim, Inas Jawad and Salman, Ghalib Ahmed
- Subjects
*DATA security, *MATHEMATICAL domains, *INFORMATION retrieval, *SECURITY systems, *IMAGE processing, *DATA libraries - Abstract
Efficient digital archiving systems are indispensable for managing vast amounts of data, facilitating streamlined information retrieval, enabling remote data exchange, and ensuring robust data security. While existing techniques often introduce complexity and security concerns, necessitating larger storage spaces, this paper proposes a new and straightforward approach using mathematical coding to construct a secure archiving system. Our methodology prioritizes simplicity while maintaining robust security measures to archive higher education system (HES) information, particularly document images. The proposed system integrates three key domains: mathematical coding for security, image processing for high-quality image archiving, and archive system development. Specifically, information is encoded into a unique CODE using XOR coding for enhanced security and combined with student names to generate the titles of PDF files containing the scanned documents. Additional security layers are implemented through password-protected PDF files. Benchmarking against other database types reveals that our approach yields a simple, secure system for archiving HES records without requiring large storage capacity or complex security infrastructure. Our findings underscore the effectiveness of our methodology in achieving efficient digital archiving while maintaining data security. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
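Record 214 builds its CODE with XOR coding and folds it into PDF titles. A minimal sketch of that idea follows, with the key handling, name normalization, and title format all assumptions rather than the authors' scheme.

```python
# Minimal sketch of XOR-based record coding as described in record 214. The
# key, normalization, and title layout are assumptions for illustration only.
def xor_code(record_id, key):
    data = record_id.encode("utf-8")
    # XOR each byte of the identifier with a repeating key byte.
    coded = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return coded.hex().upper()

def pdf_title(student_name, record_id, key):
    # Combine the XOR-derived CODE with the student name to label the scan.
    return f"{student_name.replace(' ', '_')}_{xor_code(record_id, key)}.pdf"

print(pdf_title("Jane Doe", "HES-2024-00017", key=b"secret-key"))
```

Because XOR is symmetric, applying the same key to the decoded bytes recovers the original record identifier, which is what makes such a code usable for retrieval as well as labeling.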
215. Assessment for Alzheimer's Disease Advancement Using Classification Models with Rules.
- Author
-
Thabtah, Fadi and Peebles, David
- Subjects
ALZHEIMER'S disease, MACHINE learning, DATA libraries, HEALTH services accessibility, DISEASE management, CLASSIFICATION algorithms, NAIVE Bayes classification - Abstract
Pre-diagnosis of common dementia conditions such as Alzheimer's disease (AD) in the initial stages is crucial to help in early intervention, treatment plan design, disease management, and quicker healthcare access. Current assessments are often stressful, invasive, and unavailable in most countries worldwide. In addition, many cognitive assessments are time-consuming and rarely cover all cognitive domains involved in dementia diagnosis. Therefore, the design and implementation of an intelligent method that detects signs of dementia progression from a few cognitive items, in a manner that is accessible, easy, affordable, quick to perform, and does not require special and expensive resources, is desirable. This paper investigates the issue of dementia progression by proposing a new classification algorithm called Alzheimer's Disease Class Rules (AD-CR). The AD-CR algorithm learns models from distinctive feature subsets that contain rules with low overlap among their cognitive items yet are easily interpreted by clinicians during clinical assessment. An empirical evaluation of datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) data repository shows that the AD-CR algorithm offers good performance (accuracy, sensitivity, etc.) compared with other machine learning algorithms. The AD-CR algorithm was superior to the other algorithms overall, reaching performance above 92%: 92.38% accuracy, 91.30% sensitivity, and 93.50% specificity when processing data subsets with cognitive and demographic attributes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
216. Application of bidirectional LSTM deep learning technique for sentiment analysis of COVID-19 tweets: post-COVID vaccination era.
- Author
-
Akande, Oluwatobi Noah, Lawrence, Morolake Oladayo, and Ogedebe, Peter
- Subjects
DEEP learning, SOCIAL media, SENTIMENT analysis, MICROBLOGS, COVID-19 pandemic, COVID-19, DATA libraries - Abstract
Background: Social media platforms, especially Twitter, have turned out to be major sources of data. They have become platforms that citizens can use to voice their concerns about issues that affect them. Most importantly, during the COVID-19 era, governments and health organizations used these platforms extensively to sensitize people to the safety guidelines they had to adhere to in order to remain safe during the pandemic. As expected, people also used Twitter and other social media platforms to voice their opinions about how governments were handling the COVID-19 pandemic outbreak. Governments and organizations could, therefore, use social media as a feedback mechanism to learn citizens' views of their policies, which could help them make informed policy decisions. Aim: The aim of this paper is to explore the use of the BiLSTM deep learning technique for sentiment analysis of COVID-19 tweets. Methodology: The study retrieved 197,327 tweets from the Nigerian Twitter domain using the #COVID or #COVID-19 hashtags as keywords. The dataset was retrieved within the first month of COVID-19 vaccination in Nigeria, i.e., March 15–June 15, 2021. The BiLSTM deep learning technique was trained using 789,306 sentiment-annotated tweets obtained from the Kaggle Sentiment140 tweet dataset. The preprocessed case study tweets were then used to evaluate the proposed model. Results: With an accuracy of 78.29%, 98,545 (49.93%) positive sentiments and 98,782 (50.06%) negative sentiments were recorded. A precision of 78.26% and a recall of 78.27% were also obtained. However, outliers were observed: tweets unrelated to COVID that nevertheless used the hashtags. Conclusion: This study has revealed the strength of the BiLSTM deep learning technique for sentiment analysis. The results revealed almost balanced sentiment toward the pandemic, with a 49.93% positive disposition compared to a 50.06% negative disposition. This affirmed the impact of the COVID vaccine in dousing citizens' tension once it was made available for public use. However, the presence of outliers in the classified tweets could be a pointer to why aspect-based sentiment analysis might be preferred to sentence-based sentiment analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
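For readers who want the shape of the model in record 216, here is a minimal Keras sketch of a BiLSTM sentiment classifier; the hyperparameters, vocabulary size, and tokenization are illustrative assumptions, not the study's configuration.

```python
# Minimal BiLSTM sentiment classifier sketch in the spirit of record 216:
# train on Sentiment140-style labelled tweets, then score new tweets.
# All hyperparameters are illustrative, not the paper's settings.
import tensorflow as tf

VOCAB_SIZE = 20_000
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),               # token ids -> vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # reads both directions
    tf.keras.layers.Dense(1, activation="sigmoid"),           # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_token_ids, train_labels, validation_split=0.1, epochs=3)
# scores = model.predict(tweet_token_ids)  # >0.5 -> positive sentiment
```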
217. FAIR EVA: Bringing institutional multidisciplinary repositories into the FAIR picture.
- Author
-
Aguilar Gómez, Fernando and Bernal, Isabel
- Subjects
INSTITUTIONAL repositories, DATA libraries, OPEN scholarship, DATA science, DATA management - Abstract
The FAIR Principles are a set of good practices to improve the reproducibility and quality of data in an Open Science context. Different sets of indicators have been proposed to evaluate the FAIRness of digital objects, including datasets that are usually stored in repositories or data portals. However, indicators like those proposed by the Research Data Alliance are formulated from a high-level perspective, leaving room for interpretation, and are not always realistic for particular environments like multidisciplinary repositories. This paper describes FAIR EVA, a new tool developed within the European Open Science Cloud context that is oriented to particular data management systems like open repositories and can be customized to a specific case in a scalable and automatic environment. It aims to be adaptive enough to work for different environments, repository software, and disciplines, taking into account the flexibility of the FAIR Principles. As an example, we present the DIGITAL.CSIC repository as the first target of the tool, gathering the particular needs of a multidisciplinary institution as well as its institutional repository. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
218. Condition Assessment of Highway Bridges Using Textual Data and Natural Language Processing- (NLP-) Based Machine Learning Models.
- Author
-
Feng, De-Cheng, Wang, Wen-Jie, Mangalathu, Sujith, and Sun, Zhen
- Subjects
NATURAL languages, PIERS, DATA libraries, WORD frequency, MACHINE learning, NATURAL language processing, ELECTRONIC data processing, CONCRETE bridges - Abstract
Condition rating of bridges is specified in many countries since it provides a basis for decision-making on maintenance actions such as repair, strengthening, or limitation of passing vehicle weight. In practice, professional engineers check the textual descriptions of damage to bridge members, such as girders, bearings, expansion joints, and piers, acquired from periodic inspections, and then rate the bridge condition. The task is time-consuming and labor-intensive due to the large amount of detailed data buried in the inspection reports. In this paper, a natural language processing- (NLP-) based machine learning (ML) approach is proposed for automated and fast bridge condition rating, which can efficiently extract information on deficiencies in bridge members. The proposed approach involves three major steps, namely, data repository establishment, NLP-based textual data processing, and ML-based bridge condition rating prediction. The data repository is established with the inspection reports of 263 concrete bridges, and in total there are four condition levels for the bridges. Then, the NLP-based textual data processing approach is implemented to calculate word frequencies and word clouds to visualize the characteristics of bridges in different condition levels. Finally, four typical ML techniques are adopted to generate the predictive model of the bridge condition rating. The results indicate that the NLP-based ML prediction model has an accuracy of 89% and is very efficient, so it can be used for large-scale applications such as condition rating of regional-level bridges. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
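As a toy companion to record 218's pipeline, the sketch below vectorizes inspection remarks by term frequency and fits a classifier to condition levels; the two-report corpus and the model choice are stand-ins, not the study's data or its four tuned ML techniques.

```python
# Toy sketch of the text-to-rating pipeline in record 218: TF-IDF features
# over inspection remarks, then a classifier over condition levels. The
# corpus and model here are invented stand-ins for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reports = ["girder corrosion, bearing seized, expansion joint cracked",
           "minor wear on deck surface, piers sound"]
levels = [3, 1]  # condition ratings assigned by engineers

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(reports, levels)
print(clf.predict(["bearing seized and pier spalling near joint"]))
```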
219. Learning Symbolic Expressions: Mixed-Integer Formulations, Cuts, and Heuristics.
- Author
-
Kim, Jongeun, Leyffer, Sven, and Balaprakash, Prasanna
- Subjects
*NONLINEAR operators, *DATA libraries, *SCIENTIFIC computing, *APPLIED mathematics, *SQUARE root, *HEURISTIC - Abstract
In this paper, we consider the problem of learning a regression function without assuming its functional form. This problem is referred to as symbolic regression. An expression tree is typically used to represent a solution function, which is determined by assigning operators and operands to the nodes. Cozad and Sahinidis propose a nonconvex mixed-integer nonlinear program (MINLP), in which binary variables are used to assign operators and nonlinear expressions are used to propagate data values through nonlinear operators, such as square, square root, and exponential. We extend this formulation by adding new cuts that improve the solution of this challenging MINLP. We also propose a heuristic that iteratively builds an expression tree by solving a restricted MINLP. We perform computational experiments and compare our approach with a mixed-integer program–based method and a neural network–based method from the literature. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Funding: This work was supported by the Applied Mathematics activity within the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research [Grant DE-AC02-06CH11357]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0050) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0050). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
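Record 219 centers on expression trees whose nodes the MINLP labels with operators and operands. Here is a tiny sketch of such a tree and its evaluation, with the labels fixed by hand rather than chosen by the optimizer.

```python
# Toy expression tree for the symbolic-regression setting in record 219:
# internal nodes carry operators, leaves carry operands. The MINLP assigns
# these labels via binary variables; here they are fixed by hand.
import math

tree = ("+", ("*", "x", "x"), ("sqrt", "x"))  # encodes x^2 + sqrt(x)

OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       "sqrt": lambda a: math.sqrt(a)}

def evaluate(node, x):
    if node == "x":
        return x
    if isinstance(node, (int, float)):
        return node
    op, *children = node
    return OPS[op](*(evaluate(c, x) for c in children))

print(evaluate(tree, 4.0))  # 18.0 = 16 + 2
```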
220. The Hot Spot Coverage Patrol Problem: Formulations and Solution Approaches.
- Author
-
Luo, Yuchen, Golden, Bruce, and Zhang, Rui
- Subjects
*POLICE vehicles, *DATA libraries, *SEARCH algorithms, *INTEGER programming, *CRIME statistics - Abstract
When designing a patrol route, it is often necessary to pay more attention to locations with high crime rates. In this paper, we study a patrol routing problem for a fleet of patrol cars patrolling a region with a high-crime neighborhood (HCN) consisting of multiple hot spots. Considering the disorder and chaos in the HCN, at least one patrol car is required in the HCN at any given time during the patrol. We call this routing problem the hot spot coverage patrol problem (HSCPP). In the HSCPP, the importance of a patrol location is quantified by a prize, and the prize is collected if a patrol car visits the location. Our objective is to maximize the sum of prizes collected by the patrol cars, obeying all operational requirements. We propose mathematical formulations and develop several solution approaches for the HSCPP. The global approach consists of finding the routing solution for all patrol cars with a single integer programming (IP) formulation. The partition approach involves first partitioning the region geographically and solving the routing problem in each subregion with two IP formulations. Next, we strengthen the partition approach by developing a column generation (CG) approach in which the initial columns of the CG approach are the solutions generated from the partition approach. We conduct a detailed computational case study using instances based on real crime data from Montgomery County, Maryland. To further understand the computational tractability of our solution approaches, we also perform a sensitivity analysis using synthetic instances under various scenarios. History: Accepted by Erwin Pesch, Area Editor for Heuristic Search & Approximation Algorithms. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0192) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0192). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
221. PaPILO: A Parallel Presolving Library for Integer and Linear Optimization with Multiprecision Support.
- Author
-
Gleixner, Ambros, Gottwald, Leona, and Hoen, Alexander
- Subjects
*DATA libraries, *LINEAR programming, *SOFTWARE development tools, *COLUMNS, *GOVERNMENT agencies - Abstract
Presolving has become an essential component of modern mixed integer program (MIP) solvers, both in terms of computational performance and numerical robustness. In this paper, we present PaPILO, a new C++ header-only library that provides a large set of presolving routines for MIP and linear programming problems from the literature. The creation of PaPILO was motivated by the current lack of (a) solver-independent implementations that (b) exploit parallel hardware and (c) support multiprecision arithmetic. Traditionally, presolving is designed to be fast. Whenever necessary, its low computational overhead is usually achieved by strict working limits. PaPILO's parallelization framework aims at reducing the computational overhead also when presolving is executed more aggressively or is applied to large-scale problems. To rule out conflicts between parallel presolve reductions, PaPILO uses a transaction-based design. This helps to avoid both the memory-intensive allocation of multiple copies of the problem and special synchronization between presolvers. Additionally, the use of Intel's Threading Building Blocks library aids PaPILO in efficiently exploiting recursive parallelism within expensive presolving routines, such as probing, dominated columns, or constraint sparsification. We provide an overview of PaPILO's capabilities and insights into important design choices. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: This work has been financially supported by Research Campus MODAL, funded by the German Federal Ministry of Education and Research [Grants 05M14ZAM, 05M20ZBM], and the European Union's Horizon 2020 research and innovation programme under grant agreement No 773897 (plan4res). The content of this paper only reflects the author's views. The European Commission / Innovation and Networks Executive Agency is not responsible for any use that may be made of the information it contains. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0171), as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0171). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
222. Distributionally Robust Chance-Constrained p-Hub Center Problem.
- Author
-
Zhao, Yue, Chen, Zhi, and Zhang, Zhenzhen
- Subjects
*DATA libraries, *GAUSSIAN distribution, *LOCATION problems (Programming), *UNIVERSITY research - Abstract
The p-hub center problem is a fundamental model for the strategic design of hub location. It aims at constructing p fully interconnected hubs and links from nodes to hubs so that the longest path between any two nodes is minimized. Existing literature on the p-hub center problem under uncertainty often assumes a joint distribution of travel times, which is difficult (if not impossible) to elicit precisely. In this paper, we bridge the gap by investigating two distributionally robust chance-constrained models that cover, respectively, an existing stochastic one under independent normal distribution and one that is based on the sample average approximation approach as a special case. We derive deterministic reformulations as a mixed-integer program wherein a large number of constraints can be dynamically added via a constraint-generation approach to accelerate computation. Counterparts of our models in the emerging robust satisficing framework are also discussed. Extensive numerical experiments demonstrate the encouraging out-of-sample performance of our proposed models as well as the effectiveness of the constraint-generation approach. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Funding: This work is partially supported by the National Natural Science Foundation of China [Grants 72101187 and 72021002] and Early Career Scheme from the Hong Kong Research Grants Council General Research Fund [Grant 9043424] and NSFC/RGC Joint Research Scheme N_CityU105/21. Y. Zhao is supported by the Ministry of Education, Singapore, under its 2019 Academic Research Fund Tier 3 grant call [Grant MOE-2019-T3-1-010]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0113) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0113). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
223. Platoon Optimization Based on Truck Pairs.
- Author
-
Bhoopalam, Anirudh Kishore, Agatz, Niels, and Zuidwijk, Rob
- Subjects
*BASE pairs, *DATA libraries, *MOTOR vehicle driving, *ORGANIZATIONAL research, *POLYNOMIAL time algorithms - Abstract
Truck platooning technology allows trucks to drive at short headways to save fuel and associated emissions. However, fuel savings from platooning are relatively small, so forming platoons should be convenient and associated with minimum detours and delays. In this paper, we focus on developing optimization technology to form truck platoons. We formulate a mathematical program for the platoon routing problem with time windows (PRP-TW) based on a time–space network. We provide polynomial-time algorithms to solve special cases of PRP-TW with two-truck platoons. Based on these special cases, we build several fast heuristics. An extensive set of numerical experiments shows that our heuristics perform well. Moreover, we show that simple two-truck platoons already capture most of the potential savings of platooning. History: Accepted by Pascal van Hentenryck, Area Editor for Computational Modeling: Methods and Analysis. Funding: This work was supported by the Netherlands Organization for Scientific Research (NWO) as part of the Spatial and Transport Impacts of Automated Driving [Grant 438-15-161] project. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2020.0302) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2020.0302). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
224. Framework to develop an open‐source forage data network to improve primary productivity and enhance system resiliency.
- Author
-
Ashworth, A. J., Marshall, L., Volenec, J. J., Casler, M. D., Berti, M. T., van Santen, E., Williams, C. L., Gopakumar, V., Foster, J. L., Propst, T., Picasso, V., and Su, J.
- Subjects
DATA libraries, EXTREME weather, AGRICULTURAL diversification, AGRICULTURAL intensification, ONLINE databases, FORAGE, REDUNDANCY in engineering - Abstract
Copyright of Agronomy Journal is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
225. COVID-19 Incidence Proportion as a Function of Regional Testing Strategy, Vaccination Coverage, and Vaccine Type.
- Author
-
Totolian, Areg A., Smirnov, Viacheslav S., Krasnov, Alexei A., Ramsay, Edward S., Dedkov, Vladimir G., and Popova, Anna Y.
- Subjects
VACCINATION coverage, COVID-19, COVID-19 vaccines, COVID-19 pandemic, DATA libraries - Abstract
Introduction: The COVID-19 pandemic has become a serious challenge for humanity almost everywhere in the world. Despite active vaccination around the world, the incidence proportion in different countries varies significantly as of May 2022. The reason may be a combination of demographic, immunological, and epidemiological factors. The purpose of this study was to analyze possible relationships between the COVID-19 incidence proportion in the population and the types of SARS-CoV-2 vaccines used in different countries, taking into account demographic and epidemiological factors. Materials and methods: An initial database of demographic and immunoepidemiological information about the COVID-19 situation in 104 countries was assembled from published official sources and repository data. For each country, the baseline included population size and density; SARS-CoV-2 testing coverage; vaccination coverage; incidence proportion; and a list of the vaccines used, including their relative share among all vaccinations. Subsequently, the initial data set was stratified by population and vaccination coverage. The final data set was subjected to statistical processing, both in general and taking population testing coverage into account. Results: After formation of the final data set (including 53 countries), reported COVID-19 case numbers correlated most strongly with testing coverage and with the proportions of vaccine types used, specifically mRNA (V1), vector (V2), peptide/protein (V3), and whole-virion/inactivated (V4) vaccines. Because an inverse correlation was found between reported COVID-19 case numbers and V2, V3, and V4, these three vaccine types were combined into one analytic group of 'non-mRNA' vaccines (Vnmg). When the relationship between vaccine type and incidence proportion was examined, the minimum incidence proportion was noted at V1:Vnmg ratios (%:%) from 0:100 to 30:70, and the maximum at ratios from 80:20 to 100:0. On the other hand, we have shown that the number of reported COVID-19 cases in different countries largely depends on testing coverage. To offset this factor, countries with low and extremely high levels of testing were excluded from the data set; it was then confirmed that the largest numbers of reported COVID-19 cases occurred in countries where V1 vaccines dominated, and the fewest in countries where Vnmg vaccines dominated. Conclusion: In this paper, we show for the first time that reported COVID-19 incidence proportion depends not only on SARS-CoV-2 testing and vaccination coverage, which is quite logical, but probably also on the vaccine types used. With the same vaccination level and testing coverage, countries that predominantly use vector and whole-virion vaccines show an incidence proportion significantly lower than countries that predominantly use mRNA vaccines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
226. A semantic and service-based approach for adaptive multi-structured data curation in data lakehouses.
- Author
-
Zouari, Firas, Ghedira-Guegan, Chirine, Boukadi, Khouloud, and Kabachi, Nadia
- Subjects
DATA curation, DATA libraries, DATA management, DATA scrubbing, DATA quality, ONTOLOGIES (Information retrieval), PIPELINE inspection - Abstract
Recently, several data management architectures have emerged to cope with the challenges imposed by big data. Among them, data lakehouses are receiving much interest from industry and academia due to their ability to hold disparate multi-structured batch and streaming data sources in a single data repository. The heterogeneous and complex nature of the data requires a dedicated process to improve its quality and retrieve value from it. Data curation encompasses several tasks that clean and enrich data to ensure it continues to fit user requirements. Nevertheless, most existing data curation approaches lack the dynamism, flexibility, and customization needed to constitute a data curation pipeline that aligns with end-user requirements, which may vary according to the user's decision context. Moreover, they are dedicated to curating only a single structural type of batch data source (e.g., semi-structured). Considering the changing requirements of users and the need to build a customized data curation pipeline according to user and data source characteristics, we propose a service-based framework for adaptive data curation in data lakehouses that encompasses five modules: data collection, data quality evaluation, data characterization, curation service composition, and data curation. The proposed framework is built upon a new modular ontology for data characterization and evaluation and a curation service composition approach, both detailed in this paper. The experimental findings validate the contributions' performance in terms of effectiveness and execution time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
227. Use of Elasticsearch-based business intelligence tools for integration and visualization of biological data.
- Author
-
Scott-Boyer, Marie-Pier, Dufour, Pascal, Belleau, François, Ongaro-Carcy, Regis, Plessis, Clément, Périn, Olivier, and Droit, Arnaud
- Subjects
DATA libraries, BUSINESS intelligence, DATA visualization, BIOLOGICAL databases, BIOCOMPLEXITY, KNOWLEDGE transfer, ELECTRIC connectors - Abstract
The emergence of massive datasets exploring the multiple levels of molecular biology has made their analysis and knowledge transfer more complex. Flexible tools to manage big biological datasets could be of great help in standardizing the usage of developed data visualization and integration methods. Business intelligence (BI) tools have been used in many fields as exploratory tools. They have numerous connectors to link data repositories with a unified graphic interface, offering an overview of data and facilitating interpretation for decision makers. BI tools could be a flexible and user-friendly way of handling molecular biological data with interactive visualizations. However, it is rather uncommon to see such tools used for the exploration of massive and complex datasets in biological fields. We believe two main obstacles could be the reason. First, the means of importing data into BI tools are often incompatible with biological databases. Second, BI tools may not be adapted to certain particularities of complex biological data, namely the size and variability of datasets and the availability of specialized visualizations. This paper highlights the use of five BI tools (Elastic Kibana, Siren Investigate, Microsoft Power BI, Salesforce Tableau and Apache Superset) that are compatible with the massive data management repository engine Elasticsearch. Four case studies are discussed in which these BI tools were applied to biological datasets with different characteristics. We conclude that the performance of the tools depends on the complexity of the biological questions and the size of the datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
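Record 227 hinges on BI tools reading from Elasticsearch. For orientation, a minimal query through the official Python client is sketched below; the host, index name, and fields are hypothetical.

```python
# Minimal Elasticsearch query sketch for the setting in record 227. The
# index, fields, and host are hypothetical; only the client calls are real.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="gene_expression",            # hypothetical biological index
    query={"bool": {"filter": [
        {"term": {"tissue": "liver"}},
        {"range": {"fold_change": {"gte": 2.0}}},
    ]}},
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["gene"], hit["_source"]["fold_change"])
```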
228. A comparison of neuroelectrophysiology databases.
- Author
-
Subash, Priyanka, Gray, Alex, Boswell, Misque, Cohen, Samantha L., Garner, Rachael, Salehi, Sana, Fisher, Calvary, Hobel, Samuel, Ghosh, Satrajit, Halchenko, Yaroslav, Dichter, Benjamin, Poldrack, Russell A., Markiewicz, Chris, Hermes, Dora, Delorme, Arnaud, Makeig, Scott, Behan, Brendan, Sparks, Alana, Arnott, Stephen R, and Wang, Zhengjia
- Subjects
DATA structures, INFORMATION sharing, DATABASES, DATA integration, DATA libraries, ARCHIVES, NURSING informatics - Abstract
As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. The aim of this review is to describe archives that provide researchers with tools to store, share, and reanalyze both human and non-human neurophysiology data based on criteria that are of interest to the neuroscientific community. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) are utilized by these archives to make data more accessible to researchers by implementing a common standard. As the necessity for integrating large-scale analysis into data repository platforms continues to grow within the neuroscientific community, this article will highlight the various analytical and customizable tools developed within the chosen archives that may advance the field of neuroinformatics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
229. DALib: A Curated Repository of Libraries for Data Augmentation in Computer Vision.
- Author
-
Amarù, Sofia, Marelli, Davide, Ciocca, Gianluigi, and Schettini, Raimondo
- Subjects
DATA augmentation, LIBRARY storage centers, MACHINE learning, DATA libraries, COMPUTER vision, LIBRARY design & construction - Abstract
Data augmentation is a fundamental technique in machine learning that plays a crucial role in expanding the size of training datasets. By applying various transformations or modifications to existing data, data augmentation enhances the generalization and robustness of machine learning models. In recent years, the development of several libraries has simplified the utilization of diverse data augmentation strategies across different tasks. This paper focuses on the exploration of the most widely adopted libraries specifically designed for data augmentation in computer vision tasks. Here, we aim to provide a comprehensive survey of publicly available data augmentation libraries, facilitating practitioners to navigate these resources effectively. Through a curated taxonomy, we present an organized classification of the different approaches employed by these libraries, along with accompanying application examples. By examining the techniques of each library, practitioners can make informed decisions in selecting the most suitable augmentation techniques for their computer vision projects. To ensure the accessibility of this valuable information, a dedicated public website named DALib has been created. This website serves as a centralized repository where the taxonomy, methods, and examples associated with the surveyed data augmentation libraries can be explored. By offering this comprehensive resource, we aim to empower practitioners and contribute to the advancement of computer vision research and applications through effective utilization of data augmentation techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
230. A Data Quality Model for Master Data Repositories.
- Author
-
Gualo, Fernando, Caballero, Ismael, Rodríguez, Moisés, and Piattini, Mario
- Subjects
DATA libraries, DATA quality, DATA modeling, DATA management - Abstract
Master data has been revealed as one of the most potent instruments to guarantee adequate levels of data quality. The main contribution of this paper is a data quality model to guide repeatable and homogeneous evaluations of the level of data quality of master data repositories. This data quality model follows several international open standards: ISO/IEC 25012, ISO/IEC 25024, and ISO 8000-1000, enabling compliance certification. A case study of applying the data quality model to an organizational master data repository has been carried out to demonstrate the applicability of the data quality model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
231. Modern air, englacial and permafrost temperatures at high altitude on Mt Ortles (3905 m a.s.l.), in the eastern European Alps.
- Author
-
Carturan, Luca, De Blasi, Fabrizio, Dinale, Roberto, Dragà, Gianfranco, Gabrielli, Paolo, Mair, Volkmar, Seppi, Roberto, Tonidandel, David, Zanoner, Thomas, Zendrini, Tiziana Lazzarina, and Dalla Fontana, Giancarlo
- Subjects
PERMAFROST, HIGH temperatures, ALPINE glaciers, DATA libraries, EARTH temperature, BEDROCK, GLACIERS - Abstract
The climatic response of mountain permafrost and glaciers located in high-elevation mountain areas has major implications for the stability of mountain slopes and related geomorphological hazards, water storage and supply, and preservation of palaeoclimatic archives. Despite a good knowledge of physical processes that govern the climatic response of mountain permafrost and glaciers, there is a lack of observational datasets from summit areas. This represents a crucial gap in knowledge and a serious limit for model-based projections of future behaviour of permafrost and glaciers. A new observational dataset is available for the summit area of Mt Ortles, which is the highest summit of South Tyrol, Italy. This paper presents a series of air, englacial, soil surface and rock wall temperatures collected between 2010 and 2016. Details are provided regarding instrument types and characteristics, field methods, and data quality control and assessment. The obtained data series are available through an open data repository (10.5281/zenodo.8330289, Carturan et al., 2023). In the observed period, the mean annual air temperature at 3830 m a.s.l. was between -7.8 and -8.6 °C. The most shallow layers of snow and firn (down to a depth of about 10 m) froze during winter. However, melt water percolation restored isothermal conditions during the ablation season, and the entire firn layer was found at the melting pressure point. Glacier ice is cold, but only from about 30 m depth. Englacial temperature decreases with depth, reaching a minimum of almost -3 °C close to the bedrock, at 75 m depth. A small glacier located at 3470 m a.s.l., close to the summit of Mt Ortles, was also found in cold conditions down to a depth of 9.5 m. The mean annual ground surface temperature was negative for all but one monitored site, indicating cold ground conditions and the existence of permafrost in nearly all debris-mantled slopes of the summit. Similarly, the mean annual rock wall temperature was negative at most monitored sites, except the lowest one at 3030 m a.s.l. This suggests that the rock faces of the summit are affected by permafrost at all exposures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
232. Infrastructure tools to support an effective Radiation Oncology Learning Health System.
- Author
-
Kapoor, Rishabh, Sleeman, William C, Ghosh, Preetam, and Palta, Jatinder
- Subjects
SURGICAL gloves, CONCEPT mapping, CLINICAL decision support systems, DATA libraries, RDF (Document markup language), INSTRUCTIONAL systems, DATABASES - Abstract
Purpose: The Radiation Oncology Learning Health System (RO‐LHS) is a promising approach to improve the quality of care by integrating clinical, dosimetry, treatment delivery, and research data in real time. This paper describes a novel set of tools to support the development of an RO‐LHS and the current challenges they can address. Methods: We present a knowledge graph‐based approach to map radiotherapy data from clinical databases to an ontology‐based data repository using FAIR concepts. This strategy ensures that the data are easily discoverable and accessible and can be used by other clinical decision support systems. It allows for visualization, presentation, and analysis of valuable information to identify trends and patterns in patient outcomes. We designed a search engine that utilizes ontology‐based keyword searching and synonym‐based term matching, leverages the hierarchical nature of ontologies to retrieve patient records based on parent and child classes, and connects to the BioPortal database to retrieve relevant clinical attributes. To identify similar patients, a method involving text corpus creation and vector embedding models (Word2Vec, Doc2Vec, GloVe, and FastText) is employed, using cosine similarity and distance metrics. Results: The data pipeline and tool were tested with 1660 patient clinical and dosimetry records, resulting in 504 180 RDF (Resource Description Framework) tuples, and data relationships were visualized using graph‐based representations. Patient similarity analysis using embedding models showed that the Word2Vec model had the highest mean cosine similarity, while the GloVe model exhibited more compact embeddings with lower Euclidean and Manhattan distances. Conclusions: The framework and tools described support the development of an RO‐LHS. By integrating diverse data sources and facilitating data discovery and analysis, they contribute to continuous learning and improvement in patient care. The tools enhance the quality of care by enabling the identification of cohorts, clinical decision support, and the development of clinical studies and machine learning programs in radiation oncology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
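Entry 232 identifies similar patients by embedding record text with Word2Vec (among other models) and comparing vectors with cosine similarity. A minimal sketch of that step using gensim follows; the toy patient token sequences are placeholders, not the authors' corpus:

```python
import numpy as np
from gensim.models import Word2Vec

# Toy "text corpus": one token sequence per patient record
# (real records would encode diagnoses, dosimetry terms, etc.).
patients = [
    ["prostate", "imrt", "70gy", "stage_ii"],
    ["prostate", "vmat", "78gy", "stage_ii"],
    ["lung", "sbrt", "50gy", "stage_i"],
]

model = Word2Vec(sentences=patients, vector_size=32, min_count=1, seed=7)

def patient_vector(tokens):
    """Average the word vectors of a record's tokens."""
    return np.mean([model.wv[t] for t in tokens], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

v0, v1, v2 = (patient_vector(p) for p in patients)
print("patient0 vs patient1:", cosine(v0, v1))  # similar prostate cases
print("patient0 vs patient2:", cosine(v0, v2))  # dissimilar lung case
```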
233. Enhancing Software Project Monitoring with Multidimensional Data Repository Mining.
- Author
-
Reszka, Łukasz, Sosnowski, Janusz, and Dobrzyński, Bartosz
- Subjects
DATA libraries ,DATA mining ,DEEP learning ,MULTIDIMENSIONAL databases ,TIME perspective ,COMPUTER software ,COMPUTER software development - Abstract
Software project development and maintenance activities are recorded in various repositories. The data contained in these repositories have been widely used in studies on specific problems, e.g., predicting bug appearance, allocating issues to developers, and identifying duplicated issues. However, the analysis schemes developed so far are usually based on simplified data models, while the details of issue reports are neglected. Confronting this problem requires a deep and wide-ranging exploration of software repository contents adapted to their specificities, which differs significantly from classical data mining. This paper targets three aspects: the structural and semantic exploration of repositories, the derivation of characteristic features in value and time perspectives, and the definition of the space of project monitoring goals. The considerations presented provide a holistic picture of the project development process, useful in assessing its efficiency and identifying imperfections. The analysis introduced in this work was verified using open-source and some commercial software project repositories. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
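Entry 233 derives characteristic features of issue reports in value and time perspectives. A minimal pandas sketch of that idea follows; the field names and data are illustrative, not the paper's schema:

```python
import pandas as pd

# Illustrative issue-report extract; real repositories (e.g., Jira, GitHub)
# expose far richer structural and semantic fields.
issues = pd.DataFrame({
    "issue_id": [1, 2, 3, 4],
    "type": ["bug", "bug", "feature", "bug"],
    "opened": pd.to_datetime(["2023-01-02", "2023-01-05", "2023-01-07", "2023-02-01"]),
    "closed": pd.to_datetime(["2023-01-09", "2023-01-06", "2023-02-20", "2023-02-03"]),
})

# Time-perspective feature: resolution time per issue type.
issues["days_to_close"] = (issues["closed"] - issues["opened"]).dt.days
print(issues.groupby("type")["days_to_close"].agg(["mean", "max"]))

# Value-perspective feature: monthly inflow of new issues.
print(issues.set_index("opened").resample("MS")["issue_id"].count())
```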
234. Leveraging electronic medical records for HIV testing, care, and treatment programming in Kenya—the national data warehouse project.
- Author
-
Ndisha, Margaret, Hassan, Amin S., Ngari, Faith, Munene, Evans, Gikura, Mary, Kimutai, Koske, Muthoka, Kennedy, Murie, Lisa Amai, Tolentino, Herman, Odhiambo, Jacob, Mwele, Pascal, Odero, Lydia, Mbaire, Kate, Omoro, Gonza, and Kimanga, Davies O.
- Subjects
ELECTRONIC health records ,DATA warehousing ,DIAGNOSIS of HIV infections ,HIV ,DATA libraries ,HEALTH information systems - Abstract
Background: Aggregate electronic data repositories and population-level cross-sectional surveys play a critical role in HIV programme monitoring and surveillance for data-driven decision-making. However, these data sources have inherent limitations, including an inability to respond to public health priorities in real time and to follow up clients longitudinally for the ascertainment of long-term outcomes. Electronic medical records (EMRs) have tremendous potential to bridge these gaps when harnessed into a centralised data repository. We describe the evolution of EMRs and the development of a centralised national data warehouse (NDW) repository. Further, we describe the distribution and representativeness of data from the NDW and explore its potential for population-level surveillance of HIV testing, care and treatment in Kenya. Main body: Health information systems in Kenya have evolved from simple paper records to web-based EMRs with features that support data transmission to the NDW. The NDW design includes four layers: a data warehouse application programming interface (DWAPI), central staging, an integration service, and a data visualization application. The number of health facilities uploading individual-level data to the NDW increased from 666 in 2016 to 1,516 in 2020, covering 41 of the 47 counties in Kenya. By the end of 2020, the NDW hosted longitudinal data from 1,928,458 individuals ever started on antiretroviral therapy (ART). In 2020, there were 936,869 individuals active on ART in the NDW, compared to 1,219,276 individuals on ART reported in the aggregate-level Kenya Health Information System (KHIS), suggesting 77% coverage. The proportional distribution of individuals on ART by county in the NDW was consistent with that from KHIS, suggesting representativeness and generalizability at the population level. Conclusion: The NDW presents opportunities for individual-level HIV programme monitoring and surveillance because of its longitudinal design and its ability to respond to public health priorities in real time. A comparison with estimates from KHIS demonstrates that the NDW has high coverage and that the data may be representative and generalizable at the population level. The NDW is therefore a unique and complementary resource for HIV programme monitoring and surveillance, with the potential to strengthen timely, data-driven decision-making towards HIV epidemic control in Kenya. Database link: (https://dwh.nascop.org/). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
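The 77% figure in entry 234 is the ratio of individuals active on ART in the NDW to the KHIS aggregate, and representativeness is judged by comparing proportional distributions across counties. A minimal sketch of that comparison with made-up county counts:

```python
import pandas as pd

# Illustrative county-level counts (not real NDW/KHIS figures).
df = pd.DataFrame({
    "county": ["Nairobi", "Kisumu", "Mombasa"],
    "ndw_active_on_art": [120_000, 60_000, 40_000],
    "khis_active_on_art": [150_000, 80_000, 55_000],
})

# Overall coverage: NDW total as a proportion of the KHIS aggregate.
coverage = df["ndw_active_on_art"].sum() / df["khis_active_on_art"].sum()
print(f"coverage: {coverage:.0%}")

# Representativeness: compare proportional distributions across counties.
ndw_share = df["ndw_active_on_art"] / df["ndw_active_on_art"].sum()
khis_share = df["khis_active_on_art"] / df["khis_active_on_art"].sum()
print(pd.DataFrame({"county": df["county"],
                    "ndw_share": ndw_share.round(3),
                    "khis_share": khis_share.round(3)}))
```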
235. An Interior Point–Inspired Algorithm for Linear Programs Arising in Discrete Optimal Transport.
- Author
-
Zanetti, Filippo and Gondzio, Jacek
- Subjects
- *
INTERIOR-point methods , *SCHUR complement , *DATA libraries , *ALGORITHMS , *SPARSE approximations - Abstract
Discrete optimal transport problems give rise to very large linear programs (LPs) with a particular structure of the constraint matrix. In this paper, we present a hybrid algorithm that mixes an interior point method (IPM) and column generation, specialized for the LP originating from the Kantorovich optimal transport problem. Knowing that optimal solutions of such problems display a high degree of sparsity, we propose a column generation–like technique to force all intermediate iterates to be as sparse as possible. The algorithm is implemented nearly matrix-free. Indeed, most of the computations avoid forming the huge matrices involved and solve the Newton system using only a much smaller Schur complement of the normal equations. We prove theoretical results about the sparsity pattern of the optimal solution, exploiting the graph structure of the underlying problem. We use these results to mix iterative and direct linear solvers efficiently in a way that avoids producing preconditioners or factorizations with excessive fill-in and at the same time guaranteeing a low number of conjugate gradient iterations. We compare the proposed method with two state-of-the-art solvers and show that it can compete with the best network optimization tools in terms of computational time and memory use. We perform experiments with problems reaching more than four billion variables and demonstrate the robustness of the proposed method. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms–Continuous. Funding: F. Zanetti received funding from the University of Edinburgh, in the form of a PhD scholarship. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0184) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0184). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
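Entry 235 builds on the Kantorovich LP: minimize the total transport cost subject to fixed row and column sums on the transport plan. The following baseline dense formulation with scipy states only the toy problem, nothing like the authors' specialized matrix-free IPM; it also illustrates the sparsity claim, since an optimal basic solution has at most m+n-1 nonzeros:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 5, 6                      # tiny instance; the paper's experiments reach billions of variables
a = np.full(m, 1.0 / m)          # source marginal
b = np.full(n, 1.0 / n)          # target marginal
C = rng.random((m, n))           # ground-cost matrix

# Equality constraints: row sums equal a, column sums equal b.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j x_ij = a_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0            # sum_i x_ij = b_j
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
X = res.x.reshape(m, n)
print("optimal cost:", res.fun)
print("nonzeros in optimal plan:", np.count_nonzero(X > 1e-9), "<= m+n-1 =", m + n - 1)
```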
236. A Cost-Effective Sequential Route Recommender System for Taxi Drivers.
- Author
-
Liu, Junming, Teng, Mingfei, Chen, Weiwei, and Xiong, Hui
- Subjects
- *
RECOMMENDER systems , *DEEP learning , *OPTIMIZATION algorithms , *DATA libraries , *SEARCH algorithms , *TAXICABS - Abstract
This paper develops a cost-effective sequential route recommender system to provide real-time routing recommendations for vacant taxis searching for the next passenger. We propose a prediction-and-optimization framework to recommend the searching route that maximizes the expected profit of the next successful passenger pickup based on the dynamic taxi demand-supply distribution. Specifically, this system features a deep learning-based predictor that dynamically predicts the passenger pickup probability on a road segment and a recursive searching algorithm that recommends the optimal searching route. The predictor integrates a graph convolution network (GCN) to capture the spatial distribution and a long short-term memory (LSTM) to capture the temporal dynamics of taxi demand and supply. The GCN-LSTM model can accurately predict the pickup probability on a road segment with the consideration of potential taxi oversupply. Then, the dynamic distribution of pickup probability is fed into the route optimization algorithm to recommend the optimal searching routes sequentially as route inquiries emerge in the system. The recursion tree-based route optimization algorithm can significantly reduce the computational time and provide the optimal routes within seconds for real-time implementation. Finally, extensive experiments using Beijing Taxi GPS data demonstrate the effectiveness and efficiency of the proposed recommender system. History: Accepted by Ram Ramesh, Area Editor for Data Science and Machine Learning. Funding: This work was partially supported by the Hong Kong Research Grants Council [Grants CityU 21500220, CityU 11504322] and the National Natural Science Foundation of China [Grant 72201222]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0112) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0112). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
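Entry 236 recommends routes by recursively maximizing the expected profit of the next successful pickup over a road graph. A stripped-down sketch of that optimization step follows; the toy graph, pickup probabilities, and profit model are placeholders, and the real system drives the search with GCN-LSTM predictions and a recursion-tree algorithm:

```python
# Toy road network: segment -> list of next segments.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
pickup_p = {"A": 0.1, "B": 0.3, "C": 0.2, "D": 0.4}   # predicted per-segment pickup probability
profit = {"A": 5.0, "B": 8.0, "C": 6.0, "D": 9.0}     # expected fare if picked up there
cost_per_hop = 0.5                                     # cruising cost per segment

def best_route(seg, depth):
    """Return (expected profit, route) when searching up to `depth` more segments."""
    p = pickup_p[seg]
    base = p * profit[seg]
    if depth == 0 or not graph[seg]:
        return base, [seg]
    # With probability (1 - p) no pickup happens here and the taxi moves on.
    cont, route = max((best_route(nxt, depth - 1) for nxt in graph[seg]),
                      key=lambda t: t[0])
    return base + (1 - p) * (cont - cost_per_hop), [seg] + route

value, route = best_route("A", depth=3)
print(f"route {'->'.join(route)} with expected profit {value:.2f}")
```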
237. Data for Digital Forensics: Why a Discussion on "How Realistic is Synthetic Data" is Dispensable.
- Author
-
Göbel, Thomas, Baier, Harald, and Breitinger, Frank
- Subjects
DIGITAL forensics ,DATA libraries ,FORENSIC sciences ,RESEARCH personnel - Abstract
Digital forensics depends on data sets for various purposes, such as concept evaluation, educational training, and tool validation. Researchers have gathered such data sets into repositories and created data simulation frameworks for producing large amounts of data. Synthetic data often faces skepticism due to its perceived deviation from real-world data, raising doubts about its realism. This paper addresses this concern, arguing that there is no definitive answer. We focus on four common digital forensic use cases that rely on data. Through these, we elucidate the specifications and prerequisites of data sets within their respective contexts. Our discourse uncovers that both real-world and synthetic data are indispensable for advancing digital forensic science, software, tools, and the competence of practitioners. Additionally, we provide an overview of available data set repositories and data generation frameworks, contributing to the ongoing dialogue on the utility of digital forensic data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
238. Requiem for the spiritual experience: Reconceptualising 'quality of the environment' by looking at the renovation process of the Samen district in Mashhad, Iran.
- Author
-
Ghalandarian, Iman and Goharipour, Hamed
- Subjects
ENVIRONMENTAL quality ,SPIRITUALITY ,DEMOGRAPHIC surveys ,DATA libraries ,POPULATION statistics - Abstract
Imam Reza's holy shrine in the Samen area of Mashhad, and its impact on the urban fabric of the area, has always been of interest to residents and pilgrims. In addition to being an area where people live and businesses are based, the district has continuously supported the sacred act of pilgrimage. Although mainstream sources have defined the quality of Samen's environment mostly from a physical and psychological perspective, this neighbourhood fabric also has spiritual values. This paper aims to reconceptualise the quality of the environment by looking at the renovation process that the district has experienced to date. The research approach is qualitative, and grounded theory, including descriptive techniques, frames the methodology. The philosophical position of the study is interpretivism, and the research strategy is abductive. We collected data through libraries (documents) and survey techniques (observation and interview). The statistical population surveyed comprised people well informed about the plan and the district. We then conducted theoretical sampling through 28 semi-structured interviews, continuing until data saturation was reached. Using MAXQDA, we coded the interviews in three phases: open, axial, and selective. The findings show that the quality of the environment is a multilayered concept that includes management, physical, economic, sociocultural and environmental dimensions, helping planners and policymakers respond to needs in the physical, psychological and spiritual spheres. In the case of the Samen district, decision makers must develop all aspects of the environment's quality, including those related to the pilgrimage culture. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
239. Arithmetic Study about Efficiency in Network Topologies for Data Centers.
- Author
-
Roig, Pedro Juan, Alcaraz, Salvador, Gilly, Katja, Bernad, Cristina, and Juiz, Carlos
- Subjects
DATA libraries ,CLOUD computing ,INTERNET of things ,ARITHMETIC ,COEFFICIENTS (Statistics) - Abstract
Data centers are attracting more and more attention due to the rapid increase of IoT deployments, which may result in the implementation of smaller facilities closer to the end users as well as larger facilities up in the cloud. In this paper, an arithmetic study is carried out to measure a coefficient related to both the average number of hops among nodes and the average number of links among devices for a range of typical network topologies fit for data centers. Such topologies are either tree-like or graph-like designs, and the coefficient provides a balance between performance and simplicity, with lower values indicating a better compromise between both factors in redundant architectures. The motivation of this contribution is to craft a coefficient that is easy to calculate by applying simple arithmetic operations. This coefficient can be seen as another tool to compare network topologies in data centers, one that could act as a tie-breaker when selecting a given design if other parameters offer contradictory results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
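The abstract of entry 239 does not give the exact formula of the coefficient, so the combination below (average hop count times average links per device) is an illustrative guess; the two underlying metrics themselves are standard and can be computed with networkx:

```python
import networkx as nx

def topology_metrics(g: nx.Graph):
    avg_hops = nx.average_shortest_path_length(g)               # mean hops between node pairs
    avg_links = 2 * g.number_of_edges() / g.number_of_nodes()   # mean links per device
    # Illustrative combined coefficient: lower values favour designs that are
    # both short-path (performance) and sparsely wired (simplicity).
    return avg_hops, avg_links, avg_hops * avg_links

# Compare a tree-like design against a graph-like (redundant) one.
tree = nx.balanced_tree(r=2, h=3)               # binary tree of height 3
torus = nx.grid_2d_graph(4, 4, periodic=True)   # 2-D torus, common in DC fabrics

for name, g in [("tree", tree), ("torus", torus)]:
    hops, links, coeff = topology_metrics(g)
    print(f"{name}: avg hops {hops:.2f}, avg links {links:.2f}, coefficient {coeff:.2f}")
```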
240. Muchos cadáveres, pocas soluciones. Muertes masivas y cementerios en Caracas: 1764-1856 [Many corpses, few solutions: mass deaths and cemeteries in Caracas, 1764-1856].
- Author
-
Altez, Rogelio
- Subjects
CEMETERIES ,SPANISH colonies ,DATA libraries ,MASS burials ,WESTERN countries ,DEAD ,SECULARIZATION - Abstract
Copyright of Historia Regional is the property of Historia Regional and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
241. The secret life of garnets: a comprehensive, standardized dataset of garnet geochemical analyses integrating localities and petrogenesis.
- Author
-
Chiama, Kristen, Gabor, Morgan, Lupini, Isabella, Rutledge, Randolph, Nord, Julia Ann, Zhang, Shuang, Boujibar, Asmaa, Bullock, Emma S., Walter, Michael J., Lehnert, Kerstin, Spear, Frank, Morrison, Shaunna M., and Hazen, Robert M.
- Subjects
ANALYTICAL geochemistry ,DATA libraries ,GARNET ,PETROGENESIS ,DIORITE ,DATABASES ,DATA science - Abstract
Integrating mineralogy with data science is critical to modernizing Earth materials research and its applications to geosciences. Data were compiled on 95 650 garnet sample analyses from a variety of sources, ranging from large repositories (EarthChem, RRUFF, MetPetDB) to individual peer-reviewed literature. An important feature is the inclusion of mineralogical "dark data" from papers published prior to 1990. Garnets are commonly used as indicators of formation environments, which directly correlate with their geochemical properties; thus, they are an ideal subject for the creation of an extensive data resource that incorporates composition, locality information, paragenetic mode, age, temperature, pressure, and geochemistry. For the data extracted from existing databases and literature, we increased the resolution of several key aspects, including petrogenetic and paragenetic attributes, which we extended from generic material type (e.g., igneous, metamorphic) to more specific rock-type names (e.g., diorite, eclogite, skarn) and locality information, increasing specificity by examining the continent, country, area, geological context, longitude, and latitude. Likewise, we utilized end-member and quality index calculations to help assess the garnet sample analysis quality. This comprehensive dataset of garnet information is an open-access resource available in the Evolutionary System of Mineralogy Database (ESMD) for future mineralogical studies, paving the way for characterizing correlations between chemical composition and paragenesis through natural kind clustering (Chiama et al., 2022; 10.48484/camh-xy98). We encourage scientists to contribute their own unpublished and unarchived analyses to the growing data repositories of mineralogical information that are increasingly valuable for advancing scientific discovery. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
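Entry 241 mentions end-member calculations used in assessing analysis quality. As a simplified, illustrative sketch only (real schemes also treat andradite and uvarovite and handle site assignments), the divalent X-site cations can be apportioned among four common end-members:

```python
def endmember_fractions(mg: float, fe2: float, mn: float, ca: float) -> dict:
    """Approximate garnet end-members from divalent cations per formula unit
    (X site of X3Y2Si3O12). A simplified scheme for illustration only."""
    total = mg + fe2 + mn + ca
    return {
        "pyrope": mg / total,
        "almandine": fe2 / total,
        "spessartine": mn / total,
        "grossular": ca / total,
    }

# Example: a typical metamorphic garnet, cations per 12-oxygen formula unit.
print(endmember_fractions(mg=0.60, fe2=1.95, mn=0.15, ca=0.30))
```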
242. A Retrospective on the Challenges of Incorporating Grey Literature into a Scholarly Publishing Platform.
- Author
-
Reece, Alistair
- Subjects
GREY literature ,SCHOLARLY publishing ,DATA libraries ,COVID-19 pandemic ,RESEARCH personnel - Abstract
In 2019, GeoScienceWorld was actively planning to bring a large content and data repository that includes a significant proportion of highly valued Grey Literature into our existing collection of 50+ peer-reviewed journals and over 2300 books in the geosciences. Due to various external situations, including the impacts of the COVID-19 pandemic and an absence of community-accepted standards for Grey Literature publishing, this project has stalled. GeoScienceWorld continues to investigate opportunities to bring original datasets, as well as other collections of Grey Literature, predominantly in the form of partner societies' conference proceedings and related conference materials, into our traditional research platform. We are also in the early stages of planning for a new research tool that will be truly content agnostic in bringing research and valuable insights to our primary end-user stakeholders, researchers, whether in academia or industry. As an organization, GeoScienceWorld is further implementing an Agile mindset and development philosophy to bring increasingly useful and timely resources to our stakeholder groups. A key ceremony of all truly Agile development processes is the Retrospective. In this paper, I review the initial aims of the project to incorporate a large grey dataset into our traditional scholarly literature platform and reflect on how both GeoScienceWorld and the wider Grey Literature community can move forward to bring such valuable datasets to the audiences that both want and need such content to advance their research. For each element of the initial project, I ask the following Agile Retrospective questions: What did we do well? What could we have done better? What have we learned? What are we still puzzled by? As a result of applying these questions to the initial project, I present recommendations that both inform GeoScienceWorld's future integration and presentation of Grey Literature and offer a clearer path toward greater acceptance of Grey Literature within traditional scholarly platforms such as ours. [ABSTRACT FROM AUTHOR]
- Published
- 2023
243. NxGEN white paper examines evolution of data centers.
- Subjects
DATA libraries ,DATA warehousing ,SURVEYS ,COMPUTER software - Abstract
The article focuses on a white paper titled "The Optimum Data Center: How Modular Data Centers Transcend Containers" released by NxGen Modular in January 2012 which explores the evolution of containerized data centers into more customizable, scalable and maintainable modular data centers. The key differences between containerized and modular data centers addressed in the paper are cited. NxGen will also conduct a survey on the key considerations in choosing a modular data center solution.
- Published
- 2012
244. Research on intelligent medical big data system based on Hadoop and blockchain.
- Author
-
Zhang, Xiangfeng and Wang, Yanmei
- Subjects
DATABASES ,DATA acquisition systems ,INTELLIGENT transportation systems ,BIG data ,BLOCKCHAINS ,DATA libraries ,INFORMATION storage & retrieval systems - Abstract
In order to improve the intelligence of the medical system, this paper designs and implements a secure medical big data ecosystem on top of the Hadoop big data platform, against the background of increasingly serious security concerns around medical big data. To improve the efficiency of traditional medical rehabilitation activities and enable patients to understand their treatment status as fully as possible, this paper designs a personalized health information system that allows patient users to follow their treatment and rehabilitation status anytime and anywhere, while all medical health data remain distributed across independent medical institutions and are stored independently. As a distributed ledger technology for multi-party maintenance and backup of information security, blockchain is a good starting point for innovation in medical data sharing. In this paper, the system realizes a personal health data centre on the Hadoop big data platform: the originally distributed data are stored and analyzed centrally through a data synchronization module and an independent data acquisition system. Leveraging the advantages of the Hadoop big data platform, the personalized health information system for stroke is designed to provide personalized health management services for patients and to facilitate the management of patients by medical staff. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
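Entry 244 leans on blockchain as a distributed ledger for the integrity of shared medical data. The following sketch shows only the core hash-chaining idea (tampering with a stored record breaks the chain) and is independent of the paper's actual Hadoop deployment:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"ts": time.time(), "record": record, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    for i, blk in enumerate(chain):
        body = {k: v for k, v in blk.items() if k != "hash"}
        if blk["hash"] != block_hash(body):
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"patient": "p001", "event": "rehab_session", "score": 7})
append_block(chain, {"patient": "p001", "event": "rehab_session", "score": 8})
print(verify(chain))                     # True
chain[0]["record"]["score"] = 10         # tamper with a stored record
print(verify(chain))                     # False: the chain detects it
```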
245. SECURITY OPTIMIZATION OF IOT PLATFORMS BASED ON NAMED DATA NETWORKING.
- Author
-
MARGIN, Dan-Andrei, MOLDOVAN, Denisa-Adina, IVANCIU, Iustin-Alexandru, DOMINGO-PASCUAL, Jordi, and DOBROTA, Virgil
- Subjects
DATABASES ,INTERNET of things ,DATA integrity ,DATA management ,DATA libraries - Abstract
When developing a smart system involving sensors and actuators, two main problems must be addressed: which platform to use as the infrastructure, and how to develop the security layer. In this paper, Orion Context Broker (OCB) is chosen for data management, while the security layer is composed of Named Data Networking (NDN) and FIWARE modules (IdM KeyRock and Wilma PEP Proxy). The work focused on data confidentiality and integrity because these are important aspects of an IoT system. The storage capabilities of OCB were extended from a single value per attribute to a long-term archive. This approach is beneficial for obtaining the data necessary for statistics and predictions, where historical data must be used. [ABSTRACT FROM AUTHOR]
- Published
- 2021
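Entry 245's Orion Context Broker exposes the FIWARE NGSI v2 REST API. A minimal sketch of creating and reading back a sensor entity follows; the host, entity id, and attribute are examples, and the KeyRock/Wilma layer described in the paper would additionally require an access token:

```python
import requests

OCB = "http://localhost:1026"   # default Orion Context Broker port (example host)

# Create an entity via NGSI v2; a Wilma PEP proxy in front of Orion would
# additionally expect an "X-Auth-Token" header issued by IdM KeyRock.
entity = {
    "id": "urn:ngsi-ld:Sensor:001",
    "type": "TemperatureSensor",
    "temperature": {"value": 21.4, "type": "Number"},
}
requests.post(f"{OCB}/v2/entities", json=entity, timeout=10).raise_for_status()

# Read the current value back. Orion keeps one value per attribute;
# long-term history needs an archive extension, as the paper describes.
r = requests.get(f"{OCB}/v2/entities/urn:ngsi-ld:Sensor:001", timeout=10)
print(r.json()["temperature"]["value"])
```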
246. Open Science and Intervention Research: a Program Developer's and Researcher's Perspective on Issues and Concerns.
- Author
-
Lochman, John E.
- Subjects
DATA libraries - Abstract
Open Science practices hold great promise for making research in general more reproducible and transparent, and these goals are very important for preventive intervention research. From my perspective as a program co-developer, I note potential concerns and issues about how open science practices can be applied in intervention research. Key issues considered are in the realms of pre-registration (making pre-registration a living document; providing rewards for hypothesis-generating research, in addition to hypothesis-testing research), data archiving (resources for archiving large datasets; ethical issues related to the need for strong de-identification), and research materials (intervention manuals and materials, and the characteristics, training and supervision of intervention staff). The paper focuses on easier-to-address and considerably harder-to-address issues and concerns in these three areas. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
247. Personalized Book Recommendation Algorithm for University Library Based on Deep Learning Models.
- Author
-
Hou, Dongjin
- Subjects
DEEP learning ,ACADEMIC libraries ,ALGORITHMS ,LIBRARY circulation & loans ,DATA libraries - Abstract
Personalized recommendation is one of the important components of personalized service in university libraries, and an accurate, in-depth understanding of users is its premise. This paper proposes a personalized book recommendation algorithm based on deep learning models, built around the characteristics and patterns of user borrowing in university libraries. The method first uses a long short-term memory network (LSTM) to improve the deep autoencoder (DAE) so that the model can extract the temporal features of the data. Then, the Softmax function is used to obtain the book recommendations for the current user. The proposed method is verified on actual library lending data. The experimental results show that the proposed method has performance advantages over several existing recommendation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
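Entry 247 improves a deep autoencoder with LSTM layers so the model captures temporal features of borrowing histories, with a Softmax head producing book scores. A minimal PyTorch sketch of such an architecture follows; the dimensions, layout, and names are illustrative, not the paper's exact network:

```python
import torch
import torch.nn as nn

class LSTMAutoencoderRecommender(nn.Module):
    """LSTM encoder/decoder over a borrowing-history sequence, with a
    softmax head scoring every book in the catalogue."""
    def __init__(self, n_books: int, emb: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_books, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_books)

    def forward(self, seq):                 # seq: (batch, time) of book ids
        x = self.embed(seq)
        z, _ = self.encoder(x)              # temporal features of the history
        y, _ = self.decoder(z)
        return torch.softmax(self.head(y[:, -1]), dim=-1)  # next-book scores

model = LSTMAutoencoderRecommender(n_books=1000)
history = torch.randint(0, 1000, (4, 12))   # 4 users, 12 past loans each
scores = model(history)
print(scores.shape, scores[0].topk(5).indices)  # top-5 recommendations per user
```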
248. Deuteron and alpha sub-libraries of JENDL-5.
- Author
-
Nakayama, Shinsuke, Iwamoto, Osamu, and Sublet, Jean-Christophe
- Subjects
DEUTERONS ,PARTICLE accelerators ,NEUTRON sources ,DATA libraries ,RADIOISOTOPES - Abstract
JENDL-5, the latest version of the Japanese evaluated nuclear data library, includes several sub-libraries to contribute to various applications. In this paper, we outline the evaluation and validation of the deuteron reaction sub-library, developed mainly for the design of accelerator-based neutron sources, and the alpha-particle reaction sub-library, developed mainly for use in the back-end field. As for the deuteron sub-library, the data for ⁶,⁷Li, ⁹Be, and ¹²,¹³C from JENDL/DEU-2020 were partially modified and adopted. The data up to 200 MeV for ²⁷Al, ⁶³,⁶⁵Cu, and ⁹³Nb, which are important as accelerator structural materials, were newly evaluated based on calculations with the DEURACS code. As for the alpha-particle sub-library, the data up to 15 MeV for 18 light nuclides from Li to Si isotopes were evaluated based on calculations with the CCONE code, and then only the neutron production cross sections were replaced with the data of JENDL/AN-2005. Validation of neutron yields by Monte Carlo transport simulations was performed for both sub-libraries. As a result, it was confirmed that simulations based on the sub-libraries show good agreement with experimental data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
249. Managing and Processing Nuclear Data Libraries with FUDGE.
- Author
-
Mattoon, Caleb, Beck, Bret, and Gert, Godfree
- Subjects
NUCLEAR reactions ,RADIOACTIVE decay ,DATA libraries ,ELECTRONIC data processing ,DATA management - Abstract
FUDGE (For Updating Data and Generating Evaluations) is an open-source code that supports reading, visualizing, checking, modifying, and processing nuclear reaction and decay data. For ease of use, the front-end of FUDGE is written in Python, while C and C++ routines are employed for computationally intensive calculations. FUDGE has been developed primarily at Lawrence Livermore National Laboratory (LLNL) with contributions from Brookhaven National Laboratory (BNL). It is used by the LLNL Nuclear Data and Theory (NDT) group to deliver high-quality nuclear data libraries to users for a variety of applications. FUDGE is also the world leader in converting data to the Generalized Nuclear Database Structure (GNDS) and working with GNDS data, including processing and visualization. GNDS is a new extensible hierarchy that has been internationally adopted as the standard for storing and using nuclear data libraries, replacing the previous standard, ENDF-6. A new public release of FUDGE has recently been published on GitHub. This paper gives an overview of the nuclear data processing capabilities in FUDGE, describes the latest release, new capabilities, and future plans, and provides basic instructions for users interested in applying FUDGE to their nuclear data workflow. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
250. Processing of JEFF nuclear data libraries for the SCALE Code System and testing with criticality benchmark experiments.
- Author
-
Jiménez-Carrascosa, Antonio, Cabellos, Oscar, Díez, Carlos Javier, and García-Herranz, Nuria
- Subjects
CRITICALITY (Nuclear engineering) ,NUCLEAR fission ,DATA libraries ,NUCLEAR fusion ,NEUTRON scattering - Abstract
In recent years, a new version of the Joint Evaluated Fission and Fusion File (JEFF) data library, namely JEFF-3.3, has been released, with relevant updates in the neutron reaction, thermal neutron scattering, and covariance sub-libraries. In the framework of the EU H2020 SANDA project, several efforts have been made to enable the use of JEFF nuclear data libraries with the extensively tested and verified SCALE Code System. To this end, the AMPX processing code has been applied, providing insight into the interaction between the code and the new versions of the JEFF data file. This paper provides an overview of the processing of the JEFF-3.3 nuclear data library with AMPX for its application within the SCALE package. The AMPX-formatted cross-section library has been widely verified and tested using a comprehensive set of criticality benchmarks from ICSBEP, comparing both with results provided by other processing and neutron transport codes and with experimental data. The processing of JEFF-3.3 covariances is also addressed, along with their verification against covariances processed with NJOY. This work paves the way towards a successful future interaction between the JEFF libraries and SCALE. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF