
Standardised Versioning of Datasets: a FAIR-compliant Proposal.

Authors :
González-Cebrián A
Bradford M
Chis AE
González-Vélez H
Source :
Scientific data [Sci Data] 2024 Apr 09; Vol. 11 (1), pp. 358. Date of Electronic Publication: 2024 Apr 09.
Publication Year :
2024

Abstract

This paper presents a standardised dataset versioning framework for improved reusability, recognition and data version tracking, facilitating comparisons and informed decision-making for data usability and workflow integration. The framework adopts a software engineering-like data versioning nomenclature ("major.minor.patch") and incorporates data schema principles to promote reproducibility and collaboration. To quantify changes in statistical properties over time, the concept of data drift metrics (d) is introduced. Three metrics (d_P, d_E,PCA, and d_E,AE) based on unsupervised Machine Learning techniques (Principal Component Analysis and Autoencoders) are evaluated for dataset creation, update, and deletion. The optimal choice is the d_E,PCA metric, which combines PCA models with splines. It is computationally efficient, producing values below 50 for new dataset batches and values consistent with seasonal or trend variations. Major updates (i.e., values of 100) occur when scaling transformations are applied to over 30% of the variables, while information loss is handled efficiently, yielding values close to 0. This metric achieved a favourable trade-off between interpretability, robustness against information loss, and computation time.
(© 2024. The Author(s).)
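To illustrate the idea behind a PCA-based drift metric, the following is a minimal sketch (not the paper's exact d_E,PCA, which additionally uses splines): a PCA model is fitted on a reference dataset, and drift in a new batch is scored by the relative increase in PCA reconstruction error, scaled to a 0-100 range to mirror the abstract's thresholds. All function names, the scaling formula, and the 0-100 clipping are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit a PCA model via SVD on a mean-centred reference dataset."""
    mu = X.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def reconstruction_error(X, mu, components):
    """Mean squared reconstruction error of X under the PCA model."""
    Xc = X - mu
    X_hat = Xc @ components.T @ components   # project onto subspace and back
    return float(np.mean((Xc - X_hat) ** 2))

def drift_score(X_ref, X_new, n_components=2):
    """Relative increase in PCA reconstruction error on a new batch,
    clipped to a 0-100 scale (illustrative stand-in for a drift metric)."""
    mu, comps = fit_pca(X_ref, n_components)
    e_ref = reconstruction_error(X_ref, mu, comps)
    e_new = reconstruction_error(X_new, mu, comps)
    return float(np.clip(100 * (e_new - e_ref) / (e_new + e_ref + 1e-12), 0, 100))

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 5))
X_same = rng.normal(size=(200, 5))                               # same distribution: low score
X_shift = rng.normal(size=(200, 5)) @ np.diag([5, 1, 1, 1, 1])   # one variable rescaled: higher score
print(drift_score(X_ref, X_same))
print(drift_score(X_ref, X_shift))
```

In a versioning workflow like the one the abstract describes, such a score could then be mapped to version increments, e.g. a score near 0 suggesting a patch, moderate values a minor bump, and values near 100 a major bump.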

Details

Language :
English
ISSN :
2052-4463
Volume :
11
Issue :
1
Database :
MEDLINE
Journal :
Scientific data
Publication Type :
Academic Journal
Accession number :
38594314
Full Text :
https://doi.org/10.1038/s41597-024-03153-y