
Making inference with messy (citizen science) data: when are data accurate enough and how can they be improved?

Authors :
Clare JDJ, Townsend PA, Anhalt-Depies C, Locke C, Stenglein JL, Frett S, Martin KJ, Singh A, Van Deelen TR, Zuckerberg B
Source :
Ecological applications : a publication of the Ecological Society of America [Ecol Appl] 2019 Mar; Vol. 29 (2), pp. e01849. Date of Electronic Publication: 2019 Feb 19.
Publication Year :
2019

Abstract

Measurement or observation error is common in ecological data: as citizen scientists and automated algorithms play larger roles in processing growing volumes of data to address problems at large scales, concerns about data quality and strategies for improving it have received greater focus. However, practical guidance on fundamental data quality questions for data users and managers (how accurate do data need to be, and what is the best or most efficient way to improve them?) remains limited. We present a generalizable framework for evaluating data quality and identifying remediation practices, and we demonstrate the framework using trail camera images classified via crowdsourcing to determine acceptable rates of misclassification and to identify optimal remediation strategies for analysis with occupancy models. We used expert validation to estimate baseline classification accuracy and simulation to determine the sensitivity of two occupancy estimators (standard and false-positive extensions) to different empirical misclassification rates. We used regression techniques to identify important predictors of misclassification and to prioritize remediation strategies. More than 93% of images were accurately classified, but simulation results suggested that most species were not identified accurately enough to permit distribution estimation at our predefined accuracy threshold (<5% absolute bias). A model developed to screen incorrect classifications predicted misclassified images with >97% accuracy, enough to meet our threshold. Occupancy models that accounted for false-positive error provided still more accurate inference, even at high rates of misclassification (30%). Because simulation suggested that occupancy models were less sensitive to additional false-negative error, screening models and occupancy models accounting for false-positive error emerged as efficient data remediation solutions. Combining simulation-based sensitivity analysis with empirical estimation of baseline error and its variability allows users and managers of potentially error-prone data to identify and fix problematic data more efficiently. This approach may be particularly helpful for "big data" efforts that depend on citizen scientists or automated classification algorithms and serve many downstream users, but given the ubiquity of observation and measurement error, even conventional studies may benefit from paying closer attention to data quality.

(© 2019 by the Ecological Society of America.)
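For readers unfamiliar with the false-positive extension mentioned above, a common formulation (following Royle and Link's widely used parameterization; the exact model specification in the paper may differ) adds a second detection parameter so that a detection can arise at an unoccupied site:

```latex
\Pr(y_{ij} = 1 \mid z_i) = z_i \, p_{11} + (1 - z_i) \, p_{10},
\qquad z_i \sim \mathrm{Bernoulli}(\psi)
```

where $z_i$ is the latent occupancy state of site $i$, $\psi$ the occupancy probability, $p_{11}$ the probability of detecting the species at an occupied site, and $p_{10}$ the probability of a false-positive detection at an unoccupied site. The standard estimator corresponds to fixing $p_{10} = 0$, which is why it is vulnerable to misclassified images.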
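The simulation step the abstract describes can be illustrated with a minimal sketch (not the authors' code; the site counts, survey counts, detection probability, and error rates below are illustrative assumptions): inject classification error into simulated detection histories and watch how far a naive occupancy estimate drifts from the truth.

```python
# Minimal sketch of misclassification sensitivity analysis for occupancy data.
# All parameter values are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(42)

def simulate_histories(n_sites=500, n_surveys=8, psi=0.4, p=0.5,
                       fp_rate=0.0, fn_rate=0.0):
    """Simulate site-by-survey detection histories with true occupancy psi and
    detection probability p, then corrupt them: fp_rate injects false-positive
    detections at truly undetected occasions, fn_rate erases true detections."""
    z = rng.random(n_sites) < psi                             # latent occupancy state
    y = (rng.random((n_sites, n_surveys)) < p) & z[:, None]   # error-free detections
    flips_fp = rng.random(y.shape) < fp_rate                  # spurious detections
    flips_fn = rng.random(y.shape) < fn_rate                  # missed detections
    return np.where(y, ~flips_fn, flips_fp)

def naive_occupancy(y):
    """Fraction of sites with at least one (possibly erroneous) detection."""
    return (y.sum(axis=1) > 0).mean()

for fp in (0.0, 0.01, 0.05, 0.30):
    est = naive_occupancy(simulate_histories(fp_rate=fp))
    print(f"false-positive rate {fp:4.2f}: naive estimate {est:.3f} (truth 0.40)")

est_fn = naive_occupancy(simulate_histories(fn_rate=0.30))
print(f"false-negative rate 0.30: naive estimate {est_fn:.3f} (truth 0.40)")
```

Even a 5% false-positive rate pushes this naive estimate well past a 5% absolute-bias threshold, while a 30% false-negative rate barely moves it, consistent with the abstract's finding that false-positive error, not false-negative error, drives the sensitivity of standard occupancy estimators.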

Subjects :
Algorithms
Data Accuracy
Ecology

Details

Language :
English
ISSN :
1051-0761
Volume :
29
Issue :
2
Database :
MEDLINE
Journal :
Ecological applications : a publication of the Ecological Society of America
Publication Type :
Academic Journal
Accession number :
30656779
Full Text :
https://doi.org/10.1002/eap.1849