A framework for rigorous evaluation of human performance in human and machine learning comparison studies.
- Source :
- Scientific reports [Sci Rep] 2022 Mar 31; Vol. 12 (1), pp. 5444. Date of Electronic Publication: 2022 Mar 31.
- Publication Year :
- 2022
Abstract
- Rigorous comparisons of human and machine learning algorithm performance on the same task help to support accurate claims about algorithm success rates and advance understanding of their performance relative to that of human performers. In turn, these comparisons are critical for supporting advances in artificial intelligence. However, the machine learning community has lacked a standardized, consensus framework for performing the evaluations of human performance necessary for comparison. We demonstrate common pitfalls in designing the human performance evaluation and propose a framework for the evaluation of human performance, illustrating guiding principles for a successful comparison. These principles are: first, to design the human evaluation with an understanding of the differences between human and algorithm cognition; second, to match trials between human participants and the algorithm evaluation; and third, to employ best practices for psychology research studies, such as the collection and analysis of supplementary and subjective data and adhering to ethical review protocols. We demonstrate our framework's utility for designing a study to evaluate human performance on a one-shot learning task. Adoption of this common framework may provide a standard approach to evaluate algorithm performance and aid in the reproducibility of comparisons between human and machine learning algorithm performance.
(© 2022. The Author(s).)
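The abstract's second principle, matching trials between human participants and the algorithm evaluation, implies a paired rather than pooled analysis: both performers see the same items, so their outcomes can be compared trial by trial. The paper does not prescribe a specific statistic; the sketch below is only a minimal illustration of one common paired analysis (an exact McNemar test), and all data and names in it are hypothetical, not drawn from the study.

```python
from math import comb

def mcnemar_exact(human_correct, model_correct):
    """Exact McNemar test on matched trials: index i is the same stimulus
    shown to both the human participants and the algorithm.
    Returns a two-sided p-value for the null that both err equally often."""
    # Count discordant pairs only; concordant trials carry no information here.
    b = sum(h and not m for h, m in zip(human_correct, model_correct))  # human right, model wrong
    c = sum(m and not h for h, m in zip(human_correct, model_correct))  # model right, human wrong
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    # Exact two-sided binomial tail under the null probability 0.5.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, p)

# Hypothetical matched-trial outcomes (True = correct) on the same 12 items.
human = [True, True, False, True, True, False, True, True, True, False, True, True]
model = [True, False, False, True, False, False, True, True, False, False, True, False]
print(f"human acc = {sum(human)/len(human):.2f}, model acc = {sum(model)/len(model):.2f}")
print(f"McNemar exact p = {mcnemar_exact(human, model):.3f}")
```

The design point this illustrates is that matched trials let the comparison condition on item difficulty: only trials where the human and algorithm disagree enter the test, which is typically more sensitive than comparing two independently measured accuracy rates.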
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 12
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- Scientific reports
- Publication Type :
- Academic Journal
- Accession number :
- 35361786
- Full Text :
- https://doi.org/10.1038/s41598-022-08078-3