1. Can AI help in ideation? A theory-based model for idea screening in crowdsourcing contests
- Author
- Bell, J. J., Pescher, C., Tellis, G. J., and Füller, J.
- Abstract
Crowdsourcing generates up to thousands of ideas per contest. The selection of the best ideas is costly because of the limited number, objectivity, and attention of experts. Using a data set of 21 crowdsourcing contests that include 4,191 ideas, the authors test how artificial intelligence can assist experts in screening ideas. The authors have three major findings. First, whereas even the best previously published theory-based models cannot mimic human experts in choosing the best ideas, a simple model using the least absolute shrinkage and selection operator (LASSO) can efficiently screen out ideas considered bad by experts. In an additional 22nd hold-out contest with internal and external experts, the simple model does better than external experts in predicting the ideas selected by internal experts. Second, the authors develop an idea screening efficiency curve that trades off the false negative rate against the total number of ideas screened out. Managers can choose the desired point on this curve given their loss function. The best model specification can screen out 44% of ideas while sacrificing only 14% of good ideas. Alternatively, for those unwilling to lose any winners, a novel two-step approach screens out 21% of ideas without sacrificing a single first-place winner. Third, a new predictor, word atypicality, is simple and efficient in screening. Theoretically, this predictor screens out atypical ideas and keeps inclusive and rich ideas.
- Published
- 2023
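
The LASSO-based screening and the screening efficiency curve summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' specification: the synthetic data, feature matrix, penalty strength, and cutoffs below are hypothetical stand-ins, and an L1-penalized logistic regression from scikit-learn is used purely to show the mechanics of screening out low-scoring ideas and tracing the trade-off between ideas screened out and good ideas lost.

```python
# Illustrative sketch only (not the paper's model): an L1-penalized
# (LASSO-style) classifier predicts expert-judged "good" ideas from idea
# features, then a cutoff sweep traces a screening efficiency curve:
# fraction of ideas screened out vs. false negative rate among good ideas.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows are ideas, columns are text-based
# predictors (a word-atypicality score would be one such column).
X = rng.normal(size=(4000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(size=4000) > 1.0).astype(int)  # 1 = expert-rated good

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_tr, y_tr)
p_good = model.predict_proba(X_te)[:, 1]

# Screen out every idea scoring below a cutoff and record how many
# expert-rated good ideas are lost (false negatives) at that cutoff.
for cutoff in np.quantile(p_good, [0.1, 0.2, 0.3, 0.4, 0.5]):
    screened = p_good < cutoff
    fnr = (screened & (y_te == 1)).sum() / max((y_te == 1).sum(), 1)
    print(f"screened out {screened.mean():.0%} of ideas, "
          f"false negative rate among good ideas {fnr:.0%}")
```

Each cutoff corresponds to one point on the efficiency curve described in the abstract; a manager with a known loss function would pick the cutoff whose (ideas screened out, good ideas lost) pair is most acceptable.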