Computational modeling of behavioral tasks: An illustration on a classic reinforcement learning paradigm
- Source :
- Tutorials in Quantitative Methods for Psychology, Vol 17, Iss 2, Pp 105-140 (2021)
- Publication Year :
- 2021
- Publisher :
- Université d'Ottawa, 2021.
Abstract
- There has been growing interest among psychologists, psychiatrists, and neuroscientists in applying computational modeling to behavioral data to understand animal and human behavior. Such approaches can be daunting for those without experience. This paper presents a step-by-step tutorial on conducting parameter estimation in R via three techniques: Maximum Likelihood Estimation (MLE), Maximum A Posteriori estimation (MAP), and Expectation-Maximization with Laplace approximation (EML). We first demonstrate how to simulate a classic reinforcement learning paradigm -- the two-armed bandit task -- for N = 100 subjects, and then explain how to develop the computational model and implement the MLE, MAP, and EML methods to recover the parameters. By presenting a sufficiently detailed walkthrough of a familiar behavioral task, we hope this tutorial will benefit readers interested in applying parameter estimation methods in their own research.
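
The paper's own R code is not reproduced in this record. As a rough sketch of the workflow the abstract describes, the snippet below simulates a single subject on a two-armed bandit with a delta-rule (Q-learning) agent and then recovers its learning rate and inverse temperature by maximum likelihood with optim(). The function names, trial count, and reward probabilities are illustrative assumptions, not the paper's implementation.

```r
set.seed(1)

# Simulate one subject: delta-rule agent with learning rate alpha and
# softmax choice rule with inverse temperature beta (illustrative settings).
simulate_subject <- function(alpha, beta, n_trials = 200,
                             reward_probs = c(0.2, 0.8)) {
  Q <- c(0, 0)
  choice <- reward <- integer(n_trials)
  for (t in seq_len(n_trials)) {
    p <- exp(beta * Q) / sum(exp(beta * Q))          # softmax choice probabilities
    choice[t] <- sample(1:2, 1, prob = p)
    reward[t] <- rbinom(1, 1, reward_probs[choice[t]])
    Q[choice[t]] <- Q[choice[t]] +
      alpha * (reward[t] - Q[choice[t]])             # prediction-error update
  }
  data.frame(choice, reward)
}

# Negative log-likelihood of the observed choices under candidate parameters.
neg_log_lik <- function(par, data) {
  alpha <- par[1]; beta <- par[2]
  Q <- c(0, 0); nll <- 0
  for (t in seq_len(nrow(data))) {
    p <- exp(beta * Q) / sum(exp(beta * Q))
    nll <- nll - log(p[data$choice[t]])
    Q[data$choice[t]] <- Q[data$choice[t]] +
      alpha * (data$reward[t] - Q[data$choice[t]])
  }
  nll
}

# Simulate, then recover the parameters by MLE with box constraints.
dat <- simulate_subject(alpha = 0.3, beta = 5)
fit <- optim(par = c(0.5, 1), fn = neg_log_lik, data = dat,
             method = "L-BFGS-B",
             lower = c(1e-3, 1e-3), upper = c(1, 20))
fit$par   # estimated alpha and beta
```

Extending this to the paper's N = 100 subjects amounts to repeating the simulate-and-fit loop per subject; the MAP and EML estimators discussed in the abstract additionally incorporate prior distributions over the parameters into the objective.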
Details
- Language :
- English, French
- ISSN :
- 1913-4126
- Volume :
- 17
- Issue :
- 2
- Database :
- Directory of Open Access Journals
- Journal :
- Tutorials in Quantitative Methods for Psychology
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.2294717180243ae8da2183d8b066456
- Document Type :
- article
- Full Text :
- https://doi.org/10.20982/tqmp.17.2.p105