Bayesian inference: more than Bayes's theorem.
- Author
- Loredo, Thomas J. and Wolpert, Robert L.
- Subjects
- BAYES' theorem; BAYESIAN field theory; PROBABILITY theory; POISSON distribution; MAXIMUM likelihood statistics
- Abstract
Bayesian inference gets its name from Bayes's theorem, expressing posterior probabilities for hypotheses about a data generating process as the (normalized) product of prior probabilities and a likelihood function. But Bayesian inference uses all of probability theory, not just Bayes's theorem. Many hypotheses of scientific interest are composite hypotheses, with the strength of evidence for the hypothesis dependent on knowledge about auxiliary factors, such as the values of nuisance parameters (e.g., uncertain background rates or calibration factors). Many important capabilities of Bayesian methods arise from use of the law of total probability, which instructs analysts to compute probabilities for composite hypotheses by marginalization over auxiliary factors. This tutorial targets relative newcomers to Bayesian inference, aiming to complement tutorials that focus on Bayes's theorem and how priors modulate likelihoods. The emphasis here is on marginalization over parameter spaces—both how it is the foundation for important capabilities, and how it may motivate caution when parameter spaces are large. Topics covered include the difference between likelihood and probability, understanding the impact of priors beyond merely shifting the maximum likelihood estimate, and the role of marginalization in accounting for uncertainty in nuisance parameters, systematic error, and model misspecification.
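In standard notation (the symbols below are generic choices for this record, not taken from the paper itself), the two relations the abstract describes are Bayes's theorem and the law-of-total-probability marginalization for a composite hypothesis:

```latex
% Bayes's theorem: posterior = (prior x likelihood) / evidence
p(H \mid D) = \frac{p(H)\, p(D \mid H)}{p(D)}

% Marginal likelihood of a composite hypothesis H, integrating
% over nuisance parameters \theta (e.g., an uncertain background rate):
p(D \mid H) = \int p(\theta \mid H)\, p(D \mid H, \theta)\, \mathrm{d}\theta
```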
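A minimal numerical sketch of that marginalization step in Python: a Poisson count model with a signal rate of interest and an uncertain background rate, in the spirit of the abstract's example. The gamma prior, the observed count, and all numbers here are illustrative assumptions, not the paper's own example.

```python
import numpy as np
from scipy.stats import poisson, gamma

# Illustrative data and prior (all values hypothetical):
n = 9                               # observed counts
b_prior = gamma(a=5.0, scale=0.8)   # assumed gamma prior on background rate b

# Grid for numerically marginalizing over the nuisance parameter b,
# per the law of total probability: L_m(s) = integral Poisson(n | s+b) p(b) db
b_grid = np.linspace(1e-6, 20.0, 2001)

def marginal_likelihood(s):
    """Marginal likelihood for signal rate s, integrating b out numerically."""
    integrand = poisson.pmf(n, s + b_grid) * b_prior.pdf(b_grid)
    return np.trapz(integrand, b_grid)

# Evaluate on a grid of candidate signal rates; normalizing under a flat
# prior on s gives an (illustrative) posterior density for s.
s_grid = np.linspace(0.0, 15.0, 301)
Lm = np.array([marginal_likelihood(s) for s in s_grid])
posterior = Lm / np.trapz(Lm, s_grid)
print("Posterior mode for s:", s_grid[posterior.argmax()])
```

Integrating the likelihood against the prior for b propagates the background uncertainty into the inference for s, in contrast with simply plugging in a point estimate of b.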
- Published
- 2024