Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link
- Source:
- Scandinavian Actuarial Journal, 2022(10)
- Publication Year:
- 2022
Abstract
- Thanks to its outstanding performance, boosting has rapidly gained wide acceptance among actuaries. To speed up calculations, boosting is often applied to gradients of the loss function rather than to responses (hence the name gradient boosting). When the model is trained by minimizing Poisson deviance, this amounts to applying the least-squares principle to raw residuals. This exposes gradient boosting to the same problems that led to replacing least squares with Poisson Generalized Linear Models (GLMs) for analyzing low counts (typically, the number of reported claims at policy level in personal lines). This paper shows that boosting can be conducted directly on the responses under the Tweedie loss function and log-link by adapting the weights at each step. Numerical illustrations demonstrate similar or better performance compared to gradient boosting when trees are used as weak learners, with a higher level of transparency since responses are used instead of gradients.
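The contrast drawn in the abstract between boosting on gradients and boosting on responses can be sketched concretely. The Python snippet below is a minimal, hypothetical illustration, not the paper's algorithm: the function names, the toy Poisson data, and the specific working-weight scheme are assumptions, and the sketch specializes to Poisson deviance rather than the general Tweedie case treated in the paper. The gradient step fits an unweighted least-squares tree to the raw residuals y − mu; the response step fits a tree to the response ratio y/mu with working weights mu, so each leaf factor equals sum(y)/sum(mu) and is applied multiplicatively under the log-link.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical Poisson claim-count data, used only to exercise the two update rules.
n = 5000
X = rng.uniform(size=(n, 3))
y = rng.poisson(np.exp(-2.0 + X[:, 0] - 0.5 * X[:, 1]))

def gradient_step(X, y, mu, lr=0.1, max_depth=2):
    """Classic gradient-boosting step under Poisson deviance and log-link:
    fit an unweighted least-squares tree to the raw residuals y - mu
    (the negative gradient) and update the fitted mean multiplicatively."""
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y - mu)
    return mu * np.exp(lr * tree.predict(X))

def response_step(X, y, mu, lr=0.1, max_depth=2):
    """Response-based step (sketch): fit a least-squares tree to the response
    ratio y / mu with working weights mu, so each leaf value equals
    sum(y) / sum(mu), then apply that factor under the log-link."""
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X, y / np.clip(mu, 1e-12, None), sample_weight=mu)
    factor = np.clip(tree.predict(X), 1e-12, None)
    return mu * factor ** lr

def poisson_deviance(y, mu):
    """Mean Poisson deviance; the y * log(y/mu) term is set to 0 when y = 0."""
    y_safe = np.where(y > 0, y, 1.0)
    return (2.0 * (y * np.log(y_safe / mu) - (y - mu))).mean()

# Run a few boosting iterations of each scheme on the toy data.
mu_grad = np.full(n, y.mean())
mu_resp = np.full(n, y.mean())
for _ in range(50):
    mu_grad = gradient_step(X, y, mu_grad)
    mu_resp = response_step(X, y, mu_resp)

print("gradient boosting deviance:", poisson_deviance(y, mu_grad))
print("response boosting deviance:", poisson_deviance(y, mu_resp))
```

The point of the comparison is only that the response-based step works with the observed counts and GLM-style working weights, whereas the gradient step applies unweighted least squares to raw residuals; the paper's actual construction covers the full Tweedie family and adapts the weights at each boosting step.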
Details
- Database:
- OAIster
- Journal:
- Scandinavian Actuarial Journal, 2022(10)
- Notes:
- 1 full-text file(s): application/pdf, English
- Publication Type:
- Electronic Resource
- Accession number:
- edsoai.on1356660430
- Document Type:
- Electronic Resource