
Liquid Ensemble Selection for Continual Learning

Authors :
Blair, Carter
Armstrong, Ben
Larson, Kate
Source :
Proceedings of the Canadian Conference on Artificial Intelligence. https://caiac.pubpub.org/pub/7gegu91h (2024)
Publication Year :
2024

Abstract

Continual learning aims to enable machine learning models to continually learn from a shifting data distribution without forgetting what has already been learned. Such shifting distributions can be broken into disjoint subsets of related examples; by training each member of an ensemble on a different subset, it is possible for the ensemble as a whole to achieve much higher accuracy with less forgetting than a naive model. We address the problem of selecting which models within an ensemble should learn on any given data, and which should predict. By drawing on work from delegative voting, we develop an algorithm that uses delegation to dynamically select which models in an ensemble are active. We explore a variety of delegation methods and performance metrics, ultimately finding that delegation is able to provide a significant performance boost over naive learning in the face of distribution shifts.

Comment: Accepted at Canadian AI Conference 2024

Details

Database :
arXiv
Journal :
Proceedings of the Canadian Conference on Artificial Intelligence. https://caiac.pubpub.org/pub/7gegu91h (2024)
Publication Type :
Report
Accession number :
edsarx.2405.07327
Document Type :
Working Paper