
Domain Generalization by Marginal Transfer Learning

Authors:
Blanchard, Gilles
Deshmukh, Aniket Anand
Dogan, Ürun
Lee, Gyemin
Scott, Clayton
Source:
Journal of Machine Learning Research, 2021, Vol. 22, pp. 1-55 (55 pp.)
Publication Year:
2021

Abstract

In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner. This problem arises in several applications where data distributions fluctuate because of environmental, technical, or other sources of variation. We introduce a formal framework for DG and argue that it can be viewed as a kind of supervised learning problem by augmenting the original feature space with the marginal distribution of feature vectors. While our framework has connections to the conventional analysis of supervised learning algorithms, several unique aspects of DG require new methods of analysis. This work lays the learning-theoretic foundations of domain generalization, building on our earlier conference paper where the problem of DG was introduced (Blanchard et al., 2011). We present two formal models of data generation, corresponding notions of risk, and distribution-free generalization error analysis. Focusing on kernel methods, we also provide more quantitative results and a universally consistent algorithm. An efficient implementation is provided for this algorithm, which is experimentally compared to a pooling strategy on one synthetic and three real-world data sets.
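
The core idea described in the abstract, augmenting each feature vector with a representation of its domain's marginal distribution and then applying a standard kernel method, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes an approximate kernel mean embedding via random Fourier features, an off-the-shelf SVM as the downstream kernel method, and an invented toy data generator; the feature dimension and bandwidths are likewise illustrative.

# A minimal sketch of the marginal-transfer idea (assumptions noted above,
# not the paper's code). Each example x from domain i is augmented with a
# summary of that domain's marginal distribution P_X^i, approximated here
# by the empirical mean of random Fourier features, and a standard SVM is
# trained on the augmented pairs.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def rff(X, W, b):
    """Random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def domain_summary(X, W, b):
    """Empirical (approximate) kernel mean embedding of a domain's inputs."""
    return rff(X, W, b).mean(axis=0)

d, D = 5, 64                                  # input dim, feature dim (assumed)
W = rng.normal(scale=1.0, size=(d, D))        # bandwidth 1.0 is an assumption
b = rng.uniform(0, 2 * np.pi, size=D)

def make_domain(shift, n=200):
    """Toy domain: shifted Gaussian inputs with a domain-dependent label rule."""
    X = rng.normal(size=(n, d)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift[0]).astype(int)
    return X, y

train_domains = [make_domain(rng.normal(scale=0.5, size=d)) for _ in range(10)]

# Augment each example with its domain's distribution summary, then pool.
Xa, ya = [], []
for X, y in train_domains:
    mu = domain_summary(X, W, b)
    Xa.append(np.hstack([np.tile(mu, (len(X), 1)), X]))
    ya.append(y)
Xa, ya = np.vstack(Xa), np.concatenate(ya)

clf = SVC(kernel="rbf", gamma="scale").fit(Xa, ya)

# At test time the new domain is unlabeled, but its inputs are observed,
# so its marginal summary can still be computed before predicting.
Xt, yt = make_domain(rng.normal(scale=0.5, size=d))
mu_t = domain_summary(Xt, W, b)
Xt_aug = np.hstack([np.tile(mu_t, (len(Xt), 1)), Xt])
print("test accuracy:", (clf.predict(Xt_aug) == yt).mean())

The contrast with the pooling baseline mentioned in the abstract is visible in the sketch: pooling would train the SVM on the raw inputs alone, whereas here the classifier can adapt its decision rule to the test domain through the summary features.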

Details

Language:
English
ISSN:
1532-4435
Volume:
22
Database:
Academic Search Index
Journal:
Journal of Machine Learning Research
Publication Type:
Academic Journal
Accession Number:
155404483