Few-shot learning in changing domains
- Authors
Mason, Ian; Komura, Taku; Bilen, Hakan; Hospedales, Timothy
- Subjects
Domain Shift, Domain Adaptation, Content Invariance Assumption, Few-shot Learning, Neural Animation Systems, Data-Driven Animation Synthesis, Style Transfer, Style Modelling, Residual Adaptation Approach, EMNIST-DA, Unit-level Surprise
- Abstract
This thesis presents a number of investigations into how machine learning systems, in particular artificial neural networks, function in changing domains. In the standard machine learning paradigm a model is evaluated on held-out (test) data from the same dataset the model is trained on; that is, across training and test splits, the data is independent and identically distributed. However, in many situations, after a model is trained we encounter related but different (out-of-distribution) data on which we hope to use our model. This thesis is concerned with methods for adapting a model to work well on such new data, and additionally considers the restriction that very often there may not be large amounts of data to adapt on.

In the first part of the thesis we examine a challenging application setting: real-time animation systems. We focus on changing styles of human locomotion, building systems that can model multiple styles at once and rapidly adapt to new styles. By augmenting state-of-the-art systems with style-modulating residual adaptation or feature-wise linear modulation we are able to model large numbers of styles at high levels of detail. We also contribute to the general modelling of locomotion with the creation of contact-free local phase labelling.

In the second part of the thesis we examine the problem in more controlled settings. We present a technique for general adaptation in situations where the change in domain is caused by a change in measurement system and the original training data is not available (source-free). By aligning activation distributions and training in a bottom-up manner, we achieve improvements in accuracy, calibration, and data efficiency over existing techniques.
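The feature-wise linear modulation (FiLM) mentioned above can be illustrated with a minimal sketch. This is not the thesis's animation system; the style-embedding dimension, the linear conditioning layer `W`, and all shapes are illustrative assumptions. The core idea is that a style code produces a per-channel scale (gamma) and shift (beta) that modulate intermediate features.

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each channel
    of `features` with style-conditioned parameters.
    features: (batch, channels); gamma, beta: (channels,)."""
    return gamma * features + beta

# Hypothetical conditioning: a single linear map (W) turns a style
# embedding into per-channel (gamma, beta) pairs. Initialising gamma
# around 1.0 keeps the modulation close to identity at the start.
rng = np.random.default_rng(0)
channels = 4
features = rng.normal(size=(2, channels))        # batch of 2
style_embedding = rng.normal(size=(8,))          # assumed style code
W = rng.normal(size=(8, 2 * channels)) * 0.1     # assumed conditioning layer
params = style_embedding @ W
gamma = 1.0 + params[:channels]
beta = params[channels:]
modulated = film(features, gamma, beta)
```

With `gamma = 1` and `beta = 0` the modulation is the identity, which is why residual-style conditioning of this kind can be added to a pretrained network without disturbing its initial behaviour.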
We additionally analyse the effects of changing domains at the unit (or neuron) level, showing how changes in individual units' activation distributions can reveal network structure and may give us cues for faster adaptation via improved credit assignment. In summary, we highlight the importance of few-shot adaptation in multiple settings, show how different techniques can be used productively to solve problems in these areas, and provide inspiration for the next generation of machine learning models, which should be able to learn continually from small amounts of data.
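The idea of aligning activation distributions for source-free adaptation can be sketched as follows. This is an illustrative simplification, not the thesis's method: it assumes per-unit mean and standard deviation statistics were saved from source training, and simply re-standardises target activations to match them.

```python
import numpy as np

def align_activations(target_acts, src_mean, src_std, eps=1e-5):
    """Affinely transform target activations so each unit's mean and
    std match statistics recorded on the source domain.
    target_acts: (batch, units); src_mean, src_std: (units,).
    An illustrative sketch of distribution alignment only."""
    t_mean = target_acts.mean(axis=0)
    t_std = target_acts.std(axis=0)
    # Standardise per unit, then rescale to the stored source statistics.
    return (target_acts - t_mean) / (t_std + eps) * src_std + src_mean

# Hypothetical shifted target domain: same units, different statistics.
rng = np.random.default_rng(1)
src_mean = np.array([0.0, 1.0, -0.5])
src_std = np.array([1.0, 0.5, 2.0])
target_acts = rng.normal(loc=3.0, scale=2.0, size=(200, 3))
aligned = align_activations(target_acts, src_mean, src_std)
```

No source data is needed at adaptation time, only the stored per-unit statistics, which is what makes this style of approach applicable in the source-free setting the abstract describes.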
- Published
2022