
Knowledge sharing: from atomic to parametrised context and shallow to deep models

Authors:
Yang, Yongxin
Publication Year:
2017
Publisher:
Queen Mary, University of London, 2017.

Abstract

Key to achieving more effective machine intelligence is the capability to generalise knowledge across different contexts. In this thesis, we develop a new and very general perspective on knowledge sharing that unifies and generalises many existing methodologies, while being practically effective, simple to implement, and opening up new problem settings. Knowledge sharing across tasks and domains has conventionally been studied disparately. We first introduce the concept of a semantic descriptor and a flexible neural network approach to knowledge sharing that together unify multi-task/multi-domain learning, and encompass various classic and recent multi-domain learning (MDL) and multi-task learning (MTL) algorithms as special cases. We next generalise this framework from single-output to multi-output problems and from shallow to deep models. To achieve this, we establish the equivalence between classic tensor decomposition methods and specific neural network architectures, which makes it possible to implement our framework within modern deep learning stacks. We present both explicit low-rank and trace-norm regularisation solutions. From a practical perspective, we also explore the new problem setting of zero-shot domain adaptation (ZSDA), where a model is calibrated solely from abstract information about a new domain, e.g., metadata such as the capture device of its photos, without collecting or labelling any data from that domain.
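The semantic-descriptor idea in the abstract can be illustrated with a short sketch: each task or domain is identified by a descriptor vector z, and a layer's weights are generated from z through a shared low-rank weight tensor, so knowledge is shared through the generating factors rather than duplicated per task. The PyTorch snippet below is a minimal sketch under assumed names (DescriptorLinear and the factors U, V are illustrative choices, not the thesis's actual architecture, which also covers deeper models and a trace-norm variant).

```python
import torch
import torch.nn as nn

class DescriptorLinear(nn.Module):
    """Hypothetical sketch: a linear layer whose weight matrix is
    generated from a task/domain descriptor z via a shared low-rank
    weight tensor, so all tasks share the factors U and V."""

    def __init__(self, in_dim: int, out_dim: int, desc_dim: int, rank: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Low-rank factorisation of a (desc_dim x out_dim x in_dim)
        # weight tensor: contracting it with z yields one task's weights.
        self.U = nn.Parameter(0.1 * torch.randn(desc_dim, rank))
        self.V = nn.Parameter(0.1 * torch.randn(rank, out_dim * in_dim))

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # z: (desc_dim,) semantic descriptor of the current task/domain.
        w = (z @ self.U @ self.V).view(self.out_dim, self.in_dim)
        return x @ w.t()

# Usage: a one-hot descriptor recovers an independent per-task model,
# while descriptors with shared components encode knowledge sharing.
layer = DescriptorLinear(in_dim=16, out_dim=4, desc_dim=3, rank=8)
x = torch.randn(32, 16)
z = torch.tensor([1.0, 0.0, 0.0])   # one-hot: task 1 of 3
y = layer(x, z)                      # shape: (32, 4)
```

Under this view, zero-shot domain adaptation amounts to evaluating the same layer with the descriptor z of an unseen domain, constructed from its metadata, without any training data from that domain.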

Details

Language:
English
Database:
British Library EThOS
Publication Type:
Dissertation/Thesis
Accession number:
edsble.766006
Document Type:
Electronic Thesis or Dissertation