Designing content-based adversarial perturbations and distributed one-class learning for images
- Publication Year: 2021
- Publisher: Queen Mary, University of London, 2021.
Abstract
- This thesis covers two privacy-related problems for images: designing adversarial perturbations that can be added to input images to protect the private content a user shares with other users from undesirable automatic inference by classifiers, and training privacy-preserving classifiers on images that are distributed among their owners (image holders) and contain their private information.

  Adversarial images can be easily detected by denoising algorithms when high-frequency spatial perturbations are used, or noticed by humans when perturbations are large and irrelevant to the content of the images. Moreover, adversarial images do not transfer to unseen classifiers, as the perturbations are small (in terms of the ℓp norm). In the first part of the thesis, we propose content-based adversarial perturbations that account for the content of the images (objects, colour, structure and details), human perception and the semantics of the class labels, to address these limitations. Our adversarial colour perturbations selectively modify the colours of objects within chosen ranges that humans perceive as natural. In addition to these natural-looking adversarial images, our structure-aware perturbations exploit traditional image-processing filters, such as the detail-enhancement and Gamma-correction filters, to generate enhanced adversarial images. We validate the proposed perturbations against three classifiers trained on ImageNet. Experiments show that, compared with seven state-of-the-art perturbations, the proposed perturbations are more robust and transferable, and cause misclassification with a label that is semantically different from the label of the original image.

  Classifiers are often trained by relying on the centralised collection and aggregation of images, which can raise significant privacy concerns by disclosing sensitive information about the image holders. In the second part of the thesis, we propose a privacy-preserving technique, called distributed one-class learning, that enables training to take place on edge devices, so image holders do not need to centralise their images. Each image holder independently uses their images to locally train a reconstructive adversarial network as their one-class classifier. As sending the model parameters to a service provider would reveal sensitive information, we secret-share the parameters between two non-colluding service providers. We then provide cryptographically private prediction services through a mixture of multi-party computation protocols, achieving substantial gains in complexity and speed. A major advantage of the proposed technique is that none of the image holders or service providers can access the parameters or images of other image holders. We quantify the benefits of the proposed technique and compare its performance with centralised training on three privacy-sensitive image-based tasks. Experiments show that the proposed technique achieves classification performance similar to non-private centralised training, while not violating the privacy of the image holders.
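The structure-aware perturbations in the first part build on standard image-processing filters such as Gamma correction. As a rough illustration only (not the thesis's actual search procedure), the sketch below tries a small grid of gamma values on an image and keeps the first one that flips a pretrained ImageNet classifier's prediction; the resnet50 model, the gamma grid and the stopping rule are all assumptions made for this sketch.

```python
import torch
from torchvision import models
import torchvision.transforms.functional as TF

# Pretrained ImageNet classifier; the thesis evaluates against three such
# classifiers, but any torchvision model serves for this illustration.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]  # ImageNet stats

def predict(image):
    """image: [3, H, W] float tensor in [0, 1]; returns the top-1 class id."""
    x = TF.normalize(image, MEAN, STD).unsqueeze(0)
    with torch.no_grad():
        return model(x).argmax(dim=1).item()

def gamma_adversary(image, gammas=(0.6, 0.8, 1.2, 1.5, 2.0)):
    """Search an (assumed) grid of gamma values for one that changes the
    classifier's prediction while keeping the image natural-looking."""
    original = predict(image)
    for gamma in gammas:
        candidate = TF.adjust_gamma(image, gamma)  # out = in ** gamma
        if predict(candidate) != original:
            return candidate, gamma  # enhanced image that flips the label
    return None, None  # no gamma in this grid changed the prediction
```

Unlike high-frequency additive noise, a filter like this re-maps pixel intensities globally, so the change is tied to the image content rather than being a small residual that a denoiser can strip away, which is the property the abstract's structure-aware perturbations exploit.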
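For the second part, the secret-sharing step can be illustrated with plain two-party additive sharing over a ring. The ring size, fixed-point scale and NumPy encoding below are assumptions for this sketch; the thesis combines sharing of this kind with further multi-party computation protocols for private prediction.

```python
import numpy as np

RING = 2 ** 32   # shares live in the ring of integers modulo RING
SCALE = 2 ** 16  # fixed-point scale for encoding real-valued weights
rng = np.random.default_rng()

def share(weights):
    """Split real-valued parameters into two additive shares mod RING."""
    fixed = np.round(weights * SCALE).astype(np.int64) % RING
    share0 = rng.integers(0, RING, size=fixed.shape, dtype=np.int64)
    share1 = (fixed - share0) % RING
    return share0, share1  # each share alone is uniformly random

def reconstruct(share0, share1):
    """Recombine the two shares and decode the fixed-point values."""
    fixed = (share0 + share1) % RING
    fixed = np.where(fixed >= RING // 2, fixed - RING, fixed)  # signed decode
    return fixed.astype(np.float64) / SCALE

# Example: the image holder shares its weights; each server keeps one share.
w = np.array([0.25, -1.5, 3.0])
s0, s1 = share(w)
assert np.allclose(reconstruct(s0, s1), w)
```

Since each share on its own is uniformly distributed modulo the ring, neither of the two non-colluding servers learns anything about an image holder's parameters, matching the guarantee stated in the abstract.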
Details
- Language: English
- Database: British Library EThOS
- Publication Type: Dissertation/Thesis
- Accession number: edsble.851430
- Document Type: Electronic Thesis or Dissertation