
Fooling the primate brain with minimal, targeted image manipulation

Authors:
Yuan, Li
Xiao, Will
Dellaferrera, Giorgia
Kreiman, Gabriel
Tay, Francis E. H.
Feng, Jiashi
Livingstone, Margaret S.
Publication Year:
2020

Abstract

Artificial neural networks (ANNs) are considered the current best models of biological vision. ANNs are the best predictors of neural activity in the ventral stream; moreover, recent work has demonstrated that ANN models fitted to neuronal activity can guide the synthesis of images that drive pre-specified response patterns in small neuronal populations. Despite this success in predicting and steering firing activity, these results have not been connected with perceptual or behavioral changes. Here we propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior. We generated 'deceptive images' of human faces, monkey faces, and noise patterns so that they would be perceived as a different, pre-specified target category, and we measured both monkey neuronal responses and human behavior to these images. We found several effective methods for changing primate visual categorization that required much smaller image changes than untargeted noise. Our work shares the same goal as adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify them. Our results represent a valuable step toward quantifying and characterizing the differences in perturbation robustness between biological and artificial vision.
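
For readers unfamiliar with targeted adversarial perturbations, the sketch below illustrates the general idea the abstract refers to: nudging an image by a small, bounded amount so that an ANN classifier assigns it to a pre-specified target category. This is a generic targeted projected-gradient-descent (PGD) attack against a standard torchvision model, not the authors' method; the model choice, epsilon, step size, and iteration count are illustrative assumptions.

```python
# Minimal sketch of a targeted adversarial perturbation (targeted PGD).
# Assumptions (not from the paper): ResNet-18 as the classifier, L-inf budget
# eps=4/255, 40 iterations, inputs in [0, 1] without ImageNet normalization.
import torch
import torch.nn.functional as F
from torchvision import models


def targeted_pgd(model, image, target_class, eps=4 / 255, step=1 / 255, n_iter=40):
    """Perturb `image` within an L-inf ball of radius `eps` so that `model`
    classifies it as `target_class`. All hyperparameters are assumptions."""
    model.eval()
    adv = image.clone().detach()
    for _ in range(n_iter):
        adv.requires_grad_(True)
        logits = model(adv)
        # Loss toward the *target* class; we descend this loss, so the image
        # is pushed toward the pre-specified category.
        loss = F.cross_entropy(logits, target_class)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - step * grad.sign()        # step toward the target class
        adv = image + (adv - image).clamp(-eps, eps)   # project back into the eps-ball
        adv = adv.clamp(0, 1)                          # keep a valid image
    return adv.detach()


if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    image = torch.rand(1, 3, 224, 224)                 # placeholder input (a noise pattern)
    target = torch.tensor([281])                       # arbitrary example target class
    adv = targeted_pgd(model, image, target)
    print("classified as target:", (model(adv).argmax(1) == target).item())
```

The paper's contribution is to ask the analogous question of biological vision: whether comparably small, targeted perturbations can shift monkey neuronal responses and human categorization behavior, not merely an ANN's output.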

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2011.05623
Document Type:
Working Paper