
Deep Convolutional Neural Networks in the Face of Caricature: Identity and Image Revealed

Authors:
Hill, Matthew Q.
Parde, Connor J.
Castillo, Carlos D.
Colon, Y. Ivette
Ranjan, Rajeev
Chen, Jun-Cheng
Blanz, Volker
O'Toole, Alice J.

Publication Year:
2018

Abstract

Real-world face recognition requires an ability to perceive the unique features of an individual face across multiple, variable images. The primate visual system solves the problem of image invariance using cascades of neurons that convert images of faces into categorical representations of facial identity. Deep convolutional neural networks (DCNNs) also create generalizable face representations, but with cascades of simulated neurons. DCNN representations can be examined in a multidimensional "face space", with identities and image parameters quantified via their projections onto the axes that define the space. We examined the organization of viewpoint, illumination, gender, and identity in this space. We show that the network creates a highly organized, hierarchically nested, face similarity structure in which information about face identity and imaging characteristics coexist. Natural image variation is accommodated in this hierarchy, with face identity nested under gender, illumination nested under identity, and viewpoint nested under illumination. To examine identity, we caricatured faces and found that network identification accuracy increased with caricature level, and--mimicking human perception--a caricatured distortion of a face "resembled" its veridical counterpart. Caricatures improved performance by moving the identity away from other identities in the face space and minimizing the effects of illumination and viewpoint. Deep networks produce face representations that solve long-standing computational problems in generalized face recognition. They also provide a unitary theoretical framework for reconciling decades of behavioral and neural results that emphasized either the image or the object/face in representations, without understanding how a neural code could seamlessly accommodate both.

Comment: 8 pages, 5 figures
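The abstract's "face space" and caricature manipulation can be illustrated with a minimal sketch. The code below is a toy illustration, not the authors' pipeline: embeddings are simulated Gaussian vectors rather than DCNN outputs, and the `caricature` function implements the standard face-space operation of scaling a face vector's deviation from the mean face by a factor alpha (alpha = 1 is the veridical face; alpha > 1 exaggerates distinctive features, pushing the identity farther from the norm and, typically, from competing identities).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64-dimensional "face space" embeddings for several
# identities, each seen in multiple images (an identity vector plus
# image-level perturbation standing in for viewpoint/illumination).
dim, n_ids, n_imgs = 64, 5, 10
identity_centers = rng.normal(size=(n_ids, dim))
embeddings = (identity_centers[:, None, :]
              + 0.3 * rng.normal(size=(n_ids, n_imgs, dim)))

# The "mean face": average of all image embeddings in the space.
mean_face = embeddings.reshape(-1, dim).mean(axis=0)


def caricature(v: np.ndarray, mean: np.ndarray, alpha: float) -> np.ndarray:
    """Exaggerate a face vector by scaling its deviation from the mean face.

    alpha = 1.0 returns the veridical face; alpha > 1.0 is a caricature.
    """
    return mean + alpha * (v - mean)


# A caricatured face lies on the same ray from the mean face as the
# veridical face, just farther out: its distance from the mean scales
# exactly by alpha.
face = embeddings[0].mean(axis=0)          # identity 0's average embedding
exaggerated = caricature(face, mean_face, alpha=2.0)

veridical_norm = np.linalg.norm(face - mean_face)
caricature_norm = np.linalg.norm(exaggerated - mean_face)
print(np.isclose(caricature_norm, 2.0 * veridical_norm))
```

Distinctiveness in this scheme is direction-preserving: only the magnitude of the identity's deviation from the norm changes, which is why a caricature still "resembles" its veridical counterpart while being easier to separate from other identities.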

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1812.10902
Document Type:
Working Paper
Full Text:
https://doi.org/10.1038/s42256-019-0111-7