Captioning Images with Diverse Objects

Authors :
Venugopalan, Subhashini
Hendricks, Lisa Anne
Rohrbach, Marcus
Mooney, Raymond
Darrell, Trevor
Saenko, Kate
Publication Year :
2016

Abstract

Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.

Comment: CVPR 2017 camera-ready version. 17 pages (8 + 9 supplement), 12 figures, 8 tables. Project page: http://vsubhashini.github.io/noc.html
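To illustrate the kind of joint objective the abstract describes, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of a model that shares parameters across three data sources: labeled images without captions, unannotated text, and paired image-caption data. All class and method names (NOCSketch, image_loss, etc.) and the specific layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class NOCSketch(nn.Module):
    """Hypothetical sketch of a jointly trained captioner in the spirit of NOC."""

    def __init__(self, vocab_size=10000, feat_dim=2048, embed_dim=300, hidden_dim=512):
        super().__init__()
        # Word embeddings could be initialized from distributional vectors
        # (e.g. GloVe), so unseen object names lie near related, seen words.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.visual_fc = nn.Linear(feat_dim, hidden_dim)              # image branch
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # language branch
        self.classifier = nn.Linear(hidden_dim, vocab_size)           # shared output layer
        self.ce = nn.CrossEntropyLoss()

    def image_loss(self, feats, word_labels):
        # Recognition loss on object-labeled images (no captions available).
        return self.ce(self.classifier(self.visual_fc(feats)), word_labels)

    def text_loss(self, tokens):
        # Language-model loss on unannotated text: predict the next word.
        out, _ = self.lstm(self.embed(tokens[:, :-1]))
        logits = self.classifier(out)
        return self.ce(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))

    def caption_loss(self, feats, tokens):
        # Paired loss: condition next-word prediction on the image feature.
        out, _ = self.lstm(self.embed(tokens[:, :-1]))
        logits = self.classifier(out + self.visual_fc(feats).unsqueeze(1))
        return self.ce(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))

    def joint_loss(self, img_batch, text_batch, pair_batch):
        # Joint objective: sum the three losses over shared parameters, so
        # words learned from images and text alone remain usable at caption time.
        return (self.image_loss(*img_batch)
                + self.text_loss(text_batch)
                + self.caption_loss(*pair_batch))
```

Because the embedding, LSTM, and output layers are shared, gradients from the unpaired image and text losses shape the same parameters used for captioning, which is one plausible way a model could describe objects never seen in paired data.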

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1606.07770
Document Type :
Working Paper