Large Scale Retrieval and Generation of Image Descriptions

Authors :
Alyssa Mensch
Vicente Ordonez
Jesse Dodge
Alexander C. Berg
Girish Kulkarni
Margaret Mitchell
Tamara L. Berg
Karl Stratos
Xufeng Han
Hal Daumé
Kota Yamaguchi
Yejin Choi
Polina Kuznetsova
Amit Goyal
Source :
International Journal of Computer Vision. 119:46-59
Publication Year :
2015
Publisher :
Springer Science and Business Media LLC, 2015.

Abstract

What is the story of an image? What is the relationship between pictures, language, and the information we can extract using state-of-the-art computational recognition systems? In an attempt to address both of these questions, we explore methods for retrieving and generating natural language descriptions for images. Ideally, we would like our generated textual descriptions (captions) both to sound like a person wrote them and to remain true to the image content. To do this we develop data-driven approaches for image description generation, using retrieval-based techniques to gather either: (a) whole captions associated with a visually similar image, or (b) relevant bits of text (phrases) from a large collection of image + description pairs. In the case of (b), we develop optimization algorithms to merge the retrieved phrases into valid natural language sentences. The end result is two simple but effective methods for harnessing the power of big data to produce image captions that are altogether more general, relevant, and human-like than previous attempts.
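
As a rough illustration of approach (a), whole-caption transfer can be framed as a nearest-neighbor lookup over precomputed global image features. The sketch below is not the authors' implementation: the transfer_caption helper, the choice of cosine similarity, and the toy data are assumptions for illustration only.

# Minimal sketch (assumed, illustrative): approach (a), caption transfer by
# whole-image retrieval over precomputed global feature vectors.
import numpy as np

def transfer_caption(query_feat, db_feats, db_captions):
    """Return the caption of the most visually similar database image.

    query_feat  : (d,) feature vector for the query image
    db_feats    : (n, d) feature matrix for the captioned collection
    db_captions : list of n caption strings
    """
    # Cosine similarity between the query and every database image.
    q = query_feat / (np.linalg.norm(query_feat) + 1e-8)
    db = db_feats / (np.linalg.norm(db_feats, axis=1, keepdims=True) + 1e-8)
    sims = db @ q
    return db_captions[int(np.argmax(sims))]

# Toy usage with random vectors standing in for real image descriptors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))
caps = ["a dog on the beach", "a red car on a street", "two people hiking"]
print(transfer_caption(feats[1] + 0.01 * rng.normal(size=8), feats, caps))

Approach (b), as described in the abstract, would instead retrieve relevant phrases from many such neighbors and merge them into a single well-formed sentence with an optimization step.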

Details

ISSN :
1573-1405 and 0920-5691
Volume :
119
Database :
OpenAIRE
Journal :
International Journal of Computer Vision
Accession number :
edsair.doi.dedup.....52483ab717fb80527b9629dd389cd80f
Full Text :
https://doi.org/10.1007/s11263-015-0840-y