We propose Localized Narratives, an efficient way to collect image captions with dense visual grounding. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 628k images with Localized Narratives: the whole COCO dataset and 504k images of the Open Images dataset, all available for download below. We provide an extensive analysis of these annotations and demonstrate their utility on two applications that benefit from the mouse traces: controlled image captioning and image generation.
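To make the word-trace alignment concrete, here is a minimal Python sketch of how one might recover the trace segment for a given utterance from an annotation. It assumes the JSON Lines format documented on the dataset page (fields caption, timed_caption, and traces, with per-point timestamps t); the field names and the file name in the usage comment are assumptions, not guaranteed by this text.

import json

def load_narratives(path):
    # Each line of a Localized Narratives .jsonl file is one annotation.
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def trace_segment(annotation, utterance_index):
    # Exploit the voice/pointer synchronization: every timed_caption
    # entry carries start/end times, and every trace point carries a
    # timestamp t, so filtering points by time yields the grounding.
    entry = annotation["timed_caption"][utterance_index]
    start, end = entry["start_time"], entry["end_time"]
    points = [p for trace in annotation["traces"] for p in trace]
    return entry["utterance"], [p for p in points if start <= p["t"] <= end]

# Hypothetical usage (file name is illustrative only):
# for ann in load_narratives("coco_val_localized_narratives.jsonl"):
#     utterance, segment = trace_segment(ann, 0)
#     print(utterance, "->", len(segment), "trace points")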

Caption: “In this image I can see a painting and above it I can see few numbers are written. I can see the painting is of water, few birds, few people, few trees and few buildings.”

Metadata: Image source: C & O Canal Mural - 3000 M Street NW Georgetown Washington (DC), August 2014. Author: Ron Cogswell.
Dataset: Open Images. ID: 28ad453294ca98ce.

Research: Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari. "Connecting Vision and Language with Localized Narratives." arXiv:1912.03098, 2019. https://arxiv.org/pdf/1912.03098.pdf

Dataset: https://google.github.io/localized-narratives/
