Show, attend and tell: Neural image caption generation with visual attention
Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
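For context, here is a minimal sketch of the deterministic ("soft") attention variant the abstract refers to, written in PyTorch. The module names, layer sizes, and scoring form are illustrative assumptions for exposition, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Sketch of deterministic (soft) attention: at each decoding step the
    LSTM hidden state scores every spatial annotation vector from the CNN,
    and the context is their softmax-weighted average. All names and
    dimensions here are assumptions, not the paper's code."""

    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)      # project CNN features
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # project decoder state
        self.score = nn.Linear(attn_dim, 1)                 # scalar alignment score

    def forward(self, feats: torch.Tensor, hidden: torch.Tensor):
        # feats:  (batch, L, feat_dim)  -- L spatial locations from the CNN
        # hidden: (batch, hidden_dim)   -- current LSTM hidden state
        e = self.score(torch.tanh(
            self.feat_proj(feats) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                                      # (batch, L) scores
        alpha = F.softmax(e, dim=1)                         # weights sum to 1 over L
        context = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # (batch, feat_dim)
        return context, alpha

```

At each time step the decoder would feed `context` into the LSTM alongside the previous word embedding. Because every operation above is differentiable, this variant trains end-to-end with standard backpropagation; the stochastic ("hard") variant mentioned in the abstract instead samples one location per step and maximizes a variational lower bound.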
BibTeX:
@InProceedings{pmlr-v37-xuc15,
  title     = {Show, Attend and Tell: Neural Image Caption Generation with Visual Attention},
  author    = {Kelvin Xu and Jimmy Ba and Ryan Kiros and Kyunghyun Cho and Aaron Courville and Ruslan Salakhutdinov and Rich Zemel and Yoshua Bengio},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2048--2057},
  year      = {2015},
  editor    = {Francis Bach and David Blei},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR}
}
By contrast, Xu et al.'s (2015) caption generation method can show where in the image the network is focusing its attention while generating each word of its description, but it does not perform classification.
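As a rough illustration of how that per-word visualization can be produced: the attention weights `alpha` returned by the sketch above form a coarse spatial grid, which is upsampled and overlaid on the image. The 14x14 grid size assumes VGG-style convolutional features as used in the paper; the function itself is a hypothetical helper, not the authors' code:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import zoom

def show_attention(image: np.ndarray, alpha: np.ndarray, word: str) -> None:
    """Overlay one decoding step's attention weights on the input image.
    `alpha` is the (L,) weight vector for one time step, reshaped to the
    CNN's spatial grid (14x14 assumed here)."""
    grid = alpha.reshape(14, 14)
    # Upsample the coarse attention grid to the image resolution.
    heatmap = zoom(grid, (image.shape[0] / 14, image.shape[1] / 14), order=1)
    plt.imshow(image)
    plt.imshow(heatmap, alpha=0.5, cmap="jet")  # translucent attention overlay
    plt.title(word)
    plt.axis("off")
    plt.show()
```

Calling this once per generated word reproduces the kind of gaze-tracking figures the paper uses to show the model fixating on salient objects.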