
Deep Compositional Captioning: Describing Novel Object Categories Without Paired Training Data



Research area: Machine Vision (CVPR 2016)

Application direction: Image captioning

Principles and methods:

Software implementation:

Abstract:

While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.
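The abstract's key mechanism is "transferring knowledge between semantically similar concepts": a novel word with no paired image-caption data borrows caption-model parameters from its nearest neighbour in a word-embedding space learned from external text. Below is a minimal sketch of that weight-transfer idea; the embeddings, weight values, and helper names are illustrative placeholders, not the paper's actual parameters.

```python
import numpy as np

# Toy word embeddings (in DCC these come from a language model trained
# on external text corpora; the values here are made up).
embeddings = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "cat":   np.array([0.7, 0.3, 0.2]),
    "car":   np.array([0.0, 0.9, 0.4]),
    "otter": np.array([0.85, 0.15, 0.05]),  # novel word: no paired data
}

# Caption-model output weights for words that DO appear in the paired
# image-sentence corpus (again, toy values).
caption_weights = {
    "dog": np.array([1.0, 2.0, 3.0]),
    "cat": np.array([1.1, 1.9, 2.8]),
    "car": np.array([-0.5, 0.4, 0.2]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def transfer_weights(novel_word):
    """Copy caption-model weights from the trained word whose embedding
    is most similar to the novel word's embedding."""
    best = max(caption_weights,
               key=lambda w: cosine(embeddings[novel_word], embeddings[w]))
    return best, caption_weights[best].copy()

source_word, new_weights = transfer_weights("otter")
print(source_word)  # "dog" is the nearest neighbour among these toy vectors
```

After the transfer, the caption model can emit the novel word in contexts where it would have emitted the similar known word, which is the compositional step that lets DCC describe objects never seen in paired data.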

Key points:
