Aug 20, 2022 — Scientific Research & Postgraduate Studies, ICT Engineering

Image captioning model using attention and object features to mimic human image understanding

Researchers

Muhammad Abdelhadie Al-Malla, Assef Jafar and Nada Ghneim

Published in

Journal of Big Data, volume 9, article number 20, February 2022.


Abstract

Image captioning spans the fields of computer vision and natural language processing. The image captioning task can be seen as a generalization of object detection, in which the descriptions are reduced to single words. Recently, most research on image captioning has focused on deep learning techniques, especially Encoder-Decoder models with Convolutional Neural Network (CNN) feature extraction. However, few works have tried using object detection features to increase the quality of the generated captions. This paper presents an attention-based, Encoder-Decoder deep architecture that makes use of convolutional features extracted from a CNN model pre-trained on ImageNet (Xception), together with object features extracted from the YOLOv4 model, pre-trained on MS COCO. This paper also introduces a new positional encoding scheme for object features, the “importance factor”. Our model was tested on the MS COCO and Flickr30k datasets, and its performance is compared to that of similar works. Our new feature extraction scheme raises the CIDEr score by 15.04%.
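To make the idea concrete, the sketch below shows one plausible way to rank detected objects by an “importance factor” before passing them to a caption decoder. This is an illustration only: the paper's actual definition of the importance factor may differ, and the scoring used here (detection confidence scaled by the box's relative area) is an assumption, as are all function and variable names.

```python
# Hedged sketch: ordering YOLO-style detections by a hypothetical
# "importance factor" before feeding object features to a caption decoder.
# Assumption (not taken from the paper): importance = confidence * relative box area.

def importance_factor(confidence, box, image_w, image_h):
    """Score a detection: confidence weighted by the box's share of the image."""
    x1, y1, x2, y2 = box
    rel_area = ((x2 - x1) * (y2 - y1)) / (image_w * image_h)
    return confidence * rel_area

def order_objects(detections, image_w, image_h):
    """Sort (label, confidence, box) detections by descending importance."""
    return sorted(
        detections,
        key=lambda d: importance_factor(d[1], d[2], image_w, image_h),
        reverse=True,
    )

# Toy example: a large, confident "person" box and a small "dog" box.
dets = [
    ("person", 0.9, (0, 0, 320, 480)),
    ("dog",    0.8, (10, 10, 60, 60)),
]
ordered = order_objects(dets, 640, 480)
# Under this assumed scoring, "person" ranks first.
```

In a full pipeline, the resulting ordering (or the scores themselves) would serve as positional information for the object features alongside the Xception convolutional features, letting the attention mechanism weight prominent objects more heavily.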

Keywords: Image captioning, object features, positional encoding, Encoder-Decoder.

Link to Read Full Paper

https://doi.org/10.1186/s40537-022-00571-w