Exploring the Relationship between Dataset Size and Image Captioning Model Performance
Date issued
2023
Publisher
CEUR-WS
Abstract
Image captioning is a deep learning task that combines computer vision, to extract visual information from an image, with natural language processing, to generate a caption in natural language. Like other deep learning models, image captioning models need a large amount of training data and take a long time to train. In this work, we investigate the impact of using a smaller amount of training data on the performance of the standard image captioning model Oscar. We train Oscar on training datasets of different sizes and measure its performance in terms of accuracy and computational complexity. We observe that training time increases linearly with the amount of training data. Accuracy, however, does not follow this linear trend: the relative improvement diminishes as more data is added. We also measure the consistency of the individual training-set sizes and observe that the more data we use for training, the more consistent the metrics are. In addition to traditional evaluation metrics, we evaluate performance using CLIP similarity. We investigate whether it can serve as a fully-fledged metric, as it offers a unique advantage over the traditional metrics: it does not require reference captions acquired from human annotators. Our results show a high correlation between CLIP similarity and the other metrics. This work provides valuable insights into the requirements for training effective image captioning models. We believe our results can transfer to other models, and even to other deep learning tasks. © 2023 Copyright for this paper by its authors.
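The abstract does not specify the exact CLIP similarity formulation used. A minimal sketch of one common reference-free scheme (a CLIPScore-style score with the conventional rescaling factor w = 2.5), assuming the image and caption embeddings have already been produced by a CLIP image/text encoder (the function name and the w parameter are illustrative, not taken from the paper):

```python
import math

def clip_similarity(image_emb, caption_emb, w=2.5):
    # Reference-free CLIPScore-style score: w * max(cosine similarity, 0).
    # In practice, image_emb and caption_emb would come from the image and
    # text towers of a pretrained CLIP model; plain lists are used here.
    dot = sum(a * b for a, b in zip(image_emb, caption_emb))
    norm_i = math.sqrt(sum(a * a for a in image_emb))
    norm_c = math.sqrt(sum(b * b for b in caption_emb))
    return w * max(dot / (norm_i * norm_c), 0.0)

# Identical embeddings give the maximum score w * 1.0;
# orthogonal embeddings are clipped to 0.0.
print(clip_similarity([1.0, 0.0], [1.0, 0.0]))  # 2.5
print(clip_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because no reference captions appear in the formula, such a score can be computed on unannotated images, and its correlation with reference-based metrics (e.g. BLEU or CIDEr) can then be measured over a test set.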
Citation
ŽELEZNÝ, T.; HRÚZ, M. Exploring the Relationship between Dataset Size and Image Captioning Model Performance. In: CEUR Workshop Proceedings. Aachen: CEUR-WS, 2023, pp. 1-8. ISBN not assigned, ISSN 1613-0073.