2022
DOI: 10.48550/arxiv.2208.03030
Preprint

ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding

Abstract: Visual question answering is an important task in both natural language and vision understanding. However, in most public visual question answering datasets, such as VQA [5] and CLEVR [32], the questions are human-generated and specific to the given image, e.g., 'What color are her eyes?'. These crowdsourced questions are relatively simple and sometimes biased toward certain entities or attributes [1,55]. In this paper, we introduce a new question answering dataset based on image-ChiQ…

Cited by: 0 publications
References: 50 publications (90 reference statements)