Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about, our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained on the same datasets designed for perceptual tasks. To succeed at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that "the person is riding a horse-drawn carriage." In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images, where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and the noun phrases in region descriptions and question answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.
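To make the annotation structure concrete, the sketch below shows one way such a densely annotated image could be encoded, with relationships stored as (subject, predicate, object) triples and objects mapped to WordNet synsets. This is a minimal illustrative assumption, not the dataset's actual schema; the field names and synset identifiers are hypothetical.

```python
# A minimal sketch of a dense image annotation in the spirit of the
# abstract above. Field names and synset strings are illustrative
# assumptions, not the dataset's actual schema.

annotation = {
    "image_id": 1,
    "objects": [
        {"id": 0, "name": "man",      "synset": "man.n.01"},
        {"id": 1, "name": "horse",    "synset": "horse.n.01"},
        {"id": 2, "name": "carriage", "synset": "carriage.n.02"},
    ],
    "attributes": [
        {"object_id": 1, "attribute": "brown"},
    ],
    # Pairwise relationships as (subject, predicate, object) triples,
    # e.g. riding(man, carriage) and pulling(horse, carriage).
    "relationships": [
        {"subject_id": 0, "predicate": "riding",  "object_id": 2},
        {"subject_id": 1, "predicate": "pulling", "object_id": 2},
    ],
}

def triples(ann):
    """Yield human-readable (subject, predicate, object) triples."""
    names = {o["id"]: o["name"] for o in ann["objects"]}
    for rel in ann["relationships"]:
        yield names[rel["subject_id"]], rel["predicate"], names[rel["object_id"]]

for subj, pred, obj in triples(annotation):
    print(f"{pred}({subj}, {obj})")
# -> riding(man, carriage)
#    pulling(horse, carriage)
```

The printed predicate-first form matches the riding(man, carriage) notation used in the abstract, which is the information a model would need to answer "What vehicle is the person riding?".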
The photograph, and our understanding of photography, is ever changing: it has transitioned from a world of unprocessed rolls of C-41 sitting in a fridge 50 years ago to sharing photos on the 1.5" screen of a point-and-shoot camera 10 years ago. And today the photograph is again something different. The way we take photos is fundamentally different: we can view, share, and interact with photos on the device they were taken on, and we can edit, tag, or "filter" photos directly on the camera at the moment the photo is taken. Photos can be automatically pushed to various online sharing services, and the distinction between photos and videos has lessened. Beyond this, and more importantly, there are now vastly more of them. More than 250 billion photos have been uploaded to Facebook alone, which on average receives over 350 million new photos every day [6], while YouTube reports that 300 hours of video are uploaded every minute [22]. A back-of-the-envelope estimate, made more than three years ago, suggested that 10% of all photos in the world had been taken in the preceding 12 months [8].

Today, a large number of the digital media objects that are shared have been uploaded to services like Flickr or Instagram, which, along with their metadata and social ecosystem, form a vibrant environment for finding solutions to many research questions at scale. Photos and videos provide a wealth of information about our world, covering entertainment, travel, personal records, and various other aspects of life as it was when they were taken.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.