State-of-the-art large-scale object retrieval systems usually combine efficient Bag-of-Words indexing models with a spatial verification re-ranking stage to improve query performance. In this paper we propose to directly discover spatially verified visual words as a batch process. Contrary to previous related methods based on feature-set hashing or clustering, we do not trade recall for efficiency: we stick to an accurate two-stage matching strategy. The problem then becomes a sampling issue: how to select relevant query regions effectively and efficiently while minimizing the number of tentative probes? We therefore introduce an adaptive weighted sampling scheme that starts from a prior distribution and iteratively converges towards unvisited regions. Interestingly, the proposed paradigm generalizes to any input prior distribution, including specific visual concept detectors or efficient hashing-based methods. Our experiments show that the proposed method discovers highly interpretable visual words while providing excellent recall and image representativeness.
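The core adaptive weighted sampling idea can be illustrated with a minimal sketch: draw candidate query regions from a prior, then down-weight every probed region so that later draws concentrate on unvisited ones. The grid of candidate regions, the `probe` callable and the decay factor below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def adaptive_weighted_sampling(prior, probe, n_probes=100, decay=0.1, seed=0):
    """Iteratively draw query regions, favoring regions not yet visited.

    prior : 1-D array of non-negative weights over candidate regions
            (e.g. uniform, visual saliency, or hashing-based scores).
    probe : callable taking a region index and returning its matches
            (stands in for the two-stage matching strategy).
    """
    rng = np.random.default_rng(seed)
    weights = np.asarray(prior, dtype=float).copy()
    results = {}
    for _ in range(n_probes):
        p = weights / weights.sum()
        region = int(rng.choice(len(weights), p=p))
        results[region] = probe(region)
        # Down-weight the probed region so sampling converges
        # towards regions that have not been visited yet.
        weights[region] *= decay
    return results

# Toy usage: uniform prior over 1000 candidate regions, dummy probe.
matches = adaptive_weighted_sampling(np.ones(1000), probe=lambda r: [])
```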
This paper presents a scalable method for automatically discovering frequent visual objects in large multimedia collections, even when the objects are very small. It first formally revisits the problem of mining or discovering such objects, and then generalizes two kinds of existing methods for probing candidate object seeds: weighted adaptive sampling and hashing-based methods. The key idea is that the collision frequencies produced by hashing-based methods can be converted into a prior probability density function given as input to a weighted adaptive sampling algorithm. This allows the effectiveness of any hashing scheme to be evaluated in a more general way and compared with other priors, e.g. priors guided by visual saliency. We then introduce a new hashing strategy that works first at the visual level and then at the geometric level. This strategy allows us to integrate weak geometric constraints into the hashing phase itself, and not only neighborhood constraints as in previous works. Experiments conducted on a new dataset introduced in this paper show that this new hashing-based prior drastically reduces the number of tentative probes required to discover small objects instantiated several times in a large dataset.
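A minimal sketch of the conversion from collision frequencies to a sampling prior, assuming a generic `hash_code` function (visual-only or visual plus weak geometry) mapping each candidate seed's descriptor to a bucket key; the exact hashing scheme and normalization are assumptions, not the paper's formulation. The resulting vector can be passed as the `prior` of the sampler sketched above.

```python
from collections import Counter
import numpy as np

def collision_prior(descriptors, hash_code):
    """Build a sampling prior from hash-bucket collision counts.

    descriptors : list of local feature descriptors, one per candidate seed.
    hash_code   : callable mapping a descriptor to a hashable bucket key.
    Seeds whose bucket collides often (i.e. features likely repeated across
    the collection) receive higher probability mass.
    """
    buckets = [hash_code(d) for d in descriptors]
    counts = Counter(buckets)
    # A seed is more promising if other features fall into the same bucket.
    freq = np.array([counts[b] - 1 for b in buckets], dtype=float)
    if freq.sum() == 0:
        return np.full(len(buckets), 1.0 / len(buckets))
    return freq / freq.sum()

# Toy usage: random binary codes as a stand-in hashing scheme.
rng = np.random.default_rng(0)
descs = [rng.integers(0, 2, 8) for _ in range(500)]
prior = collision_prior(descs, hash_code=lambda d: tuple(d))
```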
This paper considers the problem of recognizing legal entities in visual content, in a similar way to named-entity recognizers for text documents. Whereas previous works were restricted to the recognition of a few tens of logotypes, we generalize the problem to the recognition of thousands of legal persons, each modeled by a rich corporate identity automatically built from web images. We introduce a new geometrically-consistent instance-based classification method that is shown to outperform state-of-the-art techniques on several challenging datasets while being much more scalable. Further experiments performed on an automatic web crawl of 5,824 legal entities demonstrate the scalability of the approach.
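To make the geometric-consistency idea concrete, here is a rough sketch of instance-based scoring with geometric verification: each entity is represented by putative keypoint matches between the query and one of its reference images, and the entity with the most geometrically consistent matches wins. This is only the general idea, not the paper's exact method; it assumes OpenCV and pre-computed descriptor matches.

```python
import numpy as np
import cv2

def geometric_score(query_pts, ref_pts, reproj_thresh=5.0):
    """Count RANSAC inliers of a homography fitted on putative matches."""
    if len(query_pts) < 4:  # findHomography needs at least 4 pairs
        return 0
    q = np.float32(query_pts).reshape(-1, 1, 2)
    r = np.float32(ref_pts).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(q, r, cv2.RANSAC, reproj_thresh)
    return 0 if mask is None else int(mask.sum())

def classify(query_matches):
    """query_matches: {entity_id: (query_pts, ref_pts)} from descriptor matching."""
    scores = {e: geometric_score(q, r) for e, (q, r) in query_matches.items()}
    return max(scores, key=scores.get), scores
```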
This paper presents a vision-based media event detection system built on the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). Its main originality lies in relying on transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient transmedia events.
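One plausible form such transmedia denoising can take is keeping only near-duplicate image clusters that are corroborated by several distinct media sources; the record format and threshold below are assumptions for illustration, not the system's actual criterion.

```python
from collections import defaultdict

def filter_events(detections, min_sources=3):
    """Keep near-duplicate image clusters seen in several distinct media.

    detections : iterable of (cluster_id, source) pairs, where cluster_id
                 identifies a group of near-duplicate images and source is
                 the media outlet (website, press agency, TV, newspaper).
    """
    sources_per_cluster = defaultdict(set)
    for cluster_id, source in detections:
        sources_per_cluster[cluster_id].add(source)
    # A cluster circulated by enough independent sources is kept as an event.
    return {c for c, s in sources_per_cluster.items() if len(s) >= min_sources}

# Toy usage
events = filter_events([("img1", "AFP"), ("img1", "lemonde.fr"),
                        ("img1", "TF1"), ("img2", "blogX")])
```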