Abstract. We propose an algorithm for automating the task of "Tidying up Art" introduced by the comedian Wehrli [1]. Driven by a strong sense of order and tidiness, Wehrli systematically dissects famous artworks into their constituents and rearranges them according to certain ordering principles. The proposed algorithmic solution to this problem builds on a number of recent advances in image segmentation and grouping. It has two important advantages: Firstly, the computerized tidying up of art is substantially faster, requiring only a few seconds on state-of-the-art GPUs compared to many hours of manual labor. Secondly, the computed part decomposition and reordering is fully reproducible. In particular, the arrangement of parts is determined by mathematically transparent criteria rather than the invariably subjective and irreproducible human sense of order.
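As a rough illustration of such a pipeline, the sketch below segments an artwork into parts and orders them by a simple, reproducible criterion. The segmentation method (SLIC superpixels) and the sorting keys (mean hue, then area) are assumptions chosen for illustration; the abstract does not specify which segmentation algorithm or ordering principles are actually used.

import numpy as np
from skimage import color, io, segmentation

def tidy_up(image_path, n_parts=50):
    img = io.imread(image_path)  # H x W x 3 RGB image of the artwork
    labels = segmentation.slic(img, n_segments=n_parts, compactness=10)
    hsv = color.rgb2hsv(img)
    parts = []
    for lab in np.unique(labels):
        mask = labels == lab
        parts.append({
            "label": int(lab),
            "area": int(mask.sum()),                 # part size in pixels
            "hue": float(hsv[..., 0][mask].mean()),  # mean hue of the part
        })
    # "Tidying up": a fully reproducible ordering of the parts by hue, then area.
    return sorted(parts, key=lambda p: (p["hue"], p["area"]))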
Finding objects and motifs across artworks is of great importance for art history, as it helps to understand individual works and to analyze the relations between them. The advent of digitization has produced extensive digital art collections with many research opportunities. However, manual approaches are inadequate for this amount of data, and appropriate computer-based methods are required to analyze it. This article presents a visual search algorithm and user interface that support art historians in finding objects and motifs in extensive datasets. Artistic image collections are subject to significant domain shifts induced by large variations in styles, artistic media, and materials. This poses new challenges for most computer vision models, which are trained on photographs. To alleviate this problem, we introduce a multi-style feature aggregation that projects images into the same distribution, leading to more accurate and style-invariant search results. Our retrieval system is based on a voting procedure combined with fast nearest-neighbor search and enables finding and localizing motifs within an extensive image collection in seconds. The presented approach significantly improves on the state of the art in terms of accuracy and search time on various datasets and applies to large and inhomogeneous collections. In addition to the search algorithm, we introduce a user interface that allows art historians to apply our algorithm in practice. The interface enables users to search for single regions or for multiple regions linked by different connection types, and it includes an interactive feedback system to further improve retrieval results. With our methodological contribution and easy-to-use interface, this work represents further progress towards a computer-based analysis of visual art.
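The multi-style feature aggregation can be sketched roughly as follows: an image is rendered in several styles, CNN features are extracted from each rendering, and the normalized features are averaged into a single, more style-invariant descriptor. The backbone (a torchvision ResNet-50) and the stylize() helper below are placeholders for illustration; the paper's actual style-transfer model and feature extractor may differ.

import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the pooled 2048-d convolutional features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def multi_style_descriptor(pil_image, style_models):
    # stylize() is a hypothetical style-transfer helper, not part of the paper's code.
    variants = [pil_image] + [stylize(pil_image, m) for m in style_models]
    feats = torch.stack([backbone(preprocess(v).unsqueeze(0)).squeeze(0) for v in variants])
    feats = torch.nn.functional.normalize(feats, dim=1)             # L2-normalize per variant
    return torch.nn.functional.normalize(feats.mean(dim=0), dim=0)  # aggregate into one descriptor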
The search for specific objects or motifs is essential to art history, as both help to decode the meaning of artworks. Digitization has produced large art collections, but manual methods are insufficient to analyze them. In the following, we introduce an algorithm that allows users to search for image regions containing specific motifs or objects and to find similar regions in an extensive dataset, helping art historians to analyze large digitized art collections. Computer vision has produced efficient methods for visual instance retrieval across photographs. Applied to art collections, however, they reveal severe deficiencies because of diverse motifs and massive domain shifts induced by differences in techniques, materials, and styles. In this paper, we present a multi-style feature fusion approach that successfully reduces the domain gap and improves retrieval results without labeled data or curated image collections. Our region-based voting with GPU-accelerated approximate nearest-neighbor search [29] allows us to find and localize even small motifs within an extensive dataset in a few seconds. We obtain state-of-the-art results on the Brueghel dataset [2, 53] and demonstrate that our approach generalizes to inhomogeneous collections with a large number of distractors.
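A minimal sketch of this retrieval stage, assuming FAISS as the nearest-neighbor library of [29]: region descriptors from the collection are indexed once, and each query region votes for the images that contain its nearest neighbors. The IVF index parameters and the similarity-weighted voting are illustrative assumptions, not the exact configuration used in the paper.

import faiss                      # approximate nearest-neighbor search library of [29]
from collections import defaultdict

def build_index(region_descs, n_lists=1024):
    # region_descs: float32 array of shape (N, d) with L2-normalized region descriptors
    d = region_descs.shape[1]
    quantizer = faiss.IndexFlatIP(d)
    index = faiss.IndexIVFFlat(quantizer, d, n_lists, faiss.METRIC_INNER_PRODUCT)
    index.train(region_descs)
    index.add(region_descs)
    # With faiss-gpu installed, faiss.index_cpu_to_all_gpus(index) moves the search to GPUs.
    return index

def retrieve(index, query_descs, region_to_image, k=50):
    # Each query region votes for the images owning its k nearest regions.
    scores = defaultdict(float)
    sims, ids = index.search(query_descs, k)
    for q in range(query_descs.shape[0]):
        for sim, idx in zip(sims[q], ids[q]):
            if idx != -1:
                scores[region_to_image[idx]] += float(sim)
    return sorted(scores.items(), key=lambda kv: -kv[1])   # collection images ranked by votes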