We present Search Anything, a novel approach to similarity search in images. In contrast to other approaches to image similarity search, Search Anything enables users to utilize point, box, and text prompts to search for similar regions in a set of images. The region selected by a prompt is automatically segmented, and a binary feature vector is extracted. This feature vector is then used as a query against an image region index, and the images that contain matching regions are returned. Search Anything is trained in a self-supervised manner on mask features extracted by the FastSAM foundation model and semantic features for masked image regions extracted by the CLIP foundation model, learning binary hash code representations for image regions. By coupling these two foundation models, images can be indexed and searched at a finer granularity than whole-image similarity. Experiments on several datasets from different domains in a zero-shot setting demonstrate the benefits of Search Anything as a versatile region-based similarity search approach for images. The efficacy of the approach is further supported by qualitative results. Ablation studies evaluate how the proposed combination of semantic and segmentation features, together with masking, improves the performance of Search Anything over a baseline using CLIP features alone. For large regions, relative improvements of up to 9.87% in mean average precision are achieved. Furthermore, considering context is beneficial for searching small image regions; a context of 3 times an object's bounding box gives the best results. Finally, we measure computation time and determine storage requirements.
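The retrieval step described above lends itself to a compact illustration: once a query region has been segmented and embedded, search reduces to Hamming-distance lookup over binary codes. Below is a minimal sketch in NumPy; the random-projection hashing and the placeholder region embeddings are assumptions standing in for the learned binary encoder and the FastSAM/CLIP features of the paper.

```python
# Minimal sketch of region-level retrieval with binary hash codes.
# Random-projection LSH over placeholder embeddings stands in for the
# learned hash encoder trained on FastSAM mask + CLIP region features.
import numpy as np

rng = np.random.default_rng(0)

DIM, BITS = 512, 64                       # embedding dim, hash length
proj = rng.standard_normal((DIM, BITS))   # fixed random hyperplanes

def hash_code(features: np.ndarray) -> np.ndarray:
    """Binarize embeddings into {0,1} codes (stand-in for the
    learned binary encoder in the paper)."""
    return (features @ proj > 0).astype(np.uint8)

# Index: one hash code per segmented region, tagged with its image id.
region_embeddings = rng.standard_normal((10_000, DIM))  # placeholder features
image_ids = rng.integers(0, 1_000, size=10_000)
index = hash_code(region_embeddings)

def search(query_embedding: np.ndarray, k: int = 5):
    """Return image ids of the k regions closest in Hamming distance."""
    q = hash_code(query_embedding)
    dists = (index != q).sum(axis=1)      # Hamming distance per region
    top = np.argsort(dists)[:k]
    return image_ids[top], dists[top]

ids, dists = search(rng.standard_normal(DIM))
print(ids, dists)
```

Because the codes are short bit strings rather than float vectors, the index stays compact and the distance computation is a cheap XOR-and-popcount, which is what makes region-level (rather than whole-image) indexing tractable at scale.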
Purpose: The digitalization of archival management has developed rapidly with the maturation of digital technology. With the exponential growth of data, archival resources have transitioned from single modalities, such as text, images, audio and video, to integrated multimodal forms. This paper identifies key trends, gaps and areas of focus in the field. Furthermore, it proposes a theoretical organizational framework based on deep learning to address the challenges of managing archives in the era of big data.
Design/methodology/approach: Via a comprehensive systematic literature review, the authors investigate the field of multimodal archive resource organization and the application of deep learning techniques in archive organization. A systematic search and filtering process is conducted to identify relevant articles, which are then summarized, discussed and analyzed to provide a comprehensive understanding of the existing literature.
Findings: The findings reveal that most research on multimodal archive resources focuses predominantly on storage, management and retrieval. Furthermore, the utilization of deep learning techniques in image archive retrieval is increasing, highlighting their potential for enhancing image archive organization practices; however, practical research and implementation remain scarce. The review also underscores gaps in the literature, emphasizing the need for more practical case studies and the application of theoretical concepts in real-world scenarios. In response to these insights, the study proposes an innovative deep learning-based organizational framework designed to navigate the complexities inherent in managing multimodal archive resources, representing a significant stride toward more efficient and effective archival practices.
Originality/value: This study comprehensively reviews the existing literature on the organization of multimodal archive resources. Additionally, a theoretical organizational framework based on deep learning is proposed, offering a novel perspective and solution for further advancements in the field. These insights contribute both theoretically and practically, providing valuable knowledge for researchers, practitioners and archivists involved in organizing multimodal archive resources.
Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this paper, we propose data-augmentation methods for effective and robust ranking performance. One of the key benefits of using data augmentation is in achieving sample efficiency, or learning effectively when we have only a small amount of training data. We propose supervised and unsupervised data augmentation schemes by creating training data using parts of the relevant documents in the query-document pairs. We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements under most dataset sizes. Apart from sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements in in-domain and, more prominently, out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.
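To make the contrastive objective concrete, the sketch below shows an InfoNCE-style loss over a batch of query-document embedding pairs, where row i of the document batch is an augmented positive for query i (e.g., a passage sampled from a relevant document) and the other rows serve as in-batch negatives. The encoder is omitted and this exact loss form is an assumption; the paper adapts a family of such losses rather than this single variant.

```python
# Minimal sketch of an InfoNCE-style contrastive loss for ranking,
# assuming augmented positives are aligned with queries by batch index.
import torch
import torch.nn.functional as F

def info_nce(query_emb: torch.Tensor, doc_emb: torch.Tensor,
             temperature: float = 0.05) -> torch.Tensor:
    """query_emb, doc_emb: (batch, dim). Row i of doc_emb is the
    augmented positive for query i; other rows act as negatives."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature              # (batch, batch) similarities
    labels = torch.arange(q.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings in place of a fine-tuned encoder.
q = torch.randn(8, 256)
d = torch.randn(8, 256)
print(info_nce(q, d))
```

The in-batch-negatives design is what lets the augmented passages regularize the model cheaply: each batch supplies batch-1 negatives per query without any extra mining step.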