The conventional models of authorization have been designed for database systems supporting the hierarchical, network, and relational models of data. However, these models are not adequate for next-generation database systems that support richer data models, including object-oriented concepts and semantic data modeling concepts. Rabitti, Woelk, and Kim [14] presented a preliminary model of authorization for use as the basis of an authorization mechanism in such database systems. In this paper we present a fuller model of authorization that fills a few major gaps that the conventional models of authorization cannot fill for next-generation database systems. We also further formalize the notion of implicit authorization and refine its application to object-oriented and semantic modeling concepts. We also describe a user interface for using the model of authorization and consider key issues in implementing the authorization model.
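To make the idea of implicit authorization concrete, here is a minimal sketch, assuming a simple granularity hierarchy in which a grant on a node (e.g., a class) implicitly covers its descendants (e.g., its instances); the node names and propagation rule are illustrative, not the paper's exact formalism.

```python
# Minimal sketch of implicit authorization along a granularity hierarchy.
# The node names and the upward-lookup rule are illustrative assumptions.

class AuthNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.explicit_grants = set()   # e.g. {("alice", "read")}

    def is_authorized(self, user, op):
        """An explicit grant on a node implicitly covers all its descendants."""
        node = self
        while node is not None:
            if (user, op) in node.explicit_grants:
                return True
            node = node.parent
        return False

db = AuthNode("database")
cls = AuthNode("class:Document", parent=db)
obj = AuthNode("instance:doc-42", parent=cls)

cls.explicit_grants.add(("alice", "read"))
print(obj.is_authorized("alice", "read"))   # True, via implicit authorization
print(obj.is_authorized("alice", "write"))  # False, no explicit or implicit grant
```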
The signature file access method has proved to be a convenient indexing technique, in particular for text data. Because it can deal with unformatted data, many application domains have shown interest in signature file techniques, e.g., office information systems and statistical and logic databases. We argue that multimedia databases should also take advantage of this method, provided convenient storage structures for organizing signature files are available. Our main concern here is the dynamic organization of signatures based on a partitioning paradigm called Quick Filter. A signature file is partitioned by a hashing function, and the partitions are organized by linear hashing. A thorough performance evaluation of the new scheme is provided, and it is compared with single-level and multilevel storage structures. Results show that Quick Filter is economical in space and very convenient for applications dealing with large files of dynamic data, and where user queries result in signatures with high weights. These characteristics are particularly interesting for multimedia databases, where integrated access to attributes, text, and images must be provided.
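As a rough illustration of the partitioning idea, the sketch below hashes each signature on its low-order bits to choose a partition and probes only the partitions whose key can contain the query's set bits; it is a simplification that omits the linear-hashing growth of the partition set, and the bit widths and signatures are assumptions for illustration only.

```python
# Illustrative sketch of partitioning a signature file by hashing on the
# low-order signature bits (a simplification of the Quick Filter idea;
# the real scheme grows the number of partitions with linear hashing).

KEY_BITS = 4  # number of low-order bits used as the partition key

def partition_key(signature: int) -> int:
    return signature & ((1 << KEY_BITS) - 1)

def build_partitions(signatures):
    partitions = {}
    for sig in signatures:
        partitions.setdefault(partition_key(sig), []).append(sig)
    return partitions

def query(partitions, query_sig: int):
    """Return signatures that contain all set bits of the query signature.

    Only partitions whose key is a superset of the query's key bits can
    hold matches, so high-weight queries prune most partitions.
    """
    qkey = partition_key(query_sig)
    results = []
    for key, sigs in partitions.items():
        if key & qkey == qkey:  # partition may contain matches
            results.extend(s for s in sigs if s & query_sig == query_sig)
    return results

parts = build_partitions([0b1010_0011, 0b0110_1100, 0b1111_0011])
print(query(parts, 0b0000_0011))  # only the partition with key 0b0011 is probed
```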
As the number of digital images is growing fast and Content-based Image Retrieval (CBIR) is gaining in popularity, CBIR systems should leap towards Web-scale datasets. In this paper, we report on our experience in building an experimental similarity search system on a test collection of more than 50 million images. The first big challenge we faced was obtaining a collection of images of this scale with the corresponding descriptive features. We tackled the non-trivial process of image crawling and extraction of several MPEG-7 descriptors. The result of this effort is a test collection, the first of this scale, opened to the research community for experiments and comparisons. The second challenge was to develop indexing and searching mechanisms able to scale to the target size and to answer similarity queries in real time. We achieved this goal by creating sophisticated centralized and distributed structures based purely on the metric space model of data. We combined them, which resulted in an extremely flexible and scalable solution. In this paper, we study in detail the performance of this technology and its evolution as the data volume grows by three orders of magnitude. The results of the experiments are very encouraging and promising for future applications.
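The sketch below illustrates the metric space model of data with a simple pivot-based filter built on the triangle inequality; the described system uses far more sophisticated centralized and distributed structures, and the distance function, pivots, and data here are assumptions for illustration only.

```python
# Minimal sketch of pivot-based filtering in the metric space model.
# Objects are compared only through a distance function; precomputed
# object-to-pivot distances let many candidates be discarded cheaply.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PivotIndex:
    def __init__(self, objects, pivots, dist=euclidean):
        self.objects = objects
        self.pivots = pivots
        self.dist = dist
        # Precompute object-to-pivot distances once at build time.
        self.table = [[dist(o, p) for p in pivots] for o in objects]

    def range_query(self, q, radius):
        dq = [self.dist(q, p) for p in self.pivots]
        hits = []
        for o, row in zip(self.objects, self.table):
            # Triangle inequality: |d(q,p) - d(o,p)| lower-bounds d(q,o),
            # so the object can be discarded without computing d(q,o).
            if any(abs(dq[i] - row[i]) > radius for i in range(len(self.pivots))):
                continue
            if self.dist(q, o) <= radius:
                hits.append(o)
        return hits

idx = PivotIndex([(0, 0), (3, 4), (6, 8)], pivots=[(0, 0), (10, 10)])
print(idx.range_query((3, 3), radius=2.0))  # [(3, 4)]
```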
Similarity search structures for metric data typically bound object partitions by ball regions. Since regions can overlap, a relevant issue is to estimate the proximity of regions in order to predict the number of objects in the regions' intersection. This paper analyzes the problem using a probabilistic approach and provides a solution that effectively computes the proximity through realistic heuristics that only require small amounts of auxiliary data. An extensive simulation to validate the technique is provided. An application is developed to demonstrate how the proximity measure can be successfully applied to approximate similarity search. Search speedup is achieved by ignoring data regions whose proximity to the query region is smaller than a user-defined threshold. This idea is implemented in a metric tree environment for similarity range and "nearest neighbors" queries. Several measures of efficiency and effectiveness are applied to evaluate the proposed approximate search algorithms on real-life data sets. An analytical model is developed to relate proximity parameters and the quality of search. Improvements of two orders of magnitude are achieved for moderately approximated search results. We demonstrate that the precision of proximity measures can significantly influence the quality of approximate algorithms.
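A minimal sketch of the approximate-search idea follows: ball regions whose proximity to the query region falls below a user-defined threshold are skipped, trading a small loss in recall for fewer region accesses. The proximity function shown is a crude geometric stand-in for the paper's probabilistic measure, and the data layout is an assumption for illustration only.

```python
# Sketch of approximate range search that skips ball regions whose
# proximity to the query ball falls below a user-defined threshold.
# The proximity function is an illustrative geometric proxy, not the
# paper's probabilistic measure.

import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def proximity(center, radius, q, q_radius):
    """Crude overlap proxy in [0, 1]: 0 when the balls are disjoint,
    approaching 1 as the query ball sinks into the region."""
    overlap = radius + q_radius - dist(center, q)
    return max(0.0, min(1.0, overlap / (2 * q_radius)))

def approximate_range_search(regions, q, q_radius, threshold):
    """regions: list of (center, radius, objects).  Regions whose proximity
    to the query ball is below `threshold` are ignored entirely."""
    hits = []
    for center, radius, objects in regions:
        if proximity(center, radius, q, q_radius) < threshold:
            continue  # approximate step: skip an unpromising region
        hits.extend(o for o in objects if dist(o, q) <= q_radius)
    return hits

regions = [((0, 0), 2.0, [(0, 1), (1, 1)]),
           ((10, 10), 2.0, [(10, 9)])]
print(approximate_range_search(regions, (0.5, 0.5), 1.0, threshold=0.3))
```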