SUMMARY Virtual reality (VR) systems are useful tools because they allow users to alter environmental settings and the locations of landmarks quickly and precisely. Primates have been shown to be able to navigate in virtual environments. For rodents, however, all previous attempts to develop VR systems in which rats behave as they would in corresponding 3-D environments have failed. The question arises as to whether rodents can, in principle, be trained to navigate in a properly designed virtual environment (VE), or whether this ability is limited to primates and humans. We built a virtual reality set-up that takes the wide-angle visual system of rats into account. We show for the first time that rats readily learn spatial tasks in this VE. This set-up opens up new opportunities for investigating information processing in navigation (e.g. the importance of optic flow or vestibular input).
A traffic matrix encompassing the entire Internet would be very valuable. Unfortunately, from any given vantage point in the network, most traffic is invisible. In this paper we present results that hold promise for this problem. First, we show a new characterization result: traffic matrices (TMs) typically have very low effective rank. This result applies to purely spatial TMs (those with no temporal component) over a wide range of spatial granularities. Next, we define an inference problem whose solution allows one to infer invisible TM elements; this problem relies crucially on an atomicity property that we define. Finally, we show example solutions of this inference problem via two different methods: regularized regression and matrix completion. The example consists of an AS inferring the amount of invisible traffic passing between other pairs of ASes. Using this example, we illustrate the accuracy of the methods as a function of spatial granularity.
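As a rough illustration of how a low effective rank could be exploited to estimate invisible TM elements, the sketch below runs an iterative truncated-SVD completion on a synthetic traffic matrix. This is not the paper's method; the rank, matrix size, and fraction of hidden entries are hypothetical choices made only for the toy example.

```python
import numpy as np

def complete_low_rank(tm_observed, mask, rank=2, n_iters=200):
    """Fill in invisible TM entries by repeatedly projecting onto a low-rank matrix.

    tm_observed : traffic matrix with visible entries (anything elsewhere)
    mask        : boolean array, True where an entry is visible
    rank        : assumed effective rank of the true TM (a hypothetical choice)
    """
    filled = np.where(mask, tm_observed, 0.0)
    for _ in range(n_iters):
        # Truncated SVD enforces the low-effective-rank assumption.
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        # Keep the visible entries fixed; update only the invisible ones.
        filled = np.where(mask, tm_observed, low_rank)
    return filled

# Toy example: a synthetic rank-2 TM with roughly 40% of entries hidden.
rng = np.random.default_rng(0)
true_tm = rng.random((20, 2)) @ rng.random((2, 20))
mask = rng.random((20, 20)) > 0.4
estimate = complete_low_rank(np.where(mask, true_tm, 0.0), mask, rank=2)
hidden_rmse = np.sqrt(np.mean((estimate[~mask] - true_tm[~mask]) ** 2))
print("RMSE on hidden entries:", hidden_rmse)
```

Regularized regression, the other method mentioned in the abstract, would instead fit the hidden entries subject to routing or volume constraints; the low-rank projection above stands in only for the matrix-completion flavor of the problem.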
Abstract. Automatic image annotation lets the user search an image database using keywords, which is often more practical than a query-by-example approach. In this work, we present a novel image annotation scheme that is fast, effective, and scales well to a large number of keywords. We first provide a feature weighting scheme suited to image annotation, and then an annotation model based on the one-class support vector machine. We show that the system works well even with a small number of visual features. We perform experiments on the Corel Image Collection and compare the results with a well-established image annotation system.
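A minimal sketch of the one-class-SVM idea, assuming one model per keyword trained only on images that carry that keyword, might look as follows. The synthetic feature vectors, keywords, and kernel parameters are illustrative assumptions, not the paper's feature weighting scheme or tuned settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical visual feature vectors grouped by keyword (stand-ins for real,
# weighted image features; keyword names and dimensions are made up).
rng = np.random.default_rng(1)
training_features = {
    "tiger": rng.normal(0.0, 1.0, size=(50, 16)),
    "beach": rng.normal(3.0, 1.0, size=(50, 16)),
}

# One one-class SVM per keyword, fit only on images labelled with that keyword.
models = {kw: OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(feats)
          for kw, feats in training_features.items()}

def annotate(feature_vector, top_k=1):
    """Rank keywords by each keyword model's decision value for a new image."""
    scores = {kw: m.decision_function(feature_vector.reshape(1, -1))[0]
              for kw, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(annotate(rng.normal(3.0, 1.0, size=16)))  # likely ['beach'] for this toy data
```

Because each keyword is modelled independently of the others, adding a new keyword only requires fitting one more model, which is one way such a scheme can scale to large vocabularies.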
Image classification systems have recently been boosted by methods that use local features computed around interest points, delivering higher robustness against partial occlusion and cluttered backgrounds. In this paper we propose relational features calculated over multiple directions and scales around these interest points. A further important design issue is the choice of similarity measure for comparing the bags of local feature vectors generated by each image; here we propose a novel approach that computes image similarity using cluster co-occurrence matrices of local features. Excellent results are achieved on a widely used medical image classification task, and ideas for generalizing to other tasks are discussed.
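To make the notion of cluster co-occurrence matrices concrete, here is a hedged sketch: local descriptors are quantized against a k-means codebook, each image's bag of labels yields a normalized co-occurrence matrix, and two images are compared by cosine similarity between those matrices. The pairing rule (all label pairs within a bag), descriptor type, and parameters are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cooccurrence_matrix(descriptors, codebook, n_clusters):
    """Normalized co-occurrence counts of cluster labels within one image's bag."""
    labels = codebook.predict(descriptors)
    mat = np.zeros((n_clusters, n_clusters))
    for i in labels:
        for j in labels:
            mat[i, j] += 1.0
    return mat / mat.sum()  # normalize so images with different point counts compare fairly

def image_similarity(desc_a, desc_b, codebook, n_clusters):
    """Cosine similarity between the two images' co-occurrence matrices."""
    a = cooccurrence_matrix(desc_a, codebook, n_clusters).ravel()
    b = cooccurrence_matrix(desc_b, codebook, n_clusters).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy descriptors standing in for local features extracted around interest points.
rng = np.random.default_rng(2)
all_desc = rng.normal(size=(300, 8))
codebook = KMeans(n_clusters=5, n_init=10, random_state=0).fit(all_desc)
print(image_similarity(all_desc[:40], all_desc[40:80], codebook, n_clusters=5))
```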