This article describes the general architecture and application of a remote laboratory, based on Matlab/Simulink, for teaching control theory. The proposed system overcomes the time and space limitations of laboratories that rely on the real physical systems used in control courses: control lab assignments can be carried out on the various physical processes available in the remote laboratories. Examples that show the validity and applicability of the presented architecture are also introduced.
This work presents a visual information fusion approach for robust probability-oriented feature matching. It builds on omnidirectional imaging and is tested in a visual localization framework in mobile robotics. General visual localization methods have been extensively studied and optimized in terms of performance; however, one of the main threats to the final estimate is the presence of outliers. In this paper, we present several contributions to deal with that issue. First, 3D information associated with SURF (Speeded-Up Robust Features) points detected on the images is inferred under the Bayesian framework established by Gaussian processes (GPs). This information represents a probability distribution for the existence of feature points, which is successively fused and updated across the robot's poses. Secondly, this distribution can be sampled and projected onto the next 2D image frame, at time t+1, by means of a filter-based motion prediction. This strategy yields relevant areas in the image reference system from which probable matches can be detected, in terms of the accumulated probability of feature existence. The approach entails an adaptive probability-oriented matching search, which concentrates on significant areas of the image but also considers unseen parts of the scene, thanks to an internal modulation of the probability distribution domain computed from the current uncertainty of the system. The main outcomes confirm a robust feature matching, which permits producing consistent localization estimates, aided by the odometry prior to estimate the scale factor. Publicly available datasets have been used to validate the design and operation of the approach. Moreover, the proposal has been compared, first, with a standard feature matching and, second, with a localization method based on an inverse depth parametrization. The results confirm the validity of the approach in terms of feature matching, localization accuracy, and time consumption.
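As a rough illustration of the probability-oriented matching idea described above (a minimal sketch, not the authors' implementation), the following Python example fits a Gaussian process over previously matched keypoint locations and thresholds the predicted evidence, inflated by the predictive uncertainty, to select candidate search regions in the next frame. The keypoint coordinates, scores, and motion offset are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): a Gaussian process models a
# "feature existence" score over image coordinates from previously matched
# keypoints; the prediction is thresholded to obtain candidate search regions
# for matching in the next frame.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical keypoints matched at time t: (x, y) pixel coordinates and a
# score in [0, 1] acting as the observed "existence" evidence.
kp_xy = rng.uniform(0, 640, size=(40, 2))
kp_score = rng.uniform(0.6, 1.0, size=40)

# Fit a GP over image coordinates; the RBF length scale models the spatial
# correlation of the evidence, the WhiteKernel the observation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=60.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(kp_xy, kp_score)

# Predict over a coarse grid of the next frame, after shifting by a predicted
# motion offset (a stand-in for the filter-based motion prediction).
motion_offset = np.array([12.0, -4.0])          # assumed pixel shift t -> t+1
gx, gy = np.meshgrid(np.arange(0, 640, 16), np.arange(0, 480, 16))
grid = np.column_stack([gx.ravel(), gy.ravel()]) - motion_offset
mean, std = gp.predict(grid, return_std=True)

# Candidate search regions: high predicted evidence, inflated by uncertainty so
# that unseen parts of the scene are not discarded outright.
candidate = (mean + 0.5 * std) > 0.7
print(f"{candidate.sum()} of {len(grid)} grid cells selected for matching")
```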
A fully integrated development tool for computer vision systems has been built in the framework of this paper. There are many applications that help the user in the design of such systems, using graphical interfaces and function libraries; in some cases, the final source code can even be generated by these applications. This paper goes a step beyond: it allows the development of computer vision systems from a distributed environment. Besides, and as a distinctive characteristic with regard to other similar utilities, the system is able to automatically optimize task scheduling and assignment depending on the available hardware.
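To make the scheduling idea concrete, here is a hedged sketch (not the tool's actual algorithm) of a greedy assignment of vision-pipeline tasks to heterogeneous processing nodes by earliest estimated finish time. The task names, costs, and node speeds are invented for illustration.

```python
# Minimal sketch (not the tool's actual scheduler): greedy list scheduling that
# assigns vision-pipeline tasks to available processing nodes, picking for each
# task the node with the earliest estimated finish time.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    speed: float                 # relative throughput of the node
    busy_until: float = 0.0      # accumulated load (time units)
    tasks: list = field(default_factory=list)

def schedule(tasks, nodes):
    """Assign (task, cost) pairs to nodes greedily by earliest finish time."""
    # Longer tasks first tends to balance the load better (LPT heuristic).
    for name, cost in sorted(tasks, key=lambda t: -t[1]):
        best = min(nodes, key=lambda n: n.busy_until + cost / n.speed)
        best.busy_until += cost / best.speed
        best.tasks.append(name)
    return nodes

tasks = [("capture", 1.0), ("filter", 2.0), ("segment", 4.0),
         ("features", 3.0), ("classify", 2.5)]
nodes = schedule(tasks, [Node("cpu0", 1.0), Node("cpu1", 1.0), Node("gpu0", 3.0)])
for n in nodes:
    print(f"{n.name}: {n.tasks} (finishes at {n.busy_until:.2f})")
```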