Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets have become available through the Internet, many studies have estimated location using pre-built image-based 3D maps. Most research groups rely on large image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization when images and features are insufficient. A more accurate localization method is proposed based on a probabilistic map, using 3D-to-2D matching correspondences between the map and a query image. The probabilistic feature map is generated in advance by probabilistically modeling the sensor system as well as the uncertainties of the camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. Starting from this initial pose, the proposed algorithm improves accuracy by minimizing the Mahalanobis distance errors between features from the query image and the map. To verify the improvement in localization accuracy, the proposed algorithm is compared with the conventional algorithm in both simulated and real environments.
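The refinement step described above weights each reprojection residual by the feature's covariance rather than treating all correspondences equally. A minimal sketch of such a Mahalanobis-distance reprojection cost is shown below; the function name, the per-feature 2D covariance representation, and the use of a plain Python loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mahalanobis_reproj_error(pts3d, pts2d, covs2d, K, R, t):
    """Sum of squared Mahalanobis distances between projected 3D map
    features and their matched 2D query-image features.

    pts3d   : list of 3-vectors, map points in world coordinates
    pts2d   : list of 2-vectors, matched pixel observations
    covs2d  : list of 2x2 covariance matrices, one per feature
    K       : 3x3 camera intrinsic matrix
    R, t    : rotation matrix and translation of the camera pose
    (All names are hypothetical; this is a sketch, not the paper's code.)
    """
    total = 0.0
    for X, x, S in zip(pts3d, pts2d, covs2d):
        p = K @ (R @ X + t)                 # project map point into the image
        u = p[:2] / p[2]                    # normalize to pixel coordinates
        r = u - x                           # reprojection residual
        total += r @ np.linalg.inv(S) @ r   # weight residual by feature covariance
    return total
```

Minimizing this cost over (R, t), e.g. with a nonlinear least-squares solver started from the PnP pose, down-weights uncertain features instead of treating every correspondence with equal confidence, which is the intuition behind using a probabilistic map.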
While visual tracking has been actively studied in the computer vision community, recognizing and tracking objects beneath the water surface remains a challenging problem, since it often involves several difficulties: 1) poor lighting conditions, 2) limited visibility, 3) high turbidity, and 4) a lack of benchmark image data. Nevertheless, the importance of vision-based capabilities in underwater environments cannot be overstated, because many underwater robots today are guided by vision systems. In this work, we propose an efficient and accurate method for tracking texture-free objects in underwater environments. The challenge is to segment out and track objects of interest in the presence of camera motion and changes in object scale. We approach this problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, we extract shape context descriptors that are used to classify objects into predetermined targets of interest. In the tracking phase, we apply the mean-shift tracking algorithm based on the Bhattacharyya coefficient. The proposed framework is validated on real data sets obtained from a water tank, and we observe promising performance of the algorithm.
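The tracking phase relies on the Bhattacharyya coefficient to measure similarity between the target's appearance model and a candidate region, typically as normalized color histograms. A minimal sketch of that similarity measure is given below; the function name and the histogram inputs are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms.

    p, q : nonnegative arrays of equal shape (e.g. color histograms
           of the target model and a candidate region). Both are
           normalized here so the result lies in [0, 1], where 1
           means identical distributions and 0 means no overlap.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))  # sum of sqrt of bin-wise products
```

In a mean-shift tracker, the candidate window is shifted toward the location that maximizes this coefficient; because a higher coefficient means the candidate histogram better matches the target model, the tracker effectively performs gradient ascent on appearance similarity.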