Abstract: Using fiducial markers ensures reliable detection and identification of planar features in images. Fiducials are used in a wide range of applications, especially when a reliable visual reference is needed, e.g., to track the camera in cluttered or textureless environments. A marker designed for such applications must be robust to partial occlusions, varying distances and angles of view, and fast camera motions. In this paper, we present a robust, highly accurate fiducial system, whose markers consist of concentric rings, along with its theoretical foundations. Relying on projective properties, it makes it possible to robustly localize the imaged marker and to accurately detect the image of the (common) circle center. We demonstrate that our system can detect and accurately localize these circular fiducials under very challenging conditions, and the experimental results show that it outperforms other recent fiducial systems.
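The projective machinery behind such a system can be illustrated with a classical result on concentric circles (a known property from the literature, not necessarily the exact procedure of this paper): if C1 and C2 are the 3x3 symmetric coefficient matrices of two imaged concentric circles, then C1^{-1}C2 has a repeated eigenvalue and a simple one, and the eigenvector of the simple eigenvalue is the image of the common center. A minimal numpy sketch, assuming the conic coefficients come from a prior ellipse-fitting step:

```python
import numpy as np

def conic_matrix(a, b, c, d, e, f):
    """Conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 as a symmetric 3x3 matrix."""
    return np.array([[a, b / 2, d / 2],
                     [b / 2, c, e / 2],
                     [d / 2, e / 2, f]])

def imaged_center(C1, C2):
    """Image of the common center of two imaged concentric circles.

    The image of the center is the eigenvector of C1^{-1} C2 associated
    with the simple (non-repeated) eigenvalue.
    """
    w, V = np.linalg.eig(np.linalg.solve(C1, C2))
    w, V = np.real(w), np.real(V)
    # With noisy ellipse fits the repeated eigenvalue splits slightly,
    # so pick the eigenvalue farthest from the other two.
    dists = [abs(w[i] - w[j]) + abs(w[i] - w[k])
             for i, j, k in ((0, 1, 2), (1, 0, 2), (2, 0, 1))]
    v = V[:, int(np.argmax(dists))]
    return v[:2] / v[2]  # dehomogenize to pixel coordinates

# Example: circles of radii 1 and 2 centered at the origin, viewed head-on
# (identity homography): the common center maps to (0, 0).
C1 = conic_matrix(1, 0, 1, 0, 0, -1.0)
C2 = conic_matrix(1, 0, 1, 0, 0, -4.0)
print(imaged_center(C1, C2))  # -> [0. 0.]
```

Under a nontrivial homography the two imaged circles become general ellipses, but the eigenvector property is invariant, which is what makes the recovered center accurate across viewing angles.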
Abstract: Most map-building methods employed by mobile robots are based on the assumption that an estimate of robot poses can be obtained from odometry readings or from observing landmarks or other robots. In this paper, we propose methods to build a global geometric map by integrating scans collected by laser range scanners, without using any knowledge about the robots' poses. We consider scans that are collections of line segments. Our approach increases flexibility in data collection, since robots do not need to see each other during mapping, and data can be collected by multiple robots or by a single robot in one or multiple sessions. Experimental results show the effectiveness of our approach in different types of indoor environments.
Index Terms: Map building, multirobot systems, scan matching, map merging, laser range scanners.
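Whatever strategy hypothesizes segment correspondences between two scans, the final alignment typically reduces to a closed-form 2D rigid registration. The following numpy sketch shows that standard building block (the Kabsch/Procrustes solution, not this paper's matching algorithm): given corresponding points, such as endpoints of hypothesized matching line segments, it recovers the least-squares rotation and translation:

```python
import numpy as np

def rigid_transform_2d(P, Q):
    """Least-squares 2D rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 2) arrays of corresponding points, e.g. endpoints of
    hypothesized matching line segments from two scans.
    Returns R (2x2 rotation) and t (2,) such that Q ~= P @ R.T + t.
    """
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t
```

In practice such an estimator would be wrapped in a robust loop (e.g., RANSAC over candidate segment matches), since without pose information the correspondences themselves must be hypothesized and verified.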
The registration of a preoperative 3D model, reconstructed for example from MRI, to intraoperative 2D laparoscopy images is the main challenge in achieving augmented reality in laparoscopy. Current systems have a major limitation: they require the surgeon to manually mark the occluding contours during surgery. This requires the surgeon to fully comprehend the nontrivial concept of occluding contours and consumes surgeon time, directly impacting acceptance and usability. To overcome this limitation, we propose a complete framework for object-class occluding contour detection (OC2D), with application to uterus surgery.
Methods. Our first contribution is a new distance-based evaluation score complying with all the relevant performance criteria. Our second contribution is a loss function combining cross-entropy and two new penalties designed to encourage 1-pixel-thick responses. This allows us to train a U-Net end-to-end, outperforming all competing methods, which tend to produce thick responses. Our third contribution is a dataset of 3818 carefully labelled laparoscopy images of the uterus, which was used to train and evaluate our detector.
Results. Evaluation shows that the proposed detector has a false negative rate similar to existing methods but substantially reduces both the false positive rate and the response thickness. Finally, we ran a user study to evaluate the impact of OC2D against manually marked occluding contours in augmented laparoscopy, using 10 recorded gynecologic laparoscopies and involving 5 surgeons. Using OC2D led to a reduction of 3 minutes and 53 seconds in surgeon time without sacrificing registration accuracy.
Conclusions. We provide a new set of criteria and a distance-based measure to evaluate an OC2D method. We propose an OC2D method which outperforms state-of-the-art methods. The results of the user study indicate that fully automatic augmented laparoscopy is feasible.
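The abstract does not specify the two penalties, so the following PyTorch sketch is purely illustrative of the structure of such a loss: a pixel-wise cross-entropy term plus two stand-in penalties that discourage thick responses. The function name, the penalty definitions, and the weights alpha and beta are assumptions, not the authors' formulation:

```python
import torch
import torch.nn.functional as F

def contour_loss(logits, target, alpha=0.1, beta=0.1):
    """Illustrative contour-detection loss (not the paper's exact penalties).

    logits: (B, 1, H, W) raw network outputs.
    target: (B, 1, H, W) float binary maps with 1-pixel-thick contours.
    """
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    # Stand-in penalty 1: total predicted contour mass should match the
    # (thin) ground truth, discouraging globally thick responses.
    mass = (prob.sum(dim=(1, 2, 3)) - target.sum(dim=(1, 2, 3))).abs().mean()
    mass = mass / (target.shape[2] * target.shape[3])
    # Stand-in penalty 2: extra cost for responses in the 1-pixel ring
    # around the ground-truth contour, where thick responses accumulate.
    ring = F.max_pool2d(target, 3, stride=1, padding=1) - target
    thick = (prob * ring).mean()
    return ce + alpha * mass + beta * thick
```

Both penalties act only through the predicted probabilities, so the loss stays differentiable and can be minimized end-to-end with any segmentation backbone such as a U-Net.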