We present a novel circumferential-scan endoscopic optical coherence tomography (OCT) probe using a circular array of six electrothermal microelectromechanical systems (MEMS) mirrors and six C-lenses. Each MEMS mirror has a 0.5 mm × 0.5 mm mirror plate on a 1.5 mm × 1.3 mm chip and can scan up to 45° at a drive voltage of less than 12 V. Six such mirrors have been successfully packaged into a probe head, and full circumferential scans have been demonstrated. Each scan unit consists of a MEMS mirror and a C-lens, and the six scan units can be designed with different focal lengths to accommodate lesions with uneven surfaces. Integrated into a swept-source OCT system, this MEMS-array-based circumferential scanning probe has been used to image a swine small intestine wrapped around a 20 mm-diameter glass tube. The imaging results show that this new MEMS endoscopic OCT probe has promising applications in large tubular organs.
Abstract. In this paper, we exploit robust depth information together with a simple color-shape appearance model for single-object tracking in crowded dynamic scenes. Since the binocular video streams are captured from a moving camera rig, background subtraction cannot reliably isolate the region of interest. Our main contribution is a novel tracking strategy that employs explicit stereo depth to track and segment the object in crowded dynamic scenes while handling occlusion. Appearance cues, including color and shape, play a secondary role, further refining the foreground obtained by the preceding depth-based segmentation. The proposed depth-driven tracking approach largely alleviates drift, especially when the object frequently interacts with similar-looking background in long sequences. Problems caused by rapid changes in object appearance are also avoided thanks to the stability of the depth cue. Furthermore, we propose a new, simple, and effective depth-based scheme to cope with complete occlusion during tracking. Experiments on a large collection of challenging outdoor and indoor sequences show that our algorithm delivers accurate and reliable tracking that outperforms other state-of-the-art competing algorithms.
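The abstract does not spell out the implementation, but its core idea, letting stereo depth gate the segmentation first and using color/shape cues only to refine it, can be illustrated with a minimal sketch. The function below, its parameter names, and the depth-tolerance and back-projection thresholds are illustrative assumptions, not the authors' code; it presumes a dense per-frame depth (or disparity) map is already available from the stereo rig.

import numpy as np
import cv2

def depth_driven_segment(depth, frame, bbox, target_depth, depth_tol=0.3):
    """Segment the tracked object inside a search window around `bbox`.

    Illustrative sketch (not the paper's method): keep pixels whose depth
    lies near the target's last known depth, then refine the mask with a
    hue-histogram back-projection of the previous target region.

    depth        : HxW float array of depth/disparity values
    frame        : HxWx3 BGR image
    bbox         : (x, y, w, h) previous target box
    target_depth : last estimated depth of the target
    """
    x, y, w, h = bbox
    # Enlarge the previous box to form a search window
    pad = w // 2
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1 = min(x + w + pad, depth.shape[1])
    y1 = min(y + h + pad, depth.shape[0])

    # 1) Primary depth cue: keep pixels close to the target's depth
    win_depth = depth[y0:y1, x0:x1]
    depth_mask = (np.abs(win_depth - target_depth) < depth_tol).astype(np.uint8)

    # 2) Secondary appearance cue: hue-histogram back-projection
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    hsv_win = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv_win], [0], hist, [0, 180], 1)

    # Combine: depth gates the region, color refines it
    mask = depth_mask * (backproj > 50).astype(np.uint8)
    return mask, (x0, y0, x1, y1)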
We present a novel miniature endoscopic optical coherence tomography (OCT) probe based on an electrothermal microelectromechanical systems (MEMS) mirror and a C-lens. The MEMS mirror has a relatively large 0.5 mm × 0.5 mm mirror plate on a small 1.5 mm × 1.3 mm chip, reducing the outer diameter of the endoscopic probe to only 2.5 mm so that the probe can be inserted through the biopsy channel of a conventional endoscope. A long focal length of 12 mm is achieved by proper design of the C-lens. Compared with commonly used GRIN lenses, C-lenses make the working distance and spot size much less sensitive to machining variations. The C-lens-based probe can scan a large field of view (FOV) of 6 mm × 6 mm at a drive voltage of only 5 V. A swept-source endoscopic OCT system has been constructed using this miniature probe. Experiments show lateral and axial resolutions of about 50 and 14 μm, respectively. Various samples, including human fingers, onions, and cancerous colon tissue, have been imaged, and the results suggest that this large-working-distance, large-FOV endoscopic probe is promising for applications in large tubular organs and large-area lesions.
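For reference, the reported resolutions follow the standard free-space OCT relations below (assuming a Gaussian source spectrum). The abstract does not state the probe's center wavelength, bandwidth, or beam diameter, so the relations are quoted symbolically rather than with this probe's actual values:

% Axial resolution set by the source coherence length (Gaussian spectrum assumed),
% lateral resolution set by the focusing optics; symbols are generic, not the paper's numbers.
\Delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}
\qquad
\Delta x \approx \frac{4\lambda_0}{\pi}\,\frac{f}{d}

Here \lambda_0 is the center wavelength, \Delta\lambda the spectral bandwidth, f the focal length, and d the beam diameter at the lens; a long focal length therefore trades a larger working distance against a larger lateral spot.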
The first Agriculture-Vision Challenge aims to encourage research on novel and effective algorithms for agricultural pattern recognition from aerial images, especially for the semantic segmentation task associated with our challenge dataset. Around 57 teams from various countries participated, competing to achieve state-of-the-art performance in aerial agricultural semantic segmentation. The Agriculture-Vision Challenge Dataset was employed, which comprises 21,061 aerial multi-spectral farmland images. This paper summarizes the notable methods and results of the challenge. Our submission server and leaderboard will remain open to researchers interested in this challenge dataset and task. * indicates joint first author. For more information on our dataset and other related efforts in Agriculture-Vision, please visit our CVPR 2020 workshop and challenge website: https://www.agriculture-vision.com.
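The abstract does not restate the scoring protocol. As a generic illustration only (not the official Agriculture-Vision scorer, which may handle overlapping labels and invalid regions differently), semantic segmentation entries of this kind are commonly ranked by mean intersection-over-union (mIoU), computed from a class confusion matrix as sketched here.

import numpy as np

def mean_iou(preds, gts, num_classes):
    """Generic mean IoU over paired predicted/ground-truth label maps.

    Illustrative only; not the challenge's official evaluation code.
    preds, gts : iterables of HxW integer label maps in [0, num_classes)
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(preds, gts):
        # Accumulate a confusion matrix: rows = ground truth, cols = prediction
        idx = g.reshape(-1) * num_classes + p.reshape(-1)
        conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp  # TP + FP + FN per class
    iou = tp / np.maximum(denom, 1)
    return iou.mean()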