A colonoscopy is a medical examination used to check for disease or abnormalities in the large intestine. If necessary, polyps or adenomas can be removed through the scope during the procedure, which can prevent colorectal cancer. However, the polyp detection rate varies with the condition and skill level of the endoscopist; some endoscopists have as much as a 90% chance of missing an adenoma. Artificial intelligence and robot technologies for colonoscopy are being studied to compensate for these problems. In this study, we propose self-supervised monocular depth estimation using spatiotemporal consistency in the colon environment. Our contributions are a loss function for the reconstruction error between adjacent predicted depths and a depth feedback network that uses the predicted depth of the previous frame to predict the depth of the next frame. We performed quantitative and qualitative evaluations of our approach, and the proposed FBNet (depth FeedBack Network) outperformed state-of-the-art unsupervised depth estimation results on the UCL datasets.
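As a rough illustration of the depth feedback idea, the sketch below (PyTorch, with hypothetical layer sizes and names, not the paper's code) concatenates the previous frame's predicted depth to the current frame as a fourth input channel and penalizes the reconstruction error between adjacent depth predictions. A full implementation would warp the previous depth into the current frame using the estimated camera pose before comparing; that step is omitted here.

```python
# Hypothetical sketch of the "depth feedback" idea: the depth predicted for
# frame t-1 is fed back as an extra input channel when predicting frame t,
# and an L1 term ties adjacent depth predictions together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 channel for the previous frame's predicted depth
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, prev_depth):
        x = torch.cat([image, prev_depth], dim=1)
        return self.decoder(self.encoder(x))

def temporal_depth_loss(depth_t, depth_prev):
    # Reconstruction error between adjacent predicted depths; pose-based
    # warping of depth_prev into frame t is omitted for brevity.
    return F.l1_loss(depth_t, depth_prev)

net = FeedbackDepthNet()
frame = torch.rand(1, 3, 128, 160)   # current colonoscopy frame
prev = torch.rand(1, 1, 128, 160)    # depth predicted for the previous frame
depth = net(frame, prev)
loss = temporal_depth_loss(depth, prev)
```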
Depth estimation using monocular camera sensors is an important technique in computer vision. Supervised monocular depth estimation requires large amounts of data acquired from depth sensors; however, acquiring depth data is expensive, and sensor limitations sometimes make it impossible. View-synthesis-based depth estimation is a self-supervised learning method that does not require depth supervision. Previous studies mainly use CNN-based encoders. CNNs are well suited to extracting local features through convolution operations, while recent vision transformers are well suited to extracting global features with their multi-head self-attention modules. In this paper, we propose a hybrid network combining a CNN and a vision transformer for self-supervised monocular depth estimation. We design an encoder-decoder structure that uses CNNs in the earlier stages to extract local features and a vision transformer in the later stages to extract global features. We evaluate the proposed network through various experiments on the KITTI and Cityscapes datasets. The results show higher performance than previous studies with fewer parameters and less computation. Code and trained models are available at https://github.com/fogfog2/manydepthformer.
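The hybrid design can be sketched as follows. This is a minimal illustration with assumed layer sizes, not the released manydepthformer architecture: early CNN stages extract local features, then a transformer encoder models global context over the flattened feature map.

```python
# Minimal sketch of a hybrid CNN + transformer encoder: convolutions for
# local features, self-attention over the flattened feature map for global
# context. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class HybridDepthEncoder(nn.Module):
    def __init__(self, dim=96):
        super().__init__()
        self.cnn = nn.Sequential(                      # local-feature stages
            nn.Conv2d(3, 48, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(48, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)  # global stage

    def forward(self, x):
        f = self.cnn(x)                                # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)          # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

feat = HybridDepthEncoder()(torch.rand(1, 3, 64, 80))
print(feat.shape)  # torch.Size([1, 96, 16, 20])
```

Placing the transformer after the CNN keeps the token sequence short, which is one way such hybrids reduce parameters and computation relative to a pure vision transformer.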
Recently, UAVs (unmanned aerial vehicles) have come into frequent use not only for military but also for civil purposes. A UAV navigates automatically by following coordinates entered in advance, using GPS information; however, this is impossible when GPS signals cannot be received because of jamming or other external interference. To address this problem, we propose a real-time segmentation and classification algorithm for specific regions in UAV images. We use a graph-based superpixel segmentation algorithm as a pre-processing stage for feature extraction, and we choose the most suitable model by analyzing various color models and mixed color models. We also use a support vector machine (SVM) for classification, a machine learning algorithm that can be trained with a small quantity of data. Eighteen color and texture feature vectors are extracted from the UAV image, and three classes of regions (river, vinyl house, rice field) are classified in real time through the training and prediction processes.
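A minimal sketch of this pipeline is shown below, assuming Felzenszwalb graph-based segmentation from scikit-image and simple per-superpixel color statistics as illustrative stand-ins for the paper's 18 color and texture features.

```python
# Hedged sketch: graph-based superpixel segmentation, per-superpixel
# features, and an SVM classifier. Features here (mean/std of color) are
# placeholders, not the paper's actual 18-dimensional feature set.
import numpy as np
from skimage.segmentation import felzenszwalb   # graph-based segmentation
from sklearn.svm import SVC

def superpixel_features(image, segments):
    feats = []
    for label in np.unique(segments):
        pixels = image[segments == label]       # (N, 3) RGB values
        feats.append(np.hstack([pixels.mean(0), pixels.std(0)]))
    return np.array(feats)

image = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
segments = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)
X = superpixel_features(image.astype(float), segments)

# Train an SVM on labeled superpixels (labels are dummies here); classes
# 0/1/2 would correspond to river, vinyl house, and rice field.
y = np.random.randint(0, 3, len(X))
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X)
```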
The volume of aerial scene imagery has greatly increased with the growth of digital optical imaging technology and the development of UAVs. Aerial images have been used for the extraction of ground properties, classification, change detection, image fusion, and mapping. In particular, deep learning algorithms have introduced a new paradigm in image analysis, overcoming limitations in the field of pattern recognition. This paper demonstrates the possibility of applying these methods to a wider range of fields through the segmentation and classification of aerial scenes based on deep learning (ConvNet). We build a four-class image database of 3000 images in total, covering Road, Building, Yard, and Forest. Because each class has a distinct pattern, the resulting feature-vector maps differ. Our system consists of feature extraction, classification, and training. Feature extraction is built from two ConvNet-based layers, and classification is performed with a multilayer perceptron and logistic regression.
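A minimal sketch of such a system, with assumed layer sizes: two ConvNet feature layers followed by an MLP head (logistic regression corresponds to a single linear layer trained with a softmax cross-entropy loss).

```python
# Illustrative sketch of a two-layer ConvNet feature extractor with an MLP
# classifier. Only the four classes (Road, Building, Yard, Forest) come from
# the abstract; all sizes are assumptions.
import torch
import torch.nn as nn

class AerialSceneNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(             # two ConvNet feature layers
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(           # multilayer perceptron head
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),            # softmax applied in the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = AerialSceneNet()(torch.rand(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```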