Robotic endoscopic systems offer a minimally invasive approach to the examination of internal body structures, and their application is rapidly extending to meet the growing need for accurate therapeutic interventions. In this context, it is essential that such systems be able to perform measurements, such as the distance travelled by a wireless capsule endoscope, so as to determine the location of a lesion in the gastrointestinal (GI) tract, or the size of a lesion for diagnostic purposes. In this paper, we investigate the feasibility of performing contactless measurements using a computer vision approach based on neural networks. The proposed system integrates a deep convolutional image registration approach and a multilayer feed-forward neural network in a novel architecture. The main advantage of this system over state-of-the-art approaches is that it is more generic: i) it is unconstrained by specific models, ii) it is more robust to non-rigid deformations, and iii) it is adaptable to most endoscopic systems and environments, while enabling measurements of enhanced accuracy. The performance of the system is evaluated under ex vivo conditions using a phantom experimental model and a robotically assisted test bench. The results obtained are promising, suggesting wider applicability and impact in endoscopy in the era of big data.
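As a rough illustration of the kind of architecture described above, and not the authors' implementation, the following PyTorch sketch pairs a convolutional registration encoder operating on consecutive frames with a multilayer feed-forward regressor that maps the registration features to a scalar measurement. All layer sizes, class names, and the 224x224 input resolution are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class RegistrationCNN(nn.Module):
    """Siamese-style convolutional encoder: registers two consecutive
    endoscopic frames and outputs a feature vector describing their
    relative displacement/deformation (illustrative layout)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64 * 2, feat_dim)

    def forward(self, frame_a, frame_b):
        fa = self.encoder(frame_a).flatten(1)
        fb = self.encoder(frame_b).flatten(1)
        return torch.relu(self.fc(torch.cat([fa, fb], dim=1)))


class MeasurementMLP(nn.Module):
    """Multilayer feed-forward network mapping registration features to a
    scalar measurement (e.g. distance travelled between frames)."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats):
        return self.net(feats)


if __name__ == "__main__":
    # Minimal usage example on random 224x224 RGB frames.
    registration = RegistrationCNN()
    regressor = MeasurementMLP()
    frame_a = torch.rand(1, 3, 224, 224)
    frame_b = torch.rand(1, 3, 224, 224)
    feats = registration(frame_a, frame_b)
    measurement = regressor(feats)
    print(measurement.shape)  # torch.Size([1, 1])
```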
Every day, visually challenged people (VCP) face mobility restrictions and accessibility limitations. A short walk to a nearby destination, which other individuals take for granted, becomes a challenge. To tackle this problem, we propose a novel visual perception system for outdoor navigation that can evolve into an everyday visual aid for VCP. The proposed methodology is integrated in a wearable visual perception system (VPS). The approach efficiently combines deep learning object recognition models with an obstacle detection methodology based on human eye fixation prediction using Generative Adversarial Networks. Uncertainty-aware modeling of obstacle risk assessment and spatial localization, following a fuzzy logic approach, is employed for robust obstacle detection. This combination translates the position and type of detected obstacles into descriptive linguistic expressions, allowing users to easily understand where obstacles are located in the environment and to avoid them. The performance and capabilities of the proposed method are investigated in the context of safe navigation of VCP in outdoor environments of cultural interest, through obstacle recognition and detection. Additionally, the proposed system is compared with relevant state-of-the-art systems for the safe navigation of VCP, with a focus on design and user-requirement satisfaction.
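To illustrate how a fuzzy-logic layer can translate a detected obstacle's position and apparent size into a descriptive linguistic expression, here is a minimal Python sketch. The membership functions, thresholds, and phrasing are illustrative assumptions and do not reproduce the system described in the abstract.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def describe_obstacle(label, x_center_norm, box_area_norm):
    """Map a detected obstacle (class label, normalised horizontal centre in
    [0, 1], normalised bounding-box area in [0, 1]) to a linguistic message.
    The fuzzy sets below are illustrative only."""
    # Horizontal position: left / centre / right.
    position_sets = {
        "on your left": triangular(x_center_norm, -0.5, 0.0, 0.5),
        "ahead of you": triangular(x_center_norm, 0.2, 0.5, 0.8),
        "on your right": triangular(x_center_norm, 0.5, 1.0, 1.5),
    }
    # Proximity proxy from apparent size: far / close / very close.
    proximity_sets = {
        "far": triangular(box_area_norm, -0.2, 0.0, 0.25),
        "close": triangular(box_area_norm, 0.1, 0.3, 0.5),
        "very close": triangular(box_area_norm, 0.35, 1.0, 1.6),
    }
    position = max(position_sets, key=position_sets.get)
    proximity = max(proximity_sets, key=proximity_sets.get)
    return f"A {label} is {position}, {proximity}."


# Example: a bench detected left of centre, occupying 40% of the frame.
print(describe_obstacle("bench", 0.2, 0.40))  # "A bench is on your left, close."
```

In practice such linguistic output would be voiced to the user; here the defuzzification is simply the set with the highest membership, which keeps the example short.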