In the last decade, researchers and medical device companies have made major advances towards transforming passive capsule endoscopes into active medical robots. One of the major challenges is to endow capsule robots with accurate perception of the environment inside the human body, which would provide the information needed to enable improved medical procedures. We extend the success of deep learning approaches from various research fields to the problem of uncalibrated, asynchronous, and asymmetric sensor fusion for endoscopic capsule robots. Experiments performed on real pig stomach datasets show that our method achieves sub-millimeter precision for both translational and rotational movements and offers several advantages over traditional sensor fusion techniques.
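As a hedged illustration of what such a fusion model could look like, the sketch below interleaves two asynchronous, asymmetric sensor streams (a visual feature vector and a magnetic sensor reading, both placeholder choices) into one time-ordered sequence for an LSTM that regresses 6-DoF pose increments. The modalities, dimensions, and network layout are assumptions made for illustration, not the architecture reported in the abstract.

# Hedged sketch: one way to fuse asynchronous, asymmetric sensor streams
# with an LSTM, in the spirit of the abstract above. All modalities and
# dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class AsyncFusionLSTM(nn.Module):
    def __init__(self, vis_dim=128, mag_dim=6, hidden=64):
        super().__init__()
        # Separate encoders handle the asymmetric modalities.
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, hidden), nn.ReLU())
        self.mag_enc = nn.Sequential(nn.Linear(mag_dim, hidden), nn.ReLU())
        # The +1 input channel carries the time gap, so the LSTM can cope
        # with asynchronous arrival instead of assuming a fixed sample rate.
        self.lstm = nn.LSTM(hidden + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)  # 3 translational + 3 rotational increments

    def forward(self, events):
        # events: time-ordered list of (dt_seconds, modality, measurement)
        tokens = []
        for dt, modality, x in events:
            enc = self.vis_enc if modality == "vis" else self.mag_enc
            tokens.append(torch.cat([enc(x), torch.tensor([dt])]))
        seq = torch.stack(tokens).unsqueeze(0)  # (1, T, hidden+1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])            # pose-increment estimate

# Toy usage with dummy measurements:
net = AsyncFusionLSTM()
stream = [(0.00, "vis", torch.randn(128)),
          (0.03, "mag", torch.randn(6)),
          (0.05, "vis", torch.randn(128))]
print(net(stream))  # tensor of shape (1, 6)

Feeding the inter-arrival time alongside each encoded measurement is one common way to let a recurrent model absorb asynchronous sampling without resampling the streams to a common clock.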
Deep learning (DL) can fail when there are mismatches between the training and testing data distributions. Due to its operator-dependent nature, acquisition-related data mismatches, caused by different scanner settings, can occur in ultrasound imaging. As a result, it is crucial to mitigate the effects of these mismatches to enable wider clinical adoption of DL-powered ultrasound imaging and tissue characterization. To address this challenge, we propose an inexpensive and generalizable method that involves collecting a large training set at a single setting and a small calibration set at each scanner setting. The calibration set is then used to correct the data mismatches from a signals-and-systems perspective. We tested the proposed solution by classifying two phantoms using an L9-4 array connected to a SonixOne scanner. To investigate the generalizability of the proposed solution, we calibrated three types of data mismatches: pulse frequency mismatch, focus mismatch, and output power mismatch. Two well-known convolutional neural networks (CNNs), i.e., ResNet-50 and DenseNet-201, were trained using the ultrasound radiofrequency (RF) data. To calibrate the setting mismatches, we calculated the setting transfer functions. The CNNs trained without calibration resulted in mean classification accuracies of around 52%, 84%, and 85% for the pulse frequency, focus, and output power mismatches, respectively. By using the setting transfer functions, which matched the training and testing domains, we obtained mean accuracies of 96%, 96%, and 98%, respectively. Therefore, incorporating setting transfer functions between scanner settings can provide an economical means of generalizing a DL model for specific classification tasks where scanner settings are not fixed by the operator.
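A minimal sketch of the calibration idea follows, assuming the setting transfer function is estimated as a ratio of average magnitude spectra between the two calibration sets and applied as a frequency-domain filter to the test RF lines. The published method's exact estimator may differ, and the array shapes are placeholders.

# Hedged sketch: estimating a "setting transfer function" from small
# calibration sets and using it to map test RF data into the training
# domain. Assumes a simple magnitude-only frequency-domain filter.
import numpy as np

def average_spectrum(rf, nfft=1024):
    # rf: (n_lines, n_samples) RF calibration data from one scanner setting.
    return np.abs(np.fft.rfft(rf, n=nfft, axis=-1)).mean(axis=0)

def setting_transfer_function(cal_train, cal_test, nfft=1024, eps=1e-8):
    # Ratio of average magnitude spectra maps the test setting onto the
    # training setting (the domain the CNN was trained in).
    return average_spectrum(cal_train, nfft) / (average_spectrum(cal_test, nfft) + eps)

def calibrate(rf_test, H, nfft=1024):
    # Filter each RF line of the test data with the transfer function H.
    spectrum = np.fft.rfft(rf_test, n=nfft, axis=-1)
    matched = np.fft.irfft(spectrum * H, n=nfft, axis=-1)
    return matched[..., :rf_test.shape[-1]]

# Toy usage with random stand-ins for RF data:
cal_a = np.random.randn(64, 2048)   # calibration lines, training setting
cal_b = np.random.randn(64, 2048)   # calibration lines, test setting
H = setting_transfer_function(cal_a, cal_b, nfft=2048)
fixed = calibrate(np.random.randn(10, 2048), H, nfft=2048)
print(fixed.shape)  # (10, 2048)

Because only a small calibration set is needed per scanner setting, the filter can be re-estimated cheaply whenever an operator changes the acquisition parameters.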
Deep learning (DL) powered biomedical ultrasound imaging is an emerging research field in which researchers adapt the image analysis capabilities of DL algorithms to biomedical ultrasound imaging settings. A major roadblock to wider adoption of DL-powered biomedical ultrasound imaging is that acquiring the large and diverse datasets required for successful DL implementation is expensive in clinical settings. Hence, there is a constant need for data-efficient DL techniques to turn DL-powered biomedical ultrasound imaging into reality. In this work, we develop a data-efficient deep learning training strategy for classifying tissues based on ultrasonic backscattered RF data, i.e., quantitative ultrasound (QUS), which we named Zone Training. In Zone Training, we propose to divide the complete field of view of an ultrasound image into multiple zones associated with different regions of a diffraction pattern and then train separate DL networks for each zone. The main advantage of Zone Training is that it requires less training data to achieve high accuracy. In this work, three different tissue-mimicking phantoms were classified by a DL network. The results demonstrated that Zone Training can require a factor of 2-3 less training data in the low-data regime to achieve classification accuracies similar to those of a conventional training strategy.
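The following sketch illustrates the Zone Training idea under stated assumptions: the field of view is chunked into depth zones and an independent small CNN is fitted per zone. The zone boundaries, input dimensions, and network are illustrative choices, not the paper's exact configuration.

# Hedged sketch of Zone Training: split the field of view into depth zones
# and train one small classifier per zone, so each network only has to
# learn the statistics of a single region of the diffraction pattern.
import torch
import torch.nn as nn

def split_into_zones(rf_image, n_zones=3):
    # rf_image: (channels, depth_samples, n_lines); zones are bands in
    # depth, e.g., pre-focal, focal, and post-focal regions.
    return torch.chunk(rf_image, n_zones, dim=1)

def make_zone_cnn(n_classes=3):
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

# One independent network per zone, trained only on data from that zone.
n_zones = 3
zone_nets = [make_zone_cnn() for _ in range(n_zones)]

rf = torch.randn(1, 512, 128)                  # toy RF frame
for net, zone in zip(zone_nets, split_into_zones(rf, n_zones)):
    logits = net(zone.unsqueeze(0))            # add batch dim -> (1, n_classes)
    print(logits.shape)

At test time, a patch would be routed to the network whose zone matches its depth, so each classifier only ever sees the diffraction statistics it was trained on.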