U-net is an image segmentation technique developed primarily for medical image analysis that can produce precise segmentations from relatively scarce training data. These traits give U-net high utility within the medical imaging community and have led to its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across nearly all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been applied to other tasks. Given that U-net's potential is still growing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools complement U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.
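For readers unfamiliar with the architecture, the following is a minimal U-net-style encoder-decoder sketch in PyTorch, illustrating the contracting path, expanding path, and skip connections. The depth, channel counts, and input size are illustrative assumptions, not the configuration of any specific variant covered in the review.

```python
# Minimal U-net-style sketch (assumed toy configuration, not a reviewed variant).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic block on both paths.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)    # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                # per-pixel class logits

# Example: segment a single-channel 128x128 image into 2 classes.
logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```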
This paper proposes a data fusion technique aimed at achieving highly accurate localization in a wireless sensor network at low computational cost. This is accomplished by fusing multiple types of sensor measurement data, including received signal strength and angle of arrival. The proposed method incorporates a powerful data fusion technique, Dempster-Shafer Evidence Theory, which has not previously been used for low-cost localization of a stationary node. Many useful functions of this theory, including sampling, aggregation, and plausibility, are integrated into the localization method. From there, the algorithm determines whether a given set of measurements belongs to a particular region. Motivated by the flexible nature of Dempster-Shafer Theory, a multitude of network setups and combinations of available measurement features are tested to verify the performance of the proposed method. Performance is evaluated using numerical results obtained from extensive simulations. When compared with existing approaches in similarly constructed scenarios, the proposed localization technique achieves up to 98% accuracy in less than a tenth of the run-time required by presently established algorithms.
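As a concrete illustration of the fusion step, the sketch below applies Dempster's rule of combination to two hypothetical mass assignments over candidate regions, one notionally derived from RSS and one from AoA, and then computes belief and plausibility for a hypothesis. The frame of discernment, mass values, and region names are assumptions for illustration only and do not reproduce the paper's full localization algorithm.

```python
# Sketch of Dempster's rule of combination over hypothetical evidence sources.
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions defined over frozensets."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to incompatible hypotheses
    # Normalize by the non-conflicting mass.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hypothesis):
    # Total mass of all subsets of the hypothesis.
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    # Total mass of all sets that intersect the hypothesis.
    return sum(v for s, v in m.items() if s & hypothesis)

# Frame of discernment: three hypothetical candidate regions for the node.
theta = frozenset({"R1", "R2", "R3"})
# Hypothetical mass assignments derived from RSS and AoA measurements.
m_rss = {frozenset({"R1"}): 0.5, frozenset({"R1", "R2"}): 0.3, theta: 0.2}
m_aoa = {frozenset({"R1"}): 0.4, frozenset({"R2"}): 0.2, theta: 0.4}

m_fused = combine(m_rss, m_aoa)
h = frozenset({"R1"})
print(f"belief(R1)={belief(m_fused, h):.3f}, "
      f"plausibility(R1)={plausibility(m_fused, h):.3f}")
```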
Deep learning algorithms have seen rapid growth of interest across many fields over the last decade, with medical hyperspectral imaging being a particularly promising domain. To the best of our knowledge, there is no review paper that discusses the implementation of deep learning for medical hyperspectral imaging; this work aims to fill that gap by examining publications that currently apply deep learning to the analysis of medical hyperspectral imagery. This paper discusses deep learning concepts that are relevant and applicable to medical hyperspectral imaging analysis, several of which have been implemented since the boom in deep learning. It then reviews the use of deep learning for classification, segmentation, and detection in medical hyperspectral imaging. Lastly, we discuss the current and future challenges pertaining to this discipline and possible efforts to overcome them.

INDEX TERMS: Deep learning, neural networks, machine learning, medical image analysis, medical hyperspectral imaging.
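As one concrete example of the kind of analysis reviewed, the sketch below classifies individual pixel spectra with a small 1D convolutional network in PyTorch. The band count, class count, and network shape are illustrative assumptions rather than a model taken from any cited study.

```python
# Sketch of per-pixel spectral classification with a 1D CNN (assumed toy setup).
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    def __init__(self, n_bands=64, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # filter along the spectral axis
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # pool over remaining bands
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_bands) -- one spectrum per pixel
        return self.classifier(self.features(x).squeeze(-1))

# Example: classify a batch of 8 pixel spectra with 64 bands each.
spectra = torch.randn(8, 1, 64)
logits = SpectralCNN()(spectra)
print(logits.shape)  # torch.Size([8, 3])
```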