The aim of freehand 3D ultrasound imaging is to construct a 3D volume of data from conventional 2D ultrasound images. In freehand 3D ultrasound, the probe is moved by hand over the area of interest in an arbitrary manner, and its motion is measured by attaching some kind of position sensor to the probe. Since attaching an external tracking sensor to the probe introduces practical difficulties, alternative ways of acquiring freehand 3D ultrasound without a position sensor have been investigated. In sensorless estimation, any in-plane motion between images can be determined reliably using standard image registration techniques; the main challenge is estimating out-of-plane motion. The most prominent approach to this challenge so far is the speckle decorrelation method, which is based on the idea that the correlation between images of a specific model of speckle, known as Fully Developed Speckle (FDS), can be used to estimate the out-of-plane displacement between them. However, the method requires the B-scans to consist mostly of regions with an FDS pattern, and such a pattern is rare in scans of real tissue. One successful way around this problem is to quantify the amount of coherency at each point in the B-scans by computing the axial and lateral correlations and comparing them with the FDS-calibrated ones; the elevational decorrelation curves are then adapted according to the amount of non-FDS content in the image. The novelty of this thesis is, firstly, adapting the method to work on B-mode ultrasound images rather than RF ultrasound data, because RF data is not always available in clinical environments. Secondly, the experimental setup is truly freehand: the motion of the probe is not constrained in any direction during scanning, so in-plane motion compensation is required. Thirdly, the method is tested on in vivo human data as well as chicken and beef test data sets. The method is shown to perform remarkably well (accuracy of around 5%) for elevational distance estimation on both the test phantoms and the real human tissue data.

Acknowledgements: I would like to use this opportunity to thank my supervisor, Dr. Chris Joslin, Associate Professor in the School of Information Technology, Carleton University, for his friendly guidance, inspiration and assistance.
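To make the speckle decorrelation idea in the abstract above concrete, the following is a minimal sketch of how a correlation coefficient between two B-mode frames could be mapped to an elevational distance, assuming a Gaussian FDS decorrelation model. The Gaussian form, the calibration parameter `sigma`, the patch size, and all function names are illustrative assumptions; the per-region adaptation of the decorrelation curves for non-FDS content described in the thesis is not reproduced here.

```python
import numpy as np

def patch_correlation(patch_a, patch_b):
    """Normalized (Pearson) correlation between two co-located image patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b) / denom if denom > 0 else 0.0

def elevational_distance(rho, sigma):
    """Invert an assumed Gaussian FDS decorrelation model rho(d) = exp(-d^2 / (2 sigma^2)).

    sigma is a calibration constant obtained from scans with known elevational steps.
    """
    rho = np.clip(rho, 1e-6, 1.0)
    return sigma * np.sqrt(-2.0 * np.log(rho))

def estimate_frame_spacing(frame_a, frame_b, sigma, patch=32):
    """Median of per-patch elevational distance estimates over a regular grid."""
    h, w = frame_a.shape
    estimates = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            rho = patch_correlation(frame_a[y:y + patch, x:x + patch],
                                    frame_b[y:y + patch, x:x + patch])
            if 0.0 < rho < 1.0:
                estimates.append(elevational_distance(rho, sigma))
    return float(np.median(estimates)) if estimates else 0.0
```

In practice, sigma would come from a prior calibration against known elevational displacements, and the per-patch estimates could be weighted by how FDS-like each patch is, in the spirit of the coherency-based adaptation described above.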
Among the different categories of natural images, face images are very important because of the role they play in human social interactions. Face images are also considered very challenging subjects in computer vision due to the uniqueness of the information contained in individual face images and the wide range of important information that can be perceived from a single face image. It is recognised that, despite all the recent advances in artificial intelligence using deep neural networks, computers still struggle to achieve a rich and flexible understanding of face images comparable to human face perception abilities. This thesis aims to find fully unsupervised ways of learning a transformation from the pixel space of face images to a representation space in which the underlying facial concepts are captured and disentangled. The objective of this thesis is to move from a representation of face images in which all facial concepts are captured in a single large cluster towards a representation in which facial concepts are separated into distinct groups. We propose that it is possible to use cues from the real 3D world to guide the representation learner towards disentangling facial concepts. We conduct two studies in order to test this hypothesis. First, we propose a deep autoencoder model for extracting facial concepts based on their scales. We introduce an adaptive resolution reconstruction loss inspired by the fact that different categories of concepts are encoded in (and can be captured from) different resolutions of face images. With
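As a rough illustration of the resolution-based idea described in this abstract, the sketch below computes a reconstruction loss at several resolutions of a face image and its reconstruction. The fixed pyramid factors, average-pooling downsampling, weights, and function names are assumptions made here for illustration only; the adaptive weighting scheme the thesis actually introduces is not specified in the abstract and is not reproduced.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a (H, W) image by an integer factor (H and W divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multi_resolution_loss(x, x_hat, factors=(1, 2, 4, 8), weights=None):
    """Weighted sum of mean-squared reconstruction errors at several resolutions.

    Coarse levels emphasise large-scale facial structure; fine levels emphasise detail.
    """
    if weights is None:
        weights = [1.0] * len(factors)
    loss = 0.0
    for f, w in zip(factors, weights):
        loss += w * float(np.mean((downsample(x, f) - downsample(x_hat, f)) ** 2))
    return loss
```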
Burn care management includes accurately assessing the severity of burns, in particular distinguishing superficial partial thickness (SPT) burns from deep partial thickness (DPT) burns, in order to provide definitive downstream treatment. Moreover, the healing of the wound in the sub-acute care setting requires continuous tracking to avoid complications. Artificial intelligence (AI) and computer vision (CV) provide a unique opportunity to build low-cost and accessible tools to classify burn severity and track changes in wound parameters, both in the clinic by physicians and nurses, and asynchronously in the remote setting by patients themselves. Wound assessments can be achieved with AI-CV using the principles of Image-Guided Therapy (IGT) applied to high-quality 2D colour images. Wound parameters can include the wound's 2D spatial dimensions and the characterization of wound colour changes, which reflect physiological changes such as the presentation of eschar/necrotic tissue, purulence, granulation tissue and scabbing. Here we present the development of the AI-CV-based Skin Abnormality Tracking Algorithm (SATA) pipeline. Additionally, we provide proof-of-concept results on a severe localized burn tracked over a 6-week period in clinic and an additional 2-week period of home monitoring.
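As a hedged illustration of the wound parameters mentioned above, the sketch below computes a wound's 2D extent and mean colour from an RGB image and a binary wound mask. The mask is assumed to come from an upstream segmentation step (such as the one in the SATA pipeline), and the function name, scale parameter, and output fields are illustrative assumptions rather than the pipeline's actual interface.

```python
import numpy as np

def wound_parameters(image_rgb, mask, mm_per_pixel=None):
    """Compute simple wound parameters from an RGB image and a binary wound mask.

    image_rgb: (H, W, 3) array; mask: (H, W) boolean array, True inside the wound.
    mm_per_pixel: optional physical scale for converting pixel area to mm^2.
    """
    area_px = int(mask.sum())
    params = {
        "area_px": area_px,
        # Mean RGB inside the wound as a crude proxy for colour characterization.
        "mean_rgb": image_rgb[mask].mean(axis=0).tolist() if area_px else None,
    }
    if mm_per_pixel is not None:
        params["area_mm2"] = area_px * mm_per_pixel ** 2
    return params
```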