Ultrasound image-guided needle insertion is the method of choice for a wide variety of medical diagnostic and therapeutic procedures. When flexible needles are inserted into soft tissue, they generally follow a curved path. Segmenting the trajectory of the needle in ultrasound images facilitates guiding it within the tissue. In this paper, a novel algorithm for curved needle segmentation in three-dimensional (3D) ultrasound images is presented. The algorithm is based on the projection of a filtered 3D image onto a two-dimensional (2D) image. Detection of the needle in the resulting 2D image determines a surface on which the needle is located, and the needle is then segmented on that surface. The proposed technique detects needles without any prior assumption about the needle shape or any a priori knowledge of the needle insertion axis.
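As a rough illustration of the projection-and-detection idea, the Python sketch below collapses a filtered 3D volume to a 2D image and fits a low-order curve to its brightest pixels. The maximum intensity projection, the percentile threshold, and the polynomial needle model are all assumptions made for illustration; the abstract does not specify these choices.

```python
import numpy as np

def project_and_fit_needle(volume, axis=1, intensity_percentile=99.5, poly_degree=2):
    """Illustrative sketch: project a filtered 3D ultrasound volume onto a 2D
    image and fit a curve to the bright (needle-like) pixels.

    The projection direction, threshold, and polynomial model are assumptions
    for illustration, not the paper's exact algorithm."""
    # 2D projection of the 3D volume (here: maximum intensity projection).
    projection = volume.max(axis=axis)

    # Keep only the brightest pixels, which a needle-enhancing filter
    # is expected to emphasize.
    threshold = np.percentile(projection, intensity_percentile)
    rows, cols = np.nonzero(projection >= threshold)
    if rows.size < poly_degree + 1:
        return projection, None

    # Fit a low-order polynomial row = f(col) as a simple curved-needle model.
    coeffs = np.polyfit(cols, rows, deg=poly_degree)
    return projection, np.poly1d(coeffs)

# Example on synthetic data: a bright curved streak inside a noisy volume.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, size=(64, 64, 128))
z = np.arange(128)
x = (20 + 0.2 * z + 0.001 * z**2).astype(int)
vol[np.clip(x, 0, 63), 32, z] = 1.0
proj, needle_curve = project_and_fit_needle(vol, axis=1)
print(needle_curve)  # polynomial describing the projected needle path
```

In a full pipeline, the detected 2D curve would define the surface on which the 3D needle segmentation is carried out, as described in the abstract.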
The goal of this study was to develop an automated and objective method to separate swallowing sounds from breath sounds. Swallowing sound detection can be utilized as part of a system for assessing the swallowing mechanism and diagnosing swallowing dysfunction (dysphagia) by acoustical means. In this study, an algorithm based on multilayer feed-forward neural networks is proposed for decomposing tracheal sound into swallowing and respiratory segments. Among the many features examined, the root-mean-square value of the signal, its average power over 150-450 Hz, and the waveform fractal dimension were selected as inputs to the neural network. Findings from previous studies on the temporal and durational patterns of swallowing and respiration were used in a smart algorithm for further identification of the swallow and breath segments. The proposed method was applied to 18 tracheal sound recordings of 7 healthy subjects (ages 13-30 years, 4 males). The results were validated manually by visual inspection of the airflow measurement and the sound spectrogram, and by auditory means. The algorithm detected 91.7% of swallows correctly; the average rates of missed swallows and false detections were 8.3% and 9.5%, respectively. With additional preprocessing and post-processing, the proposed method may be used for automated extraction of swallowing sounds from breath sounds in healthy and dysphagic individuals.
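The Python sketch below shows one way the three selected features could be computed for a single tracheal sound segment. The Welch spectral estimate and the Katz fractal-dimension estimator are illustrative assumptions, since the abstract does not name the exact estimators used.

```python
import numpy as np
from scipy.signal import welch

def segment_features(segment, fs):
    """Compute the three features named in the abstract for one sound segment:
    RMS, average power in 150-450 Hz, and waveform fractal dimension.
    The Welch PSD settings and the Katz estimator are illustrative choices."""
    # Root-mean-square of the segment.
    rms = np.sqrt(np.mean(segment**2))

    # Average power in the 150-450 Hz band from a Welch power spectral density.
    freqs, psd = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    band = (freqs >= 150) & (freqs <= 450)
    band_power = psd[band].mean() if band.any() else 0.0

    # Katz waveform fractal dimension as one possible estimator.
    diffs = np.abs(np.diff(segment))
    L = diffs.sum()                         # total curve length
    d = np.abs(segment - segment[0]).max()  # maximum excursion from the start
    n = len(segment) - 1
    katz_fd = (np.log10(n) / (np.log10(n) + np.log10(d / L))
               if L > 0 and d > 0 else 1.0)

    return np.array([rms, band_power, katz_fd])
```

Features computed this way from labeled training segments could then be fed to a small feed-forward classifier (for example, scikit-learn's MLPClassifier) to label each segment as swallow or breath, in the spirit of the network described in the abstract.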
This paper presents an automated and objective method for extracting swallowing sounds from a recording of tracheal breath and swallowing sounds. The proposed method takes advantage of the fact that swallowing sounds are more non-stationary than breath sounds and have large components at many wavelet scales, whereas the wavelet transform coefficients of breath sounds at higher wavelet scales are small. Therefore, a wavelet-transform-based filter was utilized, in which a multiresolution decomposition-reconstruction process filters the signal; swallowing sounds are then detected in the filtered signal. The proposed method was applied to the tracheal sound recordings of 15 healthy and 11 dysphagic subjects. The results were validated manually by visual inspection of the airflow measurement and the sound spectrogram, and by auditory means. Experimental results show that the proposed method is more accurate, efficient, and objective than previously proposed methods. Swallowing sound detection may be employed in a system for automated swallowing assessment and diagnosis of swallowing disorders (dysphagia) by acoustical means.
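A minimal Python sketch of a multiresolution decomposition-reconstruction filter of this kind, using PyWavelets, is given below. The mother wavelet, the decomposition depth, and the choice of which coefficient bands to retain are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
import pywt

def wavelet_highlight_swallows(signal, wavelet="db4", level=5, keep_bands=(0, 1, 2)):
    """Illustrative multiresolution decomposition-reconstruction filter.

    Decompose the tracheal sound, keep only the coarser coefficient bands
    (where swallowing sounds are assumed to have large components while
    breath-sound coefficients are small), and reconstruct a signal in which
    swallowing sounds stand out. Wavelet, depth, and retained bands are
    illustrative assumptions."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs = [cA_level, cD_level, ..., cD_1]; index 0 is the coarsest band.
    filtered = [np.zeros_like(c) for c in coeffs]
    for i in keep_bands:
        filtered[i] = coeffs[i]
    reconstructed = pywt.waverec(filtered, wavelet)[:len(signal)]
    # A simple envelope threshold on the reconstruction could then mark
    # candidate swallowing segments for further analysis.
    envelope = np.abs(reconstructed)
    return reconstructed, envelope
```

In practice, the retained bands and the detection threshold would be tuned on recordings with known swallow locations, for example against the airflow and spectrogram references used for validation in the study.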