An ultra-stretchable and highly sensitive strain sensor was reported, which can monitor pulse, electrocardiogram, breathing, finger motions, and emotional changes.
Assessing the location and extent of lesions caused by chronic stroke is critical for medical diagnosis, surgical planning, and prognosis. In recent years, with the rapid development of 2D and 3D convolutional neural networks (CNN), the encoder-decoder structure has shown great potential in the field of medical image segmentation. However, 2D CNNs ignore the 3D information of medical images, while 3D CNNs suffer from high computational resource demands. This paper proposes a new architecture called dimension-fusion-UNet (D-UNet), which innovatively combines 2D and 3D convolutions in the encoding stage. The proposed architecture achieves better segmentation performance than 2D networks, while requiring significantly less computation time than 3D networks. Furthermore, to alleviate the data imbalance between positive and negative samples during network training, we propose a new loss function called Enhance Mixing Loss (EML), which adds a weighted focal coefficient and combines two traditional loss functions. The proposed method has been tested on the ATLAS dataset and compared to three state-of-the-art methods. The results demonstrate that the proposed method achieves the best segmentation performance, with DSC = 0.5349±0.2763 and precision = 0.6331±0.295.
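The abstract only states that EML combines a weighted focal coefficient with two traditional loss functions; the exact formula is not given. A minimal sketch of such a mixed loss, assuming (hypothetically) that the two traditional components are binary cross-entropy and soft Dice, with `alpha` and `gamma` as illustrative hyper-parameters:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on predicted probabilities in [0, 1]."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_bce(pred, target, gamma=2.0, eps=1e-6):
    """Binary cross-entropy with a focal weight (1 - p_t)^gamma that
    down-weights easy, well-classified pixels (the dominant background)."""
    p = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, p, 1.0 - p)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def mixed_loss(pred, target, alpha=0.5, gamma=2.0):
    """Weighted sum of a focal cross-entropy term and a Dice term.
    alpha and gamma are hypothetical hyper-parameters, not values
    taken from the paper."""
    return alpha * focal_bce(pred, target, gamma) + (1.0 - alpha) * dice_loss(pred, target)
```

The focal weight addresses the stated class imbalance (few lesion pixels among many background pixels), while the Dice term optimizes the overlap metric the paper reports.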
This paper proposes a segmentation-free radiomics method to classify malignant and benign breast tumors from shear-wave elastography (SWE) data. The method integrates the advantages of SWE in providing important elastic and morphological information and of convolutional neural networks (CNN) in automatic feature extraction and accurate classification. Compared to traditional methods, the proposed method directly extracts features from the dataset without the prerequisite of segmentation or manual operation. This preserves the peri-tumor information, which is lost by segmentation-based methods. With the proposed model trained on 540 images (318 of malignant and 222 of benign breast tumors, respectively), an accuracy of 95.8%, a sensitivity of 96.2%, and a specificity of 95.7% were obtained on the final test. The superior performance compared to existing state-of-the-art methods and its automatic nature both demonstrate that the proposed method has great potential for clinical computer-aided diagnosis of breast cancer.
Ultrasound imaging has been frequently used for the study of muscle contraction, including measurements of pennation angles and fascicle orientations. However, these measurements were traditionally conducted by manually drawing lines on the ultrasound images. In this study, we proposed a modified Hough transform (HT), aiming at automatically estimating orientations of straight-line-shaped patterns, such as muscle fibers and the muscle-bone interface in ultrasound images. The new method first located the global maximum in the HT accumulator matrix, which corresponded to the most dominant collinear feature points globally, using the standard HT; then the pixels close to the detected line were removed from the edge map, the HT accumulator matrix was calculated again, i.e. revoting, and a new line was detected; the iteration was repeated until the predefined termination conditions were satisfied. The performance of the algorithm was tested using computer-generated images with
An accurate segmentation and quantification of the superficial foveal avascular zone (sFAZ) is important to facilitate the diagnosis and treatment of many retinal diseases, such as diabetic retinopathy and retinal vein occlusion. We proposed a method based on deep learning for the automatic segmentation and quantification of the sFAZ in optical coherence tomography angiography (OCTA) images with robustness to brightness and contrast (B/C) variations. A dataset of 405 OCTA images from 45 participants was acquired with a Zeiss Cirrus HD-OCT 5000, and the ground truth (GT) was manually segmented subsequently. A deep learning network with an encoder-decoder architecture was created to classify each pixel into an sFAZ or non-sFAZ class. Subsequently, we applied largest-connected-region extraction and hole-filling to fine-tune the automatic segmentation results. A maximum mean Dice similarity coefficient (DSC) of 0.976 ± 0.011 was obtained when the automatic segmentation results were compared against the GT. The correlation coefficient between the area calculated from the automatic segmentation results and that calculated from the GT was 0.997. In all nine parameter groups with various brightness/contrast settings, all the DSCs of the proposed method were higher than 0.96. The proposed method achieved better performance in sFAZ segmentation and quantification compared to two previously reported methods. In conclusion, we proposed and successfully verified an automatic sFAZ segmentation and quantification method based on deep learning with robustness to B/C variations. For clinical applications, this is important progress toward automated segmentation and quantification applicable to clinical analysis.
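The two post-processing steps named above (largest-connected-region extraction and hole-filling) are standard morphological operations; a sketch using `scipy.ndimage`, with the evaluation metric (DSC) alongside it. This is an illustrative reimplementation, not the authors' code:

```python
import numpy as np
from scipy import ndimage

def refine_mask(mask):
    """Keep only the largest connected component of a binary mask,
    then fill its interior holes — the two refinement steps applied
    to the raw per-pixel network output."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(bool)
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                      # index 0 is background; ignore it
    refined = labeled == sizes.argmax()
    return ndimage.binary_fill_holes(refined)

def dice(a, b, eps=1e-6):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.sum(a & b) + eps) / (np.sum(a) + np.sum(b) + eps)
```

Keeping one connected region encodes the anatomical prior that the sFAZ is a single contiguous area, while hole-filling removes spurious interior misclassifications before the area is quantified.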