Background: Basal cell carcinoma (BCC) is the most common skin cancer and can be highly destructive in its advanced stages. Computer-aided techniques offer a feasible option for early BCC detection. However, existing automated BCC detection techniques rely heavily on handcrafted, high-level features. Such features are not only computationally complex to design but also capture only a limited aspect of lesion characteristics. This paper proposes an automated BCC detection technique that learns features directly from image data, eliminating the need for handcrafted feature design.
Methods: The proposed method comprises two parts. First, an unsupervised feature learning framework is designed to learn hidden characteristics of the data, including vascular patterns, directly from the images. This is achieved with a sparse autoencoder (SAE). After the unsupervised learning, each learned kernel weight of the SAE is treated as a filter; convolving each filter with the lesion image yields a feature map. Feature maps are condensed to reduce dimensionality and are then integrated with patient profile information. The combined features are fed into a softmax classifier for BCC classification.
Results: On a set of 1199 BCC images, the proposed framework achieved an area under the curve of 91.1%, and visualization of the learned features confirmed their meaningful clinical interpretation.
Conclusion: The proposed framework provides a fast, non-invasive BCC detection tool that incorporates both dermoscopic lesion features and clinical patient information, without the need for complex handcrafted feature extraction.
KEYWORDS: automated basal cell carcinoma detection, blood vessels, dermoscopy, feature learning, sparse autoencoders
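The classification pipeline described in the Methods maps naturally onto a few array operations. The following is a minimal NumPy sketch of that flow, not the authors' implementation: the lesion patch, the SAE filter bank, the patient-profile vector, the pooling size, and the classifier weights are all illustrative placeholders (in the paper, the filters come from training the sparse autoencoder and the softmax weights from supervised training).

```python
import numpy as np

rng = np.random.default_rng(0)

def convolve2d_valid(image, kernel):
    """Naive 'valid'-mode 2-D convolution (cross-correlation, as is
    conventional in feature-learning pipelines)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def mean_pool(feature_map, pool=4):
    """Condense a feature map by averaging non-overlapping pool x pool blocks."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool
    fm = feature_map[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return fm.mean(axis=(1, 3))

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical inputs: a grayscale lesion patch, SAE-learned filters,
# and a patient-profile vector (e.g., age, sex, lesion-site encoding).
lesion = rng.random((64, 64))                 # stand-in dermoscopy patch
sae_filters = rng.standard_normal((8, 7, 7))  # 8 learned 7x7 SAE kernels
profile = np.array([0.55, 1.0, 0.0])          # illustrative patient features

# 1. Convolve each SAE filter with the lesion to obtain feature maps.
# 2. Condense (pool) each map and flatten.
image_features = np.concatenate(
    [mean_pool(convolve2d_valid(lesion, f)).ravel() for f in sae_filters]
)

# 3. Integrate pooled image features with the patient profile.
x = np.concatenate([image_features, profile])

# 4. Softmax classifier (weights would normally be trained; random here).
W = rng.standard_normal((2, x.size)) * 0.01   # 2 classes: BCC vs. non-BCC
b = np.zeros(2)
print("class probabilities:", softmax(W @ x + b))
```

The key design point the sketch illustrates is that the SAE is used only to learn the filter bank; once the filters exist, classification is a fixed feed-forward pass of convolution, pooling, concatenation with clinical data, and a linear softmax layer.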
Identification of the constituent components of each sign gesture can improve the performance of sign language recognition (SLR), especially for large-vocabulary SLR systems. Aiming to develop such a system using portable accelerometer (ACC) and surface electromyographic (sEMG) sensors, we propose a framework for automatic Chinese SLR at the component level. In the proposed framework, data segmentation, an important preprocessing step, divides a continuous sign language sentence into subword segments. Based on features extracted from the ACC and sEMG data, three basic components of sign subwords, namely hand shape, orientation, and movement, are modeled, and the corresponding component classifiers are learned. At the decision level, a sequence of subwords is recognized by fusing the likelihoods at the component level. Overall classification accuracies of 96.5% for a vocabulary of 120 signs and 86.7% for 200 sentences demonstrate the feasibility of interpreting sign components from ACC and sEMG data and clearly show the superior recognition performance of the proposed method compared with a previous subword-level SLR method. The proposed method seems promising for implementing large-vocabulary portable SLR systems.
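The decision-level fusion step can be illustrated concisely. Below is a minimal sketch assuming the three components are scored independently, so per-component log-likelihoods can be summed before picking the best subword; the component classifiers, vocabulary, and segment data are stand-ins (the paper's actual classifiers and segmentation procedure are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB = [f"subword_{k}" for k in range(120)]   # illustrative 120-sign vocabulary
COMPONENTS = ["hand_shape", "orientation", "movement"]

def component_log_likelihoods(segment, component):
    """Placeholder for a trained component classifier: returns one
    log-likelihood per vocabulary entry for this segment."""
    scores = rng.random(len(VOCAB))            # fake classifier output
    return np.log(scores / scores.sum())

def recognize_subword(segment):
    """Decision-level fusion: under an independence assumption, the
    per-component log-likelihoods are summed and the highest-scoring
    subword is selected."""
    fused = sum(component_log_likelihoods(segment, c) for c in COMPONENTS)
    return VOCAB[int(np.argmax(fused))]

# A continuous sentence is first segmented into subword segments
# (segmentation itself is not shown); each segment is then recognized.
segments = [rng.random((50, 8)) for _ in range(3)]  # fake ACC+sEMG windows
print([recognize_subword(s) for s in segments])
```

Fusing at the decision level, rather than concatenating raw ACC and sEMG features, lets each component classifier specialize on the modality and features most informative for it, which is what makes the component-level framework scale toward larger vocabularies.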
A considerable proportion of the apical canal space remained filled with Ca(OH)2 in C-shaped root canals after instrumentation and conventional needle irrigation. Although combining rotary instrumentation with sonically or ultrasonically agitated irrigation reduced the amount of residual Ca(OH)2 in the C-shaped root canals, the large amount remaining in the critical apical area is a concern. Alternative strategies should be considered for medicating the apical canal in C-shaped teeth.