Image-based computer-aided diagnosis (CAD) systems have been developed to assist doctors in diagnosing thyroid cancer from ultrasound thyroid images. However, the performance of these systems depends strongly on the choice of detection and classification methods. Although this topic has been studied previously, there is still room to improve the classification accuracy of existing methods. To address this issue, we propose an artificial intelligence-based method for enhancing the performance of the thyroid nodule classification system. Specifically, we extract image features from ultrasound thyroid images in two domains: the spatial domain, based on deep learning, and the frequency domain, based on the fast Fourier transform (FFT). Using the extracted features, we apply a cascade classifier scheme to classify input thyroid images as either benign (negative) or malignant (positive). Through extensive experiments on a public dataset, the thyroid digital image database (TDID), we show that our proposed method outperforms state-of-the-art methods, producing the best classification results to date for the thyroid nodule classification problem.
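As a rough illustration of the two-domain feature idea, the sketch below extracts frequency-domain features with NumPy's FFT and combines them with a deep feature vector (assumed to be computed by a separate CNN) in a two-stage cascade. The function names, the 32x32 frequency crop, and the cascade threshold are all illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of spatial + frequency features with a cascade classifier.
# All names and parameters here are illustrative assumptions.
import numpy as np

def extract_fft_features(gray_image, keep=32):
    """Frequency-domain features: log-magnitude of the 2-D FFT,
    cropped to the lowest keep x keep frequencies around the center."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    magnitude = np.log1p(np.abs(spectrum))
    cy, cx = magnitude.shape[0] // 2, magnitude.shape[1] // 2
    half = keep // 2
    return magnitude[cy - half:cy + half, cx - half:cx + half].ravel()

def cascade_classify(stage1, stage2, deep_feat, fft_feat, threshold=0.9):
    """Two-stage cascade: stage 1 rejects confidently benign cases from
    FFT features alone; uncertain cases go to stage 2, which sees the
    concatenated (deep + FFT) features. Returns 0 = benign, 1 = malignant."""
    p_benign = stage1.predict_proba(fft_feat.reshape(1, -1))[0, 0]
    if p_benign >= threshold:  # confidently benign, stop early
        return 0
    hybrid = np.concatenate([deep_feat, fft_feat]).reshape(1, -1)
    return int(stage2.predict(hybrid)[0])
```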
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security of face recognition systems. Most previously proposed PAD methods for face recognition have relied on handcrafted image features designed using expert knowledge, such as Gabor filters, the local binary pattern (LBP), the local ternary pattern (LTP), and the histogram of oriented gradients (HOG). As a result, the extracted features reflect only limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of the presentation attack face images. Deep learning, developed in the computer vision research community, has proven suitable for automatically learning feature extractors that can complement handcrafted features. To overcome the limitations of previous PAD methods, we propose a new PAD method that combines deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of feature, called hybrid features, which has stronger discriminative ability than either feature type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods, yielding the smallest error rates on the same image databases.
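A rough sketch of the hybrid-feature idea follows, using scikit-image's uniform LBP at several (points, radius) scales as a stand-in for MLBP and assuming the CNN embedding is computed elsewhere. The scale choices and SVM settings are assumptions, not the paper's configuration.

```python
# Illustrative sketch of hybrid (deep + texture) PAD features. The (P, R)
# scales and the SVM kernel are assumptions; deep_feat is assumed to come
# from a separately trained CNN.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def mlbp_features(gray_face, scales=((8, 1), (16, 2), (24, 3))):
    """Multi-level LBP: uniform LBP histograms at several (points, radius)
    scales, concatenated into one skin-texture descriptor."""
    hists = []
    for p, r in scales:
        codes = local_binary_pattern(gray_face, P=p, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

def hybrid_features(deep_feat, gray_face):
    """Hybrid descriptor: CNN embedding + MLBP skin-texture histogram."""
    return np.concatenate([deep_feat, mlbp_features(gray_face)])

# Final decision: an SVM over hybrid features (0 = attack, 1 = real).
svm = SVC(kernel="rbf")
# svm.fit(X_train_hybrid, y_train); svm.predict(hybrid_features(...)[None])
```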
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, and entertainment. To obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors, such as camera motion, optical blurring, facial expression, and gender. Motion blurring typically appears in face images because of movement of the camera sensor and/or movement of the face during image acquisition. Consequently, facial features in captured images can be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed, and a solution is proposed to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method enhances age estimation performance compared with systems that do not employ it.
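The abstract does not detail the paper's specific solution; as a purely hypothetical illustration of blur-aware preprocessing, the sketch below gates face images through a restoration step using the common variance-of-Laplacian sharpness heuristic. The threshold and the `restore_fn` and `age_model` placeholders are assumptions, not the authors' method.

```python
# Hypothetical blur-aware preprocessing for age estimation. This is NOT the
# paper's method: the Laplacian-variance gate, the threshold, and the
# restore_fn / age_model placeholders are illustrative assumptions only.
import cv2

def is_motion_blurred(gray_face, threshold=100.0):
    """Low Laplacian variance means little high-frequency detail, a common
    heuristic for blur; the threshold is dataset-dependent."""
    return cv2.Laplacian(gray_face, cv2.CV_64F).var() < threshold

def estimate_age(face_bgr, age_model, restore_fn):
    """Restore blurred faces before age estimation so facial features
    are not distorted by motion blur."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    if is_motion_blurred(gray):
        face_bgr = restore_fn(face_bgr)  # assumed deblurring step
    return age_model.predict(face_bgr)
```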
In the fly optic lobe, ~800 highly stereotypical columnar microcircuits are arranged retinotopically to process visual information. Differences in cellular composition and synaptic connectivity within functionally specialized columns remain largely unknown. Here, we describe the cellular and synaptic architecture of medulla columns located downstream of photoreceptors in the dorsal rim area (DRA), where linearly polarized skylight is detected for guiding orientation responses. We show that only in DRA medulla columns do both R7 and R8 photoreceptors target the bona fide R7 target layer, where they form connections with previously uncharacterized, modality-specific Dm neurons: two morphologically distinct DRA-specific cell types (termed Dm-DRA1 and Dm-DRA2) stratify in separate sublayers and exclusively contact polarization-sensitive DRA inputs while avoiding overlap with color-sensitive Dm8 cells. Using the activity-dependent GRASP and trans-Tango techniques, we confirm that DRA R7 cells are synaptically connected to Dm-DRA1, whereas DRA R8 cells form synapses with Dm-DRA2. Finally, using live imaging of ingrowing pupal photoreceptor axons, we show that DRA R7 and R8 termini reach layer M6 sequentially, thus separating the establishment of the different synaptic connectivities in time. We propose that a duplication of R7/Dm circuitry in DRA ommatidia serves as an ideal adaptation for detecting linearly polarized skylight using orthogonal e-vector analyzers.