Tomato plants are affected by a wide variety of diseases, and timely, accurate diagnosis plays an important role in preserving crop quality. Recently, deep learning (DL), and specifically convolutional neural networks (CNNs), has achieved extraordinary results in many applications, including the classification of plant diseases. This work compares fine-tuned versions of the state-of-the-art architectures AlexNet, GoogleNet, Inception V3, Residual Network (ResNet) 18, and ResNet 50. The dataset used for the experiments comprises nine tomato disease classes and one healthy class from PlantVillage. The models were evaluated through a multiclass statistical analysis based on accuracy, precision, sensitivity, specificity, F-score, area under the curve (AUC), and the receiver operating characteristic (ROC) curve. The best results were obtained with GoogleNet, which achieved an AUC of 99.72% and a sensitivity of 99.12%. This success rate makes the GoogleNet model a useful tool for helping farmers identify the diseases mentioned and protect their tomato crops.
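As an illustrative sketch of the fine-tuning approach described above (not the authors' code), the following PyTorch/torchvision snippet loads an ImageNet-pretrained GoogleNet and replaces its classifier head for the ten PlantVillage tomato classes; the optimizer settings are assumptions, since the abstract does not report them.

```python
# Hedged sketch: fine-tuning a pretrained GoogLeNet for the 10 tomato
# classes (9 diseases + healthy). Hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # nine disease classes plus one healthy class

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a mini-batch of normalized leaf images."""
    model.train()
    optimizer.zero_grad()
    outputs = model(images)
    # In training mode GoogLeNet also returns auxiliary logits; for
    # simplicity only the main logits are used in the loss here.
    logits = outputs.logits if hasattr(outputs, "logits") else outputs
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```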
In order to better predict and follow treatment responses in cancer patients, there is growing interest in noninvasively characterizing tumor heterogeneity based on MR images possessing different contrast and quantitative information. This requires mechanisms for integrating such data and reducing its dimensionality to levels amenable to interpretation by human readers. Here we propose a two-step pipeline for integrating diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) perfusion MRI, which we demonstrate in the quantification of breast lesion heterogeneity. First, the images acquired with the two modalities are aligned using intermodal registration. Dissimilarity-based clustering is then performed, exploiting the information coming from both modalities; to this end, an ad hoc distance metric is developed and tested for tuning the relative weighting of the two modalities. The distributions of the diffusion parameter values in the subregions identified by the algorithm are extracted and compared through nonparametric testing for posterior evaluation of tissue heterogeneity. Results show that jointly exploiting the information brought by DCE and DWI yields consistent results that account for both perfusion and microstructural information, refining the segmentation beyond what either modality achieves separately and agreeing with the segmentation drawn manually by a radiologist with access to the same data.
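A minimal sketch of the kind of weighted two-modality dissimilarity the pipeline describes (the paper's actual metric is not specified here, so the linear combination, feature choices, and cluster count below are assumptions):

```python
# Hedged sketch: dissimilarity-based clustering of co-registered lesion
# voxels using a weighted combination of DCE and DWI feature distances.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import AgglomerativeClustering

def joint_distance(dce_feats, dwi_feats, w=0.5):
    """Pairwise distance D = w * D_DCE + (1 - w) * D_DWI.

    Both feature arrays are (n_voxels, n_features) and should be z-scored
    so the modalities contribute on comparable scales; w tunes the relative
    weighting of perfusion vs. diffusion information.
    """
    d_dce = cdist(dce_feats, dce_feats)  # Euclidean by default
    d_dwi = cdist(dwi_feats, dwi_feats)
    return w * d_dce + (1.0 - w) * d_dwi

# Toy example: partition 200 voxels into 3 subregions with the joint metric.
rng = np.random.default_rng(0)
dce = rng.standard_normal((200, 4))  # e.g. enhancement-curve features
dwi = rng.standard_normal((200, 2))  # e.g. diffusion parameter estimates
D = joint_distance(dce, dwi, w=0.6)

labels = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(D)  # older scikit-learn versions name this parameter `affinity`
```

Sweeping w between 0 (DWI only) and 1 (DCE only) is one way to test how the weighting affects the resulting subregions, mirroring the tuning step described in the abstract.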
Although sensory processing is pivotal to nearly every theory of emotion, the evaluation of visual input as ‘emotional’ (e.g. a smile as signalling happiness) has traditionally been assumed to take place in supramodal ‘limbic’ brain regions. Accordingly, subcortical structures of ancient evolutionary origin that receive direct input from the retina, such as the superior colliculus (SC), are traditionally conceptualized as passive relay centres. However, mounting evidence suggests that the SC is endowed with the necessary infrastructure and computational capabilities for the innate recognition and initial categorization of emotionally salient features from retinal information. Here, we built a neurobiologically inspired convolutional deep neural network (DNN) model that approximates physiological, anatomical and connectional properties of the retino-collicular circuit. This enabled us to characterize and isolate the initial computations and discriminations that the DNN model of the SC can perform on facial expressions, based solely on the information it directly receives from the virtual retina. Trained to discriminate facial expressions of basic emotions, our model matches human error patterns and achieves above-chance, yet suboptimal, classification accuracy analogous to that reported in patients with V1 damage, who rely on retino-collicular pathways for non-conscious vision of emotional attributes. When presented with gratings of different spatial frequencies and orientations never ‘seen’ before, the SC model exhibits spontaneous tuning to low spatial frequencies and reduced orientation discrimination, as expected from the prevalence of magnocellular (M) over parvocellular (P) projections. Likewise, face manipulations that bias processing towards the M or P pathway affect expression recognition in the SC model accordingly, an effect that dovetails with variations in activity in the human SC measured for this purpose with ultra-high-field functional magnetic resonance imaging. Lastly, the DNN generates saliency maps and extracts visual features, demonstrating that certain face parts, such as the mouth or the eyes, provide more discriminative information than others depending on the emotional expression, such as happiness or sadness. The present findings support the contention that the SC possesses the necessary infrastructure to analyse the visual features that define facial emotional stimuli, even without additional processing stages in the visual cortex or in ‘limbic’ areas. This article is part of the theme issue ‘Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience’.
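To make the architectural idea concrete, here is a hypothetical, heavily simplified sketch (not the authors' model): a fixed Gaussian low-pass front end approximates the dominance of M over P projections to the SC, followed by a shallow learnable convolutional stage; the layer sizes and the assumption of six basic-emotion classes are illustrative.

```python
# Hedged sketch: a toy retino-collicular CNN with a fixed low-pass
# "retinal" front end (magnocellular bias) and shallow SC-like layers.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

class RetinoCollicularNet(nn.Module):
    def __init__(self, n_expressions=6, blur_sigma=2.0):
        super().__init__()
        self.blur_sigma = blur_sigma  # larger sigma -> lower spatial frequencies
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=2, padding=3),  # superficial layers
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),  # deeper layers
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_expressions)

    def forward(self, x):  # x: (batch, 1, H, W) grayscale face images
        # Fixed (non-learned) low-pass filtering of the retinal input,
        # standing in for the prevalence of M over P projections.
        x = TF.gaussian_blur(x, kernel_size=9, sigma=self.blur_sigma)
        feats = self.features(x)
        return self.classifier(feats.flatten(1))
```

Raising or lowering `blur_sigma` is a crude analogue of the M/P-biasing face manipulations mentioned in the abstract: the more the input is restricted to low spatial frequencies, the more the model must rely on coarse, M-like information.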