An image is a very effective tool for conveying emotions. Many researchers have investigated computing image emotions using various features extracted from images. In this paper, we focus on two high-level features, the object and the background, and assume that the semantic information of images is a good cue for predicting emotion. An object is one of the most important elements that define an image, and we find through experiments that there is a high correlation between the object and the emotion of an image. Even with the same object, there may be slight differences in emotion due to different backgrounds, so we use the semantic information of the background to improve prediction performance. By combining the different levels of features, we build an emotion-based feed-forward deep neural network that produces the emotion values of a given image. The output emotion values in our framework are continuous values in the 2-dimensional space (valence and arousal), which describe emotions more effectively than a small number of emotion categories. Experiments confirm the effectiveness of our network in predicting the emotion of images.
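As a rough sketch of this kind of fusion architecture (the layer sizes, feature dimensions, and module names below are hypothetical; the paper does not publish this exact code), a small feed-forward regressor in PyTorch can concatenate pre-extracted object and background features and output continuous valence-arousal values:

```python
# Hypothetical sketch: a feed-forward regressor that fuses pre-extracted
# object and background feature vectors and outputs (valence, arousal).
import torch
import torch.nn as nn

class EmotionRegressor(nn.Module):
    def __init__(self, obj_dim=2048, bg_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obj_dim + bg_dim, hidden),  # fuse object + background features
            nn.ReLU(),
            nn.Linear(hidden, hidden // 2),
            nn.ReLU(),
            nn.Linear(hidden // 2, 2),            # two outputs: valence, arousal
        )

    def forward(self, obj_feat, bg_feat):
        x = torch.cat([obj_feat, bg_feat], dim=-1)
        return self.net(x)

model = EmotionRegressor()
obj = torch.randn(4, 2048)  # e.g. features from an object-recognition backbone
bg = torch.randn(4, 512)    # e.g. features from a scene/background model
va = model(obj, bg)         # shape (4, 2): continuous valence-arousal values
```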
Next-generation sequencing (NGS) technology has become a powerful tool for dissecting the molecular and pathological signatures of a variety of human diseases. However, the limited availability of biological samples from different disease stages is a major hurdle in studying disease progression and identifying early pathological changes. Deep learning techniques have recently begun to be applied to NGS data to predict the progression of biological processes. In this study, we applied a deep learning technique called generative adversarial networks (GANs) to predict the molecular progression of Alzheimer's disease (AD). We successfully applied GANs to RNA-seq data from the 5xFAD mouse model of AD, which recapitulates major AD features, including massive amyloid-β (Aβ) accumulation in the brain. We examined how the generator learns sample-specific generation and biologically meaningful gene associations. Based on these observations, we simulated virtual disease progression by latent-space interpolation, yielding transition curves of various genes with pathological changes from the normal to the AD state. By performing pathway analysis based on the transition curve patterns, we identified several pathological processes with progressive changes, such as inflammatory systems and synapse functions, which have previously been demonstrated to be involved in the pathogenesis of AD. Interestingly, our analysis indicates that alteration of cholesterol biosynthesis begins at a very early stage of AD, suggesting that it is an initial effect mediating the altered cholesterol metabolism of AD downstream of Aβ accumulation. Here, we suggest that GANs are a useful tool for studying disease progression, leading to the identification of early pathological signatures.
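A minimal sketch of the latent-space interpolation step, assuming a trained generator that maps a latent vector to a gene-expression profile (the `generator` callable, latent codes, and step count here are illustrative placeholders, not the study's actual settings):

```python
# Hypothetical sketch of latent-space interpolation: given latent codes that a
# trained generator associates with normal and AD-like expression profiles,
# interpolate between them and read off a per-gene "transition curve".
import numpy as np

def transition_curves(generator, z_normal, z_ad, n_steps=20):
    """generator: callable mapping a latent vector to a vector of gene-expression values."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    # linear interpolation in latent space, one generated profile per step
    profiles = np.stack([generator((1 - a) * z_normal + a * z_ad) for a in alphas])
    return alphas, profiles  # profiles[:, g] is the transition curve of gene g
```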
Recent advances in deep learning-based approaches hold great promise for unravelling biological mechanisms, discovering biomarkers, and predicting gene function. Here, we deployed a deep generative model to simulate the molecular progression of tauopathy and dissect its early features. We applied generative adversarial networks (GANs) to bulk RNA-seq analysis in a mouse model of tauopathy (TPR50-P301S). The union set of differentially expressed genes from four comparisons (two phenotypes at two time points) was used as input training data. We devised four-way transition curves for a virtual simulation of disease progression, clustered and grouped the curves by pattern, and identified eight distinct pattern groups showing different biological features in Gene Ontology enrichment analyses. Genes upregulated in early tauopathy were associated with vasculature development, and these changes preceded immune responses. We confirmed significant disease-associated differences in public human data for the genes of the different pattern groups. Validation with weighted gene co-expression network analysis suggested that our GAN-based approach can detect distinct patterns of early molecular changes during disease progression, which may be extremely difficult to capture in in vivo experiments. The generative model is a valid systematic approach for exploring sequential cascades of mechanisms and targeting early molecular events related to dementia.
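As an illustration of how simulated transition curves could be grouped by pattern (the clustering method, normalization, and group count below are assumptions made for the sketch, not necessarily those used in the study):

```python
# Hypothetical sketch: group transition curves by shape. Curves are z-scored
# per gene so that clustering reflects the pattern, not expression magnitude.
import numpy as np
from sklearn.cluster import KMeans

def cluster_curves(profiles, n_groups=8):
    """profiles: array of shape (n_steps, n_genes) from latent-space interpolation."""
    curves = profiles.T                                                  # one row per gene
    z = (curves - curves.mean(axis=1, keepdims=True)) / (curves.std(axis=1, keepdims=True) + 1e-8)
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(z)
    return labels  # pattern-group assignment per gene, e.g. input to GO enrichment
```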
For the best results in quantitative polymerase chain reaction (qPCR) experiments, it is essential to design high-quality primers that satisfy a multitude of constraints and suit the purpose of the experiment. The constraints include many filtering constraints, a homology test against a huge number of off-target sequences, applying the same constraints to batch design of primers, exon spanning, and avoiding single nucleotide polymorphism (SNP) sites. The target sequences are either stored in a database or given as FASTA sequences, and the experiment may aim to amplify either each target sequence with its own primer pair designed under the same constraints or all target sequences with a single pair of primers. Many websites have been proposed, but none of them, including our previous MRPrimerW, supports all of the above features. Here, we describe MRPrimerW2, an updated version of MRPrimerW that supports all of these features while maintaining the advantages of MRPrimerW in terms of the kinds and sizes of databases of valid primers and the number of search modes. To achieve this, we exploited GPU computation and a disk-based key-value store on a PCIe SSD. The complete set of 3 509 244 680 valid primers in MRPrimerW2 exhaustively covers 99% of nine important organisms. Free access: http://MRPrimerW2.com
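For intuition, the sketch below shows what single-primer filtering constraints typically look like; the thresholds and the Wallace-rule melting temperature are illustrative only and are not MRPrimerW2's actual filter settings:

```python
# Hypothetical sketch of single-primer filtering constraints (illustrative
# thresholds only; not the actual MRPrimerW2 filters).
def gc_content(primer):
    return (primer.count("G") + primer.count("C")) / len(primer)

def melting_temp(primer):
    # Wallace rule, a rough approximation suitable for short oligos
    return 2 * (primer.count("A") + primer.count("T")) + 4 * (primer.count("G") + primer.count("C"))

def passes_filters(primer, length=(19, 23), gc=(0.4, 0.6), tm=(58, 62)):
    return (length[0] <= len(primer) <= length[1]
            and gc[0] <= gc_content(primer) <= gc[1]
            and tm[0] <= melting_temp(primer) <= tm[1]
            and "AAAA" not in primer and "TTTT" not in primer
            and "GGGG" not in primer and "CCCC" not in primer)  # no homopolymer runs

print(passes_filters("ATGCGTACGATCGTTAGCCA"))  # True under these example thresholds
```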
Figure 1: Given a source image and a target emotion (towards more positive or negative), the system recolors the image using reference image segments selected to match the target emotion as well as the labels of the semantically segmented regions.
We introduce an affective image recoloring method for changing the overall mood of an image in a numerically measurable way. Given a semantically segmented source image and a target emotion, our system finds reference image segments from a collection of images that have been tagged via crowdsourcing with numerically measured emotion labels. We then recolor the source segments using colors from the selected target segments while preserving the gradient of the source image, producing a seamless and natural result. A user study confirms the effectiveness of our method in accomplishing the stated goal of altering the mood of the image to match the target emotion level.
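As a simplified stand-in for the recoloring step (an assumed per-segment statistics-transfer approach, not the paper's gradient-preserving formulation), one could transfer the chromatic statistics of a reference segment in Lab space while keeping the source lightness channel, so the luminance structure is left untouched:

```python
# Simplified, assumed recoloring sketch: match the a/b channel statistics of a
# reference segment while preserving the source L (lightness) channel.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def recolor_segment(src_rgb, src_mask, ref_rgb, ref_mask):
    src_lab, ref_lab = rgb2lab(src_rgb), rgb2lab(ref_rgb)
    out = src_lab.copy()
    for c in (1, 2):  # a and b channels only; L is kept from the source
        s = src_lab[..., c][src_mask]
        r = ref_lab[..., c][ref_mask]
        out[..., c][src_mask] = (s - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
    return lab2rgb(out)
```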