Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Through this refinement process, it is able to rectify defects in Stage-I results and add compelling details. To improve the diversity of the synthesized images and stabilize the training of the conditional GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-art methods on benchmark datasets demonstrate that the proposed method achieves significant improvements in generating photo-realistic images conditioned on text descriptions.
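The Conditioning Augmentation step can be sketched compactly: instead of feeding the text embedding to the generator directly, the network predicts a Gaussian in latent space and samples the conditioning vector from it. A minimal PyTorch sketch, assuming a precomputed text embedding; the module name and dimensions are illustrative, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Map a text embedding to a Gaussian N(mu, sigma) and sample from it,
    encouraging smoothness in the latent conditioning manifold."""
    def __init__(self, embed_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(embed_dim, cond_dim * 2)  # predicts mu and log-variance

    def forward(self, text_embedding):
        mu, logvar = self.fc(text_embedding).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        c = mu + std * torch.randn_like(std)  # reparameterized sample
        # KL(N(mu, sigma) || N(0, I)) term, added to the generator loss as a regularizer
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c, kl
```

The sampled vector c is then concatenated with a noise vector and fed to the Stage-I generator; the KL term keeps the conditioning distribution close to a standard Gaussian.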
The optical conductance of monolayer graphene is defined solely by the fine structure constant, α = e²/ħc (where e is the electron charge, ħ is Dirac's constant and c is the speed of light). The absorbance has been predicted to be independent of frequency. In principle, the interband optical absorption in zero-gap graphene could be saturated readily under strong excitation due to Pauli blocking. Here, we demonstrate the use of atomic-layer graphene as a saturable absorber in a mode-locked fiber laser for the generation of ultrashort soliton pulses (756 fs) in the telecommunication band. The modulation depth can be tuned over a wide range, from 66.5% to 6.2%, by varying the thickness of the graphene. Our results suggest that ultrathin graphene films are potentially useful as optical elements in fiber lasers. As a laser mode locker, graphene has many merits, such as lower saturation intensity, ultrafast recovery time, tunable modulation depth and wideband tuneability.
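For context, the frequency-independent absorbance quoted here follows from the universal sheet conductance of a single graphene layer, a standard result that the abstract assumes rather than derives (Gaussian units):

```latex
G = \frac{e^{2}}{4\hbar}
\quad\Longrightarrow\quad
A = \pi\alpha = \frac{\pi e^{2}}{\hbar c}
\approx \frac{\pi}{137} \approx 2.3\% \text{ per layer}
```

Since each atomic layer absorbs roughly πα of the incident light, stacking N layers increases the absorption approximately N-fold, which is consistent with the thickness-tuned modulation depth reported above.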
In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details in different subregions of the image by paying attention to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. For the first time, it shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.
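The word-level attention at the core of the attentional generative network can be sketched as dot-product attention between image subregion features and word embeddings. A minimal PyTorch sketch with illustrative shapes; the paper additionally learns a projection into a common semantic space before the dot product:

```python
import torch
import torch.nn.functional as F

def word_attention(region_feats, word_feats):
    """Compute a word-context vector for every image subregion.

    region_feats: (B, N, D) hidden features of N image subregions
    word_feats:   (B, T, D) embeddings of the T words in the description
    """
    scores = torch.bmm(region_feats, word_feats.transpose(1, 2))  # (B, N, T)
    attn = F.softmax(scores, dim=-1)       # each region attends over the words
    context = torch.bmm(attn, word_feats)  # (B, N, D) weighted word features
    return context, attn
```

Visualizing `attn` is what reveals which words drive which image regions, as in the attention-layer analysis mentioned above.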
Complex carbohydrates of plants are the main food sources of animals and microbes, and serve as promising renewable feedstock for biofuel and biomaterial production. Carbohydrate-active enzymes (CAZymes) are the most important enzymes for complex carbohydrate metabolism. With an increasing number of plant and plant-associated microbial genomes and metagenomes being sequenced, there is an urgent need for automatic tools for genomic data mining of CAZymes. We developed the dbCAN web server in 2012 to provide a public service for automated CAZyme annotation of newly sequenced genomes. Here, dbCAN2 (http://cys.bios.niu.edu/dbCAN2) is presented as an updated meta server that integrates three state-of-the-art tools for CAZome (all CAZymes of a genome) annotation: (i) HMMER search against the dbCAN HMM (hidden Markov model) database; (ii) DIAMOND search against the CAZy pre-annotated CAZyme sequence database; and (iii) Hotpep search against the conserved CAZyme short peptide database. Combining the three outputs and removing CAZymes found by only one tool can significantly improve CAZome annotation accuracy. In addition, dbCAN2 now also accepts nucleotide sequence submissions, and offers a service to predict physically linked CAZyme gene clusters (CGCs), a useful online tool for identifying putative polysaccharide utilization loci (PULs) in microbial genomes or metagenomes.
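The consensus rule described above (drop CAZymes reported by only one tool) is simple to express. A hypothetical Python sketch, assuming each tool's hits have already been parsed into a collection of protein IDs; the function name and input format are illustrative, not part of dbCAN2:

```python
def consensus_cazymes(hmmer_hits, diamond_hits, hotpep_hits, min_tools=2):
    """Keep protein IDs reported by at least `min_tools` of the three tools."""
    tool_outputs = (set(hmmer_hits), set(diamond_hits), set(hotpep_hits))
    candidates = set().union(*tool_outputs)
    return {pid for pid in candidates
            if sum(pid in hits for hits in tool_outputs) >= min_tools}
```

Requiring agreement between at least two of HMMER, DIAMOND and Hotpep trades a little sensitivity for substantially fewer false-positive CAZyme calls.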
The maps follow the P. Gong et al. land-cover classification system as well as the International Geosphere-Biosphere Programme (IGBP) system. Using the four classification algorithms, we obtained the initial set of global land-cover maps. The SVM produced the highest overall classification accuracy (OCA) of 64.9% assessed with our test samples, followed by RF (59.8%), J4.8 (57.9%), and MLC (53.9%). We also estimated the OCAs using a subset of our test samples (8629), each of which represented a homogeneous area greater than 500 m × 500 m; on this subset, the OCA for the SVM was 71.5%. As a consistent source for estimating the coverage of global land-cover types, estimation from the test samples shows that only 6.90% of the world is planted for agricultural production; including unplanted croplands, the total cropland area is 11.51%. Forests, grasslands, and shrublands cover 28.35%, 13.37%, and 11.49% of the world, respectively. Impervious surfaces cover only 0.66% of the world. Inland waterbodies, barren lands, and snow and ice cover 3.56%, 16.51%, and 12.81% of the world, respectively.