Nanoparticles are regarded as promising transfection reagents for the effective and safe delivery of nucleic acids into specific types of cells or tissues, providing an alternative manipulation/therapy strategy to viral gene delivery. However, the current search for novel delivery materials is limited by conventional low-throughput, time-consuming multistep synthetic approaches. Additionally, conventional approaches are frequently accompanied by unpredictability and continual optimization refinements, impeding the flexible generation of material diversity and creating a major obstacle to achieving high transfection performance. Here we demonstrate a rapid developmental pathway toward highly efficient gene delivery systems by leveraging the power of a supramolecular synthetic approach and a custom-designed digital microreactor. Using the digital microreactor, broad structural/functional diversity can be programmed into a library of DNA-encapsulated supramolecular nanoparticles (DNA⊂SNPs) by systematically altering the mixing ratios of molecular building blocks and a DNA plasmid. In vitro transfection studies with the DNA⊂SNPs library identified the DNA⊂SNPs with the highest gene transfection efficiency, which can be attributed to the cooperative effects of the structures and surface chemistry of DNA⊂SNPs. We envision that such a rapid developmental pathway can be adopted for generating nanoparticle-based vectors for the delivery of a variety of loads.
In this paper, we address text and image matching in cross-modal retrieval for the fashion industry. Different from matching in the general domain, fashion matching is required to pay much more attention to the fine-grained information in fashion images and texts. Pioneering approaches detect regions of interest (i.e., RoIs) in images and use the RoI embeddings as image representations. In general, RoIs tend to represent "object-level" information in fashion images, while fashion texts are prone to describe more detailed information, e.g., styles and attributes. RoIs are thus not fine-grained enough for fashion text and image matching. To this end, we propose FashionBERT, which leverages patches as image features. With the pre-trained BERT model as the backbone network, FashionBERT learns high-level representations of texts and images. Meanwhile, we propose an adaptive loss to trade off the tasks in multitask learning during FashionBERT training. Two tasks (i.e., text and image matching and cross-modal retrieval) are incorporated to evaluate FashionBERT. On the public dataset, experiments demonstrate that FashionBERT achieves significant performance improvements over baseline and state-of-the-art approaches. In practice, FashionBERT is applied in a concrete cross-modal retrieval application. We provide a detailed analysis of matching performance and inference efficiency.
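The abstract's core idea of using patches rather than RoIs as image tokens can be illustrated with a minimal sketch. This is not the paper's pipeline; the patch size and image dimensions below are illustrative assumptions, and the function simply cuts an image into a grid of non-overlapping flattened patches of the kind that would be embedded and fed to a BERT-style transformer alongside text tokens.

```python
import numpy as np

def image_to_patches(image, patch_size):
    """Split an HxWxC image into non-overlapping flattened patches.

    Each row of the result is one patch token of length
    patch_size * patch_size * C. Assumes H and W are divisible
    by patch_size.
    """
    h, w, c = image.shape
    p = patch_size
    # Reshape into a (rows, p, cols, p, c) grid, group the two
    # spatial patch axes together, then flatten each patch.
    patches = (
        image.reshape(h // p, p, w // p, p, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, p * p * c)
    )
    return patches

# A 224x224 RGB image cut into 32x32 patches yields a 7x7 grid,
# i.e. 49 patch tokens, each a flattened vector of length 32*32*3.
image = np.zeros((224, 224, 3))
patches = image_to_patches(image, 32)
```

Unlike RoI detection, this tokenization covers the whole image uniformly, so fine-grained regions (fabric texture, trim, small attributes) that an object detector might skip still contribute tokens to the matching model.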