Microporous metal-organic frameworks (MOFs) that display permanent porosity show great promise for a wide range of applications. These applications can be developed further and extended by encapsulating various functional species (for example, nanoparticles) within the frameworks. However, despite an increasing number of reports of nanoparticle/MOF composites, it remains a significant challenge to simultaneously control the size, composition, dispersion, spatial distribution and confinement of the nanoparticles incorporated within MOF matrices. Here, we report a controlled encapsulation strategy that enables surfactant-capped nanostructured objects of various sizes, shapes and compositions to be enshrouded by a zeolitic imidazolate framework (ZIF-8). The incorporated nanoparticles are well dispersed and fully confined within the ZIF-8 crystals. This strategy also allows the controlled incorporation of multiple nanoparticles within each ZIF-8 crystallite. The as-prepared nanoparticle/ZIF-8 composites exhibit active (catalytic, magnetic and optical) properties that derive from the nanoparticles, as well as molecular sieving and orientation effects that originate from the framework material.
Mesenchymal stem cells (MSCs) are multipotent stromal cells that exist in many tissues and are capable of differentiating into several different cell types. Exogenously administered MSCs migrate to damaged tissue sites, where they participate in tissue repair. Their communication with the inflammatory microenvironment is an essential part of this process. In recent years, much has been learned about the cellular and molecular mechanisms of the interaction between MSCs and various participants in inflammation. Depending on their type and intensity, inflammatory stimuli confer on MSCs the ability to suppress the immune response in some cases or to enhance it in others. Here we review the current findings on the immunoregulatory plasticity of MSCs in disease pathogenesis and therapy.
Language model pre-training, such as BERT, has significantly improved the performance of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to execute them efficiently on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models. By leveraging this new KD method, the abundant knowledge encoded in a large "teacher" BERT can be effectively transferred to a small "student" TinyBERT. We then introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pre-training and task-specific learning stages. This framework ensures that TinyBERT can capture both the general-domain and the task-specific knowledge in BERT. TinyBERT4, with 4 layers, is empirically effective and achieves more than 96.8% of the performance of its teacher BERT-Base on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT4 is also significantly better than 4-layer state-of-the-art baselines on BERT distillation, with only ∼28% of their parameters and ∼31% of their inference time. Moreover, TinyBERT6, with 6 layers, performs on par with its teacher BERT-Base.
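The abstract names layer-wise Transformer distillation only at a high level. As a concrete illustration, here is a minimal sketch of one common form of such distillation (attention-matrix and hidden-state matching between paired layers, plus a soft-label prediction loss), assuming a PyTorch setup; the function names, the `proj` projection, the `layer_map` argument and the loss composition are hypothetical choices for this sketch, not the paper's exact implementation.

```python
import torch.nn.functional as F

def transformer_layer_loss(student_hidden, student_attn,
                           teacher_hidden, teacher_attn,
                           proj, layer_map):
    """Layer-wise distillation loss (illustrative, not TinyBERT's exact code).

    student_hidden / teacher_hidden: lists of [batch, seq, dim] tensors.
    student_attn / teacher_attn: lists of [batch, heads, seq, seq] tensors
        (assumes student and teacher use the same number of attention heads).
    proj: a linear map from the student's hidden size to the teacher's,
        so the two hidden states can be compared directly.
    layer_map: for each student layer i, the teacher layer it distills from.
    """
    loss = 0.0
    for i, j in enumerate(layer_map):
        # Hidden-state distillation: MSE after projecting the (smaller)
        # student representation into the teacher's dimension.
        loss = loss + F.mse_loss(proj(student_hidden[i]), teacher_hidden[j])
        # Attention distillation: MSE between attention matrices.
        loss = loss + F.mse_loss(student_attn[i], teacher_attn[j])
    return loss

def prediction_loss(student_logits, teacher_logits, temperature=1.0):
    # Prediction-layer distillation: soft cross-entropy between the
    # student's and teacher's output distributions.
    t = temperature
    return F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                    F.softmax(teacher_logits / t, dim=-1),
                    reduction="batchmean") * (t * t)
```

For a 4-layer student distilled from a 12-layer teacher, a uniform mapping such as `layer_map = [2, 5, 8, 11]` pairs each student layer with every third teacher layer; under the two-stage framework described above, losses of this kind would be applied during both the general pre-training distillation stage and the task-specific stage.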