In this study, a reactive oxygen species (ROS)-responsive nanoparticle system was designed to combine photodynamic therapy (PDT) and chemotherapy for the targeted treatment of oral tongue squamous cell carcinoma (OTSCC). A PEGylated prodrug of doxorubicin (DOX), linked via a thioketal bond and modified with the cRGD peptide (RPTD), was synthesized and used to prepare nanoparticles encapsulating the photosensitizer hematoporphyrin (HP). The resulting HP-loaded RPTD (RPTD/HP) nanoparticles had a regular spherical shape and a small size of approximately 180 nm. Upon laser irradiation, the RPTD/HP nanoparticles showed remarkable PDT efficiency and successfully induced ROS generation both in vitro and in vivo. DOX was released from the RPTD/HP nanoparticles in a markedly ROS-responsive manner owing to cleavage of the thioketal linker. In OTSCC cells, RPTD/HP nanoparticles were efficiently internalized and, after laser irradiation, potently inhibited cell growth and induced apoptosis. In OTSCC tumor-bearing mice, RPTD/HP nanoparticles displayed excellent tumor-targeting ability and, after local laser irradiation, markedly suppressed tumor growth through multiple mechanisms. Taken together, we provide a novel therapeutic nanosystem for OTSCC treatment that combines PDT and chemotherapy.
Infrared and visible image fusion can compensate for the incompleteness of single-modality imaging and provide a more comprehensive scene description based on cross-modal complementarity. Most existing works learn overall cross-modal features through high- and low-frequency constraints at the image level alone, ignoring the fact that cross-modal instance-level features often carry more valuable information. To fill this gap, we model cross-modal instance-level features by embedding instance information into a set of Mixture-of-Experts (MoEs) for the first time, prompting the fusion network to explicitly learn instance-level information. We propose a novel framework with instance-embedded Mixture-of-Experts for infrared and visible image fusion, termed MoE-Fusion, which contains an instance-embedded MoE group (IE-MoE), an MoE-Decoder, two encoders, and two auxiliary detection networks. By embedding the instance-level information learned by the auxiliary networks, IE-MoE achieves specialized learning of cross-modal foreground and background features. The MoE-Decoder adaptively selects suitable experts for cross-modal feature decoding and dynamically produces the fusion results. Extensive experiments show that MoE-Fusion outperforms state-of-the-art methods in preserving contrast and texture details by learning instance-level information in cross-modal images. Our code will be available at https://github.com/SunYM2020/MoE-Fusion.
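To make the core idea concrete, below is a minimal sketch of an instance-conditioned Mixture-of-Experts block in PyTorch. It is not the authors' implementation (see the linked repository for that); the class name InstanceMoE, the number of experts, and the way a detector-derived foreground/background mask biases the gate are all illustrative assumptions. The sketch only shows the general mechanism: a per-pixel gate, conditioned on both shared features and an instance mask, softly routes foreground and background regions to different expert branches.

```python
# Minimal sketch of an instance-conditioned Mixture-of-Experts block.
# Assumption: names (InstanceMoE, n_experts, inst_mask) are hypothetical,
# not the MoE-Fusion API; this illustrates the mechanism only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InstanceMoE(nn.Module):
    """Soft-gated MoE over conv experts; the gate is conditioned on the
    input features plus a 1-channel foreground/background instance mask."""

    def __init__(self, channels: int, n_experts: int = 4):
        super().__init__()
        # Each expert is a small conv branch over the shared feature map.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(n_experts)
        )
        # Gate sees features + instance mask, so foreground and background
        # pixels can be routed to different (specialized) experts.
        self.gate = nn.Conv2d(channels + 1, n_experts, kernel_size=1)

    def forward(self, feats: torch.Tensor, inst_mask: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); inst_mask: (B, 1, H, W), e.g. from a detector.
        gate_in = torch.cat([feats, inst_mask], dim=1)
        weights = F.softmax(self.gate(gate_in), dim=1)              # (B, E, H, W)
        out = torch.stack([e(feats) for e in self.experts], dim=1)  # (B, E, C, H, W)
        # Per-pixel convex combination of expert outputs.
        return (weights.unsqueeze(2) * out).sum(dim=1)              # (B, C, H, W)


# Toy usage: fused infrared/visible features plus a binary instance mask.
if __name__ == "__main__":
    moe = InstanceMoE(channels=32)
    x = torch.randn(2, 32, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
    print(moe(x, mask).shape)  # torch.Size([2, 32, 64, 64])
```

Soft, pixel-wise gating is used here for simplicity; a top-k or hard routing scheme would follow the same pattern, with the instance mask still serving as the prior that separates foreground from background experts.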