Background: Magnetic resonance imaging (MRI) images synthesized from computed tomography (CT) data can provide more detailed information on pathological structures than CT data alone; thus, MRI synthesis has received increasing attention, especially in medical scenarios where only CT images are available. A novel convolutional neural network (CNN) combined with a contextual loss function was proposed for synthesizing T1- and T2-weighted images (T1WI and T2WI) from CT data.
Methods: A total of 5,053 and 5,081 slices of T1WI and T2WI, respectively, were selected for the dataset of CT and MRI image pairs. Affine registration, image denoising, and contrast enhancement were performed on this multi-modality medical image dataset comprising T1WI, T2WI, and CT images of the brain. A deep CNN, called double ResNet-U-Net (DRUNet), was then proposed by modifying the ResNet structure to constitute the encoder and decoder of U-Net. Three loss functions were used to optimize the parameters of the proposed models: mean squared error (MSE) loss, binary cross-entropy (BCE) loss, and contextual loss. Statistical analysis with independent-sample t-tests was conducted to compare DRUNets with different loss functions and different numbers of network layers.
Results: DRUNet-101 with contextual loss yielded the highest values of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Tenengrad function (34.25±2.06, 0.97±0.03, and 17.03±2.75 for T1WI and 33.50±1.08, 0.98±0.05, and 19.76±3.54 for T2WI, respectively). The results were statistically significant at P<0.001 with a narrow confidence interval of the difference, indicating the superiority of DRUNet-101 with contextual loss. In addition, both image zooming and difference maps of the final synthetic MR images visually reflected the robustness of DRUNet-101 with contextual loss. The visualization of convolution filters and feature maps showed that the proposed model can generate synthetic MR images with high-frequency information.
Conclusions: The results demonstrated that DRUNet-101 with the contextual loss function preserved more high-frequency information in synthetic MR images than the other two loss functions. The proposed DRUNet model has a distinct advantage over previous models in terms of PSNR, SSIM, and Tenengrad score. Overall, DRUNet-101 with contextual loss is recommended for synthesizing MR images from CT scans.
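For reference, the Tenengrad function reported above is a standard sharpness metric: the squared magnitude of the Sobel image gradients, averaged (or summed) over the image, so higher values indicate more high-frequency edge content. A minimal sketch is below; the aggregation by mean and the edge-padding are assumptions, as the abstract does not specify the exact normalization used in the study.

```python
import numpy as np

def tenengrad(img: np.ndarray) -> float:
    """Tenengrad sharpness score: mean squared magnitude of Sobel gradients.

    Sketch only -- the paper's exact normalization (mean vs. sum,
    background masking) is assumed, not taken from the abstract.
    """
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")  # pad so output matches input size
    # 3x3 Sobel kernels written as explicit shifted-slice sums
    gx = (p[0:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[0:-2, 0:-2] - 2 * p[1:-1, 0:-2] - p[2:, 0:-2])
    gy = (p[2:, 0:-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[0:-2, 0:-2] - 2 * p[0:-2, 1:-1] - p[0:-2, 2:])
    return float(np.mean(gx ** 2 + gy ** 2))
```

A perfectly flat image scores zero, while sharp edges raise the score, which is why the metric serves as a proxy for high-frequency detail in the synthetic images.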
Background: Moyamoya disease (MMD) is a rare cerebrovascular occlusive disease with progressive stenosis of the terminal portion of the internal carotid artery (ICA) and its main branches, which can cause severe complications and carries high risks of disability and mortality. Accurate and timely diagnosis may be difficult for physicians who are unfamiliar with MMD. Therefore, this study aimed to achieve a preoperative deep-learning-based evaluation of MMD by detecting steno-occlusive changes in the middle cerebral artery or distal ICA areas.
Methods: A fine-tuned deep learning model was developed using a three-dimensional (3D) coordinate attention residual network (3D CA-ResNet). This study enrolled 50 preoperative patients with MMD and 50 controls, and the corresponding time-of-flight magnetic resonance angiography (TOF-MRA) imaging data were acquired. The 3D CA-ResNet was trained on sub-volumes and tested using patch-based and subject-based methods. The performance of the 3D CA-ResNet, evaluated by the area under the curve (AUC) of the receiver operating characteristic, was compared with that of three other conventional 3D networks.
Results: The patch-based test achieved an AUC value of 0.94 for the 3D CA-ResNet in 480 patches from 10 test patients and 10 test controls, which is significantly higher than the results of the other networks. The 3D CA-ResNet correctly classified the MMD patients and normal healthy controls, and the vascular lesion distribution in subjects with the disease was investigated by generating a stenosis probability map and 3D vascular structure segmentation.
Conclusions: The results demonstrated the reliability of the proposed 3D CA-ResNet in detecting stenotic areas on TOF-MRA imaging, and it outperformed three other models in identifying vascular steno-occlusive changes.
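The networks above are compared by the AUC of the receiver operating characteristic, which equals the probability that a randomly chosen positive case (here, an MMD patch) is scored above a randomly chosen negative one, with ties counted as half. A minimal, dependency-free sketch of that statistic follows; the function name and tie handling are illustrative, not taken from the study.

```python
def roc_auc(labels, scores):
    """ROC AUC via the pairwise (Mann-Whitney) formulation.

    Counts, over all positive/negative pairs, how often the positive
    outscores the negative (ties count 0.5). O(n^2), fine for a sketch;
    production code would use a rank-based O(n log n) implementation.
    """
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect classifier scores 1.0, chance-level scoring gives 0.5, and the abstract's 0.94 over 480 patches sits close to the perfect-separation end of that scale.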
Adversarial camouflage is a widely used physical attack against vehicle detectors owing to its superior multi-view attack performance. One promising approach uses differentiable neural renderers to facilitate adversarial camouflage optimization through gradient back-propagation. However, existing methods often struggle to capture environmental characteristics during the rendering process or to produce adversarial textures that map precisely to the target vehicle, resulting in suboptimal attack performance. Moreover, these approaches neglect diverse weather conditions, reducing the efficacy of generated camouflage across varying weather scenarios. To tackle these challenges, we propose a robust and accurate camouflage generation method, namely RAUCA. The core of RAUCA is a novel neural rendering component, Neural Renderer Plus (NRP), which can accurately project vehicle textures and render images with environmental characteristics such as lighting and weather. In addition, we integrate a multi-weather dataset for camouflage generation, leveraging the NRP to enhance attack robustness. Experimental results on six popular object detectors show that RAUCA consistently outperforms existing methods in both simulation and real-world settings.
Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques to perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method is employed to hide the attacked entities in dense regions of the two KGs, such that the derived perturbations are unnoticeable. Second, an attack signal amplification method is developed to mitigate gradient vanishing during the adversarial attack process, further improving attack effectiveness.
Opioids are often first-line analgesics in pain therapy. However, prolonged use of opioids causes paradoxical pain, termed “opioid-induced hyperalgesia” (OIH). The infralimbic medial prefrontal cortex (IL-mPFC) has been suggested to be critical in inflammatory and neuropathic pain processing through its dynamic output from Layer V pyramidal neurons. Whether the OIH condition induces excitability changes in these output neurons, and what mechanisms underlie such changes, remains elusive. Here, with a combination of patch-clamp recording, immunohistochemistry, and optogenetics, we revealed that IL-mPFC Layer V pyramidal neurons exhibited hyperexcitability together with higher input resistance. In line with this, optogenetic and chemogenetic activation of these neurons aggravated behavioral hyperalgesia in male OIH rats. Inhibition of these neurons alleviated hyperalgesia in male OIH rats but exerted the opposite effect in male control rats. Electrophysiological analysis of the hyperpolarization-activated cation current (Ih) demonstrated that decreased Ih is a prerequisite for the hyperexcitability of IL-mPFC output neurons. This decreased Ih was accompanied by a decrease in HCN1, but not HCN2, immunolabeling in these neurons. In contrast, the application of an HCN channel blocker increased the hyperalgesia threshold of male OIH rats. Consequently, we identified an HCN-channel-dependent hyperexcitability of IL-mPFC output neurons, which governs the development and maintenance of OIH in male rats.