High spatial resolution (HSR) remote sensing images have broad application prospects in urban planning, agricultural planning, and military training, so research on the semantic segmentation of remote sensing images is of great importance. However, the large data volume and complex backgrounds of HSR remote sensing images place great demands on algorithm efficiency. Although GPU pressure can be relieved by down-sampling the image or cropping it into small patches that are processed separately, the resulting loss of local detail or global contextual information limits segmentation accuracy. In this study, we propose a multi-field context fusion network (MCFNet) that efficiently preserves both global and local information. The method consists of three modules: a backbone network, a patch selection module (PSM), and a multi-field context fusion module (FM). Specifically, we propose a confidence-based local selection criterion in the PSM, which adaptively selects poorly segmented local regions of the image. The FM then dynamically aggregates the semantic information of multiple visual fields centered on each selected region to enhance its segmentation. Since MCFNet performs segmentation enhancement only on local regions of an image, it improves segmentation accuracy without consuming excessive GPU memory. We evaluate our method on two high spatial resolution remote sensing image datasets, DeepGlobe and Potsdam, and compare it with state-of-the-art methods. The results show that MCFNet achieves the best balance of segmentation accuracy, memory efficiency, and inference speed.
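The confidence-based selection step in the PSM can be pictured with a minimal sketch. This is an illustrative assumption about one plausible realization, not the paper's implementation: non-overlapping patches of the softmax probability map are ranked by mean top-1 confidence, and the least confident patches are flagged for refinement. The function and parameter names are hypothetical.

```python
import numpy as np

def select_uncertain_patches(prob_map, patch_size, k):
    # prob_map: (C, H, W) softmax class probabilities from the backbone
    confidence = prob_map.max(axis=0)  # per-pixel top-1 confidence, (H, W)
    H, W = confidence.shape
    scored = []
    for y in range(0, H - patch_size + 1, patch_size):
        for x in range(0, W - patch_size + 1, patch_size):
            # mean confidence of a non-overlapping patch
            mean_conf = confidence[y:y + patch_size, x:x + patch_size].mean()
            scored.append((mean_conf, (y, x)))
    scored.sort(key=lambda item: item[0])  # least confident first
    return [coords for _, coords in scored[:k]]
```

Only the `k` returned regions would then be re-processed by the fusion module, which is what keeps the memory cost bounded.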
The shape and position of abdominal and pelvic organs change considerably during radiotherapy, so image-guided radiation therapy (IGRT) is urgently needed. The world's first integrated CT-linac platform, equipped with fan-beam CT (FBCT), can provide diagnostic-quality FBCT images for adaptive radiotherapy (ART). However, CT scans carry the risk of excessive radiation dose. Reducing the tube current of the FBCT system lowers the scanning dose, but it introduces severe noise and artifacts in the reconstructed images. In this study, we proposed a deep learning method, the Content-Noise Cycle-Consistent Generative Adversarial Network (CNCycle-GAN), to improve the image quality and CT-value accuracy of low-dose FBCT images so that they meet the requirements of adaptive radiotherapy. We selected 76 patients with abdominal and pelvic tumors who received radiation therapy. Each patient received one low-dose CT scan and one normal-dose CT scan in IGRT mode during different fractions of radiotherapy. The normal-dose CT images (NDCT) and low-dose CT images (LDCT) of 70 patients were used for network training, and the remaining 6 patients were used to validate the performance of the network. The quality of the low-dose CT images after network restoration (RCT) was evaluated in three aspects: image quality, automatic delineation performance, and dose calculation accuracy. Taking the NDCT images as reference, the RCT images reduced the MAE from 34.34 ± 5.91 to 20.25 ± 4.27, increased the PSNR from 34.08 ± 1.49 to 37.23 ± 2.63, and increased the SSIM from 0.92 ± 0.08 to 0.94 ± 0.07. The P values of all the above performance indicators were below 0.01, indicating that the differences were statistically significant. The Dice similarity coefficients (DSC) between the automatic delineation of organs at risk such as the bladder, femoral heads, and rectum on the RCT images and the manual delineation by physicians all reached 0.98.
In terms of dose calculation accuracy, the difference in dose distribution between automatic plans based on RCT and those based on NDCT was smaller than the corresponding difference for plans based on LDCT. Therefore, the integrated CT-linac platform, combined with deep learning, makes low-dose FBCT adaptive radiotherapy for abdominal and pelvic tumors clinically feasible.
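The image-quality metrics reported above (MAE, PSNR) and the Dice similarity coefficient follow standard definitions, which can be sketched as below. This is generic illustrative code, not code from the study; the `data_range` parameter for PSNR is an assumption that depends on the CT intensity window used, and SSIM is omitted here since it is usually computed via a library such as scikit-image rather than by hand.

```python
import numpy as np

def mae(ref, img):
    # mean absolute error (e.g. in HU) between reference and test image
    return float(np.abs(ref.astype(np.float64) - img.astype(np.float64)).mean())

def psnr(ref, img, data_range):
    # peak signal-to-noise ratio in dB over the given dynamic range
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def dice(mask_a, mask_b):
    # Dice similarity coefficient between two binary delineation masks
    mask_a, mask_b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())
```

Lower MAE and higher PSNR/DSC against the NDCT reference are the directions of improvement reported for the RCT images.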
Due to the inherent inter-class similarity and class imbalance of remote sensing images, single-source semantic segmentation struggles to obtain effective results. We therefore apply multi-modal data to the semantic segmentation of HSR (high spatial resolution) remote sensing images, fusing the data to obtain richer semantic information and improve segmentation accuracy and efficiency. However, achieving efficient and useful information complementarity in multi-modal remote sensing image semantic segmentation remains a great challenge and calls for careful examination of the candidate models. The Transformer has made remarkable progress in reducing model complexity and improving scalability and training efficiency in computer vision tasks, so we introduce it into multi-modal semantic segmentation. To address the large computing-resource requirements of Transformer models, we propose MFTransNet, a model that combines a CNN (convolutional neural network) and a Transformer to realize a lightweight multi-modal semantic segmentation structure. First, a small convolutional network performs preliminary feature extraction. Subsequently, these features are sent to a multi-head feature fusion module for adaptive feature fusion. Finally, features of different scales are integrated through a multi-scale decoder. The experimental results demonstrate that MFTransNet achieves the best balance among segmentation accuracy, memory-usage efficiency, and inference speed.
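How attention can adaptively fuse two modality feature streams can be illustrated with a minimal sketch. This is a generic single-head scaled dot-product cross-attention in NumPy, not the actual MFTransNet multi-head feature fusion module; the function and argument names, and the choice of which modality queries which, are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(primary, auxiliary):
    # primary, auxiliary: (N, D) token features from the two modalities;
    # primary tokens query the auxiliary modality, with a residual connection
    d = primary.shape[-1]
    attn = softmax(primary @ auxiliary.T / np.sqrt(d), axis=-1)  # (N, N)
    return primary + attn @ auxiliary
```

A multi-head variant would run several such attention maps over learned projections in parallel, letting each head weight the auxiliary modality differently per spatial location.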
Because existing approaches analyze the evolution cycle and its theoretical structure poorly, the prediction accuracy of public opinion in university social networks is low. A new model for predicting the evolution trend of public opinion in university social networks is therefore proposed. By analyzing the theoretical structure of the public opinion evolution cycle and the life cycle of social networks in colleges and universities, the public opinion evolution cycle is determined. The evolution trend of public opinion is then divided into stages using the E-Divisive algorithm. The trust placed in different views on the network platform and the characteristics of public opinion events are abstracted, and a prediction model is established to forecast the evolution trend of social network public opinion in colleges and universities. The experimental results show that the relative error of the proposed model is lower than that of the traditional model, with error values below 0.5, indicating higher prediction accuracy. This is conducive to building a healthy social network platform for colleges and universities and promoting the healthy physical and mental development of college students.
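The stage-division step can be pictured with a simplified change-point sketch. This is a single-split toy stand-in, not the E-Divisive algorithm itself (E-Divisive recursively tests splits of a divergence-based energy statistic and validates them with permutation tests); here a split point in a 1-D opinion-intensity series is chosen simply by maximizing a size-weighted squared difference of segment means.

```python
import numpy as np

def best_split(series):
    # Score every split point of a 1-D series by the squared difference of
    # the two segment means, weighted by segment sizes, and return the index
    # of the best split.
    x = np.asarray(series, dtype=float)
    n = len(x)
    best_idx, best_score = 1, -np.inf
    for i in range(1, n):
        weight = i * (n - i) / n
        score = weight * (x[:i].mean() - x[i:].mean()) ** 2
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Applying such a split recursively to each resulting segment, with a statistical stopping rule, yields the multi-stage division of the evolution trend.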