Ground-object classification using high-resolution remote-sensing images is widely used in land planning, ecological monitoring, and resource protection. Traditional image segmentation techniques perform poorly on the complex scenes found in high-resolution remote-sensing images, so deep neural networks are increasingly applied to high-resolution remote-sensing image segmentation. The DeeplabV3+ network is a deep neural network based on an encoder-decoder architecture and is commonly used for high-precision image segmentation. However, its segmentation accuracy on high-resolution remote-sensing images is poor, its parameter count is large, and training it is costly. This paper therefore improves the DeeplabV3+ network: the MobileNetV2 network is used as the backbone feature-extraction network, an attention-mechanism module is added after the feature-extraction module and the ASPP module, and focal loss is introduced to balance the classes. This design enhances the network's ability to extract image features, reduces training cost, and achieves better semantic segmentation accuracy. Experiments on high-resolution remote-sensing image datasets show that the mIoU of the proposed method is 64.76% on the WHDLD dataset, 4.24% higher than that of the traditional DeeplabV3+ network, and 64.58% on the CCF BDCI dataset, 5.35% higher than that of the traditional DeeplabV3+ network; the method also outperforms the U-Net, PSPNet, and MACU-Net networks.
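The abstract does not give implementation details, so as an illustration of the focal-loss component only, a minimal PyTorch sketch might look like the following; the function name, the gamma value, and the alpha handling are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss for semantic segmentation (illustrative sketch).

    logits:  (N, C, H, W) raw network outputs
    targets: (N, H, W) integer class labels
    gamma:   focusing parameter; gamma=0 reduces to plain cross-entropy
    alpha:   optional (C,) tensor of per-class balancing weights
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-pixel CE, (N, H, W)
    pt = torch.exp(-ce)                      # probability assigned to the true class
    loss = (1.0 - pt) ** gamma * ce          # down-weight easy, well-classified pixels
    if alpha is not None:
        loss = alpha[targets] * loss         # per-class reweighting for imbalance
    return loss.mean()
```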
Semantic segmentation of standing trees is important for extracting standing-tree factors from images automatically and effectively. For the accurate segmentation of multiple standing trees against complex backgrounds, traditional methods suffer from shortcomings such as low segmentation accuracy and the need for manual intervention. To segment standing-tree images accurately and effectively, this article proposes SEMD, a lightweight segmentation model based on deep learning. DeepLabV3+ is chosen as the base framework to perform multi-scale fusion of the convolutional features of standing trees, reducing the loss of image edge detail and feature information during segmentation. MobileNet, a lightweight network, is integrated into the backbone to reduce computational complexity. Furthermore, SENet, an attention mechanism, is added to capture informative features efficiently and suppress useless ones. Extensive experimental results show that with the SEMD model, the mIoU of semantic segmentation of standing-tree images of different varieties and categories reaches 91.78% under simple backgrounds and 86.90% under complex backgrounds. The proposed lightweight segmentation model SEMD can thus solve the problem of segmenting multiple standing trees with high accuracy.
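As a reference point for the attention component, a standard SENet squeeze-and-excitation block in PyTorch is sketched below; where exactly SEMD inserts it and which reduction ratio it uses are not specified in the abstract, so those details are illustrative.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (Hu et al., 2018)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial average
        self.fc = nn.Sequential(                  # excitation: channel-wise gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                              # reweight feature channels
```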
Background: Multimodal analysis has shown great potential in the diagnosis and management of cancer. This study aimed to determine the multimodal data associations between radiological, pathologic, and molecular characteristics in bladder cancer. Methods: A retrospective study was conducted on computed tomography (CT), pathologic-slide, and RNA-sequencing data from 127 consecutive adult patients in China who underwent bladder surgery and were pathologically diagnosed with bladder cancer. A total of 200 radiological and 1,029 pathologic features were extracted by radiomics and pathomics. Multimodal association analysis and structural equation modeling were used to measure the cross-modal associations and structural relationships between CT and pathologic slides. A convolutional neural network was constructed for molecular subtyping based on multimodal imaging features. Class activation maps were used to examine feature contributions in model decision-making. Cox regression and Kaplan-Meier survival analysis were used to explore the relevance of multimodal features to the prognosis of patients with bladder cancer. Results: A total of 77 densely associated blocks of feature pairs were identified between CT and whole-slide images. The largest cross-modal associated block reflected tumor-grade properties. A significant relation was found between pathologic features and molecular subtypes (β=0.396; P<0.001). High-grade bladder cancer showed heterogeneity of significance across different scales and greater disorder at the microscopic level. The fused radiological and pathologic features achieved higher accuracy (area under the curve: 0.89; 95% CI: 0.75-1.0) than either unimodal method. Thirteen prognosis-related features from CT and whole-slide images were identified. Conclusions: Our work demonstrated the associations between CT, pathologic slides, and molecular signatures, and the potential of multimodal data analysis in related clinical applications. Multimodal data analysis showed the potential for cross-inference between modalities and achieved higher diagnostic accuracy than unimodal methods.
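The abstract describes fusing radiological and pathologic features for molecular subtyping but not the network itself; a generic late-fusion sketch in PyTorch is shown below. Only the feature dimensions (200 radiomic, 1,029 pathomic) come from the abstract; the layer sizes, subtype count, and all names are assumptions.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Embed each modality, concatenate, then classify molecular subtype."""

    def __init__(self, n_radiomic=200, n_pathomic=1029, n_subtypes=2):
        super().__init__()
        self.rad = nn.Sequential(nn.Linear(n_radiomic, 64), nn.ReLU())    # CT radiomics branch
        self.path = nn.Sequential(nn.Linear(n_pathomic, 64), nn.ReLU())   # pathomics branch
        self.head = nn.Linear(128, n_subtypes)                            # fused classifier

    def forward(self, radiomic, pathomic):
        fused = torch.cat([self.rad(radiomic), self.path(pathomic)], dim=1)
        return self.head(fused)
```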
This paper presents a sweetgum leaf-spot image segmentation method based on an improved DeeplabV3+ network, addressing the low accuracy of plant leaf-spot segmentation, problems with the recognition model, insufficient datasets, and slow training speeds. We replaced the backbone feature-extraction network of the model's encoder with the MobileNetV2 network, which greatly reduced the amount of computation in the model and improved its speed. Then, an attention-mechanism module was introduced into the backbone feature-extraction network and the decoder, which further improved the model's edge recognition and segmentation accuracy. Given the class imbalance in the sweetgum leaf-spot dataset (SLSD), a weighted loss function was introduced, assigning different weights to the spot and background classes to improve the segmentation of disease-spot regions. Finally, we graded the severity of the lesions. The experimental results show that the PA, mRecall, and mIoU of the improved model were 94.5%, 85.4%, and 81.3%, respectively, which are superior to the traditional DeeplabV3+, U-Net, and SegNet models and other commonly used plant-disease semantic segmentation methods. The model performs well on spots of different severities, demonstrating that this method can effectively improve segmentation performance for sweetgum leaf spots.
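For the two-class weighted loss described above, a minimal PyTorch sketch follows; the two weight values are hypothetical, since the abstract does not report the weights actually assigned to the spot and background classes.

```python
import torch
import torch.nn as nn

# Hypothetical weights: lesion pixels are rare, so they receive the larger weight.
class_weights = torch.tensor([0.3, 1.7])         # [background, spot]
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2, 256, 256)             # (batch, classes, H, W) network output
labels = torch.randint(0, 2, (4, 256, 256))      # per-pixel ground-truth class labels
loss = criterion(logits, labels)                 # spot errors count more than background
```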
Soil moisture time series are usually nonlinear in nature and are influenced by multiple environmental factors. The traditional autoregressive integrated moving average (ARIMA) method has high prediction accuracy but is only suitable for linear problems and can only predict a single, univariate time series. The gated recurrent unit (GRU) neural network can predict nonlinear multivariate time series, but a single nonlinear model does not yield optimal results. Therefore, this paper proposes BAG, a hybrid time-series prediction model that combines the linear and nonlinear characteristics of soil moisture, identifying the linear and nonlinear relationships in soil moisture data to improve prediction accuracy. In BAG, block Hankel tensor ARIMA (BHT-ARIMA) and GRU extract the linear and nonlinear features of the soil moisture data, respectively: BHT-ARIMA predicts the linear part of the soil moisture, GRU predicts the residual series, which is the nonlinear part, and the superposition of the two predictions is the final result. The performance of the proposed model was evaluated on five real datasets. The experiments show that BAG achieves higher prediction accuracy than other prediction models across different amounts of data and different numbers of environmental factors.
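The abstract outlines the superposition scheme (linear forecast plus GRU-predicted residual); the sketch below illustrates that idea on a single univariate series, substituting plain ARIMA for the tensor-based BHT-ARIMA. The window size, ARIMA order, hidden size, and training settings are all assumed, not taken from the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA  # plain ARIMA stands in for BHT-ARIMA

def hybrid_forecast(series, order=(2, 1, 1), window=8, epochs=200):
    """One-step-ahead forecast: ARIMA for the linear part, GRU for the residuals."""
    series = np.asarray(series, dtype=float)

    # 1. Linear part: fit ARIMA and compute in-sample residuals
    linear = ARIMA(series, order=order).fit()
    residuals = series - linear.predict(start=0, end=len(series) - 1)

    # 2. Nonlinear part: train a small GRU on sliding windows of the residuals
    X = np.stack([residuals[i:i + window] for i in range(len(residuals) - window)])
    y = residuals[window:]
    X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)  # (samples, window, 1)
    y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)

    gru, head = nn.GRU(1, 16, batch_first=True), nn.Linear(16, 1)
    opt = torch.optim.Adam(list(gru.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(epochs):
        pred = head(gru(X_t)[0][:, -1])       # last hidden state -> residual estimate
        loss = ((pred - y_t) ** 2).mean()     # mean squared error on residuals
        opt.zero_grad()
        loss.backward()
        opt.step()

    # 3. Superposition: linear forecast plus predicted residual
    last = torch.tensor(residuals[-window:], dtype=torch.float32).view(1, window, 1)
    return float(linear.forecast(1)[0] + head(gru(last)[0][:, -1]).item())
```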