The purpose of this study is to explore the feature recognition, diagnosis, and forecasting performance of Semi-Supervised Support Vector Machines (S3VMs) for brain image fusion Digital Twins (DTs). Because brain images contain large amounts of unlabeled data, both labeled and unlabeled data are exploited, and a semi-supervised Support Vector Machine (SVM) is proposed. Meanwhile, the AlexNet model is improved, and brain images in real space are mapped to virtual space using digital twins. On this basis, a diagnosis and prediction model for brain image fusion digital twins, based on the semi-supervised SVM and the improved AlexNet, is constructed. Magnetic Resonance Imaging (MRI) data from the brain tumor department of a hospital are collected to test the performance of the constructed model through simulation experiments. Several state-of-the-art models are included for performance comparison: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), AlexNet, and Multi-Layer Perceptron (MLP). Results demonstrate that the proposed model achieves a feature recognition and extraction accuracy of 92.52%, an improvement of at least 2.76% over the other models. Its training takes about 100 s, and testing takes about 0.68 s. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the proposed model are 4.91 and 5.59%, respectively. Regarding the assessment indicators of brain image segmentation and fusion, the proposed model provides a 79.55% Jaccard coefficient, a 90.43% Positive Predictive Value (PPV), a 73.09% Sensitivity, and a 75.58% Dice Similarity Coefficient (DSC), remarkably better than the other models. Acceleration efficiency analysis suggests that the improved AlexNet model is well suited to processing massive brain image data, with a higher speedup indicator.
In summary, the constructed model offers high accuracy, good acceleration efficiency, and excellent segmentation and recognition performance while keeping errors low, providing an experimental basis for brain image feature recognition and digital diagnosis.
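The abstract does not spell out the paper's exact S3VM formulation. As a rough, hypothetical sketch of the semi-supervised idea it describes (confident predictions on unlabeled samples are folded back into training), scikit-learn's self-training wrapper around a probabilistic SVM can stand in; the data here are synthetic, not brain-image features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Synthetic stand-in for extracted brain-image feature vectors.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Hide 80% of the labels (marked -1) to mimic the abundant unlabeled data.
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) < 0.8] = -1

# Self-training: an SVM is fit on the labeled subset, then its most
# confident predictions on unlabeled samples are added to the training
# set and the SVM is refit, iterating until no confident samples remain.
model = SelfTrainingClassifier(SVC(kernel="rbf", probability=True), threshold=0.8)
model.fit(X, y_semi)

accuracy = model.score(X, y)  # resubstitution accuracy, for illustration only
```

Self-training is only one member of the semi-supervised SVM family; a true S3VM additionally pushes the decision boundary through low-density regions of the unlabeled data.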
Environmental changes and human activities can cause serious degradation of murals, and sootiness is one of the most common problems affecting ancient Chinese indoor murals. To improve the visual quality of such murals, a restoration method for soot-covered murals is proposed based on the dark channel prior and Retinex with bilateral filtering, using hyperspectral imaging technology. First, radiometric correction and denoising, via band clipping and forward and inverse Minimum Noise Fraction (MNF) rotation, were applied to the hyperspectral data of the sooty mural to produce a denoised reflectance image. Second, a near-infrared band was selected from the reflectance image and combined with the green and blue visible bands to produce a pseudo-color image for the subsequent sootiness removal; the near-infrared band was chosen because it penetrates the sootiness layer better than the other bands. Third, the sootiness covering the pseudo-color image was preliminarily removed using the dark channel prior and by adjusting the image brightness. Finally, Retinex with bilateral filtering was applied to obtain the final restored image with the sootiness removed. The results show that images restored by the proposed method are superior in variance, average gradient, information entropy, and gray-scale contrast to the results of the traditional homomorphic filtering and Gaussian stretching methods. They also achieve the highest score in a comprehensive evaluation of edges, hue, and structure; the proposed method can therefore support further studies of sootiness removal in real mural paintings with more detailed information. The method effectively reduces the influence of sootiness on mural images while preserving details, which can reveal the original appearance of a mural and improve its visual quality.
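The abstract does not give the dark-channel computation itself. The standard formulation of the dark channel prior (treating the soot layer like a haze veil) can be sketched as follows; the patch size, omega, and the crude atmospheric-light estimate are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Minimum over the color channels, then a local minimum filter:
    # the result stays dark only where all channels are dark in a patch.
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    # t(x) = 1 - omega * dark_channel(I / A); omega < 1 keeps a trace of
    # the veiling layer so the result does not look unnaturally flat.
    return 1.0 - omega * dark_channel(img / atmosphere, patch)

def recover(img, atmosphere, t, t0=0.1):
    # Invert the veiling model, J = (I - A) / max(t, t0) + A, per channel.
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - atmosphere) / t + atmosphere

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))        # stand-in for the pseudo-color image
A = img.reshape(-1, 3).max(axis=0)   # crude per-channel atmospheric-light estimate
t = estimate_transmission(img, A)
restored = recover(img, A, t)
```

In the mural setting, the brightness adjustment and the subsequent Retinex-with-bilateral-filter step described above would follow this preliminary sootiness removal.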
Roofs of Ming and Qing Dynasty official-style architecture carry abundant prior knowledge about the structure and size of these buildings, and play an important role in 3D modeling, semantic recognition, and cultural inheritance. In this paper, we take 3D point clouds as the data source and present an automatic classification method for the roof types of Ming and Qing Dynasty official-style architecture based on a hierarchical semantic network. To classify the roofs into the correct categories, the characteristics of the different roof types are first analyzed and features including SoRs, DfFtR, DoPP, and NoREs are selected; subsequently, the corresponding feature extraction methods are proposed; thirdly, targeting the structure of the ridges, a matching graph relying on the attributed relational graph of the ridges is given; finally, building on this work, a hierarchical semantic network is proposed and its thresholds are determined with the help of the construction rules of Ming and Qing Dynasty official-style architecture. To fully verify the efficiency of the proposed method, various types of Ming and Qing Dynasty official-style architecture roofs are identified, and the experimental results show that all structures are classified correctly.
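The abstract names its features (SoRs, DfFtR, DoPP, NoREs) without expanding them, so the cascade below is purely illustrative: the feature names, thresholds, and decision order are hypothetical, while the categories (hip/wudian, gable-and-hip/xieshan, pyramidal/cuanjian, gable types) are standard roof types of Ming and Qing official-style architecture. It shows only the general shape of a hierarchical rule-based classifier over ridge structure:

```python
def classify_roof(num_ridges: int, has_main_ridge: bool,
                  ridges_converge_to_point: bool) -> str:
    """Toy hierarchical cascade over ridge-structure features.

    The features and thresholds here are hypothetical stand-ins; only
    the returned categories are real Ming/Qing official-style roof types.
    """
    if ridges_converge_to_point and not has_main_ridge:
        return "pyramidal (cuanjian)"        # ridges meet at a single apex
    if num_ridges >= 9:
        return "gable-and-hip (xieshan)"     # main, vertical, and diagonal ridges
    if num_ridges == 5:
        return "hip (wudian)"                # one main ridge plus four hip ridges
    return "gable (yingshan or xuanshan)"    # remaining two-slope types
```

For example, `classify_roof(5, True, False)` yields `"hip (wudian)"`. The actual method replaces such hand-written rules with thresholds derived from the dynasties' construction rules and a ridge matching graph.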
Aerial images are widely used for building detection. However, building detection methods based on aerial images alone typically perform worse than methods that use both LiDAR and image data. To narrow this gap, we present a framework for detecting and regularizing the boundaries of individual buildings using a feature-level fusion strategy that combines features from dense image matching (DIM) point clouds, the orthophoto, and the original aerial images. The proposed framework is divided into three stages. In the first stage, features from the original aerial images and the DIM points are fused to detect buildings and obtain the so-called blob of each individual building. Then, a feature-level fusion strategy is applied to match straight-line segments from the original aerial images so that the matched segments can be used in the final stage. Finally, a new footprint generation algorithm combines the matched straight-line segments with the boundary of each building blob to generate the building footprint. The performance of the framework is evaluated on a vertical aerial image dataset (Vaihingen) and two oblique aerial image datasets (Potsdam and Lunen). The experimental results reveal 89% to 96% per-area completeness, with accuracy above approximately 93%. Compared with six existing methods, the proposed method is not only more robust but also achieves performance similar to that of methods based on both LiDAR and images.
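The per-area completeness and accuracy figures quoted above are standard area-based detection metrics. Assuming the usual definitions (completeness = TP/(TP+FN) and correctness = TP/(TP+FP), with TP, FP, and FN measured as pixel areas against a ground-truth building mask), a minimal sketch:

```python
import numpy as np

def per_area_metrics(pred: np.ndarray, truth: np.ndarray):
    """Per-area completeness (recall) and correctness (precision),
    computed on boolean building masks of equal shape."""
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tp / (tp + fp)

# Tiny example: the prediction covers the truth fully but spills over.
pred = np.zeros((4, 4), dtype=bool)
pred[:2, :] = True                   # 8 pixels predicted as building
truth = np.zeros((4, 4), dtype=bool)
truth[:2, :2] = True                 # 4 pixels of true building
completeness, correctness = per_area_metrics(pred, truth)
# completeness = 1.0 (all truth covered), correctness = 0.5
```

Benchmark evaluations such as the ISPRS Vaihingen tests additionally report these metrics per object; the per-pixel form above is the simplest variant.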