Color medical images better reflect a patient's lesion information and facilitate communication between doctors and patients. The combination of medical image processing and the Internet has been widely used in clinical medicine on the Internet of Medical Things. The classical Welsh method uses pixel matching to transfer color to grayscale images, but it suffers from problems such as unclear boundaries and a monotonous coloring effect, so the key information in colorized medical images cannot be conveyed effectively. To address this issue, we propose an image coloring method based on Gabor filtering combined with Welsh coloring and apply it to medical grayscale images. Gabor filtering, whose response resembles that of simple cells in the human visual system, is applied in 4 directions and at 6 scales to stratify the grayscale image and extract local spatial- and frequency-domain information. The Welsh coloring method is then used to render the layers with obvious textural features. Our experiments show that the layered processing effectively solves the problem of color bleeding across boundaries; compared with unstratified images, the coloring results of the processed images are closer to the real image.
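As a rough illustration of the filtering step, the following Python sketch builds a Gabor filter bank with 4 orientations and 6 scales using OpenCV and applies it to a grayscale image. The wavelength progression, kernel sizes, normalization, and the file name ct_slice.png are illustrative assumptions, not the parameters used in the paper.

import cv2
import numpy as np

def gabor_bank_responses(gray, orientations=4, scales=6):
    """Filter a grayscale image with a Gabor bank; return one response per kernel."""
    responses = []
    thetas = [np.pi * k / orientations for k in range(orientations)]
    wavelengths = [3.0 * (1.5 ** s) for s in range(scales)]   # assumed scale progression
    for lambd in wavelengths:
        ksize = int(2 * round(2 * lambd) + 1)                 # odd kernel, grows with wavelength
        for theta in thetas:
            kernel = cv2.getGaborKernel((ksize, ksize), 0.56 * lambd, theta,
                                        lambd, 0.5, 0, cv2.CV_32F)
            kernel /= np.abs(kernel).sum()                    # L1-normalize the kernel
            responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return responses                                          # 6 x 4 = 24 filtered layers

gray = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
layers = gabor_bank_responses(gray)
texture_map = np.mean([np.abs(r) for r in layers], axis=0)    # rough texture-energy map for layering

The per-layer responses (or a texture-energy map derived from them, as above) give the stratification on which a Welsh-style color transfer can then be run region by region.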
Image super-resolution reconstruction can improve image quality in the Internet of Things (IoT). It improves data transmission efficiency and is of great significance for data transmission encryption. Aiming at the problem of low image quality when neural networks are used for image super-resolution, a self-attention-based image reconstruction method is proposed for secure data transmission in IoT environments. The network model is improved: a residual network structure and sub-pixel convolution are used to extract image features, and a self-attention module is used to extract detailed information in the image. A generative adversarial method and an image feature perception method are used to improve the image reconstruction effect. Experimental results on public data sets show that the improved network model raises the quality of the reconstructed images and can effectively restore image details.
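The following PyTorch sketch illustrates a generator of the kind described above: residual blocks and sub-pixel (PixelShuffle) convolution for feature extraction and upsampling, plus a SAGAN-style self-attention block. Channel widths, the block count, the 4x scale factor, and the specific attention formulation are assumptions for illustration; the authors' exact architecture and adversarial training loop are not reproduced here.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)                    # identity skip preserves low-level detail

class SelfAttention(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend, starts as identity
    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.k(x).flatten(2)                   # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw) spatial attention map
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

class Generator(nn.Module):
    def __init__(self, ch=64, n_blocks=8, scale=4):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 9, padding=4), nn.PReLU())
        self.body = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)],
                                  SelfAttention(ch))
        self.tail = nn.Sequential(                 # sub-pixel convolution upsampling
            nn.Conv2d(ch, ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale), nn.PReLU(),
            nn.Conv2d(ch, 3, 9, padding=4))
    def forward(self, lr):
        feat = self.head(lr)
        return self.tail(feat + self.body(feat))

sr = Generator()(torch.randn(1, 3, 32, 32))        # -> (1, 3, 128, 128)

In an adversarial setup, this generator would be paired with a discriminator and trained with a GAN loss plus a feature-perception loss, which is the combination the abstract refers to.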
Image style transfer realizes mutual conversion between different image styles and is an essential application in big data systems. Neural-network-based image data mining can effectively extract useful information from images and improve the utilization rate of that information. However, when deep learning methods are used to transform image style, content information is often lost. To address this problem, this paper introduces an L1 loss on the basis of the VGG-19 network to reduce the difference between image style and content, and adds a perceptual loss computed on the semantic information of the feature maps to improve the model's perceptual ability. Experiments show that the proposed method improves style transfer while preserving image content information. The stylization of the improved model better meets people's requirements, and the evaluation indexes of structural similarity, cosine similarity, and mutual information increase by 0.323%, 0.094%, and 3.591%, respectively.
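A hedged sketch of the loss combination described above: a pixel-level L1 loss between the stylized output and the content image, plus a VGG-19 perceptual loss on feature maps. The chosen VGG layer (relu4_2), the loss weights, and the assumption of ImageNet-normalized inputs are illustrative, not the paper's exact settings; the weights argument requires torchvision >= 0.13.

import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualL1Loss(nn.Module):
    def __init__(self, content_layer=22, l1_weight=1.0, perc_weight=0.1):
        super().__init__()
        features = vgg19(weights="IMAGENET1K_V1").features.eval()
        # keep layers up to and including relu4_2 (index 22) as a fixed feature extractor
        self.vgg = nn.Sequential(*list(features.children())[:content_layer + 1])
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()
        self.l1_weight, self.perc_weight = l1_weight, perc_weight

    def forward(self, stylized, content):
        # inputs are assumed to be ImageNet-normalized tensors of shape (b, 3, h, w)
        pixel_loss = self.l1(stylized, content)                     # preserves content structure
        feat_loss = self.l1(self.vgg(stylized), self.vgg(content))  # semantic (perceptual) similarity
        return self.l1_weight * pixel_loss + self.perc_weight * feat_loss

criterion = PerceptualL1Loss()
loss = criterion(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))

In a full style-transfer model, this term would be added to the usual style loss so that the stylized output stays close to the content image both pixel-wise and in VGG feature space.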