Network resources in wireless multimedia sensor networks (WMSNs) are scarce. Compressing media data can reduce the dependence of the user’s Quality of Experience (QoE) on network resources. Existing video coding standards, such as H.264 and H.265, exploit only spatial and short-term temporal redundancy. However, video usually also contains redundancy over long time spans. Therefore, compressing this long-term redundancy without compromising the user experience or adaptive delivery is a challenge in WMSNs. In this paper, a semantic-aware super-resolution transmission for adaptive video streaming system (SASRT) for WMSNs is presented. In the SASRT, deep learning algorithms are used to extract semantic information from video and to enrich video quality. On the multimedia sensor, semantic information and video data are encoded at different bit rates and uploaded to the user. Semantic information can also be identified on the user side, which further reduces the amount of data that must be transferred but may increase the computational cost at the user. On the user side, video quality is enriched with super-resolution techniques. The major challenges faced by SASRT include where semantic information should be identified, how the bit rates of semantic and video information should be chosen, and how network resources should be allocated between video and semantic information. The optimization problem is formulated as a complexity-constrained nonlinear NP-hard problem. Three adaptive strategies and a heuristic algorithm are proposed to solve it. Simulation results demonstrate that SASRT can compress long-term video redundancy effectively and enrich the user experience under limited network resources while simultaneously improving the utilization of those resources.
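The abstract does not specify how the heuristic selects bit rates, so the following is only a minimal sketch under stated assumptions: a brute-force allocator that splits a bandwidth budget between a video stream and a semantic-information stream, each offered at a few candidate bit rates, and picks the pair that maximizes a placeholder QoE utility. The rate lists and the utility function are hypothetical, not the model used in the paper.

```python
# Hypothetical rate allocator for illustration only; candidate rates and the
# QoE utility below are placeholders, not the paper's model or algorithm.
from itertools import product

VIDEO_RATES = [0.5, 1.0, 2.0, 4.0]   # candidate video bit rates (Mbps)
SEMANTIC_RATES = [0.1, 0.2, 0.4]     # candidate semantic-stream bit rates (Mbps)

def qoe(video_rate, semantic_rate):
    """Placeholder utility: video rate dominates; semantic data adds a bonus
    because it helps the receiver-side super-resolution enhancement."""
    return video_rate + 0.5 * semantic_rate

def allocate(bandwidth_mbps):
    """Pick the feasible (video, semantic) rate pair with the highest utility.
    With only a handful of candidates, brute force stands in for the paper's
    heuristic."""
    best_utility, best_pair = float("-inf"), None
    for v, s in product(VIDEO_RATES, SEMANTIC_RATES):
        if v + s <= bandwidth_mbps and qoe(v, s) > best_utility:
            best_utility, best_pair = qoe(v, s), (v, s)
    return best_pair

if __name__ == "__main__":
    # With 2.5 Mbps available, the placeholder utility picks (2.0, 0.4).
    print(allocate(2.5))
```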
In the current era of data explosion, data volumes have grown enormously, and digital image data constitute a large share of these data. As the demand for networked work and life continues to grow, cloud computing plays an increasingly important role. This paper studies optimization methods for image recognition models on cloud computing platforms. The parallelization and task scheduling of SCSRC, a remote-sensing image classification model based on spatial correlation regularization and sparse representation, are studied on a cloud computing platform. First, cloud detection, combined with the dynamic features of the edge overlap region, is implemented in cloud computing mode. For edge overlap region detection, the SCSRC method is implemented on a single machine and its runtime performance is analysed experimentally, which provides a baseline for parallelization research on the cloud computing platform. Finally, the speedup and expansion ratio of the SK-SCSRC algorithm are measured experimentally, and MR-SCSRC and SK-SCSRC are compared. The simulation results show that, compared with previous methods, edge overlap detection is more accurate and image fusion is better, which improves image recognition in the overlap region and demonstrates the performance improvement of the MR-SCSRC algorithm under scheduling. This method addresses the shortcomings of Hadoop’s existing scheduler and can be integrated into remote-sensing cloud computing systems in the future.
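For readers unfamiliar with the two scalability metrics mentioned above, the sketch below shows how speedup and expansion (scale-up) ratio are commonly computed from measured runtimes. The runtimes in the example are made-up placeholders, not measurements from the paper, and the functions are illustrative rather than the authors' evaluation code.

```python
# Illustrative computation of the two metrics reported for SK-SCSRC.
# All runtime values below are hypothetical placeholders.

def speedup(t_single_node, t_n_nodes):
    """Speedup: runtime on one node divided by runtime on n nodes
    for the same data size."""
    return t_single_node / t_n_nodes

def expansion_ratio(t_baseline, t_scaled):
    """Expansion (scale-up) ratio: runtime of the baseline workload on one
    node divided by the runtime when data size and node count grow by the
    same factor. Values close to 1.0 indicate good scalability."""
    return t_baseline / t_scaled

if __name__ == "__main__":
    print(speedup(1200.0, 340.0))        # ~3.5x on a hypothetical 4-node run
    print(expansion_ratio(300.0, 330.0)) # ~0.91 when data and nodes scale 4x
```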