The advanced driver assistance system (ADAS) is an important vehicle safety technology that can effectively reduce traffic accidents. The system perceives the surrounding environment through in-vehicle cameras, but these cameras are easily affected by severe weather conditions such as fog, rain, and snow: the quality of the acquired images is degraded, and the function of the ADAS is thus weakened. In response to this problem, we propose a comprehensive imaging model that can represent the features of fog, rain streaks, raindrops and snowflakes in an image. We then propose an algorithm called RASWNet that can remove all severe weather features from a degraded image. Built on a generative adversarial network, RASWNet combines the focus-capturing ability of a visual attention mechanism, the memory ability of a recurrent neural network and the feature extraction ability of dense blocks. We verify the network structure through several ablation studies and test it on various synthetic and real images. The results of these experiments show that our algorithm not only surpasses commonly used algorithms in clarity enhancement but is also suitable for all severe weather conditions. INDEX TERMS Generative adversarial network, remove all severe weather features, degraded image, RASWNet, visual attention mechanism.
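As a rough illustration of the architecture this abstract describes, the following PyTorch sketch combines the three named ingredients: a recurrent unit that refines an attention map over degraded regions, and a dense block that extracts features for restoration. The module names, layer sizes and number of recurrent steps are hypothetical assumptions, not the authors' design, and the adversarial discriminator that would supply the GAN loss is omitted.

# Hedged sketch (not the authors' code): one recurrent, attention-guided
# restoration pass built from the ingredients named in the abstract.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Two convolutions whose inputs are concatenated (dense connectivity)."""

    def __init__(self, channels=32, growth=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
        self.fuse = nn.Conv2d(channels + 2 * growth, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], dim=1)))
        return self.fuse(torch.cat([x, f1, f2], dim=1))


class RecurrentAttentionGenerator(nn.Module):
    """Refines an attention map over weather-degraded regions across a few
    recurrent steps, then restores the attended features with a dense block."""

    def __init__(self, channels=32, steps=3):
        super().__init__()
        self.steps = steps
        self.encode = nn.Conv2d(3, channels, 3, padding=1)
        # A plain convolutional recurrence stands in for the paper's recurrent unit.
        self.recur = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.attn = nn.Conv2d(channels, 1, 3, padding=1)
        self.dense = DenseBlock(channels)
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, degraded):
        feat = torch.relu(self.encode(degraded))
        hidden = torch.zeros_like(feat)
        for _ in range(self.steps):                      # recurrent refinement
            hidden = torch.tanh(self.recur(torch.cat([feat, hidden], dim=1)))
        attention = torch.sigmoid(self.attn(hidden))     # where the weather artefacts are
        return self.decode(self.dense(feat * attention)), attention


if __name__ == "__main__":
    restored, attention = RecurrentAttentionGenerator()(torch.rand(1, 3, 64, 64))
    print(restored.shape, attention.shape)  # (1, 3, 64, 64) and (1, 1, 64, 64)

In a full GAN setting, a discriminator (not shown) would compare the restored output with clean images to provide the adversarial training signal.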
The density-based spatial clustering of applications with noise (DBSCAN) algorithm is robust and is widely employed to cluster vehicle trajectories for vehicle movement pattern recognition. However, the distance or similarity between two trajectories can range from tens to hundreds of thousands, and there is no effective method for determining the values of the DBSCAN hyperparameters eps and MinPts. In addition, as trajectory datasets grow, clustering methods that directly analyse points and line segments incur large computational costs and time overhead. To address these two problems, we propose an effective trajectory dimensionality reduction method and a method for setting the initial values of the DBSCAN hyperparameters. The dimensionality reduction algorithm reduces trajectories of different lengths to the same dimensionality (the same number of feature points), and the retained points preserve the spatial and temporal information of the trajectories as much as possible. The hyperparameter initialization algorithm obtains effective initial values of eps and MinPts that facilitate subsequent tuning. Finally, we validate the proposed methods on two trajectory datasets collected from two real-world scenes, and the experimental results are promising.
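To illustrate how such a pipeline can be wired together, the sketch below resamples each trajectory to a fixed number of points by arc-length interpolation and seeds eps and MinPts from a k-nearest-neighbour distance heuristic. The interpolation scheme, the default MinPts, and the use of the median k-distance are assumptions made for illustration, not the exact procedures from the paper.

# Hedged sketch: fixed-length trajectory feature points plus a k-distance
# heuristic for the initial DBSCAN hyperparameters.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors


def resample_trajectory(points, n_keep=16):
    """Interpolate an (N, 2) trajectory to n_keep points spaced evenly by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_keep)
    return np.column_stack([np.interp(targets, s, points[:, d]) for d in range(2)])


def initial_dbscan_params(features, min_pts=4):
    """Take eps as the median distance to each sample's min_pts-th nearest neighbour."""
    nn = NearestNeighbors(n_neighbors=min_pts + 1).fit(features)  # +1: a point is its own neighbour
    kth_dist = nn.kneighbors(features)[0][:, -1]
    return float(np.median(kth_dist)), min_pts


# Toy usage: random-walk trajectories of different lengths become equal-length vectors.
rng = np.random.default_rng(0)
raw = [np.cumsum(rng.normal(size=(n, 2)), axis=0) for n in rng.integers(30, 150, size=12)]
features = np.stack([resample_trajectory(t).ravel() for t in raw])
eps0, minpts0 = initial_dbscan_params(features)
labels = DBSCAN(eps=eps0, min_samples=minpts0).fit_predict(features)
print(eps0, minpts0, labels)

Because every trajectory becomes a vector of the same length, the pairwise distances DBSCAN needs can be computed directly on the feature matrix instead of on raw points and line segments.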
AVS-M is China's recent mobile video coding standard. ARM cores are widely used in mobile applications because of their low power consumption. In this paper, a scheme for the real-time implementation of an AVS-M decoder on the 32-bit RISC processor ARM920T (S3C2440) is presented. The algorithmic, redundancy, structural and memory optimization methods used to achieve real-time AVS-M decoding are discussed in detail. The experimental results demonstrate the success of our optimization techniques and of the real-time implementation. The ADS, MCPS and simulation results show that the proposed AVS-M decoder can decode QVGA image sequences in real time with high image quality, low complexity and modest memory requirements. The AVS conformance test results confirm that the proposed decoder is fully compliant with the AVS standard. The proposed AVS-M decoder can be employed in many real-time applications in third-generation mobile communication.
Background modeling techniques are important for object detection and tracking in video surveillance. Traditional background subtraction approaches suffer from problems such as persistent dynamic backgrounds, rapid illumination changes, occlusions, and noise. In this paper, we address the problem of detecting and localizing moving objects in a video stream without prior knowledge of background statistics. Three major contributions are presented. First, introducing sequential Monte Carlo sampling techniques greatly reduces the computational complexity with little compromise in the expected accuracy. Second, robust salient motion is considered when resampling the feature points: points that do not move at a relatively constant velocity are removed, while those in consistent motion are emphasized. Finally, the proposed joint feature model enforces spatial consistency. Promising results demonstrate the potential of the proposed algorithm.
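A minimal numpy sketch of the resampling idea described in this abstract: tracked feature points are treated as particles, weighted by how consistent their velocity is with the dominant motion, and redrawn with standard systematic resampling so that erratically moving points are suppressed. The Gaussian weighting, the median reference velocity and all parameter values are illustrative assumptions rather than the authors' exact model.

# Hedged sketch: weight tracked feature points by motion consistency and apply
# systematic resampling, a standard step in sequential Monte Carlo filters.
import numpy as np


def motion_weights(velocities, sigma=1.0):
    """Score each point by how close its velocity is to the dominant (median) motion."""
    v_ref = np.median(velocities, axis=0)
    deviation = np.linalg.norm(velocities - v_ref, axis=1)
    w = np.exp(-0.5 * (deviation / sigma) ** 2)             # Gaussian consistency score
    return w / w.sum()


def systematic_resample(points, weights, rng):
    """Draw len(points) samples in proportion to weights (systematic resampling)."""
    n = len(points)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                     # guard against floating-point drift
    return points[np.searchsorted(cumulative, positions)]


# Toy usage: 160 coherently moving points and 40 erratic ones.
rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(200, 2))
velocities = np.vstack([np.tile([2.0, 0.0], (160, 1)) + rng.normal(0, 0.2, (160, 2)),
                        rng.normal(0, 3.0, (40, 2))])
resampled = systematic_resample(points, motion_weights(velocities), rng)
print(resampled.shape)                                       # (200, 2), dominated by coherent movers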