Video enhancement is an important computer vision task that aims to remove artifacts from lossy compressed video and to improve visual quality through photo-realistic restoration of the video content. Decades of research have produced a multitude of efficient compression algorithms, enabling a reduced memory footprint for transferred video content across a continuously growing network of video streaming services. In this work, we propose VETRAN, a low-latency real-time online Video Enhancement TRANsformer based on spatial and temporal attention mechanisms. We validate our method on recent NTIRE and AIM video enhancement challenge benchmarks, i.e., REDS/REDS4, LDV, and IntVID. We improve over the compared state-of-the-art methods both quantitatively and qualitatively, while maintaining a low inference time.
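The abstract above names spatial and temporal attention as the core building blocks. As a minimal illustrative sketch (not the authors' implementation), factorized spatio-temporal attention first lets tokens attend within each frame, then lets each spatial position attend across frames; all shapes and helper names below are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product self-attention: (n, d) queries over (m, d) keys.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

# Toy video: T frames, each with N spatial tokens of dimension D.
T, N, D = 4, 16, 8
rng = np.random.default_rng(0)
video = rng.standard_normal((T, N, D))

# Spatial attention: tokens attend within each frame independently.
spatial = np.stack([attention(f, f, f) for f in video])

# Temporal attention: each spatial position attends across the T frames.
per_pos = spatial.transpose(1, 0, 2)            # (N, T, D)
temporal = np.stack([attention(p, p, p) for p in per_pos])
out = temporal.transpose(1, 0, 2)               # back to (T, N, D)
print(out.shape)  # (4, 16, 8)
```

Factorizing attention this way keeps the cost at O(T·N²) + O(N·T²) rather than O((T·N)²) for joint attention, which matters for the low-latency setting the abstract targets.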
Multiple low-level vision tasks such as denoising, deblurring, and super-resolution start from RGB images and reduce the degradations further, improving quality. However, modeling degradations in the sRGB domain is complicated by the Image Signal Processor (ISP) transformations. Despite this known issue, very few methods in the literature work directly with sensor RAW images. In this work, we tackle image restoration directly in the RAW domain. We design a new realistic degradation pipeline for training deep blind RAW restoration models. Our pipeline considers realistic sensor noise, motion blur, camera shake, and other common degradations. Models trained with our pipeline and data from multiple sensors can successfully reduce noise and blur, and recover details in RAW images captured with different cameras. To the best of our knowledge, this is the most exhaustive analysis of RAW image restoration. Code is available at https://github.com/mv-lab/AISP
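To make the idea of a synthetic degradation pipeline concrete, here is a toy sketch, not the authors' pipeline: a linear RAW image is blurred (standing in for motion blur) and corrupted with signal-dependent noise (a Gaussian approximation of shot plus read noise). All function and parameter names are hypothetical.

```python
import numpy as np

def degrade_raw(raw, shot=0.01, read=0.001, blur_len=5, rng=None):
    """Toy blind-degradation sketch for a linear RAW image in [0, 1].

    Approximates shot noise with a signal-dependent Gaussian and
    motion blur with a horizontal averaging kernel.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Horizontal motion blur: average each row over blur_len pixels.
    kernel = np.ones(blur_len) / blur_len
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, raw)
    # Heteroscedastic noise: variance = shot * signal + read.
    sigma = np.sqrt(shot * blurred + read)
    noisy = blurred + rng.standard_normal(raw.shape) * sigma
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((16, 16), 0.5)
degraded = degrade_raw(clean, rng=np.random.default_rng(0))
```

Pairs of `clean` and `degraded` images generated this way can supervise a blind restoration model without requiring aligned real captures; a realistic pipeline would additionally model the Bayer mosaic, camera shake trajectories, and sensor-specific noise calibration.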
[Figure 1 panels: Style 1 "Night", Style 2 "Cyberpunk", Blend of Styles 1 & 2] Figure 1: NILUTs encode multiple 3D LUTs in a single representation with the ability to blend between "styles" implicitly.
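The blending described in the caption can be illustrated with a toy sketch, assuming each "style" is a color mapping and the blend is a convex combination of their outputs; the style functions and names below are invented for illustration and are not the NILUT model.

```python
import numpy as np

def style_night(rgb):
    # Toy "Night" style: darken and cool the colors.
    return np.clip(rgb * np.array([0.7, 0.75, 0.9]), 0.0, 1.0)

def style_cyberpunk(rgb):
    # Toy "Cyberpunk" style: lift shadows, push magenta/blue.
    return np.clip(rgb ** 0.8 * np.array([1.1, 0.8, 1.2]), 0.0, 1.0)

def blend_styles(rgb, weights):
    # Convex blend of the styles' outputs, acting as one implicit mapping.
    styles = [style_night, style_cyberpunk]
    out = sum(w * s(rgb) for w, s in zip(weights, styles))
    return np.clip(out, 0.0, 1.0)

pixel = np.array([0.5, 0.5, 0.5])
blended = blend_styles(pixel, weights=[0.5, 0.5])
```

In a neural implicit representation, the blend weights would condition a single network rather than mixing separate lookup tables, which is what lets one model encode and interpolate between many styles.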