In this paper, we address space-time video super-resolution, which aims to generate a high-resolution (HR) slow-motion video from a low-resolution (LR), low-frame-rate (LFR) video sequence. A naive approach is to decompose the problem into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial upscaling are intrinsically related in this problem, and two-stage approaches cannot fully exploit this natural property. Moreover, state-of-the-art VFI and VSR networks usually rely on a large frame-reconstruction module to obtain high-quality, photo-realistic video frames, which makes two-stage approaches large and relatively slow. To overcome these issues, we present a one-stage space-time video super-resolution framework that directly reconstructs an HR slow-motion video sequence from an input LR, LFR video. Instead of synthesizing the missing LR intermediate frames as VFI models do, we temporally interpolate the features of the missing LR frames with a feature temporal interpolation module that captures local temporal contexts. Extensive experiments on widely used benchmarks demonstrate that the proposed framework not only achieves better qualitative and quantitative performance on both clean and noisy LR frames but also runs several times faster than recent state-of-the-art two-stage networks. The source code is released at https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020.
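As a rough illustration of the feature-level interpolation idea (not the paper's actual module, which relies on deformable sampling), the following PyTorch sketch blends the features of two neighbouring LR frames with a learned, spatially varying mask to synthesize the features of the missing intermediate frame. The module name, channel count, and blending scheme are assumptions made for this example.

```python
import torch
import torch.nn as nn

class FeatureTemporalInterpolation(nn.Module):
    """Toy feature-level temporal interpolation (illustrative sketch).

    Given feature maps of two consecutive LR frames, predict the feature map
    of the missing middle frame by blending the neighbours with a learned,
    spatially varying weight map. This is a deliberate simplification of
    deformable-sampling-based interpolation.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        # Small conv net that looks at both neighbours and outputs a blending mask.
        self.mask_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_prev: torch.Tensor, feat_next: torch.Tensor) -> torch.Tensor:
        # feat_prev, feat_next: (B, C, H, W) features of frames t and t+1.
        mask = self.mask_net(torch.cat([feat_prev, feat_next], dim=1))
        return mask * feat_prev + (1.0 - mask) * feat_next


if __name__ == "__main__":
    interp = FeatureTemporalInterpolation(channels=64)
    f1, f3 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    f2 = interp(f1, f3)   # feature of the missing intermediate frame
    print(f2.shape)       # torch.Size([1, 64, 32, 32])
```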
A new approach to the design of computer-generated holograms makes optimal use of the available device resolution. An iterative search algorithm minimizes an error criterion by directly manipulating the binary hologram and observing the effect on the desired reconstruction. Several measures of error and efficiency useful in assessing the optimality of digital holograms are defined. Methods for designing digital holograms that are based on projections and error diffusion are presented as established techniques for comparison to direct binary search.
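To make the search procedure concrete, here is a minimal NumPy sketch of direct binary search for a binary Fourier hologram. It exploits the fact that flipping one hologram pixel changes the DFT by a single complex exponential, so each trial flip can be evaluated without recomputing a full FFT. The error measure, sweep order, and normalization are illustrative assumptions, not the paper's exact choices, and the brute-force sweep is far from efficient.

```python
import numpy as np

def dbs_hologram(target, iters=5, seed=0):
    """Toy direct binary search for a binary (0/1) Fourier hologram.

    `target` is the desired reconstruction magnitude, same size as the
    hologram. A flip is kept only if it lowers the squared error between
    the reconstruction magnitude and the target.
    """
    rng = np.random.default_rng(seed)
    N, M = target.shape
    holo = rng.integers(0, 2, size=(N, M)).astype(float)   # random starting hologram
    field = np.fft.fft2(holo)                               # current reconstruction field

    u = np.arange(N).reshape(-1, 1)
    v = np.arange(M).reshape(1, -1)

    def error(f):
        return np.sum((np.abs(f) / np.sqrt(N * M) - target) ** 2)

    best = error(field)
    for _ in range(iters):
        for m in range(N):
            for n in range(M):
                delta = 1.0 - 2.0 * holo[m, n]              # flip 0 -> 1 or 1 -> 0
                # Flipping one pixel adds delta * exp(-2*pi*i*(u*m/N + v*n/M)) to the DFT.
                trial = field + delta * np.exp(-2j * np.pi * (u * m / N + v * n / M))
                e = error(trial)
                if e < best:                                 # keep the flip only if it helps
                    holo[m, n] += delta
                    field = trial
                    best = e
    return holo, best
```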
In this work, we propose a new method for generating halftone images that are visually optimized for the display device. The algorithm searches for a binary array of pixel values that minimizes the difference between the perceived displayed continuous-tone image and the perceived displayed halftone image. The algorithm is based on the direct binary search (DBS) heuristic. Since the algorithm is iterative, it is computationally intensive. This limits the complexity of the visual model that can be used and also constrains the choice of metric used to measure distortion between two perceived images. In particular, we use a linear, shift-invariant model with a point spread function based on measurements of contrast sensitivity as a function of spatial frequency. The non-ideal spot shape rendered by the output device can also have a major effect on the displayed halftone image; this source of non-ideality is explicitly accounted for in our model of the display device. By recursively computing the change in perceived mean-squared error due to a change in the value of a binary pixel, we achieve a substantial reduction in computational complexity: the effect of a trial change may be evaluated with only table lookups and a few additions.

INTRODUCTION

Recently, there has been a great deal of interest in making models of the output device and the human visual system an intrinsic part of halftoning algorithms, and in exploiting new computational approaches, either as a step in the design of the algorithm or as a part of the algorithm itself. Of course, any halftoning algorithm may be viewed within the framework of an explicit or implicit model for the output device and the human visual system. In terms of these models, the distinction between earlier work, such as that reviewed in [1], and subsequent research activity is largely a matter of the level of detail captured by the models and how significant a role these models play in the halftoning algorithm. To put the work presented in this paper in its proper context, we review some of this literature, with particular emphasis on what is strongly model-based or computationally novel. For purposes of this review, it is helpful to organize the work in a hierarchy according to the computational complexity of the algorithms, exclusive of the computation required to design the algorithm.

At the lowest level, we find simulated annealing used by Sullivan et al. [2] and the genetic algorithm used by Chu [3] to design a family of minimally visible binary textures, each with a specified average absorptance, spanning the range from 0 to 1. These binary textures can form the basis for a halftoning algorithm: at each pixel location, we use the gray value of the continuous-tone image to index into the stack of binary patterns. However, the lack of continuity between the binary textures used for adjacent gray levels can result in a poor-quality image.
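A minimal sketch of the kind of search described above is given below, assuming a simple 2-D kernel as a stand-in for the eye's point spread function and using the standard DBS delta-error identity: toggling pixel (m, n) by a changes the perceived squared error by a^2 * cpp[0,0] + 2a * cpe[m,n], where cpp is the autocorrelation of the PSF and cpe is the correlation of the perceived error with the PSF. This is why each trial costs only a lookup and a few additions. The kernel, sweep order, and stopping rule are illustrative choices rather than the paper's exact parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def dbs_halftone(cont, psf, sweeps=5):
    """Toy direct-binary-search halftoning with an HVS filter.

    cont : continuous-tone image in [0, 1]
    psf  : small 2-D kernel modelling the eye's point spread function
    Perceived error is E = sum((psf * (halftone - cont))**2), with * denoting
    2-D convolution.
    """
    half = (cont > 0.5).astype(float)            # initial halftone
    err = half - cont
    cpp = fftconvolve(psf, psf[::-1, ::-1])      # PSF autocorrelation
    c = (psf.shape[0] - 1, psf.shape[1] - 1)     # index of cpp's zero-lag centre
    cpe = fftconvolve(err, cpp, mode="same")     # correlation of perceived error with PSF
    H, W = cont.shape

    for _ in range(sweeps):
        changed = False
        for m in range(H):
            for n in range(W):
                a = 1.0 - 2.0 * half[m, n]       # trial toggle: 0 -> 1 or 1 -> 0
                dE = a * a * cpp[c] + 2.0 * a * cpe[m, n]
                if dE < 0:                       # accept only if perceived error drops
                    half[m, n] += a
                    # Update cpe locally instead of refiltering the whole image.
                    for i in range(max(0, m - c[0]), min(H, m + c[0] + 1)):
                        for j in range(max(0, n - c[1]), min(W, n + c[1] + 1)):
                            cpe[i, j] += a * cpp[i - m + c[0], j - n + c[1]]
                    changed = True
        if not changed:                          # converged: no single toggle helps
            break
    return half
```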
In today's digital world, securing different forms of content is very important for protecting copyright and verifying authenticity. Many techniques have been developed to protect audio, video, digital documents, images, and programs (executable code); one example is watermarking of digital audio and images. We believe that a similar type of protection for printed documents is equally important. The goal of our work is to securely print and trace documents on low-cost consumer printers such as inkjet and electrophotographic (laser) printers. We accomplish this through intrinsic and extrinsic features obtained by modelling the printing process. In this paper we describe the use of image texture analysis to identify the printer used to print a document. In particular, we describe a set of features that can be used to provide forensic information about a document. We demonstrate our methods using 10 EP printers.
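As one plausible instantiation of such a texture-analysis pipeline (the specific feature set, quantization, and classifier below are assumptions made for illustration, not necessarily those used in the paper), the sketch computes a few gray-level co-occurrence features per scanned character patch and trains a nearest-neighbour classifier over printers.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch, levels=16, offset=(0, 1)):
    """Gray-level co-occurrence texture features for one 8-bit grayscale patch.

    Energy, contrast, homogeneity, and entropy over a single horizontal
    offset; 16-level quantization. All of these are illustrative choices.
    """
    q = np.clip((patch.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1.0)                 # accumulate co-occurrence counts
    glcm /= glcm.sum()

    i, j = np.indices(glcm.shape)
    energy = np.sum(glcm ** 2)
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    entropy = -np.sum(glcm[glcm > 0] * np.log2(glcm[glcm > 0]))
    return np.array([energy, contrast, homogeneity, entropy])

# Hypothetical training flow: character patches scanned from documents of
# known printers, classified with a simple nearest-neighbour rule.
def train_printer_classifier(patches, printer_labels):
    feats = np.stack([glcm_features(p) for p in patches])
    return KNeighborsClassifier(n_neighbors=5).fit(feats, printer_labels)
```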