Video halftoning is a technology used to render a video on a display device that can display only a limited number of levels. Conventional video halftoning algorithms produce blue-noise video halftones, which are prone to flickering, and dedicated deflickering processes are hence required to reduce it. These processes share a common approach in which pixels are artificially stabilized subject to some quality constraints. Because the extent of this stability is difficult to control, artifacts caused by over-stability, such as the dirty-window effect, subtle motion, and residual shadows, are easily found in the resulting video halftones. In this paper, we suggest producing green-noise video halftones instead of blue-noise video halftones. By doing so, we can effectively reduce flickering and, at the same time, eliminate over-stability artifacts at their root.
I. INTRODUCTION

Video halftoning is a technique used to convert a gray-level video into a bi-level video. It is widely used to render a gray-level video on a display device that supports only binary levels, such as electronic paper.

A natural realization of video halftoning is to halftone each frame of the original video sequence separately with a well-developed halftoning algorithm such as error diffusion. However, since output frames are made up of pixels of binary levels (0 and 1), when frames are halftoned independently it is very likely that the intensity value of a pixel toggles between 0 and 1 frequently. As a consequence, flickering occurs when the halftoned video is played at a medium frame rate.

Various techniques have been proposed to reduce the flickering artifacts [1]-[8]. The basic idea behind them is to reduce unnecessary toggling of pixel values along the time axis by exploiting the temporal correlation among frames. For example, a three-dimensional (3-D) error diffusion algorithm was proposed in [1] to minimize the flickering artifacts. In [2], Gotsman proposed an iterative video halftoning algorithm in which the halftoning output of a particular frame is initialized to the halftoning output of its previous frame and then rectified iteratively to minimize the temporal flicker between two successive frames. In [4], different 3-D error diffusion filters were designed to produce video halftones intended for playback at different frame rates. In [5], a direct binary search algorithm is applied to video halftoning. In [7], Sun proposed a video halftoning algorithm in which the quantization error of a pixel is diffused to its spatiotemporal neighbors by separable one-dimensional temporal and two-dimensional spatial error diffusions; motion-adaptive gain control is employed to enhance the temporal consistency of the visual patterns by minimizing the flickering artifacts.
In [8], a reference frame is derived from a group of frames and then used to control the temporal stability of the pixels in that group based on a flicker-sensitivity-based human visual model.

As mentioned earlier, all these algorithms share the sa...