Blur detection in a single image is challenging, especially when the blur is spatially varying, and developing discriminative blur features remains an open problem. In this paper, we propose a new kernel-specific feature vector that combines information from a blur kernel with information from an image patch. Specifically, each kernel-specific feature is the product of the variance of the filtered kernel and the variance of the filtered patch gradients. The feature originates from a blur-classification theorem, and its discriminative power can also be explained intuitively. To make the kernel-specific features useful in real applications, we build a pool of kernels consisting of motion-blur kernels, defocus-blur (out-of-focus) kernels, and their combinations. By extracting these features and feeding them to classifiers, the proposed algorithm outperforms state-of-the-art blur detection methods. Experimental results on public databases demonstrate the effectiveness of the proposed method.
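As an illustration only, a minimal sketch of the described feature might look as follows. The abstract does not fix the filter, the gradient operator, or the kernel pool, so the derivative direction, the `filt` argument, and both function names below are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def kernel_specific_feature(patch, kernel, filt):
    # Product of the variance of the filtered kernel and the variance
    # of the filtered patch gradients, per the abstract. Using the
    # horizontal derivative for the patch gradients is an assumed choice.
    grad = np.gradient(patch.astype(float), axis=1)
    return convolve(kernel, filt).var() * convolve(grad, filt).var()

def feature_vector(patch, kernel_pool, filt):
    # One feature per kernel in a pool of motion-blur, defocus-blur,
    # and combined kernels; the pool contents are up to the caller.
    return np.array([kernel_specific_feature(patch, k, filt) for k in kernel_pool])
```

The resulting vector would then be fed to an ordinary classifier, as the abstract describes; the specific classifier is not fixed here.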
Motion is an important cue for industrial inspection, video surveillance, and service machines to localize and recognize products and objects. Because blur co-occurs with motion, an efficient and robust motion-blur detection algorithm is desirable. However, existing algorithms are inefficient at detecting spatially varying motion blur. To address this problem, this paper presents a theorem by which motion blur can be efficiently detected and segmented. According to the theorem, the proposed algorithm requires only a simple filtering operation and a variance computation; a pixel can then be classified as blurred or unblurred by substituting its variance into the proposed simple formula and checking the sign of the result. Moreover, a geometric interpretation and two extensions of the algorithm are given. Importantly, based on the geometric interpretation of the indicator function, we develop a one-class classifier that is more effective than the indicator function while having a comparable computational cost. Experimental results on detecting motion-blurred cars, motorcycles, bicycles, bags, and persons demonstrate that the proposed algorithm is very efficient without loss of effectiveness.
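The abstract does not reproduce the theorem or its formula, so the following is only a schematic sketch of the pipeline it outlines: filter the image, compute a local variance, substitute it into an indicator, and classify by sign. The affine indicator and its coefficients `a` and `b`, the filter, and the window size are all placeholders, not the paper's values.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def motion_blur_mask(image, filt, a=-1.0, b=0.01, win=7):
    # Schematic pipeline: filter, local variance, indicator, sign test.
    # `a`, `b`, `filt`, and `win` stand in for quantities the paper
    # derives from its theorem; they are placeholders here.
    f = convolve(image.astype(float), filt)
    mean = uniform_filter(f, size=win)
    var = uniform_filter(f * f, size=win) - mean ** 2  # local variance
    return (a * var + b) > 0                           # sign of the indicator
```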
Human parsing aims at pixel-wise semantic understanding of human bodies. As human bodies are inherently hierarchically structured, how to model human structures is the central theme of this task. With this focus, we seek to simultaneously exploit the representational capacity of deep graph networks and the hierarchical structure of human bodies. In particular, we make the following two contributions. First, three kinds of part relations, i.e., decomposition, composition, and dependency, are, for the first time, completely and precisely described by three distinct relation networks. This stands in stark contrast to previous parsers, which focus on only a portion of these relations and adopt a type-agnostic relation-modeling strategy. More expressive relation information can be captured by explicitly constraining the parameters of each relation network to satisfy the specific characteristics of its relation. Second, previous parsers largely ignore the need for an approximation algorithm over the loopy human hierarchy; we instead address this with an iterative reasoning process, assimilating generic message-passing networks with their edge-typed, convolutional counterparts. With these efforts, our parser lays the foundation for more sophisticated and flexible reasoning over human relation patterns. Comprehensive experiments on five datasets demonstrate that our parser sets a new state-of-the-art on each.
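At a very high level, the edge-typed message passing the abstract describes can be sketched as below. The tiny linear "relation networks", the toy hierarchy, and the fixed iteration count are illustrative assumptions, not the parser's actual (convolutional) architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # node feature dimension (assumed)

# One distinct (here: linear) relation network per edge type, so that
# decomposition, composition, and dependency messages use different
# parameters, as the abstract emphasizes.
relation_nets = {t: rng.standard_normal((D, D)) * 0.1
                 for t in ("decomposition", "composition", "dependency")}

def message_passing_step(node_feats, edges):
    # One iteration of edge-typed message passing over the (loopy)
    # human hierarchy. `edges` is a list of (src, dst, type) triples.
    msgs = {v: np.zeros(D) for v in node_feats}
    for src, dst, etype in edges:
        msgs[dst] += relation_nets[etype] @ node_feats[src]
    # Residual update, repeated a fixed number of times to approximate
    # iterative reasoning over the loopy graph.
    return {v: node_feats[v] + np.tanh(msgs[v]) for v in node_feats}

# Toy hierarchy (assumed): a body node and two child part nodes.
nodes = {v: rng.standard_normal(D) for v in ("body", "upper", "lower")}
edges = [("body", "upper", "decomposition"),
         ("upper", "body", "composition"),
         ("upper", "lower", "dependency")]
for _ in range(3):  # fixed number of reasoning iterations (assumed)
    nodes = message_passing_step(nodes, edges)
```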