Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually rely on user interaction, such as trimaps and strokes, to constrain the matting problem, and cannot run on mobile phones in real time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging densely connected blocks and dilated convolution, a lightweight fully convolutional network is designed to predict a coarse binary mask for the portrait image. A feathering block, which is edge-preserving and matting-adaptive, is further developed to learn a guided filter and transform the binary mask into an alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices; it requires no interaction and achieves real-time matting at 15 fps. Experiments show that the proposed approach achieves results comparable with state-of-the-art matting solvers.
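To make the described pipeline concrete, here is a minimal PyTorch-style sketch: a lightweight, densely connected network with dilated convolutions predicts coarse foreground/background scores, and a feathering block then predicts per-pixel linear coefficients in the spirit of a learned guided filter to produce the alpha matte. All layer sizes, module names, and the exact feathering formula are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of a coarse-mask network plus feathering block.
# Channel counts and the feathering formula are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseDilatedBlock(nn.Module):
    """Densely connected 3x3 convs with increasing dilation rates."""
    def __init__(self, channels, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for d in dilations:
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=d, dilation=d),
                nn.BatchNorm2d(growth), nn.ReLU(inplace=True)))
            c += growth  # dense connectivity: inputs accumulate

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class FeatheringBlock(nn.Module):
    """Predicts per-pixel coefficients (a, b) so that
    alpha = sigmoid(a * coarse_fg + b): an edge-preserving refinement
    conditioned on the input image, akin to a learned guided filter."""
    def __init__(self):
        super().__init__()
        # input: RGB image (3 ch) + coarse fg/bg scores (2 ch)
        self.net = nn.Sequential(
            nn.Conv2d(5, 8, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(8, 2, 3, padding=1))

    def forward(self, image, coarse):
        a, b = self.net(torch.cat([image, coarse], 1)).chunk(2, dim=1)
        fg = coarse[:, :1]  # foreground score channel
        return torch.sigmoid(a * fg + b)

class FastMattingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.block = DenseDilatedBlock(16)
        self.head = nn.Conv2d(16 + 3 * 16, 2, 1)  # coarse fg/bg logits
        self.feather = FeatheringBlock()

    def forward(self, image):
        x = F.relu(self.stem(image))
        coarse = self.head(self.block(x))
        coarse = F.interpolate(coarse, size=image.shape[2:],
                               mode='bilinear', align_corners=False)
        return self.feather(image, coarse.softmax(dim=1))

alpha = FastMattingNet()(torch.randn(1, 3, 128, 128))  # (1, 1, 128, 128)
```

Keeping the coarse network small and refining only through a shallow feathering block is what would make such a design plausible at mobile frame rates: the expensive semantic reasoning happens at reduced resolution, while the full-resolution work is a handful of cheap convolutions.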
As a fine-grained segmentation task, human parsing still faces two challenges: inter-part indistinction and intra-part inconsistency, owing to the ambiguous definitions of and confusing relationships between similar human parts. To tackle these two problems, this paper proposes a novel Affinity-aware Compression and Expansion Network (ACENet), which mainly consists of two modules: a Local Compression Module (LCM) and a Global Expansion Module (GEM). Specifically, LCM compresses part-correlation information through structural skeleton points obtained from an extra skeleton branch; this decreases inter-part interference and strengthens structural relationships between ambiguous parts. Furthermore, GEM expands the semantic information of each part into a complete piece by incorporating spatial affinity with boundary guidance, which effectively enhances intra-part semantic consistency as well. ACENet achieves new state-of-the-art performance on the challenging LIP and Pascal-Person-Part datasets. In particular, it achieves 58.1% mean IoU on the LIP benchmark.
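The following PyTorch sketch illustrates one plausible reading of the two modules: LCM pools features at skeleton-point locations and broadcasts them back to strengthen inter-part structure, while GEM propagates each part's semantics through a spatial affinity map damped by a predicted boundary. The tensor shapes, the affinity formulation, and all names here are assumptions made for illustration, not the authors' released code.

```python
# Hypothetical sketch of the LCM/GEM idea; all details are assumptions.
import torch
import torch.nn as nn

class LocalCompressionModule(nn.Module):
    """Gathers features at predicted skeleton-point locations and
    broadcasts the compressed context back onto the feature grid."""
    def __init__(self, channels, num_points=16):
        super().__init__()
        self.point_head = nn.Conv2d(channels, num_points, 1)  # skeleton heatmaps
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat):
        b, c, h, w = feat.shape
        heat = self.point_head(feat).flatten(2).softmax(-1)   # (b, k, hw)
        points = heat @ feat.flatten(2).transpose(1, 2)       # (b, k, c)
        # broadcast compressed point features back onto the grid
        ctx = (heat.transpose(1, 2) @ points).transpose(1, 2).view(b, c, h, w)
        return self.fuse(torch.cat([feat, ctx], 1))

class GlobalExpansionModule(nn.Module):
    """Expands part semantics via spatial affinity; a predicted boundary
    map down-weights contributions from boundary pixels so that
    semantics do not leak across part borders."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 4, 1)
        self.key = nn.Conv2d(channels, channels // 4, 1)
        self.boundary_head = nn.Conv2d(channels, 1, 1)

    def forward(self, feat):
        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2).transpose(1, 2)       # (b, hw, c/4)
        k = self.key(feat).flatten(2)                         # (b, c/4, hw)
        affinity = (q @ k).softmax(-1)                        # (b, hw, hw)
        # suppress affinity to likely-boundary key positions
        edge = torch.sigmoid(self.boundary_head(feat)).flatten(2)  # (b, 1, hw)
        affinity = affinity * (1 - edge)
        v = feat.flatten(2).transpose(1, 2)                   # (b, hw, c)
        out = (affinity @ v).transpose(1, 2).view(b, c, h, w)
        return feat + out

feat = torch.randn(2, 64, 24, 24)
feat = LocalCompressionModule(64)(feat)
feat = GlobalExpansionModule(64)(feat)   # same shape, structure-aware
```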