ACM SIGGRAPH Asia 2009 Papers
DOI: 10.1145/1661412.1618464
Efficient affinity-based edit propagation using K-D tree

Xu et al.

Abstract: Figure 1: Affinity-based edit propagation methods allow one to change the appearance of an image or video (e.g., the color of the bird here) using only a few strokes, yet they consume a prohibitive amount of time and memory for large data (e.g., 48 minutes and 23 GB for this video containing 61M pixels). Our approximation scheme drastically reduces the cost of edit propagation methods (to 8 seconds and 22 MB in this example) by exploring adaptive clustering in the affinity space. Video courtesy of BBC Motion…
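The abstract describes approximating edit propagation by adaptively clustering pixels in a high-dimensional affinity space with a k-d tree, solving only for cluster representatives, and copying the result back to each cluster's pixels. The sketch below illustrates that idea in a simplified form; the function names, the Gaussian affinity, and all parameters (`max_width`, `sigma`) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def kd_clusters(points, idx, max_width):
    """Adaptive k-d subdivision of the affinity space: recursively split
    along the widest dimension until every cell is narrower than
    max_width. Returns a list of pixel-index arrays, one per leaf cell."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    axis = int(np.argmax(hi - lo))
    if hi[axis] - lo[axis] <= max_width or len(idx) <= 1:
        return [idx]  # cell is tight enough: one cluster
    median = np.median(points[:, axis])
    mask = points[:, axis] <= median
    if mask.all() or not mask.any():  # degenerate split
        return [idx]
    return (kd_clusters(points[mask], idx[mask], max_width)
            + kd_clusters(points[~mask], idx[~mask], max_width))

def propagate_edits(features, stroke_idx, stroke_vals, max_width=0.3, sigma=0.2):
    """Approximate affinity-based edit propagation: compute the edit at
    each cluster centroid from the user's stroked pixels (Gaussian
    affinity weights, an assumed kernel), then assign that edit to all
    pixels in the cluster instead of solving per pixel."""
    clusters = kd_clusters(features, np.arange(len(features)), max_width)
    edits = np.zeros(len(features))
    for c in clusters:
        centroid = features[c].mean(axis=0)
        w = np.exp(-np.sum((features[stroke_idx] - centroid) ** 2, axis=1)
                   / (2 * sigma ** 2))
        edits[c] = (w @ stroke_vals) / (w.sum() + 1e-12)
    return edits
```

Because the number of leaf cells depends on how pixels spread in affinity space rather than on pixel count, the per-cluster solve touches far fewer unknowns than a per-pixel one, which is the source of the time and memory savings the abstract reports.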

Cited by 57 publications (74 citation statements)
References 20 publications (11 reference statements)
“…(D + 1)N indices, where N is the number of pixels. Our method is also generalizable to video input, similarly to the approaches proposed by Levin et al. or Xu et al. [12, 38]. We leave the video-processing implementation to future work, but we expect to retain the same performance.…”

Section: Preprocessing
confidence: 86%
“…Further work includes speed-up methods, such as that of Xu et al. [38], who proposed an acceleration of the approach of [2] based on k-d-tree subdivision of the image, though they still utilize the dense solver. Li et al. [14] formulate the problem as Radial-Basis-Function kernel interpolation.…”

Section: Related Work
confidence: 99%
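The statement above contrasts the k-d-tree acceleration with formulating edit propagation as Radial-Basis-Function kernel interpolation. A minimal sketch of the RBF view, under assumed choices (a Gaussian kernel, a small ridge term for numerical stability; none of this is taken from [14]):

```python
import numpy as np

def rbf_propagate(features, stroke_feats, stroke_vals, sigma=0.2):
    """Edit propagation as RBF interpolation: solve a small linear
    system for kernel weights at the stroked samples only, then
    evaluate the radial-basis expansion at every pixel."""
    def gauss(a, b):
        # pairwise squared distances between rows of a and rows of b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K = gauss(stroke_feats, stroke_feats)
    # small ridge term keeps the system well-conditioned (an assumption)
    alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), stroke_vals)
    return gauss(features, stroke_feats) @ alpha
```

The system being solved is only as large as the number of stroked samples, which is why kernel-interpolation formulations avoid the dense per-pixel solver mentioned in the quote.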
“…5). This structure is the key difference from other methods targeting non-structured media, such as unstructured light fields [217], [218], video [209], [219], [220], [221], or image collections [222], [223], [224]. These are more complex structures and might require techniques such as depth estimation, optical flow, or feature matching to transfer edits between images.…”

Section: Beyond Structured Light Fields
confidence: 99%
“…For a given illumination, we try to infer the a and b colour channels of input images using convolutional neural networks [11][12][13]. In the rest of this paper, the CNN-based colorization scheme is introduced in Section 2. Experimental results are provided in Section 3.…”

Section: Introduction
confidence: 99%