IET 4th European Conference on Visual Media Production (CVMP 2007)
DOI: 10.1049/cp:20070035
Multi-viewpoint silhouette extraction with 3D context-aware error detection, correction, and shadow suppression

Cited by 6 publications (8 citation statements) | References 0 publications
“…we expect to see similar colours in all camera views at a certain point in space), it becomes possible to correct errors in the segmentation in one camera view using the information present in another. As argued by Nobuhara et al (21), however, merely relying on colour consistency between camera views may lead to errors. In the monocular case, background subtraction algorithms operate on the assumption that the colour of the background is different from the colour of the foreground.…”
Section: Overview
confidence: 98%
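The statement above rests on the standard monocular assumption: a pixel is foreground when its colour deviates from the background model. A minimal sketch of that per-pixel test (toy data and a hypothetical `threshold` parameter, not any cited paper's method) could look like:

```python
import numpy as np

def background_subtract(frame, background, threshold=30.0):
    """Label a pixel foreground when its colour distance to the
    background model exceeds a threshold (the monocular assumption)."""
    diff = np.linalg.norm(frame.astype(np.float64) - background.astype(np.float64),
                          axis=-1)  # per-pixel Euclidean colour distance
    return diff > threshold

# toy example: 2x2 RGB images with one clearly foreground pixel
bg = np.zeros((2, 2, 3), dtype=np.uint8)
frame = bg.copy()
frame[0, 0] = [200, 50, 50]  # colour differs strongly from the background
mask = background_subtract(frame, bg)
```

As the excerpt notes, this test fails exactly where foreground and background colours coincide, which is what the multi-view consistency constraint is meant to repair.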
“…Though the authors did not quantify this "significant effect", it is reasonable to assume that higher-quality silhouettes would produce higher-quality visual hulls, thus making silhouette accuracy a critical bottleneck in the reconstruction of a visual hull. When constructing a visual hull, a 3D point is labelled as part of the visual hull if and only if its projection lies within the silhouette on all the camera views; therefore, a view having errors in its silhouette could spoil the quality of the entire visual hull (21). Several authors who have applied the visual hull do not mention the silhouette extraction method used in their studies (22) (23), while others have used basic silhouette extraction methods (16)(24)(25) (26).…”
Section: Introduction
confidence: 99%
“…In addition, the GGF model needs to be trained in each environment. An object silhouette extraction method with error detection and correction using multiviewpoint images was proposed by Nobuhara (Nobuhara et al, 2007). In this approach, two constraints were introduced: "intersection," which assumes that the projection of the visual hull on every viewpoint was equal to the silhouette on each viewpoint; and "projection," which implies that projection of the visual hull should have an outline that matches with the apparent edges of the captured image on each viewpoint.…”
Section: Refinement Of Silhouette Extraction
confidence: 99%
“…In addition, the computational cost is not very large because the number of required iterations is quite small, as discussed in 4. Whereas (Nobuhara et al, 2007) updates the silhouette image one by one sequentially, which is therefore time consuming, the proposed method updated all the silhouette images in each iteration.…”
Section: Proposed Work In This Chapter
confidence: 99%
“…The main methods for segmentation are: edge detection/filtering, image subtraction [75], [140], color segmentation [95], blob segmentation [107], optical or motion flow [18], labelling via graph-cut [16], etc. Also in [102], they introduced 3D context awareness to extract the silhouette from multi-view images. The common higher-level representations that are derived from the segmented images will be in the form of: feature points, silhouette, bounding box, blobs, motion flow fields, texture and edges.…”
Section: Typical Vision-based Mocap Framework
confidence: 99%