2020
DOI: 10.1007/978-3-030-58621-8_1
MessyTable: Instance Association in Multiple Camera Views

Cited by 10 publications (6 citation statements) · References 29 publications
“…We additionally compare with (Appearance Only), or the Hungarian algorithm with thresholding on the appearance embedding distances. This approach outperformed other methods like [5] and [42] in [24].…”
Section: Input Views Proposed Sparse Planes
confidence: 81%
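The (Appearance Only) baseline quoted above pairs the Hungarian algorithm with a distance threshold on appearance embeddings. A minimal sketch of that idea follows; for clarity it uses brute-force optimal assignment (which yields the same minimum-cost matching as the Hungarian algorithm on these small inputs), and the matrix values and threshold are illustrative assumptions, not figures from the papers.

```python
from itertools import permutations

def associate(dist, thresh=0.5):
    # Optimal one-to-one matching over appearance-embedding distances
    # (brute force here; the cited works use the Hungarian algorithm,
    # which finds the same minimum-cost assignment), followed by
    # thresholding to reject pairs whose embeddings are too far apart.
    n = len(dist)
    best = min(permutations(range(n)),
               key=lambda p: sum(dist[i][p[i]] for i in range(n)))
    return [(i, j) for i, j in enumerate(best) if dist[i][j] <= thresh]

# dist[i][j]: embedding distance between instance i in view A
# and instance j in view B (toy values).
dist = [[0.10, 0.90, 0.80],
        [0.70, 0.20, 0.90],
        [0.90, 0.80, 0.95]]
print(associate(dist))  # -> [(0, 0), (1, 1)]; (2, 2) exceeds the threshold
```

Thresholding matters because the optimal assignment always matches every instance; the cutoff is what lets the method declare "no correspondence" for instances visible in only one view.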
“…We evaluate correspondence separately. We follow Cai et al [5] and use IPAA-X, or the fraction of image pairs with no less than X% of planes associated correctly. We use ground-truth plane boxes in this setting since otherwise this metric measures both plane detection and plane correspondence.…”
Section: Methods
confidence: 99%
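The IPAA-X metric described in the quote above reduces to a simple count: an image pair scores a hit if at least X% of its planes are associated correctly. A hedged sketch, where the function name and the toy per-pair accuracies are assumptions for illustration:

```python
def ipaa(per_pair_correct, x):
    """IPAA-X: fraction of image pairs in which at least x% of the
    planes are associated correctly (following Cai et al.'s metric).
    `per_pair_correct` holds, for each image pair, the fraction of
    planes matched correctly. (Hypothetical helper, toy inputs.)"""
    hits = sum(frac >= x / 100 for frac in per_pair_correct)
    return hits / len(per_pair_correct)

fractions = [1.0, 0.9, 0.75, 0.5]  # toy per-pair accuracies
print(ipaa(fractions, 80))  # -> 0.5 (2 of 4 pairs reach 80%)
print(ipaa(fractions, 100)) # -> 0.25 (only 1 pair is fully correct)
```

Evaluating with ground-truth plane boxes, as the quote notes, isolates correspondence quality: otherwise a low IPAA-X could reflect missed detections rather than wrong associations.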
“…• Camera Angle. Recent studies [4] have shown the enormous impact of camera angles on model performance, yet their effect in 3D human recovery is under-explored due to the unavailability of such a dataset. It is common to have datasets with fixed camera positions [18,22,40,56], where the only variation comes from the relative positions of the camera and the subjects.…”
Section: Diverse Data Generation
confidence: 99%
“…Compared with Protocol 1, we observe that training with fewer actions and testing on unseen actions degrade the precision significantly, especially for cross-evaluation on the whole-body category, which seems to have a large action distribution misalignment with the other two categories. Furthermore, deep learning models are sensitive to viewing angles [8,64]; we thus report results of cross-view evaluation (P3) in Table 9. When the model is only trained on one view (i.e., View 0), we observe a considerable domain gap across views: the errors increase as the deviation of the test view from the training view grows.…”
confidence: 99%