2019
DOI: 10.1007/978-3-030-20205-7_13

Material-Based Segmentation of Objects

Abstract:  Users may download and print one copy of any publication from the public portal for the purpose of private study or research.  You may not further distribute the material or use it for any profit-making activity or commercial gain  You may freely distribute the URL identifying the publication in the public portal If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Cited by 3 publications (2 citation statements)
References 24 publications
“…Synthetic data, and CGI images, are extensively used in training machine learning for computer vision [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. CGI data is created by combining human-made assets [35][36][37][38][39][40], simulation, and procedural generation rules.…”
Section: Synthetic Data and Domain Gap
confidence: 99%
“…This approach is limited by the cost of human labor and by the fact that humans perform poorly in segmenting scatter patterns (cracks, drops) as well as soft boundaries and gradual transitions between states [5]. The alternative approach generates synthetic data using CGI and simulation [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. This approach has no limitation on the amount of data it can generate or the precision of the annotations, but it is limited to procedural generation rules that cannot capture the vast complexity and patterns of the real world [23][24][25][26][27].…”
Section: Introduction
confidence: 99%