2022 14th International Conference on Machine Learning and Computing (ICMLC)
DOI: 10.1145/3529836.3529854
Faster R-CNN with Generative Adversarial Occlusion Network for Object Detection

Abstract: Performing object detection on partially occluded objects is a challenging task due to the amount of variation in location, scale, and ratio present in real-world occlusion. A typical solution to this problem is to provide a large enough dataset with ample occluded samples for feature learning. However, this is rather costly given the amount of time and effort involved in the data collection process. In addition, even with such a dataset, there is no guarantee that it covers all possible cases of common occlus…


Cited by 11 publications (27 citation statements). References 12 publications.
“…Maximum Mean Discrepancy (MMD) quantifies the similarity between two distributions by comparing all of their moments [11,17]. It can be efficiently implemented using a kernel trick.…”
Section: Condition-specific MMD Regularization (mentioning)
Confidence: 99%
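
As a rough illustration of the kernel-trick estimator mentioned in the statement above, the sketch below computes a (biased) empirical squared MMD between two samples using a Gaussian (RBF) kernel. The bandwidth `sigma`, the sample sizes, and the feature dimension are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b,
    # passed through a Gaussian kernel.
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of squared MMD via the kernel trick:
    # MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    k_xx = rbf_kernel(x, x, sigma)
    k_yy = rbf_kernel(y, y, sigma)
    k_xy = rbf_kernel(x, y, sigma)
    return k_xx.mean() - 2.0 * k_xy.mean() + k_yy.mean()

# Example: samples drawn from two slightly different distributions.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(256, 16))
y = rng.normal(0.5, 1.0, size=(256, 16))
print(mmd2(x, y, sigma=1.0))
```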
“…It estimates the distance between probability distributions of domains, encouraging low entropy and consistency on domain predictions [13]. Moment matching [14], [15] focuses on the moments of extracted features: it estimates one or more moments of the features and measures the discrepancy of feature distributions across domains to lessen the inconsistency between domains.…”
Section: A. Unsupervised Domain Adaptation (mentioning)
Confidence: 99%
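
The moment-matching idea quoted above can be sketched as a simple discrepancy between low-order feature moments across domains. Matching only the first two moments (mean and variance), and the synthetic feature arrays, are assumptions made purely for illustration; the cited works [14], [15] may match different or higher-order moments.

```python
import numpy as np

def moment_matching_loss(feat_src, feat_tgt):
    # Discrepancy between the first two moments (per-feature mean and
    # variance) of source-domain and target-domain feature distributions.
    mean_gap = np.sum((feat_src.mean(axis=0) - feat_tgt.mean(axis=0)) ** 2)
    var_gap = np.sum((feat_src.var(axis=0) - feat_tgt.var(axis=0)) ** 2)
    return mean_gap + var_gap

# Example: synthetic "features" from two domains with shifted statistics.
rng = np.random.default_rng(1)
feat_src = rng.normal(0.0, 1.0, size=(128, 32))   # source-domain features
feat_tgt = rng.normal(0.3, 1.2, size=(128, 32))   # target-domain features
print(moment_matching_loss(feat_src, feat_tgt))
```

Minimizing such a loss on the feature extractor is one way to lessen the inconsistency between domains that the quoted passage describes.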
“…Moreover, Li et al. (2015) considered $k(X, Y) = \sum_{t=1}^{T} k_{G,\sigma(t)}(X, Y)$ in (2) and then used it to train the generator. Independently of Li et al. (2015), Dziugaite et al. (2015) proposed the same method to train $G_w$, but based on using a Bayesian optimization sequential design to set a value for $\sigma$ and the number of neurons in each layer of the network $G_w$. Li et al. (2017) proposed a learning technique by optimizing arg min…”
Section: Previous Work (mentioning)
Confidence: 99%
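
A minimal sketch of the sum-of-Gaussian-kernels construction quoted above, assuming a fixed bandwidth set and plain NumPy arrays standing in for real and generated samples; the bandwidths in `sigmas` and the sample shapes are illustrative, not taken from Li et al. (2015).

```python
import numpy as np

def rbf_kernel(a, b, sigma):
    sq = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-sq / (2.0 * sigma**2))

def mixture_kernel(a, b, sigmas=(1.0, 2.0, 4.0, 8.0, 16.0)):
    # k(X, Y) = sum_{t=1}^{T} k_{G, sigma(t)}(X, Y): a sum of Gaussian
    # kernels over a fixed set of bandwidths, as in the quoted construction.
    return sum(rbf_kernel(a, b, s) for s in sigmas)

def mmd2_mixture(real, fake, sigmas=(1.0, 2.0, 4.0, 8.0, 16.0)):
    # Squared MMD under the mixture kernel; a generator could be trained
    # by minimizing this quantity with respect to the fake samples.
    return (mixture_kernel(real, real, sigmas).mean()
            - 2.0 * mixture_kernel(real, fake, sigmas).mean()
            + mixture_kernel(fake, fake, sigmas).mean())

rng = np.random.default_rng(2)
real = rng.normal(0.0, 1.0, size=(128, 8))
fake = rng.normal(0.5, 1.0, size=(128, 8))
print(mmd2_mixture(real, fake))
```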
“…Previous work in applying MMD as a training criterion in GANs has only looked at the MMD as a test statistic in a frequentist two-sample test (Li et al., 2015; Dziugaite et al., 2015; Li et al., 2017; Bińkowski et al., 2018; Briol et al., 2019). However, these procedures are still liable to the drawbacks of frequentist testing.…”
Section: Previous Work (mentioning)
Confidence: 99%
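
To make the "MMD as a test statistic in a frequentist two-sample test" idea concrete, here is a generic permutation-test sketch. It is not the specific procedure of any of the cited papers; the statistic is passed in as a parameter (for example, the mmd2 estimator sketched earlier), and the permutation count and seed are arbitrary choices for illustration.

```python
import numpy as np

def permutation_two_sample_test(x, y, statistic, n_permutations=200, seed=0):
    # Frequentist two-sample test: compare the observed statistic against
    # a null distribution built by randomly re-assigning pooled samples.
    rng = np.random.default_rng(seed)
    observed = statistic(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_permutations):
        perm = rng.permutation(len(pooled))
        x_p, y_p = pooled[perm[:n]], pooled[perm[n:]]
        if statistic(x_p, y_p) >= observed:
            count += 1
    # Permutation estimate of the p-value.
    return (count + 1) / (n_permutations + 1)

# Usage (assumes the mmd2 function from the earlier sketch is in scope):
# p_value = permutation_two_sample_test(x, y, statistic=mmd2)
```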