2018
DOI: 10.1109/tcsvt.2017.2711659
WeSamBE: A Weight-Sample-Based Method for Background Subtraction

Cited by 142 publications (97 citation statements)
References 38 publications
“…The proposed algorithm performed worse than PAWCS [10] but better than the other algorithms. In terms of precision and FPR it was outperformed by PAWCS [10] and WeSamBE [15]; in terms of FNR, however, it outperformed both.…”
Section: 1) Evaluation for CDnet Dataset
Confidence: 84%
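The precision, FPR, and FNR figures compared in the statement above follow from pixel-level confusion counts (TP, FP, TN, FN). A minimal sketch of those standard definitions, with guards against empty denominators:

```python
# Standard pixel-level metrics used in change-detection benchmarks such as
# CDnet. TP/FP/TN/FN are counts of foreground/background classifications
# against the ground-truth mask.

def precision(tp, fp):
    """Fraction of predicted-foreground pixels that are truly foreground."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def false_positive_rate(fp, tn):
    """Fraction of true-background pixels wrongly labelled foreground."""
    return fp / (fp + tn) if (fp + tn) else 0.0

def false_negative_rate(fn, tp):
    """Fraction of true-foreground pixels wrongly labelled background."""
    return fn / (fn + tp) if (fn + tp) else 0.0
```

Lower FPR and FNR are better; a method can win on FNR (missing less foreground) while losing on precision and FPR, which is exactly the trade-off the statement describes.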
“…The Feedback Process is a module that calculates the R(x) and T(x) parameters used in the Background samples and BG/FG classification modules. Equations (14) and (15) give the formulas used to compute these parameters. The update scheme for R(x) and T(x) is the same as that of SuBSENSE [7].…”
Section: 4) Feedback Process
Confidence: 99%
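The feedback described above can be sketched at the single-pixel level. Equations (14) and (15) are not reproduced in this excerpt, so the rules and constants below are assumptions in the spirit of SuBSENSE's R(x)/T(x) feedback (distance threshold and update-rate controllers driven by a minimal-distance average `d_min` and an instability accumulator `v`), not the cited paper's exact formulas:

```python
# Hedged sketch of a SuBSENSE-style per-pixel feedback update.
# R: distance threshold used for BG/FG classification.
# T: update-rate parameter (larger T -> slower background absorption).
# d_min: running average of minimal sample distances; v: instability level.
# All constants and exact rules here are illustrative assumptions.

def feedback_update(R, T, d_min, v, is_foreground,
                    R_lower=1.0, T_lower=2.0, T_upper=256.0):
    # Distance threshold: expand in dynamic regions, relax otherwise.
    if R < (1.0 + 2.0 * d_min) ** 2:
        R += v
    else:
        R = max(R_lower, R - 1.0 / v)
    # Update rate: slow absorption on foreground, speed it up on background.
    eps = 1e-6
    if is_foreground:
        T += 1.0 / (v * max(d_min, eps))
    else:
        T -= v / max(d_min, eps)
    T = min(max(T, T_lower), T_upper)
    return R, T
```

The intuition matches the excerpt: both parameters adapt continuously per pixel, so noisy regions get a looser threshold and a slower update rate without any global retuning.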
“…While computation time is a critical factor for justifying background estimation, most of the literature focuses on incrementally improving accuracy with new algorithms at any cost. Popular pixel-level models such as the Gaussian Mixture Model (GMM) [1] have been around for decades, but recent approaches have included applying adaptive weights and parameters [2,3], deep convolutional neural networks [4][5][6], and ensemble models with stochastic model optimisation [7], all of which significantly increase computation time while only marginally improving accuracy, failing to address the challenges of real-world implementation.…”
Section: Introduction
Confidence: 99%
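The Gaussian Mixture Model cited above as [1] can be sketched at the single-pixel level. The following is a minimal Stauffer-Grimson-style sketch with illustrative parameter values (learning rate, matching threshold, initial variance), not a reproduction of any particular implementation:

```python
# Minimal single-pixel sketch of a Gaussian Mixture background model.
# Each mode is [weight, mean, variance]; a pixel is background if it
# matches a sufficiently weighted mode. Parameters are illustrative.

import math

class PixelGMM:
    def __init__(self, k=3, alpha=0.01, match_sigmas=2.5):
        self.alpha = alpha                    # learning rate
        self.match_sigmas = match_sigmas      # match if within this many sigmas
        self.modes = [[1.0 / k, 128.0 * (i + 1) / k, 225.0] for i in range(k)]

    def update(self, x):
        """Update the model with intensity x; return True if background."""
        matched = None
        for m in self.modes:
            _, mu, var = m
            if abs(x - mu) <= self.match_sigmas * math.sqrt(var):
                matched = m
                break
        if matched is None:
            # Replace the lowest-weight mode with a new one centred on x.
            self.modes.sort(key=lambda m: m[0])
            self.modes[0] = [self.alpha, float(x), 225.0]
        else:
            rho = self.alpha
            matched[1] += rho * (x - matched[1])
            matched[2] += rho * ((x - matched[1]) ** 2 - matched[2])
        # Decay all weights, boost the matched mode, renormalise.
        for m in self.modes:
            m[0] = (1 - self.alpha) * m[0] + (self.alpha if m is matched else 0.0)
        total = sum(m[0] for m in self.modes)
        for m in self.modes:
            m[0] /= total
        return matched is not None and matched[0] > 0.1
```

Even this toy version makes the excerpt's cost argument concrete: every pixel carries k modes updated every frame, which is the per-pixel budget that heavier successors (adaptive-weight, deep, and ensemble models) multiply further.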
“…In addition, a recent history model (RHM) is generated by keeping the last five pixel intensities as samples. The RHM is computed as in Eq (5).…”
Confidence: 99%
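The sample bookkeeping for the RHM described above can be sketched with a fixed-length buffer. Eq (5) is not reproduced in this excerpt, so the mean-based summary below is an illustrative stand-in for it, not the paper's formula; only the "keep the last five intensities" behaviour is taken from the text:

```python
# Hedged sketch of recent-history-model bookkeeping: retain the last
# n_samples observed pixel intensities, dropping the oldest automatically.

from collections import deque

class RecentHistoryModel:
    def __init__(self, n_samples=5):
        self.samples = deque(maxlen=n_samples)  # oldest sample is evicted

    def observe(self, intensity):
        self.samples.append(intensity)

    def summary(self):
        # Illustrative aggregate over the retained samples (not Eq (5)).
        return sum(self.samples) / len(self.samples)
```

A `deque` with `maxlen` gives O(1) insertion and automatic eviction, which matches the fixed five-sample window the statement describes.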