2016 Third International Conference on Artificial Intelligence and Pattern Recognition (AIPR)
DOI: 10.1109/icaipr.2016.7585207
Motion background modeling based on context-encoder

Cited by 12 publications (11 citation statements)
References 19 publications
“…Deep learning models have a remarkable ability in background generation to cover the various dynamics of outdoor environments with a series of layers. A context-encoder [31] was proven feasible for modelling the background of a motion-based video by learning the visual features of the scene context to construct the overall scene of a video. Xu et al. [32] used an adaptive Restricted Boltzmann Machine, which performs approximate learning with the aim of capturing the temporal correlation between adjacent video frames to construct the background.…”
Section: Methods
confidence: 99%
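The statement above notes that background construction exploits temporal correlation between adjacent frames. As a minimal sketch of that idea (a classical running-average estimator, not the adaptive RBM of [32] nor the context-encoder of [31]):

```python
import numpy as np

def running_average_background(frames, alpha=0.05):
    """Estimate a background as B_t = (1 - alpha) * B_{t-1} + alpha * frame_t.

    This leans on the temporal correlation between adjacent frames:
    pixels that stay stable dominate the estimate, while transient
    foreground objects are gradually averaged away.
    """
    bg = frames[0].astype(np.float64)
    for f in frames[1:]:
        bg = (1.0 - alpha) * bg + alpha * f.astype(np.float64)
    return bg
```

With a small `alpha`, a briefly occluding object perturbs the estimate only slightly before the subsequent frames pull it back toward the static scene.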
“…It allows information to be propagated within the activations of each feature map. The experiments provided by Qu et al. [151] are limited but convincing.…”
Section: Deep Auto Encoder Network (DAE)
confidence: 96%
“…Qu et al. [151] employed a context-encoder network for a motion-based background generation method, removing the moving foreground objects and learning the features of the scene. After the foreground is removed, a context-encoder is also used to predict the missing pixels of the empty region and to generate a background model for each frame.…”
Section: Deep Auto Encoder Network (DAE)
confidence: 99%
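The pipeline described above removes foreground and then fills the resulting empty region. A simplified classical stand-in for the learned inpainting step (a per-pixel temporal median over frames where the pixel is not foreground, not the actual context-encoder network) can be sketched as:

```python
import numpy as np

def inpaint_background(frames, fg_masks):
    """Fill foreground-masked pixels from the temporal median of frames
    in which each pixel is *not* covered by a moving object.

    frames:   array of shape (T, H, W)
    fg_masks: boolean array of shape (T, H, W); True marks foreground.
    """
    frames = np.asarray(frames, dtype=np.float64)
    masks = np.asarray(fg_masks, dtype=bool)
    # Exclude foreground pixels from the statistic.
    masked = np.where(masks, np.nan, frames)
    bg = np.nanmedian(masked, axis=0)
    # Pixels that are foreground in every frame have no valid sample;
    # fall back to the plain median over all frames there.
    fallback = np.median(frames, axis=0)
    return np.where(np.isnan(bg), fallback, bg)
```

Unlike this per-pixel statistic, the learned context-encoder can hallucinate plausible structure in the empty region from the surrounding scene context, which is what makes it attractive for background generation.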