2018
DOI: 10.1016/j.patcog.2017.10.030

Convolutional neural network on three orthogonal planes for dynamic texture classification

Abstract: Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit certain stationarity properties in time, such as smoke, vegetation and fire. The analysis of DTs is important for recognition, segmentation, synthesis and retrieval in a range of applications including surveillance, medical imaging and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks, including image and video recognition and segmentation. I…
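The central idea named in the title, analysing a video volume along three orthogonal planes, can be illustrated with a short sketch. The snippet below is an illustrative assumption and not the authors' DT-CNN implementation: it merely slices a grayscale clip into its xy, xt and yt planes, the three views that plane-wise 2D networks would then process.

```python
# Minimal sketch (assumed, not the paper's exact pipeline): extract the three
# orthogonal planes of a video volume so each can be fed to an ordinary 2D CNN.
import numpy as np

def orthogonal_planes(video: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Slice a grayscale video of shape (T, H, W) through its centre along
    each axis, yielding one spatial (xy) and two temporal (xt, yt) planes."""
    t, h, w = video.shape
    xy = video[t // 2]          # spatial appearance: (H, W)
    xt = video[:, h // 2, :]    # horizontal motion over time: (T, W)
    yt = video[:, :, w // 2]    # vertical motion over time: (T, H)
    return xy, xt, yt

# Example: a random 50-frame, 128x128 clip
clip = np.random.rand(50, 128, 128).astype(np.float32)
for name, plane in zip(("xy", "xt", "yt"), orthogonal_planes(clip)):
    print(name, plane.shape)
```

In the TOP family of descriptors cited below (LBP-TOP, MBSIF-TOP, PCANet-TOP), features are typically computed on each plane separately and then combined, and a plane-wise CNN can be fused in the same spirit.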

Cited by 64 publications (74 citation statements). References 66 publications.
“…50-class: Our results (99.5% and 99%) are comparable to those of methods using deep learning techniques, i.e. PCANet-TOP [10] and DT-CNN [9]. Although our descriptor, at 12,760 bins, is larger than the 7-scale MBSIF-TOP [4] (5,376 bins) while reaching the same accuracy (99.5%), our method is more efficient on the other DT datasets (DynTex, DynTex++), where MBSIF-TOP [4] needs different multi-scale settings to achieve its results.…”
Section: Results (supporting, confidence: 76%)
“…4(c)) for encoding, although we reduced the tool's min-distance parameter to its minimum value to extract more trajectories (see 4.1 for more detail). Among LBP-based approaches, MEWLSP [18] reports the highest rate in this scheme, 98.48%, even higher than the 98.18% of DT-CNN [9], which uses AlexNet to learn patterns. However, it does not outperform ours on the UCLA dataset and has not been tested on the other challenging DynTex variants (i.e.…”
Section: Results (mentioning, confidence: 76%)