2018
DOI: 10.1016/j.cviu.2018.07.003

CuDi3D: Curvilinear displacement based approach for online 3D action detection

Said Yacine Boulahia, Eric Anquetil, Franck Multon, Richard Kulpa. CuDi3D: Curvilinear displacement based approach for online 3D action detection. Computer Vision and Image Understanding, Elsevier, 2018, 174, pp. 57-69.

Abstract: Being able to interactively detect and recognize 3D actions based on skeleton data, in unsegmented streams, has become an important computer vision topic. It raises three scientific problems in relation with variability. The first one is the temporal variability that…

Cited by 13 publications (6 citation statements)
References: 38 publications
“…The discovery of the semantic segment boundaries can be based on prior knowledge that exploits pre-learned motion characteristics from known training data [24]- [26]. No-prior-knowledge solutions are based on significant changes in intrinsic dimensionality [27], discovery of repeating patterns [28], or significant accumulation of specific feature characteristics [29]. The category-blind semantic segmentation typically combines unsupervised feature learning with data mining to learn frequent motion patterns [30]- [32].…”
Section: Semantic Segmentation
confidence: 99%
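For intuition, a minimal sketch of the last of these ideas, proposing segment boundaries once the accumulation of frame-to-frame feature change passes a threshold, might look as follows; the feature representation, the change measure, and the threshold value are illustrative assumptions rather than the procedure of any cited method.

```python
# Minimal sketch (assumptions: skeleton frames as fixed-length NumPy feature
# vectors; the change signal and the threshold are illustrative, not taken
# from any cited method).
import numpy as np

def segment_boundaries(frames, threshold=5.0):
    """Propose segment boundaries where accumulated frame-to-frame feature
    change exceeds a threshold, then reset the accumulator."""
    boundaries = []
    accumulated = 0.0
    for t in range(1, len(frames)):
        accumulated += np.linalg.norm(frames[t] - frames[t - 1])
        if accumulated >= threshold:
            boundaries.append(t)
            accumulated = 0.0
    return boundaries

# Example: 200 frames of a 45-D skeleton feature vector (e.g., 15 joints x 3D).
stream = np.cumsum(np.random.randn(200, 45) * 0.05, axis=0)
print(segment_boundaries(stream, threshold=3.0))
```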
“…Segment-level detectors model the temporal context by partitioning the stream into overlapping segments mechanically obtained by a sliding window principle [25], [69], [84], or into disjoint semantic segments [26], [29], [83]. The segments are then directly classified (e.g., using Naive Bayes [83]) or matched against the pre-processed actions or templates using various distance functions, such as the Dynamic Time Warping in [25], Euclidean distance in [20], or fusion of linear classifiers in [29]. The event is finally detected if the distance satisfies some predefined threshold.…”
Section: B. Stream Filtering
confidence: 99%
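A hedged sketch of such a segment-level, sliding-window detector is given below; the window size, stride, plain Euclidean matching, and threshold are illustrative stand-ins for the various distance functions and settings mentioned in the statement, not the exact configuration of the cited detectors.

```python
# Minimal sketch of a segment-level, sliding-window detector (window size,
# stride, Euclidean matching, and threshold are illustrative choices).
import numpy as np

def detect_actions(stream, templates, window=30, stride=5, threshold=4.0):
    """Slide a fixed-size window over the stream and match each segment
    against per-action templates; report a detection when the distance
    falls below the threshold."""
    detections = []
    for start in range(0, len(stream) - window + 1, stride):
        segment = stream[start:start + window]
        for label, template in templates.items():
            dist = np.linalg.norm(segment - template)  # Euclidean matching
            if dist < threshold:
                detections.append((start, start + window, label, dist))
    return detections

# Usage: templates are pre-segmented training actions resampled to the window length.
stream = np.random.randn(300, 45)
templates = {"wave": np.random.randn(30, 45), "kick": np.random.randn(30, 45)}
print(detect_actions(stream, templates))
```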
“…Most recognition approaches classify only the short motions that correspond to a single action. Only few of them [3,7,33,35,36,41] can detect and recognize actions within a long unsegmented motion. Such semantic segmentation is more difficult as the beginnings and endings of actions are unknown and have to be determined.…”
Section: Semantic Segmentation
confidence: 99%
“…In order to face those challenges, many researchers have exploited the availability of 3D skeletons provided by RGB-D sensors [13,7,4,14]. This high-level representation has the advantage of being compact, largely discriminative and capturing both lateral and radial motion.…”
Section: Introduction
confidence: 99%
“…where l denotes one of the M actions of interest {a_1, ..., a_M} or a background activity a_0, and G is the function which labels the frame R_t. In general, there are two categories of approaches for finding G: single-stage [13,14,4] and multi-stage [19,21]. Single-stage approaches are usually able to operate in an online manner, whereas multi-stage ones separate the detection from the recognition step in order to generate mostly noise-free action segments.…”
Section: Introduction
confidence: 99%
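As a rough illustration of the single-stage view, the following sketch wraps a generic classifier into a per-frame labeling function G; the choice of classifier, the feature dimensionality, and the encoding of the background activity a_0 as label 0 are assumptions made for the example only.

```python
# Minimal sketch of the per-frame labeling function G for single-stage online
# detection (assumptions: a generic scikit-learn classifier stands in for the
# actual model; label 0 plays the role of the background activity a_0).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class FrameLabeler:
    """G: maps each incoming frame R_t to a label l in {a_0, a_1, ..., a_M}."""

    def __init__(self, model=None):
        self.model = model or KNeighborsClassifier(n_neighbors=5)

    def fit(self, frames, labels):
        # frames: (N, D) per-frame features; labels: (N,) with 0 = background.
        self.model.fit(frames, labels)
        return self

    def label(self, frame):
        # Online use: one frame in, one label out, no look-ahead.
        return int(self.model.predict(frame.reshape(1, -1))[0])

# Usage on synthetic data: 2 actions plus background, 45-D frame features.
X = np.random.randn(600, 45)
y = np.random.randint(0, 3, size=600)
g = FrameLabeler().fit(X, y)
print(g.label(X[0]))
```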