2020
DOI: 10.1109/access.2020.2987870
Fast Temporal Video Segmentation Based on Krawtchouk-Tchebichef Moments

Abstract: With the continuous growth of multimedia data, real-world video sharing websites are becoming huge in repository size, particularly their video databases. This growth calls for superior video processing techniques, because video contains a great deal of useful information. Temporal video segmentation (TVS) is considered an essential stage in content-based video indexing and retrieval systems. TVS aims to detect the boundaries between successive video shots. TVS algorithm design is still challenging b…

Cited by 43 publications (36 citation statements)
References 56 publications
“…Andriyenko et al [25] formulate MOT as a continuous energy function minimization problem and find a strong local minimum by the conjugate gradient method to approximate the global optimal correlation solution. Recently, with the development of object detection and further research on visual object appearance, more attention has been paid to mining strong appearance characteristics which can be obtained by extracting features of objects [45,46,47] and establishing robust appearance similarity measurement in online MOT. Deep learning has been used as the appearance model of MOT [26,27], and compared with previous learning methods [28], it demonstrates better performance.…”
Section: Related Work
confidence: 99%
“…In application, feature detection and extraction can usually be divided into two directions: processing in the compressed video domain and in the uncompressed video domain. Uncompressed-domain methods refer to algorithms based on visual features, such as histograms [4][9][10][11][12], pixels [13][14][15][16], edge shape [17], motion [18], and orthogonal polynomials [19][20][21][22]. Compressed-domain methods refer to algorithms based on compression coding, such as entropy coding including the discrete cosine transform (DCT) and discrete Fourier transform (DFT) [12], macroblock coding [23], and motion vector coding [24].…”
Section: Related Work
confidence: 99%
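The histogram-based family of uncompressed-domain methods cited above compares the intensity distributions of consecutive frames and declares a cut where the distance spikes. A minimal sketch of that idea (the `bins` and `threshold` values here are hypothetical tuning parameters for illustration, not taken from any of the cited papers):

```python
import numpy as np

def histogram_difference_sbd(frames, bins=64, threshold=0.5):
    """Detect abrupt shot boundaries via normalized histogram differences.

    frames: iterable of 2-D grayscale arrays with values in [0, 255].
    Returns indices i where a cut is declared between frame i-1 and frame i.
    """
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / max(h.sum(), 1))  # normalize to a distribution
    cuts = []
    for i in range(1, len(hists)):
        # L1 distance between consecutive frame histograms, in [0, 2]
        d = np.abs(hists[i] - hists[i - 1]).sum()
        if d > threshold:
            cuts.append(i)
    return cuts
```

Real detectors typically add an adaptive threshold and a separate test for gradual transitions, which a single fixed threshold cannot capture.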
“…Dhiman et al [12] implement the detection of abrupt and gradual shot boundary according to DCT feature matching and histogram feature matching respectively. Abdulhussain et al [22] realize the video segmentation and shot boundary detection by calculating the similarity of orthogonal polynomial features.…”
Section: A Shot Boundary Detection Based On Distance Similarity
confidence: 99%
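The orthogonal-polynomial approach attributed to Abdulhussain et al. [22] projects each frame onto a low-order orthogonal basis and compares the resulting moment vectors between consecutive frames. A rough sketch of the pipeline, substituting an orthonormal DCT-II basis for the paper's Krawtchouk-Tchebichef polynomials (an assumption for illustration only; the cosine-similarity threshold is likewise hypothetical):

```python
import numpy as np

def dct_basis(n, order):
    # Orthonormal DCT-II basis, used here as a stand-in for the
    # Krawtchouk-Tchebichef polynomials of the cited paper (assumption).
    x = (np.arange(n) + 0.5) * np.pi / n
    B = np.cos(np.outer(np.arange(order), x))
    B[0] *= 1 / np.sqrt(n)
    B[1:] *= np.sqrt(2 / n)
    return B  # shape (order, n); rows are orthonormal

def moment_features(frame, order=8):
    """Project a 2-D frame onto a low-order orthogonal basis."""
    h, w = frame.shape
    By, Bx = dct_basis(h, order), dct_basis(w, order)
    return (By @ frame @ Bx.T).ravel()  # order*order moment vector

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0

def detect_boundaries(frames, sim_threshold=0.9):
    # Declare a shot boundary wherever consecutive moment vectors diverge.
    feats = [moment_features(f) for f in frames]
    return [i for i in range(1, len(feats))
            if cosine_similarity(feats[i - 1], feats[i]) < sim_threshold]
```

Keeping only low-order moments gives a compact frame signature, which is what makes moment-based methods fast relative to dense pixel comparison.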
“…Nowadays, many applications including self-driving vehicles [2], surveillance systems [3], and crowd analysis [4] require various video processing technologies such as person re-identification [5], video segmentation [6], [7] and efficient feature processing [8]. Multiple object tracking (MOT) [9] is one of the important problems for video analysis to estimate the states (or bounding boxes) of as many objects as possible in a video sequence.…”
Section: Introduction
confidence: 99%