2019
DOI: 10.3906/elk-1807-1
Low-cost multiple object tracking for embedded vision applications

Abstract: This paper presents a low-cost multiple object tracking (MOT) technique that employs a novel K-means-based appearance update model for object appearance modeling. State-of-the-art work has attained very high accuracy without considering the real-time constraints imposed by currently trending embedded vision platforms. Most research on multiple object tracking updates the appearance model in every frame, discounting its persistent nature. The proposed appearance update model reduces t…
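The abstract's core idea — summarizing an object's appearance with K-means clusters and refreshing the model only when the appearance actually changes, rather than re-fitting every frame — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the feature representation, and the distance threshold are all assumptions introduced here.

```python
import numpy as np

def kmeans(features, k=3, iters=20, seed=0):
    """Plain K-means over appearance feature vectors (e.g. color histograms)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids

class AppearanceModel:
    """Hypothetical sketch of a K-means appearance update model: the tracked
    object's appearance is summarized by K centroids, and the model is
    re-clustered only when a new observation deviates from all centroids
    (exploiting appearance persistence), instead of in every frame."""

    def __init__(self, init_features, k=3, threshold=0.5):
        self.k = k
        self.threshold = threshold  # assumed deviation threshold
        self.history = [np.asarray(f, float) for f in init_features]
        self.centroids = kmeans(np.asarray(self.history), k)

    def update(self, feature):
        """Return True if the model was refreshed, False if skipped."""
        feature = np.asarray(feature, float)
        dist = np.linalg.norm(self.centroids - feature, axis=1).min()
        if dist > self.threshold:   # appearance changed: re-cluster
            self.history.append(feature)
            self.centroids = kmeans(np.asarray(self.history), self.k)
            return True
        return False                # persistent appearance: no update cost
```

Skipping the re-clustering step on most frames is what makes this style of update cheap enough for embedded platforms.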


Cited by 1 publication (1 citation statement)
References 28 publications
“…At the beginning, a smaller window containing the Kinect's output position for the object's region of interest is identified, forming a basis for further operations within a much smaller solution space than the whole stage image window. Kinect-acquired RGB images are converted to 8-bit grayscale, and depth images are quantized to 8 bits in order to decrease the computational load. Active-window images are then subtracted from the corresponding background images to obtain moving-only regions, as in the network of Shehzad et al [27]. In the next step, a separate cue, i.e. an extraction of the requested object coordinates, is obtained from the RGB and depth moving-only active-window images, while the Kinect's output is taken directly as a distinct cue in this phase.…”
Section: Information Flow
confidence: 99%
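The preprocessing pipeline described in the citation statement — crop an active window around the Kinect-reported position, convert RGB to 8-bit grayscale, quantize depth to 8 bits, then background-subtract within the window — can be sketched as below. The function and parameter names are assumptions for illustration, not the citing paper's code.

```python
import numpy as np

def preprocess(rgb, depth, bg_gray, bg_depth, center, half=32):
    """Hypothetical sketch of the described pipeline: crop an active window
    around the Kinect-reported object position, convert RGB to 8-bit
    grayscale, quantize depth to 8 bits, and subtract the corresponding
    background windows to keep moving-only regions."""
    cy, cx = center
    win = (slice(max(cy - half, 0), cy + half),
           slice(max(cx - half, 0), cx + half))
    # RGB -> 8-bit grayscale using standard luma weights
    gray = (rgb[win].astype(np.float32)
            @ np.array([0.299, 0.587, 0.114], np.float32)).astype(np.uint8)
    # quantize the (e.g. 16-bit) Kinect depth range down to 8 bits
    depth8 = (depth[win].astype(np.float32) / depth.max() * 255).astype(np.uint8)
    # background subtraction restricted to the active window only
    mov_gray = np.abs(gray.astype(np.int16)
                      - bg_gray[win].astype(np.int16)).astype(np.uint8)
    mov_depth = np.abs(depth8.astype(np.int16)
                       - bg_depth[win].astype(np.int16)).astype(np.uint8)
    return mov_gray, mov_depth
```

Operating only on the cropped window, rather than the full frame, is what shrinks the solution space and the per-frame cost.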