2012
DOI: 10.1109/mm.2012.90
Low-Power, Real-Time Object-Recognition Processors for Mobile Vision Systems

Cited by 8 publications (7 citation statements) | References 6 publications
“…In addition to consuming power in the hundreds of watts (e.g., an Nvidia 8800 GTX GPU consumes 185 W [8]), which is not suitable for embedded applications, they often cannot reach high-definition (HD) resolutions. The implementation in [2] achieves high throughput on a GPU at 100 fps, using the approach presented in [4] to speed up feature extraction, but only at a resolution of 640×480 pixels.…”
Section: Previous Work
confidence: 99%
“…In the field of object recognition, visual attention algorithms have been adopted to achieve high-throughput performance and low power consumption [8][9][10]. They divide the input image into hundreds of rectangular image tiles and select the regions of interest among them to avoid processing background clutter in the input image. This approach successfully reduces the processing workload and also contributes to high recognition accuracy by eliminating background image tiles that contain no object keypoints.…”
Section: Background and Challenges
confidence: 99%
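The tile-based attention scheme described in this citation statement can be illustrated with a minimal sketch. All names, the tile size, and the gradient-energy saliency test below are hypothetical choices for illustration, not the selection criterion used by the cited processors:

```python
import numpy as np

def select_roi_tiles(image, tile=32, thresh=20.0):
    """Tile-based visual-attention sketch (hypothetical): split the
    frame into rectangular tiles and keep only those whose local
    gradient energy exceeds a threshold, skipping background tiles."""
    h, w = image.shape
    keep = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            t = image[y:y + tile, x:x + tile].astype(np.float32)
            gy, gx = np.gradient(t)           # per-tile gradients
            energy = np.mean(np.abs(gx) + np.abs(gy))
            if energy > thresh:               # salient tile -> keep
                keep.append((y, x))
    return keep

# Flat background with one textured patch: only the tile covering
# the patch should survive the saliency test.
img = np.zeros((128, 128), dtype=np.uint8)
rng = np.random.default_rng(0)
img[32:64, 32:64] = rng.integers(0, 255, (32, 32))
rois = select_roi_tiles(img)  # -> [(32, 32)]
```

Downstream feature extraction then runs only on the kept tiles, which is where the workload reduction claimed in the statement comes from.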
“…Previous approaches considered only the speedup achieved through instruction-, data-, and task-level parallelism, using figures of merit such as giga-operations per second (GOPS) [8][9][10]. In the case of recent vision processors employing the task-level pipeline, however, we must also consider NoC efficiency, because multiple pairs of producer-consumer cores compete for data transactions, which results in NoC congestion. Because NoC congestion degrades the task-level pipeline's throughput, we must consider the NoC speedup of data transactions between the multiple producer-consumer cores.…”
Section: Contribution of This Work
confidence: 99%
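The throughput argument in this statement can be sketched with a toy model. The stage times, the uniform per-hop transfer cost, and the additive congestion penalty below are illustrative assumptions, not measurements from the cited work:

```python
def pipeline_throughput(stage_times_ms, noc_transfer_ms):
    """Task-level pipeline model (illustrative): steady-state
    throughput is set by the slowest stage, and NoC transfer time
    between producer-consumer cores adds to each stage's
    effective latency."""
    effective = [t + noc_transfer_ms for t in stage_times_ms]
    bottleneck = max(effective)        # slowest stage dominates
    return 1000.0 / bottleneck         # frames per second

# Same three-stage pipeline, uncongested (1 ms per transfer)
# versus congested (5 ms per transfer) NoC:
fast = pipeline_throughput([8.0, 10.0, 9.0], 1.0)  # 1000/11 ~ 90.9 fps
slow = pipeline_throughput([8.0, 10.0, 9.0], 5.0)  # 1000/15 ~ 66.7 fps
```

Even though raw GOPS is unchanged, the congested NoC lowers frame rate, which is why the statement argues that NoC speedup must be considered alongside parallelism metrics.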