2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), and 2015 IEEE International Conference on Embedded Software and Systems (ICESS)
DOI: 10.1109/hpcc-css-icess.2015.95

Optimal Performance Prediction of ADAS Algorithms on Embedded Parallel Architectures

Abstract: ADAS (Advanced Driver Assistance Systems) algorithms increasingly use heavy image processing operations. To embed this type of algorithm, semiconductor companies offer many heterogeneous architectures. These SoCs (Systems on Chip) are composed of different processing units with different capabilities, often including a massively parallel computing unit. Due to the complexity of these SoCs, predicting whether a given algorithm can be executed in real time on a given architecture is not trivial. I…

Cited by 9 publications (7 citation statements)
References 19 publications
“…These values have to be chosen in accordance with the mean range magnitude of all the intervals of s and d. Under these practical conditions, the kernel mapping optimization consists in finding the mapping function M that will minimize the cost function F. In order to solve this optimization, one might implement each kernel and then measure each s and d for different mappings, but this requires significant effort and time. As discussed in [30,31], we propose to solve this minimization problem without any kernel implementation or profiling on the targeted architecture. Thus, we explore automatically all the mappings belonging to the set M by using our performance prediction methodology described further.…”
Section: Cost Function Approach (mentioning)
Confidence: 99%
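The cost-function expression itself is not reproduced in the snippet above, but the approach it describes can be sketched: enumerate every candidate mapping of kernels onto processors and keep the one that minimizes a cost built from predicted execution times (s) and transfer times (d), without implementing or profiling any kernel on the target SoC. The kernel names, processor names, weighting scheme, and predictor interfaces below are illustrative assumptions, not the formulation of the cited papers.

```python
from itertools import product

# Hypothetical kernels of an ADAS image pipeline and processors of a SoC;
# neither list comes from the cited papers.
KERNELS = ["resize", "gradient", "histogram", "classify"]
PROCESSORS = ["cpu", "gpu", "dsp"]

def cost(mapping, predict_exec, predict_transfer, alpha=1.0, beta=1.0):
    """Cost F(M) built from predicted execution times (s) and predicted
    transfer times (d); alpha and beta stand in for the normalisation
    against the mean range magnitude mentioned in the quoted statement."""
    s = sum(predict_exec(k, mapping[k]) for k in KERNELS)
    d = sum(predict_transfer(a, b, mapping[a], mapping[b])
            for a, b in zip(KERNELS, KERNELS[1:]))
    return alpha * s + beta * d

def best_mapping(predict_exec, predict_transfer):
    """Exhaustively explore every mapping M of kernels onto processors,
    scoring each candidate with predictions only (no implementation or
    profiling on the targeted architecture)."""
    candidates = (dict(zip(KERNELS, assignment))
                  for assignment in product(PROCESSORS, repeat=len(KERNELS)))
    return min(candidates, key=lambda m: cost(m, predict_exec, predict_transfer))
```

Exhaustive exploration is tractable here only because the score of each candidate comes from a prediction model rather than a measurement on the target hardware.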
“…In [30,31], we present a methodology to predict computing time of a kernel on different processors. Our methodology is defined as follows.…”
Section: Computing Time Prediction (mentioning)
Confidence: 99%
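The prediction methodology itself is only referenced here ([30,31]); as a minimal stand-in, a common way to bound a kernel's computing time on a processor is a roofline-style estimate in which the kernel is limited either by arithmetic throughput or by memory bandwidth. The model, function name, and numbers below are illustrative assumptions, not the methodology of the cited papers.

```python
def predict_kernel_time(ops, bytes_moved, peak_gflops, peak_gbps):
    """Roofline-style bound: the kernel is limited by arithmetic
    throughput or by memory bandwidth, whichever is slower."""
    compute_time = ops / (peak_gflops * 1e9)       # seconds
    memory_time = bytes_moved / (peak_gbps * 1e9)  # seconds
    return max(compute_time, memory_time)

# Illustrative example: a 3x3 convolution on a 1920x1080 greyscale frame,
# about 9 multiply-accumulates (18 flops) per pixel, one float read and write.
pixels = 1920 * 1080
t = predict_kernel_time(ops=18 * pixels, bytes_moved=8 * pixels,
                        peak_gflops=40.0, peak_gbps=12.8)
print(f"predicted time: {t * 1e3:.2f} ms")
```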
“…When working with heterogeneous architectures, a challenge needs to be addressed: how to partition the different tasks (or kernels) of the algorithm to embed on the different processors of the heterogeneous architecture. That is what we call the kernel mapping problem, introduced in [1]- [3].…”
Section: E. Kernel Mapping Optimization (mentioning)
Confidence: 99%
“…As discussed in [2], [3], the parameters of the function f can be predicted for different M. In these papers we have presented a methodology to estimate the execution τ(M) and transfer δ(M) times with little knowledge of target architectures.…”
Section: E. Kernel Mapping Optimization (mentioning)
Confidence: 99%
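In the notation quoted above, a candidate mapping M is evaluated through its predicted execution time τ(M) and transfer time δ(M); for a frame-based ADAS pipeline, the natural check is whether f(M) fits the frame budget. A minimal sketch follows, assuming for illustration that f simply adds the two terms and that the deadline is a 30 fps frame period; both assumptions are not taken from the cited papers.

```python
FRAME_BUDGET_S = 1.0 / 30.0  # hypothetical 30 fps real-time constraint

def frame_latency(tau, delta):
    """f(M): predicted per-frame latency of a mapping M, here assumed to be
    the sum of execution time tau(M) and transfer time delta(M)."""
    return tau + delta

def meets_real_time(tau, delta, budget=FRAME_BUDGET_S):
    """True if the mapping's predicted latency fits the frame budget."""
    return frame_latency(tau, delta) <= budget
```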