Proceedings of the 2005 ACM Symposium on Applied Computing
DOI: 10.1145/1066677.1066879

Profiling and mapping of parallel workloads on network processors

Cited by 23 publications (25 citation statements)
References 9 publications
“…In [14], the authors use an annotated directed acyclic graph (ADAG) to generate a group of pipelined tasks by means of dynamic profiling and instruction tracing. In this paper, by contrast, profiling, grouping, and allocation of tasks are carried out manually, considering the overall performance of the NePA system and the workload scheduled to the PEs.…”
Section: Methodology of Parallel and Pipeline Processing for Block Ci… (mentioning)
confidence: 99%
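To make the ADAG idea in this excerpt concrete, below is a minimal sketch (not the code of [14]) of a task graph whose nodes carry instruction-count annotations, as a profiler might supply, together with a greedy pass that packs the tasks into pipeline stages under a per-stage instruction budget. All names, the budget parameter, and the grouping heuristic are illustrative assumptions.

```python
# Hypothetical sketch of an annotated DAG (ADAG): each task node carries an
# instruction count (the annotation), and a greedy pass groups topologically
# ordered tasks into pipeline stages whose total weight stays within a budget.
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    name: str
    instr_count: int                      # annotation, e.g. from instruction tracing
    successors: list = field(default_factory=list)

def group_into_stages(topo_ordered_tasks, stage_budget):
    """Greedily pack topologically ordered tasks into pipeline stages."""
    stages, current, load = [], [], 0
    for task in topo_ordered_tasks:
        if current and load + task.instr_count > stage_budget:
            stages.append(current)            # close the current stage
            current, load = [], 0
        current.append(task)
        load += task.instr_count
    if current:
        stages.append(current)
    return stages

# Example: three tasks annotated with 120, 300, and 200 instructions,
# packed into stages of at most 350 instructions each.
a, b, c = TaskNode("parse", 120), TaskNode("classify", 300), TaskNode("forward", 200)
a.successors, b.successors = [b], [c]
print([[t.name for t in s] for s in group_into_stages([a, b, c], 350)])
```

In the cited work the annotations come from dynamic profiling and instruction tracing; in this sketch they are simply supplied by hand.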
“…The authors of [8] have suggested maximizing the throughput of a pipelined multiprocessor system by effective assignment of flow tasks to pipeline stages on an NP platform. The paper [14] introduces a methodology for profiling and scheduling networking workloads and applications on a highly parallel network processor architecture.…”
Section: Related Work (mentioning)
confidence: 99%
“…Ramaswamy et al [19] and Weng et al [20] presented randomization algorithms for task allocation on network processors with an objective of minimizing latency (not throughput) without considering multi-threading or process transformations. Ramamurthi et al [21] presented heuristic techniques for mapping applications to block multi-threaded multiprocessor architectures.…”
Section: Previous Work (mentioning)
confidence: 99%
“…The basic idea of randomized mapping is to randomly choose a valid mapping, evaluate its performance, and repeat this process a certain number of times. [7] and [8] present randomized mapping algorithms with different models for performance evaluation. A genetic algorithm maintains a population of candidate solutions that evolves over time and ultimately converges.…”
Section: Related Work (mentioning)
confidence: 99%
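The randomized-mapping idea quoted above can be illustrated with a short sketch: repeatedly draw a random task-to-PE assignment, score it with a performance model, and keep the best assignment found. The max-load cost model, the task names, and the trial count below are assumptions for illustration, not the models used in [7] or [8].

```python
# Minimal sketch of randomized mapping: sample random task-to-PE mappings,
# score each with a simple performance model, and keep the best one.
import random

def evaluate(mapping, task_costs, num_pes):
    """Return the load of the most heavily loaded PE (lower is better)."""
    load = [0] * num_pes
    for task, pe in mapping.items():
        load[pe] += task_costs[task]
    return max(load)

def randomized_mapping(task_costs, num_pes, trials=1000, seed=0):
    rng = random.Random(seed)
    best_map, best_cost = None, float("inf")
    for _ in range(trials):
        mapping = {t: rng.randrange(num_pes) for t in task_costs}
        cost = evaluate(mapping, task_costs, num_pes)
        if cost < best_cost:
            best_map, best_cost = mapping, cost
    return best_map, best_cost

# Example: map five profiled tasks onto two processing elements.
costs = {"rx": 80, "parse": 120, "lookup": 300, "update": 150, "tx": 90}
print(randomized_mapping(costs, num_pes=2))
```

A genetic algorithm, as mentioned in the excerpt, would replace the independent random draws with a population of mappings that is improved over generations by selection, crossover, and mutation.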