2011 23rd International Symposium on Computer Architecture and High Performance Computing
DOI: 10.1109/sbac-pad.2011.17
Accelerating Maximum Likelihood Based Phylogenetic Kernels Using Network-on-Chip

Cited by 5 publications (2 citation statements). References 20 publications.
“…For instance, a typical genome assembly algorithm farms out billions of pairwise sequence alignment tasks, each of which aligns two strings of small lengths (e.g., 100-500 base pairs) and can use a small number of cores (e.g., 8-16) [112]. As another example, consider the problem of computing phylogenetic inference using maximum likelihood (ML) [113], where one typically needs to carry out billions of independent tree evaluations, each of which internally performs a small number of floating point calculations using a few cores. In such applications, enhancing overall throughput in computation translates to shorter time to solution.…”
Section: Application Use-case Modelmentioning
confidence: 99%
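The excerpt above describes farming out billions of small pairwise alignment tasks. As an illustration of what one such task computes, here is a minimal Python sketch of global pairwise alignment scoring (Needleman-Wunsch dynamic programming); the function name and the scoring parameters (`match`, `mismatch`, `gap`) are illustrative choices, not taken from [112].

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global pairwise alignment score via dynamic programming.
    # score[i][j] = best score aligning a[:i] against b[:j].
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # a[:i] aligned against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # b[:j] aligned against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                    # substitute/match
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]
```

For reads of 100-500 base pairs, each such task touches only a few hundred kilobytes of DP table, which is why many small tasks on a few cores each, rather than one large parallel job, is the natural decomposition.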
“…As expected, numerous efforts have been made to accelerate the PLF, employing various technologies, from multi-core processors [20,21] and supercomputers [22,23], to FPGAs [24,25] and GPUs [19,26], to CGRA-based solutions [27,28] and dedicated NoCs [29,30]. These efforts, however, have predominantly concentrated on accelerating computation, and thus remain bounded by memory accesses, since the PLF is a data-intensive, memory-bound operation.…”
Section: Introductionmentioning
confidence: 99%
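The PLF (phylogenetic likelihood function) referenced above combines per-site conditional likelihood vectors up the tree, one pruning step per internal node (Felsenstein's pruning algorithm). The following Python sketch shows one such step, assuming the simple Jukes-Cantor substitution model for concreteness; the cited works may use richer models (e.g., GTR), and the function names here are illustrative.

```python
import math

def jc69(t):
    # Jukes-Cantor 4x4 transition probability matrix for branch length t.
    e = math.exp(-4.0 * t / 3.0)
    same = 0.25 + 0.75 * e
    diff = 0.25 - 0.25 * e
    return [[same if i == j else diff for j in range(4)] for i in range(4)]

def prune(L_left, L_right, t_left, t_right):
    # One Felsenstein pruning step: conditional likelihoods of a parent
    # node from its two children. Each L_* is a list of per-site vectors
    # over the 4 nucleotide states.
    Pl, Pr = jc69(t_left), jc69(t_right)
    out = []
    for vl, vr in zip(L_left, L_right):   # one iteration per alignment site
        row = []
        for s in range(4):
            a = sum(Pl[s][x] * vl[x] for x in range(4))
            b = sum(Pr[s][y] * vr[y] for y in range(4))
            row.append(a * b)
        out.append(row)
    return out
```

Note the memory-bound character the excerpt describes: each step reads two full conditional-likelihood arrays (one vector of 4 doubles per site, per child) and writes a third, while performing only a handful of multiply-adds per value read.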