Proceedings of the 49th Annual Design Automation Conference 2012
DOI: 10.1145/2228360.2228512

Architecture support for accelerator-rich CMPs

Abstract: This work discusses hardware architectural support for accelerator-rich CMPs (ARC). First, we present a hardware resource management scheme for accelerator sharing. This scheme supports sharing and arbitration of multiple cores for a common set of accelerators, and it uses a hardware-based arbitration mechanism to provide feedback to cores to indicate the wait time before a particular resource becomes available. Second, we propose a light-weight interrupt system to reduce the OS overhead of handling interrupt…
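
As a rough illustration of the arbitration feedback described in the abstract, the sketch below shows how a core might poll a memory-mapped arbiter for a grant or for the estimated wait time before an accelerator becomes available. All register addresses, bit layouts, and names (ARC_ARBITER_BASE, arc_acquire, etc.) are assumptions made for this sketch, not the paper's actual hardware interface.

```c
/* Hypothetical sketch of the accelerator-sharing handshake: a core requests
 * an accelerator from a hardware arbiter and receives either a grant or an
 * estimated wait time. Addresses, bit fields, and names are invented. */
#include <stdint.h>
#include <stdbool.h>

#define ARC_ARBITER_BASE  0x40000000u            /* hypothetical MMIO base          */
#define ARC_REQ_REG       (ARC_ARBITER_BASE + 0x0)
#define ARC_STATUS_REG    (ARC_ARBITER_BASE + 0x4)
#define ARC_GRANTED_BIT   0x80000000u            /* status: request granted         */
#define ARC_WAIT_MASK     0x0000FFFFu            /* status: estimated wait (cycles) */

static inline uint32_t mmio_read(uintptr_t addr)              { return *(volatile uint32_t *)addr; }
static inline void     mmio_write(uintptr_t addr, uint32_t v) { *(volatile uint32_t *)addr = v; }

/* Request accelerator `acc_id`; use the arbiter's wait-time feedback to decide
 * whether to keep waiting or to fall back to a software implementation. */
bool arc_acquire(uint32_t acc_id, uint32_t max_wait_cycles)
{
    mmio_write(ARC_REQ_REG, acc_id);
    for (;;) {
        uint32_t status = mmio_read(ARC_STATUS_REG);
        if (status & ARC_GRANTED_BIT)
            return true;                         /* accelerator granted to this core */
        if ((status & ARC_WAIT_MASK) > max_wait_cycles)
            return false;                        /* predicted wait too long          */
    }
}
```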

Cited by 90 publications (94 citation statements, 2014–2023) · References 22 publications
“…This approach is complementary to our work, as our accelerators are essentially C functions with traditional function-call and memory-access semantics. Similarly, recent research by Cong et al has shown much potential for performance and energy improvement via architectural support in accelerator-rich CMPs [41].…”
Section: Related Work (mentioning)
confidence: 92%
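
The statement above describes accelerators that behave as "C functions with traditional function-call and memory-access semantics," i.e. an invocation that looks like an ordinary call operating on ordinary pointers. A minimal, purely illustrative sketch of that idea follows; the function name and signature are not taken from either paper.

```c
/* Illustrative only: an "accelerated" routine exposed with plain C call and
 * memory-access semantics. The caller passes ordinary pointers, and the body
 * could later be dispatched to hardware without changing the caller. */
#include <stddef.h>

void accel_vector_add(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];      /* software fallback with identical semantics */
}
```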
“…In this respect, the X-FILES are related to similar extensions or frameworks like HiPPAI [29] and ARC [30]. However, the needs of HiPPAI and ARC are distinctly different from the X-FILES.…”
Section: A. Accelerator Interfaces and Management (mentioning)
confidence: 99%
“…We further subdivide memory transfers into those involving virtual or physical addresses. While this would normally be a hard, design-time decision (as is common with similar accelerator interfaces [29], [30]) we prefer to not restrict the generality of the X-FILES during definition. Instead, and as with register/memory mode, we take the view that these transfer modes should be exposed to agents (like the hardware designer, library writer, compiler, or OS) that can make the most informed decision regarding interface choice.…”
Section: Introduction (mentioning)
confidence: 99%
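
The statement above distinguishes transfers addressed virtually from those addressed physically, and argues for exposing that choice to software agents (hardware designer, library writer, compiler, or OS) rather than fixing it at design time. A hypothetical descriptor makes the idea concrete; the enum and field names are assumptions, not the X-FILES interface.

```c
/* Hypothetical transfer descriptor: the address-space mode is a per-transfer
 * runtime field, so the deciding agent can pick whichever mode fits best. */
#include <stdint.h>
#include <stddef.h>

enum xfer_mode {
    XFER_VIRTUAL,    /* addresses translated through the MMU/IOMMU     */
    XFER_PHYSICAL    /* addresses used as-is, e.g. pinned DMA buffers  */
};

struct xfer_desc {
    enum xfer_mode mode;   /* selected by compiler, library, or OS      */
    uint64_t       src;    /* source address in the chosen address space */
    uint64_t       dst;    /* destination address                        */
    size_t         len;    /* transfer length in bytes                   */
};
```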
“…We share their attention to memory reuse, but take the alternative approach of integrating accelerators as NoC nodes, enabling GP-CPUs to reuse the accelerator's private memory blocks as NUCA slices. Our NoC-based coupling of accelerators and GP-CPUs is similar to the one proposed by Cong et al [4], although they do not consider memory reuse and focus instead on architectural support for accelerator abstraction.…”
Section: Related Work (mentioning)
confidence: 99%
“…Given the predicted fall of multicore scaling at the hands of dark silicon [5], superior efficiency via specialization has materialized as a compelling solution toward sustaining performance gains [16]; once restricted to embedded systems, accelerators are currently seeing wider exposure (e.g., [3]), and many-accelerator architectures are in the research agenda ( [4], [13]). …”
Section: Introduction (mentioning)
confidence: 99%