The 2011 International Joint Conference on Neural Networks (IJCNN 2011)
DOI: 10.1109/ijcnn.2011.6033643
Simulation of large neuronal networks with biophysically accurate models on graphics processors

Cited by 24 publications (12 citation statements)
References 11 publications
“…The large latency makes them difficult to use in real-time brain-machine interfaces (BMI). GPUs, on the other hand, are capable of parallel computing but are constrained by memory and communication bandwidth issues [13]. Models can be implemented directly onto CMOS [14], [15], but a single implementation can be time-consuming.…”
Section: Introduction (mentioning)
confidence: 99%
“…By imitating such structures, neuromorphic computing systems are anticipated to be superior to conventional computer systems in tasks such as image recognition and natural language understanding. As the most resource-consuming part of neuromorphic algorithms [2], matrix operations are normally processed by hardware accelerators like CPU/GPU/FPGA [3] or VLSI circuits [4]. The straightforward hardware realization of neural networks, however, commonly consumes a large volume of memory and computing resources, incurring high design complexity and hardware cost.…”
Section: Introduction (mentioning)
confidence: 99%
“…These enhancements, in combination with parallel computing (Bower and Beeman, 1998; Migliore et al., 2006), have become a necessity to cope with the higher computational and communication demands of neuroapplications. Recently, a number of developers have investigated the possibility of simulating spiking neural networks on a single Graphical Processing Unit (GPU) (Bernhard and Keriven, 2005; Fernandez et al., 2008; Fidjeland et al., 2009; Nageswaran et al., 2009a,b; Tiesel and Maida, 2009; Bhuiyan et al., 2010; Fidjeland and Shanahan, 2010; Han and Taha, 2010a,b; Hoffmann et al., 2010; Mutch et al., 2010; Scorcioni, 2010; Yudanov et al., 2010; Nowotny, 2011; Ahmadi and Soleimani, 2011; Igarashi et al., 2011; Thibeault et al., 2011; Wang et al., 2011) or on multiple GPUs (Brette and Goodman, 2012b). All these simulators have shown significant improvements over their CPU-only counterparts by making use of GPUs.…”
Section: Introduction (mentioning)
confidence: 99%
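
The works quoted above all rest on the same basic idea: map each neuron's state update onto its own GPU thread. As a rough illustration of that idea only (not the implementation used in this paper or in any of the citing works), the sketch below advances a population of Hodgkin-Huxley neurons by one forward-Euler step per kernel launch. The network size, time step, constant input current, classic squid-axon parameters, and the omission of synaptic coupling are all assumptions made for brevity.

// Illustrative sketch only: one forward-Euler step of a Hodgkin-Huxley
// membrane model, one CUDA thread per neuron. Parameters are the classic
// squid-axon constants; network size, dt, and the constant drive current
// are arbitrary choices, and synaptic coupling is omitted.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct HHState { float v, m, h, n; };   // membrane potential (mV) and gating variables

// Standard HH rate functions (voltage in mV, rest near -65 mV).
__device__ float alpha_m(float v) { return 0.1f * (v + 40.0f) / (1.0f - expf(-(v + 40.0f) / 10.0f)); }
__device__ float beta_m (float v) { return 4.0f * expf(-(v + 65.0f) / 18.0f); }
__device__ float alpha_h(float v) { return 0.07f * expf(-(v + 65.0f) / 20.0f); }
__device__ float beta_h (float v) { return 1.0f / (1.0f + expf(-(v + 35.0f) / 10.0f)); }
__device__ float alpha_n(float v) { return 0.01f * (v + 55.0f) / (1.0f - expf(-(v + 55.0f) / 10.0f)); }
__device__ float beta_n (float v) { return 0.125f * expf(-(v + 65.0f) / 80.0f); }

__global__ void hh_step(HHState* s, const float* i_ext, int n_neurons, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_neurons) return;

    HHState x = s[i];                    // load this neuron's state

    // Ionic currents (uA/cm^2) with classic conductances and reversal potentials.
    float i_na = 120.0f * x.m * x.m * x.m * x.h * (x.v - 50.0f);
    float i_k  =  36.0f * x.n * x.n * x.n * x.n * (x.v + 77.0f);
    float i_l  =   0.3f * (x.v + 54.387f);

    // Forward-Euler update; membrane capacitance is 1 uF/cm^2.
    float dv = dt * (i_ext[i] - i_na - i_k - i_l);
    x.m += dt * (alpha_m(x.v) * (1.0f - x.m) - beta_m(x.v) * x.m);
    x.h += dt * (alpha_h(x.v) * (1.0f - x.h) - beta_h(x.v) * x.h);
    x.n += dt * (alpha_n(x.v) * (1.0f - x.n) - beta_n(x.v) * x.n);
    x.v += dv;

    s[i] = x;                            // write the updated state back
}

int main()
{
    const int   n_neurons = 1 << 20;     // assumed network size, for illustration
    const float dt = 0.01f;              // time step in ms
    const int   steps = 1000;            // 10 ms of simulated time

    HHState* d_state; float* d_iext;
    cudaMalloc(&d_state, n_neurons * sizeof(HHState));
    cudaMalloc(&d_iext,  n_neurons * sizeof(float));

    // Resting initial conditions and a constant 10 uA/cm^2 drive for every neuron.
    HHState rest = { -65.0f, 0.05f, 0.6f, 0.32f };
    std::vector<HHState> h_state(n_neurons, rest);
    std::vector<float>   h_iext(n_neurons, 10.0f);
    cudaMemcpy(d_state, h_state.data(), n_neurons * sizeof(HHState), cudaMemcpyHostToDevice);
    cudaMemcpy(d_iext,  h_iext.data(),  n_neurons * sizeof(float),   cudaMemcpyHostToDevice);

    int block = 256, grid = (n_neurons + block - 1) / block;
    for (int t = 0; t < steps; ++t)
        hh_step<<<grid, block>>>(d_state, d_iext, n_neurons, dt);
    cudaDeviceSynchronize();

    cudaMemcpy(h_state.data(), d_state, sizeof(HHState), cudaMemcpyDeviceToHost);
    printf("V[0] after %d steps: %f mV\n", steps, h_state[0].v);

    cudaFree(d_state); cudaFree(d_iext);
    return 0;
}

A production simulator would add synaptic conductances, spike detection, and a more stable integrator; the resulting irregular memory traffic between neurons is exactly where the memory and communication bandwidth limits mentioned in the citation statements above become the dominant cost.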