2014 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2014.6889433
Large scale recurrent neural network on GPU

Cited by 55 publications (21 citation statements). References 13 publications.
“…Previously, Li et al. proposed a pipeline architecture to improve the parallelism of RNNs [15]. As illustrated in Figure 4, it partitions the feed-forward phase into two stages: the data flow from the input to the hidden layer, represented by gray boxes, and the computation from the hidden to the output layer, denoted by white boxes.…”
Section: A. Increase Parallelism Between Hidden and Output Layers
confidence: 99%
“…Thus, the computation complexity of h(t) is mainly determined … (Figure 4: the data flow of a two-stage pipelined RNN structure [15]; the computation complexity of the white boxes in the output layer is much greater than that of the gray boxes in the hidden layer).…”
Section: A. Increase Parallelism Between Hidden and Output Layers
confidence: 99%
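Both citation statements describe the same decomposition: the feed-forward pass of the RNN is split into a hidden-layer stage (gray boxes) and a more expensive hidden-to-output stage (white boxes), so that the two stages can overlap across time steps. The sketch below illustrates that split under simple assumptions (an Elman-style recurrence with a softmax output layer); the weight names W_xh, W_hh, W_hy and the sequential driver are illustrative only and are not taken from the cited paper [15].

```python
import numpy as np

# Minimal sketch of the two-stage feed-forward split (illustrative, not the
# cited paper's implementation). Weight names and layer sizes are assumptions.

def hidden_stage(x_t, h_prev, W_xh, W_hh):
    """Stage 1 ("gray boxes"): input -> hidden recurrence."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev)

def output_stage(h_t, W_hy):
    """Stage 2 ("white boxes"): hidden -> output softmax.
    With a large output layer (e.g. a big vocabulary), this matrix-vector
    product dominates the per-step cost."""
    z = W_hy @ h_t
    z -= z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def feed_forward(xs, h0, W_xh, W_hh, W_hy):
    """Sequential reference version of the feed-forward phase."""
    h, ys = h0, []
    for x_t in xs:
        h = hidden_stage(x_t, h, W_xh, W_hh)
        ys.append(output_stage(h, W_hy))
    return ys
```

Because the recurrence h(t) depends only on h(t-1) and x(t), and not on the output y(t), output_stage for step t can in principle run concurrently with hidden_stage for step t+1; that independence is what the gray-box/white-box pipeline exploits to increase parallelism between the hidden and output layers.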