Deep Learning With Python 2017
DOI: 10.1007/978-1-4842-2766-4_12
Introduction to PyTorch

Cited by 448 publications (240 citation statements)
References 0 publications
“…We implement our model in python 3.4 and cuDNN 6.0 using pyTorch [16] and use NYU-Depth-v2 dataset to train our model. As for training, we use an NVIDIA TITAN xp with 12GB memory.…”
Section: Methods (mentioning, confidence: 99%)
“…Here, we will assume that the total variational energy of the system, E is such that there exists a unique solution of the problem defined in Eq. (6) and is the same as the solution of Eq. (1).…”
Section: Energy Approach (mentioning, confidence: 99%)
“…The data sets are summarized in Table 4. All the experiments for LibriSpeech were performed using ESPnet, the End-to-End Speech Processing Toolkit [34], and the recipe for a baseline LibriSpeech setup with PyTorch backend [35]. According to the baseline recipe, we trained an 8-layer BLSTM encoder including 320 cells in each layer and direction, and the linear projection layer with 320 units followed by each BLSTM layer.…”
Section: Evaluation With LibriSpeech (mentioning, confidence: 99%)
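
The encoder quoted above (8 BLSTM layers, 320 cells per layer and direction, a 320-unit linear projection after each BLSTM layer) can be sketched in PyTorch as follows. This is a minimal illustration, not the ESPnet implementation: the input feature dimension (83), the `tanh` activation after each projection, and all class and variable names are assumptions; only the layer count and sizes come from the excerpt.

```python
import torch
import torch.nn as nn

class BLSTMPEncoder(nn.Module):
    """Sketch of a projected BLSTM encoder: 8 bidirectional LSTM
    layers with 320 cells per direction, each followed by a
    320-unit linear projection (sizes from the quoted setup)."""

    def __init__(self, input_dim=83, num_layers=8, units=320, proj=320):
        super().__init__()
        self.lstms = nn.ModuleList()
        self.projections = nn.ModuleList()
        in_dim = input_dim
        for _ in range(num_layers):
            # One BLSTM layer: `units` cells in each direction.
            self.lstms.append(
                nn.LSTM(in_dim, units, batch_first=True, bidirectional=True)
            )
            # Linear projection after each BLSTM layer; its output
            # feeds the next layer.
            self.projections.append(nn.Linear(2 * units, proj))
            in_dim = proj

    def forward(self, x):
        # x: (batch, time, input_dim)
        for lstm, proj in zip(self.lstms, self.projections):
            x, _ = lstm(x)           # (batch, time, 2 * units)
            x = torch.tanh(proj(x))  # (batch, time, proj); tanh is assumed
        return x

enc = BLSTMPEncoder()
feats = torch.randn(2, 50, 83)  # dummy batch: 2 utterances, 50 frames
out = enc(feats)
print(out.shape)  # torch.Size([2, 50, 320])
```

The per-layer projection keeps the dimensionality fixed at 320 despite the bidirectional outputs being 640-wide, which bounds parameter growth across the 8 layers.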