2020
DOI: 10.3390/info11020108

Fastai: A Layered API for Deep Learning

Abstract: fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning…
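
To make the layered design concrete, the following is a minimal sketch of the kind of high-level usage the abstract describes, in the style of fastai's standard image-classification examples. The dataset (Oxford-IIIT Pets, downloaded via URLs.PETS) and every hyperparameter value here are illustrative choices, not details taken from this page:

```python
from fastai.vision.all import *

# Illustrative sketch of fastai's high-level vision API. The dataset is
# Oxford-IIIT Pets, bundled with the library; hyperparameters are arbitrary.
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path=path,
    fnames=get_image_files(path/"images"),
    pat=r'(.+)_\d+.jpg$',                 # breed label is encoded in the file name
    item_tfms=Resize(460),                # per-item resize on CPU
    batch_tfms=aug_transforms(size=224),  # batched augmentation on GPU
)
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)                        # transfer learning with library defaults
```

A few lines cover data loading, augmentation, transfer learning, and fine-tuning; the lower layers of the API expose each of these steps for customization, which is the layering the abstract refers to.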



Cited by 932 publications (757 citation statements)
References 34 publications

“…To that end, we have prepared an annotated Jupyter notebook (see section II-F) detailing the process of dataset collection, ground-truth labeling and model training that requires minimal knowledge of deep learning to implement. For simplicity, many of the parameters used during training (hyperparameters) have been pre-set to reflect current best practices [18] while the remaining hyperparameters (learning rate, data augmentation parameters, class weights) may be optimized using submodules provided within the notebook. When collecting your hyper-labeled image stacks, the major requirement is that the subimage stacks used in training be representative of the expected experimental images.…”
Section: Discussion
Citation type: mentioning
Confidence: 99%
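
The workflow in this statement maps onto fastai's standard training utilities. Below is a hedged sketch of how the hyperparameters it names (learning rate, augmentation parameters, class weights) might be tuned with the library; the data path, class weights, and all other values are hypothetical placeholders, not values from the cited notebook:

```python
from fastai.vision.all import *

# Hypothetical setup: 'data/subimage_stacks' (expected to hold one subfolder
# per class) and the class weights below are placeholders, not values from
# the cited notebook.
path = Path('data/subimage_stacks')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2,
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(max_rotate=10.0),  # augmentation strength is tunable
)

# Class weights counter class imbalance via weighted cross-entropy.
class_weights = torch.tensor([1.0, 2.0], device=dls.device)  # hypothetical weights
learn = vision_learner(dls, resnet34, metrics=accuracy,
                       loss_func=CrossEntropyLossFlat(weight=class_weights))

# The learning-rate finder sweeps rates across mini-batches and suggests a
# value from the loss curve, replacing a hand-picked guess.
lr = learn.lr_find().valley
learn.fine_tune(3, base_lr=lr)
```

fine_tune first trains the new head with the pretrained body frozen, then unfreezes and continues with discriminative learning rates across layer groups.
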
“…However, the learning rate, number of training epochs, and cross entropy weights were determined experimentally. Learning rate was scheduled as a variation of the 1cycle policy [11], [18] (Supplementary Fig. S2a,b).…”
Section: B. Model and Training Parameters
Citation type: mentioning
Confidence: 99%
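
For context, the 1cycle policy this statement references is exposed directly in fastai as fit_one_cycle (it is also what fine_tune uses internally). A minimal sketch on a toy dataset bundled with the library, not anything from the cited work:

```python
from fastai.vision.all import *

# Toy example on MNIST_SAMPLE (3s vs. 7s), just to show the schedule;
# the cited work applied a *variation* of this policy to its own data.
dls = ImageDataLoaders.from_folder(untar_data(URLs.MNIST_SAMPLE))
learn = vision_learner(dls, resnet18, metrics=accuracy)

# fit_one_cycle ramps the learning rate up to lr_max and then anneals it
# back down, while momentum moves inversely (high -> low -> high).
learn.fit_one_cycle(5, lr_max=1e-3)

learn.recorder.plot_sched()  # inspect the resulting LR and momentum curves
```

Warming up and then annealing this way typically tolerates a larger peak learning rate and converges faster than a fixed rate, which is why fastai uses it as its default training policy.
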
“…This work was supported by the Bundesministerium für Bildung und Forschung (BMBF) through the Berlin Big Data Center under Grant 01IS14013A and the Berlin Center for Machine Learning under Grant 01IS18037I. USMPep was implemented using PyTorch [24] and fast.ai [25].…”
Section: Acknowledgements
Citation type: mentioning
Confidence: 99%