2021
DOI: 10.48550/arxiv.2109.02752
Preprint

Training Deep Networks from Zero to Hero: avoiding pitfalls and going beyond

Abstract: Training deep neural networks can be challenging with real-world data. Using models as black boxes, even with transfer learning, can result in poor generalization or inconclusive results when it comes to small datasets or specific applications. This tutorial covers the basic steps as well as more recent options for improving models, in particular, but not restricted to, supervised learning. It can be particularly useful for datasets that are not as well prepared as those in challenges, and also under scarce annotation…
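As a rough illustration of the kind of transfer-learning setup the abstract alludes to (a minimal sketch, not code from the paper), the snippet below fine-tunes only a new classification head on top of a frozen ImageNet-pretrained backbone. The backbone choice (torchvision's resnet18), the 5-class task, and the learning rate are assumptions made purely for illustration.

    # Sketch: transfer learning on a small dataset by freezing a pretrained
    # backbone and training only a freshly initialized classification head.
    import torch
    import torch.nn as nn
    from torchvision import models

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Load an ImageNet-pretrained backbone and freeze all of its weights.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False

    # Replace the final layer for a hypothetical 5-class problem.
    backbone.fc = nn.Linear(backbone.fc.in_features, 5)
    backbone = backbone.to(device)

    # Only the new head's parameters are passed to the optimizer.
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_one_epoch(loader):
        backbone.train()
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(backbone(images), labels)
            loss.backward()
            optimizer.step()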

Cited by 1 publication (1 citation statement)
References 41 publications
“…Also, even for small networks, training is still known to be NP-hard [Bottou et al 2018]. Therefore it is paramount to make educated choices of optimization methods and training strategies [Ponti et al 2021]. Many studies focus on finding the best network architectures, while neglecting the optimization details, with unjustified or arbitrary choices, e.g.…”
Section: Introduction (mentioning)
Confidence: 99%