2019
DOI: 10.1007/978-3-030-30484-3_16
Post-synaptic Potential Regularization Has Potential

Abstract: Improving generalization is one of the main challenges for training deep neural networks on classification tasks. In particular, a number of techniques have been proposed, aiming to boost the performance on unseen data: from standard data augmentation techniques to ℓ2 regularization, dropout, batch normalization, entropy-driven SGD and many more. In this work we propose an elegant, simple and principled approach: post-synaptic potential regularization (PSP). We tested this regularization on a number of diff…
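To make the idea concrete, here is a minimal sketch of post-synaptic potential regularization, assuming PSP adds a penalty on the squared pre-activation values (the post-synaptic potentials) of each layer to the task loss, in contrast to ℓ2 weight decay, which penalizes the parameters themselves. The two-layer MLP, the hyper-parameter `lam`, and the helper `psp_loss` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        # Keep the pre-activations: these are the post-synaptic potentials.
        p1 = self.fc1(x)
        h1 = F.relu(p1)
        p2 = self.fc2(h1)
        return p2, [p1, p2]

def psp_loss(logits, targets, potentials, lam=1e-4):
    # Cross-entropy plus the summed squared post-synaptic potentials
    # of every layer (lam is an illustrative hyper-parameter).
    ce = F.cross_entropy(logits, targets)
    reg = sum(p.pow(2).sum() for p in potentials)
    return ce + lam * reg

# Illustrative usage on a random batch:
model = MLP()
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
logits, pots = model(x)
loss = psp_loss(logits, y, pots)
loss.backward()
```

Under this reading, the penalty discourages large pre-activations rather than large weights, which is the distinction the abstract draws against standard ℓ2 regularization.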

Cited by 6 publications (3 citation statements)
References 20 publications
“… Of course many other issues have to be taken into account at training time, like the use of a validation set to tune the hyper-parameters, using a good regularization policy, etc., but these very general issues have been exhaustively discussed in many other works [32, 33, 34]. …”
Section: Results (mentioning), confidence: 99%
“…Such a problem cannot be tackled directly by attempting to hide the mutual information between samples in the same discriminatory class: the non-differentiability of such a measure, as well as the computational complexity it introduces, is a huge obstacle, and NDR proposes itself as a proxy for such a measure. Nonetheless, previous works have already shown that adding further constraints to the learning problem can be effective [7], as the trained ANN models are typically over-sized and allow a large number of solutions to the same learning task [8]. Our experiments with NDR show that in practical cases it is possible to strike a good balance between the non-discriminatory constraint and target performance.…”
Section: Introduction (mentioning), confidence: 76%
“…In this work we aim to overcome the above limitations by proposing a regularization method that produces a structured sparsification, focusing on removing entire neurons instead of single parameters. We also leverage our recent research showing that post-synaptic potential regularization is able to boost generalization over other regularizers [31].…”
Section: Related Work (mentioning), confidence: 99%
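The statement above contrasts structured (neuron-level) sparsification with parameter-level pruning. As a purely illustrative aid, the sketch below shows what removing entire neurons can look like in PyTorch; the magnitude criterion, the threshold `tau`, and the function `prune_neurons` are assumptions for illustration, not the method of the cited work.

```python
import torch
import torch.nn as nn

def prune_neurons(layer: nn.Linear, next_layer: nn.Linear, tau: float = 1e-2):
    # A neuron of `layer` corresponds to one row of layer.weight (its
    # incoming weights) and one column of next_layer.weight (its outgoing
    # weights); dropping a neuron removes both at once.
    norms = layer.weight.norm(dim=1)   # per-neuron norm of incoming weights
    keep = norms > tau                 # neurons to retain (tau is illustrative)
    k = int(keep.sum())
    pruned = nn.Linear(layer.in_features, k)
    pruned.weight.data = layer.weight.data[keep]
    pruned.bias.data = layer.bias.data[keep]
    shrunk = nn.Linear(k, next_layer.out_features)
    shrunk.weight.data = next_layer.weight.data[:, keep]
    shrunk.bias.data = next_layer.bias.data.clone()
    return pruned, shrunk

# Illustrative usage: shrink a hidden layer of 256 neurons.
fc1, fc2 = nn.Linear(784, 256), nn.Linear(256, 10)
fc1_small, fc2_small = prune_neurons(fc1, fc2, tau=0.5)
```

Unlike zeroing individual parameters, this yields smaller dense layers, which is what makes the sparsification "structured".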