2022
DOI: 10.48550/arxiv.2202.05766
Preprint

Learning via nonlinear conjugate gradients and depth-varying neural ODEs

Abstract: The inverse problem of supervised reconstruction of depth-variable (time-dependent) parameters in a neural ordinary differential equation (NODE) is considered, i.e., finding the weights of a residual network with time-continuous layers. The NODE is treated as an isolated entity describing the full network, as opposed to earlier research, which embedded it between pre- and post-appended layers trained by conventional methods. The proposed parameter reconstruction is done for a general first-order differentia…
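
In the continuum view described in the abstract, a residual network becomes the flow of an ODE dx/dt = f(x(t), θ(t)) whose parameters θ(t) vary with the depth variable t. A minimal sketch of such a depth-varying forward pass, assuming forward-Euler integration, a tanh vector field, and a piecewise-constant weight function (all illustrative choices, not the paper's exact architecture):

```python
import numpy as np

def node_forward(x0, weights, t0=0.0, t1=1.0):
    """Integrate dx/dt = tanh(W(t) @ x) with forward Euler,
    where the depth-varying parameter t -> W(t) is approximated
    by one weight matrix per time step (a "continuous" ResNet)."""
    dt = (t1 - t0) / len(weights)
    x = np.asarray(x0, dtype=float)
    for W in weights:
        x = x + dt * np.tanh(W @ x)  # one Euler step == one residual layer
    return x

# Illustrative use: width-2 state, 10 layers/time steps.
rng = np.random.default_rng(0)
d, n_steps = 2, 10
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(n_steps)]
print(node_forward([1.0, -1.0], weights))
```

Training in this setting means reconstructing the whole function t -> W(t) from input-output data, which is what makes it an inverse problem.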

Cited by 1 publication (4 citation statements)
References 28 publications

“…As we will demonstrate, this enables a general sensitivity analysis of the mapping. There is a recent example where NODEs were used in isolation from conventional network layers (Baravdish et al., 2022) but only for small networks on 2D toy datasets. We substantially scale the size of the NODE network, and apply it to image input data.…”
Section: Related Work
confidence: 99%

“…This optimization method is further extended to include the Sobolev gradient for trainable weights. Such a problem formulation has only been considered very recently in (Baravdish et al., 2022), and we significantly expand on this.…”
Section: Related Work
confidence: 99%
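
The Sobolev gradient mentioned in this snippet is a smoothed (H^1) representative of the ordinary L2 loss gradient, commonly obtained by solving (I - alpha * d^2/dt^2) s = g along the depth variable t. Below is a minimal sketch of one standard finite-difference discretization of that smoothing step; the uniform grid, Neumann boundary treatment, and parameter alpha are illustrative assumptions, not necessarily the cited paper's formulation:

```python
import numpy as np

def sobolev_gradient(g, dt, alpha=1.0):
    """Return the H^1 (Sobolev) gradient s solving
    (I - alpha * d^2/dt^2) s = g on a uniform grid in t,
    using second differences and simple Neumann boundaries.

    g: (n,) array, the L2 gradient of the loss w.r.t. theta(t),
    sampled at n points in t.
    """
    n = len(g)
    c = alpha / dt**2
    A = np.diag(np.full(n, 1.0 + 2.0 * c))
    A += np.diag(np.full(n - 1, -c), k=1)
    A += np.diag(np.full(n - 1, -c), k=-1)
    A[0, 0] = A[-1, -1] = 1.0 + c  # Neumann ends: row sums stay 1
    return np.linalg.solve(A, g)

# A noisy sampled gradient is smoothed; a constant one is unchanged.
g = np.ones(50) + 0.3 * np.random.default_rng(1).standard_normal(50)
print(sobolev_gradient(g, dt=1.0 / 50, alpha=1e-3).round(2))
```

The effect is to damp high-frequency components of the update, which is why Sobolev gradients are a natural fit when the trainable weights are themselves functions of depth.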