2018
DOI: 10.48550/arxiv.1802.08831
Preprint
Convolutional Neural Networks combined with Runge-Kutta Methods

Cited by 15 publications (17 citation statements)
References 20 publications
“…However, we found that dense layers with a layer combination are better. As reported in the previous section, our best accuracies were all achieved by RK4, which internally constructs connections similar to DenseNet or FractalNet, as described in Table 1 [26,28,47]. This is well aligned with the observation in ODEs that the explicit Euler method is inferior to RK4 in solving integral problems.…”
Section: Discussion on Linear vs. Dense (supporting)
confidence: 77%
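The claim quoted above, that the explicit Euler method is less accurate than RK4, is easy to verify on a toy ODE. The sketch below is generic solver code, not from the paper; the test problem $y' = y$, $y(0) = 1$ (exact solution $e^t$) is an illustrative choice.

```python
import math

def f(t, y):
    # Right-hand side of the test ODE dy/dt = y.
    return y

def euler(f, y0, t0, t1, n):
    # Explicit Euler: one function evaluation per step, first-order accurate.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    # Classical RK4: four evaluations per step, fourth-order accurate.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# With only 10 steps, Euler's error is on the order of 1e-1 while RK4's
# is on the order of 1e-6 for this problem.
print(abs(euler(f, 1.0, 0.0, 1.0, 10) - math.e))
print(abs(rk4(f, 1.0, 0.0, 1.0, 10) - math.e))
```

The gap widens as the integration interval grows, which is the numerical analogue of the accuracy gap the citing paper reports between Euler-style and RK4-style layer combinations.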
“…$\boldsymbol{\theta}_f$), and $f_4 = f(\boldsymbol{h}(t) + s f_3, t + s; \boldsymbol{\theta}_f)$. It is also known that dense convolutional networks (DenseNets [47]) and fractal neural networks (FractalNet [26]) are similar to RK4 (just as residual networks are similar to the explicit Euler method) [28]. For simplicity but without loss of generality, however, we use the explicit Euler method as our running example.…”
Section: Residual/Dense Connections and ODE Solvers (mentioning)
confidence: 99%
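The stage structure in the excerpt above can be sketched as a forward pass: each RK4 stage feeds on the outputs of earlier stages, which is what makes the connectivity resemble DenseNet or FractalNet, while a single residual step mirrors explicit Euler. The layer `f` below, its `tanh` nonlinearity, and the feature width are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "layer" f(h; theta): one linear map plus a nonlinearity.
W = rng.normal(scale=0.1, size=(8, 8))

def f(h):
    return np.tanh(h @ W)

def euler_block(h, s=1.0):
    # Residual connection h + s * f(h): the explicit-Euler analogue of ResNet.
    return h + s * f(h)

def rk4_block(h, s=1.0):
    # Four stage evaluations; each intermediate state reuses an earlier
    # stage output, giving DenseNet/FractalNet-like connectivity.
    f1 = f(h)
    f2 = f(h + s / 2 * f1)
    f3 = f(h + s / 2 * f2)
    f4 = f(h + s * f3)
    return h + s / 6 * (f1 + 2 * f2 + 2 * f3 + f4)

x = rng.normal(size=(1, 8))
print(euler_block(x).shape, rk4_block(x).shape)  # both (1, 8)
```

Both blocks preserve the feature shape, so either can be stacked in place of a standard residual block; the RK4 block simply spends four evaluations of `f` per step instead of one.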
“…[16,5,47] introduce non-regression losses inspired by Hamiltonian mechanics [18]. [38,12] have designed specific architectures, inspired by numerical schemes for solving PDEs and by residual neural networks [4,33,26,60], for predicting and identifying dynamical systems. [11] propose separation of variables, a classical resolution method for partial differential equations, as a general paradigm for video prediction and disentanglement.…”
Section: Related Work (mentioning)
confidence: 99%
“…The new architecture was used to achieve higher accuracy on image classification. Zhu [30] introduced the Runge-Kutta method to build a convolutional neural network and achieved superior accuracy. Chen [31] demonstrated neural ordinary differential equations in continuous-depth residual networks and continuous-time latent variable models.…”
Section: Background and Related Work (mentioning)
confidence: 99%