2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00070
Deep Equilibrium Optical Flow Estimation

Cited by 45 publications (47 citation statements)
References 27 publications
“…Despite this, it is common in the literature for these models to still be called "implicit", perhaps in reference to the fact that the geometry of the scene is defined "implicitly" by the weights of a neural network (a different definition of "implicit" than is used by the SDF literature). Also note that this is a definition of "implicit" distinct from what is commonly used by the deep learning and statistics communities, where "implicit" usually refers to models whose outputs are implicitly defined as fixed points of dynamic systems, and whose gradients are computed using the implicit function theorem [BKK19].…”
Section: Representing Volumes (mentioning)
confidence: 99%
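As a hedged illustration of that second sense of "implicit" (not drawn from any of the cited papers), the sketch below differentiates a fixed point with the implicit function theorem: if z* = f(z*, x), then dz*/dx = (I - df/dz)^{-1} df/dx evaluated at z*. The map f, its weights W and U, and the dimensions are all invented for the example.

```python
# Illustrative sketch (assumed example): gradient of a fixed point
# z* = f(z*, x) via the implicit function theorem,
#   dz*/dx = (I - df/dz)^{-1} df/dx, evaluated at z = z*.
import numpy as np

def f(z, x, W, U):
    # A simple contractive map; W is scaled down so fixed-point iteration converges.
    return np.tanh(W @ z + U @ x)

rng = np.random.default_rng(0)
n, m = 4, 3
W = 0.1 * rng.standard_normal((n, n))   # small spectral norm -> contraction
U = rng.standard_normal((n, m))
x = rng.standard_normal(m)

# Forward: iterate to an (approximate) fixed point z*.
z = np.zeros(n)
for _ in range(100):
    z = f(z, x, W, U)

# Implicit function theorem at z*: differentiate z* = f(z*, x) w.r.t. x.
s = 1.0 - np.tanh(W @ z + U @ x) ** 2          # tanh'(.) at the fixed point
J_z = s[:, None] * W                           # df/dz = diag(s) @ W
J_x = s[:, None] * U                           # df/dx = diag(s) @ U
dz_dx = np.linalg.solve(np.eye(n) - J_z, J_x)  # (I - df/dz)^{-1} df/dx
print(dz_dx.shape)                             # (4, 3)
```

Note that nothing in this computation requires storing the iterates of the forward solver; only the converged point and the Jacobians at that point are needed, which is what distinguishes this gradient from ordinary backpropagation through an unrolled iteration.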
“…In contrast, the naive backpropagation algorithm and ACA are not applicable to implicit functions because they assume explicit ones. (For the backpropagation algorithm for implicit functions, see [58]).…”
Section: B. Symplectic Runge-Kutta Methods for Adjoint System (mentioning)
confidence: 99%
“…We train the regularizer h_θ by minimizing the discrepancy between a fixed point x = T_θ(x) obtained via (10) and the ground-truth image x using MSE loss…”
Section: Jacobian-free Deep Equilibrium Learning (mentioning)
confidence: 99%
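A minimal sketch of what such a Jacobian-free training step can look like is given below. It is an illustration under assumptions, not the cited paper's code: T_theta is a generic learned fixed-point operator taking the current estimate and the measurements y, and the iteration counts are arbitrary. The defining trait is that the fixed-point iteration runs without gradients and backpropagation only passes through the final application of T_theta, dropping the inverse-Jacobian correction of the exact implicit gradient.

```python
# Assumed sketch of a Jacobian-free training step (operator name, arguments,
# and iteration counts are illustrative, not taken from the cited paper).
import torch
import torch.nn.functional as F

def jacobian_free_step(T_theta, y, x_true, optimizer, n_iters=50):
    # Fixed-point iteration x <- T_theta(x, y), run without building a graph.
    x = torch.zeros_like(x_true)
    with torch.no_grad():
        for _ in range(n_iters):
            x = T_theta(x, y)
    # One extra application with gradients enabled: the "Jacobian-free"
    # approximation keeps only this last step and omits the (I - J)^{-1}
    # term of the exact implicit gradient.
    x = T_theta(x, y)
    loss = F.mse_loss(x, x_true)   # MSE between the fixed point and the ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```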
“…DU is a DL paradigm that has gained popularity due to its ability to systematically connect iterative algorithms and deep neural network architectures (see reviews in [4,26]). DEQ [10] is a related approach that enables training of infinite-depth, weight-tied networks by analytically backpropagating through the fixed points using implicit differentiation. The DEQ output is specified implicitly as a fixed point of an operator T_θ parameterized by weights θ…”
Section: Introduction (mentioning)
confidence: 99%
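To make that description concrete, here is a hedged, generic DEQ-layer sketch in PyTorch. It is not the implementation from [10] or from the reviewed paper; the cell T_theta, the fixed solver iteration counts, and the tensor shapes are assumptions. The forward pass iterates z ← T_theta(z, x) to an approximate fixed point; the backward pass applies implicit differentiation, solving v = g + v·∂T/∂z at the fixed point with a vector-Jacobian-product loop before the gradient reaches the weights θ.

```python
# Generic DEQ-layer sketch (assumed, not the paper's code). Forward: iterate
# z <- T_theta(z, x) to a fixed point z*. Backward: implicit differentiation,
# solving v = g + v * dT/dz at z* by iterating vector-Jacobian products.
import torch
import torch.nn as nn

class DEQLayer(nn.Module):
    def __init__(self, T_theta, fwd_iters=50, bwd_iters=50):
        super().__init__()
        self.T = T_theta
        self.fwd_iters = fwd_iters
        self.bwd_iters = bwd_iters

    def forward(self, x):
        # Forward fixed-point iteration without an autograd graph.
        with torch.no_grad():
            z = torch.zeros_like(x)
            for _ in range(self.fwd_iters):
                z = self.T(z, x)
        # One differentiable application of T at the approximate fixed point.
        z0 = z.detach().requires_grad_()
        f0 = self.T(z0, x)

        def backward_hook(g):
            # Iterate v <- g + v * dT/dz (a VJP); converges when T is contractive.
            v = g
            for _ in range(self.bwd_iters):
                v = torch.autograd.grad(f0, z0, v, retain_graph=True)[0] + g
            return v

        if f0.requires_grad:
            f0.register_hook(backward_hook)
        return f0

# Usage with a tiny weight-tied cell T_theta(z, x) = tanh(W z + U x + b).
class Cell(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.Wz = nn.Linear(dim, dim, bias=False)
        self.Ux = nn.Linear(dim, dim)
        nn.init.normal_(self.Wz.weight, std=0.05)  # keep the map contractive

    def forward(self, z, x):
        return torch.tanh(self.Wz(z) + self.Ux(x))

deq = DEQLayer(Cell(16))
x = torch.randn(8, 16)
loss = deq(x).sum()
loss.backward()   # gradients reach the cell's weights through the implicit backward
```

Because the backward pass only needs VJPs at the fixed point, the memory cost is independent of how many forward iterations the solver ran, which is the usual argument for calling such weight-tied networks "infinite depth".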