2021
DOI: 10.48550/arxiv.2106.08314
Preprint

Causal Navigation by Continuous-time Neural Networks

Charles Vorbach,
Ramin Hasani,
Alexander Amini
et al.

Abstract: Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically over their discret…
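The abstract contrasts discrete-time recurrent models with continuous-time ones, in which the hidden state evolves under an ODE between observations. As a rough illustration only, the sketch below implements a generic continuous-time recurrent cell integrated with explicit Euler steps; the dynamics, function names, and parameter shapes are assumptions made for exposition, not the authors' actual architecture.

```python
import numpy as np

# Illustrative sketch only: a generic continuous-time recurrent cell
# integrated with fixed-step explicit Euler. Not the paper's exact model;
# all names and shapes here are assumptions.

def dxdt(x, u, W, U, b, tau):
    """Hidden-state dynamics dx/dt = -x/tau + tanh(W @ x + U @ u + b)."""
    return -x / tau + np.tanh(W @ x + U @ u + b)

def euler_rollout(x0, inputs, W, U, b, tau, dt=0.05, steps_per_input=4):
    """Integrate the ODE between successive observations."""
    x, trajectory = x0, []
    for u in inputs:
        for _ in range(steps_per_input):
            x = x + dt * dxdt(x, u, W, U, b, tau)
        trajectory.append(x.copy())
    return np.stack(trajectory)

rng = np.random.default_rng(0)
n_hidden, n_in, T = 8, 3, 10
W = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
U = rng.normal(scale=0.3, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
tau = np.full(n_hidden, 1.0)          # per-neuron time constants
x0 = np.zeros(n_hidden)
inputs = rng.normal(size=(T, n_in))   # e.g., a short observation sequence

print(euler_rollout(x0, inputs, W, U, b, tau).shape)  # (10, 8)
```

Unlike a discrete-time RNN, the state here is defined at every instant, and the integration step size can be chosen independently of the observation rate.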

Cited by 6 publications (6 citation statements)
References: 27 publications
“…This is caused especially by the high structural complexity leaving many structural details unresolved or concealed, which causes a high uncertainty of the manual annotations. Unsupervised and self-supervised machine learning procedures such as spatiotemporal vision transformers [58] or liquid neural networks [59] are promising tools to trace and identify individual cells in heavily cluttered images of BNNs.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…To avoid this, for tasks that require long-term dependencies, it is better to use them together with mixed memory networks (see CfC-mmRNN). Moreover, we speculate that inferring causality from ODE-based networks might be more straightforward than a closed-form solution (Vorbach et al., 2021). It would also be beneficial to assess if verifying a continuous neural flow (Grunbacher et al., 2021) is more tractable by an ODE representation of the system or their closed form.…”
Section: Scope, Discussion and Conclusion
Citation type: mentioning (confidence: 99%)
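The statement above weighs ODE-based continuous-time models against closed-form ones. Purely as a toy illustration of that distinction (not the cited CfC architecture; the linear leaky-integrator dynamics below are an assumption chosen because their exact solution is known), the sketch compares stepping an ODE numerically with evaluating its closed-form solution:

```python
import numpy as np

# Toy contrast: numerical ODE solving vs. a closed-form solution.
# For dx/dt = (x_ss - x) / tau with the input held constant over a step,
# the exact solution is an exponential relaxation toward x_ss.

tau, x_ss, x0, dt = 0.5, 1.0, 0.0, 0.4

# (a) Numerical route: many small explicit Euler steps.
x, h, n = x0, dt / 100, 100
for _ in range(n):
    x = x + h * (x_ss - x) / tau

# (b) Closed-form route: one evaluation, no solver loop.
x_exact = x_ss + (x0 - x_ss) * np.exp(-dt / tau)

print(f"Euler:       {x:.6f}")
print(f"Closed form: {x_exact:.6f}")  # agrees with Euler to ~1e-3
```

The closed-form route replaces the solver loop with a single expression, which is the efficiency argument behind closed-form continuous-time models; the quoted passage speculates that the ODE representation may nonetheless be the easier one to interrogate for causal structure.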
“…A potential future avenue of research emerging from this work will be to simultaneously learn the system dynamics and unsafe sets with BarrierNets. This can be enabled using the expressive class of continuous-time neural network models (Chen et al., 2018; Lechner et al., 2020a; Hasani et al., 2021b; Vorbach et al., 2021).…”
Section: Discussion
Citation type: mentioning (confidence: 99%)