2021
DOI: 10.1093/mnras/stab427

A self-supervised, physics-aware, Bayesian neural network architecture for modelling galaxy emission-line kinematics

Abstract: In the upcoming decades, large facilities such as the SKA will provide resolved observations of the kinematics of millions of galaxies. In order to assist in the timely exploitation of these vast datasets, we explore the use of a self-supervised, physics-aware neural network capable of Bayesian kinematic modelling of galaxies. We demonstrate the network's ability to model the kinematics of cold gas in galaxies with an emphasis on recovering physical parameters and accompanying modelling errors. The model …
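The abstract's goal of recovering physical parameters with accompanying modelling errors can be illustrated with a much simpler classical stand-in: fitting the common arctan rotation-curve parameterization and estimating parameter uncertainties by bootstrap resampling. This is a minimal sketch only; the rotation-curve form, noise level, and bootstrap procedure are illustrative assumptions and are not the paper's self-supervised Bayesian network.

```python
import numpy as np
from scipy.optimize import curve_fit

# Classic arctan rotation-curve parameterization often used in galaxy
# kinematics (an illustrative stand-in; the paper's network is not
# reproduced here).
def v_rot(r, v_max, r_t):
    return v_max * (2.0 / np.pi) * np.arctan(r / r_t)

rng = np.random.default_rng(42)
r = np.linspace(0.1, 10.0, 40)                  # radius [kpc]
v_true = v_rot(r, 200.0, 1.5)                   # "truth": v_max = 200 km/s
v_obs = v_true + rng.normal(0.0, 5.0, r.size)   # noisy observations

# Bootstrap refits as a simple proxy for Bayesian error estimates:
# refit on resampled data and read parameter spread off the ensemble.
fits = []
for _ in range(200):
    idx = rng.integers(0, r.size, r.size)
    p, _ = curve_fit(v_rot, r[idx], v_obs[idx], p0=[150.0, 1.0])
    fits.append(p)
fits = np.array(fits)
v_max_mean, v_max_err = fits[:, 0].mean(), fits[:, 0].std()
```

The spread of the bootstrap ensemble plays the role of the "accompanying modelling errors" the abstract refers to, at a fraction of the machinery.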

Cited by 3 publications
(3 citation statements)
References 54 publications
“…The concept of IF is proposed instead of being constructed from a simple amplitude or frequency. The expression of IF is shown in Equation (22).…”
Section: Subjective Complexity Evaluation Indicator
confidence: 99%
“…From the perspective of evaluation methods, there are many evaluation methods currently studied, such as AHP [18], Dempster-Shafer (D-S) evidence theory method [19], multiple connection number method [20], cloud barycenter evaluation method, neural network [21,22] etc. Dawson J.M [23] uses AHP to search the complexity of the electromagnetic environment, then establishes a hierarchical structure, and gives specific evaluation criteria.…”
Section: Introduction
confidence: 99%
“…In particular, in such a complex product selectivity model, the PI acquisition function tended to explore near the maximum value, which provided a practical compromise for the optimization properties of BNN. To obtain better optimization performance, BNN's complex model architecture [50] requires more datasets with more samples and more training rounds to demonstrate its powerful learning ability. Overall, the surrogate model performs more consistently compared to BNN in bivariate optimization due to GP's superior performance on small datasets.…”
Section: Optimization Of Product Selectivity Including Catalyst Deact...
confidence: 99%
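The comparison in the statement above — a GP surrogate outperforming a BNN on small datasets, with the PI acquisition function exploiting near the current maximum — can be sketched in a few lines. The RBF kernel, toy objective, and candidate grid below are illustrative assumptions, not details from the cited study.

```python
import numpy as np
from scipy.stats import norm

# Minimal GP regression plus probability-of-improvement (PI) acquisition.
def rbf(a, b, ls=0.4):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def objective(x):                        # toy "selectivity" surface
    return np.sin(3.0 * x) * x

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0, 6)             # small dataset: the GP's home turf
y = objective(X)

Xc = np.linspace(0.0, 2.0, 200)          # candidate points
K = rbf(X, X) + 1e-8 * np.eye(X.size)    # jitter for numerical stability
Ks = rbf(Xc, X)
mu = Ks @ np.linalg.solve(K, y)          # GP posterior mean
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
sigma = np.sqrt(np.maximum(var, 1e-12))  # GP posterior std

# PI favours points likely to beat the incumbent best observation,
# which is why it tends to exploit near the current maximum.
pi = norm.cdf((mu - y.max()) / sigma)
x_next = Xc[np.argmax(pi)]
```

Because PI only rewards the probability of any improvement, not its magnitude, it concentrates proposals near the incumbent optimum, matching the behaviour the citing authors describe.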