2021
DOI: 10.48550/arxiv.2103.11965
Preprint

Gauge covariant neural network for 4 dimensional non-abelian gauge theory

Abstract: We develop a gauge covariant neural network for four-dimensional non-abelian gauge theory, which realizes a map between rank-2 tensor-valued vector fields. We find that the conventional smearing procedure and gradient flow for gauge fields can be regarded as known neural networks: residual networks and neural ordinary differential equations for rank-2 tensors with fixed parameters. In the machine learning context, the projection or normalization functions in the smearing schemes correspond to an activation funct…
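The correspondence the abstract draws — smearing as a residual network, with projection as the activation — can be sketched minimally. This is an illustration only, not the paper's implementation: the polar-decomposition projection and the single APE-style weight `rho` are assumptions standing in for the paper's actual smearing scheme.

```python
import numpy as np

def project_unitary(m):
    # "Activation": project a matrix back onto the unitary group
    # via the polar decomposition (SVD with singular values set to 1).
    u, _, vh = np.linalg.svd(m)
    return u @ vh

def smearing_step(link, staple_sum, rho=0.1):
    # Residual-network form of one smearing step:
    # new link = activation(link + rho * neighborhood term).
    # Here rho plays the role of a fixed network weight.
    return project_unitary(link + rho * staple_sum)
```

Iterating `smearing_step` gives the layered structure of a residual network with tied, fixed parameters; in that reading, making `rho` trainable is what turns smearing into a neural network.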

Cited by 7 publications (8 citation statements)
References 63 publications
“…One possible direction is the incorporation of larger Wilson loops and the Polyakov lines as input to the neural network. Another direction is the adoption of novel approaches which respect the gauge symmetry in the neural network, such as the lattice gauge equivariant convolutional neural networks [36] and the gauge covariant neural network [46]. In addition, there is a different approach that modifies the action instead of the integral path [47][48][49].…”
Section: Discussion
confidence: 99%
“…A more general convolutional architecture was introduced in Reference [98], which allows for equivariance under a wider class of global symmetries, including rotations and discrete transformations. Building on these foundations, more recent developments include convolutional networks that are equivariant under Lie groups [99][100][101][102]. There is no reason why the couplings of the theory have to remain constant during training.…”
Section: B. Improving on the Scalability with Physics
confidence: 99%
“…The target densities defined in Table I are all invariant under translations with appropriate boundary conditions, as discussed in Section II C. Previous works have shown that exactly incorporating known symmetries into machine learning models can accelerate their training and improve their final quality [86][87][88][89][90][91]. In the context of normalizing flows, ensuring that the model density is invariant under a symmetry group is achieved by choosing an invariant prior distribution and building transformation layers that are equivariant under the symmetry.…”
Section: B. Building Blocks
confidence: 99%
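The equivariance property invoked in the excerpt above — that transformation layers should commute with the symmetry group — can be demonstrated with a minimal sketch. A circular convolution on a periodic lattice commutes with translations, which is the basic ingredient of translation-equivariant coupling layers in normalizing flows; the function names here are illustrative, not taken from the cited works.

```python
import numpy as np

def circular_conv(x, kernel):
    # 1D convolution with periodic (circular) boundary conditions.
    # Because every site is treated identically, this map is
    # translation-equivariant by construction.
    n = len(x)
    k = len(kernel)
    return np.array([sum(kernel[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])
```

Equivariance means `circular_conv(shift(x)) == shift(circular_conv(x))` for any lattice translation; building coupling layers from such maps (together with a translation-invariant prior) yields a model density invariant under translations, as the excerpt describes.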