2019
DOI: 10.1016/j.nima.2018.10.043
The use of adversaries for optimal neural network training

Abstract: B-decay data from the Belle experiment at the KEKB collider contain a substantial background from e⁺e⁻ → qq̄ continuum events. To suppress it we employ deep neural network algorithms, which improve the discrimination of signal from background. However, the deep neural network develops a substantial correlation with the ∆E kinematic variable used to distinguish signal from background in the final fit, owing to ∆E's relationship with the input variables. The effect of this correlation is reduced by deploying an adversarial neu…
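The setup described in the abstract (a classifier trained against a second network that tries to recover ∆E from the classifier's output) can be sketched as below. This is a minimal illustration, not the authors' code: the network sizes, the lam trade-off, and the loader of (features, label, ∆E) batches are all assumptions.

import torch
import torch.nn as nn

# Minimal sketch of adversarial decorrelation (illustrative only).
# The classifier separates B-decay signal from continuum background; the
# adversary tries to regress Delta E from the classifier output. Penalising
# the adversary's success drives the classifier toward Delta-E-independent outputs.
classifier = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
adversary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_clf = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce, mse = nn.BCELoss(), nn.MSELoss()
lam = 10.0  # discrimination-vs-decorrelation trade-off (assumed value)

for x, y, delta_e in loader:  # hypothetical loader of (features, label, Delta E)
    # Step 1: update the adversary to predict Delta E from the frozen classifier output.
    opt_adv.zero_grad()
    mse(adversary(classifier(x).detach()), delta_e).backward()
    opt_adv.step()
    # Step 2: update the classifier to separate classes while fooling the adversary.
    opt_clf.zero_grad()
    out = classifier(x)
    (bce(out, y) - lam * mse(adversary(out), delta_e)).backward()
    opt_clf.step()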

Cited by 4 publications (5 citation statements). References 17 publications.

Citation statements:
“…An established way to test a network is to exclude known, well-defined pieces of information from it through adversarial networks [51][52][53][54][55][56][57]. They consist of two networks playing against each other.…”
Section: De-correlating the Mass
Mentioning confidence: 99%
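In the usual formulation of this two-network game (notation ours, not quoted from refs. [51]–[57]), classifier f and adversary g are trained toward a saddle point:

\hat{\theta}_f = \arg\min_{\theta_f} \left[ L_{\mathrm{clf}}(\theta_f) - \lambda\, L_{\mathrm{adv}}(\theta_f, \hat{\theta}_g) \right], \qquad \hat{\theta}_g = \arg\min_{\theta_g} L_{\mathrm{adv}}(\hat{\theta}_f, \theta_g),

so the classifier minimises its own loss while maximising the adversary's, which excludes the protected information (here, the mass) from its output.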
“…In this paper we will go even further and show how autoencoders work in the absence of a signal sample. The second challenge can for example be addressed with adversarial networks, de-correlating for example kinematic information or theory assumptions [50][51][52][53][54][55][56][57]. Alternatively, refiner networks [84] can be used to improve the quality of simulation.…”
Section: Introduction
Mentioning confidence: 99%
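As a companion to the quoted point about autoencoders working without a signal sample, here is a minimal reconstruction-error sketch; the architecture, layer sizes, and the background_loader are assumptions, not the cited paper's setup.

import torch
import torch.nn as nn

# Autoencoder trained on background only: events it reconstructs poorly
# (large reconstruction error) are flagged as anomalous / signal-like.
ae = nn.Sequential(
    nn.Linear(20, 8), nn.ReLU(),  # encoder: compress to a bottleneck
    nn.Linear(8, 20),             # decoder: reconstruct the input
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
mse = nn.MSELoss()

for x in background_loader:  # hypothetical loader of background events
    opt.zero_grad()
    mse(ae(x), x).backward()
    opt.step()

# Anomaly score at test time: per-event mean squared reconstruction error
# (x_test: a batch of held-out events, assumed defined).
score = ((ae(x_test) - x_test) ** 2).mean(dim=1)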
“…The number of input data sets and performance in the quantum approach was limited due to the size and error rates of current quantum computers. If expanded to use more inputs and larger dataset sizes it is foreseeable that the QSVM may be able to compete with current state-of-the-art classical techniques, which can achieve AUCs of 0.930 [10]. There are a plethora of alternative quantum machine learning methods that could utilise the encoding circuits discussed here.…”
Section: Discussion
Mentioning confidence: 99%
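The AUC figure quoted here is a standard ROC area. A short scikit-learn sketch of scoring an SVM with a precomputed kernel, as in a kernel-based QSVM; the random data and the linear stand-in kernel are placeholders, not the cited analysis.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(100, 5)), rng.normal(size=(40, 5))
y_train, y_test = rng.integers(0, 2, 100), rng.integers(0, 2, 40)

# Precomputed Gram matrices: a plain linear kernel stands in for the quantum
# kernel, whose entries would be overlaps of quantum feature-map states.
K_train = X_train @ X_train.T
K_test = X_test @ X_train.T

svm = SVC(kernel="precomputed").fit(K_train, y_train)
auc = roc_auc_score(y_test, svm.decision_function(K_test))
print(f"AUC = {auc:.3f}")  # the cited classical benchmark reaches 0.930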
“…We refer to these particles as originating from the B candidate. If we use particles from the B candidate itself to perform the classification then we run the risk of sculpting the background to look like signal [10]. We therefore exclude particles associated with the B candidate and use the variables from the other B meson which are not correlated with the kinematic variables of the signal B.…”
Section: Introduction
Mentioning confidence: 99%
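The veto described in this excerpt, dropping inputs that correlate with the signal B's kinematics, can be expressed as a simple correlation filter; the function name, threshold, and Pearson-correlation criterion below are illustrative assumptions.

import numpy as np

def uncorrelated_features(X, delta_e, names, threshold=0.1):
    # Keep only features whose |Pearson r| with the kinematic fit
    # variable (e.g. Delta E) stays below the (assumed) threshold.
    keep = []
    for j, name in enumerate(names):
        r = np.corrcoef(X[:, j], delta_e)[0, 1]
        if abs(r) < threshold:
            keep.append(name)
    return keep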
“…The amount of inputs in our approach was limited due to the size and error rates of current quantum computers. If expanded to use all the available data, it is foreseeable that the QSVM may be able to compete with current state-of-the-art classical techniques, which can achieve AUCs of 0.930 [27]. There are also a plethora of alternatives to using an SVM method; quantum generated features created from the encoding gates explored here could be passed to another classical or quantum classifying algorithm.…”
Section: Discussion
Mentioning confidence: 99%