2018
DOI: 10.2139/ssrn.3335592
Towards Explainable AI: Significance Tests for Neural Networks

Cited by 22 publications (27 citation statements). References 29 publications.
“…We rank the importance of firm-specific and macroeconomic variables for the pricing kernel based on the sensitivity of the SDF weight ω with respect to these variables. Our sensitivity analysis is similar to Sirignano et al (2016) and Horel and Giesecke (2019) and based on the average absolute gradient. More specifically, we define the sensitivity of a particular variable as the average absolute derivative of the weight w with respect to this variable:…”
Section: F Variable Importancementioning
confidence: 99%
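The sensitivity measure quoted above (the average absolute derivative of the SDF weight with respect to each input variable) can be sketched for a toy network. The one-hidden-layer architecture, tanh activation, and random weights and data below are illustrative assumptions for the sketch, not the cited authors' actual model.

```python
import numpy as np

def network(X, W1, b1, W2):
    """Toy one-hidden-layer network: omega(x) = tanh(x W1 + b1) W2."""
    return np.tanh(X @ W1 + b1) @ W2

def sensitivity(X, W1, b1, W2):
    """Average absolute derivative of the scalar output w.r.t. each input.

    Chain rule for the toy network:
        d omega / dx_j = sum_k W1[j, k] * tanh'(z_k) * W2[k]
    """
    H = np.tanh(X @ W1 + b1)               # hidden activations, (n, hidden)
    dH = 1.0 - H ** 2                      # tanh'(z), (n, hidden)
    grads = (dH * W2.ravel()) @ W1.T       # per-sample gradients, (n, d)
    return np.mean(np.abs(grads), axis=0)  # one sensitivity per variable, (d,)

# Illustrative random weights and inputs (assumed, for demonstration only).
rng = np.random.default_rng(0)
d, hidden, n = 3, 8, 500
W1 = rng.normal(size=(d, hidden))
b1 = rng.normal(size=hidden)
W2 = rng.normal(size=(hidden, 1))
X = rng.normal(size=(n, d))

sens = sensitivity(X, W1, b1, W2)  # larger value = more influential variable
```

Ranking variables by `sens` then mirrors the procedure described in the quote: inputs with a larger average absolute gradient are deemed more important for the pricing kernel.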
“…In order to deal with the infinite number of moment conditions we extend the classical GMM setup of Hansen (1982) and Chamberlain (1987) by an adversarial 3 Other related work includes Sirignano et al (2016) who estimate mortgage prepayments, delinquencies, and foreclosures with deep neural networks, Moritz and Zimmerman (2016) who apply tree-based models to portfolio sorting, and Heaton et al (2017) who automate portfolio selection with a deep neural network. Horel and Giesecke (2019) propose a significance test for neural networks and apply it to house price valuation.…”
Section: Introductionmentioning
confidence: 99%
“…Let us stress that our trading system is not a black box; the logic of its decisions concerning trading stocks (any instruments) can be fully reconstructed and understood; cf. (Horel and Giesecke 2019). We found few situations where its decisions could be questioned on the basis of the usual technical analysis, though the system uses the stock charts and its own prior decisions in novel ways.…”
Section: The Key: Bidding Tablesmentioning
confidence: 93%
“…We note that the trades of our system are fully explainable; it is not a "black box". Only such AI can be truly trustworthy; see, e.g., (Horel and Giesecke 2019).…”
Section: Momentum Investingmentioning
confidence: 99%