Extraction of sentiment signals for stock movement prediction from news text, stock message boards, and business reports has been a rising field of interest in finance. Building upon past literature, the most recent work attempts to better capture sentiment from sentences with complex syntactic structures by introducing aspect-level sentiment classification (ASC). Despite the growing interest, however, fine-grained sentiment analysis has not been fully explored in non-English literature due to the shortage of annotated finance-specific data. Accordingly, non-English languages must leverage datasets and pre-trained language models (PLMs) from different domains, languages, and tasks to improve performance. To facilitate finance-specific ASC research in the Korean language, we build KorFinASC, a Korean aspect-level sentiment classification dataset for finance consisting of 12,613 human-annotated samples, and explore methods of intermediate transfer learning. Our experiments indicate that past research has overlooked the potentially incorrect knowledge of financial entities encoded during the training phase, which has led to overestimating the predictive power of PLMs. In our work, we use the term "non-stationary knowledge" to refer to information that was previously correct but is likely to change, and present "TGT-Masking", a novel masking pattern that restricts PLMs from speculating on knowledge of this kind. Finally, through a series of transfer learning experiments with TGT-Masking applied, we improve classification accuracy by 22.63% over standalone models on KorFinASC.
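The abstract describes TGT-Masking only at a high level: the target entity's tokens are hidden so the model cannot lean on memorized, possibly stale facts about that entity and must classify sentiment from the surrounding context. A minimal sketch of that idea is below; the function name, mask token, and token-level matching are our illustrative assumptions, not the paper's actual implementation.

```python
def tgt_mask(tokens, target_entity, mask_token="[MASK]"):
    """Replace every token matching the target entity with a mask token.

    Illustrative sketch of entity-level masking: the classifier then
    sees only the context ("shares rallied after earnings"), not the
    entity name, so memorized knowledge about the entity cannot leak in.
    """
    return [mask_token if t == target_entity else t for t in tokens]

sentence = ["Samsung", "shares", "rallied", "after", "earnings"]
print(tgt_mask(sentence, "Samsung"))
# ['[MASK]', 'shares', 'rallied', 'after', 'earnings']
```

In practice the mask token would be the PLM tokenizer's own mask symbol, and multi-token entity names would need span-level matching rather than single-token equality.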
The Black-Scholes model, defined under the assumption of a perfect financial market, theoretically yields a flawless hedging strategy that allows the trader to eliminate risk in a portfolio of options. However, a "perfect financial market", which requires zero transaction costs and continuous trading, is difficult to realize in the real world. Despite these widely known limitations, academics have failed to develop alternative models successful enough to become long-established. In this paper, we explore the landscape of Deep Neural Network (DNN) based hedging systems by testing the hedging capacity of the following neural architectures: Recurrent Neural Networks, Temporal Convolutional Networks, Attention Networks, and Span Multi-Layer Perceptron Networks. In addition, we attempt to achieve even more promising results by combining traditional derivative hedging models with DNN-based approaches. Lastly, we construct NNHedge, a deep learning framework that provides seamless pipelines for model development and assessment for the experiments.

Keywords: Black-Scholes • Neural Derivative Hedging • NNHedge

1. Timely evidence that neural networks score lower profit and loss when taught traditional hedging strategies.
2. Observations that neural networks concentrate more on historical data to calculate present-day delta values.
3. The open-sourcing of NNHedge, a deep learning framework for neural derivative hedging.
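For context, the classical baseline these neural hedgers are measured against is the Black-Scholes delta hedge, which is available in closed form. The sketch below implements only the standard textbook formulas for a European call; the function names are ours for illustration and are not part of the NNHedge API.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call: N(d1).

    S: spot price, K: strike, T: time to maturity (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return NormalDist().cdf(d1)

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * NormalDist().cdf(d1) - K * exp(-r * T) * NormalDist().cdf(d2)

# At-the-money call, one year out, 20% vol, zero rate:
print(bs_call_delta(100, 100, 1.0, 0.0, 0.2))  # ≈ 0.5398
print(bs_call_price(100, 100, 1.0, 0.0, 0.2))  # ≈ 7.97
```

A delta hedge rebalances the stock position to `bs_call_delta(...)` shares at each step; the paper's neural models instead learn the hedge ratio directly from data and are compared against this closed-form benchmark.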