Our code is available at https://github.com/yonxie/AdvFinTweet.

It is now known that text-based deep learning models can be vulnerable to adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015). The perturbation can be at the sentence level (e.g., Xu et al., 2021; Iyyer et al., 2018; Ribeiro et al., 2018), the word level (e.g., Zhang et al., 2019; Alzantot et al., 2018; Zang et al., 2020; Jin et al., 2020; Lei et al., 2019; Zhang et al., 2021; Lin et al., 2021), or both (Chen et al., 2021). We are interested in whether such vulnerability to adversarial attacks also exists in stock prediction models, as these models increasingly incorporate human-generated media data (e.g., Twitter, Reddit, Stocktwits, Yahoo News) (Xu and Cohen, 2018; Sawhney et al., 2021).