Many researchers have combined deep neural networks (DNNs) with reinforcement learning (RL) in automatic trading systems. However, such methods result in complicated algorithmic trading models with several defects, especially because a DNN model is vulnerable to malicious adversarial samples. Research has rarely focused on planning long-term attacks against RL-based trading systems. To mount such attacks, an adversary must generate imperceptible perturbations while simultaneously reducing the number of modified steps. In this research, an adversary is used to attack an RL-based trading agent. First, we propose an extension of the ensemble of identical independent evaluators (EIIE) method, called enhanced EIIE, which incorporates information on the best bids and asks. Enhanced EIIE was demonstrated to produce an authoritative trading agent that yields better portfolio performance than an EIIE agent. Enhanced EIIE was then applied to the adversarial agent so that the agent learns when and how much to attack (in the form of introduced perturbations). In our experiments, our proposed adversarial attack mechanisms were more than 30% more effective at reducing accumulated portfolio value than the conventional attack mechanisms of the fast gradient sign method (FGSM) and iterative FGSM, which are currently the most commonly studied and adapted baselines.
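For reference, the FGSM baseline mentioned above perturbs an input in the direction of the sign of the loss gradient, and its iterative variant repeats that step while projecting back into an epsilon-ball around the original input. The sketch below illustrates both in PyTorch; the model, loss function, epsilon, and step size are illustrative assumptions, not the paper's actual trading setup.

```python
# A minimal PyTorch sketch of the FGSM and iterative-FGSM baselines
# referred to above. The model, loss function, epsilon, and step size
# are illustrative assumptions, not the paper's trading setup.
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.01):
    """One FGSM step: move x by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def iterative_fgsm(model, loss_fn, x, y, epsilon=0.01, alpha=0.002, steps=10):
    """Repeat small FGSM steps, projecting back into an epsilon-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm_perturb(model, loss_fn, x_adv, y, epsilon=alpha)
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).detach()
    return x_adv
```

The single-step variant trades attack strength for speed; the iterative variant is typically stronger at the same perturbation budget, which is why both are standard comparison baselines.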
The data of recommendation systems typically contain only purchased items as positive data and all unpurchased items as unlabeled data. To train a good recommendation model, high-quality negative information is needed in addition to the known positive information. Capturing negative signals in positive and unlabeled data is challenging for recommendation systems. Most studies have used specific datasets and proposed negative sampling methods suited to those data characteristics. Existing negative sampling strategies cannot automatically select a suitable approach for different data. Such a one-size-fits-all strategy often causes potential positive samples to be treated as negative, or truly negative samples to be treated as potential positives and recommended to users. This not only degrades the recommendation results but can even have an adverse effect. Accordingly, we propose a novel negative sampling model, Reinforced PU-learning with Hybrid Negative Sampling Strategies for Recommendation (RHNSR), which combines multiple sampling strategies and dynamically adjusts the proportions used by each strategy. In addition, ensemble learning, which integrates various model sampling strategies to obtain an improved solution, was applied to RHNSR. Extensive experiments were conducted on three real-world recommendation datasets; the results indicate that the proposed model significantly outperforms state-of-the-art baseline models, with substantial improvements in precision and hit ratio (49.02% and 37.41%, respectively).
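As a rough illustration of the hybrid idea, the sketch below mixes several negative-sampling strategies and reinforces their mixing proportions from a scalar reward. The strategy interface, the softmax weighting, and the update rule are assumptions made for illustration; the paper's actual reinforced PU-learning formulation is not reproduced here.

```python
# A toy sketch of hybrid negative sampling with dynamically adjusted
# strategy proportions. Everything here (strategy interface, softmax
# weighting, reward-based update) is an illustrative assumption, not
# the RHNSR formulation from the paper.
import math
import random

class HybridNegativeSampler:
    def __init__(self, strategies, lr=0.1):
        # strategies: {name: callable(user) -> candidate negative item}
        self.strategies = strategies
        self.scores = {name: 0.0 for name in strategies}
        self.lr = lr

    def weights(self):
        # Softmax over scores gives the current mixing proportions.
        exps = {n: math.exp(s) for n, s in self.scores.items()}
        total = sum(exps.values())
        return {n: e / total for n, e in exps.items()}

    def sample(self, user):
        # Pick a strategy according to the current proportions.
        w = self.weights()
        names = list(w)
        name = random.choices(names, weights=[w[n] for n in names])[0]
        return name, self.strategies[name](user)

    def update(self, name, reward):
        # Reinforce strategies whose negatives improved the model
        # (reward could be, e.g., a change in a validation ranking metric).
        self.scores[name] += self.lr * reward

# Example with two toy strategies: uniform and popularity-biased sampling.
items = list(range(100))
sampler = HybridNegativeSampler({
    "uniform": lambda u: random.choice(items),
    "popular": lambda u: random.choices(items, weights=[i + 1 for i in items])[0],
})
name, neg = sampler.sample(user=0)
sampler.update(name, reward=0.05)
```

Letting the reward drive the proportions is what allows the mixture to adapt per dataset rather than committing to a single one-size-fits-all sampling rule.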