Background
BBV152 is a whole-virion inactivated SARS-CoV-2 vaccine (3 µg or 6 µg) formulated with a toll-like receptor 7/8 agonist molecule (IMDG) adsorbed to alum (Algel). We previously reported findings from a double-blind, multicentre, randomised, controlled phase 1 trial on the safety and immunogenicity of three different formulations of BBV152 (3 µg with Algel-IMDG, 6 µg with Algel-IMDG, or 6 µg with Algel) and one Algel-only control (no antigen), with the first dose administered on day 0 and the second dose on day 14. The 3 µg and 6 µg with Algel-IMDG formulations were selected for this phase 2 study. Herein, we report interim findings of the phase 2 trial on the immunogenicity and safety of BBV152, with the first dose administered on day 0 and the second dose on day 28.

Methods
We did a double-blind, randomised, multicentre, phase 2 clinical trial to evaluate the immunogenicity and safety of BBV152 in healthy adults and adolescents (aged 12-65 years) at nine hospitals in India. Participants with positive SARS-CoV-2 nucleic acid and serology tests were excluded. Participants were randomly assigned (1:1) to receive either 3 µg with Algel-IMDG or 6 µg with Algel-IMDG. Block randomisation was done by use of an interactive web response system. Participants, investigators, study coordinators, study-related personnel, and the sponsor were masked to treatment group allocation. Two intramuscular doses of vaccine were administered on day 0 and day 28. The primary outcome was SARS-CoV-2 wild-type neutralising antibody titres and seroconversion rates (defined as a post-vaccination titre that was at least four-fold higher than the baseline titre) at 4 weeks after the second dose (day 56), measured by use of the plaque-reduction neutralisation test (PRNT50) and the microneutralisation test (MNT50). The primary outcome was assessed in all participants who had received both doses of the vaccine. Cell-mediated responses were a secondary outcome and were assessed by T-helper-1 (Th1)/Th2 profiling at 2 weeks after the second dose (day 42). Safety was assessed in all participants who received at least one dose of the vaccine. In addition, we report immunogenicity results from a follow-up blood draw collected from phase 1 trial participants at 3 months after they received the second dose (day 104). This trial is registered at ClinicalTrials.gov, NCT04471519.
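As context for the primary outcome above, the following is a minimal, illustrative sketch (not the trial's statistical analysis plan) of how geometric mean titres with log-scale 95% CIs and seroconversion rates (a ≥4-fold rise over baseline) are conventionally computed; all function names and inputs here are hypothetical.

```python
import numpy as np
from scipy import stats

def geometric_mean_titre(titres, confidence=0.95):
    # GMT: mean and CI are computed on the log scale, then exponentiated back.
    logs = np.log(np.asarray(titres, dtype=float))
    lo, hi = stats.t.interval(confidence, df=len(logs) - 1,
                              loc=logs.mean(), scale=stats.sem(logs))
    return float(np.exp(logs.mean())), (float(np.exp(lo)), float(np.exp(hi)))

def seroconversion_rate(baseline, post, fold=4):
    # Proportion of participants whose post-vaccination titre is >= fold x baseline.
    baseline, post = np.asarray(baseline, float), np.asarray(post, float)
    return float(np.mean(post >= fold * baseline))
```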
Findings
Between Sept 5 and 12, 2020, 921 participants were screened, of whom 380 were enrolled and randomly assigned to the 3 µg with Algel-IMDG group (n=190) or the 6 µg with Algel-IMDG group (n=190). Geometric mean titres (GMTs; PRNT50) at day 56 were significantly higher in the 6 µg with Algel-IMDG group (197.0 [95% CI 155.6-249.4]) than in the 3 µg with Algel-IMDG group (100.9 [74.1-137.4]; p=0.0041). Seroconversion based on PRNT50 at day 56 was reported in 171 (92.9% [95% CI 88.2-96.2]) of 184 participants in the 3 µg with Algel-IMDG group and 174 (98.3% [95.1-99.6]) of 177 participants in the 6 µg with Algel-IMDG group. GMTs (MNT50) at day 56 were 92.5 (95% CI 77.7-11...
Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r = +1 for matching the demonstrated action in a demonstrated state, and a constant reward of r = 0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo. This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards.
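To make the "handful of minor modifications" concrete, here is a hedged sketch of SQIL on top of tabular soft Q-learning: demonstration transitions are stored with a constant reward of +1, the agent's own transitions with 0, and each batch is balanced between the two before an otherwise standard soft Bellman update. The buffer class, hyperparameters, and data layout are placeholders for illustration, not the authors' released code.

```python
import random
from collections import deque

import numpy as np

class SQILBuffers:
    """Two replay buffers: expert demonstrations (reward +1) and agent experience (reward 0)."""
    def __init__(self, capacity=100_000):
        self.demo = deque(maxlen=capacity)   # filled once with expert (s, a, s') transitions
        self.agent = deque(maxlen=capacity)  # filled online with the agent's own transitions

    def sample(self, batch_size):
        # Balanced sampling: half the batch gets the constant reward r=+1, half gets r=0.
        half = batch_size // 2
        batch = [(s, a, 1.0, s2) for s, a, s2 in random.sample(self.demo, half)]
        batch += [(s, a, 0.0, s2) for s, a, s2 in random.sample(self.agent, half)]
        return batch

def soft_q_update(Q, batch, alpha=0.1, gamma=0.99, temperature=1.0):
    """One tabular soft Q-learning step. Q maps state -> np.ndarray of action values
    (e.g. a collections.defaultdict, so unseen states start at zero)."""
    for s, a, r, s2 in batch:
        # Soft state value: temperature * logsumexp(Q(s', .) / temperature).
        v_next = temperature * np.log(np.exp(Q[s2] / temperature).sum())
        Q[s][a] += alpha * (r + gamma * v_next - Q[s][a])
```

In the deep setting, the same idea amounts to relabelling rewards in the replay buffer of a standard DQN or off-policy actor-critic agent; no discriminator or learned reward model is involved.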