Integrating evidence from multiple sources to guide decisions is something humans do on a daily basis. Existing research suggests that not all sources of information are weighted equally in decision-making tasks, and that observers are subject to biases in the face of internal and external noise. Here we describe two experiments that measured observers' ability to integrate successive visual signals. Participants viewed pairs of gratings presented sequentially and reproduced their average orientation. Experiment 1 revealed a recency bias in evidence integration: observers' average judgments fell closer to the orientation of the second grating than to that of the first. Mixture distribution modeling showed that this reflected both a recency bias in averaging and a tendency to disregard the first stimulus altogether on some trials. In Experiment 2 we replicated these findings and quantified orientation-specific patterns of neural activity recorded during the task, using population tuning curve modeling of electroencephalography (EEG) data. This analysis yielded robust orientation tuning to both the presented gratings and observers' decisions, and indicated that observers stored both gratings for subsequent averaging rather than computing a running average. The neural representation of the second grating was not reliably stronger than that of the first, suggesting that the recency bias does not stem from stronger encoding of the second stimulus but may instead arise at a later decision stage, when the stored information is retrieved or integrated.
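
To make the behavioral analysis concrete, the sketch below illustrates the general mixture-modeling approach described for Experiment 1, not the authors' actual code or parameter values. Response errors are fit with a weighted mixture of a von Mises component centered near the true average (with a free bias term toward the second grating) and a von Mises component centered on the second grating itself, capturing trials on which the first stimulus was disregarded. All function and variable names, the simulated data, and the parameterization are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def wrap(deg):
    """Wrap orientation differences (deg) into the range [-90, 90)."""
    return (deg + 90.0) % 180.0 - 90.0

def neg_log_likelihood(params, err_avg, err_second):
    """err_avg / err_second: response error relative to the true average and to
    the second grating (deg), sign-flipped so positive = toward the second grating."""
    bias, log_kappa, logit_w = params
    kappa = np.exp(log_kappa)            # shared concentration, kept positive
    w = 1.0 / (1.0 + np.exp(-logit_w))   # probability of averaging (vs. ignoring grating 1)
    # Double the angles so the 180-deg orientation space maps onto the full circle;
    # the constant Jacobian factor is shared by both components and can be dropped.
    p_avg = vonmises.pdf(np.deg2rad(2.0 * wrap(err_avg - bias)), kappa)
    p_sec = vonmises.pdf(np.deg2rad(2.0 * err_second), kappa)
    return -np.sum(np.log(w * p_avg + (1.0 - w) * p_sec + 1e-12))

# Simulated demonstration: 70% of trials report a slightly recency-shifted average,
# 30% report the second grating alone (all values here are made up).
rng = np.random.default_rng(1)
n = 600
g1 = rng.uniform(0.0, 180.0, n)
offset = rng.choice([-40.0, -20.0, 20.0, 40.0], n)   # second grating relative to first
g2, avg = g1 + offset, g1 + offset / 2.0
averaged = rng.random(n) < 0.7
resp = np.where(averaged, avg + 4.0 * np.sign(offset), g2) + rng.normal(0.0, 8.0, n)
toward_second = np.sign(offset)                      # flip sign so + means toward grating 2
err_avg = wrap(resp - avg) * toward_second
err_second = wrap(resp - g2) * toward_second

fit = minimize(neg_log_likelihood, x0=[0.0, np.log(10.0), 0.0],
               args=(err_avg, err_second), method="Nelder-Mead")
bias_hat = fit.x[0]
w_hat = 1.0 / (1.0 + np.exp(-fit.x[2]))
print(f"bias toward 2nd grating ~ {bias_hat:.1f} deg, averaging weight ~ {w_hat:.2f}")
```

Fit to real reproduction data, the estimated bias and mixture weight separate the two effects reported for Experiment 1: a shift of the averaged estimate toward the second grating, and outright neglect of the first grating on a subset of trials.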
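
The population tuning curve analysis of Experiment 2 belongs to the family of forward (inverted) encoding models. The sketch below shows that general technique under generic assumptions; the channel count, basis functions, alignment scheme, and function names are illustrative and are not taken from the study.

```python
import numpy as np

def channel_basis(ori_deg, n_channels=8, power=7):
    """Idealized tuning of hypothetical orientation channels: half-cosine
    functions raised to a power, tiling the 180-deg orientation space.
    ori_deg: (n_trials,) array of presented orientations in degrees."""
    centers = np.arange(n_channels) * (180.0 / n_channels)
    delta = (ori_deg[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
    return np.cos(np.deg2rad(delta)) ** power         # (n_trials, n_channels)

def reconstruct_channels(eeg_train, ori_train, eeg_test, n_channels=8):
    """Fit sensor weights for each channel on training trials (forward model),
    then invert them to estimate channel responses on held-out test trials.
    eeg_*: (n_trials, n_sensors) amplitudes at a single time point."""
    c_train = channel_basis(ori_train, n_channels)     # (n_trials, n_channels)
    weights = np.linalg.pinv(c_train) @ eeg_train      # (n_channels, n_sensors)
    return eeg_test @ np.linalg.pinv(weights)          # (n_trials, n_channels)

def population_tuning_curve(c_test, ori_test, n_channels=8):
    """Shift each trial's reconstructed channel response so the presented
    orientation falls in the center channel, then average across trials
    (nearest-channel alignment; finer interpolation is common in practice)."""
    spacing = 180.0 / n_channels
    shifts = np.round((90.0 - ori_test) / spacing).astype(int)
    aligned = np.stack([np.roll(row, s) for row, s in zip(c_test, shifts)])
    return aligned.mean(axis=0)    # a peaked curve indicates orientation-selective signal
```

Applied at successive time points, the height of the resulting tuning curve indexes how strongly each grating (or the decided average) is represented over time, which is the kind of comparison needed to ask whether the second grating was encoded more strongly than the first.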