Neural network-based language models such as BERT (Bidirectional Encoder Representations from Transformers) use attention mechanisms to create contextualized representations of their inputs, conceptually analogous to humans reading words in context. For the task of classifying the sentiment of texts, we ask whether BERT's attention can be informed by human cognitive data. During training, we supervise attention with eye-tracking and/or brain imaging data, combining the binary sentiment classification loss with these attention losses. We find that attention supervision can be used to push BERT's attention toward the ground-truth human data, but that it yields no significant differences in sentiment classification accuracy. However, models with cognitive attention supervision more frequently misclassify different samples than the baseline models (that is, they more often make different errors), and the errors of the attention-supervised models have a higher ratio of false negatives.
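The combined objective described above can be illustrated with a minimal sketch: a standard BERT sentiment classifier whose classification loss is summed with an attention-supervision term that pulls the model's attention toward a human attention distribution derived from cognitive data. The names `human_attn` and `attn_weight`, the layer/head averaging, and the use of KL divergence are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification

# Sketch of a combined loss: binary sentiment classification + attention supervision.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, output_attentions=True
)

def combined_loss(input_ids, attention_mask, labels, human_attn, attn_weight=0.1):
    """human_attn: (batch, seq_len) per-token importance from eye-tracking or
    brain imaging data, normalized to sum to 1 over the sequence (assumption)."""
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    cls_loss = outputs.loss  # cross-entropy over the two sentiment classes

    # Average attention over layers and heads, then take the attention the
    # [CLS] token (position 0) pays to each token as the model's importance
    # distribution over the input.
    attn = torch.stack(outputs.attentions).mean(dim=(0, 2))  # (batch, seq, seq)
    model_attn = attn[:, 0, :]                               # (batch, seq)

    # Attention-supervision term: KL divergence between the model's and the
    # human attention distributions (MSE would be another option).
    attn_loss = F.kl_div(model_attn.clamp_min(1e-12).log(), human_attn,
                         reduction="batchmean")
    return cls_loss + attn_weight * attn_loss
```

Setting `attn_weight` to zero recovers the baseline classifier, so the same training loop can be used for both the supervised and unsupervised conditions.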