Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)? For example, can we use QAMR to improve named entity recognition? We suggest that simply further pre-training BERT is often not the best option, and propose the question-answer driven sentence encoding (QUASE) framework. QUASE learns representations from QA data, using BERT or other state-of-the-art contextual language models. In particular, we observe the need to distinguish between two types of sentence encodings, depending on whether the target task takes a single- or multi-sentence input; in both cases, the resulting encoding is shown to be an easy-to-use plug-in for many downstream tasks. This work may point to an alternative way of supervising NLP tasks.