Since the introduction of context-aware token representation techniques such as Embeddings from Language Models (ELMo) and Bidirectional Encoder Representations from Transformers (BERT), there have been numerous reports of improved performance on a variety of natural language tasks. Nevertheless, the degree to which the resulting context-aware representations encode information about the morpho-syntactic properties of the tokens in a sentence remains unclear. In this paper, we investigate the application and impact of state-of-the-art neural token representations for automatic cue-conditional speculation and negation scope detection, coupled with independently computed morpho-syntactic information. Through this work, we establish a new state of the art for the BioScope and NegPar corpora. Furthermore, we provide a thorough analysis of the interactions between neural representations and additional features, examine cue representations for conditioning, discuss model behavior on the different datasets, and, finally, address annotation-induced biases in the learned representations.