Fact checking and fake news detection have garnered increasing interest within the natural language processing (NLP) community in recent years, yet other aspects of misinformation remain unexplored. One such phenomenon is `bullshit', which different disciplines have tried to define since it first entered academic discussion nearly four decades ago. Fact checking bullshitters is useless, because factual reality typically plays no part in their assertions: where liars deceive about content, bullshitters deceive about their goals. Bullshitting deceives about language use itself, which necessitates identifying the points at which pragmatic conventions are broken with deceptive intent. This paper aims to introduce bullshitology into the field of NLP by tying it to questions in a definition based on Questions Under Discussion (QUDs), providing two approaches to bullshit annotation, and finally outlining which combinations of NLP methods will be helpful for classifying which kinds of linguistic bullshit.