In artificial intelligence, it is important to handle and analyse inconsistency in knowledge bases. Inconsistent pieces of information raise questions such as "where is the inconsistency?" and "how severe is it?". Inconsistency measures have been proposed to tackle the latter issue, but the former remains comparatively underdeveloped and is the focus of this paper. Minimal inconsistent sets have been the main tool for localising inconsistency, but we argue that they are like the exposed part of an iceberg, failing to capture contradictions hidden under the water. Using classical propositional logic, we develop methods to characterise when a formula contributes to the inconsistency in a knowledge base and when a set of formulas can be regarded as a primitive conflict. To achieve this, we employ an abstract consequence operation to "look beneath the water level", generalising the concept of minimal inconsistent sets and the related notion of free formulas. We apply the presented framework to the problem of measuring inconsistency in knowledge bases, putting forward relaxed forms of two debatable postulates for inconsistency measures. Finally, we discuss the computational complexity issues related to the introduced concepts.
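
For a small illustration of the kind of hidden conflict we have in mind (an example given here only for concreteness, not taken from the body of the paper), consider the knowledge base
\[
K = \{\, p,\ \neg p \wedge q \,\}.
\]
The only minimal inconsistent subset of $K$ is $K$ itself, so neither formula is free, and the innocent conjunct $q$ is treated on a par with the genuine contradiction between $p$ and $\neg p$. A consequence operation that, for instance, allows a formula to be replaced by its conjuncts reveals the more primitive conflict $\{p, \neg p\}$ beneath the surface, with $q$ playing no role in the inconsistency.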