The construction and maintenance of ontologies are error-prone tasks. As such, it is not uncommon to detect unwanted or erroneous consequences in large-scale ontologies that are already deployed in production. Until a corrected version becomes available, these ontologies should remain usable in a "safe" manner that avoids the known errors. At the same time, the knowledge engineer in charge of producing the new version requires support to explore only the potentially problematic axioms and to reduce the number of exploration steps. In this paper, we study the problem of deriving meaningful consequences from ontologies that contain known errors. Our work extends ideas from inconsistency-tolerant reasoning to allow arbitrary entailments as errors, and allows any part of the ontology (be it the terminological elements or the facts) to be the cause of an error. Our study shows that, with a few exceptions, tasks related to this kind of reasoning are intractable in general, even for very inexpressive description logics.
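For concreteness, the setting can be sketched with the standard repair-based notions from the inconsistency-tolerant reasoning literature; the symbols $\mathcal{O}$, $\alpha$, $\beta$, and $\mathrm{Rep}$ below are illustrative, and the paper's own definitions may differ in detail. Given an ontology $\mathcal{O}$ and a known error $\alpha$ with $\mathcal{O} \models \alpha$, a repair is a maximal subontology that no longer entails the error:
\[
\mathrm{Rep}(\mathcal{O},\alpha) \;=\; \bigl\{\, \mathcal{R} \subseteq \mathcal{O} \;\mid\; \mathcal{R} \not\models \alpha \text{ and } \mathcal{R}' \models \alpha \text{ for every } \mathcal{R} \subsetneq \mathcal{R}' \subseteq \mathcal{O} \,\bigr\}.
\]
A consequence $\beta$ is then cautiously entailed if $\mathcal{R} \models \beta$ for every $\mathcal{R} \in \mathrm{Rep}(\mathcal{O},\alpha)$, and bravely entailed if $\mathcal{R} \models \beta$ for at least one such $\mathcal{R}$. Since any part of the ontology, terminological axioms and facts alike, may be removed in a repair, the number of repairs can grow exponentially with the size of the ontology, which already suggests why the associated reasoning tasks are hard.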