Abstract. The Semantic Web is a distributed environment for knowledge representation and reasoning. Its distributed nature brings with it failing data sources and inconsistencies between autonomous knowledge bases. Caching can be used to reduce the problems caused by unavailable sources and to improve performance. Caches, however, raise new problems of imprecise or outdated information. We propose to distinguish between certain and cached information when reasoning on the Semantic Web by extending the well-known bilattice FOUR of truth and knowledge orders to FOUR-C, which takes cached information into account. We discuss how users can be offered additional information about the reliability of inferred information, based on the availability of the corresponding information sources. We then extend the framework towards FOUR-T, allowing for multiple levels of trust in data sources. In this extended setting, knowledge about trust in information sources can be used to compute how well an inferred statement can be trusted and to resolve inconsistencies arising from connecting multiple data sources. We redefine the stable model and well-founded semantics on the basis of FOUR-T, and reformalize the Web Ontology Language OWL2 based on logical bilattices, to augment OWL knowledge bases with trust-based reasoning.

1 Introduction

The Semantic Web is envisioned to be a Web of Data [2]. As such, it integrates information from various sources, be it through rules, data replication, or similar mechanisms. In such a distributed scenario, information sources may become unavailable. In order to still be able to answer queries in these cases, mechanisms such as caching can be used to reduce the negative implications of failure. Alternatively, some default truth value could be assumed for unavailable information. However, cached values may be inaccurate or outdated, and default assumptions can be wrong. Moreover, even sources that are available may be trusted to different extents. We propose a framework for reasoning with such trust levels, which allows us to give the user additional information about the reliability of results. In particular, we can tell whether a statement's truth value is inferred from information that is actually accessible, or whether it might change in the future once cached or default values are updated. We then extend the framework towards multiple levels of trust, taking into account a (possibly partial) trust order over information sources. While assigning absolute trust values has little semantic meaning, users are usually good at comparing the trustworthiness of two information sources.
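For reference, the bilattice FOUR mentioned above is Belnap's standard four-valued structure; the following definition is a reminder of that standard structure (not a construction specific to this paper), which the later extensions FOUR-C and FOUR-T enrich with additional values for cached and trust-annotated information:

\[
  \mathcal{FOUR} = \bigl( \{\mathbf{f}, \mathbf{t}, \bot, \top\},\ \leq_t,\ \leq_k \bigr),
\]
\[
  \mathbf{f} \leq_t \bot \leq_t \mathbf{t}, \qquad
  \mathbf{f} \leq_t \top \leq_t \mathbf{t}, \qquad
  \bot \leq_k \mathbf{f} \leq_k \top, \qquad
  \bot \leq_k \mathbf{t} \leq_k \top .
\]

Here $\bot$ stands for "unknown" (neither true nor false) and $\top$ for "overdefined" (both true and false); $\leq_t$ orders values by degree of truth and $\leq_k$ by amount of knowledge, and both orders are partial ($\bot$ and $\top$ are $\leq_t$-incomparable, $\mathbf{f}$ and $\mathbf{t}$ are $\leq_k$-incomparable).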