As autonomous and semi-autonomous agents become more integrated with society, validation of their safety is increasingly important. The scenarios under which they operate, however, can be quite complicated, and formal verification may be intractable. To this end, simulation-based safety verification is being used more frequently to uncover failure scenarios in the most complex problems. Recent approaches, such as adaptive stress testing (AST), use reinforcement learning, making them prone to excessive exploitation of known failures and limiting coverage of the space of failures. To overcome this, this work defines a class of Markov decision processes, the knowledge MDP, which captures information about the learned model so that the agent can reason over its own knowledge. More specifically, by leveraging the "knows what it knows" (KWIK) framework, the learner estimates its knowledge of the underlying system: model estimates, their confidence, and the assumptions made. This formulation is vetted through MF-KWIK-AST, which extends bidirectional learning across multiple fidelities (MF) of simulators to the safety verification problem. The knowledge MDP formulation is applied to detect convergence of the learned model, penalizing continued exploitation of converged regions to encourage further exploration. Results are evaluated in a grid world in which an adversary is trained to intercept a system under test. Monte Carlo trials compare the relative sample efficiency of MF-KWIK-AST to learning with a single-fidelity simulator, and demonstrate the utility of incorporating knowledge about learned models into the decision-making process.
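To make the "knows what it knows" idea above concrete, here is a minimal sketch of a KWIK-style learner: a tabular model that returns an estimate only after it has seen enough samples for a state-action pair, and otherwise reports "I don't know" (`None`). The sample threshold `m`, the dictionary-backed tables, and the mean-outcome estimate are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
from collections import defaultdict

class KWIKTransitionModel:
    """Tabular KWIK-style learner: answers a query only when confident."""

    def __init__(self, m=5):
        self.m = m                       # samples required before answering (assumed threshold)
        self.counts = defaultdict(int)   # (state, action) -> number of observations
        self.sums = defaultdict(float)   # (state, action) -> running sum of outcomes

    def observe(self, state, action, outcome):
        """Record one observed outcome for a state-action pair."""
        self.counts[(state, action)] += 1
        self.sums[(state, action)] += outcome

    def predict(self, state, action):
        """Return a mean-outcome estimate, or None meaning 'I don't know'."""
        n = self.counts[(state, action)]
        if n < self.m:
            return None  # insufficient data: the KWIK learner declines to answer
        return self.sums[(state, action)] / n

# Usage: the learner admits ignorance until it has m samples.
model = KWIKTransitionModel(m=3)
print(model.predict("s0", "a"))          # None: not enough data yet
for outcome in (1.0, 0.0, 1.0):
    model.observe("s0", "a", outcome)
print(model.predict("s0", "a"))          # mean of observed outcomes
```

In a knowledge-MDP setting, this explicit `None` signal is what lets the decision-making process distinguish regions where the model has converged (and further exploitation is penalized) from regions that still warrant exploration.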