The importance of models within automated deduction is generally acknowledged, both in constructing countermodels (rather than merely answering "NO" when a given formula turns out not to be a theorem) and in speeding up the deduction process itself (e.g. via the semantic resolution refinement). However, little attention has so far been paid to the efficiency of algorithms that actually work with models. Two fundamental decision problems arise in connection with models, namely: the equivalence of two models, and the truth evaluation of an arbitrary clause within a given model. This paper focuses on the efficiency of algorithms for these problems in the case of Herbrand models given through atomic representations.

Both problems have been shown to be coNP-hard by Gottlob and Pichler (1999), so there is a certain limit to the efficiency that we can possibly expect. Nevertheless, we can identify the real "source" of this complexity and exploit that theoretical insight to devise algorithms whose upper bound on the complexity is, in general, considerably smaller than that of previously known algorithms, e.g. the partial saturation method of Fermüller and Leitsch (1996) and the transformation into equational problems of Caferra and Zabel (1991).

The main results of this paper are algorithms for these two decision problems whose complexity depends non-polynomially only on the number of atoms (rather than on the total size) of the input model equivalence problem or clause evaluation problem, respectively. Hence, in contrast to the above-mentioned algorithms, the complexity of the expressions involved (e.g. the arity of the predicate symbols and, in particular, the term depth of the arguments) has only polynomial influence on the overall complexity of the algorithms.
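To fix intuitions, the following sketch illustrates what an atomic representation is and how the *ground* special case of clause evaluation works: a Herbrand model is given by finitely many atoms, and it contains exactly the ground instances of those atoms, so a ground atom is true iff it matches (one-way unification) some atom of the representation. The term encoding below (variables as strings beginning with `?`, compound terms as tuples) is an illustrative choice of this sketch, not the paper's notation; evaluating arbitrary non-ground clauses is precisely the coNP-hard problem discussed above and is not attempted here.

```python
# Terms: a variable is a string beginning with '?'; a constant is any other
# string; a compound term or atom is a tuple (functor, arg1, ..., argn).

def match(pattern, ground, subst):
    """Try to extend subst so that pattern instantiated by subst equals the
    ground term. Returns the extended substitution, or None on failure.
    This is one-way matching, not full unification."""
    if isinstance(pattern, str) and pattern.startswith('?'):  # a variable
        if pattern in subst:
            return subst if subst[pattern] == ground else None
        extended = dict(subst)
        extended[pattern] = ground
        return extended
    if isinstance(pattern, tuple) and isinstance(ground, tuple):
        if len(pattern) != len(ground) or pattern[0] != ground[0]:
            return None  # different functors or arities
        for p_arg, g_arg in zip(pattern[1:], ground[1:]):
            subst = match(p_arg, g_arg, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == ground else None  # constants must coincide

def true_in_model(atoms, ground_atom):
    """A ground atom holds in the represented Herbrand model iff it is an
    instance of some atom of the atomic representation."""
    return any(match(a, ground_atom, {}) is not None for a in atoms)

# The model represented by the atom set { P(f(x)), Q(a) }:
model = [('P', ('f', '?x')), ('Q', 'a')]
```

For this model, `P(f(g(a)))` and `Q(a)` evaluate to true, while `P(a)` and `Q(b)` evaluate to false. Note that even this easy ground case already depends on the term structure of the atoms; the paper's contribution concerns confining that dependence to a polynomial factor in the general case.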