Previous approaches to robustness in natural language processing usually treat deviant input by relaxing grammatical constraints whenever a successful analysis cannot be provided by "normal" means. This schema implies that error detection always comes prior to error handling, a behaviour which can hardly compete with its human model, where many erroneous situations are handled without even being noticed. The paper analyses the necessary preconditions for achieving a higher degree of robustness in natural language processing and suggests a quite different approach based on a procedure for structural disambiguation. It not only offers the possibility of coping with robustness issues in a more natural way but might eventually be suited to accommodating quite different aspects of robust behaviour within a single framework.

1 The difficulties with a straightforward generalization of this approach to, e.g., syntactic or semantic anomalies are obvious: it would require huge amounts of sufficiently deviant utterances to be available as training data. This renders the approach technically infeasible and cognitively implausible. For similar reasons, connectionist approaches are not considered here: at the moment they seem to be limited to approximate solutions for flat representations (cf. [27]).

2 For a good overview see [25].