Abstract. Spoken Language Understanding (SLU) is concerned with the extraction of meaning structures from spoken utterances. Recent computational approaches to SLU, e.g. Conditional Random Fields (CRFs), optimize local models by encoding several features, mainly based on simple n-grams. In contrast, recent work has shown that the accuracy of CRFs can be significantly improved by modeling long-distance dependency features. In this paper, we propose novel approaches to encode all possible dependencies between features and, most importantly, among parts of the meaning structure, e.g. concepts and their combinations. We rerank hypotheses generated by local models, e.g. Stochastic Finite State Transducers (SFSTs) or CRFs, with a global model. The latter encodes a very large number of dependencies (in the form of trees or sequences) by applying kernel methods to the space of all meaning (sub)structures. We performed comparative experiments between SFSTs, CRFs, Support Vector Machines (SVMs) and our proposed discriminative reranking models (DRMs) on representative conversational speech corpora in three different languages: the ATIS (English), MEDIA (French) and LUNA (Italian) corpora. These corpora were collected within three domain applications of increasing complexity: informational, transactional and problem-solving tasks, respectively. The results show that our DRMs consistently outperform state-of-the-art CRF-based models.
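The abstract only names the technique (kernel-based discriminative reranking of n-best hypotheses from a local model), so the following is a minimal, self-contained Python sketch of that idea, not the authors' implementation. It substitutes a toy gappy-bigram sequence kernel for the paper's tree/sequence kernels and a kernel perceptron over hypothesis pairs for the paper's SVM-based DRM; all function names and the example data are hypothetical.

```python
# Illustrative sketch of kernel-based discriminative reranking (NOT the
# authors' code): a toy gappy-bigram sequence kernel stands in for the
# tree/sequence kernels of the paper, and a kernel perceptron over
# (oracle, competitor) hypothesis pairs stands in for the SVM reranker.

from itertools import combinations

def sequence_kernel(a, b, decay=0.5):
    """Count shared (possibly gappy) concept bigrams of two hypotheses,
    down-weighting each bigram by decay**gap -- a simplified assumption."""
    def gappy_bigrams(seq):
        feats = {}
        for i, j in combinations(range(len(seq)), 2):
            key = (seq[i], seq[j])
            feats[key] = feats.get(key, 0.0) + decay ** (j - i - 1)
        return feats
    fa, fb = gappy_bigrams(a), gappy_bigrams(b)
    return sum(w * fb.get(k, 0.0) for k, w in fa.items())

def score(support, hyp):
    """Kernel-perceptron score: preferred pairs pull the score up,
    dispreferred ones push it down."""
    return sum(sequence_kernel(pos, hyp) - sequence_kernel(neg, hyp)
               for pos, neg in support)

def train_reranker(nbest_lists, oracle_ids, epochs=10):
    """Prefer the oracle (lowest concept-error) hypothesis of each n-best
    list to every competitor; store misranked pairs as support vectors."""
    support = []
    for _ in range(epochs):
        for hyps, gold in zip(nbest_lists, oracle_ids):
            pos = hyps[gold]
            for neg in (h for i, h in enumerate(hyps) if i != gold):
                if score(support, pos) <= score(support, neg):
                    support.append((pos, neg))
    return support

def rerank(support, hyps):
    """Return the hypothesis the global model scores highest."""
    return max(hyps, key=lambda h: score(support, h))

# Toy usage: n-best concept sequences produced by a local model (e.g. a
# CRF), with oracle_ids marking the best hypothesis of each list.
nbest = [[["city.from", "city.to", "date"],
          ["city.from", "city.from", "date"]]]
model = train_reranker(nbest, oracle_ids=[0])
print(rerank(model, nbest[0]))  # -> ['city.from', 'city.to', 'date']
```

The sketch keeps the paper's division of labor: a local model proposes n-best meaning hypotheses, and a global kernel model, which can compare whole (sub)structures rather than local n-gram features, picks among them.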