“…AMR unifies, in a single structure, a rich set of information coming from different tasks, such as Named Entity Recognition (NER), Semantic Role Labeling (SRL), Word Sense Disambiguation (WSD) and coreference resolution. Such representations are actively integrated in several Natural Language Processing (NLP) applications, inter alia, information extraction (Rao et al., 2017), text summarization (Hardy and Vlachos, 2018; Liao et al., 2018), paraphrase detection (Issa et al., 2018), spoken language understanding (Damonte et al., 2019), machine translation (Song et al., 2019b) and human-robot interaction (Bonial et al., 2020). It is therefore desirable to extend AMR semantic representations across languages, along the lines of cross-lingual representations for grammatical annotation (de Marneffe et al., 2014), concepts (Conia and Navigli, 2020) and semantic roles (Akbik et al., 2015; Di Fabio et al., 2019).…”