“…To capture such alignments, several attention-based models have been proposed (Shi et al., 2020; Lei et al., 2020; Liu et al., 2021), which use attention weights between tokens to indicate the alignments. Specifically, they use an attention module to perform schema linking at the encoding stage (Lei et al., 2020; Liu et al., 2021), and may use a second attention module to align each output token with its corresponding input tokens at the decoding stage (Shi et al., 2020). However, we argue that the attention mechanism is not an appropriate way to capture and leverage lexico-logical alignments.…”
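To make the mechanism criticized above concrete, the sketch below illustrates the general idea of reading attention weights off as soft alignments: a single cross-attention layer scores each schema item against each question token, and the softmax-normalized weights are treated as an alignment matrix. This is a minimal illustration under assumed names and dimensions, not the architecture of any of the cited models.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d_model = 64
n_question_tokens = 8   # tokens of the natural-language question (illustrative)
n_schema_items = 5      # table/column names (illustrative)

# Hypothetical encoder outputs for question tokens and schema items.
question_enc = torch.randn(n_question_tokens, d_model)
schema_enc = torch.randn(n_schema_items, d_model)

# Learned projections: schema items act as queries, question tokens as keys.
W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)

# Scaled dot-product attention: each schema item attends over question tokens.
scores = W_q(schema_enc) @ W_k(question_enc).T / d_model ** 0.5
align = F.softmax(scores, dim=-1)  # shape: (n_schema_items, n_question_tokens)

# Interpreting attention weights as soft alignments: for each schema item,
# the most-attended question token is taken as its aligned mention.
best_token = align.argmax(dim=-1)
print(align.shape, best_token)
```

Because the alignment here is only an implicit by-product of softmax attention, it is never supervised or constrained to be a valid alignment, which is the kind of limitation the quoted passage argues against.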