Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.27
Understanding tables with intermediate pre-training

Abstract: Table entailment, the binary classification task of finding if a sentence is supported or refuted by the content of a table, requires parsing language and table structure as well as numerical and discrete reasoning. While there is extensive work on textual entailment, table entailment is less well studied. We adapt TAPAS (Herzig et al., 2020), a table-based BERT model, to recognize entailment. Motivated by the benefits of data augmentation, we create a balanced dataset of millions of automatically created trai…
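As a rough illustration of the task described in the abstract, the sketch below runs table-entailment classification with a TAPAS model. It assumes the Hugging Face `transformers` TAPAS classes and a TabFact-style checkpoint; the checkpoint name and the label mapping (1 = entailed) are assumptions, not details taken from this page.

```python
# Sketch: deciding whether a sentence is supported or refuted by a table,
# using a TAPAS sequence-classification model. Checkpoint name and label
# order are assumed, not taken from the paper.
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-base-finetuned-tabfact"  # assumed checkpoint
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# TAPAS tokenizers expect the table as a DataFrame of strings.
table = pd.DataFrame({"Player": ["Ann", "Bob"], "Points": ["31", "12"]})
sentence = "Ann scored more points than Bob."

inputs = tokenizer(table=table, queries=[sentence], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed binary label mapping: 0 = refuted, 1 = entailed.
label = "entailed" if logits.argmax(dim=-1).item() == 1 else "refuted"
print(label)
```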

Cited by 46 publications (16 citation statements)
References 37 publications
“…Depending on the targeted downstream tasks, these models adopt different components, i.e., encoders or decoders, as summarized in Table 1. Most of them adopt the encoder part of transformers, similar to BERT [30], including TURL [28], StruG [29], TAPAS [38,51], GraPPa [29], MATE [39], TUTA [95], ForTap [23], and TableFormer [6]. Typically, a single encoder is applied to the sequential inputs constructed from tables and associated texts, if any, to learn contextual representations of the inputs.…”
Section: Encoder-Decoder (mentioning)
confidence: 99%
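To make "sequential inputs constructed from tables and associated texts" concrete, the following sketch naively flattens a small table row by row behind the accompanying sentence. This is a generic illustration only; real table encoders such as TAPAS or TUTA additionally attach structural signals (row/column ids, numeric ranks) that are omitted here.

```python
# Illustrative only: linearize (text, table) into one token sequence for a
# BERT-style encoder. Real table encoders add structural embeddings on top.
from typing import Dict, List


def linearize(text: str, table: Dict[str, List[str]]) -> List[str]:
    tokens = ["[CLS]"] + text.split() + ["[SEP]"]
    headers = list(table.keys())
    tokens += headers  # header row first
    n_rows = len(next(iter(table.values())))
    for i in range(n_rows):
        for col in headers:
            tokens.append(table[col][i])
    return tokens


example_table = {"Player": ["Ann", "Bob"], "Points": ["31", "12"]}
print(linearize("Ann scored more points than Bob.", example_table))
```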
“…Unfortunately, large tables from the web, documents, and spreadsheets contain dozens of rows or columns, posing a significant challenge to memory and computational efficiency [39]. A naive way [51,69] to deal with large tables is to truncate the input tokens to a maximum sequence length; [38] instead ranks columns by the Jaccard coefficient between the NL statement and each column's tokens. The model is twice as fast to train as TaPas [51] while achieving similar performance.…”
Section: Model Efficiency (mentioning)
confidence: 99%
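The column-selection heuristic mentioned above can be sketched in a few lines: score each column by the Jaccard coefficient between the statement's tokens and the column's tokens, then keep only the top-scoring columns. This is a generic sketch, not the cited implementation.

```python
# Generic sketch of Jaccard-based column pruning for large tables:
# keep the columns whose tokens overlap most with the NL statement.
from typing import Dict, List


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0


def rank_columns(statement: str, table: Dict[str, List[str]], top_k: int) -> List[str]:
    q_tokens = set(statement.lower().split())
    scores = {}
    for col, cells in table.items():
        col_tokens = set(col.lower().split())
        for cell in cells:
            col_tokens |= set(str(cell).lower().split())
        scores[col] = jaccard(q_tokens, col_tokens)
    return sorted(table, key=scores.get, reverse=True)[:top_k]


table = {
    "Player": ["Ann", "Bob"],
    "Points": ["31", "12"],
    "Country": ["USA", "Canada"],
}
print(rank_columns("how many points did Ann score", table, top_k=2))
```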
“…Visualization Recommendation (VisRec): surveys [189], [295]; representative papers [33], [38], [43], [47], [53], [72], [86], [88], [107], [110], [115], [129], [137], [141], [148], [151], [153], [167], [169], [170], [187], [188], [205], [237], [250], [256], [264], [265], [266], [268], [283], [294]. Natural Language Interface for DataBase (NLIDB): surveys [6], [296]; representative papers [13], [14], [15], [19], [22], [56], [74], [75], [76], [82], [89], [94], [96], [118], [134], [135],…”
Section: Topic Related To V-NLI / Survey / Representative Papers (mentioning)
confidence: 99%
“…Furthermore, to capture the special format of SQL statements, SQLNet [273] adopts a universal sketch as a template and predicts the value of each slot, while [255] employs a two-stage pipeline that first predicts the semantic structure and then generates the SQL statement from structure-enhanced query text. Recently, TaPas [56], [82] extended BERT's architecture to encode tables as input and to train from weak supervision. Alongside these models, benchmarks such as WikiSQL [293] and Spider [281] have emerged, which can be utilized for further V-NLI research.…”
Section: NLI For Database (mentioning)
confidence: 99%
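The "universal sketch" idea mentioned for SQLNet can be illustrated with a WikiSQL-style template of the form SELECT $AGG $COLUMN WHERE $COLUMN $OP $VALUE: the model only predicts slot values, and the query string is assembled from them. The sketch below is a simplified illustration, not SQLNet's actual decoder.

```python
# Simplified illustration of sketch-based SQL generation (WikiSQL-style):
# predicted slot values are assembled into a fixed template. Not SQLNet's
# actual architecture, just the slot-filling idea.
from typing import List, Tuple


def fill_sketch(agg: str, select_col: str,
                conditions: List[Tuple[str, str, str]]) -> str:
    select = f"SELECT {agg}({select_col})" if agg else f"SELECT {select_col}"
    where = " AND ".join(f"{col} {op} '{val}'" for col, op, val in conditions)
    return f"{select} WHERE {where}" if where else select


# Hypothetical predicted slots for "How many points did Ann score?"
print(fill_sketch("", "Points", [("Player", "=", "Ann")]))
# -> SELECT Points WHERE Player = 'Ann'
```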
“…For the second component of the task, we follow state-of-the-art entailment models (Zhou et al., 2019; Eisenschlos et al., 2020):…”
Section: Entailment (mentioning)
confidence: 99%