2019 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2019.00115

Counterfactual Attention Supervision

Cited by 5 publications (3 citation statements)
References 17 publications
“…FRETS (Jauhar et al., 2016) uses a log-linear model conditioned on alignment scores between cells in tables and individual QA pairs in the training set. NEOP (Cho et al., 2018) uses a multi-layer sequential network with attention supervision to answer queries conditioned on tables. MANYMODALQA (Hannan et al., 2020) uses a modality selection network along with pretrained state-of-the-art text-based QA, table-based QA, and image-based QA models to jointly answer questions over text, tables, and images.…”
Section: Table Question Answering
confidence: 99%
“…Alternatively, the machine may mine or augment attention supervision: Tang et al. (2019) automatically mine attention supervision by masking out highly attentive words in a progressive manner, and Choi et al. (2019) augment counterfactual observations to debias human attention supervision via instance similarity. Our work combines the strengths of the two: we automatically improve attention supervision via self-supervision signals, but we build it with free task-level resources.…”
Section: Attention To/from Machine
confidence: 99%
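To make the progressive-masking idea quoted above concrete, the following is a minimal Python sketch. It is an illustration under assumptions, not the algorithm from the cited papers: the names mine_attention_supervision, toy_predict, and the [MASK] token are hypothetical, and the rule used here is simply "mask the currently most-attended token and keep it as pseudo attention supervision if the prediction flips".

# Illustrative sketch only (assumed names, not an API from the cited papers):
# progressively mask the most-attended token and keep tokens whose removal
# flips the model's prediction as pseudo attention supervision.

from typing import Callable, List, Tuple

MASK = "[MASK]"

def mine_attention_supervision(
    tokens: List[str],
    predict: Callable[[List[str]], Tuple[int, List[float]]],
    max_steps: int = 3,
) -> List[int]:
    """Return indices of tokens kept as attention-supervision targets."""
    prev_label, attn = predict(tokens)
    masked = list(tokens)
    supervision: List[int] = []
    for _ in range(max_steps):
        # Most-attended token that has not been masked yet.
        candidates = [i for i, tok in enumerate(masked) if tok != MASK]
        if not candidates:
            break
        top = max(candidates, key=lambda i: attn[i])
        masked[top] = MASK
        new_label, attn = predict(masked)
        if new_label != prev_label:
            # Masking this token changed the prediction, so it mattered.
            supervision.append(top)
        prev_label = new_label
    return supervision

# Toy stand-in for a real attention model: label is 1 iff "good" appears,
# and attention concentrates on that word.
def toy_predict(tokens: List[str]) -> Tuple[int, List[float]]:
    scores = [1.0 if tok == "good" else 0.1 for tok in tokens]
    total = sum(scores)
    return int("good" in tokens), [s / total for s in scores]

if __name__ == "__main__":
    print(mine_attention_supervision(["the", "movie", "was", "good"], toy_predict))
    # -> [3]: masking "good" flips the toy prediction, so it is kept.

Comparing each step against the previous prediction is just one simple choice; the actual progressive mining scheme and the counterfactual debiasing via instance similarity in the cited works are more involved.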
“…However, due to patent applications, these findings were not included in the paper, and only the CFD analysis results were presented at a conference. 20,21…”
Section: Introduction
confidence: 99%