Proceedings of the 14th International Conference on Natural Language Generation 2021
DOI: 10.18653/v1/2021.inlg-1.15

What can Neural Referential Form Selectors Learn?

Guanyi Chen, Fahime Same, Kees van Deemter

Abstract: Despite achieving encouraging results, neural Referring Expression Generation models are often thought to lack transparency. We probed neural Referential Form Selection (RFS) models to find out to what extent the linguistic features influencing the RE form are learnt and captured by state-of-the-art RFS models. The results of 8 probing tasks show that all the defined features were learnt to some extent. The probing tasks pertaining to referential status and syntactic position exhibited the highest performance. …
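
The probing setup described in the abstract can be illustrated with a minimal sketch (not the authors' code): a simple linear classifier is trained on frozen representations from an RFS model to test whether a linguistic feature such as referential status is recoverable from them. The data below are random stand-ins; in practice the vectors would be extracted from the trained selector.

```python
# Minimal probing sketch, assuming pre-extracted encoder representations.
# The data here are synthetic placeholders, not the paper's actual features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: one representation vector per referent mention,
# paired with a binary label for the probed feature (e.g. old vs. new referent).
X = rng.normal(size=(1000, 768))    # stand-in for frozen RFS encoder states
y = rng.integers(0, 2, size=1000)   # stand-in for the probed feature labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A linear probe: high held-out accuracy suggests the feature is
# linearly encoded in the representations.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

print("probing accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```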

Cited by 1 publication
References 23 publications