Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics 2022
DOI: 10.18653/v1/2022.cmcl-1.10
About Time: Do Transformers Learn Temporal Verbal Aspect?

Abstract: Aspect is a linguistic concept that describes how an action, event, or state of a verb phrase is situated in time. In this paper, we explore whether different transformer models are capable of identifying aspectual features. We focus on two specific aspectual features: telicity and duration. Telicity marks whether the verb's action or state has an endpoint or not (telic/atelic), and duration denotes whether a verb expresses an action (dynamic) or a state (stative). These features are integral to the interpreta…

Cited by 5 publications (7 citation statements) | References 24 publications
“…the location in The boy was fishing at the lake is more salient than in The boy had fished at the lake, as the event is represented as still ongoing). The work by Metheniti et al. (2022) focused again on the BERT model and on the aspectual features of telicity and duration. Their setup included a classification task in English and French, and their results showed that in both languages BERT was adequately capturing information on telicity and duration, even in the non-fine-tuned forms, although it also showed some bias toward verb tense and word order.…”
Section: Limitations
confidence: 99%
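The classification setup described above (a linear probe over frozen model representations, predicting a binary aspectual label such as telic vs. atelic) can be sketched in miniature. The 4-dimensional "embeddings" and example sentences below are invented toy stand-ins, not real BERT vectors or data from the cited work:

```python
# Hypothetical sketch: probing frozen sentence embeddings for telicity
# with a linear classifier (logistic regression via gradient descent).
import math

TOY_DATA = [
    # (embedding, label) -- label 1 = telic, 0 = atelic (invented examples)
    ([0.9, 0.1, 0.8, 0.2], 1),   # e.g. "She built a house"
    ([0.8, 0.2, 0.9, 0.1], 1),   # e.g. "He ate an apple"
    ([0.1, 0.9, 0.2, 0.8], 0),   # e.g. "She swam in the lake"
    ([0.2, 0.8, 0.1, 0.9], 0),   # e.g. "He pushed the cart"
]

def train_probe(data, lr=0.5, epochs=200):
    """Fit weights and bias by minimising logistic loss; the
    embeddings themselves stay frozen, as in a probing study."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            err = p - y                          # gradient of log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

if __name__ == "__main__":
    w, b = train_probe(TOY_DATA)
    acc = sum(predict(w, b, x) == y for x, y in TOY_DATA) / len(TOY_DATA)
    print(acc)
```

High probe accuracy on such a task is taken as evidence that the frozen representations encode the feature; the cited studies additionally control for confounds such as tense and word order, which this toy sketch does not.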
“…They show that this is useful for picking the correct tense in French translations of the English Simple Past. Several more recent studies have shown that distributional and neural models can be trained to predict telicity as annotated in available datasets (Kober et al., 2020; Metheniti et al., 2021; Metheniti, 2022). BERT-style models perform well on existing telicity datasets (with larger models outperforming smaller models), yet it is still unclear how or whether they actually capture aspect.…”
Section: Telicity
confidence: 99%
“…Lexical and grammatical aspect play essential roles in semantic interpretation (Smith, 2003), and yet even state-of-the-art natural language understanding (NLU) systems do not address these linguistic phenomena systematically (Metheniti, 2022). Consider this example: an NLU-based personal assistant, noticing the boarding time of a flight, tells a passenger (who is still shopping at the airport) "You miss flights" (i.e.…”
Section: Introduction
confidence: 99%
“…Our use of distributional semantic representations is furthermore motivated by the fact that they are readily available in numerous languages, and that they, contrary to manually constructed lexicons such as VerbNet (Schuler and Palmer, 2005) or LCS (Dorr and Olsen, 1997), scale well with growing amounts of data and across different languages. Furthermore, there is a growing body of evidence that models based on the distributional hypothesis capture some facets of aspect (Kober et al., 2020; Metheniti et al., 2022), despite the fact that aspect is represented in a very diverse manner across languages.…”
Section: Computational Experiments
confidence: 99%