Findings of the Association for Computational Linguistics: ACL 2023
DOI: 10.18653/v1/2023.findings-acl.4

DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition

Abstract: Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task: recognizing the discourse relation between two arguments in the absence of an explicit discourse connective. The sense labels for each discourse relation follow a hierarchical classification scheme in the annotation process (Prasad et al., 2008), forming a hierarchical structure. Most existing works do not incorporate this hierarchical structure well, focusing instead on syntactic features and prior knowledge of connectives in the manner of …
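To make the hierarchical labeling scheme mentioned in the abstract concrete, the sketch below shows a toy slice of a PDTB-style sense hierarchy as a nested mapping. The second-level types and connective lists are abbreviated illustrations, not the full PDTB 2.0 annotation scheme.

```python
# Toy slice of a PDTB-style sense hierarchy: top-level class -> second-level
# type -> example connectives. Abbreviated for illustration; not the full
# PDTB 2.0 label inventory.
sense_hierarchy = {
    "Comparison":  {"Contrast": ["however"], "Concession": ["although"]},
    "Contingency": {"Cause": ["because"], "Condition": ["if"]},
    "Expansion":   {"Conjunction": ["and"], "Instantiation": ["for example"]},
    "Temporal":    {"Asynchronous": ["then"], "Synchrony": ["meanwhile"]},
}

# A complete sense label is a root-to-leaf path through this tree:
label_path = ("Comparison", "Contrast", "however")
```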

Cited by 5 publications (2 citation statements) · References 27 publications

Citation statements (ordered by relevance):
“…Prompt Tuning: By relaxing the constraint that prompt token embeddings be natural language, Li and Liang (2021) and Hambardzumyan et al. (2021) proposed combining a PLM's input token embeddings with additional continuous vectors. Some studies (Lester et al., 2021; Qin and Eisner, 2021; Li and Liang, 2021) proposed tuning only the continuous prompts, while other works (Han et al., 2021; Zhong et al., 2021; Liu et al., 2021b; Chan et al., 2023b) explored combining discrete and continuous prompts. These works tune the embeddings of the additional continuous vectors while the PLM's parameters remain frozen.…”
Section: Related Work
confidence: 99%
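To make the mechanism described in the quoted passage concrete, here is a minimal sketch of soft prompt tuning in PyTorch with a HuggingFace-style masked LM. The model name, prompt length, and learning rate are illustrative assumptions, not values taken from the cited papers.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "roberta-base"          # assumption: any PLM with an embedding table
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Freeze every PLM parameter: only the prompt vectors will be trained.
for p in model.parameters():
    p.requires_grad = False

n_prompt = 20                        # number of continuous prompt vectors (illustrative)
embed = model.get_input_embeddings() # the PLM's token-embedding table
prompt = nn.Parameter(               # trainable continuous prompt
    torch.randn(n_prompt, embed.embedding_dim) * 0.02
)

def forward_with_prompt(input_ids, attention_mask):
    # Look up ordinary token embeddings, then prepend the soft prompt.
    tok_emb = embed(input_ids)                            # (B, T, H)
    batch = tok_emb.size(0)
    soft = prompt.unsqueeze(0).expand(batch, -1, -1)      # (B, P, H)
    inputs_embeds = torch.cat([soft, tok_emb], dim=1)     # (B, P+T, H)
    pad = torch.ones(batch, n_prompt, dtype=attention_mask.dtype,
                     device=attention_mask.device)
    mask = torch.cat([pad, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=mask)

# Only `prompt` receives gradients; the PLM stays fixed.
optimizer = torch.optim.AdamW([prompt], lr=1e-3)
```

With these settings the optimizer updates roughly 15k parameters (20 vectors of dimension 768) instead of the full PLM, which is the point of the frozen-PLM setup the quote describes.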
“…Zhou et al. (2022) proposed methods that use prompt tuning to generate connectives and then exploit classifiers to predict discourse relations from the generated connectives. Chan et al. (2023) viewed IDRR as the problem of predicting hierarchical paths over connective and discourse relation labels, and likewise proposed a prompt-tuning method. Jiang et al. (2023) proposed methods that learn hierarchical discourse relation representations through multi-task learning and contrastive learning.…”
Section: Introduction
confidence: 99%
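As a rough illustration of the path-prediction view attributed to Chan et al. (2023) above (a sketch of the general idea, not their exact formulation), a candidate label can be treated as a root-to-leaf path through the sense hierarchy and scored jointly. The per-node scores below are invented for the example.

```python
import math

# Invented per-node log-probabilities, e.g. as could be read off a PLM's
# predictions at three mask positions (values are for illustration only).
node_logprob = {
    "Comparison": math.log(0.6), "Contingency": math.log(0.4),
    "Contrast": math.log(0.7), "Cause": math.log(0.3),
    "however": math.log(0.8), "because": math.log(0.2),
}

# The sense hierarchy constrains which root-to-leaf paths are legal.
candidate_paths = [
    ("Comparison", "Contrast", "however"),
    ("Contingency", "Cause", "because"),
]

def path_score(path):
    # Joint score of a path = sum of its nodes' log-probabilities,
    # i.e. the log of the product of the per-node probabilities.
    return sum(node_logprob[node] for node in path)

best = max(candidate_paths, key=path_score)
print(best)  # ('Comparison', 'Contrast', 'however')
```

Scoring whole paths rather than independent labels lets the hierarchy's structure rule out inconsistent combinations, e.g. a top-level sense paired with a second-level type from a different branch.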