2023
DOI: 10.1109/access.2023.3253128
Multiple POS Dependency-Aware Mixture of Experts for Frame Identification

Abstract: Frame identification, which is finding the exact evoked frame for a target word in a given sentence, is a fundamental and crucial prerequisite for frame semantic parsing. It is generally seen as a classification task for target words, whose contextual representations are usually obtained using a neural network like BERT as an encoder, and enriched with a joint learning model or the knowledge of FrameNet. However, the distinction at a fine-grained level, such as the delicate differences in the information of sy…
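The abstract describes frame identification as a classification task over the target word's contextual representation from an encoder such as BERT. Below is a minimal, illustrative sketch of that general setup, assuming the Hugging Face transformers library; the model name, the frame count (1,221, roughly FrameNet 1.7's inventory), and the FrameClassifier class are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: frame identification as target-word classification
# over BERT contextual representations (illustrative, not the paper's model).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FrameClassifier(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased", num_frames: int = 1221):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Linear head over the target word's contextual vector.
        self.classifier = nn.Linear(hidden, num_frames)

    def forward(self, input_ids, attention_mask, target_index):
        # Contextual representations for every token in the sentence.
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Select the representation of the target word (its first sub-token).
        batch_idx = torch.arange(hidden_states.size(0))
        target_repr = hidden_states[batch_idx, target_index]
        return self.classifier(target_repr)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FrameClassifier()
enc = tokenizer("She bought a new car yesterday.", return_tensors="pt")
# Suppose token position 2 ("bought") is the target word evoking a frame.
logits = model(enc["input_ids"], enc["attention_mask"], target_index=torch.tensor([2]))
predicted_frame = logits.argmax(dim=-1)
```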

Cited by 1 publication (1 citation statement). References 36 publications.
“…However, challenges in complexity and computational resource requirements have been identified, where the integration of multiple expert networks demands higher computational power [53,54,55,52]. Another significant observation is the improvement in context-aware processing, particularly in languages with complex grammatical structures [56,57,58]. Lastly, studies suggest that while MoE models offer advancements in accuracy and context sensitivity, integrating these models into existing systems poses practical challenges, including model training and deployment complexities [55,59,60].…”
Section: Mixture of Experts (MoE) in Language Models (citation type: mentioning)
confidence: 99%
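The citation statement above discusses mixture-of-experts models and the extra compute their multiple expert networks require. As a point of reference, here is a minimal sketch of a dense MoE layer with softmax gating; the dimensions, expert count, and class name are illustrative assumptions, not the architecture of the cited works.

```python
# Minimal sketch of a mixture-of-experts (MoE) layer: a softmax gate weights
# the outputs of several expert feed-forward networks (illustrative only).
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, dim: int = 768, num_experts: int = 4, hidden: int = 2048):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # produces routing weights

    def forward(self, x):  # x: (batch, dim)
        weights = torch.softmax(self.gate(x), dim=-1)                    # (batch, E)
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        # Weighted sum over experts; every expert runs here, which illustrates
        # the additional computational cost the cited works point out.
        return torch.einsum("be,bed->bd", weights, expert_outs)

moe = MixtureOfExperts()
out = moe(torch.randn(8, 768))
```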