2021 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata52589.2021.9671825
Distant-Supervised Slot-Filling for E-Commerce Queries

Cited by 5 publications (3 citation statements)
References 32 publications

“…The usefulness of these agents was at its peak during the pandemic, when life was restricted without human assistance for any problem. Designing these agents requires two main components, namely slot filling (Firdaus et al., 2021; Manchanda et al., 2021) and intent detection (Popov et al., 2019; Reza et al., 2020), and their performance has a direct effect on understanding human language and efficiently performing various natural-language-related tasks (Yang et al., 2021; Abro et al., 2022). Most research considers existing datasets and applies slot filling and intent detection models to them either separately or in an integrated way.…”
Section: Introduction (mentioning)
confidence: 99%
“…These methods pretrain large neural networks on self-supervised tasks in order to encode common contextual knowledge in the structured input. Another approach imparts domain-specific knowledge by pretraining models on tasks related to the target task but for which abundant labeled data is available [21,32]. The pretrained model is then fine-tuned on the limited training data of the target task.…”
Section: Transfer Learning (mentioning)
confidence: 99%
“…In the case of small datasets, transfer learning has been very successful, as exemplified by BERT in natural language processing. Unlike the method of pretraining large neural networks on self-supervised tasks in these models, we pretrain the model on another relevant task with abundant labeled data, then fine-tune the pretrained model on limited data from the target task.…”
(mentioning)
confidence: 99%
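
The citation statements above describe the same transfer-learning recipe: pretrain a tagger on a related task where labeled data is abundant (for example, distantly supervised labels), then fine-tune the pretrained model on the limited labeled data of the target task. The following is a minimal sketch of that recipe, not the paper's released code; all names (TokenTagger, synthetic_batches, the label counts, learning rates) are hypothetical placeholders chosen only to illustrate the pretrain-then-fine-tune idea.

```python
import torch
import torch.nn as nn

class TokenTagger(nn.Module):
    """Tiny BiLSTM token tagger: embeddings -> BiLSTM encoder -> per-token label head."""
    def __init__(self, vocab_size, num_labels, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):
        hidden_states, _ = self.encoder(self.embed(token_ids))
        return self.head(hidden_states)  # (batch, seq_len, num_labels)

def train(model, batches, epochs, lr):
    """Training loop shared by the pretraining and fine-tuning stages."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for token_ids, labels in batches:
            optimizer.zero_grad()
            logits = model(token_ids)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
            loss.backward()
            optimizer.step()

VOCAB, SEQ_LEN = 1000, 12
NUM_SOURCE_LABELS, NUM_TARGET_LABELS = 20, 8

def synthetic_batches(num_batches, num_labels, batch_size=16):
    """Stand-in for real data loaders: random token ids with random per-token labels."""
    return [(torch.randint(0, VOCAB, (batch_size, SEQ_LEN)),
             torch.randint(0, num_labels, (batch_size, SEQ_LEN)))
            for _ in range(num_batches)]

# 1) Pretrain on a related task with abundant (e.g. distantly supervised) labels.
model = TokenTagger(VOCAB, NUM_SOURCE_LABELS)
train(model, synthetic_batches(50, NUM_SOURCE_LABELS), epochs=2, lr=1e-3)

# 2) Replace the output head for the target label set, keep the pretrained encoder,
#    and fine-tune on the target task's limited labeled data with a smaller learning rate.
model.head = nn.Linear(model.head.in_features, NUM_TARGET_LABELS)
train(model, synthetic_batches(5, NUM_TARGET_LABELS), epochs=2, lr=1e-4)
```

The key design point in this sketch is that only the output head is reinitialized for the target label set; the encoder keeps its pretrained weights, and a smaller learning rate is used during fine-tuning so the limited target data does not erase what was learned from the abundant source task.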