2020 28th European Signal Processing Conference (EUSIPCO), 2021
DOI: 10.23919/eusipco47968.2020.9287432

An Effective Contextual Language Modeling Framework for Speech Summarization with Augmented Features

Cited by 6 publications (8 citation statements)
References 21 publications
“…The performance of the downstream NLP task is directly affected by the recognition errors [31]. Previous studies improved the robustness of the NLP back-end to ASR errors using various auxiliary information sources from the ASR system, e.g., probabilities, recognition hypotheses, and hidden states [4,6,8,10,11,[32][33][34][35][36][37].…”
Section: Conventional Interconnection of ASR and TS Systems (mentioning, confidence: 99%)
“…For speech summarization, Weng et al. proposed including a confidence embedding in the input of BERTSum [29] to achieve robust speech summarization [4]. They modified the input embedding vectors of BERTSum to be the sum of the sub-word embedding vector and a confidence embedding.…”
Section: Conventional Interconnection of ASR and TS Systems (mentioning, confidence: 99%)
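The statement above describes augmenting BERTSum's input representation with ASR confidence information. The following is a minimal PyTorch sketch of that idea, assuming per-token confidence scores are bucketed into discrete bins with learned embeddings; the class name, bin count, and bucketing scheme are illustrative assumptions and not details taken from the cited paper.

```python
import torch
import torch.nn as nn


class ConfidenceAugmentedEmbedding(nn.Module):
    """Sketch: input embedding = sub-word embedding + confidence embedding.

    Names and the binning scheme are assumptions for illustration,
    not the exact formulation used by Weng et al.
    """

    def __init__(self, vocab_size, hidden_size, num_confidence_bins=10):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab_size, hidden_size)
        # ASR confidence scores are bucketed into discrete bins,
        # each mapped to a learned embedding vector.
        self.confidence_embedding = nn.Embedding(num_confidence_bins, hidden_size)
        self.num_bins = num_confidence_bins

    def forward(self, token_ids, confidence_scores):
        # token_ids: (batch, seq_len) sub-word indices from the ASR transcript
        # confidence_scores: (batch, seq_len) per-token ASR confidences in [0, 1]
        bins = torch.clamp(
            (confidence_scores * self.num_bins).long(), max=self.num_bins - 1
        )
        # Sum of sub-word and confidence embeddings; in a full model this would
        # be combined with BERT's positional and segment embeddings as usual.
        return self.token_embedding(token_ids) + self.confidence_embedding(bins)
```

As a usage note, such a module would replace the token-embedding lookup at the very front of BERTSum, so the rest of the summarizer remains unchanged while each input token carries a signal about how reliable the ASR system considered it.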