IEEE Workshop on Automatic Speech Recognition and Understanding, 2005
DOI: 10.1109/asru.2005.1566470
The multi-channel Wall Street Journal audio visual corpus (MC-WSJ-AV): specification and initial experiments

Cited by 138 publications (132 citation statements)
References 8 publications
“…The first used the single distant microphone data from the MC-WSJ-CAM0 task [15], where the data were recorded in a "real" environment. The second task, based on the AURORA4 data [16], had additive and reverberant noise artificially added.…”
Section: Results
confidence: 99%
“…The MC-WSJ-AV data [15] are divided into one development set (dev1) and two evaluation sets (evl1 and evl2) for each of three conditions. Only one of the conditions, single stationary speaker, was used in these evaluations.…”
Section: MC-WSJ-AV Task
confidence: 99%
“…The speech utterances are taken from the Wall Street Journal (WSJ) corpus (Lincoln et al., 2005). This database provides a broad phonetic space for speech separation evaluation.…”
Section: Acoustic and Analysis Setup
confidence: 99%
“…We performed far-field automatic speech recognition experiments on the PASCAL Speech Separation Challenge 2 (SSC2) [11] corpus. The data contain simultaneous recordings of two speakers, and the utterances are drawn from the 5,000-word vocabulary Wall Street Journal (WSJ) task.…”
Section: Experiments and Results
confidence: 99%
“…Therefore, we propose to first separate the target speech and the interfering speech using MMI beamforming techniques, followed by a Zelinski- and binary-masking-based postfilter, and then to apply the mapping method for estimating the MFCCs of the clean speech. Our studies on the PASCAL SSC2 corpus [11] show the effectiveness of the proposed methods.…”
Section: Introduction
confidence: 95%