A tonal language is one in which the pitch with which a word is spoken modifies its meaning. In this work, we perform a rigorous analysis of intonation changes, or pitch contours, produced by native Mandarin speakers in order to predict the tone-contour type. Pitch contours are estimated using a number of different methods, and each contour's Mel-Frequency Cepstral Coefficients (MFCCs) are also measured. The dataset was automatically generated from the Aishell open-source Mandarin speech corpus: each sample was aligned with its transcript using the Montreal Forced Aligner and segmented into individual words. The resulting corpus covers 11 topic domains spoken by 400 individuals. Separate development, training, and testing datasets are created to ensure the integrity of our results. Pitch contours and their MFCCs are evaluated with a number of machine learning techniques, including clustering, regression, and traditional Deep Neural Network (DNN) approaches; MFCCs are additionally processed using convolutional neural networks. The models are used to predict the corresponding tone for a contour. Our work seeks to determine which intonation representations perform best for machine learning tasks. The resulting tool is used to provide audio and visual feedback to learners of tonal languages. [Work supported by RPI Seed Grant and CISL.]
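As a rough illustration of the per-word feature-extraction step described above, the sketch below estimates a pitch contour and MFCCs for one word segment. This is a minimal sketch, not the authors' pipeline: librosa's pYIN tracker stands in for whichever of the several pitch-estimation methods were actually used, and the file name, frequency bounds, and MFCC count are assumptions.

```python
# Minimal sketch of per-word pitch-contour and MFCC extraction.
# Assumes librosa is installed; "word.wav" is a hypothetical single-word
# segment produced by the forced-alignment step.
import librosa
import numpy as np

def extract_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)           # keep native sample rate
    # Pitch contour via probabilistic YIN (one of several possible estimators)
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),            # ~65 Hz lower bound (assumed)
        fmax=librosa.note_to_hz("C6"),            # ~1047 Hz upper bound (assumed)
        sr=sr)
    f0 = f0[voiced]                               # keep voiced frames only
    # MFCCs computed over the same segment
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return f0, mfcc

contour, mfccs = extract_features("word.wav")
print(contour.shape, mfccs.shape)                 # (voiced frames,), (13, frames)
```

The resulting contour (a 1-D frequency track) would feed the clustering, regression, and DNN models, while the 2-D MFCC matrix is the natural input for the convolutional models.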
The Rensselaer Mandarin Project enables a group of foreign-language students to improve functional understanding, pronunciation, and vocabulary in Mandarin Chinese through authentic speaking situations in a virtual visit to China. Students use speech, gestures, and combinations thereof to navigate an immersive, mixed-reality, stylized-realism game experience through interaction with AI agents, immersive technologies, and game mechanics. The environment was developed in a black-box theater equipped with a human-scale 360° panoramic screen (140h, 200r), arrays of markerless motion-tracking sensors, and speakers for spatial audio.
Learning to speak a second language involves a cycle of observation, mimicry, and feedback: a student observes a teacher, attempts to copy the teacher's performance, and the teacher then provides feedback on how the student performed. When a student's access to feedback is limited, so is their ability to reinforce their learning. In this work, a web-based application, Speakeasy tools, is introduced to provide remote students with automated visual intonation feedback for multiple languages. For this study, participants are selected from the pool of Speakeasy users, and their interactions with the application are observed over a set period of time. The application presents participants with native-speaker examples generated via text-to-speech in the form of audio samples, fundamental frequency visualizations, and grapheme- and phoneme-level timelines. Participants are able to record and review an unlimited number of practice attempts, which are processed using the same pipeline used for the native-speaker examples. Each practice attempt is assigned a score using the Root-Mean-Square Error (RMSE) between the native and participant fundamental frequencies over time. Practice-attempt scores with respect to time spent using the application provide a metric for measuring a participant's progress. [Work supported by RPI Seed Grant and CISL.]
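A minimal sketch of the RMSE scoring step is shown below. The abstract does not say how contours of different durations are aligned before comparison, so the linear resampling to a common time base, the function name, and the example frequency values are all assumptions for illustration only.

```python
# Minimal sketch of scoring a practice attempt against a native example
# via RMSE between two fundamental-frequency (F0) contours.
import numpy as np

def rmse_score(native_f0, learner_f0, n_points=100):
    """RMSE (Hz) between two F0 contours, linearly resampled to a
    shared normalized time base so differing durations can be compared."""
    t = np.linspace(0.0, 1.0, n_points)
    native = np.interp(t, np.linspace(0.0, 1.0, len(native_f0)), native_f0)
    learner = np.interp(t, np.linspace(0.0, 1.0, len(learner_f0)), learner_f0)
    return float(np.sqrt(np.mean((native - learner) ** 2)))

# Hypothetical example: a learner overshooting a rising contour
native = np.linspace(180.0, 260.0, 80)    # native F0 track in Hz (assumed)
learner = np.linspace(185.0, 275.0, 95)   # practice attempt in Hz (assumed)
print(f"score (RMSE, Hz): {rmse_score(native, learner):.1f}")
```

Lower scores indicate closer agreement with the native contour, so a participant's score trajectory over time gives the progress metric described above.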
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.