The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, one of the two tasks was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and the evaluation methodology, describe data preparation, report and analyze the main results, and provide a brief categorization of the approaches taken by the participating systems.
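The standard metric in such dependency-parsing evaluations is labeled attachment score (LAS): the proportion of words assigned both the correct head and the correct relation label. A minimal sketch of the idea, assuming tokens are already aligned and ignoring the multiword-token and empty-node complications of real CoNLL-U data:

```python
def las(gold, system):
    """Labeled attachment score: fraction of words whose predicted
    (head, deprel) pair matches the gold annotation exactly.

    gold, system: lists of (head_index, deprel) tuples, one per word,
    assumed to be aligned one-to-one (a simplification of the real task,
    where system tokenization may differ from the gold tokenization)."""
    assert len(gold) == len(system)
    correct = sum(1 for g, s in zip(gold, system) if g == s)
    return correct / len(gold)

# Toy example: "She runs fast" with one mislabeled dependency.
gold = [(2, "nsubj"), (0, "root"), (2, "advmod")]
system = [(2, "nsubj"), (0, "root"), (2, "obj")]
print(las(gold, system))  # 2 of 3 words fully correct
```

The unlabeled variant (UAS) drops the relation label and compares heads only; the 2017 task additionally had to align system tokens to gold tokens before scoring, since no gold segmentation was provided.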
Universal Dependencies (UD) is a framework for morphosyntactic annotation of human language, which to date has been used to create treebanks for more than 100 languages. In this article, we outline the linguistic theory of the UD framework, which draws on a long tradition of typologically oriented grammatical theories. Grammatical relations between words are centrally used to explain how predicate–argument structures are encoded morphosyntactically in different languages, while morphological features and part-of-speech classes give the properties of words. We argue that this theory is a good basis for cross-linguistically consistent annotation of typologically diverse languages in a way that supports computational natural language understanding as well as broader linguistic studies.
This paper investigates the relevance of three prosodic parameters (alignment, duration and scaling) in the conveyance of contrastive focus in Catalan, Italian and Spanish. In particular, we seek to determine how the Effort Code is instantiated in the expression of contrastive focus in both production and perception. According to the Effort Code, putting more effort into speech production will lead to greater articulatory precision (de Jong 1995, Gussenhoven 2004) and this is related to the expression of focus in the sense that wider pitch excursions will be used to signal meanings that are relevant from an informational point of view. A dual production and perception experiment based on an identification task was conducted. Results for the production part show that contrastive focus accents have earlier peaks for all three languages but f0 peaks are systematically lower only in Italian. Syllables bearing the contrastive focus accents are also longer in the three languages. Regarding the results for the perception part, converging evidence is found not only for an active perceptual use of the three prosodic parameters present in production but also for language-specific preferences for particular prosodic parameters.
The AG500 electromagnetic articulograph is widely used to reconstruct the movements of the articulatory organs. Nevertheless, some anomalies in its performance have been observed. It is well known that the accuracy of the device is affected by electromagnetic interference and by hardware failures or damage to the sensors. In this study, after eliminating any hardware or electromagnetic source of disturbance, a set of trials was carried out. The tests prove that anomalies in sensor position tracking are systematic in certain regions within the recording volume and, more importantly, show a specific pattern that can be clearly attributed to incorrect convergence of the calculation method.
The link between musical structure and evoked visual mental imagery (VMI), that is, seeing in the absence of a corresponding sensory stimulus, has yet to be thoroughly investigated. We explored this link by manipulating the characteristics of four pieces of music for synthesizer, guitars, and percussion (songs). Two original songs were selected on the basis of a pilot study, and two were new, specially composed to combine the musical and acoustical characteristics of the originals. A total of 135 participants were randomly assigned to one of four groups, each of which listened to one song; 73% of participants reported experiencing VMI. Participants' descriptions of the mental imagery evoked by a given song showed similarities, with clear differences between songs. A combination of coding and content analysis produced 10 categories: Nature, Places and settings, Objects, Time, Movements and events, Color(s), Humans, Affects, Literal sound, and Film. Regardless of whether or not they had reported experiencing VMI, participants then carried out a card-sorting task in which they selected the terms they thought best described a scene or setting appropriate to the music they had heard and rated emotional dimensions. The results confirmed those of the content analysis. Taken together, participants' ratings, descriptions of VMI, and selection of terms in the card-sorting task confirmed that new songs combining the characteristics of the original songs evoke the elements of VMI associated with the latter. The findings are important for understanding the musical and acoustical characteristics that may influence our experiences of music, including VMI.