Word order flexibility is one of the distinctive features of SOV languages. In this work we investigate whether the order and relative distance of preverbal dependents in Hindi, an SOV language, are affected by factors motivated by efficiency considerations during language comprehension and production. We investigate the influence of Head-Dependent Mutual Information (HDMI), similarity-based interference, accessibility and case-marking. Results show that preverbal dependents remain close to the verbal head when the HDMI between the verb and its dependent is high. This demonstrates the influence of locality constraints on dependency distance and word order in an SOV language. Additionally, dependency distances were found to be longer when the dependent was animate, when it was case-marked and when it was semantically similar to other preverbal dependents. Together the results highlight the cross-linguistic generalizability of these factors and provide evidence for a functionally motivated account of word order in SOV languages such as Hindi.
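The HDMI measure is, in essence, a mutual-information score between a verb and a dependent, estimated from co-occurrence counts in a treebank. The sketch below computes pointwise mutual information over (verb lemma, dependent lemma) pairs; the counting scheme, function name, and the Hindi toy lemmas are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def hdmi_scores(pairs):
    """Pointwise mutual information for (verb, dependent) lemma pairs.

    pairs: list of (verb_lemma, dependent_lemma) tuples, e.g. extracted
           from head-dependent arcs in a dependency treebank
    returns: dict mapping each pair to log2( p(v,d) / (p(v) * p(d)) )
    """
    pair_counts = Counter(pairs)
    verb_counts = Counter(v for v, _ in pairs)   # marginal counts of verbs
    dep_counts = Counter(d for _, d in pairs)    # marginal counts of dependents
    total = len(pairs)

    return {
        (v, d): math.log2((n / total) / ((verb_counts[v] / total) * (dep_counts[d] / total)))
        for (v, d), n in pair_counts.items()
    }

# Toy usage: a strongly associated pair scores higher than a rare pairing.
pairs = [("parh", "kitaab")] * 8 + [("parh", "darvaaza")] + [("khol", "darvaaza")] * 6
scores = hdmi_scores(pairs)
print(scores[("parh", "kitaab")] > scores[("parh", "darvaaza")])  # True
```

On this reading of the abstract's finding, dependents forming high-scoring pairs with the verb would tend to surface closer to it.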
Language networks have been proposed as the underlying representation for syntactic knowledge (Roelofs, 1992; Pickering and Branigan, 1998). Such networks are known to explain various word-order-related priming effects in psycholinguistics. Under the assumption that word order information is encoded in these networks, we explore whether Greenbergian word order universals (Greenberg, 1963) can be induced from such networks. Language networks for 34 languages were constructed from the Universal Dependencies Treebank (Nivre et al., 2016) based on the assumptions in Roelofs (1992) and Pickering and Branigan (1998). We conducted a series of experiments to investigate whether certain network parameters can be used to cluster languages according to the word order typology proposed by Greenberg. Our results show that some network parameters robustly cluster the languages correctly, thereby providing some support for language networks as a valid representation for such linguistic generalizations.
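One plausible rendering of such a language network links each dependent's lemma to its head's lemma over a Universal Dependencies treebank in CoNLL-U format. The sketch below builds an undirected lemma graph with networkx and lists a few standard graph statistics of the kind that might serve as clustering features; the paper's exact construction and parameter set may differ, and the file name is hypothetical:

```python
import networkx as nx

def build_language_network(conllu_path):
    """Undirected lemma network from a UD treebank: nodes are lemmas,
    edges connect each dependent's lemma to its head's lemma."""
    G = nx.Graph()

    def add_sentence(sent):
        id2lemma = {cols[0]: cols[2] for cols in sent}  # token ID -> LEMMA
        for cols in sent:
            head = cols[6]  # HEAD column of the CoNLL-U format
            if head != "0" and head in id2lemma:  # "0" marks the root
                G.add_edge(id2lemma[head], cols[2])

    sent = []
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line ends a sentence
                if sent:
                    add_sentence(sent)
                sent = []
            elif not line.startswith("#"):    # skip sentence metadata
                cols = line.split("\t")
                if cols[0].isdigit():         # skip multiword ranges like "3-4"
                    sent.append(cols)
    if sent:                                  # flush a final sentence, if any
        add_sentence(sent)
    return G

# Candidate network parameters for typological clustering, e.g.:
#   G = build_language_network("hi_hdtb-ud-train.conllu")
#   print(G.number_of_nodes(), nx.density(G), nx.average_clustering(G))
```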
Verbal prediction has been shown to be critical during online comprehension of Subject-Object-Verb (SOV) languages. In this work we present three computational models to predict clause-final verbs in Hindi given their prior arguments. The models differ in their use of prior context during the prediction process: the context is either noisy or noise-free. Model predictions are compared with sentence completion data obtained from native speakers of Hindi. Results show that the models that assume noisy context outperform the noise-free model. In particular, a lossy context model that assumes prior context to be affected by predictability and recency best captures the distribution of the predicted verb class and the sources of error. The success of the predictability-recency lossy context model is consistent with the noisy channel hypothesis for sentence comprehension and supports the idea that the reconstruction of the context during prediction is driven by prior linguistic exposure. These results also shed light on the nature of the noise that affects the reconstruction process. Overall, the results pose a challenge to the adaptability hypothesis, which assumes the use of noise-free preverbal context for robust verbal prediction.
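The abstract does not give the lossy context model's functional form. As one way to make the predictability-recency idea concrete, the toy sketch below retains each preverbal word with a probability that decays with its distance from the verb slot (recency) and grows with its surprisal (predictability); the retention formula, parameter names, and values are assumptions for illustration only:

```python
import math
import random

def sample_lossy_context(words, surprisals, decay=0.5, beta=0.3, seed=0):
    """Sample one noisy version of the preverbal context.

    words:      preverbal words in order of occurrence
    surprisals: per-word surprisal, e.g. from a language model (higher =
                less predictable, assumed here to be better retained)
    returns:    list where lost words are replaced by None
    """
    rng = random.Random(seed)
    n = len(words)
    noisy = []
    for i, (word, surprisal) in enumerate(zip(words, surprisals)):
        distance = n - i  # positions before the verb slot
        p_keep = min(1.0, math.exp(-decay * distance + beta * surprisal))
        noisy.append(word if rng.random() < p_keep else None)
    return noisy

# Toy usage: earlier, highly predictable words are more likely to be lost
# from the context that the verb prediction is conditioned on.
print(sample_lossy_context(["raam-ne", "seetaa-ko", "kitaab"], [2.0, 3.5, 1.0]))
```

Under this kind of noise model, verb prediction would be made from the reconstructed (partially lost) context rather than the veridical one, in line with the noisy channel framing.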