Recent research has shown that seemingly identical suffixes such as word-final /s/ in English show systematic differences in their phonetic realisations. Most recently, durational differences between different types of /s/ have been found to hold for pseudowords as well: the duration of /s/ is longest in non-morphemic contexts, shorter with suffixes, and shortest in clitics. At the theoretical level such systematic differences are unexpected and unaccounted for in current theories of speech production. Following a recent approach, we implemented a linear discriminative learning (LDL) network trained on real word data in order to predict the duration of word-final non-morphemic and plural /s/ in pseudowords, using production data from a previous production study. It is demonstrated that the duration of word-final /s/ in pseudowords can be predicted by LDL networks trained on real word data. That is, the duration of word-final /s/ in pseudowords can be predicted based on their relations to the lexicon.
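The core of such an LDL network is a linear mapping from form vectors to semantic vectors, estimated over real words and then applied to unseen pseudowords. The following is a minimal illustrative sketch of that idea, assuming toy cue and semantic matrices (all names and values are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical toy data: each row is a training word. Columns of C are
# form cues (e.g. triphones); columns of S are semantic dimensions.
C = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 0., 0.]])
S = np.array([[0.2, 0.7],
              [0.9, 0.1],
              [0.5, 0.5]])

# Estimate the comprehension mapping F as a least-squares solution
# (the analytical endstate of linear discriminative learning): S ~ C @ F.
F = np.linalg.lstsq(C, S, rcond=None)[0]

# A pseudoword's cue vector can now be projected into the semantic
# space, even though the network never saw this word during training.
pseudo_cues = np.array([1., 0., 0., 1.])
pseudo_semantics = pseudo_cues @ F

# Measures derived from such predicted vectors (e.g. their relation to
# the semantics of real words) can then serve as predictors of /s/
# duration in a subsequent statistical model.
```

The mapping is linear, so prediction for novel forms reduces to a single matrix product, which is what makes the approach applicable to pseudowords at all.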
Previous research suggests that different types of word-final /s/ and /z/ (e.g. non-morphemic vs. plural or clitic morpheme) in English show realisational differences in duration. However, there is disagreement on the nature of these differences, as experimental studies have provided evidence for durational differences in the opposite direction to results from corpus studies (i.e. non-morphemic > plural > clitic /s/). The experimental study reported here focuses on four types of word-final /s/ in English, i.e. non-morphemic, plural, and is- and has-clitic /s/. We conducted a pseudoword production study with native speakers of Southern British English. The results show that non-morphemic /s/ is significantly longer than plural /s/, which in turn is longer than clitic /s/, while there is no durational difference between the two clitics. This aligns with previous corpus studies rather than experimental studies. Thus, the morphological category of a word-final /s/ appears to be a robust predictor of its phonetic realisation, influencing speech production in such a way that systematic subphonemic differences arise. This finding calls for revisions of current models of speech production in which morphology plays no role in later stages of production.
Findings of previous behavioural studies suggest that the semantics of what is known as the ‘masculine generic’ in Modern Standard German is in fact not generic but biased towards a masculine reading. Such findings are a cause of debate within and outside linguistic research, as they run counter to the grammarians' assumption that the masculine generic form is gender-neutral. The present paper aims to explore the semantics of masculine generics, relating them to those of masculine and feminine explicit counterparts. To achieve this aim, an approach novel to this area of linguistic research is employed: discriminative learning. Analysing semantic vectors obtained via naive discriminative learning and semantic measures calculated via linear discriminative learning, and taking into account the stereotypicality of the words under investigation, it is found that masculine generics are semantically much more similar to masculine explicits than to feminine explicits. The results presented in this paper thus support the notion of a masculine bias in masculine generics. Further, new insights into the semantic representations of masculine generics are provided, and it is shown that stereotypicality does not modulate the masculine bias.
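The similarity comparison underlying such an analysis can be illustrated with a small sketch: semantic vectors are compared via cosine similarity, and a masculine bias surfaces as a higher similarity of the generic to the masculine explicit than to the feminine explicit. The vectors below are invented toy values; real vectors would come from a trained (naive or linear) discriminative learning network:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two semantic vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy semantic vectors (illustrative values only).
generic   = np.array([0.8, 0.1, 0.3])  # masculine generic use
masc_expl = np.array([0.9, 0.2, 0.2])  # masculine explicit reading
fem_expl  = np.array([0.1, 0.9, 0.4])  # feminine explicit counterpart

# With these toy vectors, the generic patterns with the masculine
# explicit, i.e. the pattern the paper reports as a masculine bias.
bias = cosine(generic, masc_expl) > cosine(generic, fem_expl)
```

In the actual study, such pairwise similarities are computed across many noun triples and related to stereotypicality ratings; the sketch only shows the shape of a single comparison.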
Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis that can lead to substantially different conclusions based on the same data set. Thus, researchers have expressed their concerns that these researcher degrees of freedom might facilitate bias and can lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling but also from decisions regarding the quantification of the measured behavior. In this study, we gave the same speech-production data set to 46 teams of researchers and asked them to answer the same research question, resulting in substantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further found little to no evidence that the observed variability can be explained by analysts’ prior beliefs, expertise, or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share details of their analysis, strengthen the link between theoretical construct and quantitative system, and calibrate their (un)certainty in their conclusions.