How do speakers comprehend and produce complex words? In the theory of the Discriminative Lexicon, this is hypothesized to be the result of mapping the phonology of whole word forms onto their semantics and vice versa, without recourse to morphemes. This raises the question of whether the hypothesis also holds in highly agglutinative languages, which are often seen to exemplify the compositional nature of morphology. On the one hand, one could expect the hypothesis to be correct for agglutinative languages as well, since it remains unclear whether speakers are able to isolate the morphemes they would need for compositional processing. On the other hand, agglutinative languages have so many different words that it is not obvious how speakers can rely on their knowledge of whole words to comprehend and produce them.

In this paper, we investigate the comprehension and production of verbs in Kinyarwanda, an agglutinative Bantu language, by means of computational modeling within the Discriminative Lexicon, a theory of the mental lexicon that is grounded in word-and-paradigm morphology, distributional semantics, and error-driven learning, draws on insights from psycholinguistic theories, and is implemented mathematically and computationally as a shallow, two-layered network.

To this end, we compiled a data set of 11,528 verb forms and annotated each verb form for its meaning and grammatical functions. In addition, we extracted from the full data set a subset of 573 verbs whose meanings are based on word embeddings. To assess comprehension and production of Kinyarwanda verbs, we fed both data sets into the Linear Discriminative Learning algorithm, a two-layered, fully connected network. One layer represents phonological form and the other represents meaning. Comprehension is modeled as a mapping from phonology to meaning, and production as a mapping from meaning to phonology.
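The two mappings can be sketched with a toy example. Everything below (matrix sizes, binary cues, random semantic vectors) is illustrative stand-in data, not the Kinyarwanda material; it shows only the general least-squares setup of Linear Discriminative Learning:

```python
import numpy as np

# Toy illustration of the two linear mappings in Linear Discriminative
# Learning. All numbers here are made up for the example.
rng = np.random.default_rng(0)
n_words, n_dims = 6, 4

# Form matrix C: one row per word, binary phonological cues (e.g. triphones).
# The identity block guarantees every word has at least one unique cue.
C = np.hstack([np.eye(n_words),
               rng.integers(0, 2, size=(n_words, 4))]).astype(float)

# Semantic matrix S: one row per word, a (here random) semantic vector.
S = rng.normal(size=(n_words, n_dims))

# Comprehension: solve C @ F ~ S by least squares (pseudoinverse).
F = np.linalg.pinv(C) @ S
S_hat = C @ F                      # predicted meanings

# Production: the reverse mapping, from meanings to forms.
G = np.linalg.pinv(S) @ C
C_hat = S @ G                      # predicted forms

# A word counts as correctly comprehended when its predicted semantic
# vector correlates most strongly with its own target vector.
corr = np.corrcoef(S_hat, S)[:n_words, n_words:]
accuracy = (corr.argmax(axis=1) == np.arange(n_words)).mean()
```

Because each toy word has a unique cue, C has full row rank here, so the comprehension mapping recovers the semantic vectors exactly; with realistic data the mapping is only approximate.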
Both comprehension and production are learned with high accuracy, on the training data as well as on held-out data, both for the full data set with manually annotated semantic features and for the data set with meanings derived from word embeddings.

Our findings provide support for the central hypotheses of the Discriminative Lexicon: words are stored as wholes; meanings result from the distribution of words in utterances; comprehension and production can be successfully modeled as mappings from form to meaning and vice versa; these mappings can be implemented in a shallow, two-layered network; and they are learned by minimizing errors.
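The held-out evaluation can likewise be sketched: the mapping is estimated on training words only and then applied to unseen words. The data below are again random stand-ins, so the resulting accuracy is not meaningful in itself; the sketch shows only the procedure:

```python
import numpy as np

# Hypothetical sketch of held-out evaluation of a comprehension mapping;
# all data are random stand-ins, not the Kinyarwanda set.
rng = np.random.default_rng(1)
n_words, n_cues, n_dims = 40, 60, 8
C = rng.integers(0, 2, size=(n_words, n_cues)).astype(float)  # forms
S = rng.normal(size=(n_words, n_dims))                        # meanings

train, test = np.arange(30), np.arange(30, 40)

# Estimate the form-to-meaning mapping on the training words only ...
F = np.linalg.pinv(C[train]) @ S[train]
# ... then predict the meanings of the held-out words.
S_pred = C[test] @ F

# A held-out word is scored correct if its predicted vector correlates
# most strongly with its own gold vector among the held-out targets.
corr = np.corrcoef(S_pred, S[test])[:len(test), len(test):]
heldout_acc = (corr.argmax(axis=1) == np.arange(len(test))).mean()
```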