Human languages vary in which meanings they lexicalize, but this variation is constrained. It has been argued that languages are under two competing pressures: the pressure to be simple (e.g., to have a small lexicon) and the pressure to allow for informative (i.e., precise) communication, and that which meanings get lexicalized may be explained by languages finding a good trade-off between these two pressures. However, in certain semantic domains, languages can reach very high levels of informativeness even if they lexicalize very few meanings in that domain. This is due to productive morphosyntax and compositional semantics, which may allow for the construction of meanings that are not lexicalized. Consider the semantic domain of natural numbers: many languages lexicalize only a few natural number meanings as monomorphemic expressions, yet can precisely convey a great many natural number meanings using morphosyntactically complex numerals. In such semantic domains, lexicon size is not in direct competition with informativeness. What explains which meanings are lexicalized in such semantic domains? We will propose that in such cases, languages need to solve a different kind of trade-off problem: the trade-off between the pressure to lexicalize as few meanings as possible (i.e., to minimize lexicon size) and the pressure to produce utterances that are as morphosyntactically simple as possible (i.e., to minimize the average morphosyntactic complexity of utterances). To support this claim, we will present a case study of the numeral systems of 128 natural languages, and show computationally that they achieve a near-optimal trade-off between lexicon size and the average morphosyntactic complexity of numerals. This study, in conjunction with previous work on communicative efficiency, suggests that languages' lexicons are shaped by a trade-off between not two but three pressures: be simple, be informative, and minimize the average morphosyntactic complexity of utterances.
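
To make the trade-off concrete, the following is a minimal toy sketch (our own illustration, not the model, grammars, or data of the study itself). It assumes the meanings to be expressed are the numbers 1 through 99 and approximates the morphosyntactic complexity of a numeral as its morpheme count. It then compares two hypothetical numeral systems: one that lexicalizes every meaning as a monomorphemic word, and a compositional base-10 system with a ten-word lexicon.

```python
# Toy illustration of the lexicon-size vs. average-complexity trade-off.
# Assumptions (ours, for illustration only): meanings are 1..99, and the
# morphosyntactic complexity of a numeral is its number of morphemes.

DIGITS = ["one", "two", "three", "four", "five",
          "six", "seven", "eight", "nine"]

def fully_lexicalized(n):
    """Every meaning gets its own monomorphemic word: large lexicon, complexity 1."""
    return [f"word_{n}"]

def base_ten(n):
    """Compositional base-10 system with lexicon {one..nine, ten}.
    E.g., 43 -> ['four', 'ten', 'three'] (cf. 'four tens and three')."""
    tens, units = divmod(n, 10)
    expr = []
    if tens > 1:
        expr.append(DIGITS[tens - 1])   # multiplier morpheme
    if tens >= 1:
        expr.append("ten")              # base morpheme
    if units:
        expr.append(DIGITS[units - 1])  # addend morpheme
    return expr

def evaluate(system, meanings):
    """Return (lexicon size, average morpheme count) for a numeral system."""
    forms = [system(n) for n in meanings]
    lexicon = {morpheme for form in forms for morpheme in form}
    avg_complexity = sum(len(form) for form in forms) / len(forms)
    return len(lexicon), avg_complexity

meanings = range(1, 100)
for name, system in [("lexicalized", fully_lexicalized), ("base-10", base_ten)]:
    size, avg = evaluate(system, meanings)
    print(f"{name:12s} lexicon size = {size:3d}  avg. complexity = {avg:.2f}")
# lexicalized  lexicon size =  99  avg. complexity = 1.00
# base-10      lexicon size =  10  avg. complexity = 2.63
```

Shrinking the lexicon forces numerals to be built compositionally and so raises their average morphosyntactic complexity, and vice versa; attested numeral grammars are of course richer than this sketch (irregular forms such as "eleven", multiple bases, subtraction, etc.), which is why the optimality of real systems must be assessed computationally over a space of such grammars.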