It has been suggested that the origins of number words can be traced back to an evolutionarily ancient approximate number system, which represents quantities on a compressed scale and obeys Weber's law. Here, we use a data-driven computational model, which learns to predict one event (a word in a text corpus) from associated events, to characterize verbal behavior involving number words in natural language, without appeal to perception. We show that the way humans use number words in spontaneous language reliably depends on numerical ratio, a clear signature of Weber's law, thus perfectly mirroring human and non-human psychophysical performance in comparative judgments of numbers. Most notably, this adherence to Weber's law is robustly replicated across a wide range of languages. Together, these findings suggest that the everyday use of number words in language rests upon a pre-verbal approximate number system, which would thus affect the handling of numerical information not only at the input level but also at the level of verbal production.
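The ratio signature of Weber's law referred to above can be sketched with a standard log-Gaussian model of the approximate number system; this is an illustrative toy model, not the paper's corpus-based method, and the Weber fraction w = 0.15 is an assumed value, not one estimated from the study's data.

```python
import math

def p_correct(n1, n2, w=0.15):
    """Probability of correctly judging which of n1, n2 is larger, under a
    log-Gaussian approximate-number model with Weber fraction w (an
    illustrative assumption). Each quantity is represented as a Gaussian on
    a log scale with common spread w, so discriminability depends only on
    the ratio n1 / n2, as Weber's law requires."""
    # Normalized distance between the two internal representations
    d = abs(math.log(n1) - math.log(n2)) / (w * math.sqrt(2))
    # Standard-normal CDF of that distance gives the comparison accuracy
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

# Pairs sharing the same ratio are equally discriminable (Weber's law):
print(p_correct(8, 10) == p_correct(16, 20))  # → True (both ratios are 0.8)
# A more extreme ratio yields higher comparison accuracy:
print(p_correct(8, 16) > p_correct(8, 10))    # → True
```

Because only the log-ratio enters the computation, absolute magnitudes cancel out: discriminating 8 from 10 is exactly as hard as discriminating 16 from 20, which is the ratio dependence reported for number-word use in the corpus analyses.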