The brain must represent the outside world in a way that enables an animal to survive and thrive. In early sensory systems, populations of neurons have a variety of receptive fields that are structured to detect features in input statistics. Alongside this structure, experimental recordings consistently show that these receptive fields also exhibit a great deal of unexplained variability, which classical models of sensory neurons have often ignored. In this work, we model neuronal receptive fields as random samples from probability distributions in two sensory modalities, using data from insect mechanosensors and from neurons of mammalian primary visual cortex (V1). Specifically, we build generative receptive field models in which the random distributions are Gaussian processes with covariance functions matched to the second-order statistics of experimental receptive field data. We show theoretically that these random feature neurons effectively perform a randomized wavelet transform on the inputs, in the temporal domain for mechanosensory neurons and in the spatial domain for V1 neurons. Such a transformation removes irrelevant components of the inputs, such as high-frequency noise, and boosts the signal. We demonstrate that these random feature neurons enable better learning from fewer training samples and with smaller networks across a variety of artificial tasks. The random feature model of receptive fields provides a unifying, mathematically tractable framework for understanding sensory encodings across both spatial and temporal domains.
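The generative model described above can be sketched minimally in Python: draw temporal receptive fields as samples from a Gaussian process, then encode an input signal by filtering it through those random fields followed by a rectifying nonlinearity. This is only an illustration under stated assumptions: the paper matches the GP covariance to second-order statistics of recorded receptive fields, whereas here a squared-exponential kernel with an arbitrary length scale `ell` stands in for that fitted covariance, and the grid size, neuron count, and ReLU nonlinearity are choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Time grid for temporal receptive fields (assumption: 100 samples on [0, 1]).
t = np.linspace(0.0, 1.0, 100)

# GP covariance over the time grid. The paper fits this to experimental
# receptive field statistics; a squared-exponential kernel with length
# scale `ell` is a stand-in assumption here.
ell = 0.05
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell**2)

# Draw random receptive fields as GP samples: each row of W is one
# neuron's temporal filter. Small jitter keeps the Cholesky factor stable.
n_neurons = 50
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(t)))
W = (L @ rng.standard_normal((len(t), n_neurons))).T  # shape (50, 100)

def encode(x):
    """Random-feature encoding: project onto random filters, then rectify."""
    return np.maximum(W @ x, 0.0)

# Example input: a low-frequency signal plus high-frequency noise.
x = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(len(t))
features = encode(x)
print(features.shape)
```

A linear readout trained on `features` then plays the role of the downstream learning stage; the claim in the text is that such randomly structured filters act like a randomized wavelet transform, suppressing high-frequency noise while preserving signal.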