Gesture recognition with miniaturised radar sensors has received increasing attention as a novel interaction medium. The practical use of radar technology, however, often requires sensing through materials, yet it is still not well understood how the internal structure of these materials affects recognition performance. To tackle this challenge, we collected a large dataset of 14,090 radar recordings of 6 paradigmatic gesture classes sensed through a variety of everyday materials, performed by humans (6 materials) and by a robotic system (75 materials). Next, we developed a hybrid CNN+LSTM deep learning model and derived a robust indirect method to measure signal distortions, which we used to compile a comprehensive catalogue of materials for radar-based interaction. Among other findings, our experiments show that it is possible to estimate how different materials would affect the gesture recognition performance of arbitrary classifiers by selecting just 3 reference materials. Our catalogue, software, models, data collection platform, and labelled datasets are publicly available.
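The abstract names a hybrid CNN+LSTM model but does not describe its layers or input representation. As a purely illustrative sketch of that model family (not the authors' implementation), a per-frame CNN can encode each radar frame and an LSTM can aggregate the frame features over time; the frame size (32×32), feature dimensions, and all class and module names below are assumptions:

```python
# Illustrative sketch only: the paper's actual architecture, layer sizes, and
# input format are not specified in the abstract. Dimensions below are
# assumptions (e.g. 32x32 radar frames, 6 gesture classes).
import torch
import torch.nn as nn


class CNNLSTMGestureClassifier(nn.Module):
    """Generic hybrid CNN+LSTM: a small CNN encodes each radar frame,
    an LSTM models the temporal dynamics across the recording."""

    def __init__(self, num_classes: int = 6, lstm_hidden: int = 64):
        super().__init__()
        # Per-frame spatial feature extractor (input: one 1 x 32 x 32 frame).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),  # -> 32 * 8 * 8 = 2048 features per frame
        )
        self.lstm = nn.LSTM(input_size=2048, hidden_size=lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1, 32, 32) sequence of radar frames.
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, *x.shape[2:]))  # (b*t, 2048)
        feats = feats.reshape(b, t, -1)                    # (b, t, 2048)
        _, (h_n, _) = self.lstm(feats)                     # h_n: (1, b, hidden)
        return self.head(h_n[-1])                          # (b, num_classes) logits


# Example: classify a batch of 4 recordings, each 20 frames long.
logits = CNNLSTMGestureClassifier()(torch.randn(4, 20, 1, 32, 32))
```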