The use of spatially varying reflectance models (SVBRDFs) is the state of the art in physically based rendering, and the ultimate goal is to acquire them from real-world samples. Recently, several promising deep learning approaches have emerged that create such models from a few uncalibrated photos, after being trained on synthetic SVBRDF datasets. While the achieved results are already very impressive, the reconstruction accuracy of these approaches is still far from that of specialized devices. On the other hand, fitting SVBRDF parameter maps to the gigabytes of calibrated HDR images per material acquired by state-of-the-art high-quality material scanners takes on the order of several hours for realistic spatial resolutions. In this paper, we present the first deep learning approach that is capable of producing SVBRDF parameter maps more than two orders of magnitude faster than state-of-the-art approaches, while still providing results of equal quality and generalizing to new materials unseen during training. This is made possible by training our network on a large-scale database of material scans that we have gathered with a commercially available SVBRDF scanner. In particular, we train a convolutional neural network to map calibrated input images to the 13 parameter maps of an anisotropic Ward BRDF, modified to account for Fresnel reflections, and we evaluate the results by comparing the measured images against re-renderings from our SVBRDF predictions. The novel approach is extensively validated on real-world data taken from our material database, which we make publicly available at https://cg.cs.uni-bonn.de/svbrdfs/.
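For reference, the anisotropic Ward BRDF named above is, in its standard form, a minimal sketch of which is given below; the paper's Fresnel-modified variant and the exact breakdown of its 13 parameter maps are not stated in the abstract and are not assumed here.

\[
f_r(\omega_i,\omega_o) \;=\; \frac{\rho_d}{\pi} \;+\; \frac{\rho_s}{4\pi\,\alpha_x\,\alpha_y\,\sqrt{\cos\theta_i\,\cos\theta_o}}\,
\exp\!\left[-\tan^2\theta_h\left(\frac{\cos^2\varphi_h}{\alpha_x^2}+\frac{\sin^2\varphi_h}{\alpha_y^2}\right)\right]
\]

Here \(\rho_d\) and \(\rho_s\) denote the diffuse and specular albedos, \(\alpha_x\) and \(\alpha_y\) the anisotropic roughness parameters, and \(\theta_h, \varphi_h\) the elevation and azimuth of the half-vector between \(\omega_i\) and \(\omega_o\). In an SVBRDF, such parameters are stored as per-texel maps, which is what the network predicts from the calibrated input images.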