Point cloud (PC) quality assessment is of fundamental importance for the efficient processing, coding and transmission of 3D data in applications such as virtual/augmented reality, autonomous driving and cultural heritage. The quality metrics proposed so far aim at quantifying the distortion in the PC geometry and/or attributes with respect to a pristine reference point cloud, using simple features extracted from the points. In this work, we instead target a blind (no-reference) scenario in which the original point cloud is not available. In addition, we learn features from data using deep neural networks. Given the limited availability of subjectively annotated datasets of corrupted point clouds, and the consequent difficulty of learning PC quality features in an end-to-end fashion, we adopt a two-step procedure. First, we extract from local patches three relevant low-level features commonly used in other PC quality metrics, i.e., geometric distance, local curvature and luminance values. Afterwards, we employ a deep neural network to learn, from these low-level features, a mapping to the PC ground-truth mean opinion score. Our results on two state-of-the-art PC quality datasets show the potential of the proposed approach. The code is
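The two-step procedure above can be illustrated with a minimal sketch. The feature definitions here are assumptions for illustration only, not the paper's actual extraction pipeline: per-patch geometric distance is taken as distance statistics to the patch centroid, local curvature as a PCA-eigenvalue surface-variation ratio, and luminance as simple intensity statistics; a tiny randomly initialised MLP stands in for the learned regressor that would map features to a mean opinion score.

```python
import numpy as np

def patch_features(points, luminance):
    """Hypothetical low-level features for one local patch.

    points: (N, 3) patch coordinates; luminance: (N,) per-point luminance.
    Returns a small fixed-size feature vector.
    """
    centroid = points.mean(axis=0)
    # geometric distance: statistics of point-to-centroid distances (assumed proxy)
    dists = np.linalg.norm(points - centroid, axis=1)
    # local curvature proxy: surface variation from the eigenvalues of the
    # patch covariance (smallest eigenvalue over the sum, in [0, 1/3])
    cov = np.cov((points - centroid).T)
    eig = np.sort(np.linalg.eigvalsh(cov))
    curvature = eig[0] / (eig.sum() + 1e-12)
    return np.array([dists.mean(), dists.std(), curvature,
                     luminance.mean(), luminance.std()])

def mlp_score(feats, rng):
    """Tiny one-hidden-layer MLP; weights are random placeholders, not trained."""
    W1 = rng.standard_normal((feats.size, 8))
    b1 = np.zeros(8)
    W2 = rng.standard_normal(8)
    h = np.maximum(feats @ W1 + b1, 0.0)  # ReLU hidden layer
    return float(h @ W2)                  # scalar quality score

rng = np.random.default_rng(0)
pts = rng.standard_normal((64, 3))       # synthetic patch, stands in for real data
lum = rng.uniform(0.0, 1.0, 64)
feats = patch_features(pts, lum)
score = mlp_score(feats, rng)
```

In a full system, patch-level scores would be pooled over the whole cloud and the network weights learned by regressing against subjective mean opinion scores.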