Statistical learning of human body shape can be used for reconstructing or estimating body shapes from incomplete data, semantic parametric design, modifying images and videos, or simulation. A digital human body is normally represented in a high-dimensional space, and the number of vertices in a mesh is far larger than the number of human bodies in publicly available databases; as a result, a model learned by Principal Component Analysis (PCA) can hardly reflect the true variety of human body shapes. While deep learning has been most successful on data with an underlying Euclidean or grid-like structure, the geometric nature of the human body is non-Euclidean, which makes it very challenging to apply deep learning techniques directly to such a domain. This paper presents a deep neural network (DNN) based hierarchical method for statistical learning of the human body that uses a feature wireframe as one of the layers to separate the whole problem into smaller, more tractable sub-problems. The feature wireframe is a collection of feature curves that are semantically defined on the mesh of the human body and are consistent across all human bodies. A set of patches can then be generated by clustering the whole mesh surface into separate regions that interpolate the feature wireframe. Since the surface is separated into patches, PCA needs to be conducted only on each patch rather than on the whole surface. The relationship between the semantic parameters and the wireframe, and that between the wireframe and the patches, are learned by a DNN and by linear regression, respectively. An application of semantic parametric design is used to demonstrate the capability of the method, where the semantic parameters are linked to the feature wireframe instead of directly to the mesh. Under this hierarchy, the feature wireframe acts as an intermediary between the semantic parameters and the mesh, while also carrying semantic meaning of its own. The proposed method of learning human body shape statistically with the help of a feature wireframe is scalable and produces models of better quality.
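
The following is a minimal sketch, not the authors' implementation, of the per-patch PCA idea described above: instead of fitting one PCA model on whole-body meshes, vertices are grouped into wireframe-bounded patches and a separate low-dimensional PCA model is fitted for each patch. All names and sizes (n_patches, patch_of_vertex, the synthetic mesh data) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_bodies = 200      # number of training body meshes
n_vertices = 5000   # vertices per mesh (far larger than n_bodies)
n_patches = 20      # patches produced by clustering around the wireframe

# Synthetic stand-in for registered meshes with shape (n_bodies, n_vertices, 3).
meshes = rng.normal(size=(n_bodies, n_vertices, 3))

# Hypothetical patch assignment: each vertex belongs to one wireframe-bounded patch.
patch_of_vertex = rng.integers(0, n_patches, size=n_vertices)

# Fit one small PCA model per patch instead of one PCA on the whole surface.
patch_models = []
for p in range(n_patches):
    vid = np.flatnonzero(patch_of_vertex == p)       # vertex indices of patch p
    X = meshes[:, vid, :].reshape(n_bodies, -1)      # (n_bodies, 3 * |patch|)
    pca = PCA(n_components=min(10, n_bodies - 1)).fit(X)
    patch_models.append((vid, pca))

# Encode and decode one mesh patch by patch.
sample = meshes[:1]
recon = np.empty_like(sample)
for vid, pca in patch_models:
    coeffs = pca.transform(sample[:, vid, :].reshape(1, -1))
    recon[0, vid, :] = pca.inverse_transform(coeffs).reshape(-1, 3)
```

Because each patch covers far fewer vertices than the whole surface, the per-patch models are constrained by the sample count rather than by the full mesh dimensionality, which is the motivation given in the abstract for conducting PCA patch by patch.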