Despite their spectacular successes, deep neural networks (DNNs) with a huge number of adjustable parameters remain largely black boxes. To shed light on the hidden layers of DNNs, we study supervised learning by a DNN of width N and depth L consisting of perceptrons with c inputs, using a statistical mechanics approach called the teacher-student setting. We consider an ensemble of student machines that exactly reproduce M sets of N-dimensional input/output relations provided by a teacher machine. We analyze the ensemble theoretically using a replica method (Hajime Yoshino, SciPost Phys. Core 2, 005 (2020) [1]) and numerically by performing greedy Monte Carlo simulations. The replica theory, which works for high-dimensional data N ≫ 1, becomes exact in the 'dense limit' N ≫ c ≫ 1 and M ≫ 1 with fixed α = M/c. Both the theory and the simulations suggest that learning by the DNN is quite heterogeneous in the network space: configurations of the machines are more correlated within the layers closer to the input/output boundaries, while the central region remains much less correlated due to over-parametrization. Deep enough systems relax faster thanks to the less correlated central region. Remarkably, both the theory and the simulations suggest that the generalization ability of the student machines does not vanish even in the deep limit L ≫ 1, where the system becomes strongly over-parametrized. We also consider the impact of the effective dimension D (≤ N) of the data by incorporating the hidden manifold model (Sebastian Goldt, Marc Mézard, Florent Krzakala, and Lenka Zdeborová, Phys. Rev. X 10, 041044 (2020) [2]) into our model. The replica theory implies that the loop corrections to the dense limit, which reflect correlations between different nodes in the network, are enhanced by either decreasing the width N or decreasing the effective dimension D of the data. Simulations suggest that both lead to significant improvements in generalization ability.
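
The following is a minimal illustrative sketch of the teacher-student setting with a greedy Monte Carlo search, in the spirit of the setup described above. The specific choices here (sign activations, Gaussian couplings, fully-connected layers with c = N inputs per perceptron, single-coupling greedy moves, and the small sizes N, L, M) are assumptions made for illustration and are not claimed to reproduce the exact model or algorithm of the paper.

```python
# Sketch: student network trained to reproduce a teacher's input/output pairs
# by greedy Monte Carlo (accept a random coupling change only if the training
# error does not increase). All model details are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 16, 4, 32                      # width, depth, number of patterns

def make_machine():
    # one coupling matrix per layer; each row is a perceptron with c = N inputs
    return [rng.standard_normal((N, N)) / np.sqrt(N) for _ in range(L)]

def forward(J, x):
    # propagate a pattern through L layers of sign perceptrons
    for Jl in J:
        x = np.sign(Jl @ x)
    return x

teacher = make_machine()
X = np.sign(rng.standard_normal((M, N)))            # random input patterns
Y = np.array([forward(teacher, x) for x in X])      # teacher outputs

def train_error(J):
    # fraction of output bits the student gets wrong on the M training patterns
    preds = np.array([forward(J, x) for x in X])
    return np.mean(preds != Y)

student = make_machine()
for step in range(5000):
    l, i, j = rng.integers(L), rng.integers(N), rng.integers(N)
    old, e_old = student[l][i, j], train_error(student)
    student[l][i, j] += 0.5 * rng.standard_normal()  # propose a local move
    if train_error(student) > e_old:
        student[l][i, j] = old                       # greedy: reject if worse
print("final training error:", train_error(student))
```

In this toy version, sampling many independent students that reach zero training error would give a (crude) numerical handle on the ensemble of solutions studied in the paper, e.g. by measuring layer-resolved overlaps between students to probe how correlations vary with depth.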