Recently, deep neural networks have achieved considerable success in the field of machine learning. However, for most tasks, current neural network models are still considered "black-box" structures [1], [2], essentially because the knowledge a neural network learns from data is often unpredictable and unexplainable. Similar to neural coding in the brain, one neuron may be simultaneously involved in encoding one or even several tasks, and information may be transmitted between neurons along a particular neural-path (neural circuit) to end neurons that encode a specific decision [3]. Inspired by this, we aim to explain the behavior of a neural model in terms of its neural-paths. First, for a trained neural model, we quantify the neural-path, where each neuron is assumed to control the amount of information that passes through it. Second, we define a Euclidean distance (ED) between every two neural-paths, and by analyzing the EDs between the classes of a neural classifier, we explain why some classes are easy to predict whereas others are not. We performed extensive experiments with ResNet and DenseNet architectures on several benchmark datasets and found that the shorter the distance between the neural-paths of two classes, the more easily the model confuses them. Finally, we propose a method for controlling the formation of neural-paths to build a partially interpretable neural model. In the modified ResNet model, each feature map in the redirect layers is assigned to participate in encoding only one class. The feasibility of this method is also verified through experiments.

INDEX TERMS Interpretable neural network, Neural-path, ResNet, DenseNet.
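
To make the neural-path distance concrete, the following is a minimal sketch, assuming a neural-path for a class can be represented as a vector of per-neuron transmission scores; the representation, the function name `neural_path_distance`, and the toy class labels are illustrative assumptions, not the paper's actual quantification procedure.

```python
import numpy as np

def neural_path_distance(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Euclidean distance (ED) between two neural-path vectors.

    Assumption: each path is a vector whose i-th entry quantifies how much
    information neuron i passes through when the model predicts that class.
    """
    return float(np.linalg.norm(path_a - path_b))

# Toy paths for three hypothetical classes over six monitored neurons.
rng = np.random.default_rng(0)
paths = {c: rng.random(6) for c in ("cat", "dog", "truck")}

# Pairwise EDs: per the paper's finding, class pairs with a smaller ED
# between their neural-paths are expected to be confused more often.
for a in paths:
    for b in paths:
        if a < b:
            print(a, b, round(neural_path_distance(paths[a], paths[b]), 3))
```

Under this reading, comparing the pairwise EDs of all class pairs would highlight which pairs the classifier is most likely to mistake for each other.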