Deep learning models are highly susceptible to adversarial perturbations. Among threat models, the black-box setting, in which the attacker has no prior knowledge of the victim model, is the most common and practical scenario. The substitute attack, a mainstream black-box method, trains a surrogate model and exploits the transferability of its adversarial examples to attack the target model. However, existing approaches typically require a large number of queries to the victim model, reducing both attack efficiency and performance. In this work, we propose an effective side-channel attack method to estimate the fine-grained structure of the victim model, and we leverage an adjacent intermediate-layer perturbation technique to enhance the performance of the resulting adversarial attacks. Specifically, we demonstrate that the layer-level structure of the model can be analyzed and revealed through a power side-channel attack. Our experimental results show that the structure information can be recovered with an average success rate of 94\%, leading to a significant improvement in the attack success rate across multiple classic datasets.
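To make the intermediate-layer perturbation idea concrete, the sketch below shows a minimal PGD-style loop in PyTorch that maximizes feature distortion at one intermediate layer of a surrogate model, so that the crafted perturbation transfers to the victim. This is an illustrative sketch under stated assumptions, not the paper's exact algorithm: the surrogate (`resnet18`), the hooked layer (`layer2`), and all hyperparameters (`eps`, `alpha`, `steps`) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative sketch only: maximize feature distortion at one
# intermediate layer of a surrogate model (a generic instance of
# intermediate-layer perturbation, not the paper's exact method).
def intermediate_layer_attack(surrogate, layer, x, eps=8 / 255,
                              alpha=2 / 255, steps=10):
    feats = {}
    handle = layer.register_forward_hook(
        lambda module, inputs, output: feats.update(out=output))

    with torch.no_grad():
        surrogate(x)                          # record clean features
        clean = feats["out"].detach()

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        surrogate(x_adv)                      # hook refreshes feats["out"]
        loss = F.mse_loss(feats["out"], clean)   # feature-space distortion
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)     # keep valid pixel range

    handle.remove()
    return x_adv.detach()

# Hypothetical usage: resnet18 stands in for a surrogate built from the
# structure recovered by the side-channel analysis.
surrogate = models.resnet18(weights=None).eval()
x = torch.rand(1, 3, 224, 224)                # stand-in input batch
x_adv = intermediate_layer_attack(surrogate, surrogate.layer2, x)
```

The choice of which intermediate layer to hook matters in practice: earlier layers tend to yield more transferable perturbations than logits-level losses, which is the intuition behind perturbing adjacent intermediate layers rather than the final output alone.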