model of a rat hippocampus CA1. This model has about 447 thousand neurons, 304 million compartments, and 990 million synapses. To study such models at different scales, the community has developed various simulation software packages, such as NEURON [1] for morphologically detailed neuron models, NEST [2] for point-neuron models, and STEPS [3] for simulations at the molecular level. Simulating morphologically detailed neuronal circuits like the rat hippocampus CA1 is computationally expensive and requires access to a large computing cluster. Analyzing and optimizing the performance of such simulation software on different hardware platforms is therefore essential for delivering scientific results faster and for reducing the computational cost of such large-scale simulations. In this paper, we present our efforts to analyze the performance of CoreNEURON [4], the compute engine of the widely used NEURON simulator. Specifically, using the newly developed NMODL source-to-source compiler framework [5] with its ISPC backend [6], we analyzed different performance metrics to evaluate Intel and Arm platforms.

For decades, scientific computing has been associated mostly with a single architecture, Intel x86. Since November 2018, the Armv8 architecture has been part of the Top500 list with the Astra supercomputer [7]. In June 2020, Fugaku, built by Fujitsu and powered by the Arm Instruction Set Architecture (ISA), was ranked the fastest supercomputer in the world. This raises the question of how well complex applications such as neural simulations perform on high-performance systems powered by different architectures. Several approaches exist in the literature [8], [9], mostly targeting Arm mobile SoCs. We employ server-grade Arm CPUs, similar to [10], [11], but evaluate a different workflow. We isolated three layers that can affect the performance