The basic processing units in the brain are neurons and synapses, which are interconnected in complex patterns and show many surprising information-processing capabilities. Researchers attempt to mimic this efficiency by building artificial neural systems in hardware that emulate the key information-processing principles of the brain. However, such neural network hardware systems face the challenge of interconnecting neurons and synapses efficiently. An efficient, low-cost routing architecture (ELRA) is proposed in this paper to provide a communication infrastructure for hardware spiking neural networks (SNNs). ELRA employs a dynamic traffic arbitration strategy in which the traffic status weight of each input port is calculated in real time from the channel traffic status, and the port with the largest traffic status weight is given the highest priority to forward packets. This strategy enables the router to serve congested ports preferentially, which balances the overall network traffic load. Experimental results show the feasibility of ELRA under various traffic scenarios, and hardware synthesis results using SAED 90 nm technology demonstrate a low hardware area overhead, which maintains scalability for large-scale SNN hardware implementations.

Recently, researchers have proposed the network-on-chip (NoC) interconnect paradigm [1,4-6] as a promising solution to the inter-neuron connectivity problem and have achieved satisfactory performance [1,4]. An NoC is similar to a computer network in which the processing elements (e.g., the neurons of an SNN) are connected by routers and channels [6]. In an SNN, spikes are packetized and can be forwarded from any source node to any destination node, thereby establishing information exchange between neurons. In general, a group of neurons (e.g., approximately ten in the approach of Carrillo et al. [4]) is connected to one router, but the number of required routers and channels grows with the number of neurons, which increases hardware area and power consumption. Thus the NoC architecture (i.e., routers and channels) constrains the system scalability [1], and it should achieve a trade-off between performance (e.g., spike throughput and communication delay) and resource consumption (e.g., hardware area and power consumption).

Performance: the NoC routers are responsible for transmitting highly irregular spike events. An effective router should forward as many spike events as possible within a short time period under various traffic scenarios, i.e., provide high communication throughput. In addition, irregular spike patterns occasionally introduce traffic congestion [4], which requires the router to monitor different traffic statuses (e.g., busy or congested) and handle various traffic patterns [5].

Resource consumption: the number of required routers increases proportionally with the number of neurons and synapses [1]. A low-cost routing architecture is therefore crucial for large-scale SNN hardware systems.
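To make the weight-based arbitration idea concrete, the following is a minimal behavioural sketch, not the authors' hardware implementation: each input port is assigned a traffic status weight derived from its channel occupancy, and the arbiter grants the non-empty port with the largest weight. The status names, weight values, and occupancy thresholds used here are illustrative assumptions rather than parameters taken from the paper.

```python
# Behavioural sketch of dynamic traffic arbitration: serve the most
# congested input port first. Weights and thresholds are assumptions.
from dataclasses import dataclass, field
from collections import deque

# Assumed mapping from a coarse traffic status to a priority weight.
STATUS_WEIGHT = {"idle": 0, "normal": 1, "busy": 2, "congested": 3}

@dataclass
class InputPort:
    name: str
    buffer: deque = field(default_factory=deque)  # queued spike packets
    capacity: int = 8                             # assumed buffer depth

    def traffic_status(self) -> str:
        """Classify channel traffic from buffer occupancy (assumed thresholds)."""
        occupancy = len(self.buffer) / self.capacity
        if occupancy == 0.0:
            return "idle"
        if occupancy < 0.5:
            return "normal"
        if occupancy < 0.9:
            return "busy"
        return "congested"

def arbitrate(ports):
    """Grant the non-empty port with the largest traffic status weight."""
    candidates = [p for p in ports if p.buffer]
    if not candidates:
        return None
    return max(candidates, key=lambda p: STATUS_WEIGHT[p.traffic_status()])

# Example: the congested 'north' port is served before the others,
# which drains hot spots and balances the overall traffic load.
ports = [InputPort("north"), InputPort("east"), InputPort("local")]
ports[0].buffer.extend(["spike"] * 8)   # fully occupied -> congested
ports[1].buffer.extend(["spike"] * 2)   # lightly loaded -> normal
granted = arbitrate(ports)
print(granted.name, granted.traffic_status())  # -> north congested
```

In hardware, the same decision would be made by a priority encoder over per-port status registers updated every cycle; the sketch only illustrates the selection rule, i.e., congested ports are served preferentially so that traffic loads even out across the network.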