Highlighted by success stories like AlphaGo, reinforcement learning (RL) has emerged as a powerful tool for decision making in complex environments. However, the success of RL has thus far been limited to small-scale or single-agent systems. To apply RL to large-scale networked systems such as energy, transportation, and communication networks, a critical hurdle is the curse of dimensionality: in these systems, the state and action spaces can be exponentially large in the number of nodes in the network. This article attempts to break this curse of dimensionality by designing a scalable RL method, named Scalable Actor Critic (SAC), for large networked systems. The key technical contribution is to exploit the network structure to derive an exponential decay property, which enables the design of the SAC approach.
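To give a rough sense of the exponential decay property referenced above, a sketch is as follows; the notation here ($N_i^\kappa$ for the $\kappa$-hop neighborhood of node $i$, $N_{-i}^\kappa$ for the remaining nodes, and constants $c \ge 0$, $\rho \in (0,1)$) is assumed for illustration and may not match the article's exact statement. Informally, node $i$'s local Q-function is insensitive to the states and actions of nodes far away in the network:
\[
\bigl| Q_i(s_{N_i^\kappa}, s_{N_{-i}^\kappa}, a_{N_i^\kappa}, a_{N_{-i}^\kappa}) - Q_i(s_{N_i^\kappa}, s'_{N_{-i}^\kappa}, a_{N_i^\kappa}, a'_{N_{-i}^\kappa}) \bigr| \le c\,\rho^{\kappa+1}.
\]
Under such a property, each local Q-function can be approximated by a truncated version that depends only on the $\kappa$-hop neighborhood, with an approximation error that decays exponentially in $\kappa$, which is what makes a scalable actor-critic design plausible.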