We present algorithms and experiments for multi-scale assembly of complex structures by multi-robot teams. We focus on tasks whose successful completion requires multiple types of assembly operations with a range of precision requirements. We develop a hierarchical planning approach to multi-scale perception in support of multi-scale manipulation, in which the resolution of the perception operation is matched to the resolution required by the manipulation operation. We demonstrate these techniques in the context of a multi-step task, inspired by the assembly of an airplane wing, in which robots assemble large box-like objects. The robots begin by transporting a wing panel, a coarse manipulation operation that requires a wide field of view, and gradually shift to sensors with narrower fields of view but higher accuracy for part alignment and fastener insertion. Within this framework we also provide for failure detection and recovery: upon losing track of a feature, the robots revert to wider field-of-view systems to re-localize. Finally, we contribute collaborative manipulation algorithms for assembling complex large objects. First, the team of robots coordinates to transport large assembly parts that are too heavy for a single robot to carry. Second, the fasteners and parts are co-localized for robust insertion and fastening. We implement these ideas using four KUKA youBot robots and present experiments in which our robots successfully completed all 80 of the attempted fastener insertion operations.