We propose a time-domain audio source separation method based on multiresolution analysis, which we call multiresolution deep layered analysis (MRDLA). The MRDLA model builds on one of the state-of-the-art time-domain deep neural networks (DNNs), Wave-U-Net, which successively down-samples features and then up-samples them to restore the original time resolution. From a signal processing viewpoint, we find that the down-sampling (DS) layers of Wave-U-Net are implemented with decimation and therefore cause aliasing and may discard information useful for source separation. Since both problems stem from decimation, a more reliable source separation method requires DS layers that overcome them simultaneously. Motivated by the observation that the successive DS architecture of Wave-U-Net resembles that of multiresolution analysis, we develop DS layers based on discrete wavelet transforms (DWTs), which we call DWT layers, because DWTs provide anti-aliasing filters and the perfect reconstruction property. We further extend the DWT layers so that their wavelet basis functions can be trained jointly with the other DNN components while maintaining the perfect reconstruction property. Since a straightforward trainable extension of the DWT layers does not guarantee the existence of anti-aliasing filters, we derive constraints that provide this guarantee in addition to the perfect reconstruction property. Through music source separation experiments, including subjective evaluations, we demonstrate the efficacy of the proposed methods and the importance of simultaneously considering both the anti-aliasing filters and the perfect reconstruction property.
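To make the contrast concrete, the following is a minimal sketch, not the paper's implementation, comparing plain stride-2 decimation with a single-level Haar DWT used as a DS step; the function names (`decimate`, `haar_dwt`, `haar_idwt`) are hypothetical and chosen for illustration. The Haar case is the simplest orthonormal wavelet: its low-pass subband averages adjacent samples (attenuating the high frequencies that decimation would alias), and keeping the detail subband alongside it preserves all information, so the original signal is exactly recoverable.

```python
# Illustrative sketch only (assumed names, not the MRDLA code):
# plain decimation vs. a one-level Haar DWT as a down-sampling step.
import numpy as np

def decimate(x):
    """Stride-2 decimation: drops odd-indexed samples, so high
    frequencies alias and the dropped samples are unrecoverable."""
    return x[0::2]

def haar_dwt(x):
    """One Haar DWT level (x must have even length): returns a
    low-pass approximation and a high-pass detail subband, each at
    half the original resolution."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # averaged, anti-aliased subband
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail that decimation discards
    return a, d

def haar_idwt(a, d):
    """Inverse Haar DWT: perfect reconstruction from the two subbands."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
a, d = haar_dwt(x)
assert np.allclose(haar_idwt(a, d), x)  # perfect reconstruction holds
# decimate(x) alone cannot recover x: the odd-indexed samples are lost.
```

In this reading, a DWT layer halves the time resolution like a decimation layer does, but routes the detail subband onward (e.g., through skip connections) instead of discarding it, which is what simultaneously addresses the aliasing and information-loss problems described above.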