Cross-modal retrieval is an active research direction, and unsupervised cross-modal hashing is particularly attractive in practice because of its low storage and training costs and high query efficiency. However, many existing unsupervised cross-modal hashing retrieval methods compute similarity with traditional convolutional neural networks, which can hardly represent feature relationships among instances comprehensively. This paper proposes large-scale unsupervised deep cross-modal hashing retrieval with multiple dense networks (MUCH), built on DenseNet and TxtDenseNet. To exploit the structural information of multiple modalities from different perspectives with various similarity measures, we design multiple dense feature sampling for cross-modal retrieval. Essentially, MUCH achieves feature augmentation and addresses the problem of inaccurate similarity through multiple deep dense networks (MDNs), which are pseudo-graph networks combining the advantages of GCNs and CNNs. When MUCH maps data from the high-dimensional feature space into the binary space, the MDNs extract rich feature embeddings for inter-modal and intra-modal data and regulate comprehensive similarity-preserving losses via auxiliary matrices and joint training. Notably, this method is the first to apply DenseNet throughout the whole model for unsupervised cross-modal hash retrieval. Extensive experiments on three benchmark datasets demonstrate that the proposed method significantly outperforms most state-of-the-art unsupervised methods.
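As a rough illustration of the similarity-preserving objective sketched above, the minimal PyTorch example below trains two modality-specific hashing heads against an auxiliary similarity matrix built from deep features, with both inter-modal and intra-modal terms. The module names, dimensions, and exact loss form are assumptions for illustration only, not the paper's actual MUCH/MDN architecture.

```python
# Minimal sketch (assumed architecture, not MUCH itself): two modality-specific
# hashing heads produce continuous codes, and similarity-preserving losses align
# code similarities with an auxiliary similarity matrix from deep features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashHead(nn.Module):
    """Maps modality features to n_bits continuous codes in (-1, 1)."""
    def __init__(self, in_dim: int, n_bits: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, n_bits), nn.Tanh(),
        )

    def forward(self, x):
        return self.fc(x)

def similarity_preserving_loss(code_a, code_b, sim_target):
    """Match cosine similarities between code matrices to an auxiliary target matrix."""
    sim_pred = F.normalize(code_a) @ F.normalize(code_b).t()
    return F.mse_loss(sim_pred, sim_target)

# Toy usage with random tensors standing in for image/text branch features.
img_feat = torch.randn(32, 2048)   # hypothetical image-branch features
txt_feat = torch.randn(32, 1386)   # hypothetical text-branch features
img_head, txt_head = HashHead(2048, 64), HashHead(1386, 64)

# Auxiliary similarity matrix computed from the (unsupervised) feature space.
sim_target = F.normalize(img_feat) @ F.normalize(img_feat).t()

b_img, b_txt = img_head(img_feat), txt_head(txt_feat)
loss = (similarity_preserving_loss(b_img, b_txt, sim_target)     # inter-modal
        + similarity_preserving_loss(b_img, b_img, sim_target)   # intra-modal (image)
        + similarity_preserving_loss(b_txt, b_txt, sim_target))  # intra-modal (text)
loss.backward()

# At retrieval time, continuous codes are binarized, e.g. hash_codes = torch.sign(b_img).
```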