The rapid development of social networks has brought great convenience to people's lives, and a large amount of cross-media big data, such as text, image, and video data, has accumulated. Cross-media search enables quick querying of this information so that users can obtain helpful content from social networks. However, cross-media data in social networks suffer from semantic gaps and sparsity, which pose challenges for cross-media search. To alleviate the semantic gaps and sparsity, we propose a cross-media search method based on complementary attention and generative adversarial networks (CAGS). To obtain high-quality feature representations, we build a complementary attention mechanism that captures both the focused and unfocused features of images, realizing a consistent association of cross-media data in social networks. By designing the cross-media
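The complementary attention idea described above can be illustrated with a minimal sketch. The abstract does not specify the exact formulation, so the following is a hedged, hypothetical NumPy illustration: attention weights over image region features produce a "focused" summary, while the complement of those weights produces an "unfocused" summary, and the two are fused into one representation. All function and variable names here are illustrative, not the paper's actual implementation.

```python
import numpy as np

def complementary_attention(features, query):
    """Hypothetical sketch of a complementary attention mechanism.

    features: (n_regions, d) image region features
    query:    (d,) query vector (e.g., from the text modality)
    Returns a (2*d,) fused representation of focused + unfocused parts.
    """
    scores = features @ query                      # relevance score per region
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                             # softmax attention weights
    focused = attn @ features                      # attention-weighted summary
    # Complementary weights emphasize the regions attention ignored;
    # normalize by the number of regions minus one so weights stay comparable.
    unfocused = (1.0 - attn) @ features / max(len(attn) - 1, 1)
    return np.concatenate([focused, unfocused])    # fused 2d-dim feature

# Toy usage with random features (illustrative only)
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
q = rng.standard_normal(8)
rep = complementary_attention(feats, q)            # shape (16,)
```

In this sketch, the fused vector keeps information from both salient and non-salient image regions, which is one plausible way to reduce the semantic gap between modalities.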