Existing research on music visualization has primarily focused on generating animated illustrations that accompany the music being played, driven by low-level attributes such as sound frequency or musical structure, while higher-level features such as mood and timbre remain largely overlooked. In this paper, we propose visual signatures to describe these higher-level attributes of music: the content and the color palette of a visual signature are controlled by the music's mood and timbre, respectively. We expect that users with different cultural and educational backgrounds will be able to easily interpret the meaning of sound through the proposed visual signatures. In our work, we use a contrastive learning neural network for mood classification and an audio Transformer for timbre classification. The performance of the classification models is evaluated by their accuracy, and multiple generated images are presented to demonstrate the feasibility of visual signatures.