In the past decade, multimodal data analysis has gained importance, particularly for including people with visual impairments in education and science dissemination. Its application in scientific research, however, remains limited owing to a lack of conclusive evidence on its robustness and performance. Several sonification tools have been developed, including xSonify, StarSound, STRAUSS, and sonoUno, which aim to enhance accessibility for both sighted and visually impaired users. This contribution presents sonoUno, a data visualization and sonification tool, applied to astronomical data from established databases such as SDSS, ASAS-SN, and Project CLEA, and compares its output with the corresponding visual displays. We show that sonoUno reproduces the visual displays and provides consistent auditory representations of the data. Key features include the marking of absorption and emission lines (in both the visual and the sonified output) and multicolumn sonification, which facilitates spectral comparisons through sound. This consistency between visual and auditory renderings makes multimodal displays more viable for research use, enabling greater inclusion in astronomical investigation. The study suggests that sonoUno could be adopted more broadly in scientific research and used to develop multimodal training courses and improve data analysis methods.
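To make the underlying idea concrete, the sketch below shows one common sonification strategy: mapping each data value to a pitch and rendering the sequence as audio. This is a minimal illustration using only the Python standard library; the function names, parameter choices, and linear value-to-frequency mapping are assumptions for demonstration and do not reflect sonoUno's actual implementation.

```python
import math
import struct
import wave

def sonify(values, fmin=220.0, fmax=880.0, rate=44100, note_s=0.15):
    """Map each data value linearly to a pitch between fmin and fmax (Hz)
    and render the sequence as a list of float samples in [-1, 1].

    Toy value-to-pitch sonification; sonoUno's mapping and audio
    engine differ."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on flat data
    samples = []
    for v in values:
        freq = fmin + (v - lo) / span * (fmax - fmin)
        n = int(rate * note_s)
        for i in range(n):
            # one short sine tone per data point
            samples.append(math.sin(2 * math.pi * freq * i / rate))
    return samples

def write_wav(path, samples, rate=44100):
    """Write float samples as 16-bit mono PCM for playback."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)
```

For example, `write_wav("spectrum.wav", sonify(flux_column))` would turn a spectral flux column into an ascending or descending tone sequence; multicolumn sonification, as in sonoUno, extends this idea by rendering several columns with distinguishable timbres or channels.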