Single-cell and spatial genomics datasets can be organized and interpreted by annotating single cells to distinct types, states, locations, or phenotypes. However, cell annotations are inherently ambiguous, as discrete labels with subjective interpretations are assigned to heterogeneous cell populations based on noisy, sparse, and high-dimensional data. Here, we show that incongruencies between cells and their input annotations can be identified by analyzing a rich but overlooked source of information: the difficulty of training a deep neural network to assign each cell to its input annotation, or annotation trainability. Furthermore, we demonstrate that annotation trainability encodes meaningful biological signals. Based on this observation, we introduce the concept of signal-aware graph embedding, which facilitates downstream analysis of diverse biological signals in single-cell and spatial omics data, such as the identification of cellular communities corresponding to a target signal. We developed Annotatability, a publicly available implementation of annotation-trainability analysis. Using Annotatability, we address key challenges in the interpretation of genomic data, demonstrated across seven single-cell RNA-sequencing and spatial omics datasets: auditing and rectifying erroneous cell annotations, identifying intermediate cell states, delineating complex temporal trajectories during development, characterizing cell diversity in diseased tissue, identifying disease-related genes, assessing treatment effectiveness, and identifying rare healthy-like cell populations. These results underscore the broad applicability of annotation-trainability analysis via Annotatability for unraveling cellular diversity and interpreting collective cell behaviors in health and disease.
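The core intuition behind annotation trainability can be illustrated with a minimal, self-contained sketch. This is not the Annotatability implementation: the one-gene logistic classifier, the synthetic expression values, and the `trainability_scores` helper below are all hypothetical, introduced only to show how tracking each cell's confidence in its assigned label across training epochs can flag annotation incongruencies, since a deliberately mislabeled cell accumulates low confidence throughout training.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def trainability_scores(xs, labels, epochs=50, lr=0.5):
    """Train a toy one-feature logistic classifier and record, for each
    cell, the probability assigned to its input label at every epoch.
    The per-cell mean over epochs serves as a trainability score:
    cells whose labels the model struggles to fit score low."""
    w, b = 0.0, 0.0
    history = [[] for _ in xs]
    for _ in range(epochs):
        # record each cell's confidence in its assigned label
        for i, (x, y) in enumerate(zip(xs, labels)):
            p1 = sigmoid(w * x + b)
            history[i].append(p1 if y == 1 else 1.0 - p1)
        # one full-batch gradient step on the logistic loss
        gw = gb = 0.0
        for x, y in zip(xs, labels):
            err = sigmoid(w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return [sum(h) / len(h) for h in history]

# Synthetic data: one gene's expression for two annotated cell types;
# the last cell is deliberately mislabeled to mimic an erroneous annotation.
xs     = [-2.1, -1.9, -2.0, 2.0, 1.8, 2.2, -2.0]
labels = [0,    0,    0,    1,   1,   1,   1]   # last label is wrong
scores = trainability_scores(xs, labels)
```

In this toy setting, the mislabeled cell ends up with the lowest trainability score, while correctly annotated cells score well above 0.5; auditing annotations then amounts to inspecting the low-scoring tail.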