Image compression based on transform coding appears to be approaching a bit-rate limit for visually acceptable distortion levels. Although an emerging compression technology called object-based compression (OBC) promises significantly improved bit rate and computational efficiency, OBC is epistemologically distinct in a way that renders existing image quality measures (IQMs) used for compression transform optimization less suitable for OBC. In particular, OBC segments source image regions, then efficiently encodes each region's content and boundary. During decompression, region contents are often replaced by similar-appearing objects from a codebook, producing a reconstructed image that corresponds semantically to the source image but exhibits visually apparent pixel-, featural-, and object-level differences. OBC thus gains the advantage of fast decompression via efficient codebook-based substitutions, albeit at the cost of codebook search in the compression step and significant pixel- or region-level errors in decompression. Existing IQMs are pixel- and region-oriented, and thus tend to indicate high error because OBC does not preserve pixel-level correlation between source and reconstructed imagery. As a result, current IQMs do not necessarily measure the semantic correspondence that OBC is designed to produce. This paper presents image quality measures for estimating semantic correspondence between a source image and a corresponding OBC-decompressed image. In particular, we examine the semantic assumptions and models that underlie various approaches to OBC, especially those based on textural as well as high-level name and spatial similarities. We propose several measures designed to quantify this type of high-level similarity, which can be combined with existing IQMs for assessing compression transform performance. Discussion also highlights how these novel IQMs can be combined with time and space complexity measures for compression transform optimization.
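
To make the idea of blending pixel-level and semantic quality terms concrete, the sketch below is a minimal Python illustration, not a measure defined in the paper: the function names, the representation of regions as labeled regions with centroids, the distance threshold, and the weight `alpha` are all illustrative assumptions. It combines a conventional pixel-oriented IQM (PSNR) with a toy semantic correspondence score based on object-label and spatial agreement between source and OBC-decompressed regions.

```python
import numpy as np

def pixel_level_psnr(source, reconstructed, peak=255.0):
    """Conventional pixel-oriented IQM: PSNR over the whole image."""
    mse = np.mean((source.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def semantic_correspondence(source_regions, reconstructed_regions):
    """Hypothetical high-level IQM: fraction of source regions whose codebook
    substitute keeps the same object label and roughly the same placement.
    Each region is assumed to be a dict with a 'label' and a 'centroid'."""
    if not source_regions:
        return 1.0
    matches = 0
    for src, rec in zip(source_regions, reconstructed_regions):
        same_label = src["label"] == rec["label"]
        close = np.linalg.norm(np.subtract(src["centroid"], rec["centroid"])) < 10.0
        if same_label and close:
            matches += 1
    return matches / len(source_regions)

def combined_quality(source, reconstructed, source_regions, reconstructed_regions,
                     alpha=0.3):
    """Weighted blend of a normalized pixel-level score and the semantic score.
    The weight alpha and the normalization are illustrative, not from the paper."""
    psnr = pixel_level_psnr(source, reconstructed)
    pixel_score = min(psnr / 50.0, 1.0)  # crude normalization to [0, 1]
    semantic_score = semantic_correspondence(source_regions, reconstructed_regions)
    return alpha * pixel_score + (1.0 - alpha) * semantic_score
```

A small `alpha` in this sketch reflects the paper's premise: for OBC, pixel-level fidelity should contribute less to the overall score than semantic correspondence, since codebook substitution deliberately sacrifices pixel-level correlation.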