Automatic machine translation (MT) metrics are widely used to distinguish the quality of machine translation systems across large test sets (i.e., system-level evaluation). However, it is unclear if automatic metrics can reliably distinguish good translations from bad ones at the sentence level (i.e., segment-level evaluation). We investigate how useful MT metrics are at detecting segment-level quality by correlating metric scores with the utility of translations for downstream tasks. We evaluate the segment-level performance of widespread MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state tracking, question answering, and semantic parsing). For each task, we have access to a monolingual task-specific model and a translation model. We calculate the correlation between each metric's ability to predict a good/bad translation and the success/failure on the final task for machine-translated test sentences. Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of downstream outcomes. We also find that the scores produced by neural metrics are not interpretable, in large part because they have undefined ranges. We synthesise our analysis into recommendations for future MT metrics to produce labels rather than scores, enabling more informative interaction between machine translation and multilingual language understanding.
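As a minimal sketch of this evaluation setup (not the exact implementation used in the paper), the snippet below correlates per-segment metric scores with binary downstream success/failure labels. The function name, the toy data, and the choice of point-biserial correlation are illustrative assumptions.

```python
"""Hypothetical sketch: correlating segment-level MT metric scores with
binary downstream task outcomes (success = 1, failure = 0)."""
import numpy as np
from scipy.stats import pointbiserialr


def extrinsic_correlation(metric_scores, task_success):
    """Correlate continuous metric scores with binary downstream success.

    metric_scores: one MT metric score (e.g., COMET, chrF) per
        machine-translated test segment.
    task_success: 1 if the downstream model (e.g., a semantic parser)
        produced the correct output on that segment, else 0.
    """
    metric_scores = np.asarray(metric_scores, dtype=float)
    task_success = np.asarray(task_success, dtype=int)
    # Point-biserial correlation relates a dichotomous variable to a
    # continuous one; other segment-level statistics could be substituted.
    corr, p_value = pointbiserialr(task_success, metric_scores)
    return corr, p_value


# Toy usage with made-up scores and outcomes.
scores = [0.82, 0.41, 0.90, 0.33, 0.67, 0.75]
success = [1, 0, 1, 0, 0, 1]
corr, p = extrinsic_correlation(scores, success)
print(f"point-biserial r = {corr:.3f} (p = {p:.3f})")
```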