Lines-of-code (LOC) metrics are commonly reported in Programming Languages (PL), Software Engineering (SE), and Systems papers. This convention serves several different, often contradictory, goals, including demonstrating the 'hardness' of a problem and demonstrating its 'easiness'. In many cases, LOC metrics are reported not with a clearly communicated intention, but in an automatic, checkbox-ticking manner.

In this paper we investigate the uses of code metrics in PL, SE, and Systems papers. We consider the different goals that reporting metrics aims to achieve, the domains in which metrics are relevant, and alternative metrics together with their pros and cons for the different goals and domains. We argue that claims about research software are usually best communicated not by reporting quantitative metrics, but by reporting the qualitative experience of the researchers, and we propose guidelines for the cases in which quantitative metrics are appropriate.

We end with a case study of the one area in which lines of code are not the default measurement of the code produced by papers' solutions, and identify how the measurements offered are used to support an explicit claim about the algorithm. Inspired by this positive example, we call for other cogent measures to be developed to support the other claims authors wish to make.

CCS Concepts: • General and reference → Metrics.