Decimal numbers are generally assumed to be a straightforward extension of the base-ten system for whole numbers, given their shared place-value structure. However, in decimal notation, unlike whole-number notation, the same magnitude can be expressed in multiple ways (e.g., 0.8, 0.80, 0.800, etc.). Here, we used a number line task with carefully selected stimuli to investigate how equivalent decimals (e.g., 0.8 and 0.80 on a 0-1 number line) and proportionally equivalent whole numbers (e.g., 80 on a 0-100 number line) are estimated. We find that young adults (n = 88, M age = 20.22 years, SD = 1.65, 57 female) show a linear response pattern for both decimals and whole numbers, but that double-digit decimals (e.g., 0.08, 0.82, 0.80) are systematically underestimated relative to proportionally equivalent whole numbers (e.g., 8, 82, 80). Moreover, shorter decimal strings worsen the underestimation, such that single-digit decimals (e.g., 0.8) are perceived as smaller than their equivalent double-digit decimals (e.g., 0.80). Finally, we find that exposing participants to whole-number stimuli before decimal stimuli induces magnitude-based underestimation, that is, greater underestimation for larger decimals. Together, these results suggest a small but persistent underestimation bias for decimals less than one, and further that decimal magnitude estimation is fragile and subject to greater underestimation after exposure to whole numbers.