Here we present a systematic approach to the experimental study of test–retest reliability in the multitasking domain, adopting the multitrait-multimethod (MTMM) approach to evaluate the psychometric properties of performance in Düker-type speeded multiple-act mental arithmetic. This form of task enables the experimental analysis of integrated multi-step processing by combining multiple mental operations in flexible ways in the service of the overarching goal of completing the task. A particular focus was on scoring methodology, especially measures of response speed variability. To this end, we present data from two experiments with regard to (a) test–retest reliability, (b) between-measures correlational structure, and (c) stability (test–retest practice effects). Finally, we compared participants with high versus low performance variability to assess ability-related differences in measurement precision (a comparison typically used as a proxy to "simulate" patient populations), which is especially relevant in applied fields such as clinical neuropsychology. The participants performed two classic integrated multi-act arithmetic tasks, combining addition with verification (Exp. 1) and addition with comparison (Exp. 2). The results revealed excellent test–retest reliability for both the standard and the variability measures. The analysis of the between-measures correlational structure revealed the typical pattern of convergent and discriminant relationships, and also showed that absolute response speed variability was highly correlated with average speed (r > 0.85), indicating that these measures deliver largely redundant information. In contrast, speed-adjusted (relativized) variability showed discriminant validity, being correlated with average speed to a much lesser degree, indicating that this measure delivers information not already provided by the speed measure. Furthermore, speed-adjusted variability was virtually unaffected by test–retest practice, which makes this measure attractive in situations with repeated testing.
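To make the distinction between the two variability measures concrete, the following minimal sketch computes both from hypothetical trial-level response times. It assumes that the speed-adjusted (relativized) measure is the intraindividual coefficient of variation (the standard deviation of a participant's response times divided by their mean), a common way of adjusting variability for speed; the data values and variable names are illustrative, not taken from the reported experiments.

```python
import numpy as np

# Hypothetical per-trial response times (ms) for one participant.
rts = np.array([820, 910, 760, 1005, 880, 940, 835, 890])

mean_rt = rts.mean()        # average response speed
sd_rt = rts.std(ddof=1)     # absolute variability (intraindividual SD)
cv_rt = sd_rt / mean_rt     # speed-adjusted (relativized) variability

print(f"Mean RT: {mean_rt:.1f} ms")
print(f"Absolute variability (SD): {sd_rt:.1f} ms")
print(f"Speed-adjusted variability (CV): {cv_rt:.3f}")
```

Under this assumption, a generally slower participant will tend to show a larger SD simply because all response times are scaled up, whereas the CV removes that scaling, which is consistent with the reported pattern of absolute variability correlating highly with average speed while the relativized measure does not.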