Algorithms increasingly affect our daily lives. They seem to be everywhere, yet they are seldom seen by the humans who deal with their consequences. At the same time, recent theorisations risk giving the algorithm too much prominence. This article addresses the interaction between algorithmic outputs and the humans engaging with them by drawing on studies of two distinct empirical fields: self-quantification and audit controls of taxpayers. We explore recalibration as a way to understand the practices and processes involved when decisions are made based, on the one hand, on results from algorithmic calculations in counting and accounting software and, on the other hand, on human experience and knowledge. In particular, we are concerned with moments when an algorithmic output differs from expectations of 'normalcy' and 'normativity' in a given situation. This could be a 'normal' relation between sales and VAT deductions for a business, a 'normal' number of steps taken in a day, or what is 'normative' in the sense of being by the book, following guidelines and recommendations from other sources. In these moments, we argue, a process of recalibration occurs: an effortful moment in which, rather than treating the algorithmic output as given, individuals bring their tacit knowledge, experience and intuition into play to address the deviation from the normal and the normative.