When Dennard scaling, the observation that a chip's power use grows in proportion to its area so that power density stays constant as transistors shrink, broke down sometime in the last decade, we faced a situation where further transistor miniaturization suddenly required additional energy for operation and cooling. CPU manufacturers responded with multicore processors as an alternative means to increase the floating-point operations per second (FLOPS) count. However, this too increases energy consumption and, in addition, requires a larger silicon area. Most threatened by the stalled growth of per-Watt computing performance are pervasive mobile computers, nowadays present in anything from wearables to smartphones. Not only does these devices' small form factor prevent further component packing, but the need for mobility also precludes bundling devices with large batteries.

Yet, a unique opportunity for optimization lurks behind the mobile aspect of today's computing. With computation executed in an array of environments, user expectations with respect to result accuracy vary, as the result is further manipulated, interpreted, and acted upon in different contexts of use. For instance, a user might tolerate a lower video decoding quality when calling to say "Hi" from a backpacking holiday, while she would expect a higher video quality when on a job interview call from an office. Similarly, when searching for nearby restaurant suggestions, rough location determination and a slightly shuffled ordering within the produced suggestion list would probably go unnoticed, whereas the same inaccuracies would not be tolerated when driving directions are searched for.

The result of a computation need not be perfect, just good enough for things to work. This opens up opportunities to save resources, including CPU cycles and memory accesses, and, consequently, battery charge, by reducing the amount of computation to the point where the result accuracy is just above the minimum necessary to satisfy a user's requirements.
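To make this trade-off concrete, consider a minimal sketch (not drawn from any cited system) in the style of loop perforation: an aggregate is computed over only every k-th element, cutting the work roughly k-fold while keeping the result close to exact. The function names and the stride parameter are illustrative assumptions.

```python
def mean_exact(xs):
    """Exact mean: touches every element."""
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=10):
    """Approximate mean via loop perforation (illustrative):
    sample every `stride`-th element, doing ~1/stride of the work."""
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = list(range(1000))
exact = mean_exact(data)
approx = mean_perforated(data, stride=10)
relative_error = abs(exact - approx) / exact
# Here the approximation touches ~10% of the data yet stays
# within about 1% of the exact mean.
```

Whether such an approximation is acceptable depends entirely on the context of use, which is precisely the adaptivity argument made above.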
This way of reasoning about computation is termed approximate computing (AC), and Approximate Computing Techniques (ACTs) have already been demonstrated at various levels of computer architecture, from the hardware level, where inexact adders have been designed to sacrifice result correctness for reduced energy consumption [1], to compiler-level optimizations that omit certain lines of code to speed up video encoding [2]. Experiments have shown significant resource savings, e.g., tripled energy efficiency with neural network-based approximations [3], or a 2.5-fold speedup when certain task patterns are substituted with approximate code [4]. Ironically, to date, approximate computing remains mostly confined to desktop and data center computing, missing the opportunity to bring its benefits to mobile computing. It is exactly in this domain where, due to context-dependent user requirements, the occasions for adaptable approximation are abundant, and where, due to the devices' physical constraints, the applicability of alternative solutions for increasing the computati...