Attempts to quantify the performance of dialysis therapy had begun even before maintenance dialysis became widely available. Initial efforts in the 1950s centered on measuring the ability of a dialyzer to remove solute mass, leading to the coinage of the terms "clearance" and "dialysance".1 The concept of adequacy, or the understanding that patient outcome was linked to the performance of the dialyzer, gathered pace in the 1970s on the basis of work by Babb and Scribner (the square-meter hour hypothesis), Kopp (the liter-kilogram concept), and Kjellstrand (time/kg or liter).1 It is interesting to note that all of these early models included time on dialysis as an important component, with the understanding that removal of "middle molecules" had an important bearing on outcomes. The relative importance of, and interaction between, diffusion and ultrafiltration was also beginning to be understood in the 1970s. The 1980s and 1990s were heavily influenced by the work of Gotch and Sargent, who applied mathematical modeling of urea kinetics to data from the National Cooperative Dialysis Study (NCDS) and popularized Kt/V urea (a concept originally articulated by Babb and Scribner) as the measure of an adequate normalized dialysis dose, ie, the dialyzer clearance of urea (K) multiplied by treatment time (t), expressed relative to the urea distribution volume (V).2 Daugirdas and Tattersall created simple equations that lent themselves to automated calculation. In parallel, Lowrie proposed an alternative metric, the urea reduction ratio (URR), but all of these measures continued to emphasize the primacy of small-solute clearance. The term "dialysis adequacy" was mostly used to denote "an appropriate dialysis dose" reflecting urea clearance.3,4 The inadequacy of this measure was recognized relatively early, in particular through the work of Teschan and colleagues that led to the currently accepted standard dialysis frequency of thrice weekly. They also
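
As a brief illustration of how these small-solute metrics are computed in practice, the sketch below calculates URR from pre- and post-dialysis blood urea nitrogen and estimates single-pool Kt/V using the widely cited Daugirdas second-generation formula. The function names and the numerical inputs are illustrative assumptions, not values drawn from the studies cited above.

```python
import math

def urea_reduction_ratio(pre_bun: float, post_bun: float) -> float:
    """Percent fall in blood urea nitrogen across one treatment (Lowrie's URR)."""
    return (1.0 - post_bun / pre_bun) * 100.0

def sp_kt_v(pre_bun: float, post_bun: float, session_hours: float,
            uf_liters: float, post_weight_kg: float) -> float:
    """Single-pool Kt/V via the Daugirdas second-generation formula:
    Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF/W,
    where R is the post/pre BUN ratio, t the session length in hours,
    UF the ultrafiltered volume (L), and W the post-dialysis weight (kg)."""
    r = post_bun / pre_bun
    return -math.log(r - 0.008 * session_hours) + (4.0 - 3.5 * r) * uf_liters / post_weight_kg

# Hypothetical values: pre-BUN 70 mg/dL, post-BUN 25 mg/dL,
# 4-hour session, 2.5 L ultrafiltration, 70 kg post-dialysis weight.
print(round(urea_reduction_ratio(70, 25), 1))     # ~64.3 (%)
print(round(sp_kt_v(70, 25, 4.0, 2.5, 70.0), 2))  # ~1.22
```

Both quantities depend only on urea measurements, which is precisely why, as discussed above, they capture small-solute clearance but say nothing about middle-molecule removal or treatment time per se.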