The lag phase is a temporary non-replicative period observed when a microbial population is introduced into a new nutrient-rich environment. Its duration can have a pronounced effect on population fitness, and it is often measured under laboratory conditions. However, calculating the length of the lag phase can be challenging, and the result depends on the method and parameters used. Moreover, the details of these methods and parameters are often under-reported in experimental studies. Here we discuss the methods most frequently used in experimental and theoretical studies, and we point out some inconsistencies between them. Using experimental and simulated data, we study how the performance of these methods depends on the frequency of population size measurements and on parameters that determine the shape of the growth curve, such as the growth rate. It turns out that the sensitivity to each of these factors depends on the lag calculation method. For example, calculating lag duration by fitting the parameters of a logistic model is very robust to a low measurement frequency, but it may be highly biased for growth curves with a low growth rate. By contrast, the method based on finding the point of maximal growth acceleration is robust to a low growth rate, but highly sensitive to a low measurement frequency and to noise in the data. Based on our results, we propose a decision tree for choosing the method best suited to one's data. Finally, we developed a web tool that calculates lag duration from user-specified growth curve data for a range of explicitly specified methods, parameters, and data pre-processing techniques.
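To make the two contrasted approaches concrete, below is a minimal sketch of how each might be computed from a growth curve. It assumes a standard logistic parameterization and the common tangent-line definition of lag for the fitting approach, and the maximum of the numerical second derivative for the acceleration approach; the synthetic data, function names, and starting values are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic example data (hypothetical): time in hours and optical density.
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=float)
od = np.array([0.05, 0.05, 0.06, 0.08, 0.13, 0.22, 0.38, 0.60,
               0.82, 0.95, 1.01, 1.03, 1.04])

def logistic(t, K, r, t_mid):
    """Logistic growth curve: carrying capacity K, rate r, midpoint t_mid."""
    return K / (1 + np.exp(-r * (t - t_mid)))

# Method 1 (parameter fitting): fit a logistic model, then take the lag as
# the time where the tangent at the point of maximal growth crosses the
# initial population level (one common convention; others exist).
(K, r, t_mid), _ = curve_fit(logistic, t, od, p0=[od.max(), 1.0, t.mean()])
n0 = od[0]
max_slope = K * r / 4                        # logistic slope at its midpoint
lag_fit = t_mid - (K / 2 - n0) / max_slope   # tangent-line intercept with n0

# Method 2 (maximal acceleration): take the lag as the time at which the
# numerical second derivative of the growth curve is largest. This is
# simple but, as noted above, sensitive to sparse or noisy measurements.
second_deriv = np.gradient(np.gradient(od, t), t)
lag_accel = t[np.argmax(second_deriv)]

print(f"Lag (logistic fit, tangent method): {lag_fit:.2f} h")
print(f"Lag (max growth acceleration):      {lag_accel:.2f} h")
```

On clean, densely sampled data the two estimates are close; the differences discussed above emerge when the growth rate is low (biasing the fitted-model estimate) or when measurements are sparse or noisy (destabilizing the acceleration-based estimate).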