Abstract. For two decades, meteor radars have been routinely used to monitor temperatures near 90 km altitude. A common method, based on a temperature-gradient model, uses the height dependence of the meteor decay time to obtain a height-averaged temperature in the peak meteor region. Traditionally this is done by fitting a linear regression model to the scatter plot of log10(1/τ) versus height, where τ is the half-amplitude decay time of the received signal. However, this method has been found to consistently bias the slope estimate. Such a bias produces a systematic offset in the estimated temperature, which then requires calibration against other colocated measurements. The main reason for this biasing effect is thought to be the failure of the classical regression model to account for measurement errors in τ and in the observed height. This is further complicated by the presence of various geophysical effects in the data that are not captured by the physical model. The biasing effect is discussed on both theoretical and experimental grounds. An alternative regression method that incorporates the various error terms in the statistical model is used for the line fitting, and from this model an analytic solution for the bias-corrected slope coefficient is constructed for these data. With this solution, meteor radar temperatures can be obtained independently, without any external calibration procedure. When compared with colocated lidar measurements, the temperatures estimated with this method are found to be accurate to within 7 % or better and show no systematic offset.
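As a minimal illustration of the bias mechanism referred to above (the textbook errors-in-variables attenuation, not the specific analytic correction developed in this paper), suppose the true relation is y = β0 + β1 x + ε with y = log10(1/τ) and x the true height, while only a noisy height x' is observed; the symbols λ, σ_x and σ_u below are introduced here for illustration and are not notation from the paper:

\[
x' = x + u, \qquad u \sim (0,\,\sigma_u^2) \ \text{independent of } x \text{ and } \varepsilon,
\]
\[
\hat{\beta}_1^{\mathrm{OLS}} \;\xrightarrow{\;p\;}\; \beta_1\,\frac{\sigma_x^2}{\sigma_x^2+\sigma_u^2} \;=\; \lambda\,\beta_1, \qquad 0 < \lambda < 1.
\]

Under these assumptions the ordinary-least-squares slope is attenuated toward zero by the factor λ, and a bias-corrected estimate of the form \(\hat{\beta}_1^{\mathrm{OLS}}/\hat{\lambda}\) requires knowledge, or an estimate, of the measurement-error variance σ_u^2, which is the role played by the error terms incorporated in the alternative regression model described in the abstract.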