We consider the problem of estimating the unknown parameter of the one-dimensional analog of the Michaelis-Menten equation when the independent variables are measured with random errors. We study the behavior of the explicit estimates that we found earlier in the case of known independent variables and establish almost necessary conditions under which the presence of the random errors does not affect the asymptotic normality of these explicit estimates.

Keywords: nonlinear regression, Michaelis-Menten equation, random errors in independent variables, asymptotically normal estimates

§ 1. Introduction

1.1. Consider some variables {y_i}, {a_i}, and {b_i} that satisfy the linear-fractional relations (1), where the value of the parameter θ is unknown, while the values of the numerical sequences {y_i}, {a_i}, and {b_i} are "known only approximately." The latter means that their exact values are unknown, but some observations Y_i, X_{ai}, and X_{bi} are given that can be represented as (2), where {ε_{yi}}, {ε_{ai}}, and {ε_{bi}} are unobservable random errors. Call a_i and b_i the coefficients, and call this regression model a model with random errors in the coefficients. The problem is to estimate the unknown parameter θ in the linear-fractional regression model (1), (2), which is a particular case of a nonlinear regression model.

The reason for our interest in this regression model is that (1) defines a one-dimensional analog of the Michaelis-Menten equation, which is known in the natural sciences and has been studied in many articles; for instance, see [1–7]. Several authors focus their attention on the problem of finding explicit estimates for the unknown parameters of this equation whose derivation avoids complicated constructions and successive approximations. However, all explicit estimates for those parameters that we were aware of before the appearance of our articles [8, 9] turned out to be biased under natural assumptions.
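The observation scheme (2), in which each quantity is seen only up to an additive random error, can be illustrated by a small simulation. The concrete relation y_i = a_i/(θ + b_i) used below is merely an assumed one-parameter linear-fractional instance chosen for illustration (the exact form of equation (1) is not reproduced here), and the moment-type estimator that averages X_{ai}/Y_i − X_{bi} is a hypothetical explicit estimator, not the estimate constructed in [8]:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
n = 10_000

# Assumed concrete instance of the linear-fractional relation (1):
#   y_i = a_i / (theta + b_i)   (illustrative form only)
a = rng.uniform(1.0, 3.0, n)         # true coefficients a_i
b = rng.uniform(0.5, 2.0, n)         # true coefficients b_i
y = a / (theta_true + b)             # exact responses

# Observation scheme (2): every quantity carries an additive random error.
Y  = y + rng.normal(0.0, 0.01, n)    # Y_i    = y_i + eps_{yi}
Xa = a + rng.normal(0.0, 0.10, n)    # X_{ai} = a_i + eps_{ai}
Xb = b + rng.normal(0.0, 0.10, n)    # X_{bi} = b_i + eps_{bi}

# Solving y = a/(theta + b) for theta gives theta = a/y - b, which suggests
# a simple explicit moment-type estimate (hypothetical, for illustration).
theta_known = np.mean(a / Y - b)     # coefficients known exactly
theta_noisy = np.mean(Xa / Y - Xb)   # coefficients measured with error

print(theta_known, theta_noisy)
```

Since this particular estimator is linear in the coefficients, mean-zero errors in X_{ai} and X_{bi} inflate its variance but do not destroy its consistency, which illustrates in miniature the phenomenon studied in this paper: under suitable conditions, random errors in the coefficients need not affect the limit behavior of an explicit estimate.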
Only in [8] were we able to solve the problem of explicit estimates for the model (1), (2) in the absence of random errors in the independent variables, i.e., when ε_{ai} = ε_{bi} = 0 for all i and n. It turns out that under sufficiently general assumptions the simple estimate