This paper is concerned with the statistical properties of experimental designs whose factor levels cannot be set precisely. When the errors in setting the factor levels cannot be measured, design robustness is explored; however, when the actual design can be measured at the end of the investigation, its optimality is of interest. D-optimality can be assessed in different ways, and several measures are compared. Evaluating them is difficult even in simple cases, so in general simulations are used to obtain their values. It is shown that if D-optimality is measured by the expected value of the determinant of the information matrix of the experimental design, as has been suggested in the past, the designs appear, on average, to improve as the variance of the error in setting the factor levels increases. However, we argue that the criterion of D-optimality should be based on the inverse of the information matrix, and in this case it is shown that the actual experiment can be better or worse than the planned one. It is also recognized that setting the factor levels with error can lead to an increased risk of losing observations, which on its own can considerably reduce the optimality of the experimental design. Advice is given on choosing the design region so that this risk is kept to an acceptable level.
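As a minimal sketch of the kind of simulation referred to above, the snippet below contrasts the two ways of assessing D-optimality under errors in setting the factor levels. The 2^2 factorial design, the first-order model, the normal error distribution, and the error standard deviation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_matrix(design):
    """Information matrix X'X for a first-order model with intercept (assumed model)."""
    X = np.column_stack([np.ones(len(design)), design])
    return X.T @ X

# Planned 2^2 factorial design in coded units (hypothetical example).
planned = np.array([[-1.0, -1.0],
                    [-1.0,  1.0],
                    [ 1.0, -1.0],
                    [ 1.0,  1.0]])

sigma = 0.1      # assumed std. dev. of the error in setting the factor levels
n_sim = 10_000   # number of simulated realizations of the actual design

det_M, det_Minv = [], []
for _ in range(n_sim):
    # Actual design = planned levels plus independent normal setting errors.
    actual = planned + rng.normal(0.0, sigma, size=planned.shape)
    M = info_matrix(actual)
    det_M.append(np.linalg.det(M))
    det_Minv.append(np.linalg.det(np.linalg.inv(M)))

# Criterion based on the information matrix itself:
print("det(M), planned design:          ", np.linalg.det(info_matrix(planned)))
print("E[det(M)], simulated designs:    ", np.mean(det_M))
# Criterion based on the inverse of the information matrix:
print("det(M^-1), planned design:       ", 1 / np.linalg.det(info_matrix(planned)))
print("E[det(M^-1)], simulated designs: ", np.mean(det_Minv))
```

Under these assumptions the simulated E[det(M)] typically exceeds det(M) of the planned design, while the criterion based on the inverse information matrix can come out either better or worse, which is the contrast the abstract describes.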