Several automated test case generation techniques have been proposed to date, yet their adoption in industry remains low. A key factor behind this low adoption rate is the difficulty developers face in reading and understanding automatically generated test cases. It is therefore essential to build a test case understandability model to guide the improvement of generated test cases. In this paper, we extract 20 test case metrics, six developer-related metrics, and two understandability proxies from a white-box test case classification experiment. Based on these metrics, we employ classification and regression algorithms to build test case understandability models. The experiments show that models combining both metric groups consistently achieve better discriminatory performance in classification, and higher correlation in regression, than models built on test case metrics or developer metrics alone.

INDEX TERMS test case, understandability model, automated test case generation
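The comparison described above, training one model per feature group and one on the combined set, can be sketched as follows. This is an illustrative example only, not the paper's actual pipeline: the data is synthetic, the feature groups merely mirror the stated dimensions (20 test case metrics, six developer metrics), and the choice of a random forest with cross-validated ROC AUC is an assumption.

```python
# Hypothetical sketch: compare classifiers trained on test case metrics
# alone, developer metrics alone, and both groups combined.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
test_case_metrics = rng.normal(size=(n, 20))  # placeholder for the 20 test case metrics
developer_metrics = rng.normal(size=(n, 6))   # placeholder for the 6 developer metrics
# Synthetic understandability label that depends on one feature from each
# group, so that combining the groups can help.
label = ((test_case_metrics[:, 0] + developer_metrics[:, 0]) > 0).astype(int)

def auc(features):
    """Mean 5-fold cross-validated ROC AUC of a random forest on `features`."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, features, label, cv=5, scoring="roc_auc").mean()

combined = np.hstack([test_case_metrics, developer_metrics])
print(f"test case metrics only: {auc(test_case_metrics):.2f}")
print(f"developer metrics only: {auc(developer_metrics):.2f}")
print(f"combined metrics:       {auc(combined):.2f}")
```

The same three feature matrices could be fed to a regressor (with a continuous understandability proxy as the target) to reproduce the regression side of the comparison.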