We present a system that estimates upper human body measurements using a set of computer vision and machine learning techniques. The main steps are: (1) capturing an image with a portable camera (such as a smartphone camera); (2) improving the image quality; (3) isolating the human body from the surrounding environment; (4) performing a calibration step; (5) extracting body features from the image; (6) placing measurement markers on the image; and (7) producing refined final results.
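The calibration step (4) can be sketched as a pixel-to-centimeter scale estimated from an object of known size in the frame. This is only an illustrative assumption; the reference object, function names, and values below are not taken from the paper.

```python
def calibrate_scale(reference_length_cm, reference_length_px):
    """Estimate cm-per-pixel scale from a reference of known size
    (e.g., a sheet of paper or the subject's stated height).
    Hypothetical helper, not the authors' implementation."""
    return reference_length_cm / reference_length_px

def px_to_cm(length_px, scale):
    """Convert a pixel distance measured in the image to centimeters."""
    return length_px * scale

# Example: a 170 cm reference spans 850 px, so scale = 0.2 cm/px;
# a 100 px shoulder width then maps to 20 cm.
scale = calibrate_scale(170.0, 850.0)
shoulder_cm = px_to_cm(100.0, scale)
```

In practice the scale varies with perspective and distance, so a real system would calibrate per image and ideally per body region.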
Using a single RGB camera to obtain accurate body dimensions, rather than measuring them manually or with more complex multi-camera or more expensive laser-based sensors, has high application potential for the apparel (fashion) industry.
We also present ellipse-like approximations that aim to minimize the difference between direct (hand) measurements and software measurements. Human body circumferences can be well approximated by varying elliptic cross sections, which can be adapted to each individual. We show that results better than the current state of the art are obtained based on a simple criterion. In our study, we selected a set of two equations, out of many possible choices, to best estimate upper human body horizontal cross sections.
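The two equations selected in the study are not reproduced here. As an illustration of the elliptic cross-section idea, a circumference can be estimated from the body's measured half-width and half-depth at a given height using a standard closed form such as Ramanujan's approximation for an ellipse's perimeter; this particular formula is an assumption, not necessarily one of the paper's two equations.

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's second approximation to the perimeter of an ellipse
    with semi-axes a and b; its error is negligible for the mild
    eccentricities typical of body cross sections."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Example: a chest cross section with half-width 15 cm and half-depth 11 cm
chest_cm = ellipse_circumference(15.0, 11.0)
```

For a circle (a = b = r) the formula reduces exactly to 2πr, which makes it easy to sanity-check.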
We evaluated the system on a diverse sample of participants. Compared with the traditional manual method of tape measurements, used as a reference, the upper human body measurements show average differences of ±1 cm, which is sufficient for a number of applications.