Fig. 1. ClothCap. From left to right: (1) An example 3D textured scan that is part of a 4D sequence. (2) Our multi-part aligned mesh model, layered over the body. (3) The estimated minimally clothed shape (MCS) under the clothing. (4) The body made fatter and dressed in the same clothing. Note that the clothing adapts in a natural way to the new body shape. (5) This new body shape posed in a new, never-seen pose. This illustrates how ClothCap supports a range of applications related to clothing capture, modeling, retargeting, reposing, and try-on.

Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and from the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the minimally clothed body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. ClothCap is able to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes; this provides a step towards virtual try-on.