State-of-the-art mobile visual sensor technology now makes it easy to collect large numbers of clothing images. Accordingly, there is a growing demand for efficient methods to retrieve clothing images captured by mobile visual sensors. Unlike traditional keyword-based and content-based image retrieval techniques, sketch-based image retrieval offers a more intuitive and natural way for users to express their search intent. However, retrieval from sketches is challenging due to the large discrepancy between sketches and images. To tackle this problem, we present a new sketch-based clothing image retrieval algorithm built on sketch component segmentation. We first collect a large-scale training set of clothing sketches and images and tag them with semantic component labels; we then train a conditional random field (CRF) classifier to segment a query sketch into its components. Next, several feature descriptors are fused to describe each component and capture its topological information. Finally, a dynamic component-weighting strategy boosts the influence of important components when measuring similarity, as illustrated by the sketch below. The approach is evaluated on a large, real-world clothing image dataset, and experimental results demonstrate its effectiveness and good performance.
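As a rough illustration of the component-weighted matching step, the minimal sketch below scores a candidate image as a weighted sum of per-component descriptor similarities. The component names, the use of cosine similarity, and the idea of deriving weights from each component's stroke coverage in the query sketch are assumptions made for this example only, not details taken from the method itself.

```python
import numpy as np

# Hypothetical fused descriptors, one vector per semantic component of the
# query sketch and of a candidate database image (names are placeholders).
query = {"collar": np.random.rand(64), "sleeve": np.random.rand(64), "body": np.random.rand(64)}
candidate = {"collar": np.random.rand(64), "sleeve": np.random.rand(64), "body": np.random.rand(64)}

# Assumed dynamic weights, e.g. proportional to how much of the query sketch
# each component covers; the paper's actual weighting rule may differ.
coverage = {"collar": 0.15, "sleeve": 0.35, "body": 0.50}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def weighted_similarity(q: dict, c: dict, weights: dict) -> float:
    """Weighted average of per-component similarities over shared components."""
    shared = q.keys() & c.keys()
    total_w = sum(weights[k] for k in shared) or 1.0
    return sum(weights[k] * cosine_similarity(q[k], c[k]) for k in shared) / total_w

score = weighted_similarity(query, candidate, coverage)
print(f"component-weighted similarity: {score:.3f}")
```

Under this scheme, components that dominate the query sketch contribute more to the final ranking score, which is the intuition behind boosting "important" components during similarity measurement.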