Refugee-dwelling footprints derived from satellite imagery are beneficial for humanitarian operations, and deep learning approaches have recently attracted much attention in this domain. However, most refugees are hosted by low- and middle-income countries, where accurate label data are often unavailable. The Object-Based Image Analysis (OBIA) approach has been widely applied to this task in humanitarian operations over the last decade, but the resulting footprints were usually produced under time pressure and therefore contain delineation errors. Thus far, no research has examined whether footprints generated by the OBIA approach (OBIA labels) can replace manually annotated labels (Manual labels) for this task. This research compares the performance of semantic segmentation models trained with OBIA labels and with Manual labels under multiple training strategies. The results reveal that OBIA labels can yield Intersection over Union (IoU) values greater than 0.5, which is sufficient to produce applicable results for humanitarian operations. Most falsely predicted pixels stem from the boundaries of built-up structures, occlusion by trees, and structures with complicated ontologies. In addition, we found that fine-tuning models initially trained with OBIA labels using a small number of Manual labels can outperform models trained with Manual labels alone. These findings demonstrate the high value of OBIA labels for deep-learning-based refugee-dwelling extraction in future humanitarian operations.
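
For reference, the sketch below illustrates the per-image Intersection over Union (IoU) metric reported above for binary dwelling masks; it is a minimal illustration assuming NumPy boolean masks, and the function name and example arrays are not taken from the authors' implementation.

```python
import numpy as np

def intersection_over_union(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU between two binary masks of identical shape (True = dwelling pixel)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    if union == 0:
        # Both masks empty: treat perfect agreement on "no dwellings" as IoU = 1.
        return 1.0
    return float(intersection / union)

# Toy example: prediction and reference footprints overlapping by half.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True   # 6 predicted pixels
true = np.zeros((4, 4), dtype=bool); true[1:3, 0:3] = True   # 6 reference pixels
print(intersection_over_union(pred, true))  # 4 / 8 = 0.5
```

In the study's terms, a model whose predictions exceed an IoU of 0.5 against reference footprints is considered applicable for humanitarian operations.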