This paper proposes a foreground-based approach to generating a depth map for 2D-to-3D conversion. For a given input image, the proposed approach determines whether the image is an object-view (OV) scene or a non-object-view (NOV) scene, depending on whether foreground objects clearly distinguishable from the background are present. If the input image is an OV scene, the proposed approach extracts the foreground using block-wise background modeling and performs segmentation using adaptive background region selection and color modeling. It then performs segment-wise depth merging and cross bilateral filtering (CBF) to generate the final depth map. For NOV scenes, the proposed approach instead uses a conventional color-based depth map generation method [9], which is computationally simple yet provides a depth map of good quality. Human viewers are generally more sensitive to the quality of depth maps, and of the resulting 3D images, for OV scenes than for NOV scenes; the proposed approach therefore improves depth map quality for OV scenes compared with using the conventional methods alone. The performance of the proposed approach was evaluated through subjective assessment of the converted 3D images on a 3D display, and it provided the best depth quality and visual comfort among the benchmark methods.
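To make the CBF refinement step concrete, the following is a minimal sketch of a generic cross (joint) bilateral filter applied to a coarse depth map, guided by the input color image; the function name, window radius, and sigma values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cross_bilateral_filter(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Refine a coarse depth map with a cross (joint) bilateral filter.

    depth : (H, W) float array, coarse depth values in [0, 1]
    guide : (H, W) or (H, W, 3) float array, guidance image in [0, 1]

    The range weight is computed on the guidance (color) image rather than
    on the depth itself, so depth discontinuities are aligned with color
    edges while flat regions are smoothed.
    """
    if guide.ndim == 3:            # reduce color guidance to luminance
        guide = guide.mean(axis=2)
    H, W = depth.shape
    pad = radius
    d = np.pad(depth, pad, mode='edge')
    g = np.pad(guide, pad, mode='edge')

    num = np.zeros_like(depth)
    den = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # spatial Gaussian weight for this neighborhood offset
            w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            d_shift = d[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            g_shift = g[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            # range Gaussian weight from the guidance image differences
            w_r = np.exp(-((g_shift - guide) ** 2) / (2.0 * sigma_r ** 2))
            w = w_s * w_r
            num += w * d_shift
            den += w
    return num / np.maximum(den, 1e-8)

# Hypothetical usage: refine a segment-wise merged depth map with the color image as guide.
# refined_depth = cross_bilateral_filter(coarse_depth, color_image / 255.0)
```

In this sketch the guidance-driven range weight is what distinguishes cross bilateral filtering from ordinary bilateral filtering of the depth map alone, which matches its role here of sharpening foreground depth boundaries against the background.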