Object segmentation in 3D point clouds is a critical task for robots capable of 3D perception. Despite the impressive performance of deep learning-based approaches on object segmentation in 2D images, deep learning has not been applied nearly as successfully to 3D point cloud segmentation. Deep networks generally require large amounts of labeled training data, which are readily available for 2D images but are difficult to produce for 3D point clouds. In this paper, we present Label Diffusion Lidar Segmentation (LDLS), a novel approach to 3D point cloud segmentation which leverages 2D segmentation of an RGB image from an aligned camera to avoid the need for training on annotated 3D data. We obtain 2D segmentation predictions by applying Mask R-CNN to the RGB image, and then link this image to a 3D lidar point cloud by building a graph of connections among 3D points and 2D pixels. This graph then directs a semi-supervised label diffusion process, in which the 2D pixels act as source nodes that diffuse object label information through the 3D point cloud, resulting in a complete 3D point cloud segmentation. We conduct empirical studies on the KITTI benchmark data set and on a mobile robot, demonstrating wide applicability and superior performance of LDLS compared to the previous state-of-the-art in 3D point cloud segmentation, without any need for either 3D training data or fine-tuning of the 2D image segmentation model.
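To make the diffusion step concrete, the sketch below shows one way a semi-supervised label diffusion over a pixel-to-point graph could be implemented. It is a minimal illustration under stated assumptions, not the paper's exact formulation: the function name `diffuse_labels`, the row-normalization of the adjacency matrix, the fixed iteration count, and the clamping of pixel labels are all illustrative choices, and the construction of the graph itself (edge weights, lidar-to-camera projection, outlier handling) is assumed to have already been done.

```python
import numpy as np
from scipy import sparse


def diffuse_labels(adjacency, pixel_labels, num_classes, num_iters=50):
    """Illustrative label diffusion over a graph whose first nodes are 2D
    pixels (labeled by the image segmentation) and whose remaining nodes
    are 3D lidar points (initially unlabeled).

    adjacency    : (N, N) scipy sparse matrix of non-negative edge weights,
                   where N = num_pixels + num_points.
    pixel_labels : (num_pixels,) integer class label per pixel.
    Returns      : (num_points,) predicted class label per lidar point.
    """
    n_pixels = pixel_labels.shape[0]
    n_total = adjacency.shape[0]

    # One-hot label distributions for every node; lidar points start at zero.
    labels = np.zeros((n_total, num_classes))
    labels[np.arange(n_pixels), pixel_labels] = 1.0

    # Row-normalize the adjacency so each node averages over its neighbors.
    row_sums = np.asarray(adjacency.sum(axis=1)).ravel()
    row_sums[row_sums == 0] = 1.0
    W = sparse.diags(1.0 / row_sums) @ adjacency

    for _ in range(num_iters):
        labels = np.asarray(W @ labels)
        # Clamp the source nodes: pixel labels never change during diffusion.
        labels[:n_pixels, :] = 0.0
        labels[np.arange(n_pixels), pixel_labels] = 1.0

    # Each lidar point takes the class with the highest diffused score.
    return np.argmax(labels[n_pixels:], axis=1)
```

Sparse matrix-vector products keep each iteration cheap even when the graph spans tens of thousands of pixels and points, which is one reason a diffusion formulation of this kind is practical on full lidar scans.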