In this study, the authors propose a novel method for foreground extraction with freely moving RGBD cameras. Although foreground extraction (or background subtraction) has been explored by the computer vision community for a long time, depth-based subtraction is relatively new and has not yet been extensively addressed. Most current methods rely heavily on geometric reconstruction, which makes the resulting solutions quite restrictive. The authors make novel use of RGB and depth data: from the RGB frame, they first extract corner features and represent them with histogram of oriented gradients (HoG) descriptors, on which they train a non-linear SVM. At test time, they exploit the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene: the positively classified FAST (features from accelerated segment test) corners on the test frame seed a region growing algorithm that obtains an accurate foreground segmentation from the depth data alone. The authors evaluate the proposed method on six datasets and report encouraging quantitative and qualitative results.
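
To make the described pipeline concrete, the sketch below is a minimal Python/OpenCV illustration, not the authors' implementation; the patch size, depth tolerance, and helper function names are assumptions chosen for clarity. It follows the steps summarized above: detect FAST corners, describe them with HoG, classify with a non-linear SVM, and grow a foreground mask on the depth map from the positively classified seeds.

```python
# Illustrative sketch of the reviewed pipeline (assumptions: 32x32 HOG patches,
# a fixed depth tolerance, and these helper names). Not the authors' code.
import numpy as np
import cv2
from collections import deque
from sklearn.svm import SVC

def keypoint_descriptors(gray, keypoints, patch=32):
    """Compute a HoG descriptor on a patch centred at each FAST keypoint."""
    hog = cv2.HOGDescriptor((patch, patch), (16, 16), (8, 8), (8, 8), 9)
    half = patch // 2
    descs, kept = [], []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        roi = gray[y - half:y + half, x - half:x + half]
        if roi.shape == (patch, patch):          # skip corners too close to the border
            descs.append(hog.compute(roi).ravel())
            kept.append((x, y))
    return np.array(descs), kept

def region_grow(depth, seeds, tol=30.0):
    """Grow a binary foreground mask from seed pixels over depth-similar neighbours."""
    mask = np.zeros(depth.shape, dtype=np.uint8)
    queue = deque(seeds)
    for x, y in seeds:
        mask[y, x] = 1
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < depth.shape[1] and 0 <= ny < depth.shape[0]
                    and not mask[ny, nx]
                    and abs(float(depth[ny, nx]) - float(depth[y, x])) < tol):
                mask[ny, nx] = 1
                queue.append((nx, ny))
    return mask

# Training: HoG descriptors of labelled corners feed a non-linear (here RBF) SVM.
# svm = SVC(kernel="rbf").fit(train_descriptors, train_labels)
#
# Testing: classify the FAST corners of the test frame, keep the positives as
# seeds, then segment the foreground by region growing on the depth map alone.
# fast = cv2.FastFeatureDetector_create()
# descs, pts = keypoint_descriptors(gray_test, fast.detect(gray_test, None))
# seeds = [p for p, lbl in zip(pts, svm.predict(descs)) if lbl == 1]
# fg_mask = region_grow(depth_test, seeds)
```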