The objective of an effective human-robot collaborative (HRC) task is to maximize the combined competencies of the human and the robot while ensuring the user's convenience. In photometrically challenging and unstructured HRC environments, data obtained from vision sensors are often degraded by illumination irregularities and spatio-temporal complexities. To extract useful and discriminative features under such conditions, locality-sensitive methods such as locality preserving projections (LPP) are particularly useful, as they capture the local geometric structure of high-dimensional data. In LPP, local structural information is encoded as weight values between pairs of samples in the high-dimensional Euclidean space. These weights are learned in a fixed, continuous manner that depends only on the spatial distribution of the data. Because the weights depend solely on the Euclidean distance, which is susceptible to noise, outliers, and various geometric transformations, improper weight values occur frequently. This paper proposes an adaptive weight learning method for the weight computation of LPP, which allows it to adaptively select and extract more discriminative features from the high-dimensional input data while preserving the intrinsic structural information of the data. Additionally, to alleviate the issue of spatial dependency, the concept of bilateral filtering, which incorporates range weights from the feature space along with the similarity weights in the Euclidean space, is utilized. Together, these yield an augmented adaptive spatial-feature kernel-guided bilateral filtering inspired LPP that addresses these two fundamental issues of the conventional LPP.
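
For concreteness, the sketch below illustrates the conventional LPP weight computation the abstract criticizes: heat-kernel weights over a k-nearest-neighbor graph, followed by the standard generalized eigenproblem. This is a minimal illustration of the baseline method, not the paper's proposed variant; the parameter names (k, sigma, n_components) are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def lpp_projection(X, k=5, sigma=1.0, n_components=2):
    """Conventional LPP on X (n_samples x n_features): heat-kernel
    weights over a kNN graph, then solve X^T L X a = lambda X^T D X a."""
    n = X.shape[0]
    D2 = cdist(X, X, metric="sqeuclidean")   # pairwise squared Euclidean distances

    # kNN adjacency: connect each sample to its k nearest neighbors,
    # weighted purely by Euclidean distance (the dependency the paper targets)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(D2[i])[1:k + 1]      # skip self (distance 0)
        W[i, nn] = np.exp(-D2[i, nn] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                   # symmetrize the graph

    D = np.diag(W.sum(axis=1))               # degree matrix
    L = D - W                                # graph Laplacian

    # Smallest generalized eigenvectors give the projection directions
    A = X.T @ L @ X
    B = X.T @ D @ X
    vals, vecs = eigh(A, B + 1e-9 * np.eye(B.shape[0]))  # small ridge for stability
    return vecs[:, :n_components]            # columns = projection vectors
```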
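
The bilateral-filtering idea the abstract invokes can likewise be sketched as a pairwise affinity in which the usual spatial (Euclidean) kernel is modulated by a range kernel over feature-space distances, so samples that are spatially close but dissimilar in feature space receive small weights. The feature vectors f_i, f_j and the bandwidths sigma_s, sigma_r below are placeholders; the paper's adaptive learning of these weights is not reproduced here.

```python
import numpy as np

def bilateral_weight(x_i, x_j, f_i, f_j, sigma_s=1.0, sigma_r=1.0):
    """Bilateral-style affinity: spatial kernel over raw coordinates
    multiplied by a range kernel over feature-space descriptors."""
    spatial = np.exp(-np.sum((x_i - x_j) ** 2) / (2 * sigma_s ** 2))
    rng = np.exp(-np.sum((f_i - f_j) ** 2) / (2 * sigma_r ** 2))
    return spatial * rng
```

In a combined scheme, such bilateral weights could replace the purely distance-based heat-kernel entries of W in the LPP sketch above, which is one way to read the spatial-feature coupling the abstract describes.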