Assisted and automated driving systems critically depend on high-quality sensor data to build accurate situational awareness. A key aspect of maintaining this quality is the ability to quantify perception sensor degradation by detecting dissimilarities in sensor data. Among perception sensors, LiDAR technology has gained traction due to significant cost reductions and its ability to provide a detailed 3D representation of the environment (a point cloud). However, measuring the dissimilarity between LiDAR point clouds, especially in the context of data degradation caused by noise factors, remains underexplored in the literature. A comprehensive point cloud dissimilarity metric is essential for detecting severe sensor degradation, which could otherwise lead to hazardous events through compromised performance of perception tasks. Such a metric also plays a central role in the use of virtual sensor models, whose accuracy and reliability require thorough validation. To address this gap, this paper introduces a novel framework that evaluates point cloud dissimilarity based on high-level geometries. In contrast to traditional methods such as the computationally expensive Hausdorff metric, which relies on correspondence-search algorithms, our framework applies a tailored downsampling method to ensure efficiency, then condenses point clouds into shape signatures that can be compared efficiently. In both controlled simulations and highly noisy real-world scenarios, our framework demonstrated repeatability, robustness, and consistency, surpassing traditional methods.

INDEX TERMS 3D point cloud, noise factor, perception and sensing, sensor degradation
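To make the baseline concrete, the Hausdorff metric mentioned above can be sketched in a few lines of NumPy. This is a standard brute-force formulation (not the paper's framework): the full pairwise correspondence search is O(N·M) in the cloud sizes, which is the cost the shape-signature approach is designed to avoid.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between point clouds A (N x 3) and B (M x 3).

    Illustrative brute-force version: builds the full N x M distance matrix,
    so memory and time grow with the product of the cloud sizes.
    """
    # Pairwise Euclidean distances between every point in A and every point in B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Directed distances: each point's distance to its nearest neighbour in the
    # other cloud; the Hausdorff distance is the worst such nearest-neighbour gap.
    h_ab = d.min(axis=1).max()
    h_ba = d.min(axis=0).max()
    return max(h_ab, h_ba)

# Small synthetic example: a cloud and a uniformly shifted copy of it.
rng = np.random.default_rng(0)
A = rng.random((500, 3))
B = A + 0.01  # shift of 0.01 along each axis, i.e. at most sqrt(3)*0.01 per point
print(hausdorff_distance(A, B))
```

For the shifted copy, the result is bounded by the per-point shift magnitude, sqrt(3)·0.01 ≈ 0.0173; this metric is a distance between sets, so it is insensitive to point ordering but highly sensitive to single outlier points, one reason noisy real-world clouds are hard to compare with it.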