Whether it is to feed an object detection-and-tracking system or to generate proper occupancy grids, extracting the ground from 3D point clouds and classifying the data are critical processing tasks, on whose efficiency the whole perception chain can drastically depend. A flat-ground assumption or shape recognition in point clouds can lead either to systematic errors or to massive computations. This paper describes an adaptive method for ground labeling in 3D point clouds, based on a local estimation of the ground elevation. The ground is modeled as a Spatio-Temporal Conditional Random Field (STCRF). Spatial and temporal dependencies within the segmentation process are unified by a dynamic probabilistic framework based on the conditional random field (CRF). Ground elevation parameters are estimated in parallel in each node, using an interconnected variant of the Expectation-Maximization (EM) algorithm. The approach, designed to meet the constraints of high-speed vehicles, performs efficiently with both highly dense (Velodyne-64) and sparser (Ibeo-Lux) 3D point clouds; it has been implemented and deployed on experimental vehicles and platforms, and is currently being tested on embedded systems (Nvidia Jetson TX1, TK1). Experiments on real road data, in various situations (city, countryside, mountain roads, ...), show promising results.
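To make the per-node elevation estimation concrete, the following is a minimal, standalone sketch of an EM-style alternation for a single grid cell: points are softly assigned to ground versus obstacle, and the local ground elevation is re-estimated from the soft assignments. It is not the paper's interconnected STCRF implementation (the spatial and temporal coupling between neighboring nodes is omitted), and the Gaussian ground model, the uniform obstacle model, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def estimate_cell_ground(z, mu0=0.0, sigma=0.1, p_ground=0.7, iters=10):
    """Estimate the local ground elevation mu for point heights z (one grid cell)."""
    mu = mu0
    resp = np.zeros_like(z)
    for _ in range(iters):
        # E-step: responsibility that each point belongs to the ground,
        # assuming a Gaussian ground model vs. a uniform obstacle model (assumption).
        ground_lik = p_ground * np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        obstacle_lik = (1.0 - p_ground) * 0.25  # uniform over a ~4 m height band (assumption)
        resp = ground_lik / (ground_lik + obstacle_lik)
        # M-step: re-estimate the elevation as a responsibility-weighted mean.
        if resp.sum() > 1e-9:
            mu = float(np.sum(resp * z) / resp.sum())
    return mu, resp

# Usage: heights (in meters) of the points falling into one cell of the grid.
z = np.array([0.02, -0.01, 0.05, 1.2, 1.3, 0.0])
mu, resp = estimate_cell_ground(z)
print(mu, resp > 0.5)  # estimated local elevation and hard ground/obstacle labels
```

In the full method, such per-node estimates would not run independently: the CRF couples neighboring cells spatially and across scans temporally, which regularizes the elevation field on slopes and in sparse regions.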