Various advanced driver assistance systems (ADASs) have recently been developed, such as Adaptive Cruise Control and the Pre-Crash Safety System. However, most ADASs can operate in only a limited set of driving situations because of the difficulty of recognizing contextual information. For closer cooperation between driver and vehicle, the vehicle should recognize a range of situations as wide as that recognized by the driver and assist the driver with appropriate timing. In this paper, we assumed a double articulation structure in driving behavior data and segmented the driving behavior into meaningful chunks for driving scene recognition, in a manner similar to natural language processing (NLP). A double articulation analyzer first translated the driving behavior into meaningless manemes, the smallest units of driving behavior, analogous to phonemes in NLP; from these it then constructed navemes, meaningful chunks of driving behavior analogous to morphemes. As a result of this two-phase analysis, we found that the driving chunks, which correspond to words in language, closely approximated the complicated, contextual driving scene segmentation produced by human recognition.
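To make the two-phase analysis concrete, the sketch below first discretizes continuous driving signals into maneme-like symbols and then chunks frequent symbol patterns into naveme-like units. It is a minimal illustration under stated assumptions, not the paper's analyzer: the double articulation analyzer described above relies on nonparametric Bayesian models, whereas here k-means clustering and byte-pair-style merging stand in for the two phases, and all signal names, function names, and parameters are hypothetical.

```python
"""Illustrative two-phase (double-articulation-style) segmentation sketch.

Phase 1 maps continuous driving signals to discrete symbols ("manemes");
phase 2 merges frequent adjacent symbols into chunks ("navemes").
K-means and greedy pair merging are stand-ins for the paper's
nonparametric Bayesian models; everything here is an assumption.
"""
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def to_manemes(signals: np.ndarray, n_symbols: int = 8) -> list:
    """Phase 1: label each frame of driving signals (e.g., speed,
    steering angle) with one of n_symbols discrete maneme symbols."""
    labels = KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit_predict(signals)
    # Collapse consecutive repeats so each maneme spans a segment of frames.
    return [int(l) for i, l in enumerate(labels) if i == 0 or l != labels[i - 1]]


def to_navemes(manemes: list, n_merges: int = 5) -> list:
    """Phase 2: greedily merge the most frequent adjacent chunk pair,
    a crude stand-in for unsupervised word (naveme) discovery."""
    seq = [(m,) for m in manemes]  # each chunk starts as a single maneme
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append(a + b)  # concatenate the two tuples into one chunk
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for driving behavior data: 300 frames x 2 channels.
    data = np.concatenate([rng.normal(m, 0.1, size=(100, 2)) for m in (0.0, 1.0, 0.0)])
    manemes = to_manemes(data, n_symbols=4)
    print("manemes:", manemes)
    print("navemes:", to_navemes(manemes))
```

The design mirrors the phoneme/morpheme analogy in the abstract: phase 1 deliberately produces units with no standalone meaning, and only the second, distributional phase yields chunks intended to align with humanly meaningful driving scenes.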