Accurate image semantic segmentation under atmospheric turbulence is challenging due to the severe degradation caused by random refractive-index fluctuations in the atmosphere. In this paper, we present an end-to-end trainable methodology for turbulence-degraded image semantic segmentation that exploits the physical imaging mechanism under atmospheric turbulence in order to improve semantic estimates. First, we investigate the physical imaging mechanism under various turbulence conditions, covering both isotropic and anisotropic turbulence. Physical turbulence parameters are taken into account, such as the anisotropic factor, the turbulence inner and outer scales, the refractive-index structure constant, the general spectral power law, and the imaging distance. Second, based on the physical imaging model under these turbulence conditions and on image processing algorithms, we construct turbulence-degraded image datasets, namely turbulence-degraded versions of Pascal VOC 2012 and ADE20K, which cover a wide range of turbulence scenes. Third, to obtain more accurate boundary information, we propose the Boundary-aware DeepLabv3+ network, trained on the constructed turbulence-degraded datasets, for semantic segmentation in turbulent media. The proposed model extends DeepLabv3+ with a simple yet effective Edge Aware Loss and a Border Auxiliary Supervision Module, which help produce precise boundary segmentation while confining each target within its boundary region. Finally, without any preprocessing, the proposed method reaches 87.95% mIoU on the turbulence-degraded Pascal VOC 2012 dataset.
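The full degradation model used to build the datasets is developed in the body of the paper. As a rough illustration only, the sketch below applies a simplified Kolmogorov long-exposure turbulence MTF to an image in the frequency domain; the function names, parameter values, and the plane-wave Fried-parameter expression are assumptions for illustration, and the sketch omits the anisotropic factor, inner/outer scales, and general spectral power law handled by the paper's model.

```python
import numpy as np

# Illustrative sketch only (not the paper's full anisotropic / non-Kolmogorov model).

def fried_parameter(cn2, wavelength, path_length):
    """Plane-wave Fried parameter r0 = (0.423 * k^2 * Cn^2 * L)^(-3/5)
    for a horizontal path with constant refractive-index structure constant Cn^2."""
    k = 2.0 * np.pi / wavelength
    return (0.423 * k ** 2 * cn2 * path_length) ** (-3.0 / 5.0)

def degrade_long_exposure(img, wavelength, focal_length, pixel_pitch, r0):
    """Apply the Kolmogorov long-exposure turbulence MTF
    exp[-3.44 * (wavelength * focal_length * nu / r0)**(5/3)]
    to a single-channel image, with nu the image-plane spatial frequency."""
    h, w = img.shape
    fy = np.fft.fftfreq(h, d=pixel_pitch)          # cycles per meter (image plane)
    fx = np.fft.fftfreq(w, d=pixel_pitch)
    nu = np.hypot(*np.meshgrid(fx, fy))            # radial spatial frequency
    mtf = np.exp(-3.44 * (wavelength * focal_length * nu / r0) ** (5.0 / 3.0))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))

if __name__ == "__main__":
    # Assumed example values: Cn^2 = 1e-14 m^(-2/3), 1 km path, 550 nm light.
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    r0 = fried_parameter(cn2=1e-14, wavelength=550e-9, path_length=1000.0)
    blurred = degrade_long_exposure(img, wavelength=550e-9, focal_length=0.3,
                                    pixel_pitch=5e-6, r0=r0)
```

In such a pipeline, each clean image from Pascal VOC 2012 or ADE20K would be passed through a turbulence filter of this kind (with varying parameter settings) to produce the degraded training and evaluation images, while the original segmentation labels are kept unchanged.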