In recent years, the importance of semantic segmentation has been increasingly emphasized because autonomous vehicles and artificial intelligence (AI)-based robotics are being researched extensively, and methods for accurately recognizing objects are required. Previous state-of-the-art segmentation methods have proven effective on databases captured during the daytime. However, in extremely low light or nighttime environments, the shape and color information of objects is severely diminished or lost owing to an insufficient amount of external light, which makes it difficult to train a segmentation network and significantly degrades performance. In our previous work, segmentation performance in low light environments was improved using an enhancement-based segmentation method. However, low light images could not be restored precisely, and the improvement in segmentation performance was limited because only per-pixel loss functions were used when training the enhancement network. To overcome these drawbacks, we propose a low light image segmentation method based on a modified perceptual cycle generative adversarial network (CycleGAN). Perceptual image enhancement is performed by our network, which significantly improves segmentation performance. Unlike the existing perceptual loss, our loss uses the Euclidean distance between feature maps extracted from a pretrained segmentation network. In our experiments, we used low light databases generated from two well-known road scene open databases, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) database, and confirmed that the proposed method achieves better segmentation performance in extremely low light environments than existing state-of-the-art methods.
INDEX TERMS Semantic segmentation; low light; modified perceptual CycleGAN; perceptual loss; road scene open database.
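To make the loss described in the abstract concrete, the following is a minimal sketch, assuming PyTorch; it computes the perceptual term as the mean squared Euclidean distance between feature maps of a frozen pretrained segmentation network. The names `SegPerceptualLoss` and `feature_extractor` are hypothetical illustrations, not the paper's actual implementation, and the choice of layers and loss weighting would follow the paper's own configuration.

```python
import torch
import torch.nn as nn


class SegPerceptualLoss(nn.Module):
    """Sketch of a perceptual loss using a pretrained segmentation
    network's feature maps (hypothetical wrapper, assumptions noted above)."""

    def __init__(self, feature_extractor: nn.Module):
        super().__init__()
        # The segmentation backbone is frozen: it only supplies features
        # for the loss and is not updated during enhancement training.
        self.features = feature_extractor.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, enhanced: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        f_enh = self.features(enhanced)   # features of the enhanced low light image
        f_ref = self.features(reference)  # features of the well-lit reference image
        # Mean squared Euclidean distance over all feature elements
        return torch.mean((f_enh - f_ref) ** 2)
```

In a CycleGAN-style setup, such a term would be added to the generator objective alongside the adversarial and cycle-consistency losses, encouraging the enhanced images to match well-lit images in the segmentation network's feature space rather than only per pixel.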