Object detection has achieved great success in recent years. However, almost all current state-of-the-art methods focus on images with normal illumination, while object detection under low illumination is often ignored. In this paper, we extensively investigate several important issues related to the challenging low-illumination detection task, such as the importance of illumination to detection, the applicability of illumination enhancement to the low-illumination object detection task, and the influence of an illumination-balanced dataset and of model parameter initialization. We further propose a Night Vision Detector (NVD) with a specifically designed feature pyramid network and context fusion network for object detection under low illumination. By conducting comprehensive experiments on ExDARK, a public dataset of real low-illumination scenes, and COCO*, a selected normal-illumination counterpart, we on the one hand reach some valuable conclusions for reference, and on the other hand identify specific solutions for low-illumination object detection. Our strategy improves detection performance by 0.5%~2.8% over the basic model on all standard COCO evaluation criteria. Our work can serve as an effective baseline and shed light on future studies of low-illumination detection.
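The abstract above examines whether illumination enhancement helps low-light detection. One of the simplest enhancement operations commonly used as a preprocessing step is gamma correction, sketched below in plain Python; this is an illustrative example only, not the paper's method, and the function name and default gamma value are assumptions.

```python
def gamma_enhance(pixels, gamma=0.5):
    """Brighten low-illumination pixel intensities (0-255) via gamma correction.

    A gamma below 1 lifts dark values toward mid-tones while leaving
    pure black (0) and pure white (255) unchanged; gamma = 1 is identity.
    """
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# Example: a row of dark pixels is lifted toward mid-tones.
dark_row = [10, 40, 80, 160]
bright_row = gamma_enhance(dark_row)
```

Whether such enhancement actually improves downstream detection accuracy is precisely one of the questions the paper investigates empirically.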
Automatic image colorization without manual intervention is an ill-conditioned and inherently ambiguous problem. Most existing methods formulate colorization as a regression problem and learn parametric mappings from grayscale to color through deep neural networks. Because the color-grayscale space is multimodal, many applications do not require recovering the exact ground-truth color, and pair-wise pixel-to-pixel learning-based algorithms therefore lack rationality. Techniques such as color space conversion have been proposed to avoid such direct pixel learning; however, the coloring results after color space conversion are blunt and unnatural. In this paper, we hold the viewpoint that a reasonable solution is to generate a colorized result that looks natural: no matter what color a region is assigned, the colorized region should be semantically and spatially consistent. We propose an effective semantic-aware automatic colorization model via an unpaired cycle-consistent self-supervised network. A low-level monochrome loss, a perceptual identity loss, and a high-level semantic-consistency loss, together with an adversarial loss, are introduced to guide network self-training. We train and test our model on randomly selected subsets of PASCAL VOC 2012. The
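The abstract names four loss terms that jointly guide self-training. A common way to combine such terms is a weighted sum; the sketch below illustrates that pattern only. The function name, the weight parameters, and their default values are assumptions for illustration, not the paper's actual weighting.

```python
def total_colorization_loss(l_mono, l_perc, l_sem, l_adv,
                            w_mono=1.0, w_perc=1.0, w_sem=1.0, w_adv=1.0):
    """Combine the four loss terms named in the abstract into one scalar.

    l_mono: low-level monochrome loss
    l_perc: perceptual identity loss
    l_sem:  high-level semantic-consistency loss
    l_adv:  adversarial loss
    The weights are illustrative placeholders; in practice they would be
    tuned per dataset.
    """
    return (w_mono * l_mono + w_perc * l_perc
            + w_sem * l_sem + w_adv * l_adv)

# Example: with unit weights the total is just the sum of the terms.
loss = total_colorization_loss(0.5, 0.2, 0.1, 0.4)
```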