In recent years, intelligent driving navigation and security monitoring have made considerable progress with the help of deep Convolutional Neural Networks (CNNs). As one of the state-of-the-art perception approaches, semantic segmentation unifies distinct detection tasks that are widely desired by both autonomous driving and security monitoring. Currently, semantic segmentation shows remarkable efficiency and reliability in standard scenarios such as daytime scenes with favorable illumination. However, when faced with adverse conditions such as nighttime, its accuracy drops significantly. One of the main causes is the lack of sufficiently annotated segmentation datasets for nighttime scenes. In this paper, we propose a framework based on Generative Adversarial Networks (GANs) to alleviate the accuracy decline when semantic segmentation is applied in adverse conditions. To bridge the daytime and nighttime image domains, we make the key observation that, compared to datasets captured in adverse conditions, there is a considerable amount of segmentation data for standard conditions, such as BDD and our collected ZJU dataset. Our GAN-based nighttime semantic segmentation framework includes two methods. In the first method, GANs are used to translate nighttime images to the daytime domain, so that semantic segmentation can be performed using robust models already trained on daytime datasets. In the second method, we use GANs to translate varying proportions of the daytime images in the dataset to nighttime while preserving their labels. In this way, synthetic nighttime segmentation datasets can be generated to yield models that operate robustly at night. In our experiments, the latter method significantly boosts nighttime performance, as evidenced by quantitative results measured with Intersection over Union (IoU) and Pixel Accuracy (Acc). We show that performance varies with the proportion of synthetic nighttime images in the dataset, with a sweet spot corresponding to the most robust performance across day and night. The proposed framework not only contributes to the optimization of visual perception in intelligent vehicles, but can also be applied to diverse navigational assistance systems.
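To illustrate the second method described above, here is a minimal sketch of how a fixed proportion of labeled daytime images could be translated to synthetic night images while their segmentation labels are reused unchanged. It assumes a pretrained CycleGAN-style day-to-night generator; the checkpoint name `day2night_G.pth`, the folder paths, and the 0.3 ratio are hypothetical placeholders, not values from the paper.

```python
# Sketch: translate a fraction of labeled daytime images to synthetic nighttime
# with a day->night GAN generator, keeping the original labels, so the mixed
# dataset can train a segmentation model that stays robust at night.
# Assumptions (not from the paper): a full generator module saved with torch.save,
# and the paths/ratio below are illustrative only.
import random
from pathlib import Path

import torch
from torchvision import transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
gen = torch.load("day2night_G.pth", map_location=device)  # hypothetical pretrained generator
gen.eval()

to_tensor = transforms.Compose([
    transforms.Resize((512, 1024)),
    transforms.ToTensor(),
    transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # CycleGAN-style inputs in [-1, 1]
])
to_image = transforms.ToPILImage()

def build_mixed_dataset(img_dir, out_dir, night_ratio=0.3):
    """Translate `night_ratio` of the daytime images to synthetic night; labels stay paired by filename."""
    paths = sorted(Path(img_dir).glob("*.png"))
    chosen = set(random.sample(paths, int(night_ratio * len(paths))))
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for p in paths:
        img = Image.open(p).convert("RGB")
        if p in chosen:
            with torch.no_grad():
                x = to_tensor(img).unsqueeze(0).to(device)
                fake_night = gen(x)[0].cpu() * 0.5 + 0.5  # map back to [0, 1]
            img = to_image(fake_night.clamp(0, 1))
        img.save(Path(out_dir) / p.name)  # original segmentation labels are reused as-is

build_mixed_dataset("bdd/daytime/images", "bdd/mixed/images", night_ratio=0.3)
```

Sweeping `night_ratio` over several values is what exposes the sweet spot mentioned above, since too few synthetic night images leave the model day-biased while too many degrade daytime accuracy.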
The white-rot fungus Cerrena unicolor BBP6 produced up to 243.4 U mL⁻¹ of laccase. A novel laccase isoform, LacA, was purified; LacA is a homodimer with an apparent molecular mass of 55 kDa and an isoelectric point of 4.7. Its optimal pH was 2.5, 4.0, and 5.5 when 2,2'-azinobis-(3-ethylbenzthiazoline-6-sulphonate) (ABTS), guaiacol, and 2,6-dimethoxyphenol (2,6-DMP) were used as the substrates, respectively. The optimal temperature was 60°C for ABTS and 80°C for both guaiacol and 2,6-DMP. LacA retained 82–92% of its activity at pH above 4 and 42–92% of its activity at or below 50°C. LacA was completely inhibited by 0.1 mM L-cysteine, 1 mM dithiothreitol, and 10 mM of the metal ions Ca²⁺, Mg²⁺, and Co²⁺. LacA had good affinity for ABTS, with a Km of 49.1 μM and a kcat of 3078.9 s⁻¹. It decolorized synthetic dyes by 32.3–87.1%. In the presence of 1-hydroxybenzotriazole (HBT), LacA decolorized recalcitrant dyes such as Safranine (97.1%), Methylene Blue (98.9%), and Azure Blue (96.6%), as well as simulated textile effluent (84.6%). With supplemented manganese peroxidase (MnP), Mn²⁺, and HBT, the purified LacA and the BBP6 fermentation broth showed great potential in denim bleaching, with up to a 5-fold increase in reflectance values.
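For context, the catalytic efficiency implied by the reported kinetic constants can be worked out under standard single-substrate Michaelis-Menten assumptions; the derived efficiency value below is our own calculation and is not stated in the abstract.

```latex
% Michaelis-Menten rate law and the catalytic efficiency implied by the reported
% Km and kcat for ABTS (derived here under standard kinetics assumptions).
\[
  v = \frac{k_{\mathrm{cat}}\,[E]_0\,[S]}{K_m + [S]}, \qquad
  \frac{k_{\mathrm{cat}}}{K_m} = \frac{3078.9\ \mathrm{s^{-1}}}{49.1\ \mu\mathrm{M}}
  \approx 6.3 \times 10^{7}\ \mathrm{M^{-1}\,s^{-1}}
\]
```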