In this paper, a compensation control model for the secondary cooling process of billet continuous casting of quality steel is presented. The effects of various parameters, such as steel superheat, casting speed, cooling water temperature, and the chemical composition of the steel, on spray control were considered. The parameters of the control model were determined in conjunction with a two-dimensional heat transfer equation, which was solved by the finite-difference method. The effects of steel superheat and cooling water temperature on surface temperature, solidification structure, and the solidification end point were discussed. The results indicate that steel superheat significantly affects the solidification structure and the solidification end point but has little effect on surface temperature. Moreover, the secondary cooling water temperature affects the surface temperature and the solidification end point but has little effect on the solidification structure. By applying the compensation control model, the surface temperature and the solidification end point can be kept stable when the steel superheat and cooling water temperature vary. The model has been validated by industrial measurements, and the results show that the simulations are in very good agreement with the actual casting conditions.
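To make the modelling step concrete, below is a minimal sketch of an explicit finite-difference solution of the 2-D heat conduction equation over a billet cross-section, the kind of calculation the abstract refers to. All material data, grid sizes, and the simplified spray boundary treatment are illustrative assumptions (latent heat of solidification is ignored), not values or methods taken from the paper.

```python
import numpy as np

# Assumed, illustrative parameters -- not from the paper.
alpha = 5.5e-6                      # thermal diffusivity of steel, m^2/s
k = 30.0                            # thermal conductivity, W/(m*K)
h = 800.0                           # spray heat-transfer coefficient, W/(m^2*K)
T_water = 30.0                      # secondary-cooling water temperature, deg C
nx = ny = 50                        # grid nodes over the cross-section
dx = dy = 0.003                     # grid spacing, m (150 mm square billet)
dt = 0.2 * dx**2 / (4.0 * alpha)    # time step well inside the explicit stability limit

T = np.full((nx, ny), 1550.0)       # initial field: liquid steel at pouring temperature

def step(T):
    """Advance the temperature field by one explicit finite-difference time step."""
    Tn = T.copy()
    # interior nodes: central differences in x and y
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * dt * (
        (T[2:, 1:-1] - 2.0 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2
        + (T[1:-1, 2:] - 2.0 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2
    )
    # simplified half-cell energy balance at the sprayed surfaces
    rho_cp = k / alpha              # volumetric heat capacity, J/(m^3*K)
    for edge, inner in ((0, 1), (-1, -2)):
        Tn[edge, :] = T[edge, :] + dt * (k * (T[inner, :] - T[edge, :]) / dx
                                         - h * (T[edge, :] - T_water)) / (rho_cp * dx / 2.0)
        Tn[:, edge] = T[:, edge] + dt * (k * (T[:, inner] - T[:, edge]) / dx
                                         - h * (T[:, edge] - T_water)) / (rho_cp * dx / 2.0)
    return Tn

for _ in range(2000):
    T = step(T)
print("surface temperature:", round(float(T[0, ny // 2]), 1),
      "centre temperature:", round(float(T[nx // 2, ny // 2]), 1))
```

In a control application, a solver of this kind would be re-run (or parameterized) as superheat and water temperature change, and the spray heat-transfer coefficient adjusted so that the computed surface temperature and solidification end point stay on target.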
Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion-effect pooling. In the first stage, the distortion descriptors or measurements are expected to be effective representations of the variations perceived by the human visual system, while the second stage should capture the relationship between the quality descriptors and the perceived visual quality. However, most existing quality descriptors (e.g., luminance, contrast, and gradient) are not well aligned with human perception, and the effect pooling is often done in ad hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations, making use of the parts-based representation that NMF provides. In addition, a recent machine learning technique, the extreme learning machine (ELM), is employed to address the limitations of existing pooling techniques. Compared with neural networks and support vector regression, the ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric offers better performance and lower computational complexity than the relevant state-of-the-art approaches.
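The following is a hedged sketch of the two-stage pipeline the abstract describes: a parts-based distortion description obtained with NMF, followed by pooling with a minimal extreme learning machine regressor. The patch size, feature definition, function names, and hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

def nmf_degradation_features(ref_patches, dist_patches, n_components=8):
    """Encode reference and distorted patches with the same NMF basis and
    return the mean absolute difference of their encodings per component."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    h_ref = model.fit_transform(ref_patches)     # basis learned on the reference
    h_dist = model.transform(dist_patches)       # distorted patches in the same basis
    return np.abs(h_ref - h_dist).mean(axis=0)

class ELMRegressor:
    """Single-hidden-layer ELM: random input weights, analytic output weights."""
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden
    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)         # random hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y        # least-squares output weights
        return self
    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage on synthetic data (stand-ins for real image patches and MOS labels).
ref = rng.random((200, 64))                                   # 200 vectorized 8x8 reference patches
dist = np.clip(ref + 0.1 * rng.normal(size=ref.shape), 0, 1)  # mildly distorted copies
features = nmf_degradation_features(ref, dist)
print("per-component degradation:", features.round(3))

X = rng.random((50, 8))                          # degradation features for 50 training images
y = rng.random(50)                               # their subjective quality scores (MOS)
elm = ELMRegressor(n_hidden=30).fit(X, y)
print("predicted quality:", elm.predict(X[:3]).round(3))
```

The appeal of the ELM in this role is visible in the code: the hidden layer is random and fixed, so training reduces to one pseudo-inverse, which is why it is typically much faster to fit than an iteratively trained neural network or support vector regression.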
Human vision is often adversely affected by complex environmental factors, especially in night vision scenarios. Infrared cameras are therefore often leveraged to enhance visibility by detecting infrared radiation in the surrounding environment, but infrared videos are undesirable because they lack detailed semantic information. In such a case, an effective video-to-video translation method from the infrared domain to the visible-light domain is strongly needed, one that overcomes the intrinsic gap between the infrared and visible fields. To address this challenging problem, we propose an infrared-to-visible (I2V) video translation method, I2V-GAN, which generates fine-grained and spatially-temporally consistent visible-light videos from unpaired infrared videos. Technically, our model capitalizes on three types of constraints: 1) an adversarial constraint to generate synthetic frames that are similar to real ones; 2) cyclic consistency with an introduced perceptual loss for effective content conversion as well as style preservation; and 3) similarity constraints across and within domains to enhance content and motion consistency in both the spatial and temporal spaces at a fine-grained level. Furthermore, the currently publicly available infrared and visible-light datasets are mainly intended for object detection or tracking, and some consist of discontinuous images that are unsuitable for video tasks. We therefore provide a new dataset for infrared-to-visible video translation, named IRVI. It contains 12 consecutive video clips of vehicle and monitoring scenes, and both the infrared and visible-light videos can be split into 24,352 frames. Comprehensive experiments on IRVI validate that I2V-GAN is superior to the compared state-of-the-art methods in translating infrared videos to visible-light videos with higher fluency and finer semantic details. Moreover, additional experimental results on the flower-to-flower dataset indicate that I2V-GAN is also applicable to other video translation tasks. The code and the IRVI dataset are available at https://github.com/BIT-DA/I2V-GAN.
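As a rough illustration of how the three constraint families listed above could be combined into a single generator objective, here is a hedged PyTorch sketch. The network interfaces, loss weights, least-squares adversarial form, and the temporal-similarity term are simplified assumptions for illustration only; they are not the authors' exact I2V-GAN losses, which are defined in the paper and repository.

```python
import torch
import torch.nn.functional as F

# Clips are assumed to be tensors of shape (batch, time, channels, H, W);
# G_i2v, G_v2i, D_vis, and feat_net are assumed, user-supplied networks.
def generator_objective(G_i2v, G_v2i, D_vis, feat_net, ir_clip,
                        lam_adv=1.0, lam_cyc=10.0, lam_perc=1.0, lam_sim=1.0):
    fake_vis = G_i2v(ir_clip)                 # infrared -> visible translation
    rec_ir = G_v2i(fake_vis)                  # visible -> infrared reconstruction

    # 1) adversarial constraint: translated frames should fool the
    #    visible-domain discriminator (least-squares GAN form assumed here)
    pred_fake = D_vis(fake_vis)
    loss_adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    # 2) cyclic consistency plus a perceptual term on deep features
    loss_cyc = F.l1_loss(rec_ir, ir_clip)
    loss_perc = F.l1_loss(feat_net(rec_ir), feat_net(ir_clip))

    # 3) a simple cross-domain motion-similarity term: consecutive generated
    #    frames should change the way consecutive input frames do
    motion_fake = fake_vis[:, 1:].mean(dim=2) - fake_vis[:, :-1].mean(dim=2)
    motion_ir = ir_clip[:, 1:].mean(dim=2) - ir_clip[:, :-1].mean(dim=2)
    loss_sim = F.l1_loss(motion_fake, motion_ir)

    return (lam_adv * loss_adv + lam_cyc * loss_cyc
            + lam_perc * loss_perc + lam_sim * loss_sim)
```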