A stainless-steel turning experiment was carried out using a central composite design from response surface methodology (RSM) combined with the Taguchi design method. The influence of the cutting parameters (cutting speed, feed rate, and depth of cut) on surface roughness was analyzed, and a surface roughness prediction model was established based on a second-order RSM model. The regression coefficients were estimated from the test results by the least-squares method, and the regression equation was fitted. Significance analysis was then conducted to test the goodness of fit, and response surface contour maps and three-dimensional surface maps were constructed. Tool life was analyzed under the optimized parameters. The results show that feed rate has the most significant influence on surface roughness, followed by depth of cut, with cutting speed having the least influence. Optimizing the cutting parameters and analyzing tool life in this way enables efficient and economical cutting of difficult-to-machine materials while ensuring machining quality.
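The least-squares fit of a second-order response surface model can be sketched as follows. This is a minimal illustration, not the paper's actual experiment: the coded factor levels and coefficient values below are hypothetical, and the design matrix simply contains the intercept, linear, two-factor interaction, and quadratic terms of the three cutting parameters.

```python
import numpy as np

def design_matrix(X):
    """Expand three coded factors into second-order RSM terms:
    intercept, linear, two-factor interaction, and quadratic terms."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),           # intercept
        x1, x2, x3,                # linear terms
        x1 * x2, x1 * x3, x2 * x3, # interaction terms
        x1**2, x2**2, x3**2,       # quadratic terms
    ])

def fit_rsm(X, y):
    """Least-squares estimate of the regression coefficients."""
    A = design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Hypothetical coded levels for (cutting speed, feed rate, depth of cut)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 3))
true_beta = np.array([1.0, 0.2, 0.9, 0.4, 0.05, 0.0, 0.1, 0.3, 0.6, 0.2])
y = design_matrix(X) @ true_beta   # noise-free synthetic responses
beta_hat = fit_rsm(X, y)           # recovers true_beta on noise-free data
```

With real measurements the responses would be noisy and `beta_hat` would be an estimate, whose terms could then be screened by the significance analysis described above.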
Objective Hazy weather degrades the visual quality of images and makes high-level vision tasks difficult to perform, so dehazing is an important preprocessing step before such tasks are executed. Traditional dehazing algorithms improve image brightness and contrast or construct hand-crafted priors, such as the color attenuation prior and the dark channel prior, but their results are unstable in complex scenes. Among convolutional neural network methods, encoder-decoder dehazing networks do not consider the difference between the image before and after dehazing, and spatial information is lost during encoding. To overcome these problems, this paper proposes a novel end-to-end two-stream convolutional neural network for single-image dehazing. Method The network consists of a spatial information feature stream and a high-level semantic feature stream. The spatial stream retains the detailed information of the dehazed image, while the semantic stream extracts its multi-scale structural features. A spatial information auxiliary module is designed between the two streams; it uses an attention mechanism to construct a unified representation of the different types of information, so that semantic information assists spatial information in progressively restoring the clear image. A parallel residual twicing module is also proposed, which applies dehazing to the difference information of features at different stages to improve the model's ability to discriminate hazy images. Result Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to quantitatively evaluate the similarity between each algorithm's dehazing results and the original image.
On the HazeRD dataset, the SSIM and PSNR of the proposed method reached 0.852 and 17.557 dB, higher than all comparison algorithms. On the SOTS dataset, the corresponding values are 0.955 and 27.348 dB, which are the second-best results. In experiments on real hazy images, the method also achieves excellent visual restoration.
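The two evaluation metrics used above can be sketched in a few lines. PSNR follows the standard definition; the SSIM shown here is a simplified single-window variant (means and variances over the whole image, no sliding Gaussian window), so it will not exactly match the windowed SSIM typically reported in dehazing papers.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=255.0):
    """Simplified single-window SSIM (no sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Sanity checks on toy 8x8 "images"
black = np.zeros((8, 8))
white = np.full((8, 8), 255.0)
worst_psnr = psnr(black, white)       # mse = 255^2, so PSNR is 0 dB
self_ssim = ssim_global(black, black) # identical images give SSIM = 1
```

For reported benchmark numbers, the windowed SSIM (e.g. an 11x11 Gaussian window, as in the original SSIM paper) is the usual choice.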
Detecting difficult objects and class imbalance are the two main challenges in aerial image object detection. Difficult objects include small objects, objects with large scale variation, and objects with severe background interference. Class imbalance arises both from the differing numbers of objects per class and from the sampling of positive and negative samples. Because of these challenges, conventional object detection models usually cannot detect objects in aerial images effectively, especially when balancing network speed and accuracy. In this paper, the YOLOv3 network structure is improved and an object detection method for aerial visual scenes (AVS-YOLO) is proposed. A densely connected feature pyramid strategy is introduced and a scale-aware attention module is constructed, incorporating both residual dense network blocks and a median-frequency-balancing mechanism, yielding an algorithm with a good trade-off between detection speed and accuracy. To verify its effectiveness, AVS-YOLO and YOLOv3 were both evaluated on the VisDrone-DET2019 and UAVDT datasets. The experimental results show that the AP of AVS-YOLO increases by 6.22% and 5.09% on the VisDrone2019 and UAVDT datasets, respectively, compared with YOLOv3. In addition, the AP of AVS-YOLO is 1.82% higher than that of YOLOv4 on the VisDrone2019 dataset. In terms of detection speed, AVS-YOLO processes 31.8 frames per second on a single Nvidia RTX 2080 Ti GPU, compared with 44.1 frames per second for YOLOv3. Compared with other one-stage object detection networks, AVS-YOLO achieves state-of-the-art performance at a similar computational cost on these datasets.
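The median-frequency-balancing mechanism mentioned above can be sketched as follows. This is the standard median-frequency weighting scheme, not code from the paper: each class weight is the median class frequency divided by that class's frequency, so rare classes receive weights above 1 and frequent classes below 1. The counts used here are made up for illustration.

```python
import numpy as np

def median_frequency_weights(class_counts):
    """Median-frequency balancing: weight_c = median(freq) / freq_c.
    Rare classes get weights > 1, frequent classes get weights < 1."""
    counts = np.asarray(class_counts, dtype=np.float64)
    freq = counts / counts.sum()          # per-class relative frequency
    return np.median(freq) / freq

# Hypothetical instance counts for three classes (e.g. car, truck, bicycle)
weights = median_frequency_weights([100, 10, 1])
# frequencies are [100/111, 10/111, 1/111]; the median is 10/111,
# so the weights come out to [0.1, 1.0, 10.0]
```

In a detection loss, these weights would typically scale each class's classification term to counteract the imbalance between frequent and rare classes.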