Environmental air quality affects people's lives, and obtaining real-time, accurate air quality information has profound guiding significance for social activities. At present, air quality is mainly measured by installing detectors at fixed monitoring points in cities and performing timed sampling and analysis, an approach that is easily restricted by time and space. Existing deep-learning-based air quality measurement methods mostly train a single convolutional neural network on the whole image, ignoring the differences between different parts of the image. In this paper, we propose an air quality measurement method based on double-channel convolutional neural network ensemble learning to address feature extraction for different parts of environmental images. Our method comprises two main components: ensemble learning with a double-channel convolutional neural network and self-learning weighted feature fusion. We construct a double-channel convolutional neural network in which each channel is trained on a different part of the environmental image for feature extraction. We also propose a feature-weight self-learning method that weights and concatenates the extracted feature vectors and uses the fused vectors to measure air quality. Our method can be applied to two tasks: air quality grade measurement and air quality index (AQI) measurement. Moreover, we build an environmental image dataset collected at random times and locations. Experiments show that our method achieves nearly 82% accuracy and a small mean absolute error (MAE) on our test dataset. Comparative experiments further show that the proposed method considerably outperforms single-channel convolutional neural network baselines for air quality measurement.
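To make the fusion step concrete, the following PyTorch-style sketch shows one way to realize a double-channel network with self-learned fusion weights: each image is split into two regions, each region is processed by its own branch, and the branch features are scaled by learnable weights before being concatenated and fed to grade and AQI heads. The region split (upper/lower halves), the branch layers, and the head sizes are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a double-channel CNN with self-learned fusion weights.
# The upper/lower region split, branch layers, and head sizes are assumptions.
import torch
import torch.nn as nn

class TwoChannelAQNet(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        def make_branch():  # one lightweight CNN per image region
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.branch_a = make_branch()  # e.g. upper (sky) region
        self.branch_b = make_branch()  # e.g. lower (ground) region
        # One learnable fusion weight per branch, normalized with softmax.
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        self.grade_head = nn.Linear(128, num_classes)  # air quality grade
        self.aqi_head = nn.Linear(128, 1)              # AQI regression

    def forward(self, region_a, region_b):
        fa, fb = self.branch_a(region_a), self.branch_b(region_b)
        w = torch.softmax(self.fusion_logits, dim=0)      # self-learned weights
        fused = torch.cat([w[0] * fa, w[1] * fb], dim=1)  # weighted concatenation
        return self.grade_head(fused), self.aqi_head(fused)

# Example: split a 256x256 image into upper and lower halves before the forward pass.
img = torch.randn(4, 3, 256, 256)
grades, aqi = TwoChannelAQNet()(img[:, :, :128, :], img[:, :, 128:, :])
```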
As an important part of Chinese painting and calligraphy, seals not only have high artistic value but also carry a great deal of information about the artwork itself. In the digital age, we would like not only to represent seals in digital format but also to use image processing techniques to better understand them. With the development of deep learning, convolutional neural networks have been widely used for feature learning, object localization, and classification. Based on deep learning, this paper proposes a highly accurate Chinese seal recognition system (CSRS). With CSRS, a user simply inputs a single seal image, and the system automatically recognizes the seal and reports the relevant information in real time. CSRS mainly contains three units: 1) a new Siamese network with multi-task learning (Siamese-MTL), which effectively addresses the similarity measurement problem and improves the generalization of the model; 2) a new online data generation algorithm, automatic background generation (ABG), which generates numerous seal images with different backgrounds for effective training; and 3) a new training method for the Siamese network based on a central constraint. To validate the effectiveness of the proposed method, we established two large-scale image databases containing 15,000 Chinese seal images and 1,700 background images, respectively. We evaluate our method against variant methods on these datasets and achieve the highest performance. Extensive experimental results indicate that the proposed method is effective and has great potential for practical application in Chinese seal recognition.
INDEX TERMS: Multi-task learning, Siamese network, Chinese seal recognition.
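To illustrate how the Siamese-MTL idea and the central constraint could fit together, the sketch below pairs a shared-weight encoder with a contrastive similarity loss, an auxiliary classification head (the multi-task component), and a center-loss-style term that pulls embeddings toward per-class centers. The encoder layers, grayscale input, loss weights, and the reading of the central constraint as a center loss are assumptions for illustration, not the published Siamese-MTL design.

```python
# Hedged sketch: Siamese network + auxiliary classification + center-style constraint.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseMTL(nn.Module):
    def __init__(self, num_classes=100, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(  # shared weights for both branches
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)  # auxiliary task head
        # Learnable per-class centers for the central constraint (center-loss style).
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        return e1, e2, self.classifier(e1), self.classifier(e2)

def mtl_loss(model, e1, e2, logits1, logits2, same, labels1, labels2, margin=1.0):
    d = F.pairwise_distance(e1, e2)
    # Contrastive term: pull matching seal pairs together, push mismatches apart.
    contrastive = (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()
    # Auxiliary classification term (the multi-task component).
    ce = F.cross_entropy(logits1, labels1) + F.cross_entropy(logits2, labels2)
    # Central constraint: keep each embedding near its class center.
    center = (e1 - model.centers[labels1]).pow(2).sum(1).mean() + \
             (e2 - model.centers[labels2]).pow(2).sum(1).mean()
    return contrastive + ce + 0.01 * center  # the loss weights are assumed
```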
Recently, air quality analysis based on image sensing devices has attracted much attention. Most smoke images in real scenes exhibit challenging variances, which are difficult for existing object detection methods to handle. To keep factory smoke under efficient and universal social supervision in real time, this paper proposes an efficient smoke detection algorithm that runs on mobile platforms and is based on image analysis techniques. We introduce the two-stage smoke detection (TSSD) algorithm, built on a lightweight detection framework, in which prior knowledge and contextual information are modeled in a relation-guided module to reduce the smoke search space, thereby significantly improving the performance of the single-stage method. Experimental results show that the proposed TSSD algorithm robustly improves the detection accuracy of the single-stage method and that the model is compatible with different input image resolutions. Compared with various state-of-the-art detection methods, the accuracy $AP_{mean}$ of the proposed TSSD model reaches 59.24%, even surpassing the Faster R-CNN detection model. In addition, the detection speed of the proposed model reaches 50 ms per image (20 FPS), meeting real-time requirements. This knowledge-based system offers high stability, high accuracy, and fast detection speed, and it can be widely used in scenes with smoke detection requirements, such as on mobile terminal devices, providing great potential for practical environmental applications.
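As a rough illustration of the two-stage idea, the sketch below first scores regions with a contextual prior model and then runs a lightweight detector only on the plausible regions. The `context_model`, `detector`, and `crop_fn` callables and the 0.5 threshold are hypothetical placeholders; the published relation-guided module is not reproduced here.

```python
# Hedged sketch of a two-stage smoke detection pipeline:
# stage 1 narrows the search space, stage 2 runs a lightweight detector.
import torch

def coarse_candidate_regions(image, context_model, threshold=0.5):
    """Stage 1: use prior/contextual cues to keep only regions where smoke is plausible."""
    scores = context_model(image)                        # per-region plausibility scores
    return (scores > threshold).nonzero(as_tuple=False)  # indices of retained regions

def two_stage_detect(image, context_model, detector, crop_fn):
    """Stage 2: run the lightweight detector only on the retained regions."""
    detections = []
    for region in coarse_candidate_regions(image, context_model):
        crop = crop_fn(image, region)        # cut out the candidate region
        detections.extend(detector(crop))    # boxes/scores from the light detector
    return detections
```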