Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to obtain plausible dehazing solutions. The key to haze removal is estimating a medium transmission map for the input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet for medium transmission estimation. DehazeNet takes a hazy image as input and outputs its medium transmission map, which is subsequently used to recover a haze-free image via the atmospheric scattering model. DehazeNet adopts a convolutional neural network (CNN)-based deep architecture whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, layers of Maxout units are used for feature extraction and can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called the bilateral rectified linear unit (BReLU), which improves the quality of the recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods while remaining efficient and easy to use.
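As a point of reference for the abstract above, the following is a minimal sketch of how a predicted transmission map can be used to recover a haze-free image via the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)). The function names, the transmission floor, and the BReLU bounds are illustrative assumptions, not DehazeNet's actual implementation.

```python
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral ReLU: clip activations to [t_min, t_max] (bounds assumed)."""
    return np.clip(x, t_min, t_max)

def recover_haze_free(hazy, transmission, atmospheric_light, t_floor=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy:              H x W x 3 float image in [0, 1]
    transmission:      H x W medium transmission map (e.g., network output)
    atmospheric_light: length-3 global atmospheric light estimate
    t_floor:           lower bound on t to avoid amplifying noise in dense haze
    """
    t = np.clip(transmission, t_floor, 1.0)[..., np.newaxis]
    A = np.asarray(atmospheric_light).reshape(1, 1, 3)
    # J(x) = (I(x) - A) / t(x) + A
    restored = (hazy - A) / t + A
    return np.clip(restored, 0.0, 1.0)
```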
Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, the complexity of the process increases and the abstraction and performance are limited. As such, a segmented SAE (S-SAE) is proposed by partitioning the original features into smaller data segments, which are separately processed by different smaller SAEs. This results in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.
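A minimal sketch of the segmentation idea is given below, assuming a PyTorch implementation: the spectral feature vector is split into contiguous segments, each compressed by its own small encoder, and the per-segment codes are concatenated to form the reduced representation. The class name and layer sizes are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class SegmentedSAE(nn.Module):
    """Sketch of a segmented stacked autoencoder (encoders only)."""

    def __init__(self, n_bands=200, n_segments=4, code_dim=10):
        super().__init__()
        self.n_segments = n_segments
        seg_len = n_bands // n_segments
        # One small encoder per spectral segment (sizes are illustrative).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(seg_len, 40), nn.Sigmoid(),
                          nn.Linear(40, code_dim), nn.Sigmoid())
            for _ in range(n_segments)
        )

    def forward(self, x):
        # x: (batch, n_bands) hyperspectral pixel vectors
        segments = torch.chunk(x, self.n_segments, dim=1)
        codes = [enc(seg) for enc, seg in zip(self.encoders, segments)]
        return torch.cat(codes, dim=1)  # (batch, n_segments * code_dim)
```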
Electroencephalogram (EEG) signal-based emotion recognition has attracted wide interest in recent years and has been broadly adopted in medical, affective computing, and other relevant fields. However, the majority of research in this field focuses on classification accuracy whilst neglecting the interpretability of emotion progression. In this paper, we propose a new interpretable emotion recognition approach based on an activation mechanism, using machine learning and EEG signals. The paper innovatively proposes an emotional activation curve to demonstrate the activation process of emotions. The algorithm first extracts features from EEG signals and classifies emotions using machine learning techniques, in which different parts of a trial are used to train the proposed model and to assess their impact on emotion recognition results. Second, novel activation curves of emotions are constructed from the classification results, and two emotion coefficients are computed, i.e., the correlation coefficient and the entropy coefficient. The activation curve can not only classify emotions but also reveal, to a certain extent, the emotional activation mechanism. Finally, a weighting coefficient is obtained from the two coefficients to improve the accuracy of emotion recognition. To validate the proposed method, experiments were carried out on the DEAP and SEED datasets. The results support the view that emotions are progressively activated throughout the experiment, and that the weighting coefficient based on the correlation coefficient and the entropy coefficient can effectively improve EEG-based emotion recognition accuracy.
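The abstract does not specify how the two coefficients are combined; the sketch below shows one plausible way to derive a single weighting coefficient from a per-segment activation curve and a predicted class distribution. The function name, the mixing factor alpha, and the normalization of the entropy term are all assumptions.

```python
import numpy as np
from scipy.stats import pearsonr, entropy

def emotion_weight(activation_curve, label_probs, alpha=0.5):
    """Combine a correlation coefficient and an entropy coefficient.

    activation_curve: per-segment classification accuracy over the trial
    label_probs:      predicted class-probability distribution for the trial
    alpha:            assumed mixing factor (not specified in the abstract)
    """
    time = np.arange(len(activation_curve))
    corr, _ = pearsonr(time, activation_curve)          # activation trend over the trial
    ent = entropy(label_probs, base=2)                  # uncertainty of the prediction
    ent_coeff = 1.0 - ent / np.log2(len(label_probs))   # high when the classifier is confident
    return alpha * corr + (1.0 - alpha) * ent_coeff
```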
Recognizing complex human actions is very challenging, since training a robust learning model requires a large amount of labeled data, which is difficult to acquire. Considering that each complex action is composed of a sequence of simple actions that can be easily obtained from existing data sets, this paper presents a simple-to-complex action transfer learning model (SCA-TLM) for complex human action recognition. SCA-TLM improves the performance of complex action recognition by leveraging abundant labeled simple actions. In particular, it optimizes the weight parameters so that the representations of complex actions can be reconstructed from those of simple actions. The optimal reconstruction coefficients are obtained by minimizing the objective function, and the target weight parameters are then represented as a combination of the source weight parameters. The main advantage of the proposed SCA-TLM over existing approaches is that we exploit simple actions to recognize complex actions instead of using only complex actions as training samples. To validate the proposed SCA-TLM, we conduct extensive experiments on two well-known complex action data sets: 1) the Olympic Sports data set and 2) the UCF50 data set. The results show the effectiveness of the proposed SCA-TLM for complex action recognition.
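As a hedged illustration of representing target weight parameters as a combination of source weight parameters, the sketch below solves a ridge-regularized least-squares reconstruction; the actual SCA-TLM objective and regularization may differ.

```python
import numpy as np

def transfer_complex_weights(simple_weights, complex_weights, lam=0.1):
    """Express one complex-action weight vector in terms of simple-action weights.

    simple_weights:  (d, m) matrix whose columns are classifier weights
                     learned on m simple-action classes (source)
    complex_weights: (d,) initial weight vector for one complex action (target)
    lam:             assumed ridge penalty; the paper's objective may differ

    Returns the reconstruction coefficients and the transferred weight vector.
    """
    S, w = simple_weights, complex_weights
    # argmin_c ||S c - w||^2 + lam * ||c||^2  (closed-form ridge solution)
    c = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ w)
    return c, S @ c
```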