Atmospheric fine particles (PM2.5) are harmful to both the environment and human health. In recent years, remote sensing technology and machine learning models have been used to monitor PM2.5 concentrations, and partial dependence plots (PDPs) have been used to explore the meteorological mechanisms linking predictor variables to PM2.5 concentration inside these "black box" models. However, the original PDP has two key shortcomings: (1) it computes only the marginal effect of one or more features on the model's predicted outcome, so local effects may be hidden; and (2) it assumes the features for which the partial dependence is computed are uncorrelated with the other features; otherwise, the estimated feature effect is strongly biased. In this study, these shortcomings were analyzed. The results show that the original PDP can yield a contradictory correlation between temperature and PM2.5 concentration, and that it cannot adequately display the spatiotemporal heterogeneity of the PM2.5-AOD relationship. These drawbacks make the original PDP unsuitable for exploring feature effects over large areas. To resolve these issues, a multi-way PDP is recommended, which can characterize how PM2.5 concentrations change with the temporal and spatial variation of major meteorological factors in China.
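To make the one-way versus multi-way distinction concrete, below is a minimal sketch using scikit-learn's `partial_dependence`. The `RandomForestRegressor`, the synthetic data, and the feature names (`AOD`, `temperature`, `humidity`) are illustrative assumptions for this sketch, not the study's actual model or dataset.

```python
# Minimal sketch: one-way vs. two-way (multi-way) partial dependence.
# The feature names and synthetic data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "AOD": rng.uniform(0.0, 2.0, n),            # aerosol optical depth
    "temperature": rng.uniform(-10.0, 35.0, n),
    "humidity": rng.uniform(10.0, 100.0, n),
})
# Synthetic target: the AOD effect switches on above 15 degrees, a local
# interaction that a one-way PDP averages away.
y = (30.0 * X["AOD"] * (X["temperature"] > 15.0)
     + 0.2 * X["humidity"] + rng.normal(0.0, 2.0, n))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# One-way PDP: marginal effect of AOD alone (the interaction is hidden).
pd_one = partial_dependence(model, X, features=["AOD"])

# Two-way PDP: joint effect of AOD and temperature, which exposes the
# regime-dependent PM2.5-AOD relationship.
pd_two = partial_dependence(model, X, features=["AOD", "temperature"])
print(pd_one["average"].shape, pd_two["average"].shape)
```

Here the two-way grid exposes the temperature-dependent AOD effect that the one-way average smooths out, which is the kind of heterogeneity the abstract argues the original PDP conceals.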
In recent years, deep neural networks (DNNs) have been widely used in many fields. Because a deep network has numerous parameters, much effort goes into training it, and complex optimizers with many hyperparameters have been employed to accelerate training and improve generalization. Tuning these hyperparameters is often a trial-and-error process. In this paper, we visually analyze the different roles that training samples play in a parameter update and find that each training sample contributes differently to the update. We then present a variant of batch stochastic gradient descent for neural networks that use ReLU activations in the hidden layers, called adaptive stochastic gradient descent (aSGD). Unlike existing methods, it computes an adaptive batch size for each parameter in the model and uses the mean effective gradient as the actual gradient for parameter updates. Experiments on MNIST show that aSGD speeds up DNN optimization and achieves higher accuracy without extra hyperparameters. Experiments on synthetic datasets show that it can effectively identify redundant nodes, which is helpful for model compression.
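The abstract does not spell out the update rule, but the stated idea can be sketched. Below is a minimal NumPy sketch for a single ReLU layer, under the assumption that a sample is "effective" for a parameter when its per-sample gradient for that parameter is nonzero (e.g., not zeroed out by a dead ReLU); the paper's exact rule may differ, and the function `asgd_step` and all shapes are hypothetical.

```python
# Sketch of the aSGD idea: for each parameter, average the gradient only
# over the samples that contribute a nonzero gradient to it (its adaptive
# "effective" batch), instead of over the whole batch.
import numpy as np

rng = np.random.default_rng(0)

def asgd_step(W, X, grad_upstream, lr=0.1):
    """One update of a ReLU layer y = relu(X @ W).

    X:             (batch, d_in) inputs
    grad_upstream: (batch, d_out) gradient of the loss w.r.t. y
    """
    pre = X @ W                               # (batch, d_out) pre-activations
    mask = (pre > 0).astype(X.dtype)          # dead ReLUs pass zero gradient
    # Per-sample gradient for each weight W[i, j]:
    # g[n, i, j] = X[n, i] * mask[n, j] * grad_upstream[n, j]
    g = X[:, :, None] * (mask * grad_upstream)[:, None, :]
    contributes = (g != 0)                    # which samples touch W[i, j]
    n_eff = contributes.sum(axis=0)           # adaptive batch size per weight
    # Mean *effective* gradient: sum over samples / effective batch size,
    # falling back to 0 where no sample contributes.
    mean_eff_grad = g.sum(axis=0) / np.maximum(n_eff, 1)
    return W - lr * mean_eff_grad

# Toy usage on random data.
W = rng.normal(size=(4, 3))
X = rng.normal(size=(32, 4))
grad_up = rng.normal(size=(32, 3))
W = asgd_step(W, X, grad_up)
print(W.shape)  # (4, 3)
```

Dividing by the per-parameter effective count rather than the full batch size keeps the update magnitude from being diluted for parameters that only a few samples activate, which matches the abstract's claim that samples contribute unequally to a parameter update.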