With the development of remote sensing algorithms and increased access to satellite data, generating up-to-date, accurate land use/land cover (LULC) maps has become increasingly feasible for evaluating and managing changes in land cover driven by changes in ecosystems and land use. The main objective of our study is to evaluate and compare the performance of Support Vector Machine (SVM), Artificial Neural Network (ANN), Maximum Likelihood Classification (MLC), Minimum Distance (MD), and Mahalanobis (MH) algorithms in generating a LULC map from Sentinel 2 and Landsat 8 satellite data. We also investigate the effect of the penalty parameter on SVM results. Our study uses different kernel functions for the SVM algorithm and different numbers of hidden layers for the ANN algorithm. We generated the training and validation datasets from Google Earth images and GPS data prior to pre-processing the satellite data. In the next phase, we classified the images using the training data and the algorithms. Finally, to evaluate the outcomes, we used the validation data to generate a confusion matrix for the classified images. Our results showed that, with optimal tuning parameters, the SVM classifier yielded the highest overall accuracy (OA) of 94%, outperforming the other methods on both satellite datasets. In addition, for our scenes, Sentinel 2 data were slightly more accurate than Landsat 8 data. The parametric algorithms MD and MLC provided the lowest accuracies, 80.85% and 74.68% for the Sentinel 2 and Landsat 8 data, respectively. Our evaluation of the SVM tuning parameters showed that the linear kernel with a penalty parameter of 150 for Sentinel 2 and of 200 for Landsat 8 yielded the highest accuracies. Further, the ANN classification showed that increasing the number of hidden layers drastically reduces classification accuracy for both datasets, dropping to zero with three hidden layers.
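The evaluation loop described above (train an SVM with a linear kernel and a given penalty parameter, then score it with a confusion matrix on held-out validation samples) can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' pipeline: the synthetic "pixels" and class labels stand in for real Sentinel 2 / Landsat 8 band values and ground-truth LULC classes.

```python
# Minimal sketch of SVM-based classification with a linear kernel and
# penalty parameter C, evaluated via a confusion matrix. The synthetic
# 4-band "reflectance" samples below are placeholders for real imagery.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)

# Fake band values for three LULC classes (e.g. water, vegetation,
# built-up), clustered around different band means.
means = np.array([[0.1, 0.1, 0.1, 0.05],
                  [0.2, 0.3, 0.2, 0.50],
                  [0.4, 0.4, 0.4, 0.30]])
X = np.vstack([rng.normal(m, 0.03, size=(100, 4)) for m in means])
y = np.repeat([0, 1, 2], 100)

# Random split, mimicking separate training and validation datasets.
idx = rng.permutation(len(y))
train, val = idx[:240], idx[240:]

# C=150 is the penalty parameter the abstract reports as optimal for
# Sentinel 2; in practice it would be found by a parameter search.
clf = SVC(kernel="linear", C=150).fit(X[train], y[train])
pred = clf.predict(X[val])

cm = confusion_matrix(y[val], pred)
oa = accuracy_score(y[val], pred)
print(cm)
print(f"overall accuracy: {oa:.2%}")
```

Overall accuracy is the trace of the confusion matrix divided by the number of validation samples, which is the OA figure the abstract reports.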
Machine learning (ML) has been recognized as a feasible and reliable technique for the modeling of multi-parametric datasets. In real applications, there are relationships of varying complexity between sets of inputs and their corresponding outputs. As a result, various models have been developed with different levels of complexity in the input–output relationships. The group method of data handling (GMDH) employs a family of inductive algorithms for computer-based mathematical modeling, built from a combination of quadratic and higher-order neurons arranged in a variable number of layers. In this method, a vector of input features is mapped to the expected response by creating a multistage nonlinear pattern. Usually, each neuron of the GMDH is a quadratic partial function. In this paper, the basic structure of the GMDH technique is adapted by changing the partial functions to enhance its ability to model complexity. To accomplish this, popular ML models that have shown reasonable function approximation performance, such as support vector regression and random forest, are used, and the basic polynomial functions in the GMDH are replaced by these ML models. The feasibility and validity of the ML-based GMDH models for regression are confirmed by computer simulation.
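The basic building block described above can be illustrated with a single self-contained GMDH layer: each candidate neuron is the classic quadratic partial function of two inputs, fitted by least squares, and the best neurons (by validation error) would survive to the next layer. This is an illustrative sketch, not the paper's method; the paper's variant would replace the quadratic fit below with an ML regressor such as support vector regression or random forest.

```python
# Sketch of one GMDH layer: every pair of input features gets a candidate
# neuron, i.e. the quadratic partial function
#   y = a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2,
# fitted by least squares and scored on held-out validation data.
import itertools
import numpy as np

def quad_features(x1, x2):
    """Design matrix of the quadratic partial function."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x2**2, x1 * x2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    """Fit candidate neurons on all feature pairs; keep the `keep` best."""
    scored = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(
            quad_features(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
        pred_va = quad_features(X_va[:, i], X_va[:, j]) @ coef
        mse = float(np.mean((pred_va - y_va) ** 2))
        scored.append((mse, i, j, coef))
    scored.sort(key=lambda t: t[0])
    return scored[:keep]

# Toy target with a genuine pairwise interaction, so one neuron can
# capture part of the structure.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 \
    + rng.normal(scale=0.05, size=200)

best = gmdh_layer(X[:150], y[:150], X[150:], y[150:])
for mse, i, j, _ in best:
    print(f"inputs ({i}, {j}): validation MSE = {mse:.4f}")
```

In a full GMDH, the surviving neuron outputs become the inputs of the next layer, and layers are added until validation error stops improving; swapping `np.linalg.lstsq` for an SVR or random forest fit is the structural change the paper proposes.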
The production possibility set (PPS) is defined as the set of all inputs and outputs of a system in which the inputs can produce the outputs. In data envelopment analysis (DEA), identifying the strong defining hyperplanes of the empirical PPS is important because they can be used to determine the rates of change of outputs with respect to changes in inputs. Efficient hyperplanes also determine the nature of returns to scale and are important for defining a suitable pattern for inefficient decision-making units (DMUs). Fuzzy data are one kind of data that express uncertainty in inputs and outputs. We therefore apply an algorithm that transforms the fuzzy models into linear models using the PPS. In this paper, we deal with the problem of finding the strong defining hyperplanes of the PPS with fuzzy data. A numerical example demonstrates the validity of our method.
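As background for the crisp case, the DMUs that lie on defining hyperplanes of the empirical PPS are exactly those with CCR efficiency 1. The sketch below solves the classical input-oriented CCR multiplier model with SciPy's `linprog` on toy data; it does not reproduce the paper's fuzzy algorithm, whose contribution is extending this kind of analysis to fuzzy inputs and outputs.

```python
# Crisp CCR multiplier model (classical DEA), one input and one output:
#   max u*y_o   s.t.  v*x_o = 1,  u*y_j - v*x_j <= 0 (all j),  u, v >= 0.
# DMUs with optimal efficiency 1 lie on defining hyperplanes of the PPS.
import numpy as np
from scipy.optimize import linprog

X = np.array([2.0, 4.0, 8.0])   # inputs of three toy DMUs
Y = np.array([2.0, 4.0, 4.0])   # outputs of three toy DMUs

def ccr_efficiency(o):
    c = np.array([-Y[o], 0.0])            # linprog minimizes, so -u*y_o
    A_ub = np.column_stack([Y, -X])       # u*y_j - v*x_j <= 0 for all j
    b_ub = np.zeros(len(X))
    A_eq = np.array([[0.0, X[o]]])        # normalization v*x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)])
    return -res.fun

effs = [ccr_efficiency(o) for o in range(3)]
print([round(e, 3) for e in effs])   # DMUs 0 and 1 efficient, DMU 2 not
```

Here DMUs 0 and 1 attain efficiency 1 (they support a defining hyperplane of the PPS), while DMU 2 scores 0.5; with fuzzy inputs and outputs, each of these coefficients would itself be a fuzzy number, which is what the paper's transformation into linear models handles.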