Weight assignment to the decision parameters is a crucial factor in the decision-making process. Any imprecision in assigning weights to the decision attributes may render the whole decision-making process useless and ultimately mislead decision-makers away from an optimal solution. Therefore, the attribute weight allocation process should be flawless and rational; it should not amount to assigning random values to the attributes without a proper analysis of their impact on the decision-making process. Unfortunately, there is no sophisticated mathematical framework for analyzing an attribute's impact on the decision-making process, so the weight allocation task is typically accomplished on the basis of human judgment. To fill this gap, the present paper proposes a weight assignment framework that analyzes the impact of each attribute on the decision-making process and, on that basis, assigns each attribute a justified numerical value. The proposed framework analyzes historical data to assess the importance of an attribute, organizes the decision problem in a hierarchical structure, and uses different mathematical formulas to derive weights at different levels. Weights of mid- and higher-level attributes are calculated from the weights of root-level attributes. The proposed methodology has been validated with diverse data. In addition, the paper presents some potential applications of the proposed weight allocation scheme.
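The abstract does not give the framework's exact formulas, so the following is only a minimal sketch of the hierarchical idea it describes: root-level (leaf) attribute weights are derived from historical data, and each mid- or higher-level attribute's weight is computed from the weights of the attributes beneath it. The impact measure (frequency counts), tree encoding, and aggregation by summation are all illustrative assumptions, not the paper's method.

```python
def leaf_weights(history):
    """Score each root-level attribute by a hypothetical impact count
    drawn from historical decisions, normalized to sum to 1."""
    total = sum(history.values())
    return {attr: count / total for attr, count in history.items()}

def aggregate(tree, weights):
    """Compute each internal attribute's weight as the sum of its
    children's weights, recursively; leaf weights are kept as given."""
    result = dict(weights)

    def walk(node):
        children = tree.get(node, [])
        if not children:
            return result[node]
        result[node] = sum(walk(child) for child in children)
        return result[node]

    # Start the recursion from nodes that are not anyone's child.
    for node in tree:
        if all(node not in kids for kids in tree.values()):
            walk(node)
    return result

# Example: a two-level hierarchy with hypothetical historical counts.
tree = {"cost": ["price", "maintenance"], "quality": ["durability"]}
history = {"price": 30, "maintenance": 10, "durability": 60}
w = aggregate(tree, leaf_weights(history))  # w["cost"] = 0.3 + 0.1 = 0.4
```

Summation is just one plausible aggregation rule; the paper's different per-level formulas could be substituted in `walk` without changing the overall structure.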
A novel approach to extracting a light-invariant local feature for facial expression recognition is presented in this paper. The feature is robust to the monotonic gray-scale changes caused by illumination variations, and the proposed method is easy to implement and computationally efficient. The local strength of a pixel is calculated by finding the decimal value encoding which of its neighbors holds a particular rank, in terms of gray-scale value, among all the nearest pixels. When eight neighboring pixels are considered, the gradient direction of the neighboring pixel given by combining the second minimum and the maximum of the gray-scale intensity captures more local detail and yields the best performance for facial expression recognition in our experiment. The CK+ dataset is used in this experiment to evaluate facial expression classification. The classification accuracy achieved is 92.1 ± 3.2%, which is not the best reported but is easier to compute. The results show that the proposed feature extraction technique is fast, accurate, and efficient for facial expression recognition.
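A minimal sketch of the rank-based neighbor idea described above, under stated assumptions: for each interior pixel, the 8 neighbors are ranked by gray value, and the direction index (0 to 7) of the neighbor holding a chosen rank is recorded as the local code. Because only the ordering of gray values matters, the code is unchanged by any monotonic gray-scale transform, which is the source of the illumination robustness. The function name, neighbor ordering, and single-rank encoding are illustrative, not the paper's exact formulation.

```python
import numpy as np

# Clockwise neighbor offsets starting from the top-left pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def rank_direction_code(img, rank=1):
    """For each interior pixel, return the direction index (0-7) of the
    neighbor with the given rank (0 = minimum, 7 = maximum) in gray value."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vals = [img[y + dy, x + dx] for dy, dx in OFFSETS]
            order = np.argsort(vals, kind="stable")
            codes[y - 1, x - 1] = order[rank]  # direction of the rank-th neighbor
    return codes

# Tiny example: the center pixel's second-minimum neighbor is at
# direction index 1 (the top-middle pixel, value 20).
img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.uint8)
code = rank_direction_code(img, rank=1)
```

Doubling every intensity (a monotonic transform) leaves the code unchanged, illustrating the invariance property; the second-minimum and maximum codes the abstract mentions would simply be two such maps combined.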