Abstract - Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), which operates on non-negative data, has become one of the popular methods for dimension reduction. The main strength of this method is its non-negativity: an object is modeled as a combination of basic non-negative parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely in numerous applications, including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven reliable theoretically and efficient empirically. This paper proposes a new algorithm for the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for nonlinearly constrained NMF formulations.

Keywords: Dimension Reduction, Nonnegative Matrix Factorization, Sparse Matrix, Nonlinear Constraints
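For reference, the standard least-squares formulation of NMF that the abstract refers to (stated here in common notation, since the section itself fixes no symbols) is: given a non-negative data matrix $A \in \mathbb{R}^{m \times n}_{+}$ and a target rank $k \ll \min(m, n)$, find non-negative factors

\[
\min_{W \ge 0,\; H \ge 0} \; \tfrac{1}{2}\, \| A - W H \|_F^2,
\qquad W \in \mathbb{R}^{m \times k},\; H \in \mathbb{R}^{k \times n}.
\]

The ANLS framework alternates between the two subproblems obtained by fixing one factor at a time,

\[
H \leftarrow \arg\min_{H \ge 0} \| A - W H \|_F^2,
\qquad
W \leftarrow \arg\min_{W \ge 0} \| A - W H \|_F^2,
\]

each of which is a convex nonnegative least squares problem even though the joint problem is not convex; this block structure is what the framework's convergence analysis exploits.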
Introduction

Every second in the modern era, vast amounts of data are generated. People online write blogs, design homepages, and share their experiences through digital media such as videos and images. Consider also the data produced by research on living organisms and genes, the data gathered from space or from our own planet, and the records of transactions through e-banking, among others. Such data becomes useful only once it has been processed. In response to this rapid boom in the amount of data, several approaches to data processing have emerged: applying classical methods, or designing more powerful computing structures such as distributed computing, multi-core processors, and supercomputers. However, the growth and complexity of data tend to exceed the increase in computational capability [1]. One very popular approach, called model reduction, attempts to reduce this complexity while preserving the essential features of the problem (the original data); a sketch of one such reduction model follows this paragraph. The use of an appropriate model can save time. In addition, different data types require different models to capture their meaning. Naturally, a model is considered appropriate when it survives these demands well.
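To make the reduction idea concrete, below is a minimal sketch of the generic ANLS scheme for NMF. This is the textbook framework, not the new algorithm this paper proposes; the function name anls_nmf and all parameter choices are illustrative assumptions.

import numpy as np
from scipy.optimize import nnls

def anls_nmf(A, k, n_iter=50, seed=0):
    """Generic ANLS scheme for NMF: alternate exact nonnegative
    least squares solves for H (with W fixed) and W (with H fixed)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Solve min_{H >= 0} ||A - W H||_F^2 column by column.
        for j in range(n):
            H[:, j], _ = nnls(W, A[:, j])
        # Solve min_{W >= 0} ||A - W H||_F^2 row by row (via transpose).
        for i in range(m):
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H

# Example: factor a small random non-negative matrix into rank-5 parts.
A = np.random.rand(50, 40)
W, H = anls_nmf(A, k=5, n_iter=20)
print(np.linalg.norm(A - W @ H, 'fro'))

The per-column and per-row NNLS solves here are exact but slow; practical ANLS implementations solve the two matrix subproblems with faster block methods, which is precisely the design space in which this paper's contribution sits.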