The presence of any type of distortion in a communication system, regardless of its cause, is undesirable and undeniably has a negative impact on the system as a whole; it is therefore necessary to eliminate its effects. This study employs one of the well-known algorithms for adaptive equalization of a linear dispersive communication channel, the Least Mean Square (LMS) algorithm. The LMS technique is used primarily to eliminate noise in the communication channel. The novelty of this paper lies in an in-depth analysis of the influence of the rate of convergence, misadjustment, computational requirements, and sensitivity to eigenvalue spread, presented in sufficient detail in a simple and plain manner. Moreover, the performance improvement obtained by employing the feedback equalizer technique is presented in depth, showing that the proposed methodology is highly effective in eliminating noise in the system. The simulation work has been performed with MATLAB.

Contribution/Originality: This study uses a new estimation methodology, namely an in-depth analysis of the influence of the rate of convergence, misadjustment, computational requirements, and sensitivity to eigenvalue spread, presented in sufficient detail in a simple and plain manner.

1. INTRODUCTION

Adaptive filters are used extensively in statistical signal processing and offer a great improvement in performance compared with conventional fixed filters [1-7]. The subject of adaptive filters in general, and linear adaptive filters in particular, has drawn the attention of many researchers, and various methodologies have therefore been developed and implemented to solve problems in the area of statistical signal processing. A linear adaptive filter consists of a filter whose function is to produce a desired output and an adaptive algorithm that sets the filter parameters. The choice of algorithm is significantly affected by the filter structure, which is mainly classified into finite impulse response (FIR) [8-14] and infinite impulse response (IIR) [15-19] structures. Generally, an adaptive algorithm attempts to minimize an error function of the input, reference, and output signals, driving it toward zero. The minimization methods most commonly used for adaptive filters are quasi-Newton techniques and the steepest-descent gradient technique [20, 21]. The latter is easy to implement, while quasi-Newton strategies generally offer a better convergence rate; quasi-Newton techniques would therefore appear to be the best choice, combining good computational performance with good convergence, but their disadvantage is a strong sensitivity to instability issues. In all these strategies, the convergence factor must be selected carefully according to the specific adaptation problem. The error signal can be formed in different ways, but the most popular criteria are the Mean Square Error (MSE) methodology and the Least Squares (LS) technique. MSE requires an