The analysis of customer patterns and behaviour is essential for every business, since the customer is the sole source of revenue. Understanding these patterns enables organizations to improve their business processes and customer satisfaction, and the voluminous customer datasets available within organizations make such analysis feasible. However, the presence of interrelated, irrelevant, and missing variables degrades the predictive quality of models built on these datasets. Feature selection (FS) techniques are investigated to address this problem: the objective of FS is to pick the pertinent variables from a complete set that also contains correlated, irrelevant, and incomplete ones. FS methods are generally classified into three types: filter, wrapper, and hybrid. Filter methods are fast but tend to select less effective variable subsets, whereas wrapper methods select effective subsets but are computationally expensive. To circumvent these limitations, this study presents and evaluates an ensemble feature selection strategy. Ensemble FS can be homogeneous or heterogeneous; this study employs a heterogeneous ensemble variable selection (HEVS) method. In the proposed method, the training dataset is passed to five distinct filter FS approaches, and the resulting ranked attributes are aggregated in two ways: a mean method and a min method. Relevant variables are then chosen with a cut-off value to form the final ranked subset; because the filter methods in HEVS only rank the variables, this cut-off must be specified explicitly. The experiments are conducted from two perspectives: heterogeneous ensemble variable selection with Naive Bayes, and Naive Bayes without variable selection. Finally, the results obtained with the two approaches are compared on several evaluation measures. The experimental results demonstrate that the proposed HEVS method outperforms the standard Naive Bayes model, and because only relevant variables are included when modelling with NB, the computational cost of the proposed methodology is also reduced.
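For illustration only, the sketch below shows one way the described pipeline could be realized in Python: several filter scorers rank the variables, the rank lists are aggregated by mean and min, a cut-off keeps the top-ranked subset, and Naive Bayes is evaluated with and without selection. The specific filters (chi-square, ANOVA F-test, mutual information, variance, correlation), the synthetic dataset, the cut-off of 10, and the use of scikit-learn's GaussianNB are assumptions made for this sketch and are not specified in the text above.

```python
# Hedged sketch of heterogeneous ensemble variable selection (HEVS):
# aggregate the rankings of several filter methods, apply a cut-off,
# then compare Naive Bayes with and without the selected variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a customer dataset (assumption).
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           n_redundant=6, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs

# 1. Score every variable with five filter methods (higher = more relevant).
#    These five scorers are illustrative choices, not the paper's exact filters.
scores = [
    chi2(X_pos, y)[0],
    f_classif(X, y)[0],
    mutual_info_classif(X, y, random_state=0),
    np.var(X, axis=0),
    np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]),
]

# 2. Convert each score vector to ranks (1 = most relevant variable).
ranks = np.array([len(s) - np.argsort(np.argsort(s)) for s in scores])

# 3. Aggregate the rank lists across filters: mean rank and min (best) rank.
mean_rank = ranks.mean(axis=0)
min_rank = ranks.min(axis=0)   # alternative aggregation; mean is used below

# 4. Keep the variables whose aggregated rank beats a chosen cut-off.
cutoff = 10  # hypothetical cut-off: retain the 10 best-ranked variables
selected = np.argsort(mean_rank)[:cutoff]

# 5. Compare Naive Bayes on all variables versus the HEVS-selected subset.
acc_full = cross_val_score(GaussianNB(), X, y, cv=5).mean()
acc_hevs = cross_val_score(GaussianNB(), X[:, selected], y, cv=5).mean()
print(f"NB on all variables:          {acc_full:.3f}")
print(f"NB on HEVS-selected variables: {acc_hevs:.3f}")
```

The min aggregation retains a variable as soon as any single filter ranks it highly, while the mean aggregation requires consistently good ranks across all filters; swapping `mean_rank` for `min_rank` in step 4 switches between the two strategies.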