Conventional database-querying techniques are insufficient for extracting meaningful information from today's exponentially growing high-dimensional datasets, so analysts are forced to devise new processes to meet these demands. The growth in the number of data objects, together with the increase in the number of features/attributes, confronts such large-scale expression data with many new computational challenges. To improve the efficiency and accuracy of mining operations on high-dimensional data, the data should first be preprocessed with an effective dimensionality-reduction technique; to that end, we have surveyed the ideas of several researchers.

Cluster analysis has recently gained popularity as a data-analysis method in many fields. K-means, a popular partitioning-based clustering method, searches for a fixed number of clusters, each identified by its centroid. Its results, however, depend strongly on the initial cluster-center positions. Moreover, the number of distance computations grows sharply as the size and dimensionality of the data increase. Building a high-accuracy model frequently requires a large training set, and a large training set can in turn demand substantial training time, so there is a trade-off between speed and accuracy when building classifiers, especially on large datasets.

The k-means algorithm is widely used to cluster, compress, and summarize vector data. We present Asynchronous Selective Batched K-means (ASB K-means), a fast and memory-efficient GPU-based algorithm for exact k-means. In contrast to previous GPU-based k-means methods, which require loading the entire dataset onto the GPU for clustering, our approach can be configured to use far less GPU memory than the size of the full dataset, so it can cluster datasets larger than the available GPU memory. To handle large datasets efficiently, the algorithm employs a batched design and applies the triangle inequality in each k-means iteration to skip a data point whenever its membership assignment, i.e., the cluster it belongs to, remains unchanged. As a result, far fewer data points need to be transferred between the CPU's RAM and the GPU's global memory.
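To make the preprocessing step above concrete, here is a minimal sketch that reduces a high-dimensional dataset before clustering. PCA is used purely as one common dimensionality-reduction choice; the dataset shape, the 20-component target, and the use of scikit-learn are illustrative assumptions, not details from this work.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical high-dimensional dataset: 10,000 objects, 500 attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 500))

# Project onto 20 principal components before clustering; the target
# dimensionality here is an arbitrary illustrative value.
X_reduced = PCA(n_components=20).fit_transform(X)

# Mining operations (here, k-means) now run on the compact representation.
labels = KMeans(n_clusters=8, n_init=10).fit_predict(X_reduced)
```

Any effective reduction technique can stand in for PCA here; the point is that downstream distance computations scale with the reduced dimensionality rather than the original one.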
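The triangle-inequality test can be illustrated with Hamerly-style distance bounds: each point keeps an upper bound on the distance to its assigned center and a lower bound on the distance to every other center, and is skipped when those bounds prove its membership cannot change. The exact bounding scheme of ASB K-means is not spelled out here, so the following NumPy sketch is an assumption-laden illustration of the skipping principle, not the algorithm itself.

```python
import numpy as np

def init_bounds(X, centers):
    """Full distance pass that seeds assignments and bounds."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    assign = order[:, 0]
    rows = np.arange(len(X))
    return assign, d[rows, assign], d[rows, order[:, 1]]

def pruned_step(X, centers, assign, upper, lower):
    """One k-means iteration that skips points whose bounds prove an
    unchanged membership (a simplified Hamerly-style test)."""
    k = centers.shape[0]
    # Half the distance from each center to its nearest other center:
    # a second, cheap certificate that a membership cannot change.
    cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(cc, np.inf)
    s = cc.min(axis=1) / 2.0

    for i in range(len(X)):
        if upper[i] <= max(lower[i], s[assign[i]]):
            continue  # triangle inequality: membership provably unchanged
        # Otherwise recompute exact distances (full Hamerly would first
        # tighten upper[i] alone and retest before this full scan).
        d = np.linalg.norm(X[i] - centers, axis=1)
        order = np.argsort(d)
        assign[i], upper[i], lower[i] = order[0], d[order[0]], d[order[1]]

    new_centers = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    # Relax the bounds by how far each center drifted this iteration.
    drift = np.linalg.norm(new_centers - centers, axis=1)
    upper += drift[assign]
    lower -= drift.max()
    return new_centers, assign, upper, lower
```

The payoff is exactly the one described above: a skipped point needs no distance computations in that iteration, and in a batched GPU setting it need not be transferred at all.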
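The batched, transfer-saving design can likewise be simulated on the host: only one batch is treated as device-resident at a time, and points whose bounds certify a stable membership are never shipped. Running per-cluster sums let skipped points keep contributing to the centroid update without being re-read. The batch size, the simplified two-bound test, and the incremental sums are all illustrative assumptions; a real implementation would copy each filtered batch into GPU global memory.

```python
import numpy as np

def batched_kmeans(X, k, batch_size, iters=10, seed=0):
    """Sketch of a batched k-means whose working set per step is one
    batch, with bound-based skipping to cut host-to-device traffic."""
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].copy()
    assign = np.full(n, -1)
    upper = np.full(n, np.inf)   # bound on distance to own center
    lower = np.zeros(n)          # bound on distance to any other center
    sums = np.zeros((k, dim))    # running per-cluster sums and counts:
    counts = np.zeros(k)         # skipped points still count toward centroids

    for _ in range(iters):
        for start in range(0, n, batch_size):
            idx = np.arange(start, min(start + batch_size, n))
            # Only points whose bounds fail to certify a stable membership
            # would be transferred to the GPU; the rest stay in host RAM.
            moved = idx[upper[idx] > lower[idx]]
            if moved.size == 0:
                continue
            d = np.linalg.norm(X[moved, None, :] - centers[None, :, :], axis=2)
            near = np.argsort(d, axis=1)
            for row, i in enumerate(moved):
                a = near[row, 0]
                upper[i], lower[i] = d[row, a], d[row, near[row, 1]]
                if assign[i] != a:           # membership changed: fix sums
                    if assign[i] >= 0:
                        sums[assign[i]] -= X[i]
                        counts[assign[i]] -= 1
                    sums[a] += X[i]
                    counts[a] += 1
                    assign[i] = a
        nonempty = counts > 0
        new_centers = centers.copy()
        new_centers[nonempty] = sums[nonempty] / counts[nonempty][:, None]
        drift = np.linalg.norm(new_centers - centers, axis=1)
        upper += drift[assign]               # loosen bounds by center drift
        lower -= drift.max()
        centers = new_centers
    return centers, assign

# Illustrative usage on the reduced data from the previous sketch:
# centers, labels = batched_kmeans(X_reduced, k=8, batch_size=2048)
```

Because a point is fetched only when its bounds fail, traffic between host RAM and device global memory shrinks as the clustering stabilizes, which is the effect attributed to ASB K-means above.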