K-Nearest Neighbor (KNN) is a widely used classification algorithm owing to several attractive features: it is easy to use and implement, requires little training time, and is robust to noisy training data. Like other algorithms, however, it has shortcomings that cannot be ignored: high computational complexity, large memory requirements for large training datasets, the curse of dimensionality, and the fact that it gives equal weight to all attributes. Many researchers have developed techniques to overcome these shortcomings, and this paper discusses such techniques. Some are structure based, such as the R-Tree, R*-Tree, Voronoi cells, and the Branch & Bound algorithm; these reduce computational complexity and also shorten the time needed to search for neighbors in multimedia training datasets. Others are non-structure based, such as Weighted KNN, Model-based KNN, Distance-based KNN, Class Confidence Weighted KNN, Dynamic Weighted KNN, Clustering-based KNN, and Pre-classification-based KNN; these reduce the memory limitation, the curse of dimensionality, and time complexity.
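As a point of reference for the variants discussed, the following is a minimal sketch of plain and distance-weighted KNN classification. It is illustrative only: the function name, parameters, and toy data are assumptions for this sketch, not taken from the surveyed papers.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3, weighted=False):
    # Compute the Euclidean distance from the query to every training point.
    # This brute-force scan is the source of KNN's computational cost that
    # structure-based methods (R-Tree, Voronoi cells, etc.) aim to avoid.
    dists = [(math.dist(x, query), y) for x, y in zip(train, labels)]
    dists.sort(key=lambda d: d[0])
    neighbors = dists[:k]
    if weighted:
        # Distance-weighted KNN: closer neighbors get larger votes (1/d),
        # instead of the equal weighting of plain KNN.
        votes = Counter()
        for d, y in neighbors:
            votes[y] += 1.0 / (d + 1e-9)  # epsilon avoids division by zero
    else:
        # Plain KNN: each of the k nearest neighbors casts one equal vote.
        votes = Counter(y for _, y in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset with two well-separated classes.
train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(train, labels, (2, 2), k=3))                 # near class "a"
print(knn_predict(train, labels, (8, 7), k=3, weighted=True))  # near class "b"
```

The brute-force distance scan makes each query cost proportional to the training-set size, which is the motivation for the indexing and weighting techniques surveyed above.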