Numerous approaches exist for Content-Based Image Retrieval (CBIR) systems; however, features of a single type are often insufficient on their own. In this paper, the Adaptive Feature Fusion for the Naïve Bayes classifier (AFF-NB) framework is proposed. Local features are extracted with the Binary Robust Invariant Scalable Keypoints (BRISK) and Speeded-Up Robust Features (SURF) detectors and are then adaptively fused. A Gaussian Mixture Model (GMM) clustering algorithm clusters the fused features to build a visual-word codebook, and a feature quantization step based on the Cosine Distance Matrix (CDM) constructs the Bag-of-Visual-Words (BoVW) representation. The BoVW vectors are normalized to reduce the risk of overfitting. Retrieved images are ranked by how closely they resemble the query image using an inverted-index strategy based on the CDM. The results demonstrate that the proposed technique raises CBIR accuracy to 96.8% on the widely used Caltech-10 dataset.
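To make the pipeline concrete, the sketch below outlines one possible realization of these steps in Python with OpenCV and scikit-learn. It is a minimal illustration under stated assumptions, not the authors' AFF-NB implementation: the adaptive fusion step is approximated by simply pooling the two descriptor sets, the codebook size and other parameters are assumed values, and the inverted index is replaced by a flat cosine-distance scan. SURF additionally requires an OpenCV build that includes the nonfree contrib modules.

```python
# Illustrative sketch of a BRISK+SURF BoVW pipeline with a Naive Bayes classifier.
# NOT the paper's AFF-NB implementation: fusion, codebook size, normalization
# choice, and the retrieval step are simplifying assumptions.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics.pairwise import cosine_distances
from sklearn.naive_bayes import GaussianNB

K = 200  # codebook size (assumed value, not from the paper)

def local_descriptors(img_gray):
    """Extract BRISK and SURF descriptors and pool them.
    Both are 64-dimensional here (SURF with extended=False), so they can be
    stacked; this pooling stands in for the paper's adaptive fusion."""
    brisk = cv2.BRISK_create()
    surf = cv2.xfeatures2d.SURF_create(extended=False)  # needs opencv-contrib nonfree
    _, d_brisk = brisk.detectAndCompute(img_gray, None)
    _, d_surf = surf.detectAndCompute(img_gray, None)
    descs = [d for d in (d_brisk, d_surf) if d is not None]
    return np.vstack([d.astype(np.float32) for d in descs])

def build_codebook(pooled_descriptors, k=K):
    """Cluster pooled training descriptors with a GMM; the component means
    serve as the visual words of the codebook."""
    gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    gmm.fit(pooled_descriptors)
    return gmm.means_

def bovw_histogram(descriptors, codebook):
    """Quantize each descriptor to its cosine-nearest visual word and
    L2-normalize the resulting histogram (one possible normalization)."""
    words = cosine_distances(descriptors, codebook).argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)

def train(train_images, train_labels):
    """Build the codebook from training images and fit a Naive Bayes classifier
    (GaussianNB used here as one concrete Naive Bayes variant)."""
    per_image = [local_descriptors(img) for img in train_images]
    codebook = build_codebook(np.vstack(per_image))
    X = np.array([bovw_histogram(d, codebook) for d in per_image])
    clf = GaussianNB().fit(X, train_labels)
    return codebook, clf

def rank(query_hist, db_hists):
    """Rank database images by cosine distance to the query BoVW vector
    (a flat scan standing in for the paper's inverted-index lookup)."""
    d = cosine_distances(query_hist[None, :], db_hists).ravel()
    return np.argsort(d)
```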