Glaucoma is an asymptomatic chronic eye disease that, if not treated in its early stages, can lead to blindness. Early detection is therefore essential to preserve the patient’s quality of life, so a noninvasive method capable of detecting the disease from fundus images is crucial. Several fundus image datasets are available in the literature; however, only a few include glaucoma images and labels. Learning from an imbalanced dataset is challenging and limits supervised learning algorithms. We compared feature extraction and classification approaches on three public datasets totaling 2390 images: ACRIMA, REFUGE, and RIM-ONE DL. First, we evaluated non-structural features extracted with HOG, LBP, Zernike moments, and Gabor filters, as well as features obtained through transfer learning. We then classified them with a Multilayer Perceptron (MLP), a Support Vector Machine (SVM), and Extreme Gradient Boosting (XGB), evaluating each classifier individually and combined in a voting classifier (VOT). Features from the transfer learning models were extracted and classified in the same process and were also classified with traditional machine learning. Owing to class imbalance, we undersampled the majority (normal) class using three methods: random choice, near miss, and cluster centroid. We also evaluated our model in a cross-dataset setting. As a result, we efficiently identified glaucoma in fundus images from different datasets using the VGG19 network and a voting classifier. In addition, balancing the classes reduced false negatives and improved model quality. Our approach achieved an average F1-score of 94.69%, accuracy of 94.77%, precision of 96.10%, recall of 93.45%, and specificity of 96.08%.
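The pipeline summarized above (undersampling the majority class, then combining MLP, SVM, and a boosting classifier in a voting ensemble) can be sketched as follows. This is an illustrative sketch, not the authors' code: it uses synthetic features in place of the HOG/LBP/VGG19 features, implements only the random-choice undersampling variant, and substitutes scikit-learn's `GradientBoostingClassifier` as a stand-in for XGB.

```python
# Illustrative sketch of the described pipeline: random-choice undersampling
# of the majority class, then a soft-voting ensemble (MLP + SVM + boosting).
# Synthetic features stand in for the extracted fundus-image features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy dataset: ~80% "normal" (0) vs ~20% "glaucoma" (1).
X, y = make_classification(n_samples=600, weights=[0.8, 0.2], random_state=0)

# Random-choice undersampling: keep as many majority samples as minority ones.
maj, mino = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
Xb, yb = X[keep], y[keep]

Xtr, Xte, ytr, yte = train_test_split(Xb, yb, stratify=yb, random_state=0)

# Soft-voting ensemble over the three classifier families named in the text.
vot = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # stand-in for XGB
    ],
    voting="soft",
)
vot.fit(Xtr, ytr)
acc = vot.score(Xte, yte)
```

In practice the feature matrix `X` would come from the handcrafted descriptors or a VGG19 backbone, and the near-miss and cluster-centroid undersamplers (e.g. from the imbalanced-learn library) would be swapped in for the random selection above.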