Due to rapid technological evolution and the growing accessibility of communications, the data generated from different information sources exhibit exponential growth. That is, the volume of data samples that need to be analyzed keeps increasing, so processing methods must adapt to this condition, focusing mainly on computational efficiency, especially when the analysis tools are based on computational intelligence techniques. Without proper control over the volume of data being handled, techniques based on iterative learning processes can impose an excessive computational load and take a prohibitive amount of time while converging to a solution that may still fall short of the desired one. The learning methods known as full-batch, online, and mini-batch represent a good strategy for this problem, since they organize the processing of data according to the size or volume of the available data samples that require analysis. In this first approach, synthetic datasets of small and medium volume were used, since the main objective is to define the implementation and, through regression analysis in the experimentation phase, to obtain information that allows us to assess the performance and behavior of the different learning methods under distinct conditions. To carry out this study, a Mamdani-based neuro-fuzzy system with center-of-sets defuzzification and support for multiple inputs and outputs was designed and implemented, with the flexibility to use any of the three learning methods, which were implemented within the training process. Finally, the results show that the learning method with the best performance was mini-batch when compared to the full-batch and online learning methods. The mini-batch learning method obtained a mean correlation coefficient of 0.8268 and a mean coefficient of determination of 0.7444, and it is also the method with the best control of the dispersion among the results obtained from the 30 experiments executed per processed dataset.
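The three learning modes compared above differ only in how many samples feed each parameter update: the whole dataset (full batch), one sample at a time (online), or a fixed-size subset (mini-batch). The sketch below illustrates this distinction on a plain gradient-descent linear-regression model, not on the authors' neuro-fuzzy system; the function name `train`, the synthetic data, and all hyperparameter values (`epochs`, `lr`, `batch_size=32`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train(X, y, batch_size, epochs=100, lr=0.01, seed=0):
    """Gradient-descent linear regression (illustrative stand-in for any
    iterative learner). batch_size selects the learning mode:
    len(X) -> full batch, 1 -> online, anything between -> mini-batch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        idx = rng.permutation(n)            # reshuffle samples each epoch
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            Xb, yb = X[sel], y[sel]
            err = Xb @ w + b - yb           # prediction error on this batch
            w -= lr * Xb.T @ err / len(sel) # averaged gradient step
            b -= lr * err.mean()
    return w, b

# Small synthetic regression dataset (hypothetical, standard-normal inputs)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w_full, _ = train(X, y, batch_size=len(X))  # full batch: 1 update per epoch
w_online, _ = train(X, y, batch_size=1)     # online: 1 update per sample
w_mini, _ = train(X, y, batch_size=32)      # mini-batch: the middle ground
```

For a fixed number of epochs, mini-batch performs many more updates than full batch while averaging out much of the per-sample noise that online updates suffer from, which is consistent with the trade-off the study reports.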