Most transit microlensing events due to very low-mass lens objects suffer from extreme finite-source effects. When modelling their light curves, one encounters a known continuous degeneracy between the relevant lensing parameters, i.e. the source angular radius normalized to the angular Einstein radius $\rho_{\star}$, the Einstein crossing time $t_{\rm E}$, the lens impact parameter $u_{0}$, the blending parameter, and the stellar apparent magnitude. In this work, I numerically study the origin of this degeneracy. I find that these light curves have five observational parameters (i.e. the baseline magnitude, the maximum deviation in the magnification factor, the full width at half-maximum $\rm{FWHM}=2\,t_{\rm HM}$, the deviation from a top-hat model, and the time of the maximum time derivative of the microlensing light curve, $T_{\rm max}=t_{\rm E}\sqrt{\rho_{\star}^{2}-u_{0}^{2}}$). For extreme finite-source microlensing events due to uniform source stars, $t_{\rm HM}\simeq T_{\rm max}$ and the deviation from the top-hat model tends to zero, which together cause the known continuous degeneracy. When either $\rho_{\star}\lesssim 10$ or the limb-darkening effect is considerable, $t_{\rm HM}$ and $T_{\rm max}$ are two independent observational parameters. I use a numerical approach, i.e. random forests containing 100–120 decision trees, to study how efficiently these observational parameters yield the lensing parameters. These machine-learning models recover the five lensing parameters for finite-source microlensing events from uniform and limb-darkened source stars with average $R^{2}$-scores of 0.87 and 0.84, respectively. The $R^{2}$-score for evaluating the lens impact parameter worsens when limb darkening is added, and for extracting the limb-darkening coefficient itself it falls to 0.67.
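To make the regression step concrete, the following is a minimal sketch of fitting a random forest with roughly 100–120 trees to map light-curve observables onto lensing parameters and scoring it with $R^{2}$. It assumes scikit-learn and a synthetic placeholder table of events; the parameter ranges, noise levels, and the stand-in relations between observables and lensing parameters are illustrative only, not the simulated light curves used in the paper.

```python
# Minimal sketch (not the paper's actual pipeline): random-forest regression
# from five light-curve observables to five lensing parameters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_events = 5000

# Placeholder lensing parameters (targets); in the paper these come from
# simulated finite-source microlensing events.
rho_star = rng.uniform(1.0, 50.0, n_events)            # rho_star = source radius / Einstein radius
t_E      = rng.uniform(0.05, 2.0, n_events)            # Einstein crossing time
u_0      = rng.uniform(0.0, 1.0, n_events) * rho_star  # impact parameter, kept below rho_star
blending = rng.uniform(0.1, 1.0, n_events)
m_base   = rng.uniform(14.0, 20.0, n_events)           # stellar apparent magnitude
targets  = np.column_stack([rho_star, t_E, u_0, blending, m_base])

# Placeholder observational parameters (features). T_max follows the relation
# quoted in the abstract; the other columns are illustrative stand-ins.
T_max   = t_E * np.sqrt(rho_star**2 - u_0**2)
t_HM    = T_max * (1.0 + 0.02 * rng.standard_normal(n_events))   # FWHM = 2 t_HM, near T_max
dA_max  = np.sqrt(1.0 + 4.0 / rho_star**2) - 1.0                 # central-transit peak excess, as a stand-in
top_hat = np.abs(rng.normal(0.0, 0.01, n_events))                # deviation from a top-hat model
features = np.column_stack([m_base, dA_max, t_HM, top_hat, T_max])

X_train, X_test, y_train, y_test = train_test_split(features, targets, random_state=0)

# Random forest with ~110 trees, in the 100-120 range mentioned in the abstract.
model = RandomForestRegressor(n_estimators=110, random_state=0)
model.fit(X_train, y_train)

scores = r2_score(y_test, model.predict(X_test), multioutput="raw_values")
for name, s in zip(["rho_star", "t_E", "u_0", "blending", "m_base"], scores):
    print(f"R^2({name}) = {s:.2f}")
```

Because `RandomForestRegressor` handles multi-output targets natively, a single forest can predict all five lensing parameters at once, and `r2_score(..., multioutput="raw_values")` returns one $R^{2}$ value per parameter, mirroring the per-parameter scores reported above.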