We discuss unbiased estimating equations for a class of objective functions constructed from a monotonically increasing function f and a Bregman divergence. The choice of f confers desirable properties, such as robustness against outliers. In general, obtaining unbiased estimating equations requires bias correction terms that involve analytically intractable integrals. In this study, we clarify the combinations of Bregman divergence, statistical model, and function f for which the bias correction term vanishes. Focusing on the Mahalanobis and Itakura-Saito distances, we generalize fundamental existing results and characterize a class of distributions on the positive reals with a scale parameter, which includes the gamma distribution as a special case. We also extend these results to general model classes characterized by a one-dimensional Bregman divergence. Furthermore, we discuss the possibility of latent bias minimization when the proportion of outliers is large, a possibility induced by the vanishing of the bias correction term. Numerical experiments show that the latent bias can approach zero under heavy contamination by outliers or in the presence of very small inliers.
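To fix ideas, the following display is a rough schematic of the kind of objective function and estimating equation at issue; the symbols $L_n$, $d$, $q_\theta$, and $b$ are our own illustrative labels and need not match the paper's notation. For observations $x_1,\dots,x_n$ and a model $q_\theta$,
\[
  L_n(\theta)
    = \frac{1}{n}\sum_{i=1}^{n} f\bigl(d(x_i,\theta)\bigr) + b(\theta),
  \qquad
  \partial_\theta L_n(\theta) = 0,
\]
where $d$ is a Bregman divergence between an observation and the model, $f$ is monotonically increasing, and the bias correction term $b(\theta)$ is chosen so that
\[
  \mathbb{E}_{q_\theta}\!\bigl[\partial_\theta f\bigl(d(X,\theta)\bigr)\bigr]
    + \partial_\theta b(\theta) = 0,
\]
so that the estimating equation is unbiased at the model. Under this reading, the analytically intractable integral enters through $b(\theta)$, and the results summarized above characterize when $b$ can be taken to vanish.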