Abusive language detection models often exhibit gender bias: they are biased toward sentences containing identity words associated with specific gender groups. Previous bias-reduction approaches, such as projection methods, tend to lose information in word vectors and sentence context, resulting in low detection accuracy. This paper proposes a novel method that mitigates gender bias while preserving the original information by regularizing sentence embedding vectors based on information theory. Latent vectors generated by an autoencoder are debiased through dual regularization using a gender discriminator, an abuse classifier, and a decoder. Because the gender labels fed to the discriminator are randomized, the discriminator is confused about the gender features, while the classifier retains the abusiveness information. The latent vectors are regularized through information-theoretic adversarial optimization that disentangles and mitigates the gender features. We show that the proposed method successfully orthogonalizes the directions of the correlated information and reduces the gender features, as demonstrated by subspace computation and embedding-vector visualization. Moreover, the proposed method achieves the highest accuracy compared with four state-of-the-art bias mitigation methods and shows superior performance in reducing gender bias on four different Twitter datasets for abusive language detection.
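
To illustrate the dual-regularization idea described above, the following is a minimal PyTorch-style sketch, not the authors' implementation: an encoder produces a latent vector that a decoder must reconstruct, an abuse classifier must still predict abusiveness from, and a gender discriminator is pushed toward randomized gender labels so that gender information is removed. All module sizes, names, the simple random-label surrogate for the information-theoretic adversarial objective, and the loss weighting are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of dual regularization on latent vectors.
# Dimensions, architectures, and loss weights are assumptions for illustration.
import torch
import torch.nn as nn

EMB_DIM, LATENT_DIM = 768, 128  # assumed sentence-embedding / latent sizes

encoder = nn.Sequential(nn.Linear(EMB_DIM, LATENT_DIM), nn.ReLU())
decoder = nn.Linear(LATENT_DIM, EMB_DIM)
abuse_clf = nn.Linear(LATENT_DIM, 2)    # keeps abusiveness information
gender_disc = nn.Linear(LATENT_DIM, 2)  # adversary to be confused about gender

recon_loss = nn.MSELoss()
ce_loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters())
    + list(abuse_clf.parameters()) + list(gender_disc.parameters()),
    lr=1e-3,
)

def train_step(sent_emb, abuse_labels):
    """One debiasing step: reconstruct the sentence embedding, classify abuse,
    and push the gender discriminator toward randomized (uninformative) labels."""
    z = encoder(sent_emb)
    # Randomized gender labels: the discriminator cannot fit a consistent gender
    # signal, so the gradients drive gender features out of the latent vector z.
    random_gender = torch.randint(0, 2, (sent_emb.size(0),))
    loss = (recon_loss(decoder(z), sent_emb)            # preserve content
            + ce_loss(abuse_clf(z), abuse_labels)       # preserve abuse info
            + ce_loss(gender_disc(z), random_gender))   # confuse gender
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random data standing in for sentence embeddings and labels
emb = torch.randn(8, EMB_DIM)
labels = torch.randint(0, 2, (8,))
print(train_step(emb, labels))
```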