Purpose: This article evaluates potential gender biases in job postings.

Originality/value: The management literature is still working to understand the social and economic impacts of introducing large language models (LLMs). It is now recognized that such models are not neutral; they carry many of the biases and discriminatory patterns found in human language. If used to support the writing of job descriptions, they can contribute to preserving gender inequality across occupations and perpetuating the sexual division of labor.

Design/methodology/approach: This research uses embeddings from different LLMs to evaluate potential gender biases in job postings from two major platforms, LinkedIn and Vagas.com. More specifically, it evaluates gender biases and assesses the sensitivity of the embedding vectors within the context of gender inequality analysis.

Findings: The degree of consistency between architectures varies significantly as the words in the job descriptions and the two gender vectors are altered, which means that even pre-trained models may not be reliable for assessing gender bias. This lack of consistency indicates that the evaluation of gender bias can differ depending on the parameters chosen. A high sensitivity to pronouns was also observed, and the difference between genders appears greater when the unitary vectors “man” and “woman” are related to family terms.

Contribution/implication: LLMs should be used for writing job postings with caution, and efforts to mitigate gender bias should be applied to the corpus to be modeled before training occurs.
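For illustration only, the kind of embedding-based comparison the abstract describes (relating job-posting terms to the unitary vectors “man” and “woman”) can be sketched as below. This is a minimal example assuming the sentence-transformers library and the all-MiniLM-L6-v2 model, neither of which is named in the paper, and is not a reproduction of the authors’ pipeline.

```python
# Illustrative sketch (not the authors' exact method): score a job-posting term
# against unitary "man"/"woman" vectors via cosine similarity, using a
# sentence-transformers model as an assumed stand-in for the LLM embeddings
# compared in the study.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias_score(term: str) -> float:
    """Positive values lean toward 'man', negative toward 'woman'."""
    term_vec, man_vec, woman_vec = model.encode([term, "man", "woman"])
    return cosine(term_vec, man_vec) - cosine(term_vec, woman_vec)

# Example: compare a career-related term with a family-related term,
# echoing the abstract's observation about family-related vocabulary.
for word in ["leadership", "caregiving"]:
    print(word, round(gender_bias_score(word), 4))
```

A score near zero suggests the term sits roughly equidistant from the two gender vectors; as the abstract notes, such scores can shift considerably depending on the embedding architecture and on how the gender vectors (words versus pronouns) are defined.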