Leveraging Large Language Models to Measure Gender Representation Bias in Gendered Language Corpora

Authors: Derner, E., Sansalvador de la Fuente, S., Gutiérrez, Y., Moreda, P., Oliver, N.

External link: https://arxiv.org/abs/2406.13677
Publication: The 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP), ACL 2025
DOI: https://doi.org/10.48550/arXiv.2406.13677

Large language models (LLMs) often inherit and amplify social biases embedded in their training data. A prominent social bias is gender bias. In this regard, prior work has mainly focused on gender stereotyping bias - the association of specific roles or traits with a particular gender - in English, and on evaluating gender bias in model embeddings or generated outputs. In contrast, gender representation bias - the unequal frequency of references to individuals of different genders - in the training corpora has received less attention. Yet such imbalances in the training data constitute an upstream source of bias that can propagate and intensify throughout the entire model lifecycle. To fill this gap, we propose a novel LLM-based method to detect and quantify gender representation bias in LLM training data in gendered languages, where grammatical gender challenges the applicability of methods developed for English. By leveraging the LLMs’ contextual understanding, our approach automatically identifies and classifies person-referencing words in gendered language corpora. Applied to four Spanish-English benchmarks and five Valencian corpora, our method reveals substantial male-dominant imbalances. We show that such biases in training data affect model outputs but can, surprisingly, be mitigated by small-scale training on datasets that are biased towards the opposite gender. Our findings highlight the need for corpus-level gender bias analysis in multilingual NLP. We make our code and data publicly available.
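
For illustration, the sketch below shows the kind of LLM-based pipeline the abstract describes: prompting a chat model to tag person-referencing words in a Spanish sentence and aggregating the labels into gender counts and a representation ratio. The prompt wording, model name (`gpt-4o-mini`), and JSON output format are assumptions made for this example, not the authors' released code.

```python
# Hypothetical sketch of the abstract's idea: ask an LLM to tag person-referencing
# words in a gendered-language sentence, then aggregate the labels into counts.
# Model name, prompt, and output format are illustrative assumptions.
import json
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "List every word in the following Spanish sentence that refers to a person "
    "and label each one as 'male', 'female', or 'unspecified'. "
    "Reply only with a JSON array of objects with keys 'word' and 'gender'.\n\n"
    "Sentence: "
)


def classify_person_words(sentence: str) -> list[dict]:
    """Return the LLM's list of person-referencing words with gender labels."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[{"role": "user", "content": PROMPT + sentence}],
        temperature=0,
    )
    # A real pipeline would validate the reply; here we assume clean JSON.
    return json.loads(response.choices[0].message.content)


def gender_counts(corpus: list[str]) -> Counter:
    """Aggregate gender labels over all sentences in a corpus."""
    counts: Counter = Counter()
    for sentence in corpus:
        for item in classify_person_words(sentence):
            counts[item["gender"]] += 1
    return counts


if __name__ == "__main__":
    corpus = ["Los médicos y las enfermeras atendieron al paciente."]
    counts = gender_counts(corpus)
    male, female = counts["male"], counts["female"]
    ratio = male / female if female else float("inf")
    print(counts, f"male/female ratio: {ratio:.2f}")
```

Delegating the identification step to the LLM's contextual understanding, rather than to fixed word lists, is what the abstract argues makes the approach applicable to gendered languages, where English-oriented methods fall short.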