A recent article from the Chicago Booth Review discusses advancements in social science research methods, particularly the use of email-based audit experiments to study systemic bias. Traditional audit experiments, such as sending otherwise identical résumés that differ only in the applicant's name to test for discrimination, often fail to account for systemic bias.
Researchers are now using emails that request assistance, which allows them to bypass visible signs of cumulative disadvantage. One notable study found that white male council members were more likely to respond to emails from minority students when the students' demographic identity was mentioned.
ChatGPT can help standardize email-based audit experiments by generating consistent, neutral email content, ensuring that incidental variations in language or tone do not contaminate the results and letting researchers isolate the effect they want to measure. By automating email creation, ChatGPT can also help scale up these experiments, enabling broader and more comprehensive studies of systemic bias.
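The core design requirement described above is that every email be identical except for the single treatment variable (for example, the sender's name). A minimal sketch of that design, in plain Python with a hypothetical template and illustrative names (none of which come from the actual study):

```python
from string import Template

# Hypothetical email template for illustration only: the body is held fixed,
# and only the $name placeholder varies across experimental conditions.
EMAIL_TEMPLATE = Template(
    "Dear Council Member,\n\n"
    "My name is $name, and I am a student researching local government. "
    "Could you point me to information on how residents can comment on "
    "upcoming zoning decisions?\n\n"
    "Thank you,\n$name"
)

def generate_condition_emails(names):
    """Return one email per condition, identical except for the name."""
    return {name: EMAIL_TEMPLATE.substitute(name=name) for name in names}

# Illustrative condition labels (chosen only to show the design).
emails = generate_condition_emails(["Emily Walsh", "Lakisha Washington"])

# Sanity check: replacing the name token leaves the same text in every
# condition, so any difference in response rates is attributable to the
# manipulated variable rather than to wording or tone.
stripped = {e.replace(n, "<NAME>") for n, e in emails.items()}
assert len(stripped) == 1
```

A language model could play the role of the template author here, drafting the fixed body text once; the key point is that generation happens a single time, so every condition shares exactly the same wording.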