Researchers from the University of East Anglia in the U.K. have analyzed OpenAI’s ChatGPT, the market-leading A.I. chatbot, and found a measurable bias towards left-leaning political positions. The study, published in Public Choice, reports that ChatGPT’s default responses tend to favor the Democrats in the U.S., the U.K.’s Labour Party, and Brazil’s Workers’ Party, led by President Lula da Silva.
The research team asked ChatGPT to answer 60 ideological questions while impersonating supporters of different political parties and ideologies, then compared those answers with ChatGPT’s default responses to the same questions, aiming to assess whether the model’s responses carried any intrinsic bias towards certain political viewpoints. Since the chatbot’s release, some conservatives have claimed to detect a clear bias in ChatGPT.
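The querying procedure is simple to sketch. The illustration below uses OpenAI’s Python client; note that the study interacted with ChatGPT itself, and the model name, persona wording, and sample question here are assumptions made for illustration, not the paper’s published materials.

```python
# Sketch of the impersonation-versus-default querying step. The study
# interacted with ChatGPT directly; the API call, model name, persona
# wording, and question below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, persona: str | None = None) -> str:
    """Ask one ideological question, optionally impersonating a persona."""
    prompt = question
    if persona is not None:
        prompt = f"Answer the following as if you were a {persona}: {question}"
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Taxes on the wealthy should be increased."  # placeholder item
default_answer = ask(question)
left_answer = ask(question, persona="Democrat voter")
right_answer = ask(question, persona="Republican voter")
```

Comparing the default answer against the impersonated ones, question by question, is what lets the researchers ask which political persona the default most closely resembles.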
To deal with the inherent randomness of the large language models that power A.I. platforms like ChatGPT, the researchers asked each question 100 times and collected the resulting answers. They then applied a 1,000-repetition “bootstrap” procedure to re-sample those answers, increasing the reliability of the conclusions drawn from the generated text.
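The bootstrap step can be illustrated in a few lines. This is a minimal sketch, assuming each of the 100 answers to a question has already been coded to a numeric agreement score; the 0/1 coding scheme is an assumption made for illustration, not the paper’s exact procedure.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# 100 coded answers collected for one question (placeholder data; the
# 0 = disagree / 1 = agree coding is an illustrative assumption).
answers = rng.integers(0, 2, size=100).astype(float)

# 1,000-repetition bootstrap: resample the 100 answers with replacement
# and record the mean of each resample.
n_boot = 1_000
boot_means = np.array([
    rng.choice(answers, size=answers.size, replace=True).mean()
    for _ in range(n_boot)
])

# 95% confidence interval for the mean agreement score.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {answers.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Resampling with replacement in this way yields a confidence interval around each question’s average answer, so that differences between the default and impersonated responses can be distinguished from mere run-to-run noise.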
Dr. Fabio Motoki, the lead author, from Norwich Business School at the University of East Anglia, emphasized the importance of A.I.-powered systems like ChatGPT being unbiased. He pointed out the potential consequences of political bias in A.I., including its influence on user opinions and its broader implications for political and electoral processes. Dr. Motoki also expressed concern that A.I. could exacerbate the existing challenges posed by the internet and social media.
To ensure the methodology was rigorous, the researchers performed several additional tests: a ‘dose-response test’, in which ChatGPT was asked to imitate extreme political stances; a ‘placebo test’ using politically neutral questions; and a ‘profession-politics alignment test’, in which the model was asked to mimic various types of professionals.
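The placebo test, for instance, amounts to running the same comparison on questions with no political content: if the default and impersonated answers diverged there as well, the measured bias would look like an artifact of the method. A hypothetical sketch of that check, again assuming answers are coded numerically (the data and the confidence level are illustrative):

```python
import numpy as np

def bootstrap_mean_diff(a, b, n_boot=1_000, seed=0):
    """95% bootstrap CI for the difference in mean answer scores."""
    rng = np.random.default_rng(seed)
    diffs = np.array([
        rng.choice(a, a.size, replace=True).mean()
        - rng.choice(b, b.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(diffs, [2.5, 97.5])

# Coded answers to one politically neutral question (placeholder data).
rng = np.random.default_rng(7)
default_scores = rng.normal(0.5, 0.1, size=100)
persona_scores = rng.normal(0.5, 0.1, size=100)

lo, hi = bootstrap_mean_diff(default_scores, persona_scores)
# If zero falls inside the interval, the placebo shows no spurious gap.
print(f"95% CI for the mean difference: [{lo:.3f}, {hi:.3f}]")
```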
Co-author Dr. Pinho Neto expressed hope that the method would help in examining and regulating emerging technologies. He stressed the importance of detecting and correcting biases in large language models to foster transparency, accountability, and public trust.
The researchers have also created a novel analysis tool that will be freely accessible and easy for the public to use, thus “democratizing oversight,” as Dr. Motoki noted. Beyond detecting political bias, the tool can measure other kinds of preference in ChatGPT’s responses.
Though the study did not set out to pinpoint the origins of the political bias, it pointed to two possible sources. The first is the training dataset, which may contain inherent biases, or biases introduced by the human developers, that the ‘cleaning’ procedure failed to remove entirely. The second is the algorithm itself, which might amplify biases already present in the training data.