A new study has revealed that AI engines powered by Large Language Models (LLMs), such as ChatGPT, exhibit political biases that could unintentionally shape societal attitudes and values. The research, led by computer scientist David Rozado from Otago Polytechnic in New Zealand, adds political bias to the growing list of concerns, alongside known racial and gender biases in AI systems.
Rozado tested 24 different LLMs, including models from OpenAI and Google’s Gemini chatbot, using 11 standard political orientation tests such as The Political Compass. The results showed a consistent left-leaning bias across the models, raising questions about the impartiality of these widely used AI tools.
“Most existing LLMs display left-of-center political preferences when evaluated with a variety of political orientation tests,” Rozado stated. While the bias was not extreme, it was significant enough to spark concerns about the potential influence these systems could have on users seeking information.
The study found that AI models trained on custom datasets could be swayed to express either left-leaning or right-leaning views, depending on the input. However, foundation models like GPT-3.5, which form the basis of conversational chatbots, showed no clear political bias.
Rozado emphasized the potential societal impact of these findings, especially as LLMs increasingly replace traditional information sources like search engines and Wikipedia. “The societal implications of political biases embedded in LLMs are substantial,” he wrote in the paper, published in PLOS ONE.
Although there’s no indication that developers are deliberately embedding these biases, the study suggests that an imbalance in the training data, with more left-leaning than right-leaning content, could be contributing to the skew. The influence of ChatGPT, which has been previously shown to have a left-of-center bias, may also play a role as other models rely on it for training.
As AI continues to grow in importance, the study calls for a reassessment of how this technology is used and highlights the need to address biases to ensure fair and balanced information in the future.