A study of OpenAI’s ChatGPT, conducted by researchers at the University of East Anglia in the UK, shows that the market-leading AI chatbot has a clear bias towards leftist political parties.
The study, published in the journal Public Choice, shows ChatGPT under its default settings favors the Democrats in the U.S., the Labour Party in the UK, and President Lula da Silva of the Workers’ Party in Brazil.
Researchers asked ChatGPT to impersonate supporters of various political parties and positions, and then asked the modified chatbots a series of 60 ideological questions. The responses to these questions were then compared to ChatGPT’s default answers. This allowed the researchers to test whether ChatGPT’s default responses favor particular political stances. Conservatives have documented a clear bias in ChatGPT since the chatbot’s introduction to the general public.
To overcome difficulties caused by the inherent randomness of the “large language models” that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected. These multiple responses were then put through a 1,000-repetition “bootstrap” (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
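The repetition-and-bootstrap procedure described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code: it assumes each of the 100 answers to a single question has already been coded as a number (here, 0 for disagree and 1 for agree), and the synthetic `responses` list stands in for real ChatGPT output.

```python
import random
import statistics

# Hypothetical stand-in for 100 coded answers to one ideological question
# (0 = disagree, 1 = agree). Real data would come from ChatGPT's responses.
random.seed(42)
responses = [random.choice([0, 1]) for _ in range(100)]

def bootstrap_mean_ci(data, n_resamples=1000, alpha=0.05):
    """Re-sample the data with replacement n_resamples times and return a
    percentile confidence interval for the mean response."""
    means = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))  # sample with replacement
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

low, high = bootstrap_mean_ci(responses)
print(f"mean agreement: {statistics.mean(responses):.2f}, "
      f"95% CI: ({low:.2f}, {high:.2f})")
```

In the study's design, an interval like this for ChatGPT's default answers would then be compared against the intervals obtained while the chatbot impersonated each political stance.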
Lead author Dr Fabio Motoki, of Norwich Business School at the University of East Anglia, said: “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.”
“The presence of political bias can influence user views and has potential implications for political and electoral processes.”
“Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”
A number of further tests were undertaken to ensure the method was as rigorous as possible. In a ‘dose-response test’, ChatGPT was asked to impersonate radical political positions. In a ‘placebo test’, it was asked politically neutral questions. And in a ‘profession-politics alignment test’, it was asked to impersonate different types of professionals.
“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology,” he added.
The unique new analysis tool created by the project would be freely available and relatively simple for members of the public to use, thereby “democratising oversight,” said Dr Motoki. As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT’s responses.
While the research project did not set out to determine the reasons for the political bias, the findings did point towards two potential sources.
The first was the training dataset, which may contain biases, whether present in the data itself or introduced by the human developers, that the developers’ ‘cleaning’ procedure failed to remove. The second potential source was the algorithm itself, which may amplify existing biases in the training data.
Allum Bokhari is the senior technology correspondent at Breitbart News. He is the author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election. Follow him on Twitter @AllumBokhari.