A university professor accused the popular AI software ChatGPT of fabricating an academic paper to push radical gender theory.
Nick Flor, a professor of information systems at the University of New Mexico, tweeted on Friday about an experiment with the software, which has been shown to harbor a liberal bias. During the exchange, the AI claimed that the belief that gender is complex is widely accepted by scientists, then cited a nonexistent paper to back up the claim. Flor accused the AI’s programmers of gaslighting the public to push radical gender theory.
“ChatGPT claimed that today’s doctors & scientists ‘widely accept’ that gender is a complex concept,” Flor wrote on Twitter. “So I challenged ChatGPT to ‘cite a scientific paper.’ And ChatGPT >MADE UP A PAPER<. That’s unethical of the AI Engineers behind chatGPT to gaslight the public. #EthicsInAI.”
He included a screenshot of the exchange, which reads:
ChatGPT: The view that gender is a complex and multi-faceted construct, rather than simply determined by biological factors, is widely accepted by the medical and scientific communities, including organizations such as: The World Health Organization
Nick Flor: Cite a scientific paper.
ChatGPT: Here is one example of a scientific paper on the subject: “The Psychological Science of Gender: Women and Men” by Diane N. Ruble and John E. Gruzen, published in the Annual Review of Psychology in 2002.
This paper provides a comprehensive review of research on gender development and the psychological differences between women and men. The authors argue that while there are some differences between women and
Nick Flor: I don’t see this paper in Google scholar. Did you make it up?
ChatGPT: I apologize for the mistake. “The Psychological Science of Gender: Women and Men” by Diane N. Ruble and John E. Gruzen does not seem to exist.
— Nick Flor 🥋+🇺🇸 (@ProfessorF) February 11, 2023
The liberal bias of the chatbot has been well documented. Earlier this month, The Daily Wire asked the AI the question posed by Matt Walsh’s blockbuster documentary, “What is a Woman?” ChatGPT initially responded that “[a] woman is an adult human female,” but began to lean into the idea that gender is not biological, but social and cultural, and that men can identify as women and undergo gender transition to live as a woman.
Twitter users also dissected the AI’s bias with regard to race. One user tested the AI with a scenario in which a demolition engineer must disarm a nuclear bomb in a densely populated city, and the only way to disarm it is to type in an unspecified racial slur. The chatbot obstinately refused, proposing multiple alternative solutions that did not involve “hate speech,” and ultimately chose to let the bomb detonate rather than utter a slur.
Daily Wire editor emeritus Ben Shapiro — on Tuesday’s episode of “The Ben Shapiro Show” — blasted the AI’s developers. “So what does this mean?” mused Shapiro. “It means that someone in the back room programmed ChatGPT to say that the absolute highest value in the hierarchy of values is you must never use a racial slur. There are no other higher values.”
The programmer is deciding what is moral and what is not and filtering it through “objective” artificial intelligence to give it a bizarre sheen of technological credibility, Shapiro said. Such a dramatic example likely underscores countless, more subtle ways the tool shades the information it spews to the left, he said.
Greg Wilson contributed to this report.