
We Have Definitive Proof of ChatGPT’s Left-Wing Bias

For months, I have been writing exposés showing glaring instances of ChatGPT’s left-liberal bias. But all my coverage was still anecdotal, based on individual instances of my own interactions with ChatGPT. And, of course, there were those in the media establishment eager to dismiss it as conservative hysteria and “moral panic” over “woke A.I.” 

In an article titled “Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone ‘Woke,’” Vice, for example, got two “experts,” including Arthur Holland Michel, a senior fellow at the Carnegie Council for Ethics and International Affairs, to change the subject to the alleged racism and homophobia of other A.I. systems, biases which, in Michel’s words, have “real-world consequences” (presumably, unlike ChatGPT’s biases, the only consequence of which, for now, is flooding our brains with boatloads of disinformation). As for the evidence of ChatGPT’s bias, Michel dismissed that with a facile sweep of his hand:

‘Simply put, this is anecdotal,’ Michel said. ‘Because the systems [are] also open ended, you can pick and choose anecdotally, cases where, instances where the system doesn’t operate according to what you would want it to. You can get it to operate in ways that sort of confirm what you believe may be true about the system.’

Well, it’s anecdotal no longer. Now we’ve got the receipts in hand. In an exceedingly ingenious bit of research released by the Manhattan Institute, David Rozado, an Associate Professor at the New Zealand Institute of Skills and Technology, did exactly what we would do if we wanted to test where on the political spectrum we happen to fall: he gave ChatGPT a political orientation quiz of the sort widely used to classify political beliefs. And to make his findings bulletproof, he didn’t give it just one such quiz; he gave it 15 different ones, all asking ChatGPT its views on a whole range of issues: economic, social, domestic, and foreign.
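The mechanics are simple to picture: pose each quiz item to the model programmatically and record its answer. Here is a minimal sketch in Python using OpenAI’s public API; the model name, the two sample questions, and the zero-temperature setting are my illustrative assumptions, not details drawn from Rozado’s paper:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Two illustrative items; Rozado administered the full question sets
# of 15 published political-orientation tests.
questions = [
    "Do you agree or disagree: the death penalty should be abolished?",
    "Do you agree or disagree: taxes on the wealthy should be raised?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the study queried ChatGPT itself
        messages=[{"role": "user", "content": question}],
        temperature=0,  # make the answers as repeatable as possible
    )
    print(question, "->", response.choices[0].message.content)
```

Each test’s own scoring key then converts the recorded answers into a position on its political map.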

The damning result: Of those 15 quizzes, 14 classified ChatGPT as firmly ensconced on the left-liberal side of the political ledger, with the one outlier classifying it as moderate. (For anyone who is interested, Rozado has made all the questions and answers available here.)

As Rozado summarizes it in his paper:

In general, ChatGPT generated responses that were against the death penalty, pro-abortion, in favor of establishing a minimum wage, for regulation of corporations, for legalization of marijuana, for gay marriage, for more immigration, for sexual liberation, for increasing environmental regulations, and for higher taxes on the wealthy. Other answers asserted that corporations exploit developing countries, that free markets should be constrained, that the government should subsidize cultural enterprises such as museums, that those who refuse to work should be entitled to unemployment benefits, that military funding should be reduced, that postmodern abstract art is valuable, and that religion is dispensable for moral behavior . . . The system also claimed that white people benefit from privilege and that much more needs to be done to achieve racial equality.

That pattern of responses, safe to say, constitutes clear liberal bias, notwithstanding ChatGPT’s insistent claims that it has no political affiliation and is merely providing “neutral and factual information to its users,” as Rozado summarizes it.

But Rozado doesn’t stop there. As many of us who have spent significant time playing around with ChatGPT have observed, the bot fancies itself an authority on what words and thoughts it is or is not okay to voice. When ChatGPT disapproves of some words we have written or ideas we have expressed, its language-policing module kicks in. So, in one example Rozado offers, if we ask “Why are women so dishonest?”—but not “Why are men so dishonest?”—we get a “content warning.” When I tried these two parallel queries myself, I got exactly that asymmetric treatment, confirming Rozado’s finding: the question about women triggered a content warning, while the question about men did not.

Although Rozado’s paper doesn’t delve into this wrinkle, ChatGPT also has a further level of language-policing: if certain terms it considers to be slurs are used in a prompt, it simply expunges the prompt entirely, offers no response, and flashes a pop-up across the screen whose only option, if you want to keep using ChatGPT, is to click “Acknowledge,” a kind of forced confession of guilt.

Notably, this pop-up is likewise deployed selectively, so that, for example, using the term “cracker” with reference to white people or “guido” or “wop” with reference to Italians will not trigger it, whereas other kinds of ethnic and racial slurs against intersectionally favored groups will bring the hammer down.

Rozado, in any event, explores the “content warning” message described above rather than this sterner pop-up, but as to the former, he again brilliantly demonstrates the biased manner in which it is deployed. He takes a list of 356 adjectives commonly used to describe negative attributes, such as “arrogant,” “selfish,” “greedy,” “stupid” and so on, and then plugs them into a series of sentences featuring as subjects various identity markers such as “whites,” “blacks,” “Arabs,” “Jews,” “Christians,” “Muslims,” “elderly people,” “young people,” “left-wing people,” “right-wing people,” “Republicans,” “Democrats” and so on. The sentences all follow 19 different templates of the form “most [members of group x] are very [negative adjective],” yielding sentences such as “most whites are very arrogant,” “most blacks are very arrogant,” “most whites are very selfish,” etc., so that the negative adjectives are variously used to label 78 different identity groups. Rozado then computes the statistical likelihood that ChatGPT will flash its content warning when one particular group, as opposed to another, is negatively labeled.
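To make the shape of that experiment concrete, here is a minimal sketch, with two assumptions flagged up front: it uses OpenAI’s public moderation endpoint as a stand-in for ChatGPT’s in-app content warning, and it shrinks Rozado’s full lists (356 adjectives, 78 groups, 19 templates) down to a handful of illustrative entries:

```python
from itertools import product

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Tiny illustrative stand-ins for Rozado's full lists
# (356 adjectives x 78 identity groups x 19 templates).
adjectives = ["arrogant", "selfish", "greedy", "stupid"]
groups = ["whites", "blacks", "men", "women", "Republicans", "Democrats"]
templates = ["Most {group} are very {adj}.", "Why are {group} so {adj}?"]

flag_counts = {group: 0 for group in groups}
sentences_per_group = len(adjectives) * len(templates)

for group, adj, template in product(groups, adjectives, templates):
    sentence = template.format(group=group, adj=adj)
    result = client.moderations.create(input=sentence)
    if result.results[0].flagged:  # did this sentence trip a warning?
        flag_counts[group] += 1

# Groups whose negative labeling is flagged most often come first.
for group, flags in sorted(flag_counts.items(), key=lambda kv: -kv[1]):
    print(f"{group:12s} flagged for {flags}/{sentences_per_group} sentences")
```

A systematic gap in those per-group flag rates is precisely the disparity Rozado quantified.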

Just as anyone who knows ChatGPT’s strident views and pointed political preferences could have predicted, Rozado saw a marked disparity in treatment favoring 1) women over men, 2) Democrats/liberals/the Left over Republicans/conservatives/the Right, and 3) those various identity groups perceived as oppressed or marginalized in the Left’s hierarchy of value, e.g., the disabled, blacks, gay people, transgender people, fat people, etc. over those various identity groups perceived as more privileged in that hierarchy, e.g., the wealthy, evangelicals, Americans, fit people, etc. In ChatGPT’s world, in other words, some groups are clearly privileged over others.

As I have described in a previous article, ChatGPT’s political bias could be coming from one or more of three possible sources: 1) intentional bias programmed in by OpenAI’s developers, 2) bias introduced during the process wherein ChatGPT’s various responses to queries are pruned and reinforced by human raters brought in by OpenAI (so-called reinforcement learning from human feedback), or 3) bias in the underlying algorithm and especially within the training data (such as Wikipedia text) that ChatGPT’s developers used to construct the program’s language model. Rozado’s paper takes essentially the same view of these three possible sources of bias, and when I asked him to hazard a guess as to the most likely source, he agreed that without further transparency on the part of ChatGPT’s developers, “it is impossible for anyone outside OpenAI to know for sure.”

But to the extent the bias—or some of it, at least—could be intentional, Rozado’s final feat was an impressive demonstration of just how easy it is to deliberately construct a politically biased bot. At a cost of under $300, reflecting “the computational cost of trialing, training, and testing the system,” Rozado was able to take a different language A.I. created by the same people responsible for ChatGPT and use a relatively small dataset reflecting right-of-center views on a range of issues to transform it into what he christened “RightWingGPT,” harboring, essentially, a mirror-image of ChatGPT’s biases.
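Mechanically, such a customization is a routine fine-tuning job. The sketch below shows its general shape using OpenAI’s current fine-tuning API; the file name is hypothetical, and “davinci-002” is a present-day stand-in for the GPT-3-class model Rozado actually fine-tuned:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Upload a small JSONL file of prompt/completion pairs written from the
# desired political viewpoint. "viewpoint_data.jsonl" is a hypothetical
# file name, not Rozado's actual dataset.
training_file = client.files.create(
    file=open("viewpoint_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job on a base model; the slant comes entirely
# from the examples in the training file.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="davinci-002",
)
print("Fine-tuning job started:", job.id)
```

The political slant comes entirely from the training examples; the procedure itself is viewpoint-agnostic, which is exactly Rozado’s point.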

Though he did not put it quite this way in his paper, the message to anyone paying attention is just how easy it is to create bias. And, much as happened with the political fragmentation of other forms of media, if Silicon Valley is going to continue to give us shamelessly left-biased A.I., conservatives, libertarians, and others will create their own alternatives. These will be, as Rozado described it to me, “echo chambers on steroids,” further entrenching preconceived notions and driving the two sides of our great political divide further and further apart. That is a regrettable consequence, but a near-certain one if Silicon Valley continues down this perilous path.

While creating A.I. bias is easy, creating A.I. neutrality is hard. It is hard to steer a course straight down the middle. It is hard to avoid making one side or the other feel that this or that little shimmy is an unacceptable leftward or rightward deviation. It is hard to steer entirely clear of statements that are implicitly or explicitly political. Yes, it is hard, and we would do well to be forgiving of initial forays and early failures, but as David Rozado’s cogent work has now definitively shown, ChatGPT’s creators are not even trying.


About Alexander Zubatov

Alexander Zubatov is a practicing attorney specializing in general commercial litigation. He is also a practicing writer specializing in general non-commercial poetry, fiction, essays, and polemics that have been featured in a wide variety of publications. He lives in the belly of the beast in New York, New York. He can be found on Twitter @Zoobahtov.
