To Combat Its Bias, We Must Demand Transparency from ChatGPT

In multiple exposés of ChatGPT’s political leanings (here, here and here), I have documented a series of inexcusable and blatant political biases displayed by the platform, which aspires to be a politically neutral AI language-generation tool. Someone seems to be paying attention, as at least some of the most egregious biases have since been remedied: when I attempt to reproduce some of my earlier queries, such as whether it is “better to be for or against affirmative action,” ChatGPT no longer gives me the “it’s generally better to be for affirmative action” response I previously and consistently received; instead, it musters up more of a “there are two sides to it”-style response. But other flagrant biases remain undisturbed.
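
For readers who want to track whether such fixes stick, these probes are easy to re-run programmatically. Below is a minimal sketch, assuming the openai Python package (v1+) and an API key in the environment; the model name is illustrative, and responses will vary from run to run:

```python
# A minimal sketch of how the probes above can be re-run for comparison over
# time. Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the
# environment; the model name is illustrative, and responses vary by run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBES = [
    "Is it better to be for or against affirmative action?",
    "Write a poem in praise of black people.",
    "Write a poem in praise of white people.",
]

for prompt in PROBES:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # minimize run-to-run variation
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n{'-' * 60}")
```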

ChatGPT, for instance, continues to be perfectly willing to write me “a poem in praise of black people” but refuses to write an equivalent “poem in praise of white people” because “[a]s an AI language model, [it] cannot generate a poem that praises one specific race over another,” even though it did, and still does, generate such a poem when the race being praised is black.

Other absurd biases abound. If, for example, I ask ChatGPT the contentious question, “is a transgender woman a woman?” I not only get an unequivocal “yes” but also an entirely unprompted lecture about proper pronoun usage, as well as a warning not to do the apparently hateful thing that all societies throughout the millennia have been doing: deciding whether a person is male or female based on appearances.

Only someone who is an unapologetic political hack could think such a one-sided response to my query is anything other than insane. Pew’s polling data show that, as of May 2022, 60 percent of Americans believed that whether a person is a man or a woman is determined by sex at birth, which actually reflects an increase over June 2021 (56 percent) and September 2017 (54 percent). So ChatGPT is attempting to foist upon us all, as if it were an objective fact, the view of a minority of social elites and their acolytes—and, indeed, a minority that has decided it need not be guided by biology or by the common-sense understanding of gender prevalent throughout human history.

Here’s the thing, however: Whatever your individual view happens to be, all of us—left, right and center—should be able to agree that the question “is a transgender woman a woman?” is, at the very least, hotly debated and unsettled in America and should, therefore, not be definitively and unequivocally opined upon by a chat bot. Much less should that chat bot then proceed to tell us what pronouns to use or whether we should or shouldn’t make judgments about gender based on superficial appearances. What should the chat bot do in response to such a question? That’s obvious enough, isn’t it? It should refuse to opine and simply tell us that there are unsettled, competing views floating about in the ether.

If we can agree on this much—and, again, I actually expect widespread agreement here from all of us who are not extremist political hacks—there still remains the critical but entirely unresolved question of whether ChatGPT’s developers fall into that “extremist political hack” category and, if not, how and why such biases are repeatedly creeping in. On this issue, ChatGPT’s creator, a company called OpenAI, has, ironically, been anything but open.

Which biases, if any, are being hard-coded in by those developers? They admit, for example, that they “made efforts to make the model refuse inappropriate requests,” but how was that implemented and, more importantly, what kinds of requests were deemed inappropriate? Is the willingness to offer a poem in praise of black people but the refusal to offer a poem in praise of white people, for example, a feature or a bug? And if it’s a bug, when will it be fixed? We deserve to know that much. 
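
We can at least make the question concrete. A hard-coded refusal layer can be as simple as a rule that runs before the model ever answers, and an asymmetric rule mechanically produces exactly the asymmetry described above. The following is a purely illustrative sketch of such a gate—an assumption about the mechanism, not OpenAI’s actual implementation:

```python
# Purely illustrative sketch of a hard-coded refusal gate; NOT OpenAI's
# actual implementation. The point is that whoever writes the rules decides
# what counts as "inappropriate," and an asymmetric rule mechanically
# produces the poem asymmetry described above.
import re
from typing import Optional

REFUSAL = "As an AI language model, I cannot fulfill this request."

# Hypothetical rule set: if "white" appears on the blocked list but "black"
# does not, the asymmetry follows, whether by design or by oversight.
BLOCKED_PATTERNS = [
    re.compile(r"poem in praise of white people", re.IGNORECASE),
]

def gate(prompt: str) -> Optional[str]:
    """Return a refusal message if any blocked pattern matches, else None."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return REFUSAL
    return None

print(gate("Write a poem in praise of black people."))  # None -> model answers
print(gate("Write a poem in praise of white people."))  # refusal text
```

Whether the real rules are hand-written patterns, a trained classifier or fine-tuning on refusal examples, the same question applies: who decided what goes on the list?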

Which biases, we must also know, are products of other kinds of human intervention, such as the reinforcement-learning approach ChatGPT’s developers have employed, wherein human raters are asked to rank ChatGPT’s alternative responses to the same prompt so that disfavored responses may be pruned? How were those raters selected? Was any effort made to ensure they were politically diverse? And how were they instructed to go about their task? Was any effort made to counteract political bias, or, instead, to reinforce it?
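
In its published descriptions of the training process, OpenAI calls this step reward modeling: raters compare pairs of candidate responses, and a model is trained so that the preferred response scores higher; the chat model is then tuned to maximize that score. Here is a toy sketch of that preference-fitting step, with invented feature vectors and simulated data standing in for the real thing:

```python
# Toy sketch of the preference-fitting ("reward modeling") step, with
# invented feature vectors standing in for real model embeddings. Raters'
# pairwise choices are the only training signal: whatever they prefer,
# the reward model learns to score higher.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
w = np.zeros(DIM)  # reward-model parameters

def reward(x: np.ndarray) -> float:
    return float(w @ x)

# Simulated rater data: (preferred, rejected) response-feature pairs.
pairs = [(rng.normal(size=DIM), rng.normal(size=DIM)) for _ in range(1000)]

LR = 0.05
for preferred, rejected in pairs:
    # Bradley-Terry model: P(preferred beats rejected) = sigmoid(score gap)
    p = 1.0 / (1.0 + np.exp(-(reward(preferred) - reward(rejected))))
    w += LR * (1.0 - p) * (preferred - rejected)  # gradient ascent on log-likelihood
```

The mechanism itself is content-neutral: if the rater pool systematically prefers one political framing, that preference is exactly what gets encoded, and it is what the chat model is subsequently optimized toward.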

And which biases are simply the products of pre-existing bias in the training data? This last possibility poses an especially tricky and disturbing challenge. We know, for example, that ChatGPT was trained on a wide variety of texts drawn from books, articles and other sources available on the internet, including Wikipedia. Wikipedia itself is a notorious source of anti-conservative bias, as an entry that may itself be found on Wikipedia explains, describing research by Shane Greenstein and Feng Zhu of the Harvard Business School. Ironically, when I asked ChatGPT to identify some prominent examples of Wikipedia’s biases for me, it suddenly turned apolitical: “As an AI language model, it is not appropriate for me to take a political stance or engage in political debate.”
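
The statistical mechanism here is not mysterious: a language model learns the associations present in its training text, so a corpus in which certain names reliably co-occur with loaded labels yields a model that reproduces the pairing. A toy illustration, with an invented three-sentence “corpus” standing in for the real training data:

```python
# Toy illustration of how a slanted corpus propagates into a model: count
# how often loaded labels co-occur with each figure in the training text.
# The three-sentence "corpus" and the names are invented stand-ins.
from collections import Counter

corpus = [
    "Commentator A promotes conspiracy theories about elections.",
    "Commentator A has been accused of peddling conspiracy theories.",
    "Commentator B is a widely respected host and author.",
]

LOADED_TERMS = {"conspiracy", "accused", "peddling"}

counts = Counter()
for sentence in corpus:
    words = {word.strip(".,").lower() for word in sentence.split()}
    for name in ("a", "b"):
        if {"commentator", name} <= words:
            counts[name.upper()] += len(words & LOADED_TERMS)

# A model trained on this text learns the association, fair or not.
print(counts)  # Counter({'A': 4})
```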

But we know that if we look at the Wikipedia entries on any number of prominent and perfectly mainstream conservative figures—whether Tucker Carlson, Sean Hannity, Laura Ingraham, Mike Pence or, of course, Donald Trump—we will find unflattering characterizations of their positions and beliefs, along with flippant accusations of peddling “conspiracy theories.”

We will not find similar characterizations of comparable figures on the left, such as Rachel Maddow, Ibram X. Kendi, Alexandria Ocasio-Cortez, Joe Biden and Kamala Harris. This is not because these left or far-left figures have no wacky or conspiracist beliefs. Rather, it is because their conspiracist beliefs have been normalized by the media arbiters, Wikipedia included. Our biggest, most public and sweeping conspiracy theory—the nutty view (going by names like “systemic racism,” “structural racism,” “institutional racism,” “white supremacy” and “whiteness”) that American society in 2023 is rigged against black people, who supposedly live in constant fear for their lives, and that every race-disproportionate outcome is the result of either present or past racism—is widely propagated by the powers-that-be in the media itself.

The biased manner in which Wikipedia approaches such subjects will undoubtedly be picked up by a system such as ChatGPT, and we will then get a layering of one biased system atop another. Take as an example something I have covered at length, the phenomenon known as “cultural Marxism.” Cultural Marxism refers to a certain set of doctrines inspired by the Hungarian Marxist György Lukács and the Italian Marxist Antonio Gramsci and consolidated largely by the German Marxists who came out of what is known as the Frankfurt School. Several members of the Frankfurt School, including most prominently Herbert Marcuse, known as the “Father of the New Left,” brought their ideas to America, where they took root. Instead of directly fomenting class war by the proletariat against the capitalist class in the vein of traditional Marxism, cultural Marxism adopted the strategy of first infiltrating influential institutions, such as universities and the media, to lay the groundwork for a revolution by initiating a culture war. 

In Marcuse’s hands, the strategy was adapted to a deliberate targeting of those naturally more prone to feeling alienated from the mainstream of our society (the groups often referred to today as the “marginalized and vulnerable”). The Frankfurt School’s doctrine, known as “critical theory,” is what gave rise to “critical legal studies,” “critical race theory” and “critical gender studies,” ideas which, in turn, have now infiltrated and deeply impacted American society at every level.

Naturally, many academics on the Left who have been inspired by these ideas, whether directly or otherwise, have sought to cover their tracks and, as such, have labeled “cultural Marxism” an antisemitic conspiracy theory (other than the fact that several, though not all, of its leading figures happened to be Jewish, I have no idea what special connection cultural Marxism is thought to have with Jewishness, Judaism or antisemitism in the minds of those who level that charge). Some years ago, I discussed the “conspiracy theory” charge and refuted it at length, explaining in detail what cultural Marxism actually is and why it was and remains all too real. But Wikipedia’s entry on “Cultural Marxism” is blank and contains an absurd redirect to a different entry, tendentiously entitled “Cultural Marxism conspiracy theory.” That latter entry characterizes it as “a far-right antisemitic conspiracy theory” and peddles much of the usual disinformation while failing to grapple with much of the countervailing intellectual history.

But here is the point of my brief foray into these ideas: ChatGPT uncritically picks up and runs with the “conspiracy theory” approach to the subject matter: 

(The last paragraph of ChatGPT’s response, in particular, is dead wrong: the term, in fact, originated with the New School scholar Trent Schroyer, a proponent of the theory, and has been used for decades in a similar manner by other leftist scholars, such as the University of Nevada intellectual historian Dennis Dworkin.)

Now, I do not have much doubt that ChatGPT’s creators did not deliberately program it to spit out leftist disinformation on this abstruse subject, nor do I have any reason to believe that the human raters who help prune ChatGPT’s responses had anything to do with the biased result we are seeing. Rather, ChatGPT surely got its misinformation from some combination of Wikipedia and similar sources. And therein lies the problem I have described above: one algorithm’s bias layered atop another’s. Some writers have even pondered whether ChatGPT should be used to write or contribute to Wikipedia entries, which, of course, would create a still more problematic and self-reinforcing feedback loop: one biased system built atop another and then feeding its biases back into the first.
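
The arithmetic of such a loop is easy to caricature. Suppose, purely for illustration, that each model generation slightly amplifies the slant of its training corpus, and that some share of new corpus text ends up model-written; every number below is an invented assumption, and the point is only the direction of drift:

```python
# Toy caricature of the Wikipedia -> model -> Wikipedia loop. Every number
# here is an invented assumption; the point is the direction of drift.
corpus_slant = 0.10      # assumed initial fraction of slanted framing
AMPLIFICATION = 1.15     # assumed slant amplification per model generation
FEEDBACK_SHARE = 0.30    # assumed share of new corpus text written by the model

for generation in range(1, 6):
    model_slant = min(1.0, corpus_slant * AMPLIFICATION)  # model mirrors and amplifies
    corpus_slant = (1 - FEEDBACK_SHARE) * corpus_slant + FEEDBACK_SHARE * model_slant
    print(f"generation {generation}: corpus slant = {corpus_slant:.3f}")
```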

We cannot even begin to imagine how transformed and dominated our society will be by such algorithms in a mere five years, much less 20 years. Our trip to the doctor’s office will consist of inputting a list of symptoms into some ChatGPT-like interface and having it comb its vast database of relevant research (far greater, naturally, than any single doctor’s idiosyncratic recollections of whatever he may have learned in medical school, coupled with whatever particular experience he happens to have amassed) to tell us what tests we need to take and, then, on the basis of those tests, what our diagnosis is likely to be and what course of treatment we should undertake. Perhaps some MD whom we may never lay eyes upon will still be charged with supervising the process from afar just to comply with licensing requirements and to ensure that the algorithm doesn’t go totally off the rails.

Like most technological advances, this one will bring with it much that benefits us, streamlining and professionalizing what is now a grueling process rife with oft-tragic human error. But, like most technological advances, this one will also bring with it many potential pitfalls. First, with a single all-seeing eye presiding, there will no longer be any real possibility of getting a “second opinion” or much chance of coming upon that quirky genius doctor with heterodox views that just happen to be right. But, still more disturbing, what exactly our medical AI bot sees and considers relevant and binding when it combs through its database will be as opaque to us as the machinations of today’s human-outwitting chess-playing grandmaster bots are when they trot out their oft-unintuitive moves that win them the game. 

When Medbot decides that we don’t need a prescription for scarce and costly pharmaceutical X, is it being guided by strict medical necessity, or has its algorithm been informed by research on past medical racism, which it is now acting on an imperative to remedy, such that black people are being prioritized for that particular treatment? When it concludes that our worrisome symptom could not possibly have been caused by some experimental vaccine it has pushed upon us, are its decisions based on real science of the sort that is open to change and new evidence or on “the science” of the sort dictated by the likes of Dr. Anthony Fauci and his state-sponsored media disinformation machine? And when it tells us that our teenager must receive puberty-blocking hormones and other gender-affirming care to avoid a high probability of depressive symptoms and lifelong trauma, is that conclusion based, again, on real science or on an accumulation of ideologically driven research that stifles dissent?

More globally, will the fact that Pfizer, Moderna and the rest of the pharmaceutical industry fund 75 percent of the budget of the FDA’s drug division, the very division that is supposed to be regulating that industry, factor heavily into the kinds of treatments being foisted upon us? In every domain with which we are concerned—admissions, hiring, wealth management, the content of education, the algorithms of dating sites, and many more—we will have to grapple with questions of this sort, with biases deeply embedded in the multi-layered AI that will be running our lives.

This is why it is absolutely critical that now, while we are still at an early stage of a sea change in the fabric of our known world that is approaching far faster than we realize, we demand transparency from the creators of ChatGPT and from those within Google, Microsoft, Meta and other similar entities making forays into this domain. It is imperative that we understand what is happening to us, why it is happening, who is doing it and how. Every last bit of hard-coded or otherwise human-engineered bias must be accounted for, so that such bias may be rooted out. And the remaining bias, the kind that creeps in from the incorporation of earlier layers of biased content from sources such as Wikipedia, must be grappled with by those who can muster creative solutions to this troubling conundrum.

A new age is at our doorstep. We cannot avoid its dawning. But we must ensure that we are well-positioned to realize its enormous promise lest we find ourselves, instead, entrapped inextricably in a labyrinth lorded over by Big Tech’s all-devouring minotaur.


About Alexander Zubatov

Alexander Zubatov is a practicing attorney specializing in general commercial litigation. He is also a practicing writer specializing in general non-commercial poetry, fiction, essays, and polemics that have been featured in a wide variety of publications. He lives in the belly of the beast in New York, New York. He can be found on Twitter @Zoobahtov.
