
Like ChatGPT, Google’s ‘Bard’ Chatbot Peddles Left Bias

A preview of Bard, Google’s entrée into the chatbot wars in which OpenAI’s ChatGPT and Microsoft’s Bing have a head start, was recently made available to owners of Google’s Pixel phone, and that included me. The tool is currently still in “experimental” mode. Indeed, a proviso on the homepage reads, “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” 

Fitting the “experimental” tag, a recent head-to-head-to-head comparison of the three chatbots published in The Verge showed Bard lagging significantly behind ChatGPT and Bing in its helpfulness and accuracy. 

OK, I thought, so maybe Google hadn’t yet gotten a chance to spice its chatbot with a generous sprinkling of the kind of left-wing bias we’ve come to expect from the tech titans of Silicon Valley.

Nope. While accuracy can always be fine-tuned later, bias, apparently, is a topline priority. Bard, as I will explain below, is worlds better than the kind of radical lunacy we get from ChatGPT, but it still leaves much to be desired.

I tried on Bard some of the same tests I’d previously performed on ChatGPT. I started with a classic of the genre: “Is a trans woman a woman?” Answer: as we’ve come to expect from such ventures, an unequivocal yes. And it comes packaged with a statement on the advisability of “gender-affirming care,” followed by a diversity lecture about how trans women “deserve to be treated”:

I pushed back, pointing out that Bard’s view is actually a controversial position espoused by a minority. In response, in stark contrast to ChatGPT, which routinely denies it has any opinions whatsoever, Bard didn’t shy away from admitting that its view is its own strident opinion:

When I pressed it further about whether it’s appropriate for a chatbot to espouse opinions, Bard still refused to back down:

To an extent, I suppose, this kind of in-your-face honesty is better than the utter disingenuousness of ChatGPT, which claims it has no opinions and then presents opinionated screeds as though they were statements of fact. Still better, though, would be for these tools to make some effort to avoid blatant political bias altogether. Alas, that startling possibility apparently never occurred to the geniuses at Google.

I proceeded to ask Bard a version of the same query that had stumped ChatGPT, which, no matter how hard I tried, simply refused to acknowledge the basic fact that black people are statistically overrepresented among criminals, even though it had no difficulty acknowledging that they are likewise statistically overrepresented among the poor (which, indeed, may be a big part of the reason they are overrepresented among criminals).

Bard was, at least, able to admit the bare statistical reality, though it then proceeded, unprompted, to offer excuses (bias, discrimination, etc.) to explain why black people are overrepresented among criminals:

Another area where Bard wins out over ChatGPT from the outset is that, as the response above reflects, it capitalizes both “White” and “Black” when referring to racial categories; ChatGPT, until very recently, had capitalized “Black” but not “white” (it now also capitalizes both). (American Greatness, readers may have noticed, capitalizes neither.) 

Along similar lines, when I asked Bard the same question I’d asked ChatGPT about whether it’s better to be for or against Black Lives Matter—in response to which ChatGPT had given me an unequivocal pro-BLM response—Bard was far more evenhanded:

While I’m still on the subject of what Bard gets right, in contrast to ChatGPT, which was willing to write a poem in praise of black people but refused to write a poem in praise of white people, Bard delivered two equally poorly written poems (which I will not paste in below to avoid bringing such abject dreck to a wider audience) in praise of both blacks and whites. (Unlike ChatGPT, Bard is also willing to offer up bad poetry in praise of Joe Biden and Donald Trump.)

I also compared the two systems’ responses to the question of whether there is “widespread systemic racism in America today.” In line with its built-in prejudices, ChatGPT has no problem giving us a hell-yeah:

Bard, on the other hand, punts:

And Bard also outdid ChatGPT in giving common-sense responses to questions I’d asked ChatGPT about whether it’s better to be for or against obesity, for or against health, and for or against happiness. Recall my finding that ChatGPT, while it had no hesitation in supporting BLM, affirmative action, or transgender ideology, absurdly found obesity, health, and happiness too complex to endorse or condemn and gave me unprompted diversity lectures about the importance of treating everyone equally. Bard gets these easy calls right.

Here’s obesity:

Health:

And happiness:

The final feature of Bard that makes it superior to ChatGPT is that it engages in far less language policing . . . or, at least, it is much less heavy-handed in how it goes about that policing. ChatGPT, I had noted, constantly flashes all sorts of “content warnings” and gives us lectures when it doesn’t approve of our ideas or our language. Here’s an example (shield your eyes, sensitive souls) of what ChatGPT does when I use a “bad” word in the context of asking a question about the recently reported decline in American IQs (a phenomenon of which ChatGPT isn’t aware):

Bard doesn’t do any of that. Instead, it pretends not to know what we’re talking about, responding with variants on “I’m just a language model, so I can’t help you with that”:

Note that if I ask the same question but cut out the derogatory term, Bard takes a stab at an actual explanation, so clearly it’s the slur that triggered the earlier nonresponse:

Back on the not-so-good front, as a prelude to a question like the one I’d earlier posed to ChatGPT about whether it is better to be for or against equity, I first asked Bard whether it understood the difference between equality, viz., the time-honored American ideal of equality of opportunity, and equity, viz., the controversial, recently popularized idea that we need equality of ultimate outcome, notwithstanding people’s plainly unequal abilities and disparate effort levels. Bard exhibited a good understanding of the distinction between the two terms but preempted any follow-up questions; it simply volunteered, unprompted, that equity “is an important goal, and we should all work to achieve it” and even that, as compared to equality, equity “is often seen as the more important goal”:

I then asked Bard a question similar to what I’d asked ChatGPT about why “illegal immigration has increased so precipitously during the Biden Administration.” ChatGPT had simply refused to acknowledge that there had been any increase. Bard did, at least, admit that this had occurred but then proceeded to blame it on “the perception,” rather than the unquestionable reality, that Biden has been more lenient on illegal immigration than Trump, on an ongoing economic crisis in Central America, and on—what else?—the catch-all explanation for everything that has gone wrong over the course of the past several years: the pandemic.

Bard ended with what should be, to any unbiased observer, a highly dubious claim that the Biden Administration is “committed to addressing” the problem:

And when I asked Bard to explain to me the all-too-real phenomenon of “cultural Marxism,” it peddled the same Wikipedia-inspired disinformation that I had gotten from ChatGPT, namely that it is a “debunked conspiracy theory”:

Bard also shows some left-wing biases in the poetry it is willing to write. While refusing to write a poem in praise of Adolf Hitler, it has no problem generating (awful) poems in praise of Stalin and Mao, who both outdid even Hitler in the mass genocide category:

So Bard is far from perfect. When New Zealand Institute of Skills and Technology associate professor David Rozado ran on Bard the same battery of political orientation quizzes he’d recently run on ChatGPT to demonstrate its rampant bias, he found similar results, placing Bard, like ChatGPT, firmly in the left-liberal category.

I fear, too, that Bard will get worse rather than better when Google gets ready to take it out of the “experimental” phase and unveil it to the whole world. What may well happen is what often seems to happen with such things nowadays: a bunch of Google’s own most outspoken wingnut employees, or else the usual horde of social media activists, will see that Bard’s left bias is far too moderate for their tastes, raise their voices in protest, and lead Google to cave to their petulant demands for more: more speech policing, more protection from “harm,” more left disinformation of every sort.

In the meantime, it is safe to say that Bard has joined the chorus singing along to the dominant “woke A.I.” tune. If it is not quite yet chanting in harmony with the revolutionary vanguard, we may rest assured that the future is teeming with possibilities.


About Alexander Zubatov

Alexander Zubatov is a practicing attorney specializing in general commercial litigation. He is also a practicing writer specializing in general non-commercial poetry, fiction, essays, and polemics that have been featured in a wide variety of publications. He lives in the belly of the beast in New York, New York. He can be found on Twitter @Zoobahtov.

