Twitter’s stated mission is to “give everyone the power to create and share ideas and information instantly, without barriers.” On Thursday, Project Veritas released a video showing Twitter engineers boasting about their use of “shadowbanning” to curb conservative speech.
No amount of “deceptive editing” accusations or “out of context” arguments can contradict the tale of the tape. One engineer lays out, in unedited detail, the process of shadowbanning—which consists of Twitter allowing a post to exist but limiting its reach by hiding it from view, thus ensuring that no engagement occurs. The poster of the tweet will merely think that no one has responded, one way or the other, and that person’s followers—who have actively chosen to read his tweets—will never know the tweet existed.
Shadowbanning lets Twitter have it both ways: to play the paragon of free speech, claiming it doesn’t actually remove speech, while simultaneously allowing its engineers and their partisan algorithms to hide from view any posts deemed unsavory by their lights. Essentially, Twitter is disappearing speech the company deems unfit. What’s more, Twitter employees happily apply salve to their conscience by noting this is a means of looking for bots—i.e., nonhuman accounts—which they seem to conflate with “redneck” thought.
Twitter isn’t merely vilifying the opposition. This is dehumanization, pure and simple.
Along with YouTube’s one-sided demonetization of various conservative videos, Twitter’s shadowbanning points to yet another tech giant’s broadside into the heart of honest discussion and the free exchange of ideas.
The Twitter shadowban strategy, however, is worse than mere censorship. It’s the tech company acting as a drug dealer, combining its market hegemony with addiction psychology to force its users to act differently. In particular, Twitter is using an addiction-fueled meaning economy to curate conversation by removing social media rewards from speech its executives don’t agree with. You can’t have your digital heroin if you say the wrong things. Thus, whether Twitter engineers know it or not (and I suspect they do), shadowbanning is orders of magnitude more pernicious and deleterious to free thought and the exchange of ideas than the actual removal of tweets and suspension of accounts.
Classic Addiction Psychology
Having a tweet actively removed or an account blocked allows for responses to the removal—indignation, anger, rethinking of positions, and a general questioning of the value and ethics of the platform. Suspended accounts have also created symbols around which users rally and have fostered vigorous (though not always pretty) debate about vital issues. In the wake of high-profile account bans and suspensions, we’ve seen #Freexxx hashtags trend and foster substantive conversation within the Twitter community over the social media giant’s selective subjectivity with regard to the implementation of its community guidelines.
Shadowbans, however, offer Twitter the ability to censor and curate discussion without the pesky repercussions of anyone noticing and responding to it. The user merely thinks no one engaged with his post. Shadowbans can censor debate and conversations about censorship before they ever occur.
The dangers of this kind of censorship come into clearer focus when we examine the classic addiction psychology Twitter is using to control speech and frame debate.
Social media networks, for the vast majority of those who populate them, offer a new system of chemically induced meaning currency. Popularity, social stature, and, most importantly, self-worth are defined by likes and badges. In such an economy, those who control the pipeline control meaning and worth itself.
Among Millennials and younger generations, social media engagement is a status marker. Popularity among peers is measured by social media engagement. “Likes,” retweets, and badges offer the delivery and proof of status (as well as pathways to monetization). As a result, teens will often erase posts that don’t get enough “likes.” Wired magazine author Andrew Watts noted: “If I don’t get any likes on my Instagram photo or Facebook post within 15 minutes you can sure bet I’ll delete it.”
After Facebook introduced an editing function, users edited posts to accumulate likes. If a post didn’t get attention rapidly, people would either delete or edit the item to generate more engagement.
Justin Rosenstein, one of the Facebook designers responsible for the “Like” button, has said: “The main intention I had was to make positivity the path of least resistance. And I think it succeeded in its goals, but it also created large unintended negative side effects. In a way, it was too successful.” (Emphasis added.)
But this Attention Economy is driven by something chemical: dopamine. We have seen the rise of a dopamine-based meaning economy in which shots of dopamine—delivered digitally via likes, retweets, hearts, and badges—are conflated with meaning itself. Facebook’s founding president Sean Parker confirmed that this was intentional:
The thought process that went into building these applications, Facebook being the first of them, . . . was all about: “How do we consume as much of your time and conscious attention as possible?” And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you … more likes and comments. It’s a social-validation feedback loop . . . exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology. The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway.
It’s as if back in the 1950s, the tobacco industry had come clean about its desire to get us all addicted, and no one cared. Twitter, Facebook, and Instagram are essentially Skinner Boxes, conditioning chambers, with the user base in the role of the creature furiously clawing at the levers to receive a daily dose of dopamine.
A Corporate Culture of Self-Censorship
What happens when the dopamine is cut off for certain ideas while allowed to flow freely for others?
Twitter, by shadowbanning, is using addiction psychology to curate the things people choose to talk about and, by extension, think about. Twitter co-founder Evan Williams, in a moment of either hubris or refreshing honesty, admitted it: “I thought [that] once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,” he noted. “I was wrong about that.” And while Williams has moved on to focus his attention on Medium—an online long-form writing platform—the culture at Twitter, as the Project Veritas videos prove, seems nonetheless to be defined by an antipathy toward free thought and speech.
In choking off the currency of engagement via shadowbans, Twitter ensures that people begin to censor themselves in an attempt to feed their addiction and garner more social media engagement, erasing posts and curating their tweets’ language, tone, and content to maximize response. Twitter is creating a mechanism whereby people are the instruments of their own mental incarceration. Instead of having to deal with the fallout from real censorship, Twitter is fostering a culture of self-censorship. Instead of having a group of followers discuss, support, and debate posts, Twitter merely makes users whitewash their own thinking to chase engagement. In this case, the whitewashing will always trend in one political direction, and it’s never going to be the right.
The result of Twitter’s shadowbanning will be a chemically induced online Victorian-level squeamishness in the face of controversy that, over time, will replace passionate disagreement, honesty, and debate with a bland pablum of Twitter’s creation. This will, no doubt, smooth out the conversation, but it will also produce a populace incapable of expressing and exploring difficult ideas honestly.