
No Algorithm for the Human Soul

Facebook founder and CEO Mark Zuckerberg this week faced questions from powerful committees in both houses of Congress on the subject of how the social media platform handles and protects its users’ data. Most of the questions dealt with the breach and collection of user data, and whether Facebook has been cavalier with the privacy of its users.

In his testimony, Zuckerberg spoke on a range of topics, from how the platform works to how the company is cooperating with Special Counsel Robert Mueller’s investigation, along with some of the challenges the company faces in coming years. Although much of what Zuckerberg had to say raises troubling questions about privacy in our internet age, one statement in particular was most troubling of all. It had to do with how Facebook regulates speech, though I doubt Zuckerberg would concede that this is what he’s up to. In a discussion about how Facebook can distinguish between “hate speech” and “legitimate political discourse,” Zuckerberg said: “Hate speech, I am optimistic that over a five to 10-year period we will have [artificial intelligence] tools that can get into some linguistic nuances of different types of content to be more accurate, to be flagging things to our systems, but today we’re just not there on that. Until we get it automated, there’s a higher error rate than I’m happy with.”

There are different layers to Zuckerberg’s comment. First, there is no such thing as hate speech. There is only free speech, which may be motivated by hatred, love, or logical reasoning, among other things. Our society has entered a new phase of political correctness coupled with primitive emotionalism. Anything we don’t like or that subjectively offends us, we label “hate speech.”

Second, Zuckerberg is attempting to change the meaning of language. What are these “linguistic nuances” of which he speaks? On what are they based? His statement is as ambiguous as the “nuances” that will determine whether artificial intelligence will flag and remove a particular statement from a user’s Facebook post. It seems clear, however, that Zuckerberg is using the constant fluidity of language for his own purposes. In Zuckerberg’s world, that means serving as midwife to Leftism, which, for him, is just another way of saying good, true, and decent. All deviation from Leftism is more than a difference of opinion. It is simply wrong.

Zuckerberg is completely at peace with having a monopoly not only on determining which posts are “hate speech,” obscene, or offensive but, by implication, on regulating an individual’s speech entirely. Why should political speech be regulated at all? Isn’t this the most basic of rights in America? Questions of privacy on Facebook have been part of a larger conversation about the negative aspects of social media, but is privacy really the main problem we are facing?

What is most disturbing in Zuckerberg’s statement is that AI will be the “agent” that determines what is linguistically offensive. Perhaps unwittingly, Zuckerberg admits that a robot will have dominion over something that is deeply human. Human beings deliberate, think, and relate to each other in ways that sometimes defy logic. This seems to annoy people like Zuckerberg and the denizens of Silicon Valley. To them, it is not a fact of human and political interaction to be accepted; it is a problem to be solved. Zuckerberg’s statement reveals a serious problem: technology’s utopian striving to remake human affairs. That much of our culture doesn’t see this ambition as a problem is a problem in itself.

I am not claiming that we need to get rid of information technology or even that artificial intelligence is inherently bad and dangerous. Getting rid of it would be impossible, and railing against it futile. Nor am I saying that social media is nothing but negative. But given current realities, we have to ask what it means to be a human being in the face of technological advances.

This means making sure that we don’t lose our sense of wonder at the meaning of life. Even something as seemingly small as an AI-controlled speech analyzer could change the way we relate to one another, not to mention deny the significance of perennial aspects of human nature, such as self-reflection.

Human beings are by nature relational as well as rational. We relate to each other as individuals, and if we choose to, we can create communities. Think for a moment about why Facebook has flourished. The platform has connected billions of people in ways that would not have been convenient, let alone possible, a decade ago. But, at best, social media can only extend an individual’s reach (just like any other tool). Social media cannot define an individual or a group. At its worst, social media cheapens relationships, debases the sacred, and encourages people to think of themselves and others as ideological commodities. Who needs that?

As we try to keep up with the speed of technological advances, we are forgetting to ask ourselves who we are, who we are becoming, and why. Our culture has made these questions seem quaint, even though they are, in fact, profound. They are, as it happens, the most challenging of questions because they go to the heart of how we find meaning in our lives. They cannot be quantified or reduced to an algorithm. And perhaps that is why they are, today, dismissed. They also require a certain vulnerability, which can happen in its fullness only in a face-to-face relationship. Social media gets its lifeblood from the masks people wear in public (or behind their keyboards) to cover what lies beneath.

Without a doubt, the experience of the internet, and especially of social media, has changed the way we think and relate to each other. Much of that change, especially where it has facilitated new human relationships and revitalized old ones, has been positive. But ideologically regulating people’s behavior can only lead to a virtual dystopia that could translate into a very real one, and ultimately to loneliness and social breakdown. By regulating speech, Zuckerberg is attempting to change the ethos of an individual whose words and actions will affect others. You could say that is not Zuckerberg’s fault. His company provides a free service, which people can choose to use or not.

But he and his company are not beyond culpability. Zuckerberg has created something like a monopoly on how we disseminate knowledge and information, and thus we face a Catch-22. We are stuck in an infinite regress of virtual repetitions, and the more we engage in them, the more we experience the loss of the self. Perhaps Facebook management should also include a “philosopher-in-residence” to navigate the inevitable relationship between technology and ethics, and to remind the company that there is no algorithm for the human soul.

Photo credit: Yasin Ozturk/Anadolu Agency/Getty Images


About Emina Melonic

Emina Melonic is an adjunct fellow of the Center for American Greatness. Originally from Bosnia, a survivor of the Bosnian war and its aftermath of refugee camps, she immigrated to the United States in 1996 and became an American citizen in 2003. She has a Ph.D. in comparative literature. Her writings have appeared in National Review, The Imaginative Conservative, New English Review, The New Criterion, Law and Liberty, The University Bookman, Claremont Review of Books, The American Mind, and Splice Today. She lives near Buffalo, N.Y.