Our Personal Data Is Lost, So Let’s Rein In the Companies That Own It

Social media technology has become quite creepy.

Attend a party and meet some new people? Facebook will suggest them as potential friends. Search for a product you want to buy? Advertisements for similar products will follow you around the internet. Think about taking a vacation? Travel promotions start popping up.

Several of my friends have complained to me that at times it feels like the social media tech giants are in their heads and can predict their thoughts and actions.

None of this is particularly new. Large corporations have spent decades turning small pieces of consumer data into richly detailed customer profiles.

Target’s pregnancy prediction score, one of many such metrics developed in the early 2000s, could pinpoint a pregnant customer’s due date with such specificity that the company could send coupons timed to certain events in the pregnancy. Target sometimes knew that a woman was pregnant before her own family did.

But Target’s predictive algorithms were based on the limited data that customers would share with the company—mostly their purchase history. The volume of that information pales in comparison to the sheer amount of personal data that we voluntarily give the tech companies on a daily basis.

If Target could ascertain intimate details about its customers from their shopping habits, just consider what Google could figure out about you from your search history.

The tech giants have profited richly from their ability to figure out things about you so that advertisers can create carefully targeted audiences for their messaging. Facebook posted nearly $70 billion in advertising revenue in 2019. Google posted over $160 billion.

In order to keep the money flowing, tech companies have worked to maintain an image as responsible corporate citizens who look out for their users.

Long gone are the days of Mark Zuckerberg calling Facebook users “dumb fucks” for willingly turning over personal information. Now that he has that information, he is calling for government regulation of the internet with an emphasis on privacy. Other executives, like Google’s Sundar Pichai, have also called for Congress to pass comprehensive regulatory regimes.

Calling on the government to determine privacy regulations allows the tech giants to avoid a public accounting for difficult questions about privacy on the internet. But given how much money these companies spend lobbying in D.C., it is obvious that they would have a large say in the composition of any resulting regulations.

The regulations would also help squash potential competitors. While startups would find it difficult to comply with complicated regulations, the tech giants would have more than enough resources to ensure that they were following the law. The industry-influenced legislation would facilitate blatant regulatory capture.

But more than anything, the tech giants’ calls for increased regulation signal to users (disingenuously, perhaps) that they are concerned about user privacy and that they are actively working to ameliorate society’s concerns.

The tech companies have implemented many new features to make users feel as though they are in control of their content. Most of the tech giants have “privacy checkup” features that allow users to see and partially control what information the companies actively collect.

These companies have also shown restraint in rolling out new user-facing features that people might find too creepy. Much like Target’s old practice of subtly interspersing relevant coupons among completely random ones to avoid unsettling their customers, the tech companies carefully calibrate their use of our data to ensure that we don’t think too much about how much of our information they have.

In fact, in some cases these companies have rolled back features that met with too much public backlash. In most cases, these roll-backs are cosmetic and have not actually changed the nature or amount of information the companies collect.

Several years ago, Google announced that it would no longer personalize ads based on the contents of your emails in Gmail. But its algorithms still peruse the text. Facebook admitted to using location data to suggest potential friends, before disabling the feature. But the company still tracks your location, even when you explicitly turn off location tracking.

These companies have worked hard to give users the impression that they are in control of their data—but they still jealously guard their ability to keep collecting more personal information.

Perhaps more interestingly, these companies have also fought to shield information users post publicly on their platforms from competitors.

Most tech giants have policies that restrict automated data collection—in other words, even though information may be publicly available on their platform, a third party is prohibited from systematically scraping that data without the platform’s express permission.

The enforceability of such policies is a matter of ongoing debate. A recent decision by the Ninth U.S. Circuit Court of Appeals in hiQ Labs v. LinkedIn held that automated scraping of publicly accessible data, even in violation of a platform’s terms of service, does not constitute a violation of the Computer Fraud and Abuse Act (CFAA). But a circuit split on the correct interpretation of the CFAA means that the issue will likely have to be resolved by the Supreme Court.

This debate burst further into the mainstream when the New York Times published an exposé on a company that had developed a “groundbreaking facial recognition app.” Clearview AI, a facial recognition company that works solely with law enforcement to identify victims and suspects based on images, came under intense public scrutiny.

Many were frightened by the prospect of being instantly identifiable from images they had previously posted on the internet. And even though similar concerns could be raised about having documents associated with your name easily accessible through a Google search, the visceral worry about a loss of public anonymity ruffled feathers.

Last week, Facebook, YouTube, Twitter, and Venmo demanded that Clearview AI stop scraping publicly available user images from their platforms and delete all of the data that it had obtained from them in the past. It seems unlikely that the company will comply with such requests, especially given recent reports that the app is helping investigators identify child exploitation and abuse victims.

Congress has already had several rounds of hearings on the use of facial recognition software in law enforcement, but the debate will likely continue.

Of course, the real debate here isn’t actually about privacy. The users already gave up their privacy by posting their information on the platforms. My information is already broadcast to advertisers, corporations, and the world. In fact, part of Facebook’s appeal is that I can post content publicly that can be viewed by anyone. If I want to hide content from the public, I can always change my privacy settings.

Instead, this debate, much like other debates about freedom of speech on these platforms, demonstrates the pressing questions that we must address as these technological tools increasingly become integral parts of our lives. How much responsibility and ownership should these platforms have over the information that is posted on them?

Section 230 of the Communications Decency Act, passed in 1996, holds that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Prior to its enactment, Stratton Oakmont v. Prodigy Services had held that online service providers who in any way moderated the content on their websites could be held liable for the speech of their users.

This immunity from liability has served as an incalculable subsidy for social media platforms by allowing them to enforce codes of conduct without assuming the potential risks that publishers face. This provision also has enshrined them as quasi-public platforms. A digital public square.

It is likely too late for us to rein in the amount of personal information floating in the digital public square. After all, we were the ones who put it out into the ether in the first place. But as the large tech companies make hundreds of billions of dollars on the information we freely shared, we must carefully consider what type of power these companies should have over our information, lest we give large private interests control over this new public square.

About Karl Notturno

Karl Notturno is a Mount Vernon Fellow of the Center for American Greatness in addition to being an entrepreneur, musician, and writer. He recently graduated from Yale University with degrees in philosophy and history. He can be found on Twitter @karlnotturno.

Content created by the Center for American Greatness, Inc. is available without charge to any eligible news publisher that can provide a significant audience. For licensing opportunities for our original content, please contact licensing@centerforamericangreatness.com.
