
Big Tech, Privacy, and Power

The ground is shifting quickly beneath our feet when it comes to tech, privacy, and power. And although tech companies, their advocates, and even some policymakers would like us to imagine these issues are cut and dried, they are not.

In their book The Sovereign Individual, published on the eve of the year 2000, James Dale Davidson and William Rees-Mogg attempt to grapple with the forthcoming technological changes that the new millennium inevitably would bring. “As technology revolutionizes the tools we use,” they wrote, “it also antiquates our laws, reshapes our morals, and alters our perceptions.”

This is the dynamic that has been unfolding slowly over the last 20 years, as Google, Facebook, Twitter, and other social media platforms have transformed how we engage with communications, culture, commerce, and one another.

But the COVID-19 pandemic has pushed that transformation into overdrive, while exposing just how significantly power dynamics—between individuals and corporations, and individuals and the state—have shifted.

Earlier this week, Facebook announced it was removing posts intended to organize rallies protesting government stay-at-home policies in various states. Initially, a Facebook spokesman claimed the company was doing this at the behest of state governments. Nearly 12 hours later, the company clarified it was independently removing posts “when gathering[s] do not follow the health parameters established by the government.”

Facebook did not clarify if this meant gatherings in violation of state laws, or executive orders with no force of law, or merely violations of government suggested practices.

This opens up a new, concerning lane for Facebook, and for tech more broadly. As Big Tech cements itself as our primary facilitator of communication (as it most certainly has during this pandemic), it wields outsized power.

Kalev Leetaru at George Washington University recently pointed out the significance of this shift, and the lines that blur as a result:

That a private company can now unilaterally decide to simply delete the promotion of protests it deems unacceptable is a remarkable expansion of its power over what was once a sacrosanct and constitutionally protected freedom. As we cede the public square to private companies, however, those constitutional freedoms of speech and expression no longer apply in some cases. Through those private companies, in fact, government officials can in effect restrict speech they are obligated to protect.

The irony is that less than a year ago, Mark Zuckerberg gave a speech at Georgetown University where he extolled tech’s many virtues, including how tech platforms “have decentralized power by putting it directly into people’s hands.” Yet Facebook’s most recent actions confirm that the power of communication for the 70 percent of American adults who use Facebook, rather than being decentralized, remains very much concentrated in the platform itself.

YouTube has also put itself in the position of defining “correct” speech—but this time, by aligning itself with the World Health Organization. YouTube’s CEO announced that the platform would remove “anything that would go against World Health Organization (WHO) recommendations.”

YouTube’s apparent motivation is to keep people safe from misinformation—which makes their choice of WHO recommendations an interesting one. In mid-January, the organization was telling the world that COVID-19 wasn’t contagious. WHO also publicly opposed the travel restrictions put in place by multiple countries and didn’t declare coronavirus a pandemic until March 11. All along, the organization has taken China’s obviously false claims at face value, allowing the virus to spread.

Yet this is the banner behind which YouTube will fly its “user safety” flag, thus imposing WHO’s views on its massive user base.

Tracking You—For Your Health

Then there is the thorny notion of contact tracing—the way in which public health experts attempt to contain a viral pathogen by tracing where an infected individual has been, and with whom they’ve been in contact. Traditionally, contact tracing has been analog, based on a conversation between patient and doctor.

But the digital age has dramatically expanded the reach of contact tracing. It is much more efficient and accurate to trace a virtual trail, particularly as we leave immense digital footprints wherever we go. South Korea has typified this type of response, tracking COVID-19 patients using credit card data, surveillance camera footage, and cell phone location data. The South Korean government recently announced it would require infected individuals to wear electronic wristbands to ensure patients did not breach quarantine.

It is unlikely U.S. citizens would tolerate such intense and mandatory surveillance measures. But that’s where Big Tech comes in.

Without being asked, Google already has been sharing aggregate user location data with governments interested in compliance with social distancing measures. The House Freedom Caucus, a group of conservatives, sent a letter to Google raising concern over the “frightfully detailed, specific, and granular” data being provided to government officials.

Google and Apple recently announced the development of a contact tracing technology that will use cell phone Bluetooth proximity data to alert individuals if they have come into contact with an infected person. The app’s effectiveness depends upon people self-reporting their own positive diagnosis. Already, security experts have raised concerns about false positives, spoofing, de-anonymization, and “proximity marketing” (yes, you’re just trying to avoid getting sick, but tech advertisers could still make money). Experts have also pointed out how easy it would be for this system to be abused.
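The decentralized matching idea behind the app can be sketched roughly as follows. This is a minimal illustration only, not the actual Apple/Google Exposure Notification protocol (which derives rotating identifiers cryptographically from daily keys); the function names and structure here are hypothetical:

```python
import secrets

def daily_identifiers(n_intervals=144):
    """Generate the random rolling identifiers a phone broadcasts over
    Bluetooth throughout the day (one per ~10-minute interval).
    Illustrative: the real protocol derives these from a daily key."""
    return [secrets.token_hex(16) for _ in range(n_intervals)]

def record_nearby(heard_log, identifier):
    """Each phone keeps a local log of identifiers heard from nearby devices."""
    heard_log.add(identifier)

def check_exposure(heard_log, published_positive_ids):
    """Matching happens on-device: compare the local log of heard
    identifiers against identifiers published by users who
    self-reported a positive diagnosis."""
    return bool(heard_log & set(published_positive_ids))
```

In this scheme, a positive diagnosis is only ever linked to random tokens, and the comparison runs locally on each phone rather than on a central server—which is precisely why the system depends on voluntary self-reporting, and why spoofed or falsely reported tokens can produce false alarms.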

The Google/Apple contact tracing app is opt-in—for now. Epidemiologists suggest that contact tracing only meaningfully slows viral spread if at least 60 percent of the population participates. It’s entirely possible that federal or state governments could mandate the use of a contact-tracing app, in which case corporate and state power over the individual would be commingled, with little differentiation.

Michael Kwet, a visiting fellow at Yale Law School’s Information Society Project, put it this way:

Two corporations, Apple and Google, have come to dominate the smartphone software ecosystem, and they have spent years spying on users and enabling consumer surveillance in their app stores. In the world we built, we now have to weigh the fate of our lives and economy against trust in Apple and Google, the ad-tech industry they support, and government intelligence agencies. . . . This is a nightmare.

There are other questions, too. Could public health agencies get court orders to obtain phone tracking data from communications companies without consumers’ permission? Is it acceptable for aggregate location data to be made public?

We do know that the technology will be made available only to government public health authorities—but will Apple and Google prevent authoritarian governments from using it in unintended ways? Will health authorities be able to build apps on top of the Google-Apple technology that could enable more invasive tracking?

Then there is the security of personal health data itself. This is supposed to be protected by HIPAA, the nation’s health privacy law. But the Department of Health and Human Services recently announced it would relax enforcement of HIPAA to facilitate the disclosure of health information between healthcare providers and their business associates. Google is a “business associate” of several major hospital chains already, and as part of the relationship receives the full medical records of patients without their knowledge or consent. What constitutes a HIPAA violation under this technology? Would Apple or Google be held liable?

We Have Been Here Before

COVID-19 has presented fundamentally difficult questions about the tradeoffs between public health and privacy, and the relationship between corporate and state power.

In some ways, however, we have been here before.

In the days after 9/11, Congress grappled with similar questions as they put together the PATRIOT Act. The law authorized massive surveillance of the American population, and the years since have seen that power abused and manipulated. (Tech companies also got in on that game; for years they willingly and secretly shared troves of user data with the National Security Agency.)

What we needed then was sober-minded deliberation and thoughtful analysis—not the rush to give away civil liberties as we grasped for a sense of security.

The lesson there should be applied here. As we rightly seek a functional public health response to a virus that currently lacks a vaccine, the push toward erasing the boundaries of our private lives will only increase. The belief that private industry “innovations” are inherently good and thus do not pose a risk to us has the potential to lull us into complacency. Indeed, the people who warned us about the PATRIOT Act appear to have no such qualms about Google.

But the potential for mandated usage remains, as do a host of questions, both technical and broadly philosophical. These questions should be pondered, not rushed; interrogated, rather than dismissed. As corporate power increasingly commingles with state power, this process becomes even more important.

The power of Big Tech has been growing slowly, and in a way that many of us have accommodated as a necessary infiltration. But the scope of that power—and its costs to the culture we have ordered—have been less transparent.

Like the bird that falls asleep on the back of the hippopotamus, we don’t actually think much about the status of where we are until the hippo moves. And now, the hippo is moving. And the massive power Big Tech has amassed has been revealed. How much or how little say we have over the arrangement, however, is still being determined.


About Rachel Bovard

Rachel Bovard is senior director of policy at the Conservative Partnership Institute and Senior Advisor to the Internet Accountability Project. Beginning in 2006, she served in both the House and Senate in various roles, including as legislative director for Senator Rand Paul (R-Ky.) and policy director for the Senate Steering Committee under the successive chairmanships of Senator Pat Toomey (R-Pa.) and Senator Mike Lee (R-Utah), where she advised committee members on strategy related to floor procedure and policy matters. In the House, she worked as senior legislative assistant to Congressman Donald Manzullo (R-Ill.) and Congressman Ted Poe (R-Texas). She is the former director of policy services for the Heritage Foundation. Follow her on Twitter at @RachelBovard.

Photo: Dan Mitchell/Getty Images

Content created by the Center for American Greatness, Inc. is available without charge to any eligible news publisher that can provide a significant audience. For licensing opportunities for our original content, please contact licensing@centerforamericangreatness.com.