Big Tech Isn’t A Public Health Issue, It’s A Public Health Emergency

What's so dangerous about this particular emergency is that the source of the emergency is dulling our senses, coaxing us into complacency with a carefully curated stream of colorful notifications.
By Emily Jashinsky

The following is a transcript of my radar on June 30th’s edition of “Rising” on Hill TV.

Too often we think of Big Tech as primarily an antitrust issue or primarily a censorship issue, with little regard for the interplay between both those problems. What’s worse, the politics of Big Tech drags us into convoluted legalese and dense policy discussions that distract from the more immediate concern, which is both more important and easier to win on. It’s a major public health threat on a scale we’ve never seen before.

A ruling against Facebook this week struck “a blow to Big Tech’s claims of complete immunity from liability under Section 230,” as “the Texas Supreme Court ruled last week Facebook can be held liable for state civil claims of sex trafficking on its platform,” per my friend Rachel Bovard’s take in The Federalist.

“Section 230 does not withdraw from states the authority to protect their citizens from internet companies whose own actions — as opposed to those of their users — amount to knowing or intentional participation in human trafficking,” Justice Jimmy Blacklock wrote.

It’s as good an example as any that Facebook and Mark Zuckerberg’s lauded promises to facilitate societal harmony with their technology have fallen flat.

To focus narrowly on Zuckerberg, it’s pretty stunning to search interviews he gave back in the late 2000s, which seems like ages ago but proves how dramatically life has changed in such a short period of time. Listen to what he told Wired on June 29, 2009, exactly 12 years ago.

When I started Facebook from my dorm room in 2004, the idea that my roommates and I talked about all the time was a world that was more open. We believed that people being able to share the information they wanted and having access to the information they wanted is just a better world: People can connect better with the people around them, understand more of what’s going on with the people around them, and understand more in general. Also, openness fundamentally affects a lot of the core institutions in society — the media, the economy, how people relate to the government and just their leadership. We thought that stuff was really interesting to pursue.

To call Zuckerberg’s utopian technocratic fantasy naive would not do it justice. Now, that naivete has morphed into cynicism as Zuck and his fellow oligarchs mine the carcass of their long-gone fantasy for profit.

The Backpage example Rachel wrote about in the context of Section 230 is an instructive one. Zuckerberg and his peers never anticipated how their inventions would be used. But now they know, and they’re completely at a loss for how to deal with it. That’s because dealing with it meaningfully would involve taking a huge hit.

The excesses of the openness Zuckerberg and his fellow travelers promised, like the Backpage case Rachel highlighted, are really the tip of the iceberg.

Tech can be abused, sure, but it’s also fundamentally abusive. Yesterday, GQ published an interview with Aza Raskin. Raskin invented the infinite scroll. He doesn’t use social media much. “For many people,” he said, “it’s become part of their job or the way they support themselves. Part of the way they stay in contact with their friends or their family, or loved ones around the world.”

“That’s what’s inhumane—that we are forced to use systems which are fundamentally unsafe for the things that we need,” Raskin added. “Technology is not just ripping apart the social fabric, it’s replacing our social fabric with something much more brittle.”

TikTok is a good example. Surely you’ve encountered one of the trendy explainer videos that gain viral momentum. They expose many of our students and our teachers and our parents as vapid clowns with an incredible amount of self-confidence. But TikTok rewards it because social media engineers have consciously programmed us to react in ways that increase engagement on bad content, thereby incentivizing the bad mindsets that create and reward it.

This, again, is downstream of the public health point. Raskin co-founded the Center for Humane Technology along with Tristan Harris, who you may remember from the Netflix documentary, The Social Dilemma. Harris has compared tech to “Big Tobacco for our brains,” in the sense that it’s addictive and the industry profits off our addiction. But tobacco wasn’t a major mode of communication, political discourse, professional life, and social life all in one.

“Three billion people have a brain implant that’s a remotely controlled brain, because — especially in the coronavirus times — we are relying on these things to make sense of what’s reality out there in the world,” Harris said last fall. “They have become the fabric for our sense-making and the fabric of our choice-making, the fabric of how children develop.”

University of Michigan Professor Daniel Kruger told The Guardian, “There are whole departments trying to design their systems to be as addictive as possible. They want you to be permanently online and by bombarding you with messages and stimuli try to redirect your attention back to their app or webpage.”

We’re barely 20 years into this grand experiment and the guinea pigs aren’t faring so well. Harris and Raskin’s group keeps a ledger of harms on their website, summarizing peer-reviewed studies on tech’s harms. I’m just going to read a few of them.

  • “The level of social media use on a given day is linked to a significant correlated increase in memory failure the next day.”
  • “Three months after starting to use a smartphone, users experience a significant decrease in their mental arithmetic scores (indicating a reduction in their attentional capacity) and a significant increase in social conformity.”
  • “The greater your level of Facebook addiction, the lower your brain volume.”
  • “The mere presence of a mobile phone can disrupt the connection between two people, leading to reduced feelings of empathy, trust, and a sense of closeness.”
  • “The more that someone treats an AI (such as Siri) as if it has human qualities, the more they later dehumanize actual humans, and treat them poorly.”

From the printing press to the mirror to the camera, human life has long been disrupted in dramatic ways by technologies and inventions, many of which we now recognize as positive creations. But smartphones and social media are rewiring our brains on an enormous scale, and not for the better. The research is mounting. This is bigger than Section 230. It’s bigger than antitrust, although it’s certainly still partially a consequence of market dominance.

This isn’t just a public health issue. It’s a public health emergency. But what’s so dangerous about this particular emergency is that the source of the emergency is dulling our senses, coaxing us into complacency with a carefully curated stream of colorful notifications. It’s sapping our ability to fight back, and that’s why we’re bickering about Facebook regulations on Facebook and I’m here whining about it on YouTube. They’ve already won, which means they’re rich enough to get the hell off their own services. That’s exactly what they’re doing.

Emily Jashinsky is culture editor at The Federalist. You can follow her on Twitter @emilyjashinsky.

Copyright © 2021 The Federalist, a wholly independent division of FDRLST Media, All Rights Reserved.