
Artificial Intelligence Expert Explains How Big Tech Manipulates What You Think

[Image: dark computer screen with code. Credit: Luis Gomes/Pexels]

Justin Lane is an Oxford University-trained artificial intelligence (AI) expert and entrepreneur with no patience for fluffy theories. His research spans the field of cognition — both human and artificial — as well as religion and conflict.

That led to some fascinating fieldwork in Northern Ireland, where he studied Irish Republican Army and Ulster Defence Association extremists up close. Ultimately, he applied his humanities research to AI programming and agent-based computer simulations.

Somehow, he managed to enter undergrad in Baltimore, Md. as a Green Party liberal and emerge from England’s ivory towers as a Second Amendment advocate. He now describes himself as a political moderate with “a libertarian flavor.”

When I first met him, Lane was working at the Center for Mind and Culture in Boston. Thoroughly corrupted by capitalism, the promising academic went on to co-found an international data analytics company that works with high-profile corporate and academic clients. He’s one of those pallid suits pushing buttons in the Matrix, but he’s happy to show us human captives around.

From behind my laptop, that technocratic canopy looks like a dark black cloud raining half-truths and poisonous trivia. Robot eyes peer through the gloom, watching our souls dissolve.

Lane is more optimistic. He assures me the tech world is merely a series of complex tools. The key is knowing how to use them properly. The good doctor joined me virtually for a cognac and coffee (he’s in Slovakia, I’m in America, and I never drink before noon).

This interview is edited for content and clarity.

 JOE ALLEN: From your perspective as a network analyst, what do the unwashed masses look like from a God’s eye view? Am I being paranoid, or are they out to get us?

 JUSTIN LANE: Companies such as Google, Facebook, or Twitter — or analytics companies such as my own — aggregate massive amounts of data on users. This is what most people refer to these days as “Big Data.”

What most companies are doing is looking at mass data and seeing what the patterns are. Typically, the specific content of what gets posted is not of interest to the company, the data scientist, or their algorithms. They’re just engineering systems that can learn those patterns so that they can track and increase the engagement of individual users.
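To make that concrete, here is a minimal sketch of what “learning engagement patterns” can look like in practice. The feature names, data, and model below are invented for illustration only; real platforms use far richer signals and far larger systems than this toy logistic regression.

```python
# Hypothetical sketch of "learning engagement patterns."
# All features and data are synthetic; this is not any company's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy aggregate data: one row per (user, post) pair.
# Invented columns: past_click_rate, topic_match, hour_of_day (scaled), post_length (scaled)
X = rng.random((1000, 4))
# Synthetic "did the user engage?" label, loosely tied to the first two features.
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * rng.standard_normal(1000) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Rank new candidate posts for a user by predicted engagement probability --
# the same basic move a feed-ranking system makes at enormous scale.
candidates = rng.random((5, 4))
scores = model.predict_proba(candidates)[:, 1]
print(np.argsort(scores)[::-1])  # posts ordered from most to least likely to be clicked
```

The point is not the model but the move: content gets ranked by predicted engagement, regardless of what the content actually says.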

However, in a database somewhere, that data does exist at the individual level of granularity. Maybe it’s at Twitter, or a corporate database, or an intelligence database somewhere. That personal information can be used for nefarious ends. There are obvious ethical implications.

Consider the intelligence use in China, where if you say something bad about the regime, your social credit score goes down. They’re not necessarily looking at it in the aggregate; they’re saying, “No, you, on this day, said something critical of the government, and now you can’t buy a train ticket.”

I would say what China’s doing currently is the closest to the sort of dystopian hellscape that we’re all afraid of, whereas most American corporations are more interested in the fact that they’re paid per click. That’s a critical difference, I think.

JA: Couldn’t this vast map of public opinion be used for manipulation—beyond the Barack Obama campaign’s data-scraping, or Donald Trump’s Cambridge Analytica “affair”?

 JL: The possibility for manipulating public opinion is massive. The average user is being both actively manipulated and passively manipulated.

They’re being actively manipulated in the sense that anyone who’s ever turned on a TV or a radio has been manipulated. Marketing exists to manipulate us by ensuring that the moment we need something, we’re going to want a specific brand. That’s the genius of advertising. It’s just scalable to a level we’ve never imagined, given the data that social media has and the number of people engaged on a single platform.

At the same time, there’s also passive manipulation, which has to do with what a company allows on their site — how they are producing and changing, algorithmically, the information that we see.

From a psychological perspective, we know that the more you are able to rehearse information, the more likely you are to remember it. We call this “learning.” So when a social media company decides they’re going to censor a certain kind of information, they have decided that they are not going to allow their users to learn about that as readily.

To an extent, I think that kind of censorship can be ethically imperative, in certain circumstances such as harm to children or helpless people being attacked. But editing users’ opinions is a gray area. Who are you protecting by taking down certain political speech? Who are you protecting by the practice of shadowbanning?

To that extent we’re being obviously manipulated. Whether that’s good or bad largely depends on what side of the political spectrum you exist on, because it seems to happen to conservative voices more than liberal voices. You see this a lot in the difference between “What is hate speech?” and “What is speech that you hate?”

JA: Are AI cops policing our speech on these platforms?

JL: Yes, definitely. Facebook reports that well over 90 percent of the initial flags for offensive content are made algorithmically, through artificial intelligence systems. Those flags are later reviewed by human moderators. And they’re actually self-training the AI based on the content flagged.

This is just speculation, but if conservatives are being censored more than liberals, it may be because certain political predispositions are more likely to be offended. So the bias in censorship that we currently see in social media may not be entirely related to the political biases of tech company owners, censorship reviewers, or data scientists designing the algorithms.

Because Facebook trains its algorithms on the tens of millions of user reports flagging content that offends people, part of the system’s political bias is likely to reflect the loudest voices in the room. If conservatives do not actively engage in flagging content, then they’re not putting their data points in the database, and their opinions will not be reflected by these algorithms.

[A sly grin spreads across Lane’s face.]

One thing that conservatives could do (it would be very interesting to see the extent to which this works on Facebook) is to flag content as offensive whenever they see something they know would be flagged if the situation were reversed. It’s quite possible that the algorithms on the social media sites will learn that leftist speech is offensive and change the way that things are flagged. It would be an interesting social experiment.
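A rough sketch of the dynamic Lane describes may help here. This is a hypothetical toy, not Facebook’s actual pipeline; the posts, labels, and imbalance are invented to show how a flagging model trained on user reports simply learns the sensitivities of whoever reports the most.

```python
# Hypothetical sketch of how a flagging model inherits reporter bias.
# The posts, labels, and imbalance below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Training data: posts plus whether users reported them as offensive.
# Note the imbalance: one side of the argument files far more reports.
posts = [
    "tax cuts help the economy",      # reported
    "borders should be enforced",     # reported
    "we need universal healthcare",   # not reported
    "capitalism exploits workers",    # not reported
    "lower taxes reward hard work",   # reported
    "expand public transit funding",  # not reported
]
reported = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(posts)
clf = MultinomialNB().fit(X, reported)

# The model now scores unseen posts by how closely they resemble
# whatever the most active reporters chose to flag.
new_posts = ["cut taxes now", "fund public healthcare"]
print(clf.predict(vec.transform(new_posts)))  # flags the first post, not the second
```

Swap the report labels and the same model starts flagging the other side’s speech, which is exactly the experiment Lane is proposing.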

JA: As someone raised in a conservative background, what has your experience of academia been like?

JL: After my undergrad, I went to Europe because it was the only place I could study the computer modeling of human behavior in the way I wanted. I first went to Belfast, where human nature and cultural beliefs have a very long and tired history of violence.

This is in the United Kingdom, where private gun ownership is heavily restricted. Nevertheless, thousands of people were killed by guns and bombs during the Troubles. That kind of reaffirmed my newfound belief that human nature is much more important than any man-made law.

Then I got to Oxford, where I was basically straddling two worlds. On the one hand, I was an active member of the Hayek Society, which is a libertarian economic society, and they did joint events with all sorts of people. Guest lecturers spoke critically on issues of economics and freedom and morality.

On the other hand, my experience of the broader campus culture was that you could only take part in a debate so long as you were a part of a chorus of individuals who agree.

This is a big issue at Ivy League universities. Brown University tracked how many conservative speakers appear on campus; something like 90 percent of all speakers on college campuses in the United States are liberal or left-leaning. That’s part of the same echo chamber that exists even at Oxford, although Oxford is much better than other universities. The University of Chicago is also still a bastion of free speech.

When I finished my research at Oxford, I did post-doctoral work at Boston University, and it was a shock and an awakening as to how bad the American university system has gotten as an educational institution. It’s not that they don’t promote critical thinking at all; it’s that they only promote regurgitated answers.

If your conclusion is not that of the critical theorists — people who are basically neo-Marxist, intersectionality researchers — then you are not welcome to voice your opinion. And if you do, you risk being canceled. It’s led to a form of authoritarianism and extremism that I think is very unhealthy.

Professor Ibram X. Kendi at Boston University [founder of the new Center for Antiracist Research] has made comments that can be construed as both racist and transphobic. Yet he is heralded by the university. But the university won’t stand up for campus Republicans who question Kendi when they’re being attacked by left-wing student organizations. Unfortunately, this pattern appears to be growing throughout the United States.