
How Identity Politics Plays Right Into The Hands Of Big Business


We categorize ourselves according to our chosen and innate identifiers, but when big tech companies do it, and cater to what they have determined we want, we get irritated. Why? Isn’t this what we want? Don’t we want to be catalogued, classified, and commodified?

Identity has been a driving social and political force, and it makes sense that it would now become part of that other great institution of American cultural life: capitalism.

Using AI for Tribalism

Machine learning is the science of using algorithms and statistical models to teach artificial intelligence (AI) to make choices based on predictive data, without further human input. A common example is using a person’s usage history to determine his future preferences, then prioritizing access to those preferences to the exclusion of others. This is how Facebook sorts the posts on a user’s feed, how Amazon selects products a consumer might be interested in, and how Google returns different search results depending on who is signed in.
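To make the mechanism concrete, here is a minimal sketch in Python of that basic pattern: score candidate content by how often the user clicked similar content before. Everything here is hypothetical, the data, the categories, and the function names; no platform’s actual ranking system is this simple, but the shape of the idea is the same.

```python
from collections import Counter

def rank_items(click_history, candidates):
    """Rank candidate items by how often the user clicked that category before.

    click_history: list of category labels the user clicked in the past
    candidates: list of (item_name, category) pairs to choose from
    """
    # Count how many past clicks fall into each category.
    weights = Counter(click_history)
    # Items whose category matches frequent past clicks float to the top;
    # categories the user never clicked score zero and sink out of view.
    return sorted(candidates, key=lambda item: weights[item[1]], reverse=True)

# Hypothetical usage history: this user mostly clicked "politics" content.
history = ["politics", "politics", "sports", "politics"]
feed = [("Local team wins", "sports"),
        ("Election analysis", "politics"),
        ("New recipe ideas", "cooking")]

print(rank_items(history, feed))
# [('Election analysis', 'politics'), ('Local team wins', 'sports'),
#  ('New recipe ideas', 'cooking')]
```

Note what never appears in the output logic: anything the user has not already clicked. The “cooking” item is not hidden by malice; it simply scores zero.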

At first glance it seems this technique could be applied objectively. If it is designed well, it should be able to harness a specific user’s preferences while treating every user in the base the same way.

When we identify ourselves and allow ourselves to be identified, when we tribe up and proclaim the characteristics that are uniform throughout our tribe, we give advertisers and marketers just what they’ve been looking for all these years: groups of conformed individuals to whom they can sell things. The last century proved to marketers and advertisers that they could create products geared to be consumed by specific subsets of the population, from fan bases to ethnic groups. The new way to do this is through AI and machine learning algorithms that do more than target individuals who subscribe to group identities; they actually herd us into identities.

Algorithms like this are in use across social media, but also across business platforms and retail sites. Users identify themselves by past purchases and likes. It is through these things, as well as more blatant self-identifiers (such as selecting interests or following certain accounts), that we provide information that can classify us.

Given our insistence that we be seen for our identifying factors, whatever they are and however we’re defining them today, it should come as no surprise that tech companies, marketers, and advertisers have been listening. We bombard anyone who will hear us with directives not to step on our identified toes, to take heed of our identifiers and behave accordingly.

Personal Experience with Identity

A few years ago, I met up with a friend at a bar in Brooklyn. It had been a while since we’d seen each other, but we grew up with similar prep school privileges and had attended liberal arts colleges. It was around the time the progressive left began arguing in earnest that the “color-blind” concept of race relations in the United States was not the way to go.

I thought this was silly, because to me, color-blind meant very specifically something like the old 1980s adage of not judging books by their covers, or wines by their labels, or people by their exterior characteristics. It meant looking into the world without preconceived notions of an individual’s worth based on society’s biases about class, race, or sex. This was pretty essential to my worldview.

However, when I broached the topic with my friend, she was horrified. She wanted people to see her blackness, and understand that this was part of her identity. This was the first time I’d heard this view, and at the time it didn’t make sense. When I interact with people, I’m not crazy about the idea that there’s some instant takeaway about who I am due to my whiteness.

I gave this some thought. What was most interesting to me was that my friend wanted people to glean distinct impressions about her based on the fact of her race and its associated qualities. What I couldn’t figure out then, and still haven’t figured out, is what those associated qualities are.

What effect should your identity have on the people around you? What do people need to know about your identity in order to successfully interact with you? What assumptions do you make about what those around you know about your identity? What assumptions do you make about theirs? Is having an understanding of the surface qualities of a given identity a useful tool in interacting with an individual who subscribes to those identity markers?

If we look at these questions through the lens of the color-blind model, the concept of using a person’s external characteristics to make judgments about who they are would be considered stereotyping and racist. But in the new rubric, they’re not racist. Instead, they’re exactly what we’re asking for.

Facebook Wants To Know Your Identity, Too

It looks like marketers, advertisers, and big tech firms heard us, too. So much so that Facebook has allowed advertisers to target consumers based on their expressed identity factors. Since being seen for our identity is exactly what we want when we express identity factors, we should be pleased that advertisers, marketers, and others who want to capitalize are taking notice. So why don’t we feel seen?

According to an article in The Economist, “Facebook’s own systems are influenced by the race and gender of its users when it presents them with ads.” This is seen as a problem, so much so that the Department of Housing and Urban Development is suing Facebook over targeted advertising for affordable housing. Instead we should be pleased, because being seen and categorized for our identities is exactly what we wanted.

We wanted to be seen for the value of our identities and what they signify. Yet now that we are being seen for it, we don’t like who is seeing it, the conclusions they’re drawing, and what they’re doing with the data about the boxes in which we have visibly, purposefully, and loudly packed ourselves. As it turns out, once we release this information—once we box ourselves up in neat little packages—we become the consumables.

Advertisers are specifically targeting individuals based on their revealed group identity, and the algorithms being designed to help us, to give us the content we want, are driving our choices as much as (if not more than) we are driving them. Machine learning algorithms use users’ past choices to predict future preferences. If a user is attracted to certain kinds of content, such as content that reflects his identity, then the algorithms will show that user content reflecting the preferences associated with that group identity. Algorithms show us the content our past usage indicates we will choose in the future. We are being herded toward reinforcing our past preferences, as the sketch below illustrates.
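The herding effect is easy to simulate. The following toy Python sketch, with entirely made-up numbers and categories, shows a user who starts with only a mild lean toward one kind of content. Because the “algorithm” mostly serves the category clicked most, and the “user” mostly clicks what is served, the lean compounds into near-total dominance. The 80 and 90 percent figures are illustrative assumptions, not measurements from any real platform.

```python
import random
from collections import Counter

random.seed(0)  # deterministic run, for illustration only

CATEGORIES = ["politics", "sports", "cooking"]

# Hypothetical starting history: a mild lean toward one category.
clicks = Counter({"politics": 3, "sports": 2, "cooking": 2})

for _ in range(50):
    # The "algorithm": 80% of the time, serve the category the user
    # has clicked most; otherwise serve something at random.
    if random.random() < 0.8:
        shown = clicks.most_common(1)[0][0]
    else:
        shown = random.choice(CATEGORIES)
    # The "user": clicks whatever is put in front of him 90% of the time.
    if random.random() < 0.9:
        clicks[shown] += 1

print(clicks.most_common())
# A small initial lean compounds: the dominant category keeps getting
# shown, so it keeps getting clicked, so it keeps getting shown.
```

The point of the sketch is that no one in the loop has to intend the outcome. The clicks confirm the predictions precisely because the predictions constrain the clicks.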

Is this how we want to design ourselves? Do we want to be shuffled by group identity, or do we want our individual preferences and selections to be ours alone?

Are Machines Telling Us What We Want?

At what point does the interaction switch from programmers teaching machines how to determine our wants to machines telling us what we want? Do we want to reinforce our past behavior by fixing it as permanent in the future? The algorithm’s success is determined by results. The result is that we keep clicking the content we are served based on the algorithm’s interpretation of our past. Does that mean the results reveal that the algorithm correctly predicted our future behavior, or that the algorithm steered our future behavior?

If we do not want to be categorized by identity, then why do we continue to push ourselves into these categories? And if we do want to be recognized by group identifiers, if we want our tribes to be known, then why do we have an issue with this data being used to give us what our past selections, or presumptions about our group identifiers, signify we want? If we are going to proclaim our identity, and demand that it be relevant information to political and social forces, then we should want our feeds and choices to be calibrated to our identity.

If we don’t want to be given what we say we want, then it’s time to ask ourselves if we really want it. If we don’t want to be lumped and sorted, if instead we want to be more than bias and stereotype, then we need to stop clinging to these factors as though they are any indication of who we are, and what we are worth.