
Should Government Be Able To Track Your Every Move Outside Your House For The Rest Of Your Life?


It only takes one picture for Clearview AI to start its search. The interface is simple: all you have to do is take a picture of a face, upload it, and the facial recognition app takes it from there, scanning as many as three billion pictures on the web for matching images. In seconds, it provides the resulting pictures along with links to where they were found.
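Under the hood, systems like this typically work by converting each face into a numeric “embedding” and then ranking a large index of stored embeddings by similarity. The sketch below illustrates that general pipeline in Python. It is a minimal illustration, not Clearview’s actual system: the embed() function is a hypothetical placeholder for a trained face-embedding model, and the tiny in-memory index stands in for a scraped photo database, whose real design is not public.

```python
import numpy as np

# Hypothetical stand-in for a real face-embedding model (in practice, a
# neural network that maps a face image to a fixed-length vector).
# Deterministic within a single run, purely for illustration.
def embed(face_image: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(face_image.tobytes())) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)  # unit length, so a dot product = cosine similarity

# A toy "scraped web" index: embedding vectors paired with source URLs.
photos = [np.full((8, 8), i, dtype=np.uint8) for i in range(5)]
index_vectors = np.stack([embed(p) for p in photos])
index_urls = [f"https://example.com/photo/{i}" for i in range(5)]

def search(query_image: np.ndarray, top_k: int = 3):
    """Return the top_k closest indexed photos with similarity scores and links."""
    q = embed(query_image)
    scores = index_vectors @ q                 # cosine similarities against the whole index
    best = np.argsort(scores)[::-1][:top_k]    # highest similarity first
    return [(index_urls[i], float(scores[i])) for i in best]

# One query photo in, ranked matches with links out.
for url, score in search(photos[3]):
    print(f"{score:.3f}  {url}")
```

The key design point is that the expensive work (computing embeddings for billions of photos) happens once, offline; each query then reduces to a fast nearest-neighbor lookup, which is why results come back in seconds.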

Your face, more likely than not, is included somewhere in those three billion pictures, alongside people like Heather Reynolds.

Heather Reynolds had more reason than most to worry about facial recognition technology. In November of last year, she somehow managed to steal two grills and a vacuum cleaner from an Ace Hardware store in Lake County, Florida, making off with roughly $12,000 worth of goods.

She would have gotten away with it, too, if it hadn’t been for the surveillance footage—and Clearview AI. Law enforcement didn’t have Reynolds’s name, address, or phone number, but with one blurry capture from a security camera, that all changed. Two days later, Reynolds was identified and apprehended.

Lifetime Surveillance Upgrade

It’s an incredibly useful piece of software. Finding someone’s identity, his affiliations, and his online presence has never been easier. For law enforcement, the app has been a boon. According to The New York Times, police at a station in Clifton, New Jersey, said the app was “able to identify a suspect in a matter of seconds.”

While Clearview AI might be the most recent development in the industry, facial recognition technology (FRT) isn’t new. It didn’t first show up a month ago when The New York Times reported the capabilities and uses of Clearview. It didn’t begin back in 2016, when Clearview was first developed.

In fact, facial recognition software has been around for almost 20 years, growing in proficiency and proliferation. Affectiva, FaceFirst, Sensory Inc., and Trueface.ai, to name a few, are all U.S.-based software companies working on facial recognition technology. And it’s being rapidly implemented.

In a study titled “Body Worn Camera Technologies,” the National Institute of Justice found that one in five body camera manufacturers were adding FRT to their products. And law enforcement is only the tip of the iceberg: marketing, gaming, and networking are all areas where facial recognition technology could be used. A 2016 study in the Pepperdine Law Review found that FRT could even be applied in medicine, identifying genetic conditions by evaluating the structure of a face.

Here’s the simple truth: facial recognition is powerful and useful. It works. And if the conversation were to stop there, its rapid progress might seem like an inherently good thing. But here’s another reality: the development and the uses of this technology are growing faster than the laws defining its proper use. To many, that’s a serious concern.

The Absence of Adequate Legal Guidelines

There is no clear line in the sand about how FRT should be used. There is no federal law that tells facial recognition users they can go only so far and no further—at least, not yet.

What we do have is a number of Supreme Court cases and statutes that provide basic guidelines for the government’s use of invasive technology. The Wiretap Act of 1968, which followed the decisions in Katz v. United States and Berger v. New York, prevents law enforcement from wiretapping a suspect’s phone without a court order. In the 2018 case of Carpenter v. United States, the Supreme Court ruled that the government generally needs a warrant to obtain the cell phone location records that track a citizen’s movements. As for FRT itself, the Privacy Act of 1974 limits how federal agencies, including the FBI, may collect and use records about individuals, which constrains the FBI’s use of facial recognition data.

There are others. Many tech companies, organizations, and even states have started to restrict the use of FRT software. New Jersey, for instance, has barred its police from using Clearview AI, and Facebook has demanded that Clearview stop using Facebook pictures in its search database.

But according to the Boston University Journal of Science & Technology Law, “the US congress has not yet established federal privacy law regulating the commercial uses of FRT, and all the potentially relevant laws currently on the books do not fully address the privacy core issues of FRT.”

Last May, the Electronic Frontier Foundation looked at FRT in the context of law enforcement and found that “the adoption of face recognition technologies is occurring without meaningful oversight… and without the enactment of legal protections to prevent internal and external misuse.”

After reviewing the legal precedent on FRT, Boston University Journal authors Sharon Nakar and Dov Greenbaum concluded that “US courts seem at best split as to whether there is even a right to anonymity that would protect people from being tracked [by facial recognition technology] … New efforts are needed to develop a consensus among all stakeholders before this technology becomes even more entrenched.”

New Jersey, the EFF, and these authors are all concerned about the same thing: how FRT will affect the future of privacy. Who gets to use facial recognition technology? Can they use it whenever they’d like? What happens if you don’t want your face in a searchable database?

Those are a lot of different components to consider. But they all hinge on the same struggle: balancing citizens’ right to privacy with the potential goods FRT could bring.

The Right to Privacy and Facial Recognition Technology

The phrase “right to privacy” never appears in the U.S. Constitution. Yet the concept of a right to privacy has been a central legal component in the lives of millions of Americans.

So if it doesn’t come from the Constitution, where does a right to privacy come from? In the 1965 case Griswold v. Connecticut, the Supreme Court found that, rather than being an explicit right, the right to privacy is implicitly contained within the First, Third, Fourth, and Ninth Amendments.

Their reasoning went something like this: if citizens have a right to congregate and exercise their free speech, if the government may not quarter soldiers in people’s homes, if citizens have a right to be secure in their persons, papers, and homes against unreasonable searches and seizures, and if the rights listed in the Constitution might not be exhaustive, then the Constitution suggests there is a space between citizens and the state that the government may not cross.

It’s the reason the government generally can’t tap your phone without a warrant, even if it thinks you might be doing something illegal. It’s the reason the government can’t use thermal imaging to look inside your home without a warrant. And, as mentioned earlier, it’s the reason the government can’t use your phone’s location data to track you.

But what about a face? Can the government use a face to find the identity and online information of someone suspected of a crime? That depends on where you are.

Think back to Reynolds. Before she decided to steal two grills and a vacuum cleaner from an Ace Hardware store, Reynolds was a regular, innocent Floridian. Even so, two things made her face searchable: she did something illegal, and she did it in public.

The fact that she did something illegal gave the police a reason to search her face, but that’s not the important part. What makes all the difference is that Reynolds wasn’t in a private space. The moment Reynolds stepped into a public area, she lost the right to privacy that might have otherwise protected her face against an FRT search. Here’s why.

If You’re Outside, You’re Fair Game

In 1967, the Supreme Court decided in Katz v. United States that what a person knowingly exposes to the public is not subject to Fourth Amendment protection. In other words, what you bring out for everyone to see is fair game. You can’t claim a right to privacy in public. Normally, that’s not really an issue. After all, people don’t generally print their name, address, and online information on fliers and hand them out.

But every time you walk through a mall, or an Ace Hardware store, or along a busy street, you willingly bring your face into public view. And with one picture, or a recording from a security camera, anyone with FRT capabilities in that public space has a chance of finding wherever that face shows up online. Short of wearing a mask, how exactly are citizens supposed to keep their faces private in public?

For now, there isn’t a clear answer. Outlawing picture-taking and security cameras really isn’t an option. Outlawing facial recognition isn’t going to happen, either. There’s no way for the government to simply ban private companies from using FRT, and it’s too powerful a tool to remove from law enforcement.

From a policy standpoint, the balance between FRT’s benefits and privacy will have to be hashed out in the years to come, most likely through trial and error. It pits the interests of the individual against a potential good for the larger community. Sadly, for people like Alim, it’s a balance that’s been abused before.

(“Alim” is a pseudonym provided by National Public Radio to protect this individual’s privacy. Without that protection, what he told reporters about surveillance in China could have landed him in serious legal trouble.)

In the Name of a More Secure State

Meeting up with a friend to grab some lunch at the mall sounds like a straightforward way to spend a Saturday afternoon. It might have been for Alim, if it weren’t for two details: Alim happened to be a Muslim, a Uighur Muslim, and a resident of China’s Xinjiang province.

To get into a Chinese mall, you have to pass through a security checkpoint that, according to NPR, looks like a combination of a metal detector and a subway terminal entrance. You swipe your government-issued ID, have your face scanned by a security camera, and go. To American audiences, it might sound excessive, but government oversight and surveillance are a central component of many areas of Chinese life, including shopping.

Alim’s friend scanned in and passed through without a problem. But when Alim swiped his ID, he got a very different result.

“I scan my ID…and then immediately, an orange alert comes on,” Alim told reporters at NPR. An orange alert is a designation reserved for potential terrorists and criminals.

For a moment, a bewildered Alim stared at the orange light in disbelief before police came to escort him away. After a brief interrogation, Alim was released with the suggestion to stay home if he didn’t want another detention and interrogation at some other security checkpoint.

But the government doesn’t need a checkpoint to find him. With the use of FRT, Chinese law enforcement can locate Alim at a traffic intersection, a convenience store, the bank, or an airport. Alim can’t really leave Xinjiang without the authorities knowing about it. As long as he’s in range of a camera, Alim’s “orange alert” status will follow him wherever he goes.

But why the surveillance? According to the police department in Xinjiang, Alim’s home province, there are three great evils: extremism, terrorism, and separatism. To the Chinese government, the Uighurs, an ethnically and religiously distinct Turkic minority group, present exactly that kind of threat. They’re different. They’re Muslims. They have a very distinct cultural identity. They’re not the version of “Chinese” the government wants to see. Uighur Muslims are seen as a challenge to the homogeneity of China.

Using FRT to keep an eye on people like Alim is part of China’s version of the war on terror, according to The Guardian. “Its targets are not foreigners but domestic minority populations who appear to threaten the Chinese Communist party’s authoritarian rule,” the paper stated in a story from April of last year.

Even though Alim has never broken the law, been arrested, or gone to jail, his ethnic and cultural identity makes him a target of round-the-clock surveillance.

Wang Lixiong, a Chinese author who has written about Xinjiang and the surveillance state, says this kind of surveillance serves a very specific purpose. “The goal here is instilling fear—fear that their surveillance technology can see into every corner of your life,” Wang told The New York Times.

China is ahead of the curve with facial recognition technology. Some of the world’s leading artificial intelligence and FRT software companies, like SenseTime and Megvii, are located in China and financially backed by China’s e-commerce behemoth, Alibaba.

While the strength and abilities of Chinese FRT software continue to improve, the country’s regard for the individual degrades. As a result, China has failed to protect against FRT’s blatant misuse and encroachment on privacy.

Tomorrow’s Rules

The New York Times’s coverage of Clearview AI has opened the door to a conversation about privacy and facial recognition technology in the United States. On one hand, the good FRT can do has been clearly demonstrated by the police departments that use it. More importantly, however, Clearview AI’s sudden appearance on the national scene may force legislators to examine the realities of facial recognition technology in the light of existing legal guidelines.

On March 3, Sen. Ed Markey of Massachusetts released a letter addressed to the creator and owner of Clearview AI, expressing concerns about its use and demanding further transparency from the company. He wrote: “I am equally disturbed by new reports about other alleged Clearview business practices that may threaten the public’s civil liberties and privacy…Reporting also suggests that Clearview has been developing live facial recognition in surveillance cameras and augmented reality glasses targeted at the private sector. Your website requires that consumers submit sensitive information to have their images deleted from your database. These practices point to a dangerous neglect for privacy at Clearview AI.”

Clear parameters defining what companies like Clearview AI and other FRT developers may not do are still lacking. Lawmakers may soon have to draw those boundaries.