Welcome to the dystopian future that is facial recognition technology. Most of us willingly expose ourselves to this technology daily. We use it to unlock our phones, identify people on social media platforms, and even track school attendance.
If we think about the bigger picture, it can be used to diagnose diseases, reunite missing children with their families, and facilitate secure transactions. But behind the sci-fi sheen of biometric technology lies an unfolding story with the potential to become more dangerous than we could have anticipated.
To understand facial recognition, we need to see biometric tech from all angles. Let’s look at what’s happening right here, right now, and how we got to this futuristic reality.
You can run, but you can’t hide.
Government use of these programs is on the rise across the country. The comparison to a George Orwell novel is uncanny: a work of dystopian science fiction come to life. Critics say the technology poses a serious threat to Americans’ privacy and challenges our civil liberties. By enabling the unwarranted monitoring of citizens, we are essentially building a surveillance state.
It may be a hard pill to swallow, but in many ways, we’re already there. AI surveillance and biometrics are no longer works of fiction. They’re real parts of American life. Most of us are caught on camera every single day. Nearly every location is equipped with surveillance of some capacity, and for the most part, we’ve accepted this.
What’s more concerning is the fact that millions of us are already in a face recognition database, and we didn’t consent to it. In 2016, more than 117 million people, over half of American adults, were in a law enforcement facial recognition network. In many states, including Illinois, the FBI is allowed to use facial recognition technology to scan through the DMV database of drivers’ license photos.
Beyond law enforcement, this biometric technology is being implemented into marketing, social media, and retail strategies. Last year, Walgreens debuted a new facial recognition tech that targets ads to individual customers standing at cooler doors.
The technology was pioneered by Cooler Screens, a marketing company focused on bringing a tailored digital experience to brick-and-mortar locations. Cameras are connected to face-detection sensors that can determine a customer’s age and gender, as well as their emotional response to products. According to the Atlantic, the system also has iris tracking capabilities, allowing the company to collect data on which display items are most frequently looked at.
It’s important to note that the Cooler Screens system does not rely on facial recognition algorithms in the same way law enforcement systems do. Instead, cameras analyze faces to make inferences about the customer. Shoppers are analyzed, but never individually identified. This distinction is essential, especially in Illinois.
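To make the distinction concrete, here is a minimal sketch in Python. Everything in it is illustrative, not any vendor’s actual system: detection-style analysis infers attributes from a face and discards it, while recognition-style analysis matches a face vector against a database of enrolled identities.

```python
import math

# Illustrative 4-dimensional "face vectors"; a real system would use
# embeddings from a trained model. Names, values, and the 0.9 threshold
# are made up for this sketch.
ENROLLED = {
    "alice": [0.9, 0.1, 0.0, 0.2],
    "bob":   [0.1, 0.8, 0.3, 0.0],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def detect_attributes(face_vector):
    """Detection-style analysis: infer traits, never an identity."""
    # Placeholder output; a real system would run age/gender/emotion
    # classifiers on the face here.
    return {"age_range": "25-34", "emotion": "neutral"}

def recognize(face_vector, database, threshold=0.9):
    """Recognition-style analysis: look up an identity in a database."""
    best = max(database, key=lambda name: cosine(face_vector, database[name]))
    if cosine(face_vector, database[best]) >= threshold:
        return best
    return None  # no confident match

probe = [0.88, 0.12, 0.05, 0.18]
print(detect_attributes(probe))    # attributes only, no identity
print(recognize(probe, ENROLLED))  # identity lookup: "alice"
```

The legal line discussed below roughly tracks this difference: the recognition path collects and matches biometric identifiers tied to a person, while the detection path infers attributes without ever linking a face to an identity.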
Illinois is one of the few states to limit the use of facial recognition devices for commercial purposes. Under the Illinois Biometric Information Privacy Act (BIPA), businesses must receive written consent from individuals before obtaining biometric information such as fingerprints, retina scans, and facial geometry scans. They must also disclose how they intend to use this data.
As a result, major tech companies must tailor their products specifically for Illinois users. Both Facebook and Google have faced class-action lawsuits for allegedly violating BIPA policies in their photo-tagging products. On several occasions, Facebook has pushed for legislative revisions to the law, but so far has been unsuccessful. Home security cameras like Nest and Ring ship with facial recognition features disabled in Illinois. Google’s viral Art Selfie app is banned in the state too.
In the class-action case Rosenbach v. Six Flags Entertainment Corp., plaintiff Stacy Rosenbach contended that Six Flags Great America violated BIPA when the theme park required her 14-year-old son to scan his fingerprints to use a season pass. This biometric scan was done without parental consent. Rosenbach alleged that Six Flags never informed her about the fingerprint requirement when she bought the pass, and it never provided a policy detailing how it would use or store the information.
In court, Six Flags argued it couldn’t be held responsible unless the plaintiff demonstrated a physical injury from the unauthorized collection. The Illinois Supreme Court was unconvinced by the argument. It ruled that “a person need not have sustained actual damage beyond violation of his or her rights under the Act,” as this is a violation of privacy.
Although these systems are a powerful marketing tool, they also have the potential to be a privacy nightmare. At the moment, facial recognition is almost entirely unregulated outside of Illinois. Perhaps surprisingly, reining it in has become a bipartisan issue: members of Congress from both parties have called for legislative reform. The Commercial Facial Recognition Privacy Act was introduced at the beginning of 2019 but has not yet passed the Senate. This federal bill would prohibit certain commercial entities from using facial recognition and require affirmative consent from the end-user, much like BIPA.
Another federal bill, the Facial Recognition Technology Warrant Act, was introduced at the end of 2019. It would limit law enforcement’s ability to surveil Americans with biometric technology. If passed, it would require law enforcement agencies to obtain a court order before tracking anyone for longer than 72 hours, and would cap any such tracking at 30 days.
Despite a decade of use in law enforcement, no major American crimes have been solved with the technology. Its most notable success to date: a Florida woman arrested for stealing two grills from a hardware store. While top FBI officials insist the technology helps “safeguard the American people,” it undoubtedly has a rocky track record.
If you dig a little further, you’ll find that facial recognition algorithms have a serious bias against minorities. For example, women of color are ten times more likely to be misidentified than white women. This failure to match non-white faces to the correct identity has already led to wrongful arrests. Taken together, these facts have the makings of a disaster.
A Toxic Potential
China has proven that facial recognition technology can be hazardous.
In 2019, the Chinese government faced international outrage for tracking ethnic Muslims with AI technology. Chinese start-up companies have developed facial recognition algorithms that exclusively look for members of the minority group, known as Uighurs. The program identifies the population based on their appearance and keeps records of their location. These facial recognition algorithms are integrated into China’s growing network of sophisticated surveillance cameras.
Already, we can see a glaring problem. While this is undoubtedly an example of racial profiling using AI, it becomes even more troubling when considering China’s detainment of this minority group. As many as a million Uighurs are held captive in internment camps across the far western Xinjiang territory.
The Chinese government insists the camps are for voluntary re-education, but leaked official documents prove otherwise. An international investigation yielded a 9-page memo with clear orders to run camps as high-security prisons with forced assimilation. “It’s a total transformation that is designed specifically to wipe the Muslim Uighurs of Xinjiang as a separate cultural group off the face of the Earth,” said Ben Emmerson QC, a leading human rights lawyer and adviser to the World Uighur Congress.
Emmerson said the operation is a “mass brainwashing scheme designed and directed at an entire ethnic community.” One leaked document includes explicit direction to arrest Uighurs and prosecute them without a trial. Another suggests that China’s embassies and consulates are involved in the international tracking of Uighurs living abroad.
For those keeping score at home, we have a case of government-ordered ethnocide affecting over 11 million people, fueled by facial recognition technology. Many observers have called it one of the worst human rights crises in the world today.
In addition to federal agencies, local Chinese authorities now use the technology to target Uighurs. Law enforcement in the city of Sanmenxia screened Chinese residents over 500,000 times in one month using facial recognition algorithms. The local demand for these cameras is quickly growing too.
Facial Recognition Expansion
YITU Technology, the company known for the industry’s most advanced facial recognition software, has ambitions to expand internationally. A push in this direction could quickly put ethnicity-focused facial recognition software into the hands of other authoritarian governments, extremist groups, and the like.
- China plans to be the global leader in artificial intelligence by 2030, a market where the facial recognition piece alone is expected to garner $9.6 billion by 2022.
- China’s facial recognition database includes nearly every one of China’s 1.4 billion citizens.
- Shanghai-based YITU Technology has gained wide recognition for its facial scan platform that can identify a person from a database of at least 2 billion people in a matter of seconds.
YITU’s international expansion reflects a growing reality: recognition algorithms can be linked to any digital camera and any database. Our faces are no longer just a unique part of our bodies. According to Vox, they have become “biometric data that can be copied an infinite number of times and stored forever.” Russia has already demonstrated this by enabling face recognition for its most popular search engine, Yandex, as Vox explains in its December 2019 video, “What Facial Recognition Steals from Us.”
In this video, Vox’s Senior Editorial Producer Joss Fong poses a question that puts the entire scope of facial recognition into perspective: “We typically think of public and private as being opposites, but is there such thing as having privacy when we’re in public?” From what we’ve found in news reports, court documents, opinion pieces, and privacy studies, the answer is no. Still, facial recognition cuts both ways.
On the one hand, we have technology systems that are far more advanced than we could have ever imagined. More efficient marketing processes are possible and are already taking shape. On the other, facial recognition can be an invasion of privacy and can leave millions of people vulnerable. We’ve already seen its damaging potential through a growing number of court cases and arrests.
Yes, these are contradictory extremes. In many cases, that’s just how technology is; few things in tech have a middle ground. As a result, we’re faced with a tough decision. As a society, do we choose privacy protection and restrict positive opportunities, or do we nurture tech growth and sacrifice our anonymity? We haven’t found an answer yet. Whatever decision we settle on, it will undoubtedly shape the ethics of marketing and surveillance for years to come.