What’s Wrong With Airport Face Recognition?
A CBP program envisions applying face recognition to all passengers departing the United States, including Americans.
By Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project
August 4, 2017
U.S. Customs and Border Protection (CBP) has launched a “Traveler Verification Service” (TVS) that envisions applying face recognition to all airline passengers, including U.S. citizens, boarding flights exiting the United States. This system raises very serious privacy issues.
What we know about this program comes from a privacy impact assessment DHS issued on the program, and a briefing CBP Deputy Executive Assistant Commissioner John Wagner gave to privacy advocates in Washington this week. CBP’s plan is to install cameras at boarding gates to take photos of, and apply face recognition to, all cross-border passengers as they board their aircraft. Currently operating in six airports around the country (Boston Logan, New York JFK, Washington Dulles, Hartsfield-Jackson in Atlanta, Chicago O’Hare, and Bush Intercontinental in Houston), the TVS program is part of a larger program called “Biometric Entry/Exit.” That program is DHS’s attempt to comply with a congressional requirement that the agency use biometrics to track visitors entering and exiting the United States, in order to identify individuals who overstay their visas.
The way the system works is that before departure, CBP obtains the passenger manifest for each flight, and then reaches through the government’s extensive, interconnected set of databases to assemble photographs of each passenger. Those include passport and visa photos as well as photos “captured by CBP during the entry inspection” and “from other DHS encounters.” The agency then compares face recognition templates (essentially, patterns) derived from those database photos to templates derived from live photographs taken by a camera at the boarding gate.
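The matching step described above can be sketched in miniature. This is purely illustrative: real face-recognition templates are produced by proprietary models, and the function names, vectors, and threshold below are invented assumptions, not CBP's actual system. A template is modeled here as a short list of numbers, and matching as a cosine-similarity comparison against a pre-assembled gallery built from the flight manifest.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_passenger(live_template, gallery, threshold=0.9):
    """Compare a gate-camera template against the gallery of
    database-photo templates assembled for this flight's manifest.
    Returns the best-matching passenger ID above the threshold,
    or None if no gallery template is close enough."""
    best_id, best_score = None, threshold
    for passenger_id, db_template in gallery.items():
        score = cosine_similarity(live_template, db_template)
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id
```

Note that the threshold choice drives the system's error tradeoff: set it too low and travelers are misidentified as one another; set it too high and legitimate passengers fail to match their own file photos.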
There are a number of very serious problems with this program from a privacy standpoint:
• It utilizes the most dangerous biometric: face recognition. While Congress has directed CBP to collect biometrics from noncitizens as part of the entry/exit program, Congress did not specify which biometric the agency should use, and from a privacy perspective, face recognition is (along with iris recognition) the most dangerous biometric to use. That’s because it has greater potential for expansion and misuse: for example, you can subject thousands of people an hour to face recognition when they’re walking down the sidewalk without their knowledge, let alone permission or participation. You can’t do that with fingerprints. Face recognition databases could be plugged into every surveillance camera in America, creating a giant infrastructure for government tracking and control. Wagner told me that the agency opted for face recognition instead of fingerprints because of the greater ease and practicality of the technology as well as the “optics of us taking fingerprints from people.” Of course, fingerprints do have a negative association in the public mind — but that’s because of their use in tracking and identifying accused criminals. And tracking and identifying is exactly what the photos are being used for here. If, as Wagner suggests, taking a photo seems more benign to the public, that’s only because the public’s intuitions about privacy have not caught up with what the technology can do. And fingerprints work fine in the context of international travel, as they are already used for the Global Entry frequent traveler program.
• It normalizes face recognition as a checkpoint technology. Security technologies that are applied only at airports because of heightened government concerns about the security of air travel tend, over time, to expand outward into society. Magnetometers, for example, spread from airports to a wide variety of venues, including sports stadiums, government buildings, and even some high schools. That dynamic takes place partly because airport deployment socializes people to accept such technologies as normal and acceptable, and partly because government agencies and others push them outward in a futile quest for perfect security everywhere. Wagner said of face recognition, “I think this is where the technology is headed.” But “the technology” is not an autonomous, inevitable force; we as a society are in control, and can choose what to deploy and not to deploy. And we should not want to turn into a checkpoint society, where we are subject at every turn to ceaseless status and identity checks that constantly monitor, evaluate, and sort citizens into “go” and “no-go” categories. The ease of implementing face recognition makes that an all-too-real threat.