[Note: This item comes from friend David Rosenthal. David's comment: "See this blog post from nearly two years ago with the 'Live Crime In Your Neighborhood' channel: https://blog.dshr.org/2018/01/the-box-conspiracy.html" DLH]
Amazon’s home security company, Ring, planned neighborhood “watch lists” built on facial recognition
By Sam Biddle
Nov 26, 2019
Ring, Amazon’s crimefighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.
The planning materials envision a seamless system whereby a Ring owner would be automatically alerted when an individual deemed “suspicious” was captured in their camera’s frame, something described as a “suspicious activity prompt.”
It’s unclear who would have access to these neighborhood watch lists, if implemented, or how exactly they would be compiled, but the documents refer repeatedly to law enforcement, and Ring has forged partnerships with police departments throughout the U.S., raising the possibility that the lists could be used to aid local authorities. The documents indicate that the lists would be available in Ring’s Neighbors app, through which Ring camera owners discuss potential porch and garage security threats with others nearby.
Ring spokesperson Yassi Shahmiri told The Intercept that “the features described are not in development or in use and Ring does not use facial recognition technology,” but would not answer further questions.
This month, in response to continued pressure from news reports and a list of questions sent by Massachusetts Sen. Edward Markey, Amazon conceded that facial recognition has been a “contemplated but unreleased feature” for Ring, but would only be added with “thoughtful design including privacy, security and user control.” Now, we know what at least some of that contemplation looked like.
Mohammad Tajsar, an attorney with the American Civil Liberties Union of Southern California, expressed concern over Ring’s willingness to plan the use of facial recognition watch lists, fearing that “giving police departments and consumers access to ‘watch listing’ capabilities on Ring devices encourages the creation of a digital redline in local neighborhoods, where cops in tandem with skeptical homeowners let machines create lists of undesirables unworthy of entrance into well-to-do areas.”
Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.
Mounting Concern About Ring
Once known only for its line of internet-connected doorbell cameras marketed to the geekily cautious, Ring has quickly turned into an icon of unsettling privatized surveillance. The Los Angeles company, now owned by Amazon, has been buffeted this year by reports of lax internal security, problematic law enforcement partnerships, and an overall blurring of the boundaries between public policing and private-sector engineering. Earlier this year, The Intercept published video of a special online portal Ring built so that police could access customer footage, as well as internal company emails about what Ring’s CEO described as the company’s war on “dirtbag criminals that steal our packages and rob our houses.”
Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds. The automated approach to watch-listing described in the documents reviewed by The Intercept may seem less unsettling than that human-based approach, but it potentially allows for a litany of its own problems, like false positives and other forms of algorithmic bias.
In its public-relations efforts, Ring has maintained that only thieves and would-be criminals need to worry about the company’s surveillance network and the Neighbors app. From the way Ring’s products are designed to the way they’re marketed, the notion of “suspicion” remains front and center; Ring promises a future in which “suspicious” people up to “suspicious” things can be safely monitored and deterred from afar.
But “suspicious” is an entirely squishy concept with some potentially very dangerous interpretations, a byword of dog-whistling neighborhood racists who hope to drape garden-variety prejudice beneath the mantle of public safety. The fact remains that anyone moving past a home equipped with Ring cameras is unavoidably sucked into a tech company dragnet, potential fodder for overeager chatter among the suburban xenophobe set. To civil libertarians, privacy scholars, and anyone generally nervous about the prospect of their neighbors forming a collective, artificially intelligent video panopticon maintained by Amazon for unregulated use by police, Ring’s potential consequences for a community are clear.
Earlier this fall, Motherboard reported on a push by Ring to encourage camera owners to seek out, identify, and report to police anything and anyone they considered “unusual” in exchange for product discounts. According to the story, Ring “encouraged people to report all ‘suspicious activity,’ including loitering, ‘strange vans and cars,’ ‘people posing as utility workers,’ and people walking down the street and looking into car windows.”
Documents Show “Proactive Suspect Matching”
According to the Ring documents reviewed by The Intercept, which have not been previously reported, the company planned a string of potentially invasive new surveillance features for its product line, of which the facial recognition-based watch-list system is one part.
In addition to the facial watch lists, Ring has also worked on a so-called suspicious activity prompt feature that would alert users via in-app phone notification when a “suspicious” individual appears near their property’s video feeds. In one document, this feature is illustrated with a mockup of a screen in the Neighbors app, showing a shabbily dressed man walking past a Ring owner’s garage-mounted camera. “Suspicious Activity Suspected,” warns the app. “This person appears to be acting suspicious. We suggest alerting your neighbors.” The app then offers a large “Notify Neighbors” button. How exactly “suspicious” would be defined is left a mystery by the document.