Banning Facial Recognition Isn’t Enough

The whole point of modern surveillance is to treat people differently, and facial recognition technologies are only a small part of that.
By Bruce Schneier
Jan 20 2020

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are — using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge — and almost entirely unregulated — data broker industry in the United States that trades on our information. This is how large internet companies like Google and Facebook make their money. It’s not just that they know who we are, it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies — and governments — to treat individuals differently. We are shown different ads on the internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
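The mechanics of that last step are simple enough to sketch in a few lines of Python. All the names, cookie IDs, and records below are invented for illustration; the point is only that one identified transaction is enough to attach a real name to an entire pseudonymous history.

```python
# Browsing events keyed only by an anonymous cookie ID -- no name attached.
browsing_log = [
    {"cookie": "c-7f3a", "site": "news.example", "topic": "mortgages"},
    {"cookie": "c-7f3a", "site": "shop.example", "topic": "strollers"},
    {"cookie": "c-9b21", "site": "blog.example", "topic": "cycling"},
]

# A single credit-card purchase ties one cookie to a billing name.
purchases = [
    {"cookie": "c-7f3a", "card_name": "A. Smith"},
]

def deanonymize(browsing, purchases):
    """Join anonymous browsing records to real names via shared cookie IDs."""
    id_to_name = {p["cookie"]: p["card_name"] for p in purchases}
    profiles = {}
    for event in browsing:
        # Fall back to the pseudonym when no purchase has linked a name yet.
        name = id_to_name.get(event["cookie"], event["cookie"])
        profiles.setdefault(name, []).append(event["topic"])
    return profiles

profiles = deanonymize(browsing_log, purchases)
# "A. Smith" now carries the full history once pseudonymous under c-7f3a;
# c-9b21 stays anonymous only until its owner makes one linkable purchase.
```

The correlation and discrimination work identically before and after the join; the purchase merely swaps the label on the profile.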


The Policing of the American Porch

Ring offers a front-door view of a country where millions of Amazon customers use Amazon cameras to watch Amazon contractors deliver Amazon packages.
By John Herrman
Jan 19 2020

There’s always something happening on the porch.

In Massachusetts, a young man waits for his date at her doorstep while her father grills him. “Bye,” the daughter says as she leaves for the evening, adding an indignant “oh my God.”

In Killeen, Tex., two men, one of whom appears to be holding a gun, take turns launching themselves, feet first, against the front door.

In Sacramento, Calif., a car speeds past a driveway in the middle of the night, then screeches to a halt. Somewhere out of frame, a woman screams.

A bright meteor illuminates a snowy, quiet suburban street in Columbia, Mo.

In Lake Worth, Fla., a bearded man wanders up a dark driveway and licks the doorbell repeatedly. Then he stands back and stares.

These are just a few of the oddities, horrors and comedies of a new American stage, viewed through legions of digital apertures, courtesy of Ring.

The Ring of Surveillance

In 2013, Jamie Siminoff pitched a home security product called Doorbot on NBC’s “Shark Tank.” The device, he said, would allow users to “see and speak with” the people at their doors, using their smartphones. Only one of the investors was interested; Mr. Siminoff rejected his offer.

Still, the “Shark Tank” appearance drew plenty of interest from consumers and, eventually, investors, who together put more than $200 million into the company that was rebranded as Ring. It was acquired by Amazon in 2018. Today, the company sells a variety of video-enabled doorbells, security cameras, and an alarm system. Asked how many devices Ring has sold, Yassi Shahmiri, a company spokeswoman, replied by email: “Ring has millions of users.” According to data from the NPD Group, a market research firm, sales of smart doorbells alone increased by 58 percent from January 2019 to January 2020.

The growth of easy-to-install home-surveillance equipment, and in particular doorbell cameras, has changed American life in ways obvious and subtle. Marketed in part as a solution to package theft, which has grown alongside e-commerce, especially from Amazon, Ring has found an ally in law enforcement.

More than 500 police departments have partnered with the company, gaining access to a service called Neighbors Portal, which allows users to “ask Ring to request video footage from device owners who are in the area of an active investigation,” according to the company. (This footage is often shared by law enforcement with media organizations for broadcast segments.) Some police departments assist in marketing Ring devices to local citizens, in some cases offering government-subsidized discounts, according to documents obtained by Vice.

One such arrangement was announced publicly by Rancho Palos Verdes, Calif., in 2017, in the style of a limited-time sale: “The City and Sheriff’s Department have negotiated a subsidy with,” the Facebook announcement said, in addition to “a limited number of $50 incentives for residents,” toward which the city had committed $100,000. Ring’s efforts to court law enforcement have drawn scrutiny from civil-rights organizations for violating the privacy of users and the subjects of their recordings, and for encouraging profiling by race.

The devices have also altered relations between online shoppers and the people who deliver their orders. On Ring Neighbors, a local social networking service run by the company, users share videos of delivery people carelessly throwing packages, or failing to wait for an answer at the door; others share footage of mail people navigating treacherous ice, or merely waving at the camera.

“I’ve been worried about this,” one UPS employee wrote on Reddit. “Those Ring cameras are everywhere now and going up to houses with packages already delivered I’m afraid they’ll think I’m stealing them.” On a U.S. Postal Service forum, a mail carrier asked: “Anyone else feel kind of creeped out that people are recording and watching you, up close, deliver mail to their house or is it just me?”

Among users, the surveillance is often cast as whimsical. Late last year, Sarah Barnes, a Ring user in Murfreesboro, Tenn., left snacks for her delivery drivers. An Amazon deliveryman, Kyle Smith — who told the “Today” show that people in his position “work like nine to 10 hours and only get like a 30-minute lunch break” — danced happily when he found them. He did not know he was being recorded until the video went viral.

What Are We Watching?

Ring’s “millions” of cameras have produced enormous quantities of raw footage. (Ring’s services crashed, overloaded, on Halloween in 2017. In 2019, the company boasted that its doorbells had “chimed 15.8 million times” on the holiday.)

Ring encourages users to join Neighbors and share videos with locals, and provides fodder for other neighborhood social networks, such as Nextdoor, where conversations already skew paranoid. The company also selects videos from its users to be shared on Ring TV, a video portal run by the company, under categories such as “Crime Prevention,” “Suspicious Activity” and “Family & Friends.” The videos are, essentially, free ads: The terrifying ones might convince viewers to buy cameras of their own; funny or sweet ones, at a minimum, condition viewers to understand front-door surveillance as normal, or even fun.

In Ring, Amazon has something like a self-marketing machine: Amazon customers using Amazon cameras to watch Amazon contractors deliver Amazon packages. A video posted by Kathy Ouma of Middletown, Del., shows a happy deliveryman accepting snacks on her porch. An Amazon logo is plainly visible on the side of his truck. The Ring watermark hovers in the corner of the screen. The video, posted on Facebook, garnered more than 11 million views.

Ring videos also provide a constant stream of news and news-like material for media outlets. The headlines that accompany those videos portray an America both macabre and surreal: “Screams for Help Caught on Ring Camera,” in Sacramento; “Man pleads for help on doorbell camera after being carjacked, shot in Arizona,” in Phoenix; “WOMAN CAUGHT ON MEDFORD DOORBELL CAMERA WITH STOLEN GUN,” in Oregon; “‘Alien abduction’ caught on doorbell cam,” in Porter, Tex. (it was a glitch); “Doorbell camera captures Wichita boy’s plea for help after getting lost.” And then there are videos like one shared by Rob Fox, in McDonough, Ga., in which his dog, locked out of the house, learns to use his doorbell. Mr. Fox posted the video to Facebook and then Reddit, from which the story drew news coverage. Ring contacted him, too, he said, to ask whether the company could use the footage in marketing materials.


The Secretive Company That Might End Privacy as We Know It

A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.
By Kashmir Hill
Jan 18 2020

Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.

Then Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company’s one employee listed on LinkedIn, a sales manager named “John Good,” turned out to be Mr. Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.

The company eventually started answering my questions, saying that its earlier silence was typical of an early-stage start-up in stealth mode. Mr. Ton-That acknowledged designing a prototype for use with augmented-reality glasses but said the company had no plans to release it. And he said my photo had rung alarm bells because the app “flags possible anomalous search behavior” in order to prevent users from conducting what it deemed “inappropriate searches.”

In addition to Mr. Ton-That, Clearview was founded by Richard Schwartz — who was an aide to Rudolph W. Giuliani when he was mayor of New York — and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.

Another early investor is a small firm called Kirenaga Partners. Its founder, David Scalzo, dismissed concerns about Clearview making the internet searchable by face, saying it’s a valuable crime-solving tool.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”


The Pesticide Industry’s Playbook for Poisoning the Earth

[Note:  This item comes from friend Ed DeWath.  DLH]

By Lee Fang
Jan 17 2020

In September 2009, over 3,000 bee enthusiasts from around the world descended on the city of Montpellier in southern France for Apimondia — a festive beekeeper conference filled with scientific lectures, hobbyist demonstrations, and commercial beekeepers hawking honey. But that year, a cloud loomed over the event: bee colonies across the globe were collapsing, and billions of bees were dying.

Bee declines have been observed throughout recorded history, but the sudden, persistent and abnormally high annual hive losses had gotten so bad that the U.S. Department of Agriculture had commissioned two of the world’s most well-known entomologists — Dennis vanEngelsdorp, a chief apiary inspector in Pennsylvania, then studying at Penn State University, and Jeffrey Pettis, then working as a government scientist — to study the mysterious decline. They posited that there must be an underlying factor weakening bees’ immune systems.

At Le Corum, a conference center and opera house, the pair discussed their findings. They had fed bees with extremely small amounts of neonicotinoids, or neonics, the most commonly used class of insecticides in the world. Neonics are, of course, meant to kill insects, but they are marketed as safe for insects that aren’t being directly targeted. VanEngelsdorp and Pettis found that even at nonlethal doses, the bees in the trial became much more vulnerable to fungal infection. Bees carrying an infection will often fly off to die, a virtuous form of suicide designed to protect the larger hive from contagion.

“We exposed whole colonies to very low levels of neonicotinoids in this case, and then challenged bees from those colonies with Nosema, a pathogen, a gut pathogen,” said Pettis, speaking to filmmaker Mark Daniels in his documentary, “The Strange Disappearance of the Bees,” at Apimondia. “And we saw an increase, even if we fed the pesticide at very low levels — an increase in Nosema levels — in direct response to the low-level feeding of neonicotinoids.”

The dosages of the pesticide were so minuscule, said vanEngelsdorp, that they were “below the limit of detection.” The only reason they knew the bees had consumed the neonicotinoids, he added, was “because we exposed them.”

Bee health depends on a variety of synergistic factors, the scientists were careful to note. But in this study, Pettis said, they were able to isolate “one pesticide and one pathogen and we clearly see the interaction.”

The evidence was mounting. Shortly after vanEngelsdorp and Pettis revealed their findings, a number of French researchers produced a nearly identical study, feeding minute amounts of the same pesticide to bees, along with a control group. The study produced results that echoed what the Americans had found.

Drifting clouds of neonicotinoid dust from planting operations caused a series of massive bee die-offs in northern Italy and the Baden-Württemberg region of Germany. Studies have shown neonicotinoids impaired bees’ ability to navigate and forage for food, weakened bee colonies, and made them prone to infestation by parasitic mites.

In 2013, the European Union called for a temporary suspension of the most commonly used neonicotinoid-based products on flowering plants, citing the danger posed to bees — an effort that resulted in a permanent ban in 2018.

In the U.S., however, industry dug in, seeking not only to discredit the research but to cast pesticide companies as a solution to the problem. Lobbying documents and emails, many of which were obtained through open records requests, show a sophisticated effort over the last decade by the pesticide industry to obstruct any effort to restrict the use of neonicotinoids. Bayer and Syngenta, the largest manufacturers of neonics, and Monsanto, one of the leading producers of seeds pretreated with neonics, cultivated ties with prominent academics, including vanEngelsdorp, and other scientists who had once called for a greater focus on the threat posed by pesticides.

The companies also sought influence with beekeepers and regulators, and went to great lengths to shape public opinion. Pesticide firms launched new coalitions and seeded foundations with cash to focus on nonpesticide factors in pollinator decline.

“Position the industry as an active promoter of bee health, and advance best management practices which emphasize bee safety,” noted an internal planning memo from CropLife America, the lobby group for the largest pesticide companies in America, including Bayer and Syngenta. The ultimate goal of the bee health project, the document noted, was to ensure that member companies maintained market access for neonic products and other systemic pesticides.

The planning memo, helmed in part by Syngenta regulatory official John Abbott, charts a variety of strategies for advancing the pesticide industry’s interests, such as, “Challenge EPA on the size and breadth of the pollinator testing program.” CropLife America officials were also tapped to “proactively shape the conversation in the new media realm with respect to pollinators” and “minimize negative association of crop protection products with effects on pollinators.” The document, dated June 2014, calls for “outreach to university researchers who could be independent validators.”

The pesticide companies have used a variety of strategies to shift the public discourse.

“America’s Heartland,” a PBS series shown on affiliates throughout the country and underwritten by CropLife America, portrayed the pollinator declines as a mystery. One segment from early 2013 on the crisis made no mention of pesticides, with the host simply declaring that “experts aren’t sure why” bees and butterflies were disappearing.


China Has a Big Economic Problem, and It Isn’t the Trade War

[Note:  This item comes from friend David Rosenthal.  DLH]

Beijing is turning its back on the private sector, a main driver of growth.
By Yasheng Huang
Jan 17 2020

A decade ago, after the 2008 global financial crisis, China seemed to save its economy by decoupling it from the rest of the world’s with a massive domestic investment program. Today, it is progress on the trade war with the United States, or the recoupling of China’s economy with those of other countries, that is seen as the way for it to regain momentum.

But to think in these terms is to miss the main point: The trade war has merely compounded an economic slowdown in China that is substantially of the country’s own making.

The deceleration is serious. In 2018, China’s gross domestic product grew by about 6.5 percent, the lowest rate since 1990. And part of the slowdown is a predictable result of deliberate government decisions, in particular policies that favor the state sector at the expense of the private sector — even though the state sector is woefully inefficient, whereas the private sector has long been the country’s growth engine.

The most striking evidence, documented by the Peterson Institute for International Economics in October, is the drop in credit to the private sector and the rise in credit to the state sector in recent years. The largest banks in China are state-owned and hew closely to government command. In 2013, 35 percent of bank credit to nonfinancial enterprises went to state companies and 57 percent to private companies; in 2014, 60 percent went to the state sector, and only 34 percent to the private sector. (The rest went to enterprises with foreign or mixed ownership.) By 2016, the distribution was even more skewed, with 83 percent of credit going to state-owned or state-controlled companies, and just 11 percent to private firms. (According to the Peterson report, 2016 is the last year for which official data are available.)

The official rationale for this policy was to reduce risk in the financial sector overall. Private-sector businesses tend to rely on riskier, shadowy sources of informal finance. But this practice is partly the result of existing policy and legal biases against private companies. And simply strengthening lending standards without improving the treatment of the private sector was inevitably going to squeeze its access to credit.

In the face of restrictions on liquidity, defaults and bankruptcies in the private sector have multiplied. The Chinese banking system operates on the basis of cross-guarantees, which means that a single bankruptcy can ricochet through an entire network of connections. If one company wants to take out a loan, it usually needs to secure a guarantee from another company, and so any company that struggles to pay back its loans is also endangering any companies to which it issued guarantees. The private sector accounted for 126 of 165 bond defaults in 2018.
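The ricochet the paragraph describes is, structurally, a graph traversal. The sketch below uses invented companies and guarantee links to show how one default spreads along the article’s stated channel: a struggling borrower endangers every company to which it issued guarantees.

```python
from collections import deque

# guarantees[x] = companies whose loans x has guaranteed. Per the article,
# when x struggles to repay, the companies it guaranteed are endangered too.
guarantees = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": [],
    "D": ["E"],
    "E": [],
    # F issued a guarantee for A. In reality A's default would also hit F
    # (the guarantor pays) -- a second channel this toy model leaves out.
    "F": ["A"],
}

def cascade(first_default, guarantees):
    """Breadth-first spread of distress from one defaulting company."""
    distressed = {first_default}
    queue = deque([first_default])
    while queue:
        firm = queue.popleft()
        for partner in guarantees.get(firm, []):
            if partner not in distressed:
                distressed.add(partner)
                queue.append(partner)
    return distressed

affected = cascade("A", guarantees)
# A single default at A reaches B, C, D and E through chained guarantees.
```

Even in this five-firm toy network, one default touches most of the system, which is why tightening credit to guarantee-linked private firms multiplies bankruptcies rather than isolating them.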

In late 2018, the Chinese government did begin to increase credit flows to the private sector, but at the same time, it asserted more control over it. For example, the municipal authorities of the city of Hangzhou, an entrepreneurial hub, announced in September that officials would be assigned to some 100 private companies, including the e-commerce giant Alibaba. The stated justification for the measure was to improve communication between the government and private companies — a goal that could more readily be reached (as Hangzhou itself has done before) with some deregulation and less bureaucracy.

Against this background, there has been a significant amount of churn at the top levels of management of flagship companies. Jack Ma of Alibaba, Pony Ma of the internet company Tencent and Robin Li of Baidu, another major internet company, all relinquished some of their executive positions in recent times. These are not isolated events. According to the Southern Metropolis Daily, a Chinese newspaper, there were 1,401 executive-level resignations at listed companies between January 2019 and June 2019. The figure had ranged between 226 and 264 in the first half of each year from 2015 to 2018. Something has gone awry in China’s corporate world.

It’s unclear why China’s leaders want to curb the private sector when it has served the economy so well. One reason could be concern about growing income inequality. China’s Gini coefficient, a measure of such a gap in a society, is both higher than that of the United States and among the highest in the world, according to a 2014 study in Proceedings of the National Academy of Sciences of the United States of America. Yet many of the truly private capital owners in China are first-generation entrepreneurs who hail from humble backgrounds. Much of their income, especially in the manufacturing and high-tech sectors, really was earned, rather than inherited in any way. Their activities aren’t causing income inequality. The true reason for that gap, as the PNAS paper points out, lies in the “structural factors attributable to the Chinese political system.”

That the Chinese economy is slowing down isn’t necessarily a bad thing, at least not in itself. The consequences could be entirely benign. Ultimately, it is a country’s level of development — higher standards of living and a longer life expectancy, among other things — not its rate of growth, that matters most for the welfare of the people. A richer China, even with lower growth rates, would be cause for celebration, not distress.


Five myths about bipartisanship

No, it wasn’t the norm throughout U.S. history.
By Thomas E. Mann and Norman J. Ornstein
Jan 17 2020

It is common for Americans to rue the absence of bipartisanship. Even expressly partisan figures like Senate Majority Leader Mitch McConnell and President Trump have called for more cross-party collaboration. Former vice president Joe Biden has said that “no party should have too much power.” And there is even a prestigious think tank, the Bipartisan Policy Center, dedicated to the idea. The calls may be nothing new, but they have increased in intensity and volume as our times have become hyper-polarized, rendering bipartisanship the subject of many myths. 

Bipartisanship was the norm through most of U.S. history.

NPR lamented that Sen. John McCain’s death in 2018 symbolized “the near-extinction of lawmakers who believe in seeking bipartisanship to tackle big problems.” A bipartisan pundit duo wrote in the Hill before the 2018 election that bipartisanship had a “strong record of success,” citing President Bill Clinton collaborating with Republicans and President Ronald Reagan working with Democratic House Speaker Tip O’Neill.

But our history is littered with times when partisan rancor was literally deadly. As historian Joanne Freeman’s “The Field of Blood” points out, disputes between the parties included plenty of violence in Congress in the decades before the Civil War. In 1902, a fistfight broke out on the Senate floor when Democratic Sen. Benjamin Tillman was angered that fellow Democrat John McLaurin was even considering siding with Republicans. Physical altercations between the parties abated in the 20th century, at least, but partisan conflict has remained the norm. The period from the 1930s into the 1970s, when a “conservative coalition” of Republicans and Southern Democrats worked together to form majorities, is the exception. And that bipartisanship was achieved at the cost of preserving and protecting Jim Crow. 

The partisan divide is driven by policy.

In their platforms, the parties have stark differences in outlook and policy. Journalist Jonathan Salant says they are “180 degrees apart.” The Pew Research Center concludes that Democrats and Republicans are growing ever more divided on fundamental priorities.

But more than specific policies, strong tribal identities and intense competition for control of government drive our partisan polarization. One psychology study found, for instance, that public views on climate change polarize when Democrats and Republicans are told that the policies they are asked to evaluate were supported or opposed by the other party. The Affordable Care Act was designed to appeal to Republicans by adopting key elements from the GOP alternative to the 1993 Clinton health-care plan and from then-Gov. Mitt Romney’s plan in Massachusetts. The unified Republican opposition was not about policy differences but was part of a deliberate strategy, crafted on the eve of Barack Obama’s inauguration in 2009, to oppose and delegitimize all his major initiatives. On immigration reform, the Senate had broad bipartisan agreement during the Obama years, but legislation died in the House because of a desire to keep the president from securing a victory. Once Donald Trump took office, Republican senators who had supported those reforms turned against them. For example, Florida’s Marco Rubio, one of the key architects of the earlier reform, shifted to back Trump’s more restrictive approach.

From health care to climate to stimulus to deficit reduction to trade, debates where there was long common ground — and where there still is significant overlap — have been superseded by partisan warfare.

Bipartisanship is more valued by voters than by politicians.

Voters often express frustration that their elected representatives just won’t stop bickering and do the right thing. More than two decades ago, John Hibbing and Elizabeth Theiss-Morse documented in their book “Congress as Public Enemy” that voters have little patience for the actual workings of democracy, its unruly debates and inevitable compromises. As a result, they embrace the rhetoric of bipartisanship to avoid the unseemliness of politics. A recent survey from USA Today, Public Agenda and Ipsos found that Americans “are sick and tired of being so divided.”

In reality, the best-informed voters, who make up less than a majority of the electorate, are also the voters most attached to parties. Reinforced by activists and partisan media, these voters expect their representatives to toe the party line, not embrace bipartisanship. This is consistent with the well-documented phenomenon of affective negative partisanship: Voters view the other party as the enemy and don’t approve of their representatives consorting with it. The credible threat of a primary challenge is a frequent topic of discussion and concern in the halls of Congress.

Major policy changes require bipartisanship.

When then-first lady Hillary Clinton struggled with the administration’s version of health-care reform in 1993, then-Sen. Daniel Patrick Moynihan (D-N.Y.) said, “Something like this passes with 75 votes or not at all.” Sen. Chris Coons (D-Del.) has said a reason most of his legislation is bipartisan is that “if you introduce legislation that only has support from one party, it will not last very long.”

But the notion that major social policy requires broad bipartisan consensus has been belied by a host of examples. It is true that many Republicans joined Democrats in the final votes to pass Social Security and Medicare, and a larger number worked with the majority Democrats on improving the legislation and making sure the programs were implemented. The same happened with Democrats’ support for implementing George W. Bush’s Medicare Part D prescription drug benefit. But bitter partisan warfare and rhetoric marked the lead-up to these programs’ passage, and successes came because enough members of the majority party backed those proposals. On Medicare Part D, for instance, initial Democratic support declined dramatically in the face of partisan hardball played by then-Speaker Dennis Hastert and Senate Republican leaders. The New York Times reported the day after the House vote: “A fiercely polarized House approved legislation on Saturday that would add prescription drug benefits to Medicare, after an all-night session and an extraordinary bout of Republican arm-twisting to muster a majority. The Senate opened its debate under threat of a filibuster.”


The case for … cities where you’re the sensor, not the thing being sensed

The case for … cities where you’re the sensor, not the thing being sensed
Imagine your smartphone knew everything about the city – but the city didn’t know anything about you. Wouldn’t that be ‘smarter’ than our current surveillance dystopia?
By Cory Doctorow
Jan 17 2020

“Smart city” is one of those science fiction phrases seemingly designed to make you uneasy, like “neuromarketing” or “pre-crime”. It’s impossible to be alive in this decade and not find something unsettling in the idea of our cities becoming “smart”.

It’s not hard to see why: “smart” has become code for “terrible”. A “smart speaker” is a speaker that eavesdrops on you and leaks all your conversations to distant subcontractors for giant tech companies. “Smart watches” spy on your movements and sell them to data-brokers for ad-targeting. “Smart TVs” watch you as you watch them and sell your viewing habits to brokers.

Smart cities are studded with sensors that monitor what’s going on with people, vehicles and infrastructure, and use actuators to change things based on the resultant data.

Put that way, it’s hard to imagine a city that’s not “smart”. When you call 999, you are acting as a sensor. The fire brigade comes roaring to the rescue in a fire engine – that is, a giant, high-speed, actuating robot. Transit systems are all sensors (“Is there a train ahead of me still at the platform?”) and actuators (“Hit the brakes!”), and they’ve been steadily exposing more and more of the data they generate to potential riders, so you can text a number or use an app or check a lighted signboard to find the wait time until the next vehicle.

All this raises an interesting question: why isn’t it creepy for you to know when the next bus is due, but it is creepy for the bus company to know that you’re waiting for a bus?

It all comes down to whether you are a sensor – or a thing to be sensed. In the “internet of things,” we’re promised technology that will allow us to project our will on to our surroundings, changing our lighting or unlocking our doors or adjusting our thermostats from anywhere in the world. But anyone who’s used these technologies for more than a few minutes quickly starts to suspect that they are also a thing, just another thing to be sensed and acted upon from a distance, generally by unaccountable algorithms seeking to corral us into altering our conduct to maximise returns to their manufacturers’ shareholders.

As with cities, homes were sensing and actuating long before the “internet of things” emerged. Thermostats, light switches, humidifiers, combi boilers … our homes are stuffed full of automated tools that no one thinks to call “smart,” largely because they aren’t terrible enough to earn the appellation.

Instead, these were oriented around serving us, rather than observing or controlling us (with rare exceptions, such as electricity and gas meters, which were designed on the assumption that they were going into hostile territory and that we couldn’t be trusted not to tamper with them). In your home, you are not a thing, you are a person, and the things around you exist for your comfort and benefit, not the other way around.

Shouldn’t it be that way in our cities?

There’s nothing wrong – or new – in the idea that we should sense what’s happening in our built environments and alter how our systems perform to respond to those sensors’ observations. There’s nothing objectionable about adding more trains when the system is busy, or recording accurate usage data to inform our urban planning debates. The problem is that the smart city, as presently conceived, is a largely privatised affair designed as a public-private partnership to extract as much value as possible from its residents while providing the instrumentation and infrastructure to control any civil unrest that such an arrangement might provoke. Far from treating residents as first-class users of smart infrastructure, they are treated as something between gut flora and pathogen, an inchoate mass of troublesome specks to be nudged into deterministic, convenient-to-manage patterns.

It needn’t be this way. As is so often the case with technology, the most important consideration isn’t what the technology does: it’s who the technology does it to, and who it does it for. The sizzle reel for a smart city always involves a cut to the control room, where the wise, cool-headed technocrats get a god’s-eye view over the city they’ve instrumented and kitted out with electronic ways of reaching into the world and rearranging its furniture.

It’s a safe bet that the people who make those videos imagine themselves as one of the controllers watching the monitors – not as one of the plebs whose movements are being fed to the cameras that feed the monitors. It’s a safe bet that most of us would like that kind of god’s-eye view into our cities, and with a little tweaking, we could have it.

If we decide to treat people as sensors, and not as things to be sensed – if we observe Kant’s injunction that humans should be “treated as an end in themselves and not as a means to something else” – then we can modify the smart city to gather information about the things and share that information with the people.

Imagine a human-centred smart city that knows everything it can about things. It knows how many seats are free on every bus, it knows how busy every road is, it knows where there are short-hire bikes available and where there are potholes. It knows how much footfall every metre of pavement receives, and which public loos are busiest.