Every Part of the Supply Chain Can Be Attacked

When it comes to 5G technology, we have to build a trustworthy system out of untrustworthy parts.
By Bruce Schneier
Sep 25 2019

The United States government’s continuing disagreement with the Chinese company Huawei underscores a much larger problem with computer technologies in general: We have no choice but to trust them completely, and it’s impossible to verify that they’re trustworthy. Solving this problem — which is increasingly a national security issue — will require us to both make major policy changes and invent new technologies.

The Huawei problem is simple to explain. The company is based in China and subject to the rules and dictates of the Chinese government. The government could require Huawei to install back doors into the 5G routers it sells abroad, allowing the government to eavesdrop on communications or — even worse — take control of the routers during wartime. Since the United States will rely on those routers for all of its communications, we become vulnerable by building our 5G backbone on Huawei equipment.

It’s obvious that we can’t trust computer equipment from a country we don’t trust, but the problem is much more pervasive than that. The computers and smartphones you use are not built in the United States. Their chips aren’t made in the United States. The engineers who design and program them come from over a hundred countries. Thousands of people have the opportunity, acting alone, to slip a back door into the final product.

There’s more. Open-source software packages are increasingly targeted by groups installing back doors. Fake apps in the Google Play store illustrate vulnerabilities in our software distribution systems. The NotPetya worm was distributed by a fraudulent update to a popular Ukrainian accounting package, illustrating vulnerabilities in our update systems. Hardware chips can be back-doored at the point of fabrication, even if the design is secure. The National Security Agency exploited the shipping process to subvert Cisco routers intended for the Syrian telephone company. The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.
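A common partial defense against fraudulent updates of the NotPetya variety is to check a downloaded package against a digest the vendor publishes through a separate channel. Here is a minimal sketch in Python; the payload bytes and version string are hypothetical, and real update systems use signed metadata rather than a bare hash:

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of an update payload."""
    return hashlib.sha256(data).hexdigest()

def verify_update(payload: bytes, expected_digest: str) -> bool:
    """Accept the update only if its digest matches the value the
    vendor published out-of-band (e.g. on a signed release page).
    A compromised update server can swap the payload, but it cannot
    make the tampered payload match the published digest."""
    return hmac.compare_digest(sha256_digest(payload), expected_digest)

# Hypothetical release: the vendor publishes this digest alongside v1.2.3.
good_payload = b"accounting-app v1.2.3 installer bytes"
published_digest = sha256_digest(good_payload)

assert verify_update(good_payload, published_digest)
assert not verify_update(good_payload + b" + backdoor", published_digest)
```

Of course, this only relocates the trust: if the channel publishing the digest, or the build server producing the payload, is itself compromised, the check passes anyway — which is exactly the point that every part of the chain can be attacked.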

And while nation-state threats like China and Huawei — or Russia and the antivirus company Kaspersky a couple of years earlier — make the news, many of the vulnerabilities I described above are being exploited by cybercriminals.

Policy solutions involve forcing companies to open their technical details to inspection, including the source code of their products and the designs of their hardware. Huawei and Kaspersky have offered this sort of openness as a way to demonstrate that they are trustworthy. This is not a worthless gesture, and it helps, but it’s not nearly enough. Too many back doors can evade this kind of inspection.

Technical solutions fall into two basic categories, both currently beyond our reach. One is to improve the technical inspection processes for products whose designers provide source code and hardware design specifications, and for products that arrive without any transparency information at all. In both cases, we want to verify that the end product is secure and free of back doors. Sometimes we can do this for some classes of back doors: We can inspect source code — this is how a Linux back door was discovered and removed in 2003 — or the hardware design, which becomes a cleverness battle between attacker and defender.

This is an area that needs more research. Today, the advantage goes to the attacker. It’s hard to ensure that the hardware and software you examine is the same as what you get, and it’s too easy to create back doors that slip past inspection. And while we can find and correct some of these supply-chain attacks, we won’t find them all. It’s a needle-in-a-haystack problem, except we don’t know what a needle looks like. We need technologies, possibly based on artificial intelligence, that can inspect systems more thoroughly and faster than humans can do. We need them quickly.

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.

Security is a lot harder than reliability. We don’t even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can’t trust and that are almost certainly being subverted by governments and criminals around the world. Current security technologies are nowhere near good enough to defend against these increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it’s not going to solve our near-term problems.

At the same time, all of these problems are getting worse as computers and networks become more critical to personal and national security. The value of 5G isn’t for you to watch videos faster; it’s for things talking to things without bothering you. These things — cars, appliances, power plants, smart cities — increasingly affect the world in a direct physical manner. They’re increasingly autonomous, using A.I. and other technologies to make decisions without human intervention. The risk from Chinese back doors into our networks and computers isn’t that their government will listen in on our conversations; it’s that they’ll turn the power off or make all the cars crash into one another.

All of this doesn’t leave us with many options for today’s supply-chain problems. We still have to presume a dirty network — as well as back-doored computers and phones — and we can clean up only a fraction of the vulnerabilities. Citing the lack of non-Chinese alternatives for some of the communications hardware, some are already calling for abandoning attempts to secure 5G against Chinese back doors and instead developing secure American or European alternatives for 6G networks. That’s not nearly enough to solve the problem, but it’s a start.


Re: Jaron Lanier Fixes the Internet

[Note:  This comment comes from friend David Reed.  DLH]

From: “David P. Reed” <dpreed@deepplum.com>
Subject: RE: [Dewayne-Net] Jaron Lanier Fixes the Internet
Date: September 30, 2019 at 2:54:26 PM EDT
To: “Dewayne Hendricks” <dewayne@warpspeed.com>

I have to say that I’ve been getting terribly annoyed at the idea that monitoring user behavior, drawing inferences, and using that to drive users (as subjects) to perform more profitably for others is the same thing as “ownable data”.

Ownership has NOTHING to do with what’s going wrong with Facebook, Google, Amazon, Apple, …

Data of value is primarily not something you typed into the computer.

Data has value as a tool for behavior prediction and control at a fine-grained level, in close to real-time, physically intimately with the user, and without the permission or even conscious awareness of the users.

This is because the customer is NOT, and NEVER is, the user/subject. The user/subject may even be paid. But that is NOT the point.

The point is the targeted prediction and control, delivered to the customers of those companies. It’s not hard to find out who their customers are – they report the revenues on their balance sheet every quarter. You and I are NOT their customers. We aren’t even the product. We are the subjects of their business operations.

This isn’t a secret.

So what does “you should own your data” mean?  Well, data you can buy and sell because you typed it in has nothing at all to do with being predicted or controlled at a fine-grained level. (A very small fraction of that data has any value at all to those companies’ customers.)

And Privacy is not at all monetizable. Except in a fantasy world where everything is reduced to currency valuation. There’s no market on which “privacy” is securitized or traded. So there’s no “pricing mechanism” for being manipulated or predicted, much less for being described as currently lonely or currently persuadable (drunk, for example).

We actually know what Privacy is, and have for years. It is “the right to be left alone”, to not be “bothered with”, to not be stalked or surveilled or gossiped about. Celebrities bemoan the loss of privacy they experience, not because the data they type into computers is leaked, but because photographers take embarrassing pictures and exploit them, or talk to their “friends” about the latest happenings in their marriage. The right of a woman to control access to, and choices made about, her body is a *privacy* right. The right not to have a camera looking at you in the bathroom is a “privacy” right.

So I am incredibly aware that some technologists (like Lanier, but also a large crowd of surveillance centric companies) have been assiduously working to turn the discussion of privacy into one that focuses on “secrecy” of data you enter into a computer, ignoring the entire question of Privacy.

To the extent that Privacy relates to data *at all*, it has to do with how data purporting to be *about me* may be used.  It doesn’t even have to be *correct* data – it can be lies or false inferences made based on bad models or prejudices. It is the exploitation aimed at me that I care about, really.

Now “secrecy” is a tool of limited use in privacy protection. That’s precisely because in most cases privacy is invaded using data you’ve never even created, or created by merely existing in the world (the paparazzi don’t get a picture based on you entering your picture into some computer – they control the camera, you don’t).

So how does “self-monetizing” solve a privacy problem? Only in the dreams of someone who invented a notion of alternate, virtual realities. He’s a brilliant dreamer – and he is pitching a dreamworld, a fantasy, where companies buy data from you, and there is NO other data at all. But in that world, the companies wouldn’t even exist!

Now I love the idea of a company offering a service that is aimed at helping me navigate the modern complexities of life – Apple’s Knowledge Navigator or Negroponte’s AI butler. The key thing here is that preserving *privacy* here is basically done by making the service dedicated to my benefit alone – with the understanding that nothing should be done at all that I would not get mad about. It’s doable.

I might have to pay the butler or for the knowledge navigator. Maybe there is no way to make it “free” – as in giving free “heroin samples” to potential customers, and then subsequently selling the happy customer.

I need to be the *customer*, not the *vendor* of services, not data.

It does me no good to sell the right to watch me for money or to sell the opportunity to trick me into preferring one purchase vs. another, to make me buy.

In my view, Lanier is so incredibly confused about privacy that he is a danger to society.
So is anyone else who says you “should own your own data” (I won’t name names, but there are other celebrity technologists who promote this bad idea, and lots of adoring journalists and pundits who promote it, too).

It’s not the data, stupid. It’s the subject behavior control monetization.


How Apple became a major player in health research

Apple executives Jeff Williams, Kevin Lynch, and Sumbul Desai talk about the Apple Watch’s role in health, from its humble beginnings as a step tracker to the important part Apple may play in future health research.
By Amber Neely
Sep 30 2019

From November 2017 until July 2018, Apple Watch owners were given the opportunity to take part in a voluntary heart study that Apple was conducting. 

The study was the largest arrhythmia study of all time, with nearly 420,000 participants. Not only would the study provide useful information to health researchers, it would also help solidify the Apple Watch as a serious health tool.

Jeff Williams, Kevin Lynch, and Sumbul Desai — Apple’s Chief Operating Officer, Vice President of Technology, and Vice President of Health, respectively — met with The Independent to talk about the Apple Watch’s role in consumer health.

As it turns out, the original goal of the Apple Watch wasn’t to put heart health front and center, and Apple certainly wasn’t expecting to save any lives. The original Apple Watch’s heart rate monitor was designed solely to provide more accurate step tracking than competing devices on the market. However, this changed when Watch owners began writing to Apple.

“The first letter that we got about it saving somebody’s life with just the heart rate monitor, we were surprised, because anybody can go watch the clock and get their heart rate. But then we started getting more and more and we realized we had a huge chance and maybe even an obligation to do more,” explains Williams. 

As it turns out, one of the reasons the Apple Watch is a successful tool for heart health is all its non-health features. Apple has designed a wearable that functions not only as a heart monitor, but also as a phone, a wristwatch, a tool for reading emails and texts, and much more. These features, not the heart monitoring, made the Apple Watch a best seller.

“If you tried to sell a heart rate monitor to alert you to problems, you know, 12 people would buy it,” Williams continues. “So, the people who are wearing it, we get the chance to in some ways ambush them with information about their health, which is what’s allowed us to have such a big impact.”

They go on to say that the medical community is excited that Apple has managed to pull off this sort of success. Watch wearers are able to take a more proactive role in their health. The Apple Watch, after all, has been credited with saving lives.

The research community is excited as well. As it turns out, Watch wearers are eager to take part in research. One of the biggest difficulties in health research is getting a substantial amount of data from participants. The Apple Watch gives users who may not be able to participate in traditional studies the ability to contribute their data in a meaningful way.

“It really dates back to when we launched ResearchKit, very early on. It’s basically some frameworks that allow people to build apps that can conduct studies with everybody who has a phone or watch. It took so much friction out of the research process that it was well-accepted early on.” says Williams.

And, as it turns out, the current hardware is capable of doing quite a bit more. 

“There’s already so much that we can work on. It’s really a matter of choosing our focus areas and asking really great questions that then lead to insightful answers. That’s the journey we’re on. The latest studies around hearing health, for example, women’s health,” says Kevin Lynch. Apple recently brought Cycle, a menstrual tracking function, to Apple Health and Apple Watch. “There’s so much to learn. There are so many areas that we could focus on. And so that’s strategically the most important thing for us: asking where can we make a meaningful contribution?”


What Kind of Problem Is Climate Change?

Knowing the answer might force us toward a real solution. 
By Alex Rosenberg
Sep 30 2019

If the summer heat, followed by Hurricane Dorian, hasn’t convinced you that climate change is real, probably nothing will. Those of us convinced will want to mitigate it if we can. Doing that requires understanding the different kinds of problems climate change presents. They are economic, political and philosophical. The three kinds of problems are inextricably intertwined. That’s one lesson taught by the relatively new discipline of politics, philosophy and economics (PPE).

PPE has been the name for this subject since it was first introduced at Oxford after World War I. Now it’s taught at a hundred or more American universities, combining intellectual resources to come to grips with complex human issues.

To recognize the problems facing any attempt to mitigate climate change, we need to start with a technical term from economics: “public good.”

Put aside the ordinary meaning of these two words. In economic theory, a public good is not a commodity like schools or roads provided to the public by the government. It’s a good with two properties absent in other commodities, including schools and roads. First, a public good is consumed non-rivalrously: No matter how much of it one person consumes, there’s always just as much left for others.

Street lighting is an example: When I consume as much as I want of the nighttime safety it provides, there is still as much left for you. We are not rivals in consumption of a public good. Public schools aren’t public goods in this sense. The more attention your child gets, the less time the teacher has for mine.

Second, a public good is not excludable: There is no way I can consume street lighting without its being available to you at the same time. The only way to exclude you from consumption is to turn it off. But then I can’t consume it. Public schools are excludable goods. Your child can be expelled. So schools are not public goods.

The Paris climate accord set a target of keeping global temperatures from rising more than 1.5 degrees Celsius. That outcome would be a public good. I can’t consume any of this good unless it’s there for you too, and no matter how much of it I consume in personal benefit, that won’t reduce the amount you can consume.

Of course, as with street lighting, some people will benefit more, maybe even much more from a public good, than others. It’s regrettably true that women’s lives are generally more improved by street lighting than men’s lives are. Mitigating climate change isn’t going to benefit everyone equally. But it can’t benefit anyone without benefiting everyone, and no matter how much I benefit, there will be some benefit left for you.

This is where politics and philosophy come in. As with all other public goods, limiting climate change is subject to what is called a prisoner’s dilemma: If the rest of the world’s major polluters get together to curb emissions, the United States doesn’t have to and will still benefit. On the other hand, if China, the European Union, India, Russia and South Korea do nothing, there’s no point in the United States even trying. It can’t solve the problem alone. It looks as if either way, the United States should do nothing to curb its own emissions. If leaders of these other governments reason the same way, the result is likely to be catastrophic weather extremes everywhere.
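The dominant-strategy reasoning above can be made concrete with a toy payoff matrix. In this sketch the numbers are illustrative assumptions chosen only to produce the prisoner’s-dilemma structure, not estimates of real costs or benefits:

```python
# Toy two-bloc climate game: each bloc chooses whether to curb emissions.
# Illustrative numbers: each bloc that curbs creates 4 units of benefit
# for EVERYONE (non-rivalrous, non-excludable), at a private cost of 5.
BENEFIT_PER_CURBER = 4
COST_OF_CURBING = 5

def payoff(i_curb: bool, other_curbs: bool) -> int:
    """My welfare given my choice and the other bloc's choice."""
    curbers = int(i_curb) + int(other_curbs)
    return BENEFIT_PER_CURBER * curbers - (COST_OF_CURBING if i_curb else 0)

# "Do nothing" strictly dominates: whatever the other bloc does,
# defecting pays me more.
assert payoff(False, True) > payoff(True, True)    # free-ride: 4 > 3
assert payoff(False, False) > payoff(True, False)  # don't curb alone: 0 > -1

# Yet mutual defection is worse for both than mutual curbing: the dilemma.
assert payoff(True, True) > payoff(False, False)   # 3 > 0
```

Defecting pays more whatever the other side does, yet mutual defection leaves both blocs worse off than mutual curbing; that gap is precisely what institutions for providing public goods try to close.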

Is there any way to escape the prisoner’s dilemma facing the provision of a public good?

The problem was first noticed by the 17th-century philosopher Thomas Hobbes, seeking the justification of political authority. Hobbes’s question of how to escape anarchy poses a prisoner’s dilemma. The rule of law, he recognized, is non-rivalrously and non-excludably consumed, even for the weakest, the poorest. It’s obvious of course that some laws are better for some people than for others. But Hobbes argued that any laws, even the laws of a tyrannical dictator, no matter how harmful they may be, confer some minimal non-excludable benefit on everyone that we can consume non-rivalrously. 

The enforced rule of law, any law, at least gets us out of the state of nature, where “the life of man is solitary, mean, nasty, brutish and short.” Hobbes argued that the only way to provide this public good is for each of us to surrender all power to the state so that it can compel obedience to the law. Hobbes’s recipe for escaping the prisoner’s dilemma of anarchy never attracted much support. The history of political philosophy from Locke to Rawls is a sequence of proposed alternatives to Hobbes’s strategy. Each sought a basis on which people can credibly bind themselves voluntarily to provide the public good of “law and order.”

Once the philosopher identifies the problem, the political scientist can approach it empirically: Try to identify the circumstances in real life where people have spontaneously solved the problem of providing themselves a public good, in their self-interest and without coercion.

For answering this question, the political scientist Elinor Ostrom won the Nobel Prize that was supposed to go only to economists. She spent a career identifying the conditions, all over the world, including the developing world, under which groups manage to solve the prisoner’s dilemma by voluntarily creating institutions — rules, norms, practices — that every member benefits from, non-rivalrously and non-excludably. In doing so, Ostrom provided a recipe for how to avoid the prisoner’s dilemma that a public good presents.

The ingredients needed are clear: The participants have to agree on who’s in the group; there’s a single set of rules all participants can actually obey; compliance is monitored effectively, with graduated punishments for violation; enforcement and adjudication are affordable; and outside authorities have to allow the participants to obey the rules. Finally, in the long term, the group providing the public good to its members has to be nested in, and authorized by, higher-level groups. These in turn persist when they can provide themselves a different set of non-excludable, non-rivalrously consumed, mutually beneficial rules, norms, laws and institutions.

It’s not rocket science to see how hard it would be for the 200 or so nations of the world to satisfy these conditions. The Paris agreement is a far cry from Ostrom’s recipe. The main obstacle to carrying it out will be the unwillingness to surrender national sovereignty.


The Internet Is Overrun With Images of Child Sexual Abuse. What Went Wrong?

Online predators create and share the illegal material, which is increasingly cloaked by technology. Tech companies, the government and the authorities are no match.
Sep 28 2019

Last year, tech companies reported over 45 million online photos and videos of children being sexually abused — more than double what they found the previous year.

Each image shown here documents a crime. The photos are in a format analysts devised to protect the abused.

Twenty years ago, the online images were a problem; 10 years ago, an epidemic.

Now, the crisis is at a breaking point.

The images are horrific. Children, some just 3 or 4 years old, being sexually abused and in some cases tortured.

Pictures of child sexual abuse have long been produced and shared to satisfy twisted adult obsessions. But it has never been like this: Technology companies reported a record 45 million online photos and videos of the abuse last year.

More than a decade ago, when the reported number was less than a million, the proliferation of the explicit imagery had already reached a crisis point. Tech companies, law enforcement agencies and legislators in Washington responded, committing to new measures meant to rein in the scourge. Landmark legislation passed in 2008.

Yet the explosion in detected content kept growing — exponentially.

An investigation by The New York Times found an insatiable criminal underworld that had exploited the flawed and insufficient efforts to contain it. As with hate speech and terrorist propaganda, many tech companies failed to adequately police sexual abuse imagery on their platforms, or failed to cooperate sufficiently with the authorities when they found it.

Law enforcement agencies devoted to the problem were left understaffed and underfunded, even as they were asked to handle far larger caseloads.

The Justice Department, given a major role by Congress, neglected even to write mandatory monitoring reports, nor did it appoint a senior executive-level official to lead a crackdown. And the group tasked with serving as a federal clearinghouse for the imagery — the go-between for the tech companies and the authorities — was ill equipped for the expanding demands.

A paper recently published in conjunction with that group, the National Center for Missing and Exploited Children, described a system at “a breaking point,” with reports of abusive images “exceeding the capabilities of independent clearinghouses and law enforcement to take action.” It suggested that future advancements in machine learning might be the only way to catch up with the criminals.


Tech Companies Are Quietly Phasing Out a Major Privacy Safeguard

[Note:  This item comes from reader Randall Head.  DLH]

More and more companies are failing to issue transparency reports to tell consumers how much of their information governments have demanded.
Sep 29 2019

TikTok can tell you a lot about internet culture, but the massively popular Chinese-owned social-media app doesn’t seem to know much about Hong Kong protests.

Searches for hashtags like #HongKong do surface some videos of demonstrations, but not anywhere near the volume you’d find on, say, Twitter. You might then wonder—as The Washington Post did in a September piece—whether that reflects interference by TikTok’s Beijing-based parent firm, ByteDance.

On Wednesday, The Guardian reported on leaked documentation indicating that the company did suppress certain videos, specifically about Tiananmen Square, Tibetan independence, and the banned religious group Falun Gong. ByteDance told the paper that the documents in question were no longer in use, but their existence was enough to invite further speculation.

And TikTok isn’t doing one thing that would help its users stop wondering. Unlike older social platforms such as Snapchat, Instagram, and Twitter, TikTok does not post regular transparency reports documenting how often governments around the world have demanded data about its users or the removal of their posts. (The company did not respond to my request for comment.)

Google pioneered the concept of the transparency report in 2010, but it took off after Edward Snowden revealed massive surveillance by the NSA, in 2013. Within the next two years, transparency reporting had spread to almost every large tech firm with a large consumer business—not just Apple, Facebook, and Microsoft, but even more politics-averse corporations such as Amazon and the largest telecom carriers.

Digital-rights advocates routinely point to transparency reports as an essential tool to hold companies accountable for defending their customers when governments ask for their information or the disappearance of their speech.

“That’s the only way we can start to shed light on what’s happening,” Gennie Gebhart, associate director of research at the Electronic Frontier Foundation, told me. The San Francisco digital-rights organization has been rating corporate transparency since 2013 in its annual “Who Has Your Back?” reports.

But these days, companies are turning away from transparency reports. Corporate transactions have shuffled tens of millions of customers to service providers who don’t practice transparency reporting, while other tech firms have quietly dropped the practice or weakened their support for it. And not a single household-name tech firm seems to have adopted this habit since early 2016.

“The momentum has faded,” says Peter Micek, general counsel with Access Now. The digital-rights advocacy group is updating its index of transparency reports, which it last posted in 2016, and this pending revision will document serious stagnation in these disclosures.

The worst rollbacks have happened when companies have merged or sold off large parts of their customer base, leaving the people involved doing business with new management that lacks the old management’s commitment to transparency.

For example, Charter Communications’ 2016 purchase of Time Warner Cable ended a roughly two-year spell of transparency for some 15 million TWC customers. Charter also erased past TWC transparency reports from its website, leaving just a reference to their existence on a TWC customer-help page. (The reports do remain readable on the Internet Archive’s copy of that page.) Charter, now doing business as Spectrum, said in a statement that it follows the law and that customers can consult its privacy policy.

Frontier Communications’ 2016 acquisition of some 2.2 million broadband accounts from Verizon had a similar effect, moving those customers from a provider that started issuing transparency reports in 2014 to one that has yet to post any. Frontier did not answer multiple emails requesting comment. (I also write for Yahoo Finance, a Verizon media property.)

Other firms needed no outside encouragement to drop transparency. eBay, for instance, appears to have posted its last report in 2013 — and like TWC’s farewell filing, it now exists only on the Internet Archive. The company did not answer email queries.

The Internet Archive’s own last transparency report? The year 2015.

“We have not compiled the stats,” the Internet Archive founder Brewster Kahle said in an email. “Not for lack of desire, just lack of time. We hope to get back on it soon.”

Meanwhile, firms that didn’t adopt transparency reports during the post-Snowden stampede seem in no rush now. Take the biometric-security firm Clear, which sells expedited screening at airports, sports stadiums, and other venues. It has yet to produce a transparency report, and its vague privacy policy says it may provide user data “in response to requests by government agencies.” A Clear spokesman declined to speak on the record about the company’s responses to requests for customer data — mainly driver’s-license and Social Security numbers, which federal and state governments already have, and eye and fingerprint biometrics, which they may not — and how many it has fielded. He pointed to a May 2019 CNBC interview in which Clear CEO Caryn Seidman-Becker said, “We’re all about trust,” and added, “It’s really important that people know what they’re opting in for.” But subscribers to Clear’s $179-a-year service won’t know how much government scrutiny they’ve opted in to without transparency reports.

Payment platforms also still tend to shy away from transparency reporting, despite a public scolding on EFF’s blog in June 2018 that led the payments company Stripe to tell Axios it would publish its own disclosure “in the near future.” Since then: $0.00 in change. Stripe, which endorsed this ideal on its corporate blog in 2012, still says it plans to publish a report, but has offered no timetable.