Someone Is Learning How to Take Down the Internet

[Note: Given the events of last week, I thought it was appropriate to post this item from September by Bruce Schneier to the list. DLH]

By Bruce Schneier
Sep 13 2016

Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don’t know who is doing this, but it feels like a large nation state. China or Russia would be my first guesses.

First, a little background. If you want to take a network off the Internet, the easiest way to do it is with a distributed denial-of-service attack (DDoS). Like the name says, this is an attack designed to prevent legitimate users from getting to the site. There are subtleties, but basically it means blasting so much data at the site that it’s overwhelmed. These attacks are not new: hackers do this to sites they don’t like, and criminals have done it as a method of extortion. There is an entire industry, with an arsenal of technologies, devoted to DDoS defense. But largely it’s a matter of bandwidth. If the attacker has a bigger fire hose of data than the defender has, the attacker wins.
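The "bigger fire hose" point is just arithmetic: the attacker's aggregate bandwidth versus the defender's link capacity. A minimal sketch of that comparison, with every number invented for illustration (none are from the article):

```python
# Toy illustration of the DDoS bandwidth arithmetic: once the attackers'
# combined output exceeds the defender's link capacity, legitimate
# traffic is crowded out. All figures below are made up for illustration.

def attack_bandwidth_gbps(num_bots, mbps_per_bot):
    """Aggregate upstream bandwidth of a botnet, in Gbit/s."""
    return num_bots * mbps_per_bot / 1000.0

def defender_overwhelmed(attack_gbps, capacity_gbps):
    """The defender loses once hostile traffic alone fills the pipe."""
    return attack_gbps > capacity_gbps

# 100,000 compromised devices pushing 1 Mbit/s each is 100 Gbit/s of
# attack traffic, enough to saturate a hypothetical 40 Gbit/s link.
flood = attack_bandwidth_gbps(100_000, 1.0)
print(flood)                              # 100.0
print(defender_overwhelmed(flood, 40.0))  # True
```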

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they’re used to seeing. They last longer. They’re more sophisticated. And they look like probing. One week, the attack would start at a particular level and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, as if the attacker were looking for the exact point of failure.

The attacks are also configured in such a way as to see what the company’s total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they’ve got to defend themselves. They can’t hold anything back. They’re forced to demonstrate their defense capabilities for the attacker.

I am unable to give details, because these companies spoke with me under condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign is the registry for many popular top-level Internet domains, like .com and .net. If it goes down, there’s a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its reports don’t have the level of detail I heard from the companies I spoke with, the trends are the same: “in Q2 2016, attacks continued to become more frequent, persistent, and complex.”

There’s more. One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.

Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering, but it’s not something companies normally do to one another. Furthermore, the size and scale of these probes, and especially their persistence, point to state actors. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the event of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, in order to map their capabilities.


As Artificial Intelligence Evolves, So Does Its Criminal Potential

Oct 23 2016

Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password.

Except it’s not your mother. The voice on the other end of the phone call just sounds deceptively like her.

It is actually a computer-synthesized voice, a tour-de-force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.

Such a situation is still science fiction — but just barely. It is also the future of crime.

The software components necessary to make such masking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, the Alphabet subsidiary known for a program that has bested some of the top human players in the board game Go, announced that it had designed a program that “mimics any human voice and which sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”

The irony, of course, is that this year the computer security industry, with $75 billion in annual revenue, has started to talk about how machine learning and pattern recognition techniques will improve the woeful state of computer security.

But there is a downside.

“The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of “Future Crimes.” He added, “This is not about Matthew Broderick hacking from his basement,” a reference to the 1983 movie “WarGames.”

The alarm about malevolent use of advanced artificial intelligence technologies was sounded earlier this year by James R. Clapper, the director of National Intelligence. In his annual review of security, Mr. Clapper underscored the point that while A.I. systems would make some things easier, they would also expand the vulnerabilities of the online world.

The growing sophistication of computer criminals can be seen in the evolution of attack tools like the widely used malicious program known as Blackshades, according to Mr. Goodman. The author of the program, a Swedish national, was convicted last year in the United States.

The system, which was sold widely in the computer underground, functioned as a “criminal franchise in a box,” Mr. Goodman said. It allowed users without technical skills to deploy computer ransomware or perform video or audio eavesdropping with a mouse click.

The next generation of these tools will add machine learning capabilities that have been pioneered by artificial intelligence researchers to improve the quality of machine vision, speech understanding, speech synthesis and natural language understanding. Some computer security researchers believe that digital criminals have been experimenting with the use of A.I. technologies for more than half a decade.

That can be seen in efforts to subvert the internet’s omnipresent Captcha — Completely Automated Public Turing test to tell Computers and Humans Apart — the challenge-and-response puzzle invented in 2003 by Carnegie Mellon University researchers to block automated programs from stealing online accounts.

Both “white hat” artificial intelligence researchers and “black hat” criminals have been deploying machine vision software to subvert Captchas for more than half a decade, said Stefan Savage, a computer security researcher at the University of California, San Diego.


Inside The Strange, Paranoid World Of Julian Assange

[Note: This item comes from friend David Isenberg. DLH]

The WikiLeaks founder is out to settle a score with Hillary Clinton and reassert himself as a player on the world stage, says BuzzFeed News special correspondent James Ball, who worked for Assange at WikiLeaks.
By James Ball
Oct 23 2016

On 29 November 2010, then US secretary of state Hillary Clinton stepped out in front of reporters to condemn the release of classified documents by WikiLeaks and five major news organisations the previous day.

WikiLeaks’ release, she said, “puts people’s lives in danger”, “threatens our national security”, and “undermines our efforts to work with other countries”.

“Releasing them poses real risks to real people,” she noted, adding, “We are taking aggressive steps to hold responsible those who stole this information.”

Julian Assange watched that message on a television in the corner of a living room in Ellingham Hall, a stately home in rural Norfolk, around 120 miles away from London.

I was sitting around 8ft away from him as he did so, the room’s antique furniture and rugs strewn with laptops, cables, and the mess of a tiny organisation orchestrating the world’s biggest news story.

Minutes later, the roar of a military jet sounded sharply overhead. I looked around the room and could see everyone thinking the same thing, but no one wanting to say it. Surely not. Surely? Of course, the jet passed harmlessly overhead – Ellingham Hall is not far from a Royal Air Force base – but such was the pressure, the adrenaline, and the paranoia in the room around Assange at that time that nothing felt impossible.

Spending those few months at such close proximity to Assange and his confidants, and experiencing first-hand the pressures exerted on those there, have given me a particular insight into how WikiLeaks has become what it is today.

To an outsider, the WikiLeaks of 2016 looks totally unrelated to the WikiLeaks of 2010. Then it was a darling of many of the liberal left, working with some of the world’s most respected newspapers and exposing the truth behind drone killing, civilian deaths in Afghanistan and Iraq, and surveillance of top UN officials.

Now it is the darling of the alt-right, revealing hacked emails seemingly to influence a presidential contest, claiming the US election is “rigged”, and descending into conspiracy. Just this week on Twitter, it described the deaths by natural causes of two of its supporters as a “bloody year for WikiLeaks”, and warned of media outlets “controlled by” members of the Rothschild family – a common anti-Semitic trope.

The questions asked about the organisation and its leader are often the wrong ones: How has WikiLeaks changed so much? Is Julian Assange the catspaw of Vladimir Putin? Is WikiLeaks endorsing a presidential candidate who has been described as racist, misogynistic, xenophobic, and more?

These questions miss a broader truth: Neither Assange nor WikiLeaks (and the two are virtually one and the same) has changed; the world they operate in has. WikiLeaks is in many ways the same bold, reckless, paranoid creation it once was, but how that manifests, and who cheers it on, has changed.


That Map of the Internet Failing You Saw on Friday Didn’t Tell the Story at All (and Here’s What Really Did Happen)

By Glenn Fleishman
Oct 23 2016

It was a convenient picture, and one that I found compelling, too: a heatmap showing outages across the Internet due to an Internet of Things (IoT) botnet attack that was crippling a private Internet infrastructure company’s ability to respond to requests. The map apparently showed Level 3’s network; Level 3 is one of the largest network providers, transiting data among networks large and small. Congestion or an outage there would degrade everyone’s ability to reach certain networks.

Except the map we all shared (myself included) didn’t show the status of Level 3’s network at all; its network and others were not under attack. Sites weren’t unreachable because the Internet was overloaded. I’ll explain below what actually happened on Friday.

The map was from Downdetector, which continues today (Sunday, October 23) to show the same pattern of outages for Level 3.

Downdetector doesn’t probe routes and check for connectivity at network interchanges, as other Internet health maps do, like Internet Traffic Report, Keynote’s Internet Health Report, and Akamai’s Real-Time Web Monitor. Rather, it compiles reports of outages and plots them on a map.

    Downdetector collects status reports from a series of sources. Through a realtime analysis of this data, our system is able to automatically determine outages and service interruptions at a very early stage. One of the sources that we analyse are reports on Twitter.

The number of reports is tiny. Flip to the chart view instead of the map view, and you see that dozens of reports result in a map that looks like major parts of the U.S. Internet are unreachable.
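Downdetector's method, counting user complaints rather than probing the network, can be sketched in a few lines. This is my own toy model, not Downdetector's actual system; the threshold and report counts are invented, and they illustrate why a few dozen reports can light up a region on a map even though nothing about the network itself was measured:

```python
from collections import Counter

# Toy sketch of complaint-driven outage detection: aggregate user
# reports by region and flag any region whose report count crosses a
# threshold. The threshold here is arbitrary; a real service would
# baseline it per service and per region.

REPORT_THRESHOLD = 10  # invented value

def flag_outages(reports, threshold=REPORT_THRESHOLD):
    """Return the set of regions with at least `threshold` reports."""
    counts = Counter(reports)
    return {region for region, n in counts.items() if n >= threshold}

# 30 complaints total paint two regions as "down".
reports = ["Seattle"] * 12 + ["Boston"] * 3 + ["Chicago"] * 15
print(flag_outages(reports))  # flags Seattle and Chicago, not Boston
```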

Some appearances of this chart went so far as to attribute the map to Level 3, despite Downdetector’s disclaimer:

    Downdetector and its parent company Serinus42 are not associated with any service, corporation or organisation that we monitor.

What appeared to confuse many reporters and editors working on this story into using this map, and even attributing it to Level 3, is that the Downdetector result was shared early by those trying to figure out what was going on; it appears at the top of Google results for “Level 3 outages”; and the labeling of the map, which uses Level 3’s corporate mission statement and logo, makes it appear official. One clue it wasn’t? The site shows Level 3 (with a space between Level and 3 in all its official uses) as “Level3”, without a space. (I’ll be surprised if Downdetector doesn’t get a demand from Level 3 and others to display more prominently a disclaimer about its unofficial status.)

Level 3 doesn’t offer an outage map, so it doesn’t appear in Google results; and the map confirmed people’s expectations of how the Internet was behaving.

Level 3 went so far as to host a Periscope session with its chief security officer to go through the details, because the map was being used so widely.

What the map showed is that people across the U.S. were having trouble reaching popular sites, some of which rely on Level 3. But what really happened had nothing to do with “routing”—getting data packets from one point to another on the Internet. Rather, it was about phone directories.
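The "phone directory" in that analogy is DNS: the attack on Dyn impaired name lookups, not the carrying of packets. A toy model makes the distinction concrete; all hostnames and addresses below are invented:

```python
# Toy model of the distinction between name resolution (the "phone
# directory", i.e. DNS) and routing (actually carrying packets).
# When the directory fails, a site can be up and perfectly routable
# yet still unreachable by name, because nobody can look up its number.

DIRECTORY = {
    "example-news.test": "192.0.2.10",
    "example-shop.test": "192.0.2.20",
}

def resolve(hostname, directory_online=True):
    """Return the IP for a hostname, or None if the lookup fails."""
    if not directory_online:
        return None  # the server is fine; we just can't find its number
    return DIRECTORY.get(hostname)

print(resolve("example-news.test"))                          # 192.0.2.10
print(resolve("example-news.test", directory_online=False))  # None
```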


Re: Hacked Cameras, DVRs Powered Today’s Massive Internet Outage

[Note: This comment comes from a reader of Dave Farber’s IP List. DLH]

From: Brett Glass <>
Date: Sunday, October 23, 2016
Subject: [Dewayne-Net] Hacked Cameras, DVRs Powered Today’s Massive Internet Outage

Dave, and everyone:

While my small ISP couldn’t do much about the massive denial of service attacks that plagued the Internet this week (except to answer the phone calls from frustrated customers who could not use Twitter, Disqus, and other services which relied on Dyn as a DNS provider), we could at least make sure that we were not contributing to the attacks — and we did.

We blocked incoming attacks by the Mirai worm (which was creating the botnet that executed the DDoS attacks), monitored our network for vulnerable camera systems that were attempting to participate in it (there was only one — a cheap, Chinese DVR rebranded and resold by a company in New Jersey to one of our rural customers), and set up a honeypot to capture the code.

What was embarrassing (or should have been) was that the code for the worm was simpler and easier to analyze than that of the infamous Morris worm, released on the Internet in 1988. It simply brute-forced certain vulnerable systems over Telnet, using default passwords, and then wormed its way into the affected systems via the shell. No need for “stack smashing” exploits or fancy, hand-assembled machine code; the systems were such sitting ducks that none of that was necessary to turn them into bots.
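Mirai's propagation logic really was that simple: walk a short hard-coded table of factory username/password pairs and try each one over Telnet. A defanged sketch of just the credential-table check, with no networking; the pairs are illustrative examples of the kind of defaults shipped on vulnerable DVRs and cameras, not Mirai's actual list:

```python
# Defanged sketch of Mirai-style default-credential checking. The worm
# carried a short hard-coded table of factory logins and tried each one
# against Telnet (port 23). Here we model only the table lookup; the
# credential pairs are illustrative, not the worm's real dictionary.

FACTORY_DEFAULTS = [
    ("root", "root"),
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
]

def is_factory_default(username, password):
    """True if the device still accepts a known factory login."""
    return (username, password) in FACTORY_DEFAULTS

# A device whose owner never changed the shipped password is a sitting
# duck; one with a unique password is safe from this worm, at least.
print(is_factory_default("root", "root"))      # True
print(is_factory_default("root", "Xk9!vq2#"))  # False
```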

The owner of the infected DVR had no idea that he’d bought a vulnerable piece of equipment, one for which software updates were not available and whose security holes could not be closed — only shielded from outside attacks via a firewall and VPN. He was incredulous that anyone would even be ALLOWED to sell a device that insecure, or that the FCC — via its unwise and illegal “network neutrality” regulations — would require ISPs like me to leave them exposed to attacks by default.

As an ISP, an engineer, and an embedded system developer, all I can say is, “I told you so.”

–Brett Glass

Hacked Cameras, DVRs Powered Today’s Massive Internet Outage
By Brian Krebs
Oct 21 2016

Should We See Everything a Cop Sees?

Body cameras have been promoted as a solution to police misconduct. But the strange two-year saga of Seattle shows just how complicated total transparency can be.
Oct 18 2016

On his first day on the job in the Seattle Police Department, Mike Wagers was invited to an urgent meeting about transparency. It was July 28, 2014, little more than a week after Eric Garner was killed on Staten Island, less than two weeks before Michael Brown was killed in Ferguson, Mo., and police departments around the country were facing a new era of public scrutiny. Wagers, who has a Ph.D. in criminal justice from Rutgers, was the Seattle department’s new chief operating officer, a 42-year-old civilian in jeans and square-rimmed glasses. He’d left his wife and two kids in Virginia and come alone to Seattle, a city he didn’t know — where it rained but cultural norms, he’d read, didn’t allow you to use an umbrella — because the job was what he called “the chance of a lifetime.” Seattle was the first big-city police department in a decade to have come under what is known as a consent decree — police reform by federal fiat — after a string of violent police actions against black, Latino and Native American people were caught on camera in 2009 and 2010. Wagers and his new boss, Chief Kathleen O’Toole, herself just arrived in Seattle, would use the best new thinking and the best new technology to lead the turnaround. And then Wagers would go home.

O’Toole’s eighth-floor conference room, which has views of City Hall and Elliott Bay and the snow-capped Olympic Mountains, was packed with top police and city officials. All eyes were on a lawyer from the city attorney’s office named Mary Perry. A former naval officer in her 60s, Perry was small and soft-spoken and favored pearls, and she was the city’s unrivaled expert in the seemingly mundane intricacies of the state’s Public Records Act. She was briefing the room on the potential fallout from a landmark case she had just argued and lost before the Washington State Supreme Court. The case stemmed from a series of public-records requests by a reporter for the local television station KOMO, Tracy Vedder, who began filing them after the same high-profile incidents that would lead to the consent decree. She asked for user manuals to the department’s new system of in-car dashboard cameras, then for lists of dashcam recordings, then for some of the recordings themselves. After the department denied every one of them, Vedder sued it for violating the Public Records Act.

The act, which dates to 1972, when governments ran on paper and our modern torrent of electronic data was unimaginable, is one of the strongest in the country. Washington State agencies cannot deny requests for records because the requester is anonymous or the request is too broad, nor can they deny requests simply in order to protect an individual’s privacy; instead, agencies must redact only the details deemed sensitive under state code — for example, some addresses, sometimes the face of a minor — and disclose the rest. Before the court, Perry had argued that a different law, the state’s Privacy Act, which allows departments to withhold recordings until related criminal or civil cases are resolved, should take precedence and the Seattle Police should be allowed to broadly deny Vedder’s requests until the relevant statute of limitations ran out. The court disagreed.

As Perry now told Wagers and the officials gathered in the chief’s conference room, Seattle and other departments across the state were operating in a new reality. Only in cases under actual, pending litigation could the police withhold video footage from the people. This presented several problems. The first was logistical and financial: Seattle Police were sitting on more than 1.5 million individual dashcam and surveillance videos, or about 300,000 hours and 350 terabytes total. Before releasing any footage, someone in the department had to review and redact it in accordance with the specific privacy exemptions the state code did have. The process was manual, a painstaking, frame-by-frame ordeal. By one estimate, 169 people would have to work for a year just to fulfill the department’s existing video requests, and the department added 2,000 new video clips daily. Perry feared that the new flood of data, especially but not exclusively video, could bankrupt Seattle if someone requested it all. “It’s like being on the Titanic,” she later told me, “and you’ve got a teaspoon to bail.”
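The 169-person estimate is roughly consistent with back-of-the-envelope arithmetic: 300,000 hours of footage against a standard work year. The work-year length and review speed below are my assumptions, not the article's; only the footage total comes from the text:

```python
# Rough sanity check on the redaction backlog, under two assumptions of
# mine: a 2,000-hour work year, and frame-by-frame review running a bit
# slower than real time. The footage total (300,000 hours) is from the
# article; everything else is assumed for illustration.

FOOTAGE_HOURS = 300_000   # from the article
WORK_YEAR_HOURS = 2_000   # assumed: ~50 weeks x 40 hours
REVIEW_SLOWDOWN = 1.1     # assumed: redaction takes 1.1x footage length

person_years = FOOTAGE_HOURS * REVIEW_SLOWDOWN / WORK_YEAR_HOURS
print(round(person_years))  # 165, in the ballpark of the article's 169
```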


Detroit incinerator is hotspot for health problems, environmentalists claim

The country’s biggest trash-burning facility has been issued with a notice to sue, with local residents complaining of the bad smell and pollution it produces
By Ryan Felton in Detroit
Oct 23 2016

At the intersection of two highways just outside downtown Detroit, a hulking relic of the city’s past looms over the skyline: the largest municipal trash incinerator in the US. It’s a facility that has raised concerns of nearby residents since its construction in the 1980s.

And some days, it stinks.

“The odors, if you ride I-94, you get this foul, rotten egg smell,” said Sandra Turner-Handy, who lives about three miles from the facility.

The 59-year-old said her son, who used to work a block away from the incinerator, found the smell “constant”. Her granddaughter developed asthma while attending a school near the incinerator, but hasn’t used an inhaler since she graduated and moved away.

The persistent odor and emission of other polluting substances are among 40 alleged Clean Air Act and state violations that have been logged against the company that owns the facility, Detroit Renewable Energy, since March 2015, according to a notice to sue by the Great Lakes Environmental Law Center.

“It’s the things that you can’t smell that are the most harmful,” said Turner-Handy. “And how do residents report something that they can’t smell?”

In 2015, the incinerator burned over 650,000 tons of garbage, according to the notice. And since the beginning of that year, the incinerator has been fined for persistently violating an earlier agreement with the state.

The law center also said in the filing that the incinerator presents a clear example of an environmental justice problem, as a majority of the trash burnt at the facility is imported from outlying communities, which pay $10 per ton less than Detroit to dispose of garbage.

“In short, Detroit is subsidizing other communities throughout the State of Michigan, the Midwest, and Canada to dispose of its garbage at the Incinerator,” the filing said, with the incinerator “located in a neighborhood that is composed mostly of low-income people of color and is heavily overburdened by air pollution”.

There’s a long-running debate over whether incineration is better for the environment than landfills. But communities located near incinerators have long lamented that they bear the brunt of the adverse impacts on air quality.

Citing statistics from the EPA, the center said 7,280 residents live within one mile of the incinerator, of whom 60% live below the federal poverty line and 87% are people of color. An EPA report also found that the area is a “hotspot for respiratory related health impacts when compared to other Michigan communities”, according to the law center.

The notice to sue also names Michigan’s governor, Rick Snyder, along with the state and federal environment agencies. The agencies have 60 days to commence an enforcement action, the law center said, and if neither pursues any, a lawsuit may be filed on behalf of residents to enforce the Clean Air Act.

Detroit Renewable Energy, which says the facility provides energy to more than 140 buildings in the downtown and midtown neighborhoods, downplayed the litany of claims in the notice of intent.

In a statement, the company said that it has invested approximately $6m over the last few years to “improve odor management” at the facility. The company said it employs nearly 300 residents, operates a “sophisticated” waste-to-energy facility, and “places the highest priority on complying with the strict and complex requirements established by the EPA and the state environmental department.

“In short, we have done our part,” the company said. “Any claim to the contrary would simply be false.”

The law center says otherwise.