[Note: I thought it would be timely to post this article from back in Aug 2016 that I missed back then. DLH]
Building a new Tor that can resist next-generation state surveillance
Tor is an imperfect privacy platform. Ars meets the researchers trying to replace it.
By J.M. PORUP
Aug 31 2016
Since Edward Snowden stepped into the limelight from a hotel room in Hong Kong three years ago, use of the Tor anonymity network has grown massively. Journalists and activists have embraced the anonymity the network provides as a way to evade the mass surveillance under which we all now live, while citizens in countries with restrictive Internet censorship, like Turkey or Saudi Arabia, have turned to Tor in order to circumvent national firewalls. Law enforcement has been less enthusiastic, worrying that online anonymity also enables criminal activity.
Tor’s growth in users has not gone unnoticed, and today the network first dubbed “The Onion Router” is under constant strain from those wishing to identify anonymous Web users. The NSA and GCHQ have been studying Tor for a decade, looking for ways to penetrate online anonymity, at least according to documents leaked by Snowden. In 2014, the US government paid Carnegie Mellon University to run a series of poisoned Tor relays to de-anonymise Tor users. A 2015 research paper outlined an attack effective, under certain circumstances, at decloaking Tor hidden services (now rebranded as “onion services”). Most recently, 110 poisoned Tor hidden service directories were discovered probing .onion sites for vulnerabilities, most likely in an attempt to de-anonymise both the servers and their visitors.
Cracks are beginning to show; a 2013 analysis by researchers at the US Naval Research Laboratory (NRL), who helped develop Tor in the first place, concluded that “80 percent of all types of users may be de-anonymised by a relatively moderate Tor-relay adversary within six months.”
Despite this conclusion, the lead author of that research, Aaron Johnson of the NRL, tells Ars he would not describe Tor as broken—the issue is rather that it was never designed to be secure against the world’s most powerful adversaries in the first place.
“It may be that people’s threat models have changed, and it’s no longer appropriate for what they might have used it for years ago,” he explains. “Tor hasn’t changed, it’s the world that’s changed.”
Tor’s weakness to traffic analysis attacks is well-known. The original design documents highlight the system’s vulnerability to a “global passive adversary” that can see all the traffic both entering and leaving the Tor network. Such an adversary could correlate that traffic and de-anonymise every user.
But as the Tor project’s cofounder Nick Mathewson explains, the problem of “Tor-relay adversaries” running poisoned nodes means that a theoretical adversary of this kind is not the network’s greatest threat.
“No adversary is truly global, but no adversary needs to be truly global,” he says. “Eavesdropping on the entire Internet is a several-billion-dollar problem. Running a few computers to eavesdrop on a lot of traffic, a selective denial of service attack to drive traffic to your computers, that’s like a tens-of-thousands-of-dollars problem.”
At the most basic level, an attacker who runs two poisoned Tor nodes—one entry, one exit—is able to analyse traffic and thereby identify the tiny, unlucky percentage of users whose circuit happened to cross both of those nodes. At present the Tor network offers, out of a total of around 7,000 relays, around 2,000 guard (entry) nodes and around 1,000 exit nodes. So the odds of such an event happening are one in two million (1/2000 x 1/1000), give or take.
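That back-of-the-envelope figure can be checked with a few lines of arithmetic. This is only a sketch of the simplified model in the paragraph above: it assumes relays are chosen uniformly at random, whereas real Tor weights selection by bandwidth, so the node counts here stand in for that detail.

```python
# Simplified odds that a single Tor circuit uses one specific poisoned
# entry (guard) node AND one specific poisoned exit node, assuming
# uniform random relay selection (real Tor weights by bandwidth).
guard_nodes = 2000   # approximate number of guard (entry) relays
exit_nodes = 1000    # approximate number of exit relays

p_circuit = (1 / guard_nodes) * (1 / exit_nodes)
print(f"Odds per circuit: about 1 in {round(1 / p_circuit):,}")
```

Under these assumptions the product works out to one in two million per circuit, matching the figure quoted above.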
But, as Bryan Ford, professor at the Swiss Federal Institute of Technology in Lausanne (EPFL), who leads the Decentralised/Distributed Systems (DeDiS) Lab, explains: “If the attacker can add enough entry and exit relays to represent, say, 10 percent of Tor’s total entry-relay and exit-relay bandwidth respectively, then suddenly the attacker is able to de-anonymise about one percent of all Tor circuits via this kind of traffic analysis (10 percent x 10 percent).”
“Given that normal Web-browsing activity tends to open many Tor circuits concurrently (to different remote websites and HTTP servers) and over time (as you browse many different sites),” he adds, “this means that if you do any significant amount of Web browsing activity over Tor, and eventually open hundreds of different circuits over time, you can be virtually certain that such a poisoned-relay attacker will trivially be able to de-anonymise at least one of your Tor circuits.”
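Ford's argument can be made concrete with the standard "at least one success" formula: if each circuit is independently compromised with probability p, then over n circuits the chance that at least one is de-anonymised is 1 − (1 − p)^n. The sketch below uses his 10-percent figures; the circuit counts are illustrative, not from the article.

```python
# With 10% of entry bandwidth and 10% of exit bandwidth, each circuit
# is compromised with probability 0.10 * 0.10 = 0.01 (one percent).
# Over n independent circuits, P(at least one compromised) = 1 - (1 - p)^n.
p = 0.10 * 0.10

for n in (10, 100, 500):  # illustrative circuit counts
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:4d} circuits -> {at_least_one:.1%} chance of >=1 de-anonymised")
```

Even at one percent per circuit, a user who opens a few hundred circuits while browsing faces a near-certain chance that at least one of them is compromised, which is the "virtually certain" outcome Ford describes.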
For a dissident or journalist worried about a visit from the secret police, de-anonymisation could mean arrest, torture, or death.
These known weaknesses have prompted academic research into how Tor could be strengthened or even replaced by some new anonymity system. The priority for most researchers has been to find better ways to prevent traffic analysis. While a new anonymity system might be equally vulnerable to adversaries running poisoned nodes, better defences against traffic analysis would make those compromised relays much less useful and significantly raise the cost of de-anonymising users.
The biggest hurdle? Despite the caveats mentioned here, Tor remains one of the better solutions for online anonymity, supported and maintained by a strong community of developers and volunteers. Deploying and scaling something better than Tor in a real-world, non-academic environment is no small feat.