The Push for Net Neutrality Arose From Lack of Choice

Feb 25 2015

The case for strong government rules to protect an open Internet rests in large part on a perceived market failure — the lack of competition for high-speed Internet service into American homes.

The Federal Communications Commission is expected to adopt on Thursday utility-style rules to ensure so-called net neutrality, prohibiting practices like offering pay-to-play fast lanes on the Internet. A legislative response by Republicans on Capitol Hill has stalled out.

The F.C.C.’s approach makes sense, proponents say, because for genuine high-speed Internet service most American households now have only one choice, and most often it is a cable company.

“For the moment, cable has won the high-speed Internet market,” said Susan Crawford, co-director of the Berkman Center for Internet and Society at Harvard Law School, and a former adviser to the Obama administration.

The new rules will not ensure competition from new entrants, ranging from next-generation wireless technology to ultrahigh-speed networks built by municipalities. Instead, strong regulation is intended to prevent the dominant broadband suppliers from abusing their market power.

Technology, of course, can change quickly and unpredictably. So, analysts say, it is impossible to predict what the competitive landscape might look like in several years, or a decade from now.

“But we are very unlikely to see any kind of broad-scale, national competitor to the incumbents in the near future,” said Kevin Werbach, a former F.C.C. counsel and an associate professor at the Wharton School of the University of Pennsylvania.

Tom Wheeler, chairman of the commission, described the challenge in a speech last year. “The underpinning of broadband policy today is that competition is the most effective tool for driving innovation, investment and consumer and economic benefits,” he said. “Unfortunately, the reality we face today is that as broadband increases, competitive choice decreases.”

Judging by lower-speed services and mobile access, there is ample competition. But Mr. Wheeler’s analysis rests on defining high-speed service as download speeds of 25 megabits per second or higher. He terms the 25-megabit threshold “ ‘table stakes’ in 21st-century communications,” at a time when households are increasingly using online connections to download movies and music. At that level, 55 percent of consumers have only one choice of provider, according to the F.C.C.

Last month, the commission redefined basic broadband service, adopting the 25-megabit standard, up from four megabits. The move was opposed by the two Republican commissioners, including Ajit Pai, who says there is a “very competitive marketplace.”

With or without the new net neutrality rules, cable broadband faces numerous competitors. They include upgraded versions of the DSL, or digital subscriber line, technology offered by most telephone companies; next-generation wireless service; Internet access from low-orbit satellites; and very-high-speed fiber optic connections to homes.

Each has promise, analysts say, but also limitations. The telecommunications companies have employed a variety of techniques to increase the performance of DSL and have made progress. But cable remains a more capable technology, and keeps advancing.

“The gap between cable and DSL is getting larger,” said Craig Moffett, a senior analyst at MoffettNathanson Research.

Mobile wireless services are improving rapidly. But even if high speeds and steady transmission could be achieved, analysts say, the cost to consumers on metered data plans would make them inordinately expensive for households streaming movies on Netflix, for example. Mobile wireless is for data sipping, not gulping.
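To make the “sipping, not gulping” point concrete, here is a hypothetical back-of-envelope comparison of metered mobile data against flat-rate cable for a streaming household. Every number below (data per hour of video, viewing hours, per-gigabyte price, the flat cable rate) is an illustrative assumption, not a figure from the article or from any carrier:

```python
# Back-of-envelope comparison: streaming video over a metered mobile plan
# versus flat-rate cable broadband. All rates and prices are illustrative
# assumptions for the sake of the arithmetic.

GB_PER_HOUR_HD = 3.0        # assumed data use for HD streaming, GB per hour
HOURS_PER_MONTH = 60        # assumed household viewing: about 2 hours per day
MOBILE_PRICE_PER_GB = 10.0  # assumed metered mobile rate, dollars per GB
CABLE_FLAT_RATE = 60.0      # assumed flat monthly cable broadband bill, dollars

data_used = GB_PER_HOUR_HD * HOURS_PER_MONTH   # GB consumed per month
mobile_cost = data_used * MOBILE_PRICE_PER_GB  # what a metered plan would bill

print(f"Monthly streaming data: {data_used:.0f} GB")
print(f"Metered mobile cost:    ${mobile_cost:,.0f}")
print(f"Flat-rate cable cost:   ${CABLE_FLAT_RATE:,.0f}")
```

Under these assumptions the metered bill comes out to roughly thirty times the flat cable rate, which is the order-of-magnitude gap analysts have in mind when they call mobile data plans inordinately expensive for household streaming.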


Why 40-Year-Old Tech Is Still Running America’s Air Traffic Control

Feb 24 2015

On Friday, September 26, 2014, a telecommunications contractor named Brian Howard woke early and headed to Chicago Center, an air traffic control hub in Aurora, Illinois, where he had worked for eight years. He had decided to get stoned and kill himself, and as his final gesture he planned to take a chunk of the US air traffic control system with him.

Court records say Howard entered Chicago Center at 5:06 am and went to the basement, where he set a fire in the electronics bay, sliced cables beneath the floor, and cut his own throat. Paramedics saved Howard’s life, but Chicago Center, which controls air traffic above 10,000 feet for 91,000 square miles of the Midwest, went dark. Airlines canceled 6,600 flights; air traffic was interrupted for 17 days. Howard had wanted to cause trouble, but he hadn’t anticipated a disruption of this magnitude. He had posted a message to Facebook saying that the sabotage “should not take a large toll on the air space as all comms should be switched to the alt location.” It’s not clear what alt location Howard was talking about, because there wasn’t one. Howard had worked at the center for nearly a decade, and even he didn’t know that.

At any given time, around 7,000 aircraft are flying over the United States. For the past 40 years, the same computer system has controlled all that high-altitude traffic—a relic of the 1970s known as Host. The core system predates the advent of the Global Positioning System, so Host uses point-to-point, ground-based radar. Every day, thousands of travelers switch their GPS-enabled smartphones to airplane mode while their flights are guided by technology that predates the Speak & Spell. If you’re reading this at 30,000 feet, relax—Host is still safe, in terms of getting planes from point A to point B. But it’s unbelievably inefficient. It can handle a limited amount of traffic, and controllers can’t see anything outside of their own airspace—when they hand off a plane to a contiguous airspace, it vanishes from their radar.

The FAA knows all that. For 11 years the agency has been limping toward a collection of upgrades called NextGen. At its core is a new computer system that will replace Host and allow any controller, anywhere, to see any plane in US airspace. In theory, this would enable one air traffic control center to take over for another with the flip of a switch, as Howard seemed to believe was already possible. NextGen isn’t vaporware; that core system was live in Chicago and the four adjacent centers when Howard attacked, and this spring it’ll go online in all 20 US centers. But implementation has been a mess, with a cascade of delays, revisions, and unforeseen problems. Air traffic control can’t do anything as sophisticated as Howard thought, and unless something changes about the way the FAA is managing NextGen, it probably never will.


This technology is complicated and novel, but that isn’t the problem. The problem is that NextGen is a project of the FAA. The agency is primarily a regulatory body, responsible for keeping the national airspace safe, and yet it is also in charge of operating air traffic control, an inherent conflict that causes big issues when it comes to upgrades. Modernization, a struggle for any federal agency, is practically antithetical to the FAA’s operational culture, which is risk-averse, methodical, and bureaucratic. Paired with this is the lack of anything approximating market pressure. The FAA is the sole consumer of the product; it’s a closed loop.

The first phase of NextGen is to replace Host with the new computer system, the foundation for all future upgrades. The FAA will finish the job this spring, five years late and at least $500 million over budget. Lockheed Martin began developing the software for it in 2002, and the FAA projected that the transition from Host would be complete by late 2010. By 2007, the upgraded system was sailing through internal tests. But once installed, it was frighteningly buggy. It would link planes to flight data for the wrong aircraft, and sometimes planes disappeared from controllers’ screens altogether. As timelines slipped and the project budget ballooned, Lockheed churned out new software builds, but unanticipated issues continued to pop up. As recently as April 2014, the system crashed at Los Angeles Center when a military U-2 jet entered its airspace—the spy plane cruises at 60,000 feet, twice the altitude of commercial airliners, and its flight plan caused a software glitch that overloaded the system.


Re: The Supposedly Superfast pCell Network Gets Its First Big Test In San Francisco

[Note: This comment comes from friend Dave Burstein. DLH]

From: Dave Burstein <>
Date: February 24, 2015 at 21:37:51 EST
Subject: Artemis is totally unproven and highly dubious

Dewayne, David

Since Dewayne and some reporters have picked up something that appears to me to be hype, I thought I’d write a note. Until Steve Perlman presents far more convincing evidence, I don’t think there’s anything here worth writing about except to debunk.

I’d love to be wrong, but Artemis pCell looks to be a prototype MU MIMO system that is almost certainly not going to deliver what Perlman has promised. I’ve looked as closely as his limited disclosures allow and have reported expert opinion.

Good engineers tell me MU MIMO has an extraordinary future. Folks like Tom Marzetta (Bell Labs), Arogyaswami Paulraj (Stanford), and Henry Samueli (Broadcom) are very optimistic about what it can deliver a few years from now. Samueli has begun work on chips to drive what will probably be an array of 50 small antennas.

Once a century a Ramanujan comes along and proves things the best experts never imagined. Maybe Antonio Forenza of Artemis is that good. So far, every engineer I’ve discussed this with believes it would take many, many unexpected breakthroughs to deliver what Artemis claims.

Perlman is an inspiring lecturer, who learned how to construct a reality from a master, Steve Jobs. He may have the best reality distortion field around now that we’ve lost Steve. 

Here are two expert comments I reported a while back. Both commenters have done important work in advanced wireless.

Andrea Goldsmith of Stanford

I remember when DIDO first hit the presses. The white paper on it was so vague as to be laughable, and the only technology I could discern through the smoke and mirrors was MU MIMO. I thought we had buried DIDO back then, but apparently not. Thanks for your quick response to this ridiculous “discovery.”

Ted Rappaport, NYU

Dave: I have heard and seen this stuff from this group over the past few years, but have yet to see theory or detailed analysis or explanation that would allow this to be independently verified or generally understood by technically sophisticated people.

Without more technical details, this likely strikes many technically literate people as hype, fluff, and PR.

Dave Burstein

The Supposedly Superfast pCell Network Gets Its First Big Test In San Francisco
Steve Perlman and Dish Network lay down a marker
Feb 24 2015

Blaming the Internet for Terrorism: So Wrong and So Dangerous

[Note: This item comes from Lauren Weinstein’s NNSquad List. DLH]

Date: February 22, 2015 at 20:29:34 EST 
From: Lauren Weinstein <>
Subject: [ NNSquad ] Blaming the Internet for Terrorism: So Wrong and So Dangerous


You can almost physically hear the drumbeat getting louder. It’s almost impossible to read a news site or watch cable news without seeing some political, religious, or “whomever we could get on the air just now” spokesperson bemoaning and/or expressing anger about free speech on the Internet.

Their claims are quite explicit. “Almost a hundred thousand social media messages sent by ISIL a day!” “Internet is the most powerful tool of extremists.” On and on.

Now, most of these proponents of “controlling” free speech aren’t dummies. They don’t usually come right out and say they want censorship. In fact, they frequently claim to be big supporters of free speech on the Net — they only want to shut down “extremist” speech, you see. And don’t worry, they all seem to claim they’re up to the task of defining which speech would be classified as verboten. “Trust us,” they plead with big puppy dog eyes.

But blaming the Net for terrorism — which is the underlying story behind their arguments — actually has all the logical and scientific rigor of blaming elemental uranium for atomic bombs.

Speaking of which, I’d personally be much more concerned about terrorist groups getting hold of loose fissile material than Facebook accounts. And I’m pretty curious about how that 100K a day social media messages stat is derived. Hell, if you multiply the number of social media messages I typically send per day times the number of ostensible followers I have, it would total in the millions — every day. And you know what? That plus one dollar will buy you a cup of crummy coffee.

Proponents of controls on Internet speech are often pretty expert at conflating and confusing different aspects of speech, with a definite emphasis on expanding the already controversial meanings of “hate speech” and similar terms.

They also note — accurately in this respect — that social media firms aren’t required to make publicly available all materials that are submitted to them. Yep, this is certainly true, and an important consideration. But what speech control advocates seem to conveniently downplay is that the major social media firms already have significant staffs devoted to removing materials from their sites that violate their associated Terms of Service related to hate speech and other content. What’s more, this is an incredibly difficult and emotionally challenging task, calling on the Wisdom of Solomon as but one prerequisite.

The complexities in this area are many. The technology of the Net makes true elimination of any given material essentially impossible. Attempts to remove “terrorist-related” items from public view often draw more attention to them via the notorious “Streisand Effect” — and/or push them into underground, so-called “darknets” where they are still available but harder to monitor for public-safety tracking of their activities.

“Out of sight, out of mind” might work for a cartoon ostrich with its head stuck into the ground, but it’s a recipe for disaster in the real world of the Internet.

There are of course differences between “public” and “publicized.” Sometimes it seems like cable news has become the paid publicity partner of ISIL and other terrorist groups, merrily spending hours promoting the latest videotaped missive from every wannabe terrorist criminal wearing a hood and standing in front of an ISIL flag fresh from their $50 inkjet printer.

But that sort of publicity in the name of ratings is very far indeed from attempting to control the dissemination of information on the Net, where information once disseminated can receive almost limitless signal boosts from every attempt made to remove it.

This is not to say that social media firms shouldn’t enforce their own standards. But the subtext of information control proponents — and their attempts to blame the Internet for terrorism — is the implicit or explicit implication that ultimately governments will need to step in and enforce their own censorship regimes.

We’re well down that path already in some ways, of course. Government-mandated ISP block lists are replete with errors blocking innocent sites, yet still rapidly expanding beyond their sometimes relatively narrow original mandates.

And whether we’re talking about massive, pervasive censorship systems like in China or Iran, or the immense censorship pressures applied in countries like Russia, or even the theoretically optional systems like in the U.K., the underlying mindsets are very much the same, and very much to the liking of political leaders who would censor the Internet not just on the basis of “stopping terrorism,” but for their own political, financial, religious or other essentially power hungry reasons as well.

In this respect, it’s almost as if terrorists were partnering with these political leaders, so convenient are the excuses for trying to crush free speech, to control that “damned Internet” — provided to the latter by the former.

Which brings us to perhaps the ultimate irony in this spectacle, the sad truth that by trying to restrict information on the Internet in the name of limiting the dissemination of “terrorist” materials on the Net, even the honest advocates of this stance — those devoid of ulterior motives for broader information control — are actually advancing the cause of terrorism by drawing more attention to those very items they’d declare “forbidden,” even while it will be technologically impossible to actually remove those materials from public view.

It’s very much a lose-lose situation of the highest order, with potentially devastating consequences far beyond the realm of battling terrorists.

For if these proponents of Internet information control — ultimately of Internet censorship — are successful in their quest, they will have handed terrorists, totalitarian governments, and other evil forces a propaganda and operational prize more valuable to the cause of repression than all the ISIL social media postings and videos made to date or yet to be posted.

And then, dear friends, as the saying goes, the terrorists really would have won, after all.

Be seeing you.

–Lauren– Lauren Weinstein

Google warns FCC plan could help ISPs charge senders of Web traffic

Net neutrality plan could have unintended consequences, Google argues.
By Jon Brodkin
Feb 23 2015

Google is warning that the Federal Communications Commission’s net neutrality plan could have unintended consequences that help Internet service providers charge Web services for sending traffic.

FCC Chairman Tom Wheeler’s plan would reclassify broadband providers as common carriers on two fronts, in the service they provide home Internet customers and their relationships with “edge providers,” companies like Netflix that offer content to consumers over the Internet. Classifying the ISP-edge provider relationship is, in the FCC’s way of thinking, supposed to provide additional authority so the commission can intervene when an edge provider claims it is being treated unfairly.

But Google says that giving the ISP-edge provider relationships a new classification could actually make it easier for Internet providers to charge edge providers for the right to send traffic to consumers.

“[T]his issue must be viewed in light of the efforts by some ISPs, particularly abroad, to claim that they provide a service to content providers for which they should be able to charge under a ‘sender pays’ model—while still charging their retail customers for the same traffic,” Google Communications Law Director Austin Schlick wrote in a filing with the FCC. “To the extent the Commission encourages the falsehood that ISPs offer two overlapping access services instead of just one, or the fiction that edge providers are customers of terminating ISPs when they deliver content to the Internet, it may encourage such attempts at double-recovery. That could do serious, long-term harm to the virtuous circle of Internet innovation, thus greatly undermining the benefit of adopting net neutrality rules.”

Google is making an argument similar to one put forth by the advocacy group Free Press, which said that classifying the ISP-edge provider connection as a common carrier service is a legally dicey strategy. The FCC’s goal is to be able to intervene in interconnection disputes that harm Internet service quality. But both Free Press and Google argue that the FCC can oversee interconnection simply by reclassifying consumer broadband as a common carrier service.

It is not “necessary to imagine a non-existent service in order to reach ISPs’ interconnection practices,” Google told the FCC. “Should the Commission classify end-user broadband Internet access as a telecommunications service subject to Title II [of the Communications Act], that classification alone would enable the Commission to ensure that ISPs’ interconnection practices are just and reasonable. As noted, for instance, Section 201(b) requires just and reasonable practices ‘for and in connection with such communication service.’ If an ISP’s intentional port congestion or other interconnection practices denied end-user customers the full benefit of the two-way service they have purchased, then the Commission could take enforcement action.”

Interconnection is when two network providers, or an edge provider and an ISP, exchange traffic directly without a middleman. These transfers can happen with or without payment. This type of paid traffic transfer is different from “paid prioritization” deals prohibited by the net neutrality proposal, because interconnection doesn’t speed traffic up after it enters the ISP’s network. But interconnection can greatly improve performance because it provides a dedicated path into the ISP’s network.

Interconnection became part of the net neutrality debate only after a dispute between Netflix and ISPs caused consumers to have poor Netflix service for months, until Netflix relented and paid for direct network connections. The FCC is not proposing a ban on interconnection payments outright, but it wants to set up a complaint process in which edge providers could argue that they are being overcharged or that ISPs aren’t upgrading capacity quickly enough.

Google, which is both a content provider and an ISP, has argued that companies like Netflix should not have to pay for interconnection. But the FCC’s approach to interconnection is flawed, Google argued.


Why “Citizenfour” Deserved Its Oscar

Feb 22 2015

“Thank you to Edward Snowden for his courage,” Laura Poitras, the director of “Citizenfour,” said as she accepted the Oscar for best documentary. Neil Patrick Harris, the award show’s host, noted that Snowden couldn’t be there “for some treason.” Treason isn’t one of the crimes Snowden has been charged with—the government wants to prosecute him under the Espionage Act—but both the praise and the joke point to why this Snowden Oscar mattered. What he did was useful, and dangerous.

That wouldn’t have been enough if the movie were bad. But “Citizenfour” is worth watching, as well as celebrating. One still has to ask where the cinematic romance is. At the Oscars, an answer was provided by the young woman onstage with Poitras: Lindsay Mills, the woman whom Snowden at first left behind when he left his job and everything else for a hotel room in Hong Kong. One of the minor revelations of “Citizenfour” was that Mills had joined him in Moscow.

“Just walk me through it,” Glenn Greenwald tells Edward Snowden, in that Hong Kong hotel room. The guidance Greenwald and his colleagues look for is of three distinct kinds: How do you keep secrets? Why would Snowden tell secrets? And what has the government been hiding?

The first is the most one-sided. Greenwald, as the narration delicately makes clear, initially can’t figure out or can’t be bothered to set up the encrypted line of communication needed to satisfy the mysterious source who e-mails him—this is why Snowden turns to Laura Poitras, who knows exactly what he’s talking about when he asks, in their first exchanges, about her public keys. (George Packer wrote a Profile of Poitras for The New Yorker.) Snowden shows Greenwald how to do it (“It seems hard, but it’s not—this is super-easy”), and why he should. Here is one of the practical, paradoxical gifts of the Snowden affair: don’t give up on the idea that your words can be secret, at least slightly more secret than is convenient for companies or spies. If you are a little disciplined, you can be freer. There is a lovely shot of Greenwald’s face when Snowden, who is about to enter a password, asks for his “magic mantle of power,” a red sweatshirt, and pulls it over his head, as if he were a man running in the rain, or a teen-ager with a flashlight under his blankets. Looking at him, Greenwald, whom we’ve already encountered as a big talker, is, for a moment, only quiet and curious, with barely a flicker in his expression before he asks, “Is that about the possibility of—overhead?” Greenwald adds that nothing will surprise him anymore. His tone in that instant is one that the film, for all the scenes with angry activists, ultimately shares, and why the film works—neither titillated nor portentous, and just abashed enough to keep its importance from becoming self-importance.

Narcissism is the charge that’s thrown at Snowden—that he thinks he gets to decide what’s secret. His character, or, rather, his motivation for leaking, is the second puzzle for Greenwald and for Ewen MacAskill, the Guardian reporter also in the hotel room. Here, it is MacAskill’s face that is revealing. Greenwald seems sure of what category to put Snowden in, once he is persuaded that the leak is for real and the information is good. (“The fearlessness and the ‘fuck you’ to, like, the bullying tactics has got to be completely pervading everything we do.”) MacAskill, though, begins by telling Snowden that he doesn’t know anything about him; when Snowden starts talking about the N.S.A.’s relation to Booz Allen Hamilton, his on-paper employer, MacAskill stops him: “So, I don’t know your name.” He takes notes; his glances, when he looks up from writing in the scenes that follow, suggest a skeptic’s trust being earned.


The Spy Cables: A glimpse into the world of espionage

Secret documents, leaked from numerous intelligence agencies, offer rare insights into the interactions between spies.
By Al Jazeera Investigative Unit
Feb 23 2015

A digital leak to Al Jazeera of hundreds of secret intelligence documents from the world’s spy agencies has offered an unprecedented insight into operational dealings of the shadowy and highly politicised realm of global espionage.

Over the coming days, Al Jazeera’s Investigative Unit is publishing The Spy Cables, in collaboration with The Guardian newspaper.

Spanning a period from 2006 until December 2014, they include detailed briefings and internal analyses written by operatives of South Africa’s State Security Agency (SSA). They also reveal the South Africans’ secret correspondence with the US intelligence agency, the CIA, Britain’s MI6, Israel’s Mossad, Russia’s FSB and Iran’s operatives, as well as dozens of other services from Asia to the Middle East and Africa.

Among the revelations, the Spy Cables disclose how:

• Israel’s Mossad told its allies that Iran was not working to produce nuclear weapons just a month after Prime Minister Benjamin Netanyahu warned it was barely a year from being able to do so;
• The CIA made attempts to contact Hamas directly despite the US government listing the Palestinian group as a “terrorist organisation”;
• Britain’s MI6 sought South African help in an operation to recruit a North Korean official who had previously refused their cash; and
• South African and Ethiopian spies struggled to “neutralise” an assassination plot targeting a leading African diplomat.

The files unveil details of how, as the post-apartheid South African state grappled with the challenges of forging new security services, the country became vulnerable to foreign espionage and inundated with warnings related to the US “War on Terror”.

Following the 9/11 attacks, South African spies were flooded with requests related to al-Qaeda, despite their own intelligence gathering and analysis telling them that they faced minimal direct threats from such groups, and that the main threat of violence on South African soil came from domestic far-right groups.

The South Africans’ focus on Iran was largely a result of pressure from other nations, and the leaked documents also report in depth on alleged efforts by Iran to defeat international sanctions and even its use of Persian rug stores as front companies for spying activity.


Unlike the Edward Snowden documents that focus on electronic signals intelligence, commonly referred to in intelligence circles as “SIGINT”, the Spy Cables deal with human intelligence, or “HUMINT”.

This is espionage at the more humdrum, day-in-the-office level. At times, the workplace resembles any other, with spies involved in form-filling, complaints about missing documents and personal squabbles. Some of the communiqués between agencies are simply invitations for liaison meetings or briefings by one agency to another.

Inter-agency communiqués include “trace requests” for individuals or phone numbers. One set of cables from the Algerian Embassy in South Africa relates to a more practical concern. It demands that “no parking” signs are placed in the street outside. The cable notes that the British and US embassies enjoy this privilege, and argues that it should be extended to Algeria as well.

Rather than chronicling spy-movie-style tales of the ruthless efficiency of intelligence agencies, the cables offer an unprecedented glimpse into the daily working lives of people whose jobs are kept secret from the public.