IPv6 adoption starting to add up to real numbers: 0.6 percent

Researchers looked at 12 metrics, including half the Internet’s actual traffic.
By Iljitsch van Beijnum
Aug 28 2014

In a paper presented at the prestigious ACM SIGCOMM conference last week, researchers from the University of Michigan, the International Computer Science Institute, Arbor Networks, and Verisign Labs measured exactly what the paper’s title promises: “Measuring IPv6 Adoption.” The team does so in 12 different ways, no less. The results of these different measurements don’t exactly agree; the lowest and the highest are two orders of magnitude (a factor of nearly 100) apart. But the overall picture that emerges is of a protocol that’s quickly capturing its own place under the sun next to its big brother IPv4.

As a long-time Ars reader, you of course already know everything you need to know about IPv6. There’s no Plan B, and you have survived World IPv6 Day and World IPv6 Launch. All of this drama occurs because existing IP(v4) addresses are too short and are thus running out, so we need to start using the new version of IP (IPv6), which has a much larger supply of much longer addresses.

The good news is that the engineers in charge knew two decades ago that we’d eventually run out of IPv4 addresses, so we’ve had a long time to standardize IPv6 and put the new protocol in routers, firewalls, operating systems, and applications. The not-so-good news is that IP is everywhere. The new protocol can only be used when the two computers (or other devices) communicating over the ‘Net—as well as every router, firewall, and load balancer in between—have IPv6 enabled and configured. As such, getting IPv6 deployed has been an uphill struggle. But last week’s paper shows us how far we’ve managed to struggle so far.

In an effort to be comprehensive, the paper (PDF) visits all corners of the Internet’s foundation, from getting addresses to routing in the core of the network. The researchers also got their hands on as many as half of the packets flowing across the Internet at certain times, counting how many of those packets were IPv4 and how many were IPv6.

The authors focused on content providers, service providers, and content consumers. For each of these, the first step toward sending and receiving IPv6 packets is to get IPv6 addresses. Five Regional Internet Registries (RIRs) give out both IPv4 addresses and IPv6 addresses. Looking at 10 years of IP address distribution records, it turns out that prior to 2007, only 30 IPv6 address blocks or address prefixes were given out each month. That figure is now 300; the running total is 18,000. IPv4, on the other hand, reached a peak of more than 800 prefixes a month in 2011 and is now at about 500. Although IPv6 is close on a monthly basis, IPv4 had a big head start and is currently at 136,000 prefixes given out.

Once a network obtains address space, it’s helpful if packets manage to find their way to the holder of the addresses. Prefixes must be “advertised” in the global routing system so routers around the world know where to send the packets for those addresses. The number of IPv6 prefixes in routing tables in the core of the Internet was 500 in 2004 and is 19,000 now—a 37-fold increase. IPv4, on the other hand, went from 153,000 to 578,000 in the same time, an increase of a factor of four. Network operators often break up their prefixes into smaller ones, so this number is higher than the number of prefixes given out by the RIRs. Note that this is purely the number of address blocks, regardless of their size.
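A prefix counts as one routing-table entry no matter how many addresses it contains, and the size gap between the two protocols is enormous. As a quick illustration (mine, not the paper’s), Python’s standard ipaddress module can compare a typically sized allocation of each family, using the reserved documentation prefixes:

```python
import ipaddress

# Reserved documentation prefixes, sized like typical allocations:
# a /24 of IPv4 versus a /32 of IPv6.
v4_block = ipaddress.ip_network("198.51.100.0/24")
v6_block = ipaddress.ip_network("2001:db8::/32")

print(v4_block.num_addresses)  # 256
print(v6_block.num_addresses)  # 2**96, about 7.9e28

# Each block is exactly one prefix in a routing table,
# which is why prefix counts say nothing about address counts.
```

So comparing 19,000 IPv6 prefixes to 578,000 IPv4 prefixes tells you about routing-table growth, not about how much address space each protocol has in play.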

With routing and addressing out of the way, we need to start thinking about the DNS. When a system wants to talk to a service known by a domain name, it can ask the DNS for the IPv4 addresses that go with that domain by performing an A (address) record query. If the system also wants to know the IPv6 addresses that go with the domain, it has to perform a separate AAAA record query. These queries can be performed over either IPv4 or IPv6.
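In practice, applications usually hand both lookups to the operating system’s resolver and simply ask for one address family at a time. A minimal sketch using Python’s socket.getaddrinfo (the hosts shown are reserved documentation addresses, so the resolver answers them without sending any actual DNS queries):

```python
import socket

def resolve(host, family):
    """Return the addresses for host restricted to one family.
    socket.AF_INET corresponds to A records, socket.AF_INET6 to AAAA."""
    try:
        infos = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []  # no addresses of this family

# Documentation addresses resolve locally, without network traffic:
print(resolve("192.0.2.1", socket.AF_INET))     # ['192.0.2.1']
print(resolve("2001:db8::1", socket.AF_INET6))  # ['2001:db8::1']
print(resolve("2001:db8::1", socket.AF_INET))   # [] -- wrong family
```

A dual-stack client typically asks for both families and prefers IPv6 when an AAAA answer comes back, which is why publishing an AAAA record is the content provider’s half of the deployment bargain.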


Re: Time Warner Cable Internet Outage Affects Millions

[Note:  This comment comes from friend Bob Frankston.  DLH]

From: “Bob Frankston” <Bob19-0501@bobf.frankston.com>
Subject: RE: [Dewayne-Net] Time Warner Cable Internet Outage Affects Millions
Date: August 28, 2014 at 15:17:07 EDT
To: dewayne@warpspeed.com

You get what you ask for. If you build a video delivery system and treat the Internet as just one channel being delivered you get outages at points of failure in an entertainment system.

If you want to build a system for connectivity you need a different approach to engineering and people who aren’t primarily in the video delivery business nor in the business of building an infrastructure for telecommunications services. Buffer bloat is another example of what happens if you use engineering principles from another era.

Time Warner Cable Internet Outage Affects Millions
By Kia Makarechi
Aug 27 2014

As Obama Settles on Nonbinding Treaty, “Only a Big Movement” Can Take on Global Warming

Aug 28 2014

As international climate scientists warn runaway greenhouse gas emissions could cause “severe, pervasive and irreversible impacts,” the Obama administration is abandoning attempts to have Congress agree to a legally binding international climate deal. The New York Times reports U.S. negotiators are crafting a proposal that would not require congressional approval and instead would seek pledges from countries to cut emissions on a voluntary basis. This comes as a new U.N. report warns climate change could become “irreversible” if greenhouse gas emissions go unchecked. If global warming is to be adequately contained, it says, at least three-quarters of known fossil fuel reserves must remain in the ground. We speak to 350.org founder Bill McKibben about why his hopes for taking on global warming lie not in President Obama’s approach, but rather in events like the upcoming People’s Climate March in New York City, which could mark the largest rally for climate action ever. “The Obama administration, which likes to poke fun at recalcitrant congressmen, hasn’t been willing to really endure much in the way of political pain itself in order to slow things down,” McKibben says. “The rest of the world can see that. The only way we’ll change any of these equations here or elsewhere is by building a big movement — that’s why September 21 in New York is such an important day.”

Bill McKibben, co-founder and director of 350.org. He is author of many books, including Eaarth: Making a Life on a Tough New Planet.


The Biggest Tax Scam Ever: How Corporate America Parks Profits Overseas, Avoiding Billions in Taxes

Aug 28 2014

As Burger King heads north for Canada’s lower corporate tax rate, we speak to Rolling Stone contributing editor Tim Dickinson about his new article, “The Biggest Tax Scam Ever.” Dickinson reports on how top U.S. companies are avoiding hundreds of billions of dollars by parking their profits abroad — and still receiving more congressionally approved incentives. Dickinson writes: “Top offenders include giants from high-tech (Microsoft, $76 billion); Big Pharma (Pfizer, $69 billion); Big Oil (ExxonMobil, $47 billion); investment banks (Goldman Sachs, $22 billion); Big Tobacco (Philip Morris, $20 billion); discount retailers (Wal-Mart, $19 billion); fast-food chains (McDonald’s, $16 billion) – even heavy machinery (Caterpillar, $17 billion). General Electric has $110 billion stashed offshore, and enjoys an effective tax rate of 4 percent – 31 points lower than its statutory obligation to the IRS.”

Tim Dickinson, contributing editor at Rolling Stone, where he covers the national affairs beat. His latest article is “The Biggest Tax Scam Ever: Some of America’s top corporations are parking profits overseas and ducking hundreds of billions in taxes. And how’s Congress responding? It’s rewarding them for ripping us off.”

James Henry, economist, lawyer and senior adviser with the Tax Justice Network. He is former chief economist at McKinsey & Company.


14 years ago, DOJ said letting one broadband company run half the country was a bad idea

By Brian Fung
Aug 28 2014

Remember the year 2000? We’d just gotten through worrying about the Y2K bug. The dot-com bubble was in full swing. Tuvalu joined the United Nations. Heady times!

The year 2000 also happened to be when federal regulators approved a merger between two tech titans that some now say should be instructive for the Justice Department and the Federal Communications Commission, as the two agencies review a current-day proposal by Comcast to acquire Time Warner Cable. The 2000 merger, known as AT&T-MediaOne, offers a precedent. But it also raises further questions about certain rules we’ve established to ensure competition in the marketplace. In fact, as communications technologies have begun to blend and overlap, it’s no longer clear that those rules adequately address the problems they were created to solve.

The obscure case we’re talking about dates back to the early days of high-speed broadband. It keeps coming up in filing after filing to the FCC. Netflix brought it up, as did Dish Network. It’s been cited by Sen. Al Franken (D-Minn.) — an outspoken opponent of the Comcast merger — a handful of consumer groups, a D.C. suburb in Maryland, a group of antitrust lawyers, and even by Comcast itself. What is everyone talking about, and why is a 14-year-old case that predates YouTube and Facebook still relevant?

You may not remember MediaOne, but back in 2000 it was one of the biggest Internet providers around. Comcast had initially planned to buy it before being outmaneuvered at the last minute by AT&T, which submitted a higher bid. So the merger became known as AT&T-MediaOne.

It so happened that through an ISP called Road Runner, MediaOne served a large chunk of America’s broadband subscribers. AT&T, meanwhile, sold Internet through a Road Runner competitor called Excite@Home. A merger would’ve given AT&T control not only over MediaOne’s operations, but also part ownership in Road Runner — and together with its stake in Excite@Home, AT&T would’ve controlled an estimated 40 percent of the country’s access to broadband.

“Through its control of Excite@Home and its substantial influence or control of Road Runner,” the Justice Department wrote in a complaint in 2000, “AT&T would substantially increase its leverage in dealing with broadband content providers, enabling it to extract more favorable terms for such services.”

The Justice Department believed that if AT&T-MediaOne went through, AT&T’s newfound position as a gatekeeper would let it dictate outcomes across a national market for broadband. While regulators have been wary of gatekeeping for decades, this marked the first time that the concept had been raised in the Internet industry, policy analysts say.

What does this have to do with the Comcast merger? According to Netflix, Dish and others, regulators currently face a similar situation where you have two major Internet service providers who don’t compete against each other in local markets, but whose combination would create an entity that would be similarly empowered to dictate outcomes across a range of other industries nationally. (Where AT&T-MediaOne would’ve controlled 40 percent of the broadband market, Comcast-TWC’s post-merger broadband marketshare would stand at 36 percent.)

In the case of MediaOne, regulators made AT&T sell off its newly acquired stake in Road Runner as a way to address the potential gatekeeping problem. But today’s case isn’t as simple, largely because the Internet has become far more than an information retrieval system. It’s now a platform for a host of applications that were traditionally provided by distinct industries. These industries are now in the process of converging and moving online, and Comcast conveniently sits at the intersection of them all. Even if regulators thought that Comcast should sell off some part of its business to keep it from becoming too powerful, it isn’t clear what it could spin off.


How big telecom smothers city-run broadband

AT&T, Comcast, TWC use statehouses to curb public Internet service.
By Allan Holmes
Aug 28 2014

Janice Bowling, a 67-year-old grandmother and Republican state senator from rural Tennessee, thought it only made sense that the city of Tullahoma be able to offer its local high-speed Internet service to areas beyond the city limits.

After all, many of her rural constituents had slow service or did not have access to commercial providers, like AT&T Inc. and Charter Communications Inc.

But a 1999 Tennessee law prohibits cities that operate their own Internet networks from providing access outside the boundaries where they provide electrical service. Bowling wanted to change that and introduced a bill in February that would allow them to expand.

She viewed the network, which offers Tullahoma residents speeds about 80 times faster than AT&T and 10 times faster than Charter (according to advertised services), as a utility, like electricity, that all Tennesseans need.

“We don’t quarrel with the fact that AT&T has shareholders that it has to answer to,” Bowling said with a drawl while sitting in the spacious wood-paneled den of her log-cabin-style home. “That’s fine, and I believe in capitalism and the free market. But when they won’t come in, then Tennesseans have an obligation to do it themselves.”

At a meeting three weeks after Bowling introduced Senate Bill 2562, the state’s three largest telecommunications companies—AT&T, Charter, and Comcast Corp.—tried to convince Republican leaders to relegate the measure to so-called “summer study,” a black hole that effectively kills a bill. Bowling, described as “feisty” by her constituents, initially beat back the effort and thought she’d get a vote.

That’s when Joelle Phillips, president of AT&T’s Tennessee operations, leaned toward her across the table in a conference room next to the House caucus leader’s office and said tersely, “Well, I’d hate for this to end up in litigation,” Bowling recalls.

The threat surprised Bowling, and apparently AT&T’s ominous warning reached her colleagues as well. Days later, support in the Tennessee House for Bowling’s bill dissolved. AT&T had won.

“I had no idea the force that would come against this, because it’s just so reasonable and so necessary,” Bowling said.

AT&T and Phillips didn’t respond to e-mails asking for comment.

A national fight

Tullahoma is just one battlefront in a nationwide war that the telecommunications giants are fighting against the spread of municipal broadband networks. For more than a decade, AT&T, Comcast, Time Warner Cable Inc., and CenturyLink Inc. have spent millions of dollars to lobby state legislatures, influence state elections, and buy research to try to stop the spread of public Internet services that often offer faster speeds at cheaper rates.

The companies have succeeded in getting laws passed in 20 states that ban or restrict municipalities from offering Internet to residents.

Now the fight has gone national. The Federal Communications Commission (FCC) in Washington, DC, is considering requests from Chattanooga, Tennessee, and Wilson, North Carolina, to pre-empt state laws that block municipalities from building or expanding broadband networks — laws the cities argue hinder economic growth.

If the FCC rules in favor of the cities, and the ruling survives any legal challenges, municipalities nationwide will be free to offer high-speed Internet to residents when they aren’t satisfied with the service provided by private telecommunications companies.

To better understand the municipal broadband debate, the Center for Public Integrity traveled to two southern cities: Tullahoma, which has a broadband network, and Fayetteville, North Carolina, which doesn’t.


Juniper CEO on IoT, co-creation and the IOP

Shaygan Kheradpir blogs and talks about company strategy and a new style of customer interaction
Aug 27 2014

An Internet of Things is much more than simply connecting things to things or data to devices, argues Juniper CEO Shaygan Kheradpir; it should include the ability to inform users on why things are happening and then take appropriate action, such as reconfiguring itself to address or correct the situation.

Juniper’s been quiet on its own IoT strategy while competitors, chiefly Cisco, have been bullish on the opportunity and their ability to address it. But that’s not to say Juniper doesn’t have an IoT strategy, Kheradpir told us in an exclusive interview this week:

It’s the cloud and the very tentacles and tributaries you’ve got now – smart sensors. The next step on this is, how do you use all of this information? Which is High IQ networking in the service of healthcare… transportation… energy… media. It’s a coming together – finally – of high performance networking, big data analytics, and control system theory. These three things together… now you can actually do stuff because finally you are closing the loop on many things. Clearly, Internet of Things is a step toward there, cloud was a step towards there. We have a lot of use cases we are working on at Juniper of how you can make the network tell you stuff that you don’t know.

Kheradpir’s first Juniper blog post, published this week, describes an example of the network tweeting information to an operator about an event, its status, and the remedial action taken to address it. The engineering behind this intelligence and self-sufficiency is cloud, sensors, and control systems working in concert to achieve an operational goal.

So Juniper’s IoT strategy is working towards this goal, which is consistent with its cloud and “High IQ” networking imperatives, Kheradpir says.

Kheradpir has been in the corner office of Juniper for eight months now. He has been traveling extensively in that time, talking to customers and implementing his Integrated Operating Plan (IOP), an effort to streamline company operations, reduce costs and return more to shareholders.

Customer priorities are all over the map, and differ by business segment, he says. Telecom and cable providers are looking for operational cost efficiencies, scale, density and power improvements, and the ability to quickly turn up new services and tap new revenue streams through DevOps and virtual CPE.

Web 2.0 and webscale companies are targeting hyperscale growth, with the ability to scale both up and out, he says. They are also prioritizing precision and context sensitivity in the delivery of services to maximize the user experience at low cost.

Financial services firms are interested in carrier grade infrastructure, with high availability, automation in provisioning, and security and compliance assurance and enforcement by embedding associated policies into the fabric of the infrastructure. Infrastructure flexibility and multitenancy support also ranked high among customer needs, Kheradpir says.