Google Reveals ‘Project Wing,’ Its Two-Year Effort to Build Delivery Drones

Aug 28 2014

Google X, the tech giant’s “moonshot” lab, has spent the last two years building an aerial drone that can deliver goods across the country. The company calls the effort Project Wing.

Revealed today in a story from The Atlantic, the project is reminiscent of work underway at Amazon. CEO and founder Jeff Bezos revealed the retailer’s drone ambitions this past holiday shopping season during an appearance on the popular TV news magazine 60 Minutes.

“Self-flying vehicles could open up entirely new approaches to moving things around—including options that are faster, cheaper, less wasteful, and more environmentally sensitive than the way we do things today,” a Google spokesperson said in an email to WIRED.

According to the company, the Project Wing team recently tested its drone prototypes in Australia, delivering packages to a pair of local farmers. The company said it would not agree to additional interviews about the project. “The vehicle you see in our video is more a research vehicle than an indication of a final decision or direction—as we figure out exactly what our service will deliver and where and why, we will look at a variety of vehicle options (both home-made and off-the-shelf),” the spokesperson said.

A white paper released by the company says that the Google X team first discussed the idea of building flying vehicles in 2011, and that in July 2012, Nick Roy, of the MIT Aeronautics & Astronautics program, joined the company to explore the possibilities. Originally, the paper says, the aim was to use drones to deliver defibrillators to heart attack victims.


Netflix asks FCC to stop Comcast/TWC merger citing ‘serious’ public harm

By Steve Dent
Aug 26 2014

As it promised, Netflix has filed a petition with the FCC demanding that it deny the proposed merger between Comcast and Time Warner Cable. The 256-page document claims that the merger would result in “serious public interest harm” and no discernible public benefit — two red flags for regulatory bodies. Netflix cited several examples of harm already inflicted on it by Comcast or Time Warner Cable. For one, Comcast has used network congestion as an excuse to “shift Netflix traffic to paid interconnections.” Netflix also argued that data caps have been used as a tactic to deter consumers from using third-party streaming services like Netflix or Hulu.

Netflix wrote that a merged cable giant would have huge leverage over it and other internet companies. It said Comcast and TWC’s claims that there is enough competition in the market are disingenuous, since DSL offerings from AT&T and Verizon are often insufficient for Netflix streaming. It added that TWC and Comcast offer competing paid video-on-demand services over broadband and thus have “incentives to interfere” with third-party companies like Netflix.

It also noted that it’s prohibitively expensive for consumers to switch broadband services, and that even if they wanted to, there are often zero alternatives — a situation that would worsen with a merger. Finally, it complained about the problem of “terminating networks,” or the point at which user data switches from one network to another. It contended that providers can deliberately congest such routes to extract fees — and in fact, currently have no incentive not to.

There are many more arguments listed in the document (at the source), and many are well known to the US public — who have become intensely interested in the merger and net neutrality in general. Naturally, Netflix has its own interests (and profits) at heart, and Comcast and TWC may have a rebuttal to its main arguments. But Netflix’s formal challenge before the FCC is significant, since it (and its customers) may suffer the most from a merger. It has now joined Dish Network and numerous consumer groups in filing a formal brief.


Voices From a Virtual Past

An oral history of a technology whose time has come again
Compiled by Adi Robertson and Michael Zelenko
Aug 24 2014

When Facebook bought virtual reality company Oculus in early 2014, virtual reality blew up. While game and movie studios began reimagining the future, others looked back at the “old days” of VR — a loosely remembered period in the 1990s when gloves and goggles were super cool and everyone was going to get high on 3D graphics. But things were never so simple. We spoke to 18 key VR innovators about their work and dreams. What follows is over two decades of memories and visions for what the future could be.

Some people trace the birth of virtual reality to rudimentary Victorian “stereoscopes,” the first 3D picture viewers. Others might point to any sort of out-of-body experience. But to most, VR as we know it was created by a handful of pioneers in the 1950s and 1960s. In 1962, after years of work, filmmaker Mort Heilig patented what might be the first true VR system: the Sensorama, an arcade-style cabinet with a 3D display, vibrating seat, and scent producer. Heilig imagined it as one in a line of products for the “cinema of the future,” but that future failed to materialize in his lifetime.

In 1965, Ivan Sutherland — already known as the creator of groundbreaking computer interface Sketchpad — conceived of what he termed “The Ultimate Display,” or, as he wrote, “a room within which the computer can control the existence of matter.” He demonstrated an extremely preliminary iteration of such a device, a periscope-like video headset called the “Sword of Damocles,” in 1968.

Meanwhile, at the Wright–Patterson Air Force Base in Ohio, military engineer Thomas Furness was designing a new generation of flight simulators, working on a multi-decade project that eventually became the hallmark program known as the Super Cockpit.

A few years later, in the late ’60s, an artist and programmer named Myron Krueger would begin creating a new kind of experience he termed “artificial reality,” and attempt to revolutionize how humans interacted with machines.


Internet Archive posts millions of historic images to Flickr

Dating from 1500 to 1922, these images are now free and searchable.
By Megan Geuss
Aug 29 2014

Earlier this year, communications technology scholar Kalev Leetaru began culling over 14 million images from the Internet Archive’s public domain ebooks and uploading them to the Internet Archive’s Flickr account. As of today, 2.6 million images are now easily searchable and downloadable.

When the Internet Archive originally scanned the books, it used Optical Character Recognition (OCR), which made the book text searchable — but that didn’t mean much if you were looking for images. So Leetaru wrote software to take advantage of the OCR program that the Internet Archive had used to scan public domain works published between 1500 and 1922.

According to the BBC, the OCR program scanned the books and discarded sections of the text that it recognized as images. Leetaru had his software go back and find those discarded portions, automatically converting them into JPEG images and uploading them to Flickr. “The software also copied the caption for each image and the text from the paragraphs immediately preceding and following it in the book,” the BBC wrote.
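The process the BBC describes can be sketched roughly in Python. This is a hypothetical illustration, not Leetaru’s actual code: the XML element names, attributes, and page layout below are invented for the example, and the Internet Archive’s real OCR output format differs.

```python
# Hypothetical sketch of the recovery step: the OCR output marks regions it
# skipped as images; we collect their bounding boxes plus the neighboring
# paragraph text, so each cropped region could later be saved as a JPEG and
# uploaded with a caption. (Simplified: only the following paragraph is used.)
import xml.etree.ElementTree as ET

def find_image_regions(ocr_xml: str):
    """Return (bounding_box, caption) pairs for regions OCR flagged as images."""
    root = ET.fromstring(ocr_xml)
    blocks = list(root.iter("block"))
    regions = []
    for i, block in enumerate(blocks):
        if block.get("type") != "image":
            continue
        # Pixel coordinates of the discarded region on the page scan.
        box = tuple(int(block.get(k)) for k in ("x", "y", "w", "h"))
        # Caption guess: the text block immediately after the image, if any.
        caption = ""
        if i + 1 < len(blocks) and blocks[i + 1].get("type") == "text":
            caption = (blocks[i + 1].text or "").strip()
        regions.append((box, caption))
    return regions

sample = """<page>
  <block type="text">Chapter one begins here.</block>
  <block type="image" x="40" y="120" w="300" h="200"></block>
  <block type="text">Fig. 1: A steam engine.</block>
</page>"""

print(find_image_regions(sample))
```

Each recovered bounding box could then be cropped out of the original page scan with an image library and written out as a JPEG.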

Although the tagging for the images is admittedly imprecise, the potential for such an easily accessible archive is massive. “Any library could repeat this process,” Leetaru told the BBC. “That’s actually my hope, that libraries around the world run this same process of their digitized books to constantly expand this universe of images.”

‘I could have been Mike Brown’: your stories of racial profiling by the world’s police

By Tony Messenger in St Louis and Matt Sullivan in New York
Aug 29 2014

Hands up. Don’t shoot. The image of black men and women repeating this simple action at protests in Ferguson, Missouri – and across the globe – generates its power from what happens before that moment.

In Ferguson and too many places, police are more likely to pull over people of color for driving – indeed, often for simply being a person of color.

But there is lasting power in the stories people never forget. They are stories of “broken” taillights, of police brutality that doesn’t show up in an arrest report because there never was one, of no justice because nobody knew where to turn.

To help reach beyond Ferguson, the opinion departments of Guardian US and the St Louis Post-Dispatch partnered to gather hundreds of reader experiences. Our hope is that this sampling will help spur empathy – and then action, everywhere. These are your stories, lightly edited for space, privacy and clarity, and your hope for what’s next.


IPv6 adoption starting to add up to real numbers: 0.6 percent

Researchers looked at 12 metrics, including half the Internet’s actual traffic.
By Iljitsch van Beijnum
Aug 28 2014

At the prestigious ACM SIGCOMM conference last week, researchers from the University of Michigan, the International Computer Science Institute, Arbor Networks, and Verisign Labs presented the paper “Measuring IPv6 Adoption.” In it, the team does just that—in 12 different ways, no less. The results from these different measurements don’t exactly agree, with the lowest and the highest being two orders of magnitude (close to a factor of 100) apart. But the overall picture that emerges is one of a protocol that’s quickly capturing its own place under the sun next to its big brother IPv4.

As a long-time Ars reader, you of course already know everything you need to know about IPv6. There’s no Plan B, but you have survived World IPv6 Day and World IPv6 Launch. All of this drama occurs because existing IP(v4) addresses are too short and are thus running out, so we need to start using the new version of IP (IPv6) that has a much larger supply of much longer addresses.

The good news is that the engineers in charge knew two decades ago that we’d eventually run out of IPv4 addresses, so we’ve had a long time to standardize IPv6 and put the new protocol in routers, firewalls, operating systems, and applications. The not-so-good news is that IP is everywhere. The new protocol can only be used when the two computers (or other devices) communicating over the ‘Net—as well as every router, firewall, and load balancer in between—have IPv6 enabled and configured. As such, getting IPv6 deployed has been an uphill struggle. But last week’s paper shows us how far we’ve managed to struggle so far.

In an effort to be comprehensive, the paper (PDF) visits all corners of the Internet’s foundation, from getting addresses to routing in the core of the network. The researchers also got their hands on as many as half of the packets flowing across the Internet at certain times, counting how many of those packets were IPv4 and how many were IPv6.

The authors focused on content providers, service providers, and content consumers. For each of these, the first step toward sending and receiving IPv6 packets is to get IPv6 addresses. Five Regional Internet Registries (RIRs) give out both IPv4 addresses and IPv6 addresses. Looking at 10 years of IP address distribution records, it turns out that prior to 2007, only 30 IPv6 address blocks or address prefixes were given out each month. That figure is now 300; the running total is 18,000. IPv4, on the other hand, reached a peak of more than 800 prefixes a month in 2011 and is now at about 500. Although IPv6 is close on a monthly basis, IPv4 had a big head start and is currently at 136,000 prefixes given out.

Once a network obtains address space, it’s helpful if packets manage to find their way to the holder of the addresses. Prefixes must be “advertised” in the global routing system so routers around the world know where to send the packets for those addresses. The number of IPv6 prefixes in routing tables in the core of the Internet was 500 in 2004 and 19,000 now—a 37-fold increase. IPv4, on the other hand, went from 153,000 to 578,000 in the same time, an increase of a factor of four. Network operators often break up their prefixes into smaller ones, so this number is higher than the number of prefixes given out by the RIRs. Note that this is purely the number of address blocks, regardless of their size.

With routing and addressing out of the way, we need to start thinking about the DNS. When a system wants to talk to a service known by a domain name, it can ask the DNS for the domain’s IPv4 addresses with an A (address) record query. If the system also wants to know the IPv6 addresses that go with the domain, it has to perform a separate AAAA record query. These queries can be performed over either IPv4 or IPv6.
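On the wire, those two lookups differ only in a single field: the DNS query type, 1 for A and 28 for AAAA. A minimal sketch of building the raw query packets by hand (a real resolver would of course use the OS stub or a DNS library):

```python
# Build minimal DNS query packets to show that an A lookup and an AAAA
# lookup for the same domain are identical except for the qtype field.
import struct

QTYPE_A, QTYPE_AAAA = 1, 28  # record types defined by the DNS specification

def build_query(domain: str, qtype: int, txid: int = 0x1234) -> bytes:
    # Header: transaction id, flags (recursion desired), 1 question,
    # and zero answer/authority/additional records.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    # Question footer: qtype (A or AAAA) and qclass (1 = Internet).
    return header + qname + struct.pack("!HH", qtype, 1)

a_query = build_query("example.com", QTYPE_A)
aaaa_query = build_query("example.com", QTYPE_AAAA)
# The two packets differ only in the two qtype bytes near the end.
print(a_query[-4:-2], aaaa_query[-4:-2])
```

Either packet can be sent over UDP port 53 on IPv4 or IPv6 transport; which address family carries the query is independent of which record type it asks for.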


Re: Time Warner Cable Internet Outage Affects Millions

[Note:  This comment comes from friend Bob Frankston.  DLH]

From: “Bob Frankston” <>
Subject: RE: [Dewayne-Net] Time Warner Cable Internet Outage Affects Millions
Date: August 28, 2014 at 15:17:07 EDT

You get what you ask for. If you build a video delivery system and treat the Internet as just one channel being delivered, you get outages at points of failure in an entertainment system.

If you want to build a system for connectivity, you need a different approach to engineering, and people who aren’t primarily in the video delivery business or in the business of building an infrastructure for telecommunications services. Bufferbloat is another example of what happens if you use engineering principles from another era.

Time Warner Cable Internet Outage Affects Millions
By Kia Makarechi
Aug 27 2014