Amazon says the FAA is so slow, the delivery drone it approved is already obsolete

The e-commerce giant wants US regulators to follow Europe’s lead
By Ben Popper
Mar 24 2015
<http://www.theverge.com/2015/3/24/8283565/amazon-prime-air-delivery-drone-faa-regulations>

Last week the Federal Aviation Administration finally gave Amazon permission to begin test flying its delivery drones outdoors. But in testimony before a Senate subcommittee today, Amazon argued that the government wasn’t moving nearly fast enough. “This approval came last Thursday, and we’re eager to get flying here as we have been abroad,” said Paul Misener, Amazon’s vice president for Global Public Policy. But “while the FAA was considering our applications for testing, we innovated so rapidly that the [drone] approved last week by the FAA has become obsolete. We don’t test it anymore. We’ve moved on to more advanced designs that we already are testing abroad.”

The FAA took one and a half years to give Amazon permission to fly one very specific model of drone. Misener compared this to what is happening overseas. “Nowhere outside of the United States have we been required to wait more than one or two months to begin testing, and permission has been granted for operating a category of UAS, giving us room to experiment and rapidly perfect designs without being required to continually obtain new approvals for specific UAS vehicles.”

Amazon wants minimal involvement from human pilots

Misener also took issue with the recently proposed rules from the FAA that would govern commercial drone flight in the US. The agency wants all drones to have a human pilot and to stay within that person’s line of sight at all times. A better system, he argued, “must allow UAS applications to take advantage of a core capability of the technology: to fly with minimal human involvement, beyond visual line of sight.” This kind of flight might have been dangerous a few years ago, but “automated UAS sense and avoid technology and on-board intelligence address these factors and will mitigate the related risks.”

If the FAA proceeds with its current rules and timetable, Amazon believes it will fall far behind the rest of the world. Misener applauded new policies from the European Aviation Safety Agency (EASA), which treat drones as a new category of aircraft, instead of lumping them in with manned aircraft. Academic experts who study the drone industry agree that unless something changes, American companies will move the research and development of commercial drones overseas to take advantage of the more permissive environment.

“Granted, the path to that future is a challenging technical problem, but what EASA has done is removed arbitrary regulatory hurdles, allowing engineers to do what they do best — design and innovate,” wrote Gregory McNeal, an associate professor of law and public policy at Pepperdine. “The new regulatory framework means that these innovators will have a clear path towards flying in Europe. They can plan and design their products to address safety concerns, rather than plan around arbitrary rules based on the last century’s aviation technology.”

[snip]

Voice Control Will Force An Overhaul of the Whole Internet

By Cade Metz
Mar 24 2015
<http://www.wired.com/2015/03/voice-control-will-force-overhaul-whole-internet/>

Jason Mars built his own Siri and then he gave it away. 

Mars is a professor of computer science at the University of Michigan. Working alongside several other university researchers, he recently built a digital assistant that could instantly respond to voice commands—much like Siri, the talking assistant offered on the Apple iPhone. Then he open sourced the thing, freely sharing the underlying code with the world at large.

Known as Sirius, the project is a way for all sorts of other software coders to explore the complexities of modern speech recognition, and perhaps even add speech rec to their own mobile apps. This, Jason Mars realizes, is where the world is moving.

But the project has another aim. Mars also realizes that the massive computing centers that underpin today’s internet are ill-equipped for the coming voice revolution, and with his project, he hopes to show how these facilities must change. “We want to understand how future data centers should be built,” he says.

You see, digital assistants like Siri and Google Now and Microsoft Cortana don’t just run on your phone. They run across thousands of machines packed into these enormous computing centers, and as we extend such services to more and more people across the globe, we can’t just run them on ordinary machines. That would take up far too much space and burn far too much energy. We need hardware that’s significantly more efficient. 

With their open source project, Mars and his colleagues, including a Michigan PhD student named Yunqi Zhang, can show how a tool like Siri operates inside the data center, and ultimately, they aim to identify the hardware best suited to running this kind of voice service—not to mention all the other artificially intelligent tools poised to remake the internet, from face recognition tools to self-driving cars.

Dwarfing Google Search

In testing Sirius, Mars has already shown that if you run the service on traditional hardware, it requires about 168 times more machines, space, and power than a text-based search engine a la Google Search. When you consider that voice recognition is the future of not only mobile phones but the ever-growing array of wearable devices, from Apple Watch on down, that’s completely impractical. “We’re going to hit a wall,” Mars says. Data centers don’t just take up space. They don’t just cost enormous amounts of money to build. They burn enormous amounts of energy—and that costs even more money.

The big question is: What hardware will replace the traditional gear? It’s a question that will affect not only the Apples and the Googles and the Microsofts and so many other app makers, but also the companies that sell data center hardware, most notably big-name chip makers like Intel and AMD. “We’re all over this,” says Mark Papermaster, AMD’s chief technology officer. “It’s huge for us and our future.” 

Ultimately, that’s why Mars is running his Sirius project. The Apples and Googles and the Microsofts know how this new breed of service operates, but the rest of the world doesn’t. And they need to.

A Parallel Universe

Most web services, from Google’s web search engine to Facebook’s social network, run on basic server chips from Intel and AMD (mostly Intel). The problem is: these CPUs (central processing units) are ill-suited to voice-recognition services like Siri, which tend to run lots and lots of tiny calculations in parallel.
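The workload described here—many small, independent multiply-accumulate operations that can all run side by side—is easy to sketch. The following toy example (illustrative only; it is not code from Sirius) contrasts computing one neural-network-style layer element by element, the pattern a general-purpose CPU core executes, with the vectorized formulation that wide parallel hardware exploits:

```python
import numpy as np

# A voice pipeline spends much of its time multiplying input vectors by
# large weight matrices. Each output element is an independent dot
# product, so all of them can be computed at the same time.

def layer_scalar(weights, inputs):
    """One output at a time: the serial pattern a CPU core runs."""
    out = np.zeros(weights.shape[0])
    for i in range(weights.shape[0]):
        acc = 0.0
        for j in range(weights.shape[1]):
            acc += weights[i, j] * inputs[j]
        out[i] = acc
    return out

def layer_parallel(weights, inputs):
    """All outputs at once: the data-parallel pattern GPUs and other
    wide hardware are built for."""
    return weights @ inputs

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))  # 256 outputs, 128 inputs
x = rng.standard_normal(128)

# Both formulations produce the same numbers; only the shape of the
# computation differs.
assert np.allclose(layer_scalar(W, x), layer_parallel(W, x))
```

The two functions are mathematically identical; the difference is that the second exposes all 256 independent dot products at once, which is exactly what CPUs struggle to exploit and parallel accelerators thrive on.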

[snip]

EFF wants to bring safe harbor to ISPs and internet intermediaries worldwide

“Manila Principles” seek to shield ISPs from liability for third-party content.
By Glyn Moody
Mar 24 2015
<http://arstechnica.com/tech-policy/2015/03/rights-groups-want-to-take-isp-safe-harbors-worldwide/>

An international group of digital rights organizations, including the Electronic Frontier Foundation (EFF) in the US, has launched the “Manila Principles on Intermediary Liability,” what it calls “a roadmap for the global community to protect online freedom of expression and innovation around the world.”

The principles concern Internet intermediaries—telecom companies, ISPs, search engines, social networks, etc.—that run key parts of the online world’s infrastructure. As EFF’s Senior Global Policy Analyst Jeremy Malcolm, one of the people behind the Manila Principles, explains: “These services are all routinely asked to take down content, and their policies for responding are often muddled, heavy-handed, or inconsistent. That results in censorship and the limiting of people’s rights.” The Principles are designed to provide a framework of safeguards and best practice when responding to such requests to remove content.

In an e-mail, Malcolm told Ars about the Principles’ background and motivation. The original partners were the EFF, the Centre for Internet and Society in India, and free speech organization Article 19. Other groups were added to give a more global balance. Malcolm says: “The motivation for this work was that intermediaries have been drifting back into the view of regulators and private interests who want to restrict content online. Whereas a decade ago the idea of immunizing intermediaries from liability was well accepted, it is now being questioned again.”

The supporters of the Manila Principles see that as deeply problematic. “If intermediaries become liable for content of users, they will immediately and silently begin to restrict what users can do and say,” Malcolm explains. “And because intermediaries are mostly private businesses, this happens completely outside of the rule of law.”

To counter that danger, the Manila Principles ask governments to establish legal regimes that hold intermediaries immune from liability for third-party content (e.g., safe harbors like the DMCA in the US)—turning them into “dumb pipes.” Malcolm notes that the roadmap “also urges intermediaries not to take down content unnecessarily, and when they do take it down, to ensure that they act with transparency and due process, and with regard to human rights standards.”

The six principles are as follows:

• Intermediaries should be shielded by law from liability for third-party content
• Content must not be required to be restricted without an order by a judicial authority
• Requests for restrictions of content must be clear, be unambiguous, and follow due process
• Laws, content restriction orders and practices must comply with the tests of necessity and proportionality
• Laws and content restriction policies and practices must respect due process
• Transparency and accountability must be built in to laws and content restriction policies and practices

[snip]

The Tricorder, An All-In-One Diagnostic Device, Draws Nigh

It could keep us healthy and free our data.
By SIGNE BREWSTER
Mar 19 2015
<http://readwrite.com/2015/03/19/tricorder-x-prize-home-diagnostics>

After pushing back deadlines by a few months, the 10 remaining teams in the Tricorder X Prize are nearing the day they will deliver a device that can diagnose 15 diseases and capture other basic health information through at-home tests. The teams are scheduled to deliver working prototypes in June to a UC-San Diego study that will test the devices on patients with known medical disorders to measure their accuracy.

“We’re pretty confident that the majority of the 10 finalist teams will actually be able to deliver,” senior director Grant Company said. “Some may merge, and some may fall out, just because they can’t pull it together. And that just reinforces how big of a challenge this really is. It’s because the goals are very high.”

The winning “tricorder”—and its competitors—likely have a long FDA approval process ahead of them, which means their consumer release could be years away. But when they do arrive, they will be able to diagnose problems like stroke, anemia and tuberculosis—tasks that have always been reserved for doctors.


Diagnosis: Home Diagnosis

Such devices will arrive at an interesting time in medical history. With the emergence of mobile phones and wearable devices, home diagnostics are poised to explode. 

Company said the Apple Watch, and the software development around it, will be a welcome boost for the space.

“I think it’s a good first step, and a useful barometer of what the public’s appetite is for this type of technology,” Company said. “There’s going to be a need of collection and analysis, and these types of tools are going to be absolutely critical. If the masses are able to start building capabilities, using these research kits, it’s the first step toward adoption.”

[snip]

The War Over Who Steve Jobs Was

Walter Isaacson’s official biography of Apple’s genius leader is being challenged by a new book supported by Jobs’s inner circle.
By Steven Levy
Mar 23 2015
<https://medium.com/backchannel/the-war-over-who-steve-jobs-was-92bda2cd1e1e>

On October 16, 2011, the early evening weather on the Stanford University campus in Palo Alto, California, was almost unspeakably gorgeous — mild as a warm bath, a cloudless sky above, a full moon beaming benevolently on the 300 people gathered to mourn Steve Jobs. The world had lost one of its greatest creative forces, but for those in attendance, especially his family, the loss was painfully personal.

Yet the ceremony was, as intended, a celebration of a distinct, brilliant and sometimes impetuous man of flesh, blood and foibles. After the formalities in Stanford Memorial Church ended with Bono singing “Every Grain of Sand” (reading the lyrics from an iPad), the entire party retreated to a nearby garden area that was beautifully arranged for a post-dusk gathering. For several magical hours over drinks and hors d’oeuvres, mourners reminisced about a most unforgettable human being.

The Steve Jobs portrayed in the encomiums that evening was not just a genius who oversaw the development of products that would change our lives. He was cast as a person with a keen wit and a capacity for deep connections with his friends and enduring love for his family. But behind the scenes of a seemingly perfect memorial a shadow drama was unfolding, with Jobs’s public perception at stake.

As the crowds mingled before the service, Walter Isaacson, who had been entrusted to write the official biography of Jobs, was telling people he had just dropped off an early copy of the book to Steve’s widow, Laurene. The work had been rushed into print to capitalize on the huge interest after Jobs’s death. And just after the service, a journalist who had known Jobs well, Brent Schlender, left before the mourners gathered in the garden, racked with regret at not seizing the opportunity to say goodbye to his frequent subject earlier that year.

Isaacson’s eponymous biography of Jobs became a publishing phenomenon, selling over a million copies and making Isaacson himself somewhat of a celebrity. But privately, those closest to Jobs complained that Isaacson’s portrait focused too heavily on the Apple CEO’s worst behavior, and failed to present a 360-degree view of the person they knew. Though the book Steve Jobs gave copious evidence of its subject’s talent and achievements, millions of readers finished the book believing that he could be described with a word that rhymes with “gas hole.” A public debate erupted around the question of whether having a toxic personality (as was the general interpretation of Isaacson’s depiction) was an asset or a handicap if one chose to thoroughly disrupt existing businesses with vision and imagination. A Wired cover story (not mine!) asked, “Do you really want to be Steve Jobs?”

Only now, over three years later, has their dissatisfaction become public. In a February New Yorker profile, Apple’s design wizard Jony Ive conspicuously insisted that, while sometimes withering, Jobs’s harsh criticisms of his employees’ work were not personal attacks, but simply the result of impatient candor. As for Isaacson’s book, Ive was quoted as saying, “My regard couldn’t be any lower.”

But their unhappiness comes in full view in a new book co-written by the journalist who left early from the memorial service, Brent Schlender, called Becoming Steve Jobs. The reason Schlender had been angry enough at Jobs to turn down a precious final meeting was that his former source had stopped giving him access for his Fortune Magazine stories. But for this book, Apple was rolling out the red carpet for Schlender. In their new tome, Schlender and co-author Rick Tetzeli capture the thoughts of the people closest to Jobs in rare interviews seemingly granted to set the record straight. The subjects include Ive, Apple CEO Tim Cook, Apple’s former head of communications Katie Cotton, Pixar CEO Ed Catmull, and Jobs’s widow, Laurene Powell Jobs. Others who were otherwise disinclined to cooperate did so at the urging of some of the aforementioned insiders. The implicit message seems to be that although almost all of those people participated in the official biography, they very much feel that the Steve Jobs they knew has still not been captured. Catmull’s authorized quote about the new book is telling: “I hope it will be recognized as the definitive history.”

Becoming Steve Jobs is the anti-Walter.

I guess I should come clean about my own bias here. Schlender is one of the very few journalists whom Steve Jobs favored with his trust over decades of coverage. The core of this tiny group probably includes only Schlender (who reported for the Wall Street Journal and Fortune), the New York Times’ John Markoff — and me. (Walt Mossberg, formerly of the Wall Street Journal, also had a close relationship with Jobs, but it was through his job as a product reviewer, and later conference organizer, rather than his reportage.) All three of us had some similar experiences with Jobs. And all of us had the opportunity to see Jobs evolve from a cosmically brash rebel in his twenties to a leader of one of the world’s most significant companies in his fifties.

[snip]

Trade group led by AT&T and Verizon sues FCC to overturn net neutrality

FCC calls early lawsuit “premature and subject to dismissal.”
By Jon Brodkin
Mar 23 2015
<http://arstechnica.com/tech-policy/2015/03/trade-group-led-by-att-and-verizon-sues-fcc-to-overturn-net-neutrality/>

The Federal Communications Commission’s new net neutrality rules haven’t taken effect yet, but they’re already facing lawsuits from Internet service providers.

One such lawsuit was filed today by USTelecom, which is led by AT&T, Verizon, and others. Another lawsuit was filed by a small Internet service provider in Texas called Alamo Broadband. (The Washington Post flagged the lawsuits.)

The net neutrality order, which reclassifies broadband providers as common carriers and imposes rules against blocking and discriminating against online content, “is arbitrary, capricious, and an abuse of discretion,” USTelecom alleged in its petition to the US Court of Appeals for the District of Columbia Circuit. The order “violates federal law, including, but not limited to, the Constitution, the Communications Act of 1934, as amended, and FCC regulations promulgated thereunder.” The order also violates notice-and-comment rulemaking requirements, the petition said.

The petitions don’t go into any more detail on the Internet service providers’ arguments. The timing is an issue; the FCC’s rules haven’t been published in the Federal Register and do not go into effect until 60 days after publication. USTelecom’s suit says it “is filing this protective petition for review out of an abundance of caution… in case the FCC’s Order (or the Declaratory Ruling part of that Order) is construed to be final on the date it was issued (as opposed to after Federal Register publication, which USTelecom believes is the better view).”

Parties have ten days to file lawsuits from whichever date of publication ends up being the significant one. The full order was posted on the FCC’s website on March 12.

The DC Circuit threw out similarly early appeals from Verizon and MetroPCS to the FCC’s first net neutrality order back in April 2011, calling them premature. Verizon ultimately filed after the correct date and won, forcing the FCC to start over.

“We believe that the petitions for review filed today are premature and subject to dismissal,” an FCC spokesperson told Ars.

Lawsuits are also likely to be filed by the National Cable & Telecommunications Association and CTIA-The Wireless Association, the major trade groups representing cable and wireless operators. Trade groups, rather than individual Internet providers, are expected to lead the fight against the FCC this time around.

For a brighter robotics future, it’s time to offload their brains

The power of cloud software could make robots smarter and less expensive.
By Sean Gallagher
Mar 23 2015
<http://arstechnica.com/information-technology/2015/03/for-a-brighter-robotics-future-its-time-to-offload-their-brains/>

Robots already stand in for humans in some of the dullest and most dangerous jobs there are, handling everything from painting cars to drilling rocks on Mars. And if you listen to the hype about the potential of drones and autonomous vehicles, it’s just a matter of time before robots do more. These future autonomous handymen and handywomen will deliver packages, take us to the airport, or handle less romantic tasks like shuffling freight containers and helping bedridden patients.

There’s just one problem: robots are dumb.

Despite all of the science fiction over the past half-century that has foretold the coming of intelligent, autonomous mechanical beings that attain consciousness—Neill Blomkamp’s Chappie being the latest—robots generally remain limited to the most basic of programmed tasks. Even the most advanced and deadliest of unmanned aerial vehicles depend heavily on their network tethers back to human beings. Otherwise, they’re nothing more than glorified model aircraft on autopilot.

For robots to do tasks that are relatively simple for humans—like driving a car, fetching a tool from a toolbox, passing something to a human co-worker, or repairing a broken object—they need to be a lot more intelligent. But the computing power required for that intelligence, absent Isaac Asimov’s positronic brain, would make robots prohibitively expensive, bulky, and power-hungry. (Exhibit A: Northrop Grumman’s X-47B and the UCLASS robot fighter/bomber that will follow it).

But smart, mass market robotics isn’t an impossible goal. In fact, the building blocks for such a change may exist today—it’s the same technologies that have driven the “app” economy, software as a service, and the Internet startup economy. Specifically, researchers have increasingly looked to the power of cloud computing and high-speed networking in recent years to bring more cognitive capabilities to robots. Cloud technologies that were pioneered to help human beings process information appear to be the key for making robots act more intelligently, too.

The basics of the new botnet

“Up to this point, the performance of a mobile robot is largely limited by the amount of memory or computation that it has onboard,” Dr. Chris Jones, Director for Research Advancement at iRobot, told Ars. “And if you’re trying to hit price points that make sense, that computation and memory can be fairly small. So by connecting to the cloud, you end up with a lot more resources at your disposal.”

Jones cites an array of possible examples: from more memory and more computation stored in the cloud to the boundless advantages of connectivity. By striving for a robot that’s constantly networked, developers would be able to offload the theoretical burden of advanced perception approaches or navigation tactics.

We’re already seeing some companies offloading an autonomous “brain” to the cloud. And it may not be too long before the same sorts of services used to build mobile digital assistants like Siri, Google’s Voice Search, and Cortana are helping physical robots understand the world around them. The result could be a sort of “hive mind,” where relatively inexpensive machines with some autonomous systems share a common set of cloud services that act as a group consciousness. Such a setup would allow a group of machines to constantly improve, adjusting operations as more experience is added to the collective memory. Theoretically, bots like this could not only interact with more complex environments, but they could engage people around them in a way that resembles a co-worker more than a calculator.

That’s still a ways off, however. So far, industrial robots have in many ways followed a similar course to that of the PC. They’ve gone from standalone machines to networked devices, first over a proprietary protocol and eventually using open standards. In the 1980s, General Motors developed the first standard protocol for networked robots, called the Manufacturing Automation Protocol (MAP), based on a token bus architecture. MAP’s networking would become the IEEE’s 802.4 standard. MAP, and later other protocols that used Ethernet networking, allowed robots from different manufacturers to communicate and synchronize operations in real time.

[snip]