Here’s How to Stop Squelching New Ideas, Eric Schmidt’s Advisory Board Tells DoD

An exclusive preview of the Defense Innovation Board’s new recommendations for James Mattis.
By Patrick Tucker
Jan 17 2018

“DoD does not have an innovation problem; it has an innovation adoption problem,” reads one of the new recommendations from the Defense Innovation Board. It even has an “innovation theatre” problem: the preference for small cosmetic steps over actual change.

The advisory board is chaired by former Alphabet executive chairman Eric Schmidt. Its latest report, to be delivered Wednesday afternoon to Defense Secretary Jim Mattis, suggests that the Pentagon too often squelches new ideas with outdated bureaucratic models and obsolete cultural notions.

Obtained exclusively by Defense One before the meeting, a draft of the report includes several new recommendations:

• Design a fast track for new technology initiatives. Basically, take the Army’s Rapid Equipping Office, which pushes urgent-need technology to front lines, and make its processes the norm for some new technology development. From the report: “DoD should develop a sustainable process, as opposed to another rapid office, that would act as a ‘fast-track’ for: (1) identifying and prioritizing the most critical operational warfighting problems, (2) assembling cross-functional teams that span organizational boundaries and disciplines to develop rapid solutions.”

• Start an incubator. In the business and tech world, incubators help startups turn ideas into businesses by providing management, funding, office space, and expertise. The board is suggesting that the military take some of its best and brightest and help them build DOD ‘startups’ related to specific problem areas, like big data analysis. “The military has to establish a new approach to empowering its most talented people…allowing the military’s most intrapreneurial people to work on their ideas to get them elevated past the usual roadblocks in the system,” it says.  

• Create an innovation + STEM career field. In much the same way that the military created the cyber operations career field, it should do the same with science, tech, and innovation. The new career field would “cover innovation, rapid capability development and acquisition, data science, and STEM” and would “operate in small teams across the Joint Force.”

• Establish technology and innovation training for senior DoD leaders. This would be a sort of camp where leaders can learn how to recognize, respect, and nurture entrepreneurial ideas and potential among subordinates – or at least stop accidentally crushing them. “Successful innovation practices being implemented within the private sector are not understood or not viewed as acceptable paths by senior Department leaders. As a result, the Department is not maintaining its once pronounced technological advantage over its adversaries,” the report says.

A few folks in the Defense Department have already made some progress on the last recommendation. Last week, a handful of flag officers from the Marines, Air Force Special Operations Forces, the Office of Naval Research, and other military outfits participated in a weeklong class and training event in entrepreneurship. The class was modeled after the Innovation Corps, or ICorps, curriculum developed by Silicon Valley luminary Steve Blank and used by more than 80 universities across the United States plus the National Science Foundation and the intelligence community. The goal is to roll ICorps training out to a lot more officers in the years ahead, while also educating leaders and superiors about how to better nurture good ideas.



The Pentagon’s New Artificial Intelligence Is Already Hunting Terrorists

After less than eight months of development, the algorithms are helping intel analysts exploit drone video over the battlefield.
Dec 21 2017

Earlier this month at an undisclosed location in the Middle East, computers using special algorithms helped intelligence analysts identify objects in a video feed from a small ScanEagle drone over the battlefield.

A few days into the trials, the computer identified objects — people, cars, types of buildings — correctly about 60 percent of the time. Just over a week on the job — and a handful of on-the-fly software updates later — the machine’s accuracy improved to around 80 percent. Next month, when its creators send the technology back to war with more software and hardware updates, they believe it will become even more accurate.

It’s an early win for a small team of just 12 people who started working on the project in April. Over the next year, they plan to expand the project to help automate the analysis of video feeds coming from large drones — and that’s just the beginning.

“What we’re setting the stage for is a future of human-machine teaming,” said Air Force Lt. Gen. John N.T. “Jack” Shanahan, director for defense intelligence for warfighter support, the Pentagon general who is overseeing the effort. Shanahan believes the concept will revolutionize the way the military fights.

“This is not machines taking over,” he said. “This is not a technological solution to a technological problem. It’s an operational solution to an operational problem.”

Called Project Maven, the effort is currently focused on helping U.S. Special Operations Command intelligence analysts identify objects in video from small ScanEagle drones.

In coming months, the team plans to put the algorithms in the hands of more units with smaller tactical drones, before expanding the project to larger, medium-altitude Predator and Reaper drones by next summer.

Shanahan characterized the initial deployment this month as “prototype warfare” — meaning that officials had tempered expectations. Over the course of about eight days, the team refined the algorithm six times.

“This is maybe one of our most impressive achievements is the idea of refinement to the algorithm,” Shanahan said.

Think of it as getting a new update to a smartphone application every day, each time improving its performance.

Before deploying the technology, the team trained the algorithms using thousands of hours of archived battlefield video captured by drones in the Middle East. As it turned out, that footage differed from the environment of the region where the Project Maven team deployed.

“Once you deploy it to a real location, it’s flying against a different environment than it was trained on,” Shanahan said. “Still works of course … but it’s just different enough in this location, say that there’s more scrub brush or there’s fewer buildings or there’s animals running around that we hadn’t seen in certain videos. That is why it’s so important in the first five days of a real-world deployment to optimize or refine the algorithm.”

While the algorithm is trained to identify people, vehicles and installations, it occasionally mischaracterizes an object. It’s then up to the intel analyst to correct the machine, thus helping it learn.
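The correct-and-retrain loop described above can be sketched in a few lines of JavaScript. This is a hypothetical illustration, not Project Maven code: the model proposes a label, and if the analyst overrides it, the corrected example is queued as training data for the next refinement of the algorithm.

```javascript
// Corrections queued here would feed the next algorithm refinement.
const retrainQueue = [];

// The analyst reviews a detection; disagreement becomes training data.
function review(detection, analystLabel) {
  if (detection.label === analystLabel) {
    return detection; // model was right; nothing to feed back
  }
  // The analyst's correction is saved for retraining.
  retrainQueue.push({ frame: detection.frame, label: analystLabel });
  return { ...detection, label: analystLabel };
}

const corrected = review({ frame: 42, label: "vehicle" }, "person");
console.log(corrected.label, retrainQueue.length); // person 1
```

The key design point is that disagreements, not agreements, are what the model learns from: every override is a labeled example the original training set lacked.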

The team has paired the Maven algorithm with a system called Minotaur, a Navy and Marine Corps “correlation and georegistration application.” As Shanahan describes it, Maven has the algorithm, which puts boxes on the video screen, classifying an object and then tracking it. Then using Minotaur, it gets a georegistration of the coordinates, essentially displaying the location of the object on a map.
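The Maven-plus-Minotaur pairing described above amounts to a pipeline: the detector emits a labeled box in pixel coordinates, and a georegistration step maps it onto the frame’s known geographic extent for display on a map. Here is a hypothetical toy sketch of that second step (not actual DoD code), using simple linear interpolation and assuming the frame’s corners are already georegistered; real georegistration is far more involved.

```javascript
// Map a detection box (pixel coordinates) to lat/lon using the
// georegistered extent of the video frame it came from.
function georegister(box, frame) {
  const lat = frame.topLat + (box.y / frame.heightPx) * (frame.bottomLat - frame.topLat);
  const lon = frame.leftLon + (box.x / frame.widthPx) * (frame.rightLon - frame.leftLon);
  return { label: box.label, lat, lon };
}

// Example: a 100x100-pixel frame spanning one degree in each direction.
const frame = { topLat: 34.0, bottomLat: 33.0, leftLon: 44.0, rightLon: 45.0, widthPx: 100, heightPx: 100 };
const box = { label: "vehicle", x: 50, y: 50 }; // detector output: box center in pixels
console.log(georegister(box, frame)); // { label: 'vehicle', lat: 33.5, lon: 44.5 }
```

Once every box carries coordinates rather than just screen positions, detections can be plotted on a map automatically — the step the general notes “was all being done manually in the past.”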

“That’s new, it’s different and it’s much needed for an analyst because this was all being done manually in the past,” the general said.

“Having those things together is really increasing situational awareness and starts the process of giving analysts a little bit of time back — which we hope will become a lot of time back over time — rather than just having to stay glued to the video screen,” Shanahan said.


Companies race to gather a newly prized currency: Our body measurements

By Drew Harwell
Jan 16 2018

The first step for a shopper buying a suit at the fast-growing menswear retailer Indochino is sharing his personal information: A salesperson armed with an iPad measures nearly everything on his body, from the distance between his belly button and rear to the circumference of his knees.

The next step is getting a customized, made-to-measure suit delivered to his home within a few weeks. But his body data lives on: Company executives are hoping to build a “master data model” that would connect his measurements with his advertising, shopping and spending histories.

Clothing companies now see body measurements as one of their most prized currencies, and millions of Americans are increasingly offering up their innermost personal data in search of customized pieces or a better fit.

Companies such as Indochino, Wantable and Stitch Fix, the latter of which counted nearly $1 billion in sales last year, gather dozens of data points on each customer, including weight, jobs and past pregnancies. They are being joined by the online-retail giant that counts fashion among its fastest-growing businesses and now sells a bedroom camera that offers opinions on what a user wears.

But the corporate harvest of data about our bodies, including our faces, voices and fingerprints, also is raising privacy concerns about how much sharing is too much in service of better-fitting clothes.

“These body measurements look a lot like medical records,” said Peter Swire, a law professor at the Georgia Tech Scheller College of Business who coordinated with the White House in the 1990s during the shaping of the nation’s medical privacy law.

Those health privacy rules, Swire said, “would apply to this data if the measurements were taken at the hospital. It doesn’t apply when an online company puts them in an app.”

Companies value this data because it can lock customers in for life and make it easy to order customized clothes over the Internet without trying anything on. But some privacy experts question whether Americans have a clear idea of what they are handing over.

“There’s a little bit of a weirdness about it. You’re letting people into your life,” said Autumn Rocha, a 26-year-old student in Baltimore and Stitch Fix client. “But there’s also something cool about it: ‘This is what I’m into. What can you find for me?’ ”

85 data points

A new Stitch Fix customer fills out a profile that compiles up to 85 data points. A woman is asked if she is a mother or currently pregnant, as well as her due date. She also hands over her dress, waist and bra size; her age, job and location; parts of the body she would like to flaunt or downplay; and answers to more-abstract questions, such as whether she likes taking risks.

Algorithms use that data to pick through Stitch Fix’s inventory, referring options to a human “stylist” who decides which to send. The customer pays to keep the clothes she likes and can send back anything she doesn’t. She can’t, however, go on the site and pick things out; her only choice is what the algorithms recommend.
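The algorithm-then-stylist pipeline described above can be sketched in JavaScript. This is a hypothetical illustration, not Stitch Fix’s actual system: inventory is first filtered on hard constraints (fit), then scored against the client’s profile, and only a shortlist reaches the human stylist who makes the final pick.

```javascript
// Score inventory against a client profile and return the top-N names
// that would be referred to a human stylist.
function shortlist(inventory, profile, topN) {
  return inventory
    .filter((item) => item.sizes.includes(profile.size)) // hard constraint: fit
    .map((item) => ({
      item,
      // Toy score: overlap between item tags and the client's stated styles.
      score: item.tags.filter((t) => profile.styles.includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topN) // the stylist sees only this shortlist
    .map((r) => r.item.name);
}

const inventory = [
  { name: "blazer", sizes: ["S", "M"], tags: ["work", "classic"] },
  { name: "sundress", sizes: ["M", "L"], tags: ["casual", "bold"] },
  { name: "cardigan", sizes: ["M"], tags: ["work", "casual"] },
];
const profile = { size: "M", styles: ["work", "casual"] };
console.log(shortlist(inventory, profile, 2)); // [ 'cardigan', 'blazer' ]
```

In a real system the score would draw on the dozens of profile data points the article describes — and, per the article, on feedback from what the customer keeps or returns.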

The company says it can better assess style by having access to customers’ Pinterest and Instagram accounts, which many customers willingly share. Company executives said others go a step further, sharing details of life milestones — new jobs, recent divorces, upcoming vacations and funerals — to define the clothes they are looking for.


Browser as Botnet, or the Coming War on Your Web Browser

[Note:  This item comes from friend David Rosenthal.  DLH]

By Brannon Dorsey
Jan 14 2018

One spring afternoon I was having lunch with Nick Briz at a small neighborhood diner near our studio in Chicago. We were throwing around ideas for an upcoming conference in Brooklyn that we’ve been participating in for the last few years called Radical Networks. The event brings together artists, educators, journalists and activists from all over the world to foster discussion and engagement with topics of communication networks and Internet infrastructure through workshops, performances, invited speakers, and an art show.

We’d both participated in the art show since the festival’s inception, but this year I felt compelled to break into the speaker track. In particular, I was entertaining presenting an idea I’d had a few days prior: what if websites borrowed compute resources from their visitors’ devices while they browsed, as a means of distributed computing?

Because of the way the web was designed, visiting a website requires your web browser to download and run code served from that website on your device. When you browse Facebook, their JavaScript code runs in your web browser on your machine. The code that gets executed in your browser is, of course, assumed to be code related to the functionality of the site you are browsing. Netflix serves code that allows your browser to access their movie database and stream video content, Twitter serves code that allows you to post, view, and comment on tweets, etc.

Technically, however, there is nothing stopping a website from serving arbitrary code that has nothing to do with your browsing experience. Your web browser will blindly execute whatever JavaScript code it receives from the website you are browsing. What’s to stop high-traffic sites like Facebook and Google from abusing this feature of the web, harvesting massive compute resources from their hundreds of thousands of concurrently connected users for free? Was this idea really feasible in practice? If so, was it being used in the wild?

This post is a report of my trip down this rabbit hole of an idea, and a summary of the talk that I ended up giving at Radical Networks as a result of that research.

Before we go too deep into the implications of borrowing users’ compute resources while they unsuspectingly browse the web, I want to touch on why it would be advantageous to do so in the first place. The example scenario that I’ve posed falls into a field of computer science called distributed computing: the practice of dividing a problem into small chunks and running them on many different computers in parallel, significantly reducing the time needed to compute the problem. In general, distributed computing offers abundant compute resources like many CPUs, high network bandwidth, and a diverse set of IP addresses. For some tasks, it provides the opportunity for 1,000 computers working together to solve a problem 1,000x faster than one computer working alone.
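The chunk-and-distribute idea can be sketched in plain JavaScript. This is a hypothetical illustration under stated assumptions, not code from the talk: a large job is split into independent chunks, each of which could be shipped to a different visitor’s browser; here the “workers” are simulated locally so the sketch is self-contained.

```javascript
// Split a large job into independent chunks, each small enough to hand
// to a different browser for processing.
function makeChunks(job, chunkSize) {
  const chunks = [];
  for (let i = 0; i < job.length; i += chunkSize) {
    chunks.push(job.slice(i, i + chunkSize));
  }
  return chunks;
}

// Simulated "browser worker": sums its chunk. In a real deployment this
// function would run as JavaScript served to a visitor's page.
function workerCompute(chunk) {
  return chunk.reduce((acc, n) => acc + n, 0);
}

// Coordinator: fan the chunks out (here sequentially; in practice, to
// thousands of concurrent visitors) and combine the partial results.
function distribute(job, chunkSize) {
  return makeChunks(job, chunkSize)
    .map(workerCompute)
    .reduce((a, b) => a + b, 0);
}

const job = Array.from({ length: 10000 }, (_, i) => i + 1); // 1..10000
console.log(distribute(job, 1000)); // 50005000
```

The essential property is that the chunks are independent, so the coordinator never cares which visitor computed which partial result — only that the partials come back and combine.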

Distributed computing has a rich history that dates back to ARPANET in the 1960s, with a slew of community and volunteer citizen-science projects popping up in the late 1990s and early 2000s (partially thanks to the Berkeley Open Infrastructure for Network Computing, or BOINC, software). Projects like SETI@Home, Folding@Home, GIMPS, and many others allow computer users to donate idle time on their computers to cure diseases, study global warming, find large prime numbers, search for alien life, and do many other types of scientific research.

Opposite the idea of volunteer distributed computing is the concept of a botnet. A botnet, a portmanteau of “robot” and “network,” is a distributed compute network where the owners of the participating computers don’t know that their computers are participating in the network. Botnets are associated with hacking and criminal activity and are best known for their use in nefarious activities like distributed denial of service (DDoS) attacks, e-mail spamming, spyware, click fraud, and more recently, cryptocurrency mining. Botnet software is usually installed on a user’s machine as a trojan or worm and can persist for months or years without the owner knowing, all the while providing compute cycles and bandwidth to an anonymous third party. Occasionally these botnets grow in size until they control tens of millions of unsuspecting users’ computers and become informally recognized and named by members of the cybersecurity community.


‘Is whistleblowing worth prison or a life in exile?’: Edward Snowden talks to Daniel Ellsberg

The two most famous whistleblowers in modern history discuss Steven Spielberg’s new film, The Post, about Ellsberg’s leaking of the Pentagon Papers, the personal cost of what they did – and if they’d advise anybody to follow in their footsteps. Introduced by Ewen MacAskill
By Ewen MacAskill, Edward Snowden and Daniel Ellsberg
Jan 16 2018

Daniel Ellsberg, the US whistleblower celebrated in Steven Spielberg’s new film, The Post, was called “the most dangerous man in America” by the Nixon administration in the 70s. More than 40 years later, the man he helped inspire, Edward Snowden, was called “the terrible traitor” by Donald Trump, as he called for Snowden’s execution.

The Guardian has brought the two together – the most famous whistleblower of the 20th century and the most famous of the 21st so far – to discuss leaks, press freedom and other issues raised in Spielberg’s film.

Starring Meryl Streep and Tom Hanks, The Post deals with Ellsberg’s 1971 leak of the Pentagon Papers, which revealed presidents from Truman to Nixon lying about the Vietnam war. It deals, too, with the battle of the US media, primarily the Washington Post and the New York Times, to protect press freedom.

During a two-hour internet linkup between Ellsberg in Berkeley, California, Snowden in Moscow and the Guardian in London, the whistleblowers discussed the ethics, practicalities and agonised internal debate involved in whistleblowing and how The Post has a special resonance today in Trump’s America.

They are worried about Trump’s assault on press freedom and express fear that journalists could be indicted for the first time in US history. And they are alarmed by the prospect of a US nuclear strike against North Korea, urging a new generation of whistleblowers to come forward from the Pentagon or White House to stop it.

“It is madly reckless for this president to be doing what he is doing. Whether he is, in some clinical sense, crazy or not, what he is doing is crazy,” says Ellsberg. His book based on his experience as a defence analyst and nuclear war planner, The Doomsday Machine, was published in December.

Back when Snowden was debating whether to leak secret NSA documents, showing the scale of government mass surveillance, he found inspiration in a 2009 documentary, The Most Dangerous Man in America: Daniel Ellsberg and the Pentagon Papers. After Snowden handed over material to journalists in 2013, Ellsberg was among the first to express support and the two became friends, with Ellsberg visiting Snowden, who is living in exile in Moscow, in 2015.

They have a shared interest in press freedom. Ellsberg cofounded the US-based, not-for-profit Freedom of the Press Foundation, which helped organise the linkup. Snowden, who also serves on the foundation’s board, devotes much of his time in Moscow to developing tools that help journalists protect their communications and sources.

Ewen MacAskill: How has whistleblowing changed in the 40-plus years between your leaks? One of the striking images from The Post is of leaked documents having to be laboriously photocopied, in contrast with today.

Daniel Ellsberg: Certainly, the ability to copy and release hundreds of thousands of files or documents, as Chelsea Manning did, or millions of pages, as Ed Snowden did, was quite impossible then. I was using the cutting-edge technology of the day, Xerox, to do what I did do, which was to copy 7,000 “top secret” pages. That could not have been done before Xerox.

So, in a sense, it is easier to get the truth out now than it was in my day. It took me months of effort – copying night after night. On the other hand, unless you are an expert like Ed or Chelsea, their ability to trace who has done the leak is probably greater than it used to be. You can’t do it safely. As I understand it from Ed – you tell me, Ed, if I am wrong here – you felt with your counterespionage expertise you probably could have done it anonymously, but you chose not to do so. But others would be more likely to be caught.


Know-Nothings for the 21st Century

By Paul Krugman
Jan 15 2018

These days calling someone a “know-nothing” could mean one of two things.

If you’re a student of history, you might be comparing that person to a member of the Know Nothing party of the 1850s, a bigoted, xenophobic, anti-immigrant group that at its peak included more than a hundred members of Congress and eight governors. More likely, however, you’re suggesting that said person is willfully ignorant, someone who rejects facts that might conflict with his or her prejudices.

The sad thing is that America is currently ruled by people who fit both definitions. And the know-nothings in power are doing all they can to undermine the very foundations of American greatness.

The parallels between anti-immigrant agitation in the mid-19th century and Trumpism are obvious. Only the identities of the maligned nationalities have changed.

After all, Ireland and Germany, the main sources of that era’s immigration wave, were the shithole countries of the day. Half of Ireland’s population emigrated in the face of famine, while Germans were fleeing both economic and political turmoil. Immigrants from both countries, but the Irish in particular, were portrayed as drunken criminals if not subhuman. They were also seen as subversives: Catholics whose first loyalty was to the pope. A few decades later, the next great immigration wave — of Italians, Jews and many other peoples — inspired similar prejudice.

And here we are again. Anti-Irish prejudice, anti-German prejudice, anti-Italian prejudice are mostly things of the past (although anti-Semitism springs eternal), but there are always new groups to hate.

But today’s Republicans — for this isn’t just about Donald Trump, it’s about a whole party — aren’t just Know-Nothings, they’re also know-nothings. The range of issues on which conservatives insist that the facts have a well-known liberal bias just keeps widening.

One result of this embrace of ignorance is a remarkable estrangement between modern conservatives and highly educated Americans, especially but not only college faculty. The right insists that the scarcity of self-identified conservatives in the academy is evidence of discrimination against their views, of political correctness run wild.

Yet conservative professors are rare even in hard sciences like physics and biology, and it’s not difficult to see why. When the more or less official position of your party is that climate change is a hoax and evolution never happened, you won’t get much support from people who take evidence seriously.

But conservatives don’t see the rejection of their orthodoxies by people who know what they’re talking about as a sign that they might need to rethink. Instead, they’ve soured on scholarship and education in general. Remarkably, a clear majority of Republicans now say that colleges and universities have a negative effect on America.

So the party that currently controls all three branches of the federal government is increasingly for bigotry and against education. That should disturb you for multiple reasons, one of which is that the G.O.P. has rejected the very values that made America great.

Think of where we’d be as a nation if we hadn’t experienced those great waves of immigrants driven by the dream of a better life. Think of where we’d be if we hadn’t led the world, first in universal basic education, then in the creation of great institutions of higher education. Surely we’d be a shrunken, stagnant, second-rate society.


Community Broadband: Privacy, Access, and Local Control

Jan 16 2018

Communities across the United States are considering strategies to protect residents’ access to information and their right to privacy. These experiments have a long history, but a new wave of activists have been inspired to seek a local response to federal setbacks to Internet freedom, such as the FCC’s decision to roll back net neutrality protections, and Congress’ early 2017 decision to eliminate user privacy protections.

Internet service providers (ISPs) have a financial incentive and the technical ability to block or slow users’ access, insert their own content on the sites we visit, or give preferential treatment to websites and services with which they have financial relationships. For many years, net neutrality principles and rules, most recently cemented in the FCC’s 2015 Open Internet Order, helped prevent much of this activity. Net neutrality helped create a landscape where new ideas and services could develop without being crowded out by political pressure or prioritized fast lanes for established commercial incumbents.

One need only look to two of America’s most dominant web presences to recognize how different the world might be without these protections. Both Facebook and Google began their path to dominance as dorm room experiments. How very different would our social, family, and professional lives look today if MySpace and AltaVista had been able to pay ISPs to prioritize their traffic and throttle that of competitors, hardening the market from competition and disruption?

While proponents of rolling back net neutrality regulations would have us believe that the market will force Internet providers to assure user access, the Federal Communications Commission’s 2016 Broadband Progress Report notes that 51 percent of Americans have access to only one provider of high-speed Internet. As a result, incumbent service providers have little incentive to behave well.

Having fought and won the first round in the fight for net neutrality only a few short years ago, we know that there is enormous grassroots energy behind preserving the Internet as a democratic forum of ideas and innovation. We also know that lawmakers at all levels bear a fundamental responsibility to develop policies that maintain privacy protections, guarantee free speech and expression, and reduce the digital divide. Here’s how some are meeting that responsibility.

DIY Broadband

In the executive summary of its 2010 “National Broadband Plan,” the FCC noted:

Broadband is the great infrastructure challenge of the early 21st century. Like electricity a century ago, broadband is a foundation for economic growth, job creation, global competitiveness and a better way of life. It is enabling entire new industries and unlocking vast new possibilities for existing ones. It is changing how we educate children, deliver health care, manage energy, ensure public safety, engage government, and organize and disseminate knowledge.

Already, many communities throughout the country have begun infrastructure-building projects aimed at answering these concerns. Local governments, like those in Ammon, ID; Nelson County, VA; and Santa Fe, NM, have invested in building out community-funded broadband programs. These programs create high-capacity access for residents and businesses and improve the accessibility of high-speed broadband service for their least-resourced community members.

While some cities have chosen to build and operate their own broadband networks, many choose instead to focus on developing just the physical infrastructure, establishing an open-access network and leasing broadband service access to private ISPs, who then handle customer care, service, and billing. These communities avoid the high costs that can be involved in finding customers and providing technical assistance and customer service. Instead, by substantially reducing initial costs for new-to-market ISPs, this model overcomes the most significant barrier to competition.