Take control of the media with this media and news literacy course

[Note:  This item comes from friend Robert Berger.  DLH]

We’re in an age of information overload, and too much of what we watch, hear and read is mistaken, deceitful or even dangerous. Yet you and I can take control and make media serve us — all of us — by being active consumers and participants. Here’s how.
Jun 30 2015

This was the theme of my last book, Mediactive (here’s Cory’s super-kind review; blush…), and it’s at the heart of my online teaching and much of my recent writing.

So it was logical to extend the mission — and next week (July 6) we’re launching a “massive open online course” (MOOC) on media/news literacy in the digital age. It’s called “MediaLIT: Overcoming Information Overload.”

That overload, in this media-saturated age, is leading to all kinds of good and not-so-good outcomes. Having vast amounts of information at our fingertips means we can learn more, a lot more, about almost anything. That’s the most exciting part of what’s happening.

But all that information also means, as the jawdropping CNN “ISIS flag” debacle demonstrates, that we have to be a LOT more careful about what we believe. To use guest lecturer Howard Rheingold’s framing, we have to employ “crap detection” in a big way these days.

People like Howard have helped us take the course beyond the standard lecture-readings-quiz format. We have words of wisdom, in a collection of videos, from some experts in the media and media-literacy fields, in addition to just plain experts in subject areas who deal with the media on a regular basis.

Here’s a taste–snippets from the videos we’ll be using in the course–of their wisdom. Wikipedia’s Jimmy Wales:


Cameron reaffirms there will be no “safe spaces” from UK government snooping

But how exactly does the UK government intend to do that? Watch out, encryption.
By Glyn Moody

Jul 1 2015


The UK’s prime minister, David Cameron, has reiterated that the UK government does not intend to “leave a safe space—a new means of communication—for terrorists to communicate with each other.” This confirms remarks he made earlier this year about encryption, when he said: “The question is are we going to allow a means of communications which it simply isn’t possible to read. My answer to that question is: no, we must not.”

David Cameron was replying in the House of Commons on Monday to a question from the Conservative MP David Bellingham, who asked him whether he agreed that the “time has come for companies such as Google, Facebook and Twitter to accept and understand that their current privacy policies are completely unsustainable?” To which Cameron replied: “we must look at all the new media being produced and ensure that, in every case, we are able, in extremis and on the signature of a warrant, to get to the bottom of what is going on.”

Although Cameron’s intentions may be clear, how he intends to implement them is not. Speaking on Monday, Cameron said: “We are urging social media companies to work with us and help us deal with terrorism.” Is this just a matter of putting pressure on Google and Facebook to hand over user information more readily, or does he expect them to proactively police what is posted on their services?

And what does he intend to do about encrypted communications where companies can’t hand over keys, or where there is no company involved, as with GnuPG, the open source implementation of the OpenPGP encryption system?

Since the UK government has said that re-introducing the “Snooper’s Charter” is one of its priorities, details should soon emerge. On Monday, Cameron said the issue of “safe spaces” will “come in front of the House,” presumably meaning the new Bill. Given Cameron’s latest comments, there seems little hope the proposed legislation will be proportionate: even though he claimed “Britain is not a state that is trying to search through everybody’s emails and invade their privacy,” he nonetheless evidently wants the capability to snoop on everything UK citizens are up to online. The key issue is now whether the proposals will be realistic about what can and can’t be done when dealing with modern encrypted communications.

Leap second causes Internet hiccup, particularly in Brazil

By Jeremy Kirk
Jun 30 2015

The addition of a leap second to the world’s clocks on Wednesday caused some networks to crash, although most quickly recovered.

Some 2,000 networks stopped working just after midnight Coordinated Universal Time (UTC), said Doug Madory, director of Internet analysis with Dyn, a company that studies global Internet traffic flows.

Nearly 50 percent of those networks were in Brazil, which may indicate that ISPs there share a common type of router that was not prepared for the leap second, he said.

Most of the networks recovered quickly; in many cases a router reboot may have been all that was needed, Madory said.

The Internet’s global routing table, a distributed database of networks and how they connect, contains more than 500,000 networks, so the problems affected less than half a percent of them, Madory said.

Just after midnight, the number of changes to the global routing table spiked to as much as 800,000 per 30 seconds, according to Dyn. Changes to connections between networks are announced by providers using BGP (Border Gateway Protocol) and propagate across the Internet to other providers.

Madory said it’s not unheard of to see a flurry of new BGP announcements around those levels, but the timing around the leap second and 2,000 networks going offline “can’t be a coincidence.”

A leap second is added every few years to keep UTC synced with solar time; the difference between the two widens as the Earth’s rotation slows. Since 1972, 26 leap seconds have been added to clocks.
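Part of the problem is that the inserted second is written as 23:59:60 UTC, a timestamp much software simply cannot represent. A small Python illustration (this is an analogy for the failure mode, not the router bug itself):

```python
from datetime import datetime, timedelta

# The leap second is inserted as 23:59:60 UTC. Most software, including
# Python's datetime, only allows seconds 0-59 and rejects it outright.
try:
    leap = datetime(2015, 6, 30, 23, 59, 60)
except ValueError as e:
    print("rejected:", e)

# Systems that survive typically repeat or "smear" a second instead.
# On the naive timeline, the last pre-leap moment and the first moment
# of July are one second apart, even though two seconds elapsed in UTC:
before = datetime(2015, 6, 30, 23, 59, 59)
after = datetime(2015, 7, 1, 0, 0, 0)
print(after - before)  # prints 0:00:01
```

Software that neither rejects nor smears the extra second, but mishandles it internally, is the kind that hangs or crashes when the clock jumps.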


Paired With AI and VR, Google Earth Will Change the Planet

By Cade Metz
Jun 29 2015

The James Reserve is a place where the natural meets the digital. 

Part of the San Jacinto mountain range in Southern California, the James is a nature reserve that covers nearly 30 acres. It’s closed to the public. It’s off the grid. Vehicles aren’t allowed. But Sean Askay calls it “one of the most heavily instrumented places in the US.” Robots on high-tension cables drop climate sensors into this high-altitude forest. Birds’ nests include automated cameras and their own sensors. Overseen by the University of California, Riverside, the reserve doubles as a research field station for biologists, academics, and commercial scientists.

In 2005, as a master’s student at the university, Askay took the experiment further still, using Google Earth to create a visual interface for all those cameras and sensors. “Basically, I built a virtual representation of the entire reserve,” he says. “You could ‘fly in’ and look at live video feeds or temperature graphs from inside a bird box.”

Somewhere along the way, the project caught the eye of Google’s Vint Cerf, a founding father of the Internet, and in 2007, Askay moved to Mountain View, California, home to Google headquarters. There, he joined the team that ran Google Earth, a sweeping software service that blends satellite photos and other images to create a digital window onto our planet (and other celestial bodies). Since joining the company, the 36-year-old has used the tool to build maps of war casualties in Iraq and Afghanistan. He put the service on the International Space Station, so astronauts could better understand where they were. Working alongside Buzz Aldrin, he built a digital tour of the Apollo 11 moon landing.

Now, as Google Earth celebrates its 10th anniversary, Askay is taking over the entire project—as lead engineer—following the departure of founder Brian McClendon. He takes over at a time when the service is poised to evolve into a far more powerful research tool, an enormous echo of his work at the James Reserve. When it debuted in 2005, Google Earth was a wonderfully intriguing novelty. From your personal computer, you could zoom in on the roof of your house or get a bird’s eye view of the park where you made out with your first girlfriend. But it proved to be more than just a party trick. And with the rapid rise of two other digital technologies—neural networks and virtual reality—the possibilities will only expand.

A Visit to Prague

Neural networks—layered computational models, run across vast clusters of machines, that loosely mimic the web of neurons in the human brain—can scour Google Earth in search of deforestation. They can track agricultural crops across the globe in an effort to identify future food shortages. They can examine the world’s oil tankers in an effort to predict gas prices. And it so happens that Google runs one of the most advanced neural networking operations in the world. For Google Earth, Askay says, “machine learning is the next frontier.”

According to Askay’s boss, Rebecca Moore, the company is already using neural networks to examine Google’s vast trove of satellite imagery. “We have the Google Brain,” she says, referring to the central neural networking operation Google has built inside the company, “and we’re doing some experiments.” That’s news. But it’s not that surprising. Two startups—Orbital Insight and Descartes Labs—are already doing much the same thing.

Meanwhile, virtual reality—as exhibited by headsets like Facebook’s Oculus Rift and Google Cardboard—is bringing a new level of fidelity and, indeed, realism to the kind of immersive digital experience offered by Google Earth. Today, using satellite imagery and street-level photos, Askay and Google are already building 3-D models of real-life places like Prague that you can visit from your desktop PC (see video at top). But in the near future, this experience will move into Oculus-like headsets, which can make you feel like you’re really there.

“We have so much interesting stuff,” Askay says of Google Earth’s massive collection of images. “How amazing would it be to experience Google Earth in that environment?”


Cisco Wants To Buy OpenDNS Because The Intranet Is Dead

Users and devices are what matter
Jun 30 2015

When I dropped in last month on David Ulevitch, the CEO of OpenDNS, he was cheerily bounding around the rapidly expanding home base of his Internet security empire in San Francisco’s SoMa district. He’d taken over the other side of the building where OpenDNS is headquartered.

Now Cisco, an investor in OpenDNS since last year, is acquiring the fast-growing company for $635 million in cash and stock. The reasons are simple and obvious to anyone who’s been paying attention to the Internet lately: Networks are porous. Firewalls are irrelevant. Work happens everywhere. And new devices are getting added to the network all the time. 

Ulevitch has been beating this drum for a while—in fact, he quietly taunted Cisco three years ago, before that company literally bought into his vision. What’s different is that the world is waking up to the reality that the old ways of securing Internet-connected computing devices are broken. 

Routing Past Danger

Consider the Sony hack, recently chronicled by Peter Elkind of Fortune: Traditional network security measures meant nothing when system administrators’ accounts were compromised and employees stashed Twitter passwords in spreadsheets.

OpenDNS offers security services through a basic layer of the Internet, the domain name system, or DNS. DNS servers translate the domain names we’re familiar with (like readwrite.com) into the locations of machines on the Internet, rendered as strings of numbers known as IP addresses.

It sounds like a simple function, but because it’s a crucial part of every interaction between machines on the Internet, there’s a wide range of security OpenDNS can offer based on examining, blocking, or rerouting these requests.

Crucially, this doesn’t require the installation of special hardware or software. You just route your DNS requests through OpenDNS’s servers rather than—as is typical—your Internet service provider’s machines.
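To make the mechanism concrete, here is a minimal sketch (an illustration, not OpenDNS code) of the query packet your machine sends when it resolves a name. This is the request a DNS-based service can examine, block, or reroute. The layout follows RFC 1035; the query ID is arbitrary:

```python
import struct

def encode_dns_query(domain, query_id=0x1234):
    """Build a minimal DNS query packet (RFC 1035) asking for an A record."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed; a zero byte ends it
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record, i.e. an IPv4 address), QCLASS=1 (Internet)
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = encode_dns_query("readwrite.com")
```

Sent over UDP port 53 to a resolver such as OpenDNS’s 208.67.222.222 instead of your ISP’s, this same packet gets answered, filtered, or redirected according to the resolver’s security policy; no client-side hardware or software changes are needed.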

This has put OpenDNS in a position where it can deal with entirely new kinds of attacks.


Surveillance Court Rules That N.S.A. Can Resume Bulk Data Collection

Jun 30 2015

WASHINGTON — The Foreign Intelligence Surveillance Court ruled late Monday that the National Security Agency may temporarily resume its once-secret program that systematically collects records of Americans’ domestic phone calls in bulk.

But the American Civil Liberties Union said Tuesday that it would ask the United States Court of Appeals for the Second Circuit, which had ruled that the surveillance program was illegal, to issue an injunction to halt the program, setting up a potential conflict between the two courts.

The program lapsed on June 1, when a law on which it was based, Section 215 of the USA Patriot Act, expired. Congress revived that provision on June 2 with a bill called the USA Freedom Act, which said the provision could not be used for bulk collection after six months.

The six-month period was intended to give intelligence agencies time to move to a new system in which the phone records — which include information like phone numbers and the duration of calls but not the contents of conversations — would stay in the hands of phone companies. Under those rules, the agency would still be able to gain access to the records to analyze links between callers and suspected terrorists.

But, complicating matters, in May the Court of Appeals for the Second Circuit, in New York, ruled in a lawsuit brought by the A.C.L.U. that Section 215 of the Patriot Act could not legitimately be interpreted as permitting bulk collection at all.

Congress did not include language in the Freedom Act contradicting the Second Circuit ruling or authorizing bulk collection even for the six-month transition. As a result, it was unclear whether the program had a lawful basis to resume in the interim.

After President Obama signed the Freedom Act on June 2, his administration applied to restart the program for six months. But a conservative and libertarian advocacy group, FreedomWorks, filed a motion in the surveillance court saying it had no legal authority to permit the program to resume, even for the interim period.

In a 26-page opinion made public on Tuesday, Judge Michael W. Mosman of the surveillance court rejected the challenge by FreedomWorks, which was represented by a former Virginia attorney general, Ken Cuccinelli, a Republican. And Judge Mosman said the Second Circuit was wrong, too.

“Second Circuit rulings are not binding” on the surveillance court, he wrote, “and this court respectfully disagrees with that court’s analysis, especially in view of the intervening enactment of the USA Freedom Act.”

When the Second Circuit issued its ruling that the program was illegal, it did not issue any injunction ordering the program halted, saying it would be prudent to see what Congress did as Section 215 neared its June 1 expiration. Jameel Jaffer, an A.C.L.U. lawyer, said on Tuesday that the group would now ask for one.

“Neither the statute nor the Constitution permits the government to subject millions of innocent people to this kind of intrusive surveillance,” Mr. Jaffer said. “We intend to ask the court to prohibit the surveillance and to order the N.S.A. to purge the records it’s already collected.”

The bulk phone records program traces back to October 2001, when the Bush administration secretly authorized the N.S.A. to collect records of Americans’ domestic phone calls in bulk as part of a broader set of post-Sept. 11 counterterrorism efforts.

The program began on the basis of presidential power alone. In 2006, the Bush administration persuaded the surveillance court to begin blessing it under Section 215 of the Patriot Act, which says the government may collect records that are “relevant” to a national security investigation.


Supreme Court won’t weigh in on Oracle-Google API copyright battle

Lower court to decide if API utilization is copyright infringement or fair use.
By David Kravets
Jun 29 2015

The Supreme Court on Monday rejected Google’s appeal of the Google-Oracle API copyright dispute. The high court’s move lets stand an appellate court’s decision that application programming interfaces (APIs) are subject to copyright protections.

Here is how we described the issue in our earlier coverage:

The dispute centers on Google copying names, declarations, and header lines of the Java APIs in Android. Oracle filed suit, and in 2012, a San Francisco federal judge sided with Google. The judge ruled that the code in question could not be copyrighted. Oracle prevailed on appeal, however. A federal appeals court ruled that the “declaring code and the structure, sequence, and organization of the API packages are entitled to copyright protection.”

Google maintained that the code at issue is not entitled to copyright protection because it constitutes a “method of operation” or “system” that allows programs to communicate with one another.
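To see what is actually in dispute, here is a hypothetical analogy in Python (the case itself concerns Java): the declaring code is just a name and parameter list, which a second party can pair with an independently written body.

```python
# Hypothetical illustration, not code from the case: two independent
# implementations that share the same declaration (name and parameters),
# the part the appeals court held to be copyrightable.

# Library A declares and implements a function:
def max_of(a, b):
    """Return the larger of two values."""
    return a if a >= b else b

# Library B reuses the identical declaration so existing programs still
# work, but pairs it with an independently written body:
def max_of_b(a, b):
    """Return the larger of two values."""
    if a < b:
        return b
    return a
```

Both callers and the courts agree the bodies differ; the question the lower court must now answer is whether copying only the shared declarations, so that existing programs keep working, is fair use.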

The high court did not comment Monday about the case when announcing that it had decided against reviewing it. However, the court’s announcement comes a month after the Justice Department sided with Oracle and told the justices that APIs are copyrightable (PDF).

Computer scientists had urged the Supreme Court to review the case, saying the appellate court’s decision “poses a significant threat to the technology sector and to the public.”

Despite the high court’s inaction on the case, the Google-Oracle legal flap is far from resolved. That’s because the appeals court sent the case back to the lower courts to determine whether Google’s use of the code in Android—which it no longer uses—constitutes a “fair use.” Oracle is seeking $1 billion in damages.

“This is not the end of the road for this case—the Federal Circuit decision explicitly left open the possibility that the kinds of uses Google made were permissible under copyright’s fair use doctrine,” said Charles Duan, the director of Public Knowledge’s patent reform project.