The World Bank and tech companies want to use AI to predict famine

A new tool using data and AI is hoping to better predict famine and help millions experiencing food insecurity.
By Abigail Higgins
Sep 29 2018
https://www.vox.com/policy-and-politics/2018/9/29/17915222/famine-world-bank-south-sudan-yemen-food-crisis-conflict

At this week’s United Nations General Assembly, the World Bank, the United Nations, and the Red Cross teamed up with tech giants Amazon, Microsoft, and Google to announce an unlikely new tool to stop famine before it starts: artificial intelligence. 

The Famine Action Mechanism (FAM), as they’re calling it, is the first global tool dedicated to preventing future famines — no small news in a world where one in nine people don’t have enough food. Building on previous famine-prediction strategies, the tool will combine satellite data on things like rainfall and crop health with social media and news reports of more human factors, like violence or changing food prices. It will also establish a fund to be automatically disbursed to a food crisis as soon as it meets certain criteria, speeding up the often-lengthy process of funding famine relief.

For a famine to be declared in a country or region, three criteria have to be met: At least one in five households has an extreme lack of food; over 30 percent of children under five have acute malnutrition; and two out of 10,000 people die each day. (Famine declarations are issued jointly by United Nations agencies, the affected governments, and the Famine Early Warning Systems Network (FEWS NET).) By that definition, there are no famines in the world right now, but conflict is threatening to plunge South Sudan, Nigeria, and Yemen into one, and many parts of the world are suffering from food insecurity.
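On paper, that makes a declaration a mechanical test once the underlying indicators have been measured. A minimal sketch of the three joint thresholds in Python (the function and argument names are illustrative, not part of any official FEWS NET or FAM tooling):

```python
def famine_declared(pct_households_extreme_food_gap: float,
                    pct_children_acute_malnutrition: float,
                    deaths_per_10k_per_day: float) -> bool:
    """All three criteria must hold at once: at least 20% of households
    with an extreme lack of food, over 30% of children under five
    acutely malnourished, and 2+ deaths per 10,000 people per day."""
    return (pct_households_extreme_food_gap >= 20.0
            and pct_children_acute_malnutrition > 30.0
            and deaths_per_10k_per_day >= 2.0)

print(famine_declared(25.0, 35.0, 2.1))  # True: all thresholds crossed
print(famine_declared(25.0, 35.0, 1.0))  # False: mortality below 2 per 10,000
```

In practice, of course, the hard part is not the check but the measurement: collecting reliable indicators in a conflict zone is exactly the problem the FAM’s data pipeline is meant to address.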

It’s usually not until a famine is well underway that the United Nations and donor agencies begin soliciting funding. By that time, the damage has already been done. Thousands have usually already died, and for those who survive, the damage extends far into the future: children born during a famine see their lifetime incomes reduced by approximately 13 percent.

That’s the outcome that the FAM was created to prevent. But it faces quite a challenge — predicting famine is complicated, and even when it’s possible, it’s a whole lot harder to act on those predictions.

“If we can better predict when and where future famines will occur, we can save lives by responding earlier and more effectively,” said Brad Smith, president of Microsoft, in a statement announcing the initiative. “Artificial intelligence and machine learning hold huge promise for forecasting and detecting early signs of food shortages, like crop failures, droughts, natural disasters, and conflicts.”

It’s that last part—“conflicts”—that could prove especially challenging for a mechanism like FAM. 

Why famines are hard to predict 

The reason famines are so hard to stop is that they’re caused by that most unpredictable of factors: people. 

“Overwhelmingly, famines in particular, but humanitarian emergencies in general, are politically caused. It’s only a relatively small minority — and virtually none in modern history — that were caused exclusively, or even predominantly, by natural adversity,” Alex de Waal, author of Mass Starvation: The History and Future of Famine, told me.

Many people assume famine is caused mainly by drought, but that really hasn’t been the case since the Industrial Revolution. Today, famines almost always involve conflict.

In February 2017, the United Nations declared famine in South Sudan. The country has been embroiled in a civil war since 2013 between pro-government and rebel factions drawn along ethnic lines. Shortly after famine was declared, government troops expelled aid workers delivering desperately needed food aid to areas they suspected were supporting rebel troops. The United States warned South Sudan it may be engaging in “deliberate” starvation tactics. Famine eventually abated last year, but the country is now teetering on the brink again.

[snip]

The Government Wants Airlines to Delay Your Flight So They Can Scan Your Face

By Sam Biddle
Sep 26 2018
https://theintercept.com/2018/09/26/airport-facial-recognition-flight-delay/

Omnipresent facial recognition has become a golden goose for law enforcement agencies around the world. In the United States, few are as eager as the Department of Homeland Security. American airports are currently being used as laboratories for a new tool that would automatically scan your face — and confirm your identity with U.S. Customs and Border Protection — as you prepare to board a flight, despite the near-unanimous objections from privacy advocates and civil libertarians, who call such scans invasive and pointless.

According to a new report on the Biometric Entry-Exit Program by DHS itself, we can add another objection: Your flight could be late.

Although the new report, published by Homeland Security’s Office of the Inspector General, is overwhelmingly supportive in its evaluation of airport-based biometric surveillance — the practice of a computer detecting your face and pairing it with everything else in the system — the agency notes some hurdles from a recent test code-named “Sprint 8.” Among them, the report notes with palpable frustration, was that airlines insist on letting their passengers depart on time, rather than subjecting them to a Homeland Security surveillance prototype plagued by technical issues and slowdowns:

Demanding flight departure schedules posed other operational problems that significantly hampered biometric matching of passengers during the pilot in 2017. Typically, when incoming flights arrived behind schedule, the time allotted for boarding departing flights was reduced. In these cases, CBP allowed airlines to bypass biometric processing in order to save time. As such, passengers could proceed with presenting their boarding passes to gate agents without being photographed and biometrically matched by CBP first. We observed this scenario at the Atlanta Hartsfield-Jackson International Airport when an airline suspended the biometric matching process early to avoid a flight delay. This resulted in approximately 120 passengers boarding the flight without biometric confirmation.

The report goes on to again bemoan “airlines’ recurring tendency to bypass the biometric matching process in favor of boarding flights for an on-time departure.” DHS, apparently, is worried that it could be habit-forming for the airlines: “Repeatedly permitting airlines to revert to standard flight-boarding procedures without biometric processing may become a habit that is difficult to break.”

These concerns, however, are difficult to square with a later assurance that “airline officials we interviewed indicated the processing time was generally acceptable and did not contribute to departure delays.”

[snip]

The Boeing 747: The plane that shrank the world

It became an icon of long-haul travel and exotic holidays; Boeing’s 747 could fly more people further than any plane before. Stephen Dowling looks back at an aviation classic 50 years after it was first unveiled.
By Stephen Dowling
Sep 28 2018
http://www.bbc.com/future/story/20180927-the-boeing-747-the-plane-that-shrank-the-world

It is 30 September 1968, and a crowd of thousands has gathered at the new Boeing factory at Everett, about 30 miles (50km) north of Seattle. They are here to see the airline manufacturer’s radical new design.

The 1960s have seen seismic social change, the race to put an astronaut on the Moon, the tumult of the Vietnam War and the undulations of Cold War tension. Over the course of the decade, air travel has gone from being the preserve of the wealthy to something much more affordable.

Key to that has been a new generation of jet airliners. They are bigger and faster than their propeller-driven ancestors, and their powerful jet engines let them fly far higher – allowing them to climb over bad weather instead of having to fly around it. That means flights to far-flung places take a lot less time than they used to.

Boeing’s 707 has been a mainstay of ever-expanding airlines since the late 1950s, and there are rivals from the UK, France and the Soviet Union. The bigger jets are able to take more passengers, which means airports are having to grow just to keep up with demand.

The genesis of Boeing’s new design has come not from the airliner producer itself, but from one of its customers. Juan Trippe, the chief of globetrotting airline Pan Am, has noticed increasing congestion at airports. While the number of flights is increasing, the aircraft themselves can carry only relatively small numbers of passengers. A bigger plane will help the airlines keep down running costs.

Trippe asks Boeing to design something completely different – a super-sized airliner twice the size of the Boeing 707.

The plane that Boeing unveils on this September day will become synonymous with the glamour of long-haul travel – the plane that can take you to sunny beaches a continent away. It will redefine the shape and size of airports, and become an unsung stalwart of the world’s cargo freighters, moving vast amounts of goods across the world to this day.

It will become a household name thanks to a play on its elephantine size; the ‘Jumbo Jet’. But as far as Boeing is concerned, it’s called the 747.

***

The 747’s story, however, began with a little-known military contract.

In the early 1960s, the US Air Force was just starting to take delivery of the Lockheed C-141 Starlifter, a huge four-engined jet designed to carry 27 tonnes of cargo over distances of some 3,500 miles (5,600km). But the air force needed something even bigger.

In March 1964 it invited aircraft builders to submit designs. The new plane would have to be able to haul 52 tonnes of cargo 5,000 miles (8,000km) – or be able to take off with 81 tonnes of cargo onboard for shorter missions. On top of that, the plane would have to have a cargo hold 17ft (5.18m) wide by 13.5ft (4.1m) high and 100ft (30m) long – big enough to comfortably drive a tank into. And it would need to have cargo ramps both front and back so vehicles could be driven in or out at either end.

Boeing tendered a design for this giant freighter, alongside rivals Douglas, General Dynamics, Lockheed and Martin Marietta. Boeing, Douglas and Lockheed’s designs were all selected for further study, and each took a different approach to a central problem: where do you put the cockpit when you have to have a cargo door at the front of the plane?

Douglas’s design had a pod on top of the fuselage ahead of the wing, while Lockheed’s design had the cockpit and a cabin for additional passengers in a long ‘spine’ that ran most of the length of the aircraft. Boeing chose something in between the two – and this would later prove to be a very wise decision indeed.

Lockheed’s design won the military competition (their submission would become the C-5 Galaxy, the biggest aircraft in the world for the next two decades) but Boeing’s submission was to influence another aircraft with a very different role.

In 1965, Boeing engineer Joe Sutter, who had been working on the new 737 short-haul airliner, was brought in by Boeing’s president Bill Allen to work on a new project. This was a giant airliner inspired by the demands of the military contract, and Juan Trippe’s desire for a congestion-busting passenger plane.

[snip]

This major discovery upends long-held theories about the Maya civilization

New technology allows scientists to visualize ancient Maya cities like never before
By Ben Guarino
Sep 27 2018
https://www.washingtonpost.com/science/2018/09/27/this-major-discovery-upends-long-held-theories-about-maya-civilization/

In the autumn of 1929, Anne Morrow Lindbergh and her husband Charles flew across the Yucatán Peninsula. With Charles at the controls, Anne snapped photographs of the jungles just below. She wrote in her journal of Maya structures obscured by large humps of vegetation. A bright stone wall peeked through the leaves, “unspeakably alone and majestic and desolate — the mark of a great civilization gone.”

Nearly a century later, surveyors once again took flight over the ancient Maya empire, and mapped the Guatemala forests with lasers. The 2016 survey, whose first results were published this week in the journal Science, comprises a dozen plots covering 830 square miles, an area larger than the island of Maui. It is the largest such survey of the Maya region, ever.

The study authors describe the results as a revelation. “It’s like putting glasses on when your eyesight is blurry,” said study author Mary Jane Acuña, director of El Tintal Archaeological Project in Guatemala.

In the past, archaeologists had argued that small, disconnected city-states dotted the Maya lowlands, though that conception is falling out of favor. This study shows that the Maya could extensively “exploit and manipulate” their environment and geography, Acuña said. Maya agriculture sustained large populations, who in turn forged relationships across the region.

Combing through the scans, Acuña and her colleagues, an international 18-strong scientific team, tallied 61,480 structures. These included: 60 miles of causeways, roads and canals that connected cities; large maize farms; houses large and small; and, surprisingly, defensive fortifications that suggest the Maya came under attack from the west of Central America.

“We were all humbled,” said Tulane University anthropologist Marcello Canuto, the study’s lead author. “All of us saw things we had walked over and we realized, oh wow, we totally missed that.”

Preliminary images from the survey went public in February, to the delight of archaeologists like Sarah Parcak. Parcak, who was not involved with the research, wrote on Twitter, “Hey all: you realize that researchers just used lasers to find *60,000* new sites in Guatemala?!? This is HOLY [expletive] territory.”

Parcak, whose space archaeology program GlobalXplorer.org has been described as the love child of Google Earth and Indiana Jones, is a champion of using satellite data to remotely observe sites in Egypt and elsewhere. “The scale of information that we’re able to collect now is unprecedented,” Parcak said, adding that this survey is “going to upend long-held theories about ancient Maya society.”

With support from a Guatemala-based heritage foundation called Pacunam, the researchers conducted the massive and expensive survey using lidar, or light detection and ranging. They mapped several active archaeological sites, plus well-studied Maya cities like Tikal and Uaxactun.

Lidar’s principles are similar to radar’s, except that instead of radio waves, lidar relies on laser light. From an aircraft flying just a few thousand feet above the canopy, the surveyors pricked each square meter with 15 laser pulses. Those pulses penetrate vegetation but bounce back from hard stone surfaces. With lidar, it is as if the trees turn invisible, letting you see the forest floor beneath them.

Beneath the thick jungle, ruins appeared. Lots and lots of them. Extrapolated over the 36,700 square miles that encompass the total Maya lowland region, the authors estimate the Maya built as many as 2.7 million structures. These would have supported 7 million to 11 million people during the Classic Period of Maya civilization, around the years 650 to 800, in line with other Maya population estimates.
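The estimate is straightforward density scaling from the surveyed plots to the whole lowlands. A rough check of the article’s figures (the rounding is mine):

```python
surveyed_structures = 61_480      # structures tallied in the lidar plots
surveyed_area_sq_mi = 830         # combined area of the dozen surveyed plots
lowlands_area_sq_mi = 36_700      # the entire Maya lowland region

# Assume the surveyed plots are representative of the whole region.
density = surveyed_structures / surveyed_area_sq_mi   # ~74 per square mile
estimated_total = density * lowlands_area_sq_mi
print(round(estimated_total / 1e6, 1))   # 2.7 (million structures)
```

The numbers line up with the paper’s headline figure, though the real extrapolation presumably weights terrain and settlement patterns rather than assuming uniform density.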

“We’ve been working in this area for over a century,” Canuto said. “It’s not terra incognita, but we didn’t have a good appreciation for what was really there.”

[snip]

Software disenchantment

[Note:  This item comes from friend Judi Clark.  DLH]

By Nikita Prokopov
Sep 17 2018
http://tonsky.me/blog/disenchantment/

I’ve been programming for 15 years now. Recently, our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of depressing me about my own career and IT in general.

Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.

Only in software is it fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be OK with it. People are often even proud of how inefficient it is, as in “why should we worry, computers are fast enough”:

@tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I’ll make my time back in 41 years, 24 days 🙂
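The tweet’s wry arithmetic checks out: the payback period is just the time invested divided by the time saved per run. A quick check in Python, using the tweet’s own numbers:

```python
# Numbers from the tweet: a once-a-day run went from 1.5 s to 0.06 s
# after six hours spent rewriting the program in Rust.
daily_saving_s = 1.5 - 0.06       # 1.44 seconds saved per day
invested_s = 6 * 60 * 60          # 21,600 seconds of rewriting

payback_days = invested_s / daily_saving_s   # 15,000 days
print(payback_days / 365.25)                 # ~41 years, as the tweet says
```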

You’ve probably heard this mantra: “programmer time is more expensive than computer time.” What it basically means is that we’re wasting computers at an unprecedented scale. Would you buy a car that burned 100 liters per 100 kilometers? How about 1,000 liters? With computers, we do that all the time.

Everything is unbearably slow

Look around: our portable computers are thousands of times more powerful than the ones that brought man to the Moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games and watch 4K video, but I can’t scroll web pages? How is that OK?

Google Inbox, a web app written by Google, running in Chrome browser also by Google, takes 13 seconds to open moderately-sized emails:

It also animates empty white boxes instead of showing their content, because that’s the only way anything can be animated on a webpage with decent performance. And no, decent doesn’t mean 60fps; it’s more like “as fast as this web page can possibly go.” I’m dying to see the web community’s answer when 120Hz displays become mainstream. Shit barely hits 60Hz already.

Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.

Modern text editors have higher latency than 42-year-old Emacs. Text editors! What could be simpler? On each keystroke, all you have to do is update a tiny rectangular region, and modern text editors can’t do that in 16ms. That is a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16ms, and also process input, recalculate the world and dynamically load/unload resources. How come?

As a general trend, we’re not getting faster software with more features. We’re getting faster hardware that runs slower software with the same features. Everything works way below the possible speed. Ever wonder why your phone needs 30 to 60 seconds to boot? Why can’t it boot, say, in one second? There are no physical limitations to that. I would love to see that. I would love to see limits reached and explored, utilizing every last bit of performance we can get for something meaningful in a meaningful way.

Everything is HUUUUGE

And then there’s bloat. Web apps could open up to 10× faster if you just blocked all the ads. Google begs everyone to stop shooting themselves in the foot with its AMP initiative—a technology solution to a problem that doesn’t need any technology, just a little bit of common sense. If you remove the bloat, the web becomes crazy fast. How smart do you have to be to understand that?

An Android system with no apps takes almost 6 GB. Just think for a second how obscenely HUGE that number is. What’s in there, HD movies? I guess it’s basically code: kernel, drivers. Some strings and resources too, sure, but those can’t be big. So, how many drivers do you need for a phone?

[snip]

What Happened to Facebook’s Grand Plan to Wire the World?

Five years ago Mark Zuckerberg debuted a bold, humanitarian vision of global internet. It didn’t go as planned—forcing Facebook to reckon with the limits of its own ambition.
By JESSI HEMPEL
May 15 2018
https://www.wired.com/story/what-happened-to-facebooks-grand-plan-to-wire-the-world/

In August 2013, Mark Zuckerberg tapped out a 10-page white paper on his iPhone and shared it on Facebook. It was intended as a call to action for the tech industry: Facebook was going to help get people online. Everyone should be entitled to free basic internet service, Zuckerberg argued. Data was, like food or water, a human right. Universal basic internet service is possible, he wrote, but “it isn’t going to happen by itself.” Wiring the world required powerful players—institutions like Facebook. For this plan to be feasible, getting data to people had to become a hundred times cheaper.

Zuckerberg said this should be possible within five to 10 years.

It was an audacious proposal for the founder of a social software company to make. But the Zuckerberg of 2013 had not yet been humbled by any significant failure. In a few months, the service he’d launched between classes at Harvard would turn 10. A few months after that, he would be turning 30. It was a moment for taking stock, for reflecting on the immense responsibility that he felt came with the outsize success of his youth, and for doing something with his accumulated power that mattered.

A few days later, Facebook unveiled what that something would be: Internet.org. Launched with six partners, it was a collection of initiatives intended to get people hooked on the net. Its projects fell into two groups. For people who were within range of the internet but not connected, the company would strike business deals with phone carriers to make a small number of stripped-down web services (including Facebook) available for free through an app. For those who lived beyond the web’s reach—an estimated 10 to 15 percent of the world’s population—Zuckerberg would recruit engineers to work on innovative networking technologies like lasers and drones.

The work was presented as a humanitarian effort. Its name ended in “dot-org,” appropriating the suffix nonprofits use to signal their do-gooder status on the web. Zuckerberg wrote that he wasn’t expecting Facebook to earn a profit from “serv[ing] the next few billion people,” suggesting he was motivated by a moral imperative, not a financial one. The company released a promotional video featuring John F. Kennedy’s voice reading excerpts from a 1963 speech imploring the students of American University to remember that “we all cherish our children’s future. And we are all mortal.” Andrew Carnegie believed in libraries. Bill Gates believed in health care. Zuckerberg believed in the internet.

Zuckerberg was sincere in his swashbuckling belief that Facebook was among a small number of players that had the money, know-how, and global reach to fast-forward history, jump-starting the economic lives of the 5 billion people who do not yet surf the web. He believed peer-to-peer communications would be responsible for redistributing global power, making it possible for any individual to access and share information. “The story of the next century is the transition from an industrial, resource-based economy to a knowledge economy,” he said in an interview with WIRED at the time. “If you know something, then you can share that, and then the whole world gets richer.” The result would be that a kid in India—he loved this hypothetical about this kid in India—could potentially go online and learn all of math.

For three years, Zuckerberg included Internet.org in his top priorities, pouring resources, publicity, and a good deal of his own time into the project. He traveled to India and Africa to promote the initiative and spoke about it at the Mobile World Congress in Barcelona two years in a row. He appeared before the UN General Assembly to push the idea that internet access was a human right. He amassed a team of engineers in his Connectivity Lab to work on internet-distribution projects, which had radically different production cycles than the software to which he was accustomed.

But from the start, critics were skeptical of Zuckerberg’s intentions. The company’s peers, like Google and Microsoft, never signed on as partners, preferring instead to pursue their own strategies for getting people online. Skeptics questioned the hubris of an American boy-billionaire who believed the world needed his help and posited that existing businesses and governments are better positioned to spread connectivity. They criticized Facebook’s app for allowing free access only to a Facebook-sanctioned set of services. At one point, 67 human rights groups signed an open letter to Zuckerberg that accused Facebook of “building a walled garden in which the world’s poorest people will only be able to access a limited set of insecure websites and services.”

At first, Zuckerberg defended his efforts in public speeches, op-eds, and impassioned videos that he published on his own platform. I had a front-row seat for these events, as I spent most of 2015 reporting an article on Facebook’s connectivity efforts that took me to South Africa, London, Spain, New York, and Southern California to observe the company’s efforts to advance its version of universal connectivity.

My story was published in January 2016, a month before India banned Facebook’s app altogether. Shortly after that, Facebook stopped talking about Internet.org. While bits of news about the company’s drone project or new connectivity efforts still emerge, Facebook hasn’t updated the press releases on the Internet.org website in a year. That led me to wonder: What exactly happened to Internet.org?

[snip]

Re: Is 5G a Spectrum-eating Monster that Destroys Competition?

[Note:  This comment comes from friend Dave Burstein.  DLH]

From: Dave Burstein <daveb@dslprime.com>
Subject: Re: [Dewayne-Net] Is 5G a Spectrum-eating Monster that Destroys Competition?
Date: September 28, 2018 at 1:30:52 AM EDT
To: dewayne@warpspeed.com

Fred, Dewayne

There are important points here. Fred’s underlying policy recommendation – keep lots of spectrum open for new and other users – is on target. (Sharing works, as Wi-Fi proved. It’s a mistake to add monopoly spectrum, especially in the 3.7 GHz band under consideration.)

But I believe the answer to 
Is 5G a Spectrum-eating Monster that Destroys Competition? 
is no. 

In 20 years in this business, I haven’t seen any major new entrants in either U.S. wireless or broadband. Lots of ideas are still floating around; I’ve volunteered in two community networks. Europe isn’t much different, except when satellite or cable companies offer more to their customer base.

The result: U.S. prices are 50% higher than those of many of our peers in Europe (where diminishing competition is also becoming a problem). Those of us who want a great Internet affordable for all need a different solution.

It’s highly unlikely that any additional spectrum plan will result in much more competition. Even if spectrum were completely free, the economics of adding a new mobile network are unlikely to work.  That’s because it takes about $10B to build a new U.S. network and another $5-10B to cover the losses until you have enough customers to break even. 

Blair thought spectrum freed by the Broadband Plan would allow new competition. That was nine years ago, and it hasn’t happened. I love solving problems with competition, but in the real world that competition is unlikely. Regulation has severe problems, of course, but dreams of more competition haven’t worked out.

I’ve written about some possible alternatives to strong regulation and welcome ideas.

If we want better and cheaper networks, we need a strategy that works with the current market structure. 

(It’s also important to understand that 5G right now is mostly low- and mid-band, except for Verizon. I and all the network people objected, but the marketing people, regulators, and 90% of analysts have accepted a new definition of 5G that includes low and mid-band. 90% of the “5G” built in the next five years will be below 4.2 GHz. It’s really just 4G with a software tweak, NR. We lost that battle. What I do for my technical readers is say millimeter wave, not 5G, unless I want to include 600 MHz and up. That’s how the world understands 5G.)

Dave Burstein

p.s. There are some possibilities for new entrants in fixed wireless broadband, but it’s a longshot. No one has broken out yet.

Editor, http://Fastnet.news http://wirelessone.news gfastnews.com
Author with Jennie Bourne  DSL (Wiley) and Web Video: Making It Great, Getting It Noticed (Peachpit)

 

Is 5G a Spectrum-eating Monster that Destroys Competition?
By Fred Goldstein
Jun 15 2018
https://www.techzone360.com/topics/techzone/articles/2018/06/15/438482-5g-spectrum-eating-monster-that-destroys-competition.htm