The World Bank and tech companies want to use AI to predict famine

The World Bank and tech companies want to use AI to predict famine
A new tool using data and AI aims to better predict famine and help the millions of people experiencing food insecurity.
By Abigail Higgins
Sep 29 2018
https://www.vox.com/policy-and-politics/2018/9/29/17915222/famine-world-bank-south-sudan-yemen-food-crisis-conflict

At this week’s United Nations General Assembly, the World Bank, the United Nations, and the Red Cross teamed up with tech giants Amazon, Microsoft, and Google to announce an unlikely new tool to stop famine before it starts: artificial intelligence. 

The Famine Action Mechanism (FAM), as they’re calling it, is the first global tool dedicated to preventing future famines — no small news in a world where one in nine people don’t have enough food. Building on previous famine-prediction strategies, the tool will combine satellite data on things like rainfall and crop health with social media and news reports covering more human factors, like violence or changing food prices. It will also establish a fund that will be disbursed automatically to a food crisis as soon as it meets certain criteria, speeding up the often lengthy process of funding famine relief.
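
The article doesn’t say how FAM will actually weigh these signals, but the general shape of the idea (fusing physical and human indicators into a single funding trigger) can be sketched in a few lines. Every feature name, weight, and threshold below is invented purely for illustration:

    # Illustrative only: a toy risk score combining the kinds of satellite-derived
    # and text-derived signals described above. The real FAM model, its features,
    # and its funding trigger are not described in the article; everything here
    # is a hypothetical stand-in.
    features = {
        "rainfall_anomaly": -0.6,    # hypothetical deviation from the seasonal norm
        "crop_health_index": -0.8,   # hypothetical vegetation-index anomaly
        "food_price_change": 0.4,    # hypothetical month-over-month price change
        "conflict_reports": 0.7,     # hypothetical normalized volume of violence reports
    }
    weights = {
        "rainfall_anomaly": -0.2,
        "crop_health_index": -0.4,
        "food_price_change": 0.2,
        "conflict_reports": 0.2,
    }

    risk_score = sum(weights[k] * features[k] for k in features)
    release_funds = risk_score > 0.5   # hypothetical threshold for pre-arranged financing
    print(round(risk_score, 2), release_funds)   # 0.66 True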

For a famine to be declared in a country or region, three criteria have to be met: At least one in five households has an extreme lack of food; over 30 percent of children under five have acute malnutrition; and two out of 10,000 people die each day. (Famine declarations are issued jointly by United Nations agencies, the affected governments, and the Famine Early Warning Systems Network (FEWS NET).) By that definition, there are no famines in the world right now, but conflict is threatening to plunge South Sudan, Nigeria, and Yemen into one, and many parts of the world are suffering from food insecurity.
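
Those three thresholds are mechanical enough to express as a simple check. This is only a sketch with hypothetical field names; actual famine declarations involve far more judgment than a boolean test:

    # The three famine-declaration thresholds quoted above, as a simple check.
    # Field names are hypothetical; real assessments involve much more than this.
    def famine_criteria_met(households_extreme_food_gap,
                            children_acute_malnutrition,
                            daily_deaths_per_10k):
        return (households_extreme_food_gap >= 0.20      # at least 1 in 5 households
                and children_acute_malnutrition > 0.30   # over 30% of under-fives
                and daily_deaths_per_10k >= 2.0)         # 2 deaths per 10,000 per day

    print(famine_criteria_met(0.22, 0.34, 2.1))  # True
    print(famine_criteria_met(0.22, 0.25, 2.1))  # False: malnutrition below threshold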

It’s usually not until a famine is well underway that the United Nations and donor agencies begin soliciting funding. By that time, the damage has already been done. Thousands have usually already died, and for those who survive, the harm extends far into the future: children born during a famine see their lifetime incomes reduced by approximately 13 percent.

That’s the outcome that the FAM was created to prevent. But it faces quite a challenge — predicting famine is complicated, and even when it’s possible, it’s a whole lot harder to act on those predictions.

“If we can better predict when and where future famines will occur, we can save lives by responding earlier and more effectively,” said Brad Smith, president of Microsoft, in a statement announcing the initiative. “Artificial intelligence and machine learning hold huge promise for forecasting and detecting early signs of food shortages, like crop failures, droughts, natural disasters, and conflicts.”

It’s that last part—“conflicts”—that could prove especially challenging for a mechanism like FAM. 

Why famines are hard to predict 

The reason famines are so hard to stop is that they’re caused by that most unpredictable of factors: people. 

“Overwhelmingly, famines in particular, but humanitarian emergencies in general, are politically caused. It’s only a relatively small minority — and virtually none in modern history — that were caused exclusively, or even predominantly, by natural adversity,” Alex de Waal, author of Mass Starvation: The History and Future of Famine, told me.

Many people assume famine is caused mainly by drought, but that really hasn’t been the case since the Industrial Revolution. Today, famines almost always involve conflict.

In February 2017, the United Nations declared famine in South Sudan. The country has been embroiled in a civil war since 2013 between pro-government and rebel factions drawn along ethnic lines. Shortly after famine was declared, government troops expelled aid workers delivering desperately needed food aid to areas they suspected were supporting rebel troops. The United States warned South Sudan it may be engaging in “deliberate” starvation tactics. Famine eventually abated last year, but the country is now teetering on the brink again.

[snip]

The Government Wants Airlines to Delay Your Flight So They Can Scan Your Face

The Government Wants Airlines to Delay Your Flight So They Can Scan Your Face
By Sam Biddle
Sep 26 2018
https://theintercept.com/2018/09/26/airport-facial-recognition-flight-delay/

Omnipresent facial recognition has become a golden goose for law enforcement agencies around the world. In the United States, few are as eager as the Department of Homeland Security. American airports are currently being used as laboratories for a new tool that would automatically scan your face — and confirm your identity with U.S. Customs and Border Protection — as you prepare to board a flight, despite the near-unanimous objections from privacy advocates and civil libertarians, who call such scans invasive and pointless.

According to a new report on the Biometric Entry-Exit Program by DHS itself, we can add another objection: Your flight could be late.

Although the new report, published by Homeland Security’s Office of the Inspector General, is overwhelmingly supportive in its evaluation of airport-based biometric surveillance — the practice of a computer detecting your face and pairing it with everything else in the system — the agency notes some hurdles from a recent test code-named “Sprint 8.” Among them, the report notes with palpable frustration, was that airlines insist on letting their passengers depart on time, rather than subjecting them to a Homeland Security surveillance prototype plagued by technical issues and slowdowns:

Demanding flight departure schedules posed other operational problems that significantly hampered biometric matching of passengers during the pilot in 2017. Typically, when incoming flights arrived behind schedule, the time allotted for boarding departing flights was reduced. In these cases, CBP allowed airlines to bypass biometric processing in order to save time. As such, passengers could proceed with presenting their boarding passes to gate agents without being photographed and biometrically matched by CBP first. We observed this scenario at the Atlanta Hartsfield-Jackson International Airport when an airline suspended the biometric matching process early to avoid a flight delay. This resulted in approximately 120 passengers boarding the flight without biometric confirmation.

The report goes on to again bemoan “airlines’ recurring tendency to bypass the biometric matching process in favor of boarding flights for an on-time departure.” DHS, apparently, is worried that it could be habit-forming for the airlines: “Repeatedly permitting airlines to revert to standard flight-boarding procedures without biometric processing may become a habit that is difficult to break.”

These concerns, however, are difficult to square with a later assurance that “airline officials we interviewed indicated the processing time was generally acceptable and did not contribute to departure delays.”

[snip]

The Boeing 747: The plane that shrank the world

The Boeing 747: The plane that shrank the world
It became an icon of long-haul travel and exotic holidays; Boeing’s 747 could fly more people further than any plane before. Stephen Dowling looks back at an aviation classic 50 years after it was first unveiled.
By Stephen Dowling
Sep 28 2018
http://www.bbc.com/future/story/20180927-the-boeing-747-the-plane-that-shrank-the-world

It is 30 September 1968, and a crowd of thousands has gathered at the new Boeing factory at Everett, about 30 miles (50km) north of Seattle. They are here to see the aircraft manufacturer’s radical new design.

The 1960s have seen seismic social change, the race to put an astronaut on the Moon, the tumult of the Vietnam War and the undulations of Cold War tension. Over the course of the decade, air travel has gone from being the preserve of the wealthy to something much more affordable.

Key to that has been a new generation of jet airliners. They are bigger and faster than their propeller-driven ancestors, and their powerful jet engines let them fly far higher – allowing them to climb over bad weather instead of having to fly around it. That means flights to far-flung places take a lot less time than they used to.

Boeing’s 707 has been a mainstay of ever-expanding airlines since the late 1950s, and there are rivals from the UK, France and the Soviet Union. The bigger jets are able to take more passengers, which means airports are having to grow just to keep up with demand.

The genesis of Boeing’s new design has come not from the airliner producer itself, but from one of its customers. Juan Trippe, the chief of globetrotting airline Pan Am, has noticed increasing congestion at airports. While the number of flights is increasing, the aircraft themselves can carry only relatively small numbers of passengers. A bigger plane will help the airlines keep down running costs.

Trippe asks Boeing to design something completely different – a super-sized airliner twice the size of the Boeing 707.

The plane that Boeing unveils on this September day will become synonymous with the glamour of long-haul travel – the plane that can take you to sunny beaches a continent away. It will redefine the shape and size of airports, and become an unsung stalwart of the world’s cargo freighters, moving vast amounts of goods across the world to this day.

It will become a household name thanks to a play on its elephantine size; the ‘Jumbo Jet’. But as far as Boeing is concerned, it’s called the 747.

***

The 747’s story, however, began with a little-known military contract.

In the early 1960s, the US Air Force was just starting to take delivery of the Lockheed C-141 Starlifter, a huge four-engined jet designed to carry 27 tonnes of cargo over distances of some 3,500 miles (5,600km). But the air force needed something even bigger.

In March 1964 it invited aircraft builders to submit designs. The new plane would have to be able to haul 52 tonnes of cargo 5,000 miles (8,000km) – or be able to take off with 81 tonnes of cargo onboard for shorter missions. On top of that, the plane would have to have a cargo hold 17ft (5.18m) wide by 13.5ft (4.1m) high and 100ft (30m) long – big enough to comfortably drive a tank into. And it would need to have cargo ramps both front and back so vehicles could be driven in or out at either end.

Boeing tendered a design for this giant freighter, alongside rivals Douglas, General Dynamics, Lockheed and Martin Marietta. Boeing, Douglas and Lockheed’s designs were all selected for further study, and each of them had a different approach to a central problem: where do you put the cockpit when you have to have a cargo door at the front of the plane?

Douglas’s design had a pod on top of the fuselage ahead of the wing, while Lockheed’s design had the cockpit and a cabin for additional passengers in a long ‘spine’ that ran most of the length of the aircraft. Boeing chose something somewhere between the two – and this would later work out to be a very wise decision indeed.

Lockheed’s design won the military competition (their submission would become the C-5 Galaxy, the biggest aircraft in the world for the next two decades) but Boeing’s submission was to influence another aircraft with a very different role.

In 1965, Boeing engineer Joe Sutter, who had been working on the new 737 short-haul airliner, was brought in by Boeing’s president Bill Allen to work on a new project. This was a giant airliner inspired by the demands of the military contract, and Juan Trippe’s desire for a congestion-busting passenger plane.

[snip]

This major discovery upends long-held theories about the Maya civilization

This major discovery upends long-held theories about the Maya civilization
New technology allows scientists to visualize ancient Maya cities like never before
By Ben Guarino
Sep 27 2018
https://www.washingtonpost.com/science/2018/09/27/this-major-discovery-upends-long-held-theories-about-maya-civilization/

In the autumn of 1929, Anne Morrow Lindbergh and her husband Charles flew across the Yucatán Peninsula. With Charles at the controls, Anne snapped photographs of the jungles just below. She wrote in her journal of Maya structures obscured by large humps of vegetation. A bright stone wall peeked through the leaves, “unspeakably alone and majestic and desolate — the mark of a great civilization gone.”

Nearly a century later, surveyors once again took flight over the ancient Maya empire, and mapped the Guatemala forests with lasers. The 2016 survey, whose first results were published this week in the journal Science, comprises a dozen plots covering 830 square miles, an area larger than the island of Maui. It is the largest such survey of the Maya region, ever.

The study authors describe the results as a revelation. “It’s like putting glasses on when your eyesight is blurry,” said study author Mary Jane Acuña, director of El Tintal Archaeological Project in Guatemala.

In the past, archaeologists had argued that small, disconnected city-states dotted the Maya lowlands, though that conception is falling out of favor. This study shows that the Maya could extensively “exploit and manipulate” their environment and geography, Acuña said. Maya agriculture sustained large populations, who in turn forged relationships across the region.

Combing through the scans, Acuña and her colleagues, an international 18-strong scientific team, tallied 61,480 structures. These included: 60 miles of causeways, roads and canals that connected cities; large maize farms; houses large and small; and, surprisingly, defensive fortifications that suggest the Maya came under attack from the west of Central America.

“We were all humbled,” said Tulane University anthropologist Marcello Canuto, the study’s lead author. “All of us saw things we had walked over and we realized, oh wow, we totally missed that.”

Preliminary images from the survey went public in February, to the delight of archaeologists like Sarah Parcak. Parcak, who was not involved with the research, wrote on Twitter, “Hey all: you realize that researchers just used lasers to find *60,000* new sites in Guatemala?!? This is HOLY [expletive] territory.”

Parcak, whose space archaeology program GlobalXplorer.org has been described as the love child of Google Earth and Indiana Jones, is a champion of using satellite data to remotely observe sites in Egypt and elsewhere. “The scale of information that we’re able to collect now is unprecedented,” Parcak said, adding that this survey is “going to upend long-held theories about ancient Maya society.”

With support from a Guatemala-based heritage foundation called Pacunam, the researchers conducted the massive and expensive survey using lidar, or light detection and ranging. They mapped several active archaeological sites, plus well-studied Maya cities like Tikal and Uaxactun.

Lidar’s principles are similar to radar’s, except that instead of radio waves, lidar relies on laser light. From an aircraft flying just a few thousand feet above the canopy, the surveyors peppered each square meter with 15 laser pulses. Those pulses penetrate vegetation but bounce back from hard stone surfaces. With lidar, the trees effectively become invisible, revealing what lies beneath them.
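
Two numbers in that paragraph are easy to play with: the stated pulse density implies tens of billions of laser shots over the 830 square miles surveyed, and the trees "disappear" because the lowest return from each pulse is treated as a ground hit. A rough, purely illustrative sketch:

    # Back-of-the-envelope pulse count for the survey described above, plus a
    # toy "lowest return = ground" filter. Real lidar ground classification is
    # far more sophisticated; this is only illustrative.
    SQ_METERS_PER_SQ_MILE = 2_589_988

    surveyed_sq_miles = 830
    pulses_per_sq_meter = 15
    total_pulses = surveyed_sq_miles * SQ_METERS_PER_SQ_MILE * pulses_per_sq_meter
    print(f"about {total_pulses:.1e} pulses over the surveyed plots")  # ~3.2e+10

    # Each pulse can yield several returns (canopy, understory, ground); taking
    # the lowest return per pulse approximates the bare-earth surface.
    returns_for_one_pulse_m = [412.5, 398.2, 371.0]  # hypothetical elevations
    ground_elevation_m = min(returns_for_one_pulse_m)
    print(ground_elevation_m)  # 371.0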

Beneath the thick jungle, ruins appeared. Lots and lots of them. Extrapolating over the 36,700 square miles that make up the Maya lowland region, the authors estimate the Maya built as many as 2.7 million structures. These would have supported 7 million to 11 million people during the Classic Period of Maya civilization, around the years 650 to 800, in line with other Maya population estimates.
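
The 2.7 million figure is a straightforward density extrapolation from the surveyed plots to the full lowlands, easy to reproduce:

    # Reproducing the scale-up described above: structures per surveyed square
    # mile, extrapolated to the full Maya lowland region.
    structures_found = 61_480
    surveyed_sq_miles = 830
    lowland_sq_miles = 36_700

    density = structures_found / surveyed_sq_miles    # ~74 structures per square mile
    estimated_total = density * lowland_sq_miles
    print(f"{estimated_total:,.0f}")                  # ~2,718,453 -> "as many as 2.7 million"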

“We’ve been working in this area for over a century,” Canuto said. “It’s not terra incognita, but we didn’t have a good appreciation for what was really there.”

[snip]

Software disenchantment

[Note:  This item comes from friend Judi Clark.  DLH]

Software disenchantment
By Nikita Prokopov
Sep 17 2018
http://tonsky.me/blog/disenchantment/

I’ve been programming for 15 years now. Recently, our industry’s lack of care for efficiency, simplicity, and excellence has started really getting to me, to the point of being depressed by my own career and IT in general.

Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.

Only in software is it fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be OK with it. People are often even proud of how inefficient it is, as in “why should we worry, computers are fast enough”:

@tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I’ll make my time back in 41 years, 24 days 🙂
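
The break-even figure in that tweet checks out:

    # Verifying the tweet's estimate: six hours of rewriting versus
    # 1.44 seconds saved per daily run.
    seconds_saved_per_day = 1.5 - 0.06          # 1.44 s
    seconds_invested = 6 * 60 * 60              # 21,600 s
    days_to_break_even = seconds_invested / seconds_saved_per_day   # 15,000 days
    print(days_to_break_even / 365.25)          # ~41.07 years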

You’ve probably heard this mantra: “programmer time is more expensive than computer time.” What it basically means is that we’re wasting computers at an unprecedented scale. Would you buy a car that eats 100 liters per 100 kilometers? How about 1,000 liters? With computers, we do that all the time.

Everything is unbearably slow

Look around: our portable computers are thousands of times more powerful than the ones that brought man to the moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games and watch 4K video, but I can’t scroll web pages? How is that OK?

Google Inbox, a web app written by Google, running in Chrome browser also by Google, takes 13 seconds to open moderately-sized emails:

It also animates empty white boxes instead of showing their content, because that’s the only way anything can be animated on a webpage with decent performance. No, decent doesn’t mean 60fps; it’s rather “as fast as this web page could possibly go.” I’m dying to see the web community’s answer when 120Hz displays become mainstream. Shit barely hits 60Hz already.

Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.

Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region, and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16ms and also process input, recalculate the world and dynamically load/unload resources. How come?
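
For reference, the frame budgets implied by those refresh rates:

    # Per-frame time budget: layout, paint, and input handling all have to fit
    # inside this window to keep scrolling and typing smooth.
    for hz in (60, 120):
        print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")   # 16.67 ms, 8.33 ms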

As a general trend, we’re not getting faster software with more features. We’re getting faster hardware that runs slower software with the same features. Everything works way below the possible speed. Ever wonder why your phone needs 30 to 60 seconds to boot? Why can’t it boot, say, in one second? There are no physical limitations to that. I would love to see that. I would love to see limits reached and explored, utilizing every last bit of performance we can get for something meaningful in a meaningful way.

Everything is HUUUUGE

And then there’s bloat. Web apps could open up to 10× faster if you simply block all ads. Google begs everyone to stop shooting themselves in the foot with its AMP initiative — a technology solution to a problem that doesn’t need any technology, just a little bit of common sense. If you remove bloat, the web becomes crazy fast. How smart do you have to be to understand that?

An Android system with no apps takes almost 6 GB. Just think for a second how obscenely HUGE that number is. What’s in there, HD movies? I guess it’s basically code: kernel, drivers. Some strings and resources too, sure, but those can’t be big. So, how many drivers do you need for a phone?

[snip]

What Happened to Facebook’s Grand Plan to Wire the World?

What Happened to Facebook’s Grand Plan to Wire the World?
Five years ago Mark Zuckerberg debuted a bold, humanitarian vision of global internet. It didn’t go as planned—forcing Facebook to reckon with the limits of its own ambition.
By JESSI HEMPEL
May 15 2018
https://www.wired.com/story/what-happened-to-facebooks-grand-plan-to-wire-the-world/

In August 2013, Mark Zuckerberg tapped out a 10-page white paper on his iPhone and shared it on Facebook. It was intended as a call to action for the tech industry: Facebook was going to help get people online. Everyone should be entitled to free basic internet service, Zuckerberg argued. Data was, like food or water, a human right. Universal basic internet service is possible, he wrote, but “it isn’t going to happen by itself.” Wiring the world required powerful players—institutions like Facebook. For this plan to be feasible, getting data to people had to become a hundred times cheaper.

Zuckerberg said this should be possible within five to 10 years.

It was an audacious proposal for the founder of a social software company to make. But the Zuckerberg of 2013 had not yet been humbled by any significant failure. In a few months, the service he’d launched between classes at Harvard would turn 10. A few months after that, he would be turning 30. It was a moment for taking stock, for reflecting on the immense responsibility that he felt came with the outsize success of his youth, and for doing something with his accumulated power that mattered.

A few days later, Facebook unveiled what that something would be: Internet.org. Launched with six partners, it was a collection of initiatives intended to get people hooked on the net. Its projects fell into two groups. For people who were within range of the internet but not connected, the company would strike business deals with phone carriers to make a small number of stripped-down web services (including Facebook) available for free through an app. For those who lived beyond the web’s reach—an estimated 10 to 15 percent of the world’s population—Zuckerberg would recruit engineers to work on innovative networking technologies like lasers and drones.

The work was presented as a humanitarian effort. Its name ended in “dot-org,” appropriating the suffix nonprofits use to signal their do-gooder status on the web. Zuckerberg wrote that he wasn’t expecting Facebook to earn a profit from “serv[ing] the next few billion people,” suggesting he was motivated by a moral imperative, not a financial one. The company released a promotional video featuring John F. Kennedy’s voice reading excerpts from a 1963 speech imploring the students of American University to remember that “we all cherish our children’s future. And we are all mortal.” Andrew Carnegie believed in libraries. Bill Gates believed in health care. Zuckerberg believed in the internet.

Zuckerberg was sincere in his swashbuckling belief that Facebook was among a small number of players that had the money, know-how, and global reach to fast-forward history, jump-starting the economic lives of the 5 billion people who do not yet surf the web. He believed peer-to-peer communications would be responsible for redistributing global power, making it possible for any individual to access and share information. “The story of the next century is the transition from an industrial, resource-based economy to a knowledge economy,” he said in an interview with WIRED at the time. “If you know something, then you can share that, and then the whole world gets richer.” The result would be that a kid in India—he loved this hypothetical about this kid in India—could potentially go online and learn all of math.

For three years, Zuckerberg included Internet.org in his top priorities, pouring resources, publicity, and a good deal of his own time into the project. He traveled to India and Africa to promote the initiative and spoke about it at the Mobile World Congress in Barcelona two years in a row. He appeared before the UN General Assembly to push the idea that internet access was a human right. He amassed a team of engineers in his Connectivity Lab to work on internet-distribution projects, which had radically different production cycles than the software to which he was accustomed.

But from the start, critics were skeptical of Zuckerberg’s intentions. The company’s peers, like Google and Microsoft, never signed on as partners, preferring instead to pursue their own strategies for getting people online. Skeptics questioned the hubris of an American boy-billionaire who believed the world needed his help and posited that existing businesses and governments are better positioned to spread connectivity. They criticized Facebook’s app for allowing free access only to a Facebook-sanctioned set of services. At one point, 67 human rights groups signed an open letter to Zuckerberg that accused Facebook of “building a walled garden in which the world’s poorest people will only be able to access a limited set of insecure websites and services.”

At first, Zuckerberg defended his efforts in public speeches, op-eds, and impassioned videos that he published on his own platform. I had a front-row seat for these events, as I spent most of 2015 reporting an article on Facebook’s connectivity efforts that took me to South Africa, London, Spain, New York, and Southern California to observe the company’s efforts to advance its version of universal connectivity.

My story was published in January 2016, a month before India banned Facebook’s app altogether. Shortly after that, Facebook stopped talking about Internet.org. While bits of news about the company’s drone project or new connectivity efforts still emerge, Facebook hasn’t updated the press releases on the Internet.org website in a year. That led me to wonder: what exactly happened to Internet.org?

[snip]

Re: Is 5G a Spectrum-eating Monster that Destroys Competition?

[Note:  This comment comes from friend Dave Burstein.  DLH]

From: Dave Burstein <daveb@dslprime.com>
Subject: Re: [Dewayne-Net] Is 5G a Spectrum-eating Monster that Destroys Competition?
Date: September 28, 2018 at 1:30:52 AM EDT
To: dewayne@warpspeed.com

Fred, Dewayne

There are important points here. Fred’s underlying policy recommendation – keep lots of spectrum open for new and other users – is on target. (Sharing works, as Wi-Fi proved. It’s a mistake to add monopoly spectrum, especially in the 3.7 GHz band under consideration.)

But I believe the answer to 
Is 5G a Spectrum-eating Monster that Destroys Competition? 
is no. 

In 20 years in this business, I haven’t seen any major new entrants in either U.S. wireless or broadband. Lots of ideas are still floating around; I’ve volunteered in two community networks. Europe isn’t much different, except when satellite or cable companies offer more to their customer base.

The result: U.S. prices are 50% higher than in many of our peer countries in Europe (where diminishing competition is also becoming a problem). We who want a great Internet affordable for all need a different solution.

It’s highly unlikely that any additional spectrum plan will result in much more competition. Even if spectrum were completely free, the economics of adding a new mobile network are unlikely to work.  That’s because it takes about $10B to build a new U.S. network and another $5-10B to cover the losses until you have enough customers to break even. 

Blair thought spectrum freed by the Broadband Plan would allow new competition. That was nine years ago, and it hasn’t happened. I love solving problems with competition, but in the real world that competition is unlikely. Regulation has severe problems of course, but dreams of more competition haven’t worked.

I’ve written about some possible alternatives to strong regulation and welcome ideas.

If we want better and cheaper networks, we need a strategy that works with the current market structure. 

(It’s also important to understand that 5G now is mostly low and mid-band, except Verizon. I and all the network people objected, but the marketing people, regulators, and 90% of analysts have accepted a new definition of 5G that includes low and mid-band. 90% of the “5G” being built in the next five years will be below 4.2 GHz. It’s really just 4G with a software tweak, NR. We lost that battle. What I do for my technical readers is say millimeter wave, not 5G, unless I want to include 600 MHz up. That’s how the world understands 5G.)    

Dave Burstein

p.s. There are some possibilities for new entrants in fixed wireless broadband, but it’s a long shot. No one has broken out yet.

Editor, http://Fastnet.news http://wirelessone.news gfastnews.com
Author with Jennie Bourne  DSL (Wiley) and Web Video: Making It Great, Getting It Noticed (Peachpit)

 

Is 5G a Spectrum-eating Monster that Destroys Competition?

Is 5G a Spectrum-eating Monster that Destroys Competition?
By Fred Goldstein
Jun 15 2018
https://www.techzone360.com/topics/techzone/articles/2018/06/15/438482-5g-spectrum-eating-monster-that-destroys-competition.htm

To hear the current FCC talk about it, 5G mobile service is not only the be-all and end-all of mobile communications, but the answer to most of the country’s ills. The snake oil pitchmen of the 1800s were tyros compared to the claims being made for 5G. Yet nobody even quite knows what 5G is! To be blunt, 5G simply seems to refer to anything that comes after 4G, which is LTE. After all, 5 is the next number after 4.

The key technological advance in 5G seems to be its ability to operate on multiple frequency bands at once, on any and all spectrum above 600 MHz, including higher frequencies than those actually useful for mobility. It can thus consume spectrum the way a black hole sucks in matter. But 5G isn’t, as the FCC members tweet, a race that the US has to somehow “win” against China, lest uncertain horrors result.

The likely real purpose of 5G is less obvious than its technology. 5G is more like a cult, a sacrificial cult that is being designed to kill off what little competition is left in the telecom industry.

The Aztecs were notorious for their vast use of human sacrifice, culminating in 1489’s sacrifice of 20,000 prisoners of war on the pyramids of Tenochtitlan. They did not see themselves as being particularly brutal, though. They worshiped the sun god and thought that if they failed to continue human sacrifices, the sun would not rise in the morning.

Today the biggest carriers and their backers have a new sun god called 5G. But unlike the sun, we don’t know who really needs it. It’s based on a supplier-driven model, not a demand pull, given that 4G LTE has been both a technical and market success, and continues to be enhanced. But the FCC knows that 5G needs a lot of spectrum. LOTS of spectrum. So they’re basically handing any and all available spectrum over to the big mobile carriers who are promising “5G”. It is a vast sacrifice of precious spectrum. The FCC seems to fear that if they fail to give more and more spectrum over to whatever 5G may turn out to be, the US will somehow “fall behind” in a “race”, and maybe the sun won’t shine any more. And to promote this kill-all-prisoners approach, they attribute preposterous miracles to 5G, like saving energy, making self-driving vehicles practical and safe, and, like the original snake oil, curing diseases.

5G, in other words, is buncombe. It is a mythical monster that is worshipped by killing off access to spectrum to all except the big mobile carriers who can afford to pay top dollar at auction.

There is precedent for this. In 1948, microwave transmission itself was new technology, and the television networks, just starting up, wanted to use it to link their affiliates together. The FCC ruled instead that civilian use of the microwave spectrum was limited to AT&T Long Lines, and the networks had to buy their connections from AT&T. It was a high point for monopoly.

In 1959, however, in its landmark Above 890 decision, the Commission authorized private microwave systems. Eventually that led to at least some competition in the telecom sector, and may have been the hole in the dike that eventually led to the breakup of the old AT&T and the birth of the public Internet. The existing private microwave spectrum is now quite crowded in many places. Not only have fiber optics not replaced microwave, but the FCC’s deregulation of the telecom incumbents, and market consolidation, have been making fiber services more expensive and less widely available. Microwave gear, on the other hand, has become faster, better and cheaper (pick 3).

Cellular mobility was originally predicated on the idea that capacity could be increased by reusing the same frequencies over smaller areas – more cells. But it’s often cheaper to use more spectrum and fewer cells. And the carriers are now promoting the use of cell phones to carry video, which uses tremendous amounts of capacity. They want more spectrum so they can show even more TV to addicted small-screen viewers.

Not coincidentally, the two biggest mobile carriers are also the two biggest wireline incumbents, who want to abandon most of their wireline business. FiOS was last decade’s news. Verizon has begun to refer to high-speed wireless to the home as “FiOS” too. It’s cheaper to build than fiber, after all. The wireless ISP community has proven that fixed wireless is very effective for Internet access, though that is mostly done in rural areas, and doesn’t carry hundreds of TV channels. AT&T has likewise given up on expanding GigaPower as well as U-Verse. They will require some additional spectrum. The FCC’s auction policy will allow the two of them to essentially buy it all up in order to exclude competitors. Because 5G.

And you thought the Aztec sun god was powerful. 5G is a mythical monster whose hunger for spectrum is insatiable, but which its believers think must be satisfied lest the wireless sun stop rising.

Or maybe it’s just an excuse to undo decades of competition and return the bulk of the Above 890 spectrum to the descendants of the old Bell System.

5G was the stated reason why almost all of the lower millimeter wave spectrum, from 24 to 57 GHz, was not given over to regular microwave licensing, on a point to point coordinated basis that ensures efficient use of the band by anyone who needs it, including cellular backhaul. Instead, last year’s Spectrum Frontiers Order has the bulk of it being auctioned off in large-scale exclusive geographic areas, as if it were a mobile band. Not that millimeter waves work for mobility — they don’t. And they don’t go far – useful for a mile or two if absolutely nothing is in the way, and they won’t penetrate walls or cars. But the mythical 5G monster is supposed to find a use for them.

[snip]

Panda Games: Corporate Disclosure in the Eclipse of Search

[Note:  This item comes from friend David Rosenthal.  DLH]

Panda Games: Corporate Disclosure in the Eclipse of Search
By Kemin Wang, Xiaoyun Yu, Bohui Zhang
Sep 26 2018
http://voxchina.org/show-3-100.html

We conduct a textual analysis and exploit an exogenous event — Google’s 2010 surprising withdrawal from the Chinese mainland — which significantly hampered domestic investors’ ability to access foreign information. Following Google’s exit, Chinese firms’ announcements concerning their foreign transactions become more bullish in comparison to similar announcements prior to the exit and to those that involve only domestic transactions. This finding suggests that firms strategically alter their disclosure behaviors when the channel to transmit information is severed.

Researchers and policymakers have long recognized the importance of a transparent information environment in capital markets. In general, corporate transparency can be achieved by encouraging market participants to produce information or by facilitating information dissemination to investors. While most of the literature focuses on the information production efforts of various intermediaries, including news media and financial analysts, in this paper we study how information transmission efficiency affects corporate transparency and shapes investors’ information sets. Specifically, we use Google’s exit from China as a controlled experiment to identify and evaluate the efficiency of information dissemination, rather than production, in shaping corporate disclosure strategies.
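
The summary describes the research design only in prose, but the comparison it sets up (foreign-transaction announcements versus domestic ones, before versus after Google's exit) is essentially a difference-in-differences. A minimal sketch on invented tone scores, not the authors' data or code:

    # Illustrative difference-in-differences on invented announcement "tone"
    # scores (higher = more bullish). Nothing here is the study's actual data.
    from statistics import mean

    tone = {
        ("foreign",  "pre"):  [0.10, 0.12, 0.08, 0.11],
        ("foreign",  "post"): [0.18, 0.21, 0.17, 0.20],
        ("domestic", "pre"):  [0.09, 0.10, 0.11, 0.08],
        ("domestic", "post"): [0.10, 0.11, 0.09, 0.12],
    }

    change_foreign  = mean(tone[("foreign",  "post")]) - mean(tone[("foreign",  "pre")])
    change_domestic = mean(tone[("domestic", "post")]) - mean(tone[("domestic", "pre")])
    did_estimate = change_foreign - change_domestic   # extra bullishness after the exit
    print(round(did_estimate, 3))                     # positive in this toy example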

In 2006, Google officially entered the Chinese mainland market with a local search engine, Google.cn, after agreeing to abide by China’s censorship rules. Prior to 2006, the search engine market in China was monopolized by Baidu, a Google-like search engine that has been publicly traded on NASDAQ since 2005. After 2006, Google’s market share steadily increased, reaching one third of China’s market for internet searches in 2009. In comparison, as of January 2010, Baidu controlled 63 percent of China’s market share. The search engine market in China had become a duopoly. In many ways, Google and Baidu share common ground. Both focus on the internet search business and operate their own proprietary search algorithms. Both generate revenue via paid advertising platforms, provide their own webmaster and keyword analysis tools, and use geo-targeting to generate more relevant query results for users.

However, the two differ significantly in that by providing internet searches globally, Google ranks the quality of the content without any bias in its search. Baidu, on the other hand, primarily serves the Chinese market and ranks Chinese language content higher. In fact, many analysts have attributed Baidu’s leading position in the Chinese market to a combination of factors, including a keen understanding of local tastes. While Google executives insist they had better technology, Baidu counters that it has local expertise. For these reasons, it has become a general consensus among Chinese web users to use Baidu to search for local (Chinese-based) information and to use Google for non-local information.

The two firms also differ in their focus on search quality. While Google ranks the quality content and inbound link quality higher compared to quantity when a search is executed, Baidu does not have a very strong quality content requirement, ranking high on both inbound link quantity and quality. Baidu also gravitates more towards a “commercial search,” allowing brands to pay a high premium to display at the top of the search results. As such, highly profitable keywords rank higher than an organic search. Lastly, by focusing primarily on the Chinese mainland market and searches in the Chinese language, Baidu lags behind Google in search quality, especially for foreign information.

Google’s 2010 Exit

On January 12, 2010, Google publicly announced the discovery of a large-scale cyberattack originating from China, which occurred in late 2009, that Google believed was aimed at gathering information on Chinese human rights activists as well as plundering its intellectual property. As a result, it was “no longer willing to continue censoring” results on Google.cn and threatened to shut down its China operation. On March 23, 2010, Google began its partial withdrawal from the Chinese market by ceasing to censor internet search results as required by local law and moving its search engine for Chinese web users offshore. Internet users who typed in the search engine’s address were redirected to an office based in Hong Kong, where the local government doesn’t censor Web browsing. On March 30, 2010, searching via all Google search sites in all languages was banned in the Chinese mainland. Any attempt to search using Google resulted in a DNS error. On June 30, 2010, Google ended the automatic redirect of Google China to Google Hong Kong. Google search has continued to be blocked in China since its departure from the Chinese mainland. 

[snip]

BitTorrent Traffic is Not Dead, It’s Making a Comeback

BitTorrent Traffic is Not Dead, It’s Making a Comeback
File-sharing traffic, BitTorrent in particular, is making a comeback. New data from Sandvine, shared exclusively with TorrentFreak, reveals that BitTorrent is still a dominant source of upstream traffic worldwide. According to Sandvine, increased fragmentation in the legal streaming market may play a role in this resurgence.
By ERNESTO
Sep 26 2018
https://torrentfreak.com/bittorrent-traffic-is-not-dead-its-making-a-comeback-180926/

Many Internet traffic reports have been published over the years, documenting how traffic patterns change over time.

One of the trends that emerged in recent years is that BitTorrent’s share of total Internet traffic decreased.

With the growth of services such as YouTube and Netflix, streaming started to generate massive amounts of bandwidth. As a result, BitTorrent lost a significant chunk of its ‘market share.’ 

That downward trend continued gradually until recently. In some parts of the world, file-sharing traffic, BitTorrent in particular, is growing again.

That’s what’s suggested by Canadian broadband management company Sandvine, which has kept a close eye on these developments for over a decade. The company will release its latest Global Internet Phenomena report next month but gave us an exclusive sneak peek.

Globally, across both mobile and fixed access networks, file-sharing accounts for 3% of downstream and 22% of upstream traffic. More than 97% of this upstream file-sharing traffic is BitTorrent, which makes it the dominant P2P force.
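
Combining those two global percentages gives a rough figure for BitTorrent's direct share of all upstream traffic:

    # Rough arithmetic from the Sandvine figures quoted above.
    filesharing_share_of_upstream = 0.22
    bittorrent_share_of_filesharing = 0.97
    print(filesharing_share_of_upstream * bittorrent_share_of_filesharing)  # ~0.21, about a fifth of all upstream traffic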

In the EMEA region, which covers Europe, the Middle East, and Africa, there’s a clear upward trend. BitTorrent traffic now accounts for 32% of all upstream traffic. This means that roughly a third of all uploads are torrent-related.

Keep in mind that overall bandwidth usage per household also increased during this period, which means that the volume of BitTorrent traffic grew even more aggressively.

BitTorrent traffic also remains the top upstream source in the Asia Pacific region with 19% of total traffic. Percentage-wise this is down compared to two years ago, but in volume, it’s relatively stable according to Sandvine. 

Other popular file-sharing upload sources in the Asia Pacific region are the Korean P2P app “K grid” (7%) and “Afreeca TV” (2%).

In the Americas, BitTorrent is the second largest source of upstream traffic. It has a market share of a little over 9% and is most popular in Latin America. BitTorrent is only a fraction behind MPEG-TS, which is used for backhauling data from video cameras and security systems.

TorrentFreak spoke to Sandvine’s Vice President of Solutions Marketing Cam Cullen, who notes that more details will be released in the upcoming report. However, it’s clear that BitTorrent is not dead yet.

The next question is why BitTorrent traffic is on the rise again. According to Cullen, increased fragmentation in the streaming service market may play an important role.

“More sources than ever are producing ‘exclusive’ content available on a single streaming or broadcast service – think Game of Thrones for HBO, House of Cards for Netflix, The Handmaid’s Tale for Hulu, or Jack Ryan for Amazon. To get access to all of these services, it gets very expensive for a consumer, so they subscribe to one or two and pirate the rest.

[snip]