Spread the Digital Wealth

There are plenty of ways to deliver tech jobs to rural communities.
By Ro Khanna
Mr. Khanna, a Democrat, represents Silicon Valley in the U.S. House.
Dec 30 2018

One key question for the United States in the 21st century is whether noncoastal towns and rural communities, including many communities of color, will be able to participate in the digital revolution. We know that almost all Americans are avid consumers of technology, but many lack the opportunity to do the creative work that fuels our digital economy.

At stake is the dignity of millions of people. Within the next 10 years, nearly 60 percent of jobs could have a third of their tasks automated by artificial intelligence. Many traditional industries are becoming digital. Recently, a senior hotel executive described his business to me as essentially a digital one, explaining that his profit margins were contingent on the effectiveness of his software architects. Today’s hospitality vendors, precision farmers and electricians spend significant time on digital work.

Economists keep telling those left out of our digital future to move to the tech hubs. Sometimes I wonder if they have ever been to places like Jefferson, Iowa, or Beckley, W.Va. If they visit, they will realize that many people there are not looking to move. They are proud of their small-town values and enjoy being close to family. They brag that their town doesn’t need many traffic lights. And they worry about a brain drain.

These places also are not looking to become the next Silicon Valley. They are self-aware enough to recognize that there are benefits for the world’s top engineers and computer scientists to flock to Palo Alto, Calif., or Austin, Tex. They understand why venture capitalists betting millions of dollars would want to be close to the start-ups they fund to have some control and accountability. But the choice facing small towns should not be binary — it should not be “adopt the Silicon moniker or miss out on the tech future.”

Although the most advanced software innovation may take place in big cities with research universities, there is a lot of work concerning the application of software to business processes and the administration and maintenance of software systems that can be done remotely. Shame on us for shipping over 211,000 of these jobs offshore to countries like Malaysia and Brazil. Americans have an advantage in doing them because of a cultural understanding of what businesses need and a more convenient time zone.

Small towns can also sustain entrepreneurial activity that is tailored to their needs. Consider the fifth-generation internet service provider in Jefferson, Iowa, that was willing to make a bet on investing in fiber to serve a 4,200-person town. Jefferson is not chasing brand-name venture capitalists on Sand Hill Road but is seeking more modest investments in local businesses that will solve local problems.

So, what more can we do to make sure rural America has its share of middle-class jobs and businesses that will be the backbone of the digital economy? We need to provide additional funds to existing community colleges and land-grant universities to create tech institutes in places left behind. West Virginia University’s new tech institute in Beckley provides a model, equipping students with practical degrees or credentials that lead to jobs.

The federal government also should invest $80 billion to have affordable high-speed internet — preferably fiber — in every corner of this nation. We need to pay special attention to the racial gap in the affordability and adoption of broadband. Our infrastructure should not be a barrier to remote work.

Finally, the federal government can change incentives. When awarding federal software contracts, agencies should give favorable consideration if at least 10 percent of the work force is rural. We should, moreover, adopt stronger Equal Employment Opportunity reporting requirements for companies on the number of programmers they hire by country and location.


The Story of 2018 Was Climate Change

Future generations may ask why we were distracted by lesser matters.
By David Leonhardt
Dec 30 2018

Our best hope may be the weather.

For a long time, many people thought that it was a mistake to use the weather as evidence of climate change. Weather patterns contain a lot of randomness. Even as the earth warms and extreme weather becomes more common, some years are colder and calmer than others. If you argue that climate change is causing some weather trend, a climate denier may respond by making grand claims about a recent snowfall. 

And yet the weather still has one big advantage over every other argument about the urgency of climate change: We experience the weather. We see it and feel it.

It is not a complex data series in an academic study or government report. It’s not a measurement of sea level or ice depth in a place you’ve never been. It’s right in front of you. And although weather patterns do have a lot of randomness, they are indeed changing. That’s the thing about climate change: It changes the climate.

I wanted to write my last column of 2018 about the climate as a kind of plea: Amid everything else going on, don’t lose sight of the most important story of the year.

I know there was a lot of competition for that title, including some more obvious contenders, like President Trump and Robert Mueller. But nothing else measures up to the rising toll and enormous dangers of climate change. I worry that our children and grandchildren will one day ask us, bitterly, why we spent so much time distracted by lesser matters.

The story of climate change in 2018 was complicated — overwhelmingly bad, yet with two reasons for hope. The bad and the good were connected, too: Thanks to the changing weather, more Americans seem to be waking up to the problem.

I’ll start with the alarming parts of the story. The past year is on pace to be the earth’s fourth warmest on record, and the five warmest years have all occurred since 2010. This warming is now starting to cause a lot of damage. 

In 2018, heat waves killed people in Montreal, Karachi, Tokyo and elsewhere. Extreme rain battered North Carolina and the Indian state of Kerala. The Horn of Africa suffered from drought. Large swaths of the American West burned. When I was in Portland, Ore., this summer, the air quality — from nearby wildfires — was among the worst in the world. It would have been healthier to be breathing outdoors in Beijing or Mumbai.

[Chart: The Rise of Extreme Hurricanes. From year to year, the number of serious hurricanes fluctuates. But the last few decades show a clear and disturbing trend.]

Amid all of this destruction, Trump’s climate agenda consists of making the problem worse. His administration is filled with former corporate lobbyists, and they have been changing federal policy to make it easier for companies to pollute. These officials like to talk about free enterprise and scientific uncertainty, but their real motive is usually money. Sometimes, they don’t even wait to return to industry jobs. Both Scott Pruitt and Ryan Zinke, two now-departed pro-pollution cabinet secretaries, engaged in on-the-job corruption.

I often want to ask these officials: Deep down, do you really believe that future generations of your own family will be immune from climate change’s damage? Or have you chosen not to think very much about them?

As for the two main reasons for hope: The first is that the Trump administration is an outlier. Most major governments are trying to slow climate change. So are many states in this country, as well as some big companies and nonprofit groups. This global coalition is the reason that the recent climate summit in Poland “yielded much more,” as Nat Keohane of the Environmental Defense Fund said, “than many of us had thought might be possible.”

The second reason for hope is public opinion. No, it isn’t changing nearly as rapidly as I wish. Yet it is changing, and the weather seems to be a factor. The growing number of extreme events — wildfires, storms, floods and so on — are hard to ignore.


Glorified and Vilified, Representative-Elect Ilhan Omar Tells Critics: ‘Just Deal’

By Sheryl Gay Stolberg
Dec 30 2018

WASHINGTON — As a 12-year-old refugee from Somalia adjusting to life in the Virginia suburbs, Ilhan Omar fended off bullies who stuck gum on her scarf, knocked her down stairs and jumped her when she changed clothes for gym class.

Her father “sat me down, and he said, ‘Listen, these people who are doing all of these things to you, they’re not doing something to you because they dislike you,’” Ms. Omar recalled in a recent interview. “They are doing something to you because they feel threatened in some way by your existence.”

Now Ms. Omar is Representative-elect Ilhan Omar, Democrat of Minnesota, and her father’s words still hold. Nearly a quarter-century later, as Democrats prepare to assume control of the House with an extraordinarily diverse freshman class, she is perhaps Washington’s most glorified and vilified newcomer — a vehicle for the hopes of millions of Muslims and others touched by her life story, and for the fears of those who feel threatened by her.

When she is sworn in on Thursday, Ms. Omar will take her place in the history books as one of the first two Muslim women in Congress — and the first to wear a hijab, or head covering, on the House floor. Her push to change a 181-year-old rule barring headwear in the chamber — which Democrats are expected to immediately adopt — has drawn fire from a Christian pastor, who warned that the floor of the House “is now going to look like an Islamic republic.”

Her support for the boycott, divest and sanctions movement to pressure Israel to improve treatment of Palestinians is making Jewish leaders nervous. In Saudi Arabia, a state-owned newspaper recently suggested she was part of an Islamist plot to control Congress.

And at home in Minnesota, Ms. Omar has been dogged by claims that she briefly married her brother for immigration purposes — which she called “absurd and offensive” — and by charges filed by a conservative Republican colleague in the Minnesota Legislature that she violated campaign finance laws; a state campaign finance board found “probable cause” that she did and is investigating.

Yet in her short political career, which began two years ago when she unseated a 44-year incumbent to win a seat in the Minnesota Legislature, Ms. Omar has also been featured on the cover of a Time magazine edition spotlighting “women who are changing the world”; appeared on “The Daily Show,” where she publicly invited President Trump to tea; danced in a Maroon 5 music video; and become the subject of her own documentary, “Time for Ilhan.”

“She’s the epitome of the so-called American dream, but for much of white Christian America, she’s an American nightmare,” said Larycia Hawkins, who teaches politics and religion at the University of Virginia, and lost her job at an evangelical college after wearing a hijab in solidarity with Muslim women.

Ms. Omar, a slight 36-year-old with a soft voice and delicate features, envisions herself as a voice in Washington for the disenfranchised, for marginalized people and for immigrants like herself. She came up in politics as a community organizer, working on issues like hunger and inequities in the juvenile justice system. When she arrived in the capital for freshman orientation, she ran into Representative John Lewis, Democrat of Georgia and a civil-rights icon — and burst into tears.

“I said to him, ‘Sir, I read about you in middle school, and you’re here in the flesh, and I get to be your colleague,’” she recalled, tearing up again. She added, “There are moments — every single minute — that I’ve been here where I almost want to pinch myself.”

Ms. Omar’s life story is, in many respects, uniquely American, an immigrant who worked hard and made good. She also embodies the complicated crosscurrents around immigration, race and religion that dominate Mr. Trump’s Washington.

When she was 8, Somalia erupted into civil war, and her extended family fled to a refugee camp in Kenya, where they spent four years before seeking asylum in the United States in 1995. They settled first in Arlington, Va., and later in Minneapolis, whose large Somali population Mr. Trump has called “a disaster” for Minnesota. Her father, a teacher in Somalia, picked up work driving taxis and later got a job at the post office; her mother died when Ms. Omar was 2.


“I Had To Quit For My Sanity”: Teachers Resigning At Highest Rate Ever Recorded

By Tyler Durden
Dec 31 2018

Teachers and other public education employees are quitting their jobs at the fastest pace on record, with roughly 10% of the industry quitting over the 12-month period ending in October, according to data from the Labor Department.

While US workers overall are quitting at the highest rate since 2001 amid a tight labor market and historically low unemployment, quitting a job in education is notable, since the field is known for stability and rewarding longevity, report the Wall Street Journal’s Michelle Hackman and Eric Morath.

The educators may be finding new jobs at other schools, or leaving education altogether. The departures come alongside protests this year in six states, where teachers in some cases shut down schools over tight budgets, small raises and poor conditions.

In the first 10 months of 2018, public educators quit at an average rate of 83 per 10,000 a month, according to the Labor Department. While that is still well below the rate for American workers overall—231 voluntary departures per 10,000 workers in 2018—it is the highest rate for public educators since such records began in 2001. -WSJ
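As a quick arithmetic check (a sketch of my own, not part of the Journal's analysis), the monthly rate quoted above is consistent with the "roughly 10%" annual figure cited at the top of the piece:

```python
# Cross-check of the Labor Department figures quoted above: a quit rate
# of 83 per 10,000 public educators per month, compounded over a year,
# lands near the "roughly 10% over 12 months" cited earlier.
monthly_rate = 83 / 10_000

# Fraction of the workforce that quits at some point during 12 months
# (this simple model ignores rehiring and seasonal swings).
annual_share = 1 - (1 - monthly_rate) ** 12
print(f"~{annual_share:.1%} of public educators quit in a year")  # ~9.5%
```

The compounding matters only slightly at these rates; a naive 83 × 12 per 10,000 gives nearly the same answer.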

Sara Jorve, a 43-year-old fifth-grade math and science teacher from Oklahoma, protested alongside other teachers last spring for better pay and classroom conditions, eventually quitting in May after a dozen years as an educator. Jorve, a single mother, said she had to rely on her parents for financial assistance because of the meager pay; she went back to school over the summer to train as a cardiovascular ultrasound technician.

“I had to quit for my sanity,” said Jorve.

The rising number of departures among public education workers is in contrast with 2009, when the economy was first emerging from a deep recession. Then, the rate was just 48 per 10,000 public education workers, a record low.

“During the recession, education was a safe place to be,” said Julia Pollak, labor economist at ZipRecruiter.

That year, the unemployment rate touched 10%, the highest since the 1980s. This year, the jobless rate fell to 3.7%, the lowest reading since 1969. That has created very different incentives for teachers and their public education colleagues.

“It’s a more boring place now, and they see their friends finding exciting opportunities,” Ms. Pollak said. -WSJ

Since 2015 school districts have reported a shortage of qualified teachers to fill open slots, which resulted in more states opening temporary teaching jobs to underqualified applicants, according to the Learning Policy Institute. Qualified teachers leaving the field at a record pace will likely exacerbate that trend, according to the Journal. 

In the 12-month period ending in October, one million people quit the public education sector, according to the most recent Labor Department data, out of more than 10 million Americans in the field.


How Chip Makers Are Circumventing Moore’s Law to Build Super-Fast CPUs of Tomorrow

By Alex Cranz
Dec 28 2018

The elephant in the room has been, for a very long time, Moore’s Law—or really, its eventual end game. Intel co-founder Gordon Moore predicted in a 1965 paper that the number of transistors on a chip would double each year. More transistors mean more speed, and that steady increase has fueled decades of computer progress. It is the traditional way CPU makers make their CPUs faster. But those advances in transistors are showing signs of slowing down. “That’s running out of steam,” said Natalie Jerger, a professor of electrical and computer engineering at the University of Toronto.

Jerger’s not the only one saying it. In 2016, MIT’s Technology Review declared, “Moore’s Law is dead,” and in January of this year, The Register issued a “death notice” for Moore’s Law. And if you’ve purchased a laptop in the last couple of years, you’ve probably noticed it too. CPUs don’t seem to be getting that much faster year over year. Intel, which makes the CPUs found in the majority of our laptops, desktops, and servers, has rarely been able to boast more than a 15-percent improvement in performance since 2014, and AMD, even with some rather radical new approaches to design, is typically only keeping pace with Intel in head-to-head battles.

In the typical “monolithic” style of design used by Intel and (until very recently) AMD, the CPU is built on a single piece of semiconductor material—almost always silicon. This is called the die. Etched into the die is a series of transistors that communicate with each other quickly because they’re all on the same die. More transistors mean faster processing, and ideally, when you shrink the manufacturing process, the transistors are packed closer together and can communicate even more quickly with one another, leading to faster processing and better energy efficiency. In 1974, Intel’s 8080 microprocessor was built on a 6-micrometer process. Next year’s AMD processors are expected to be built on a 7-nanometer process. That’s close to 1,000 times smaller, and a whole lot faster.
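The scale of that shrink is easy to verify with back-of-the-envelope arithmetic (a sketch; the density figure assumes transistor density scales with the square of the feature size, which is only roughly true in practice):

```python
# Feature-size shrink from the 6-micrometer process of the 8080 era
# (1974) to the 7-nanometer process expected in next year's AMD chips.
feature_1974 = 6e-6  # 6 micrometers, in meters
feature_2019 = 7e-9  # 7 nanometers, in meters

linear_shrink = feature_1974 / feature_2019
print(f"Features shrank ~{linear_shrink:.0f}x")  # ~857x: "close to 1,000 times"

# If density scaled with the square of the linear shrink, the same die
# area could hold on the order of 700,000 times more transistors.
print(f"Density gain on the order of {linear_shrink ** 2:.0e}x")
```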

But AMD achieved its biggest speed gains recently with its ridiculous-sounding Threadripper CPUs. These are CPUs with a core count that starts as low as 8 and goes all the way up to 32. A core is kind of like the engine of the CPU. In modern computing, multiple cores can function in parallel, allowing certain processes that take advantage of multiple cores to go even faster. Having 32 cores can take something like the rendering of a 3D file in Blender from 10 minutes down to only a minute and a half, as seen in this benchmark run by PCWorld.
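Why 32 cores yields a roughly 6.7x render speedup rather than 32x comes down to Amdahl's law: whatever fraction of the job is inherently serial caps the gain from adding cores. A minimal sketch (the 88 percent parallel fraction below is an assumed value fitted to the quoted benchmark, not a figure from PCWorld):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when parallel_fraction of the work spreads across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A render that is ~88% parallelizable roughly reproduces the Blender
# numbers above: 10 minutes on one core, about a minute and a half on 32.
for cores in (1, 8, 32):
    print(f"{cores:>2} cores -> {amdahl_speedup(0.88, cores):.1f}x speedup")
```

The same formula explains why core counts alone can't substitute for faster transistors: even at 88 percent parallel, no number of cores pushes the speedup past about 8.3x.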

Also, just saying you have a 32-core processor sounds cool! And AMD accomplished it by embracing chiplet design. All of its modern CPUs use something called Infinity Fabric. When speaking to Gizmodo earlier this year, this is what Jim Anderson, former general manager of AMD’s computing and graphics business group, called the “secret sauce” of AMD’s latest microarchitecture, Zen. CTO Mark Papermaster, meanwhile, dubbed it “a hidden gem.”

Infinity Fabric is a new system bus architecture based on the open HyperTransport standard. A system bus does what you think it would—bus data from one point to another. Infinity Fabric’s neat accomplishment is that it busses that data around really fast and allows processors built with it to overcome one of the primary hurdles of chiplet CPU design: latency.

Chiplet design isn’t new, but it’s often been difficult to accomplish because it’s hard to make a whole bunch of transistors on separate dies talk to each other as quickly as they can on a single piece of silicon. But with AMD’s Threadrippers, you have a number of its typical Ryzen CPU dies laid out on the Infinity Fabric and communicating nearly as quickly as if they were on a single die.

It works really well, and the results are a super-fast processor that is so cheap to make that AMD can sell it for a fraction of the price of something comparable from Intel—which continues to use monolithic design in its high-core-count CPUs. In a way, Infinity Fabric is a way to cheat Moore’s Law because it’s not a single fast CPU—it’s a whole bunch attached via the Infinity Fabric. So it’s not AMD overcoming the limitations of Moore’s Law, but circumventing it.

“If you step back in and say, ‘Well, Moore’s Law is really just about greater integration of functionality,’ I do think that the chiplets—it does not in any way help integrate more smaller transistors, but it does help us build systems that have greater functionality and greater capabilities than the generation before,” Jerger said.

She noted that in some cases, this conversation around chiplet design is a deflection from a company’s more notable failures. She’s referring to Intel, which has, for the last few years, notably struggled with the limitations of transistors that can’t shrink forever. It’s been stuck on a 14nm processor and promising, but failing to deliver, a 10nm processor for over a year. It’s been a terrible embarrassment for Intel that’s only been compounded as other chip makers have run laps around the incumbent chip giant. This year, Apple sold a few million phones and iPads with a 7nm processor inside, while AMD shipped 12nm processors and promised 7nm ones in 2019. AMD also publicly embarrassed Intel at Computex in Taipei this year: Intel promised a 28-core CPU by the end of the year (it still has not shipped), and days later AMD announced a 32-core CPU that has been shipping since August and costs half the Intel CPU’s forecasted price. Intel’s recent promise of a long-delayed shift to 10nm in 2019 looks kind of pathetic in comparison.

Which is why you shouldn’t view its embrace of chiplet CPU design as a coincidence. In part, this seems like Intel is talking up cool innovations to distract from a significant failure to innovate, or even keep up with the competition.


A Woman’s Rights

More and more laws are treating a fetus as a person, and a woman as less of one, as states charge pregnant women with crimes…
By NYT Editorial Board
Dec 28 2018

You might be surprised to learn that in the United States a woman coping with the heartbreak of losing her pregnancy might also find herself facing jail time. Say she got in a car accident in New York or gave birth to a stillborn in Indiana: In such cases, women have been charged with manslaughter.

In fact, a fetus need not die for the state to charge a pregnant woman with a crime. Women who fell down the stairs, who ate a poppy seed bagel and failed a drug test or who took legal drugs during pregnancy — drugs prescribed by their doctors — all have been accused of endangering their children.

Such cases are rare. There have been several hundred of them since the Supreme Court issued its decision recognizing abortion rights in Roe v. Wade, in 1973. But they illuminate a deep shift in American society, away from a centuries-long tradition in Western law and toward the embrace of a relatively new concept: that a fetus in the womb has the same rights as a fully formed person.

This idea has now worked its way into federal and state regulations and the thinking of police officers and prosecutors. As it has done so, it’s begun not only to extend rights to clusters of cells that have not yet developed into viable human beings, but also to erode the existing rights of a particular class of people — women. Women who are pregnant have found themselves stripped of the right to consent to surgery, the right to receive treatment for a medical condition and even something as basic as the freedom to hold a baby in the moments after birth.

How the idea of fetal rights gained currency is a story of social reaction — to the Roe decision and, more broadly, to a perceived new permissiveness in the 1970s — combined with a determined, sophisticated campaign by the anti-abortion movement to affirm the notion of fetal personhood in law and to degrade Roe’s protections.

Political ambition has also played a powerful role. Out of concern for individual freedom, the Republican Party once treated abortion as a private matter. When Ronald Reagan was governor of California, he signed one of the most liberal abortion laws in the land, in 1967. As late as 1972, a Gallup poll found that 68 percent of Republicans thought that the decision to have an abortion should be made solely by a woman and her doctor.

But after Roe, a handful of Republican strategists recognized in abortion an explosively emotional issue that could motivate evangelical voters and divide Democrats. In 1980, as Mr. Reagan ran for president, he raised the cause high, and he framed it in terms of the rights of the unborn. “With regard to the freedom of the individual for choice with regard to abortion, there’s one individual who’s not being considered at all. That’s the one who is being aborted,” he said in a debate that year. “And I’ve noticed that everybody that is for abortion has already been born.”

Out of concern for individual freedom, the Republican Party once treated abortion as a private matter.

The crack epidemic of the late 1980s and early 1990s also had the effect of popularizing the idea of fetal rights. Many Americans became seized with the fear — fanned by racism and, as it turned out, false — that crack-addicted black mothers in inner cities were giving birth to a generation of damaged and possibly vicious children. This false fear supplied considerable force to the idea that the interests of a fetus could come in conflict with those of the woman carrying it — and that the woman may have forfeited any claim on society’s protection.

The creation of the legal scaffolding for the idea that the fetus is a person has been the steady work of the anti-abortion movement, at the national level and in every state. Today, at least 38 states and the federal government have so-called fetal homicide laws, which treat the fetus as a potential crime victim separate and apart from the woman who carries it.

The movement has pressed for dozens of other measures to at least implicitly affirm the idea that a fetus is a person, such as laws to issue birth certificates for stillborn fetuses or deny pregnant women the freedom to make end-of-life decisions for themselves. Some of these laws are also intended to create a basis for challenging and eventually overturning Roe. 

In the hands of zealous prosecutors, cautious doctors and litigious attorneys, these laws are creating a system of social control that polices pregnancy, as the editorials in this series show. Because of the newly fortified conservative majority on the Supreme Court, such laws are likely to multiply — and the control to become more pervasive — whether or not Roe is overturned.


What made solar panels so cheap? Thank government policy.

We know how to make clean energy cheap. We’ve done it.
By David Roberts
Dec 28 2018

From an economic perspective, the core challenge of climate change is that the standard way of doing things — the dirty, carbon-intensive way — is typically cheaper than newer, lower-carbon alternatives. 

Solving the problem means driving down the cost of those alternatives. Simple, right? 

But in practice, it’s not so simple. In fact, we still don’t have a very good grasp on exactly what drives technological innovation and improvement. Is it basic scientific research? Early-stage R&D? Learning by doing? Economies of scale? 

If we want to make clean technologies cheaper, we need a better understanding of how the process works. Among other things, Silicon Valley types are spending billions on “moonshot” startup initiatives — it would be nice if that money were spent effectively.

There is a voluminous academic literature on these subjects, but a new paper in the journal Energy Policy helps to cut through the fog. It focuses on one specific technology and seeks to identify, and quantify, the various forces that drove down costs.

That technology: good old solar photovoltaic (PV) panels, which have declined in cost by around 99 percent over recent decades.

The authors are MIT associate professor Jessika Trancik, postdoc Goksin Kavlak, and research scientist James McNerney. They are part of a team that, working with the Department of Energy’s Solar Energy Evolution and Diffusion Studies (SEEDS) program, is attempting to develop an overarching theory of technology innovation, using solar PV as its focus. 

“Evaluating the Causes of Photovoltaics Cost Reduction” lays out the results — what caused PV costs to decline so fast, and when. 

The details are worth examining, but the big lesson is pretty simple: It didn’t just happen. It was driven, at every stage, by smart public policy. 

Solar PV has gotten cheaper at a positively ridiculous rate

First, by way of background, it’s important to wrap your head around the remarkable evolution of solar PV. Again, solar module costs have dropped by around 99 percent over the past 40 years.

Suffice it to say, those declines have continued since 2015, and market experts expect them to accelerate for the foreseeable future.

Solar PV has defied all projections, continuing to get cheaper and deploy faster — even as experts predict, again and again, that it will level off.

This headlong decline in costs is a baffling and amazing phenomenon. It demands explanation.

There have been many studies on the subject, of course, but most have relied on “correlational analysis,” tying the drop in PV costs to other ongoing trends. For instance, it is popular to point out, based in part on this paper, that PV costs drop by about 20 percent for every doubling of cumulative capacity (the two trends correlate). 
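That 20-percent-per-doubling relationship is a classic learning curve, often called Wright's law, and it compounds quickly. A sketch of the arithmetic (using only the correlation quoted above, not the MIT model):

```python
import math

# Learning-curve sketch: if module cost falls ~20% with every doubling
# of cumulative installed capacity, the cost after n doublings is
# cost0 * (1 - 0.20) ** n.
learning_rate = 0.20

def cost_after_doublings(cost0: float, n: float) -> float:
    return cost0 * (1 - learning_rate) ** n

# How many doublings would a ~99% cost decline imply?
doublings = math.log(0.01) / math.log(1 - learning_rate)
print(f"A 99% decline implies ~{doublings:.0f} capacity doublings")  # ~21
```

This is exactly the kind of correlational shortcut the MIT paper pushes past: the formula fits the trend but says nothing about which underlying changes did the work.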

There are also device-level studies that examine the components of PV systems, and their contribution to costs, at a snapshot in time.

“Missing from these studies,” the team at MIT writes, “is a method of accurately quantifying how each change to a feature of the technology or manufacturing process contributes to cost reductions, when many changes occur simultaneously.” That’s what the team has attempted to create — a dynamic model that can distinguish and quantify the component causes of price declines over time.


The case for “conditional optimism” on climate change

[Note:  This item comes from reader Randall Head.  DLH]

Limiting the damage requires rapid, radical change — but such changes have happened before.
By David Roberts
Dec 28 2018

Is there any hope on climate change, or are we just screwed?

I hear this question all the time. When people find out what I do for a living, it is generally the first thing they ask. I never have a straightforward or satisfying answer, so I usually dodge it, but in recent years it has come up more and more often.

So let’s tackle it head on. In this post, I will lay out the case for pessimism and the case for (cautious) optimism, pivoting off a new series of papers from leading climate economists. 

First, though, let’s talk about the question itself, which contains a number of dubious assumptions, and see if we can hone it into something more concrete and answerable.

“Is there hope?” is the wrong question

When people ask about hope, I don’t think they are after an objective assessment of the odds. Hope is not a prediction that things will go well. It’s not a forecast or an expectation. But then, what is it exactly?

It’s less intellectual than emotional; it’s a feeling. As I wrote at length in this old post, the feeling people are groping for is fellowship. People can face even overwhelming odds with good spirits if they feel part of a community dedicated to a common purpose. What’s terrible is not facing great threat and long odds — what’s terrible is facing them alone. Happily, those working to address climate change are not alone. There are more people involved and more avenues for engagement every day. There’s plenty of fellowship to be found.

More importantly, though, when it comes to climate change, “Is there hope?” is just a malformed question. It mistakes the nature of the problem.

The atmosphere is steadily warming. Things are going to get worse for humanity the more it warms. (To be technical about it, there are a few high-latitude regions that may see improved agricultural production or more temperate weather in the short- to mid-term, but in the long haul, the net negative global changes will swamp those temporary effects.) 

The international community has agreed, most recently in the Paris climate accord, to try to limit the rise in global average temperature to no more than 2 degrees Celsius above preindustrial levels, with efforts to keep it to 1.5 degrees.

But there’s nothing magic about 2 degrees. It doesn’t mark a line between not-screwed and screwed. 

In a sense, we’re already screwed, at least to some extent. The climate is already changing and it’s already taking a measurable toll. Lots more change is “baked in” by recent and current emissions. One way or another, when it comes to the effects of climate change, we’re in for worse. 

But we have some choice in how screwed we are, and that choice will remain open to us no matter how hot it gets. Even if temperature rise exceeds 2 degrees, the basic structure of the challenge will remain the same. It will still be warming. It will still get worse for humanity the more it warms. Two degrees will be bad, but three would be worse, four worse than that, and five worse still. 

Indeed, if we cross 2 degrees, the need for sustainability becomes more urgent, not less. At that point, we will be flirting with non-trivial tail risks of species-threatening — or at least civilization-threatening — effects. 

In sum: humanity faces the urgent imperative to reduce greenhouse gas emissions, then eliminate them, and then go “net carbon negative,” i.e., absorb and sequester more carbon from the atmosphere than it emits. It will face that imperative for several generations to come, no matter what the temperature is.

Yes, it’s going to get worse, but nobody gets to give up hope or stop fighting. Sorry.

Rather than just rejecting the question, though, let’s give it a little more specificity, so we can discuss some real answers. Let’s ask: What are the reasonable odds that the current international regime, the one that will likely be in charge for the next dozen crucial years, will reduce global carbon emissions enough to hit the 2 degree target?

Remember, the answer to that question will not tell us whether there is hope, or whether we’re screwed. But it will tell us a great deal about what we’re capable of, whether we can restrain and channel our collective development in a sustainable direction.

With all that said, let’s get to the papers.


Why “Wi-Fi 6” Tells You Exactly What You’re Buying, But “5G” Doesn’t Tell You Anything.
By Harold Feld
Dec 28 2018

Welcome to 2019, where you will find a new Wi-Fi upgrade called “Wi-Fi 6” aggressively marketed to you, and just about every mobile provider will try to sell you some “new, exciting, 5G service!” But funny thing. If you buy a new “Wi-Fi 6” wireless router, you know exactly what you’re getting. It supports the latest IEEE 802.11ax protocol, operating on the existing Wi-Fi frequencies of 2.4 GHz and 5 GHz, and any other frequencies listed on the package. By contrast, not only does the term “5G” tell you nothing about the capabilities (or frequencies, for them what care) of the device, but what “5G” means will vary tremendously from carrier to carrier. So while you can fairly easily decide whether you want a new Wi-Fi 6 router, and then just buy one from anywhere, you are going to want to very carefully and very thoroughly interrogate any mobile carrier about what their “5G” service does and what limitations (including geographic limitations) it has.

Why the difference? It’s not simply that we live in a world where the Federal Trade Commission (FTC) lets mobile carriers get away with whatever marketing hype they can think up, such as selling “unlimited” plans that are not, in fact, unlimited. It has to do with the fact that back in the early 00s, the unlicensed spectrum/Wi-Fi community decided to solve the confusion problem by eliminating confusion, whereas the licensed/mobile carrier world decided to solve the confusion problem by embracing it. As I explain below, that wasn’t necessarily the wrong decision given the nature of licensed mobile service v. unlicensed. But it does mean that 5G will suffer from Forrest Gump Syndrome for the foreseeable future. (“5G is like a box of chocolates, you never know what to expect.”) It also means that, for the foreseeable future, consumers will basically need to become experts in a bunch of different technologies to figure out what flavor of “5G” they want, or whether to just wait a few years for the market to stabilize.

More below . . . .

What Is A “G” Again?

I covered this several months ago in my rather lengthy post trying to explain what the heck 5G actually does and whether it’s got any substance or is just a marketing scam (answer: some of each, but for consumers mostly scam at the moment). A “G” for wireless is a very informal term meaning a new generation of technology. Typically, we mean a shift in technology that radically changes the capabilities and the network architecture of wireless systems. Often this includes opening new frequencies (which may or may not mean different physics, which in turn impacts both the potential capabilities and mandates changes in the architecture). For example, the shift from “2G” to “3G” meant going from a purely voice system to a system capable of some fairly low-bandwidth data functions, such as reading email. The shift from “3G” to “4G” meant going from a system designed primarily for voice to a system designed as a data network using packet-switched technology, able to perform the same functions as a wireline Internet connection (adjusting for things like reliability, which is always going to be worse for wireless than wireline for reasons I will not get into now).

Because these “Gs” describe a shift in capabilities (with the implied massive shift in architecture and protocols necessary to support this upgrade in capabilities), the term “G” is of necessity pretty loosey-goosey about the precise technologies (and therefore the precise capabilities) employed. Back in 3G, we had two fairly widespread technology families that were globally accepted as “3G”: CDMA (as CDMA2000) and GSM (as UMTS/W-CDMA). In the late 00s, as we ramped up to 4G, we had a combination of possible technologies that all represented a significant jump over 3G capabilities, although in somewhat different ways. For a while we had T-Mobile selling HSPA+ as 4G, while Sprint pushed WiMax as 4G and AT&T and Verizon pushed LTE as 4G. Eventually, for reasons I won’t get into here, LTE beat out HSPA+ and WiMax to become the default technology for 4G mobile services worldwide. 

Because for the last 7 or 8 years “4G” has been synonymous with LTE, we’ve gotten used to the idea that describing something as 4G over 3G means a specific technology (LTE) and a specific set of capabilities (it does what LTE enables). But that’s not because some standards body like 3GPP or an agency like the Federal Communications Commission (FCC) defined what “4G” meant. It happened because the global market, which made economies of scale and global interoperability so important, drove carriers to all select one technology for the current generation of mobile services. 

But as I described back in the summer, what we call “5G” is actually a rather confused set of new technologies and functionalities that have yet to settle out. The one thing they all have in common is that they differ either in network architecture, or frequency, or both, from existing LTE networks. So it’s completely honest, albeit extremely confusing for consumers, for AT&T to call a new mobile product incorporating some of these features “5G Evolution” and for Verizon to call its fixed wireless cable competitor “5G,” while T-Mobile claims that its essentially 4G deployment on its new 600 MHz spectrum is “5G” because it is an entirely new set of frequencies for mobile services.


2018 Was A Trying Year For Social Media Platforms–And Their Users: Three Pathways Forward
By John Bowers
Dec 27 2018

The public’s trust in social media platforms is swiftly eroding – the 2018 Edelman Trust Barometer reports that global trust in social media stands at just 41%, with a year-on-year drop of 11% in the United States. Against this troubling backdrop, a contentious public debate has emerged regarding the platforms’ future, one which has increasingly turned toward calls for regulation. It is now impossible to ignore the unprecedented level of influence wielded by social media companies. They design the algorithms responsible for curating the material that billions of people consume every day, unilaterally decide which forms of legal but objectionable content are taken down or left up, and determine which sensitive information about users advertisers should be able to leverage in targeting advertisements. As private enterprises, their decisions are subject to a minimal degree of oversight and control on the part of the public and its representatives. In short, they wield enormous power without accountability – and there has been a lot to account for.

The scandals and controversies faced by social media companies in 2018 reflect two major problems. First, that platforms’ aggressive collection and handling of user data often implicates very significant – even unprecedented – privacy and security concerns. Facebook, for example, came under fire for allowing the shadowy political consultancy group Cambridge Analytica to harvest data from millions of users under the guise of academic research – a misstep which landed Facebook founder Mark Zuckerberg in front of Congress – and Google moved up the timeline for the shutdown of its Google+ social network following multiple data leaks.

Second, that platforms have failed to fully recognize the responsibilities that accompany their expansive powers over determining the content to which users are exposed. Twitter – among other platforms including Spotify, YouTube, Apple, and Facebook – faced controversy for the unilateral nature of its decision to ban conspiracy theorist Alex Jones, even as many applauded the move. And governmental inquiries into foreign actors’ use of disinformation to manipulate public opinion and undercut democratic processes through social media continued, including an inaugural “International Grand Committee on Disinformation” held in London this November (this time, Zuckerberg didn’t show up, despite a request from lawmakers from nine countries).

We are starting to make meaningful progress toward addressing the first problem. 2018 saw some of the first real action toward regulating how big technology companies deal with user data – at least in Europe. In May, the European Union implemented the General Data Protection Regulation (GDPR), a sweeping set of reforms which have already substantially reconfigured how businesses handle, share, and sell user data. The GDPR creates new standards for reporting data breaches, binds companies to a stringent user consent framework, and gives individuals more control over how their data is represented and shared. Perhaps most importantly, the GDPR invests regulators with the power to levy enormous fines – potentially running billions of dollars for the likes of Google and Facebook – against companies which violate its mandates. India, which is home to almost 500 million internet users, has followed suit, and is moving toward an expansive data protection bill of its own. No viable legislation of comparable scope has emerged in the United States, which has almost no broadly applicable data privacy regulations (though specific rules do exist in industries like banking). Even so, the size and influence of the European market means that many US businesses have had to adapt to the GDPR regardless. And California lawmakers have taken action at the state level, passing the California Consumer Privacy Act – legislation which provides some GDPR-style rights and protections to Californians.

But data protection standards alone aren’t enough to make social media companies accountable – they can’t solve our second problem. What makes social media companies so enormously influential – and, in some instances, so dangerous – is their ability to determine what users see and when they see it. The problems with this largely unrestricted power don’t dissolve in the presence of robust user consent frameworks and privacy-driven data handling architectures. And it’s tough to hold social media companies accountable when they fail to wield that power responsibly, even on a disastrous scale.

Take, for example, disinformation campaigns orchestrated on Facebook and Twitter by Russian agents hoping to manipulate the outcomes of United States elections. Russian intelligence didn’t have to rely on privacy and security flaws. Disinformation worked because it was optimized to reach vulnerable audiences, and because the platforms often treated it like any other form of content – cat videos, say, or articles from the New York Times. Two Senate Select Committee on Intelligence reports released in mid-December highlighted the scope and complexity of these operations, as well as the limited nature of technology companies’ cooperation with government agencies.

Facebook and Twitter at first simply sat back and watched their systems spread whatever content (however dangerous) triggered the right algorithms. Today they appear to be developing proactive content moderation that takes sides against certain pages and publishers – even at the cost of controversy and allegations of censorship. But the policies governing content moderation and the tools with which it is carried out have been – and will likely continue to be – developed and deployed largely behind closed doors, with limited public input. And outside of bad PR, the platforms have not borne the costs of abusive user behavior – provisions like Section 230 of the United States’ Communications Decency Act insulate social media platforms from most liability relating to user-generated content. Though, as Eric Goldman writes, CDA 230 itself has faced a number of recent reductions in its scope – and more dramatic change may be on the horizon, as US and UK lawmakers including Senator Mark Warner (D-VA) call for less forgiving platform liability standards.

So if the progress we’ve seen around data privacy and security standards in 2018 – at least in Europe – isn’t enough to solve social media’s problems, what would be? The last year has provided some hints as to what a more comprehensive, accountability-focused regulatory solution might look like.

First, some social media platforms have taken steps toward greater transparency and more robust self-governance. In October, Twitter released a massive dataset listing accounts and content it had linked to Russian disinformation efforts. A month later, Mark Zuckerberg released a lengthy essay describing Facebook’s efforts to ensure better transparency and policymaking around content moderation practices, including a plan to explore the possibility of independent oversight by what Zuckerberg has called a Facebook “Supreme Court.” But while these steps show promise, they might be too little, too late – and platforms definitely aren’t willing to hand over the reins to regulators and critics. The solutions that Facebook and its peers implement will almost certainly be incremental – carefully calibrated tweaks which protect their core business models. And with public trust in social media running low, quicker and more decisive action might prove necessary.