How Chip Makers Are Circumventing Moore’s Law to Build Super-Fast CPUs of Tomorrow

By Alex Cranz
Dec 28 2018

The elephant in the room has been, for a very long time, Moore’s Law—or really, its eventual end game. Intel co-founder Gordon Moore predicted in a 1965 paper that the number of transistors on a chip would double each year. More transistors mean more speed, and that steady increase has fueled decades of computer progress; it is the traditional way chip makers deliver faster CPUs. But those advances in transistors are showing signs of slowing down. “That’s running out of steam,” said Natalie Jerger, a professor of electrical and computer engineering at the University of Toronto.

Jerger’s not the only one saying it. In 2016, MIT’s Technology Review declared, “Moore’s Law is dead,” and in January of this year, the Register issued a “death notice” for Moore’s Law. And if you’ve purchased a laptop in the last couple of years, you’ve probably noticed it too. CPUs don’t seem to be getting that much faster year over year. Intel, which makes the CPUs found in the majority of our laptops, desktops, and servers, has rarely been able to boast more than a 15-percent improvement in performance since 2014, and AMD, even with some rather radical new approaches to design, is typically only keeping pace with Intel in head-to-head battles.

In the typical “monolithic” style of design employed by Intel and (until very recently) AMD, the CPU is built on a single piece of semiconductor material—almost always silicon. This is called the die. On the die sit the transistors, which communicate with each other quickly because they’re all on the same piece of silicon. More transistors mean faster processing, and ideally, when you shrink the manufacturing process, the transistors are packed closer together and can communicate even more quickly with one another, leading to faster processing and better energy efficiency. In 1974, Intel’s 8080, one of the earliest microprocessors, was built on a 6-micrometer process. Next year’s AMD processors are expected to be built on a 7-nanometer process. That’s close to 1,000 times smaller, and a whole lot faster.
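That size ratio is easy to check with plain arithmetic, which confirms the “close to 1,000 times” claim:

```python
# Feature sizes: the 8080-era 6-micrometer process vs. a modern 7-nanometer process.
process_1974_nm = 6_000  # 6 micrometers, expressed in nanometers
process_2019_nm = 7      # AMD's expected 7 nm node

shrink_factor = process_1974_nm / process_2019_nm
print(round(shrink_factor))  # 857, i.e. "close to 1,000 times smaller"
```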

But AMD achieved its biggest speed gains recently with its ridiculous-sounding Threadripper CPUs. These are CPUs with a core count that starts as low as 8 and goes all the way up to 32. A core is kind of like the engine of the CPU. In modern computing, multiple cores can function in parallel, allowing certain processes that take advantage of multiple cores to go even faster. Having 32 cores can take something like the rendering of a 3D file in Blender from 10 minutes down to only a minute and a half, as seen in this benchmark run by PCWorld.
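How core count translates into speedup can be sketched with Amdahl’s law, which caps the gain by the fraction of the work that actually parallelizes. The 0.88 parallel fraction below is an illustrative assumption chosen to roughly match that Blender result (10 minutes down to about a minute and a half), not a measured figure:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup for a workload where `parallel_fraction` of the
    work runs in parallel across `cores` and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A render that is ~88% parallel gets roughly a 6.8x speedup on 32 cores,
# the same ballpark as 10 minutes dropping to a minute and a half.
print(round(amdahl_speedup(0.88, 32), 1))
```

Note the cap: even with infinite cores, a workload that is 88% parallel can never exceed about an 8.3x speedup, which is why raw core count alone isn’t everything.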

Also, just saying you have a 32-core processor sounds cool! And AMD accomplished it by embracing chiplet design. All of its modern CPUs use something called Infinity Fabric. Speaking to Gizmodo earlier this year, Jim Anderson, former general manager of AMD’s computing and graphics business group, called it the “secret sauce” of AMD’s latest microarchitecture, Zen. CTO Mark Papermaster, meanwhile, dubbed it “a hidden gem.”

Infinity Fabric is a new system bus architecture based on the open HyperTransport standard. A system bus does what you’d expect: it buses data from one point to another. Infinity Fabric’s neat accomplishment is that it moves that data around very fast, allowing processors built with it to overcome one of the primary hurdles of chiplet CPU design: latency.

Chiplet design isn’t new, but it’s often been difficult to accomplish because it’s hard to make a whole bunch of transistors on separate dies talk to each other as quickly as they can on a single piece of silicon. But with AMD’s Threadrippers, you have a number of its typical Ryzen dies laid out on the Infinity Fabric, communicating nearly as quickly as if they were on a single die.

It works really well, and the result is a super-fast processor that is so cheap to make that AMD can sell it for a fraction of the price of something comparable from Intel—which continues to use monolithic design in its high-core-count CPUs. In a way, Infinity Fabric cheats Moore’s Law: the product is not one single fast CPU but a whole bunch of them attached via the fabric. So AMD isn’t overcoming the limitations of Moore’s Law so much as circumventing them.

“If you step back in and say, ‘Well, Moore’s Law is really just about greater integration of functionality,’ I do think that the chiplets—it does not in any way help integrate more smaller transistors, but it does help us build systems that have greater functionality and greater capabilities than the generation before,” Jerger said.

She noted that in some cases, this conversation around chiplet design is a deflection from a company’s more notable failures. She’s referring to Intel, which has, for the last few years, notably struggled with the limitations of transistors, which can’t shrink forever. It has been stuck on its 14nm process, promising but failing to deliver a 10nm process for over a year. It’s been a terrible embarrassment for Intel that’s only been compounded as other chip makers have run laps around the incumbent chip giant. This year, Apple sold a few million phones and iPads with a 7nm processor inside, while AMD shipped 12nm processors and promised 7nm ones in 2019. AMD also publicly embarrassed Intel at Computex in Taipei this year: Intel promised a 28-core CPU by the end of the year (it still has not shipped), and days later AMD announced a 32-core CPU that has been shipping since August and costs half the Intel CPU’s forecasted price. Intel’s recent promise of a long-delayed shift to 10nm in 2019 looks kind of pathetic in comparison.

Which is why you shouldn’t view its embrace of chiplet CPU design as a coincidence. In part, this seems like Intel is talking up cool innovations to distract from a significant failure to innovate, or even keep up with the competition.


A Woman’s Rights

More and more laws are treating a fetus as a person, and a woman as less of one, as states charge pregnant women with crimes…
By NYT Editorial Board
Dec 28 2018

You might be surprised to learn that in the United States a woman coping with the heartbreak of losing her pregnancy might also find herself facing jail time. Say she got in a car accident in New York or gave birth to a stillborn in Indiana: In such cases, women have been charged with manslaughter.

In fact, a fetus need not die for the state to charge a pregnant woman with a crime. Women who fell down the stairs, who ate a poppy seed bagel and failed a drug test or who took legal drugs during pregnancy — drugs prescribed by their doctors — all have been accused of endangering their children.

Such cases are rare. There have been several hundred of them since the Supreme Court issued its decision ratifying abortion rights in Roe v. Wade, in 1973. But they illuminate a deep shift in American society, away from a centuries-long tradition in Western law and toward the embrace of a relatively new concept: that a fetus in the womb has the same rights as a fully formed person.

This idea has now worked its way into federal and state regulations and the thinking of police officers and prosecutors. As it has done so, it’s begun not only to extend rights to clusters of cells that have not yet developed into viable human beings, but also to erode the existing rights of a particular class of people — women. Women who are pregnant have found themselves stripped of the right to consent to surgery, the right to receive treatment for a medical condition and even something as basic as the freedom to hold a baby in the moments after birth.

How the idea of fetal rights gained currency is a story of social reaction — to the Roe decision and, more broadly, to a perceived new permissiveness in the 1970s — combined with a determined, sophisticated campaign by the anti-abortion movement to affirm the notion of fetal personhood in law and to degrade Roe’s protections.

Political ambition has also played a powerful role. Out of concern for individual freedom, the Republican Party once treated abortion as a private matter. When Ronald Reagan was governor of California, he signed one of the most liberal abortion laws in the land, in 1967. As late as 1972, a Gallup poll found that 68 percent of Republicans thought that the decision to have an abortion should be made solely by a woman and her doctor.

But after Roe, a handful of Republican strategists recognized in abortion an explosively emotional issue that could motivate evangelical voters and divide Democrats. In 1980, as Mr. Reagan ran for president, he raised the cause high, and he framed it in terms of the rights of the unborn. “With regard to the freedom of the individual for choice with regard to abortion, there’s one individual who’s not being considered at all. That’s the one who is being aborted,” he said in a debate that year. “And I’ve noticed that everybody that is for abortion has already been born.”

The crack epidemic of the late 1980s and early 1990s also had the effect of popularizing the idea of fetal rights. Many Americans became seized with the fear — fanned by racism and, as it turned out, false — that crack-addicted black mothers in inner cities were giving birth to a generation of damaged and possibly vicious children. This false fear supplied considerable force to the idea that the interests of a fetus could come in conflict with those of the woman carrying it — and that the woman may have forfeited any claim on society’s protection.

The creation of the legal scaffolding for the idea that the fetus is a person has been the steady work of the anti-abortion movement, at the national level and in every state. Today, at least 38 states and the federal government have so-called fetal homicide laws, which treat the fetus as a potential crime victim separate and apart from the woman who carries it.

The movement has pressed for dozens of other measures to at least implicitly affirm the idea that a fetus is a person, such as laws to issue birth certificates for stillborn fetuses or deny pregnant women the freedom to make end-of-life decisions for themselves. Some of these laws are also intended to create a basis for challenging and eventually overturning Roe. 

In the hands of zealous prosecutors, cautious doctors and litigious attorneys, these laws are creating a system of social control that polices pregnancy, as the editorials in this series show. Because of the newly fortified conservative majority on the Supreme Court, such laws are likely to multiply — and the control to become more pervasive — whether or not Roe is overturned.


What made solar panels so cheap? Thank government policy.

We know how to make clean energy cheap. We’ve done it.
By David Roberts
Dec 28 2018

From an economic perspective, the core challenge of climate change is that the standard way of doing things — the dirty, carbon-intensive way — is typically cheaper than newer, lower-carbon alternatives. 

Solving the problem means driving down the cost of those alternatives. Simple, right? 

But in practice, it’s not so simple. In fact, we still don’t have a very good grasp on exactly what drives technological innovation and improvement. Is it basic scientific research? Early-stage R&D? Learning by doing? Economies of scale? 

If we want to make clean technologies cheaper, we need a better understanding of how the process works. Among other things, Silicon Valley types are spending billions on “moonshot” startup initiatives — it would be nice if that money were spent effectively.

There is a voluminous academic literature on these subjects, but a new paper in the journal Energy Policy helps to cut through the fog. It focuses on one specific technology and seeks to identify, and quantify, the various forces that drove down costs.

That technology: good old solar photovoltaic (PV) panels, which have declined in cost by around 99 percent over recent decades.

The authors are MIT associate professor Jessika Trancik, postdoc Goksin Kavlak, and research scientist James McNerney. They are part of a team that, working with the Department of Energy’s Solar Energy Evolution and Diffusion Studies (SEEDS) program, is attempting to develop an overarching theory of technology innovation, using solar PV as its focus. 

“Evaluating the Causes of Photovoltaics Cost Reduction” lays out the results — what caused PV costs to decline so fast, and when. 

The details are worth examining, but the big lesson is pretty simple: It didn’t just happen. It was driven, at every stage, by smart public policy. 

Solar PV has gotten cheaper at a positively ridiculous rate

First, by way of background, it’s important to wrap your head around the remarkable evolution of solar PV. Again, solar module costs have dropped by around 99 percent over the past 40 years.

Suffice it to say, those declines have continued since 2015, and market experts expect them to accelerate for the foreseeable future. 

Solar PV has defied all projections, continuing to get cheaper and deploy faster — even as experts predict, again and again, that it will level off.

This headlong decline in costs is a baffling and amazing phenomenon. It demands explanation.

There have been many studies on the subject, of course, but most have relied on “correlational analysis,” tying the drop in PV costs to other ongoing trends. For instance, it is popular to point out, based in part on this paper, that PV costs drop by about 20 percent for every doubling of cumulative capacity (the two trends correlate). 
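That 20-percent-per-doubling relationship (often called a learning curve, or Wright’s law) is simple to state as a formula. A minimal sketch, using the 20 percent rate as an illustrative input:

```python
import math

def learning_curve_cost(initial_cost: float, capacity_ratio: float,
                        learning_rate: float = 0.20) -> float:
    """Cost after cumulative capacity grows by `capacity_ratio`,
    falling by `learning_rate` with each doubling (Wright's law)."""
    doublings = math.log2(capacity_ratio)
    return initial_cost * (1.0 - learning_rate) ** doublings

# Ten doublings (a 1,024x capacity increase) at a 20% learning rate
# leave about 10.7% of the original cost, i.e. a ~89% decline.
print(round(learning_curve_cost(1.0, 1024), 3))  # 0.107
```

This is exactly the kind of correlational relationship the MIT paper pushes past: it describes the trend but, on its own, says nothing about which underlying changes caused it.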

There are also device-level studies that examine the components of PV systems, and their contribution to costs, at a snapshot in time.

“Missing from these studies,” the team at MIT writes, “is a method of accurately quantifying how each change to a feature of the technology or manufacturing process contributes to cost reductions, when many changes occur simultaneously.” That’s what the team has attempted to create — a dynamic model that can distinguish and quantify the component causes of price declines over time.


The case for “conditional optimism” on climate change

[Note:  This item comes from reader Randall Head.  DLH]

Limiting the damage requires rapid, radical change — but such changes have happened before.
By David Roberts
Dec 28 2018

Is there any hope on climate change, or are we just screwed?

I hear this question all the time. When people find out what I do for a living, it is generally the first thing they ask. I never have a straightforward or satisfying answer, so I usually dodge it, but in recent years it has come up more and more often.

So let’s tackle it head on. In this post, I will lay out the case for pessimism and the case for (cautious) optimism, pivoting off a new series of papers from leading climate economists. 

First, though, let’s talk about the question itself, which contains a number of dubious assumptions, and see if we can hone it into something more concrete and answerable.

“Is there hope?” is the wrong question

When people ask about hope, I don’t think they are after an objective assessment of the odds. Hope is not a prediction that things will go well. It’s not a forecast or an expectation. But then, what is it exactly?

It’s less intellectual than emotional; it’s a feeling. As I wrote at length in this old post, the feeling people are groping for is fellowship. People can face even overwhelming odds with good spirits if they feel part of a community dedicated to a common purpose. What’s terrible is not facing great threat and long odds — what’s terrible is facing them alone. Happily, those working to address climate change are not alone. There are more people involved and more avenues for engagement every day. There’s plenty of fellowship to be found.

More importantly, though, when it comes to climate change, “Is there hope?” is just a malformed question. It mistakes the nature of the problem.

The atmosphere is steadily warming. Things are going to get worse for humanity the more it warms. (To be technical about it, there are a few high-latitude regions that may see improved agricultural production or more temperate weather in the short- to mid-term, but in the long haul, the net negative global changes will swamp those temporary effects.) 

The international community has agreed, most recently in the Paris climate accord, to try to limit the rise in global average temperature to no more than 2 degrees Celsius above preindustrial levels, with efforts to keep it to 1.5 degrees.

But there’s nothing magic about 2 degrees. It doesn’t mark a line between not-screwed and screwed. 

In a sense, we’re already screwed, at least to some extent. The climate is already changing and it’s already taking a measurable toll. Lots more change is “baked in” by recent and current emissions. One way or another, when it comes to the effects of climate change, we’re in for worse. 

But we have some choice in how screwed we are, and that choice will remain open to us no matter how hot it gets. Even if temperature rise exceeds 2 degrees, the basic structure of the challenge will remain the same. It will still be warming. It will still get worse for humanity the more it warms. Two degrees will be bad, but three would be worse, four worse than that, and five worse still. 

Indeed, if we cross 2 degrees, the need for sustainability becomes more urgent, not less. At that point, we will be flirting with non-trivial tail risks of species-threatening — or at least civilization-threatening — effects. 

In sum: humanity faces the urgent imperative to reduce greenhouse gas emissions, then eliminate them, and then go “net carbon negative,” i.e., absorb and sequester more carbon from the atmosphere than it emits. It will face that imperative for several generations to come, no matter what the temperature is.

Yes, it’s going to get worse, but nobody gets to give up hope or stop fighting. Sorry.

Rather than just rejecting the question, though, let’s give it a little more specificity, so we can discuss some real answers. Let’s ask: What are the reasonable odds that the current international regime, the one that will likely be in charge for the next dozen crucial years, will reduce global carbon emissions enough to hit the 2 degree target?

Remember, the answer to that question will not tell us whether there is hope, or whether we’re screwed. But it will tell us a great deal about what we’re capable of, whether we can restrain and channel our collective development in a sustainable direction.

With all that said, let’s get to the papers.


Why “Wi-Fi 6” Tells You Exactly What You’re Buying, But “5G” Doesn’t Tell You Anything.

By Harold Feld
Dec 28 2018

Welcome to 2019, where you will find aggressively marketed to you a new upgrade in Wi-Fi called “Wi-Fi 6” and just about every mobile provider will try to sell you some “new, exciting, 5G service!” But funny thing. If you buy a new “Wi-Fi 6” wireless router you know exactly what you’re getting. It supports the latest IEEE 802.11ax protocol, operating on existing Wi-Fi frequencies of 2.4 GHz and 5 GHz, and any other frequencies listed on the package. By contrast, not only does the term “5G” tell you nothing about the capabilities (or frequencies, for them what care) of the device, but what “5G” means will vary tremendously from carrier to carrier. So while you can fairly easily decide whether you want a new Wi-Fi 6 router, and then just buy one from anywhere, you are going to want to very carefully and very thoroughly interrogate any mobile carrier about what their “5G” service does and what limitations (including geographic limitations) it has.
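The contrast comes down to naming: the Wi-Fi Alliance’s generation labels map one-to-one onto IEEE 802.11 standards, while “5G” has no such fixed mapping. A small sketch of that lookup (the fallback string is illustrative, not an official designation):

```python
# Wi-Fi Alliance generation names correspond to specific IEEE 802.11 standards.
WIFI_GENERATIONS = {
    "Wi-Fi 4": "802.11n",
    "Wi-Fi 5": "802.11ac",
    "Wi-Fi 6": "802.11ax",
}

def standard_for(label: str) -> str:
    """Return the IEEE standard behind a generation label, if one exists."""
    return WIFI_GENERATIONS.get(label, "no single standard: ask the carrier")

print(standard_for("Wi-Fi 6"))  # 802.11ax
print(standard_for("5G"))       # no single standard: ask the carrier
```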

Why the difference? It’s not simply that we live in a world where the Federal Trade Commission (FTC) lets mobile carriers get away with whatever marketing hype they can think up, such as selling “unlimited” plans that are not, in fact, unlimited. It has to do with the fact that back in the early 00s, the unlicensed spectrum/Wi-Fi community decided to solve the confusion problem by eliminating confusion, whereas the licensed/mobile carrier world decided to solve the confusion problem by embracing it. As I explain below, that wasn’t necessarily the wrong decision given the nature of licensed mobile service v. unlicensed. But it does mean that 5G will suffer from Forrest Gump Syndrome for the foreseeable future. (“5G is like a box of chocolates, you never know what to expect.”) It also means that, for the foreseeable future, consumers will basically need to become experts in a bunch of different technologies to figure out what flavor of “5G” they want, or whether to just wait a few years for the market to stabilize.

More below . . . .

What Is A “G” Again?

I covered this several months ago in my rather lengthy post trying to explain what the heck 5G actually does and whether it’s got any substance or is just a marketing scam (answer: some of each, but for consumers mostly scam at the moment). A “G” for wireless is a very informal term meaning a new generation of technology. Typically, we mean a shift in technology that radically changes the capabilities and the network architecture of wireless systems. Often this includes opening new frequencies (which may or may not mean different physics, which also impacts the potential capabilities and mandates changes in the architecture). For example, the shift from “2G” to “3G” meant going from a purely voice system to a system capable of some data functions that were fairly low-bandwidth, such as reading email. The shift from “3G” to “4G” meant going from a system designed primarily for voice to a system designed as a data network using packet-switched technology able to perform the same functions as a wireline Internet connection (adjusting for things like reliability, which is always going to be worse for wireless than wireline for reasons I will not get into now).

Because these “Gs” describe a shift in capabilities (with the implied massive shift in architecture and protocols necessary to support this upgrade in capabilities), the term “G” is of necessity pretty loosey-goosey about the precise technologies (and therefore the precise capabilities) employed. Back in 3G, we had two fairly widespread technologies that were globally accepted as “3G”: CDMA and GSM. In the late 00s, as we ramped up to 4G, we had a combination of possible technologies that all represented a significant jump over 3G capabilities, although in somewhat different ways. For a while we had T-Mobile selling HSPA+ as 4G, while Sprint pushed WiMax as 4G and AT&T and Verizon pushed LTE as 4G. Eventually, for reasons I won’t get into here, LTE beat out HSPA+ and WiMax to become the default technology for 4G mobile services worldwide. 

Because for the last 7 or 8 years “4G” has been synonymous with LTE, we’ve gotten used to the idea that describing something as 4G rather than 3G means a specific technology (LTE) and a specific set of capabilities (it does what LTE enables). But that’s not because some standards body like 3GPP or an agency like the Federal Communications Commission (FCC) defined what “4G” meant. It happened because the global market, which made economies of scale and global interoperability so important, drove carriers to all select one technology for the current generation of mobile services. 

But as I described back in the summer, what we call “5G” is actually a rather confused set of new technologies and functionalities that have yet to settle out. The one thing they all have in common is that they differ either in network architecture, or frequency, or both, from existing LTE networks. So it’s completely honest, albeit extremely confusing for consumers, for AT&T to call a new mobile product incorporating some of these features “5G Evolution” and for Verizon to call its fixed wireless cable competitor “5G,” while T-Mobile claims that its essentially 4G deployment on its new 600 MHz spectrum is “5G” because it is an entirely new set of frequencies for mobile services.


2018 Was A Trying Year For Social Media Platforms–And Their Users: Three Pathways Forward

By John Bowers
Dec 27 2018

The public’s trust in social media platforms is swiftly eroding – the 2018 Edelman Trust Barometer reports that global trust in social media stands at just 41%, with a year-on-year drop of 11% in the United States. Against this troubling backdrop, a contentious public debate has emerged regarding the platforms’ future, one which has increasingly turned toward calls for regulation. It is now impossible to ignore the unprecedented level of influence wielded by social media companies. They design the algorithms responsible for curating the material that billions of people consume every day, unilaterally decide which forms of legal but objectionable content are taken down or left up, and determine which sensitive information about users advertisers should be able to leverage in targeting advertisements. As private enterprises, their decisions are subject to a minimal degree of oversight and control on the part of the public and its representatives. In short, they wield enormous power without accountability – and there has been a lot to account for.

The scandals and controversies faced by social media companies in 2018 reflect two major problems. First, that platforms’ aggressive collection and handling of user data often implicates very significant – even unprecedented – privacy and security concerns. Facebook, for example, came under fire for allowing the shadowy political consultancy group Cambridge Analytica to harvest data from millions of users under the guise of academic research – a misstep which landed Facebook founder Mark Zuckerberg in front of Congress – and Google moved up the timeline for the shutdown of its Google+ social network following multiple data leaks.

Second, that platforms have failed to fully recognize the responsibilities that accompany their expansive powers over determining the content to which users are exposed. Twitter – among other platforms including Spotify, Youtube, Apple, and Facebook – faced controversy for the unilateral nature of its decision to ban conspiracy theorist Alex Jones, even as many applauded the move. And governmental inquiries into foreign actors’ use of disinformation to manipulate public opinion and undercut democratic processes through social media continued, including an inaugural “International Grand Committee on Disinformation” held in London this November (this time, Zuckerberg didn’t show up, despite a request from lawmakers from nine countries).

We are starting to make meaningful progress toward addressing the first problem. 2018 saw some of the first real action toward regulating how big technology companies deal with user data – at least in Europe. In May, the European Union began enforcing the General Data Protection Regulation (GDPR), a sweeping set of reforms which have already substantially reconfigured how businesses handle, share, and sell user data. The GDPR creates new standards for reporting data breaches, binds companies to a stringent user consent framework, and gives individuals more control over how their data is represented and shared. Perhaps most importantly, the GDPR invests regulators with the power to levy enormous fines – potentially running billions of dollars for the likes of Google and Facebook – against companies which violate its mandates. India, which is home to almost 500 million internet users, has followed suit, and is moving toward an expansive data protection bill of its own. No viable legislation of comparable scope has emerged in the United States, which has almost no broadly applicable data privacy regulations (though specific rules do exist in industries like banking). Even so, the size and influence of the European market means that many US businesses have had to adapt to the GDPR regardless. And California lawmakers have taken action at the state level, passing the California Consumer Privacy Act – legislation which provides some GDPR-style rights and protections to Californians.
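For a sense of what “enormous fines” means concretely: the GDPR’s upper tier caps penalties at the greater of EUR 20 million or 4 percent of worldwide annual turnover. A minimal sketch of that calculation:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine cap: the greater of EUR 20 million or
    4% of worldwide annual turnover (GDPR Article 83(5))."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 100 billion in annual turnover faces a cap in the
# billions, while a small firm is still exposed to the EUR 20 million floor.
print(gdpr_max_fine(100e9))
print(gdpr_max_fine(5e6))
```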

But data protection standards alone aren’t enough to make social media companies accountable – they can’t solve our second problem. What makes social media companies so enormously influential – and, in some instances, so dangerous – is their ability to determine what users see and when they see it. The problems with this largely unrestricted power don’t dissolve in the presence of robust user consent frameworks and privacy-driven data handling architectures. And it’s tough to hold social media companies accountable when they neglect to wield it responsibly, even on a disastrous scale.

Take, for example, disinformation campaigns orchestrated on Facebook and Twitter by Russian agents hoping to manipulate the outcomes of United States elections. Russian intelligence didn’t have to rely on privacy and security flaws. Disinformation worked because it was optimized to reach vulnerable audiences, and because the platforms often treated it like any other form of content – cat videos, say, or articles from the New York Times. Two Senate Select Committee on Intelligence reports released in mid-December highlighted the scope and complexity of these operations, as well as the limited nature of technology companies’ cooperation with government agencies.

Facebook and Twitter at first simply sat back and watched their systems spread whatever content (however dangerous) triggered the right algorithms. Today they appear to be developing proactive content moderation that takes sides against certain pages and publishers – even at the cost of controversy and allegations of censorship. But the policies governing content moderation and the tools with which it is carried out have been – and will likely continue to be – developed and deployed largely behind closed doors, with limited public input. And outside of bad PR, the platforms have not borne the costs of abusive user behavior – provisions like Section 230 of the United States’ Communications Decency Act insulate social media platforms from most liability relating to user-generated content. Though, as Eric Goldman writes, CDA 230 itself has faced a number of recent reductions in its scope – and more dramatic change may be on the horizon, as US and UK lawmakers including Senator Mark Warner (D-VA) call for less forgiving platform liability standards.

So if the progress we’ve seen around data privacy and security standards in 2018 – at least in Europe – isn’t enough to solve social media’s problems, what would be? The last year has provided some hints as to what a more comprehensive, accountability-focused regulatory solution might look like.

First, some social media platforms have taken steps toward greater transparency and more robust self-governance. In October, Twitter released a massive dataset listing accounts and content it had linked to Russian disinformation efforts. A month later, Mark Zuckerberg released a lengthy essay describing Facebook’s efforts to ensure better transparency and policymaking around content moderation practices, including a plan to explore the possibility of independent oversight by what Zuckerberg has called a Facebook “Supreme Court.” But while these steps show promise, they might be too little, too late – and platforms definitely aren’t willing to hand over the reins to regulators and critics. The solutions that Facebook and its peers implement will almost certainly be incremental – carefully calibrated tweaks which protect their core business models. And with public trust in social media running low, quicker and more decisive action might prove necessary.


How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.

[Note:  This item comes from friend Geoff Goodfellow.  DLH]

How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.
By Max Read
Dec 26 2018

In late November, the Justice Department unsealed indictments against eight people accused of fleecing advertisers of $36 million in two of the largest digital ad-fraud operations ever uncovered. Digital advertisers tend to want two things: people to look at their ads and “premium” websites — i.e., established and legitimate publications — on which to host them.

The two schemes at issue in the case, dubbed Methbot and 3ve by the security researchers who found them, faked both. Hucksters infected 1.7 million computers with malware that remotely directed traffic to “spoofed” websites — “empty websites designed for bot traffic” that served up a video ad purchased from one of the internet’s vast programmatic ad-exchanges, but that were designed, according to the indictments, “to fool advertisers into thinking that an impression of their ad was served on a premium publisher site,” like that of Vogue or The Economist. Views, meanwhile, were faked by malware-infected computers with marvelously sophisticated techniques to imitate humans: bots “faked clicks, mouse movements, and social network login information to masquerade as engaged human consumers.” Some were sent to browse the internet to gather tracking cookies from other websites, just as a human visitor would have done. Fake people with fake cookies and fake social-media accounts, fake-moving their fake cursors, fake-clicking on fake websites — the fraudsters had essentially created a simulacrum of the internet, where the only real things were the ads.

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”
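The logic behind the feared “Inversion” can be sketched as a toy model — entirely hypothetical, and not a description of YouTube’s actual systems: any detector that treats the statistically typical visitor as human will start passing bots once they supply most of the traffic, and flagging the real people instead.

```python
# Toy model of the "Inversion": a naive detector that assumes the
# majority behavior pattern must be human. All names, patterns, and
# numbers here are hypothetical illustrations.

from collections import Counter

def majority_baseline_detector(sessions):
    """Label the most common behavior pattern 'human', everything else 'bot'."""
    baseline, _ = Counter(s["pattern"] for s in sessions).most_common(1)[0]
    return ["human" if s["pattern"] == baseline else "bot" for s in sessions]

# Before the Inversion: humans dominate, so the heuristic works.
mostly_human = [{"pattern": "organic"}] * 60 + [{"pattern": "scripted"}] * 40
print(majority_baseline_detector(mostly_human).count("bot"))  # 40 -- the scripted sessions

# After the Inversion: bots dominate, and the same logic flips.
# Now the 40 genuinely human sessions are the ones flagged as fake.
mostly_bot = [{"pattern": "organic"}] * 40 + [{"pattern": "scripted"}] * 60
print(majority_baseline_detector(mostly_bot).count("bot"))    # 40 -- but these are the humans
```

The same number of sessions gets flagged in both runs; what flips is which side of the traffic the system believes is real — which is exactly the failure mode the YouTube employees were worried about.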

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed may be both at once, or in succession, as you turn it over in your head.

The metrics are fake.

Take something as seemingly simple as how we measure web traffic. Metrics should be the most real thing on the internet: They are countable, trackable, and verifiable, and their existence undergirds the advertising business that drives our biggest social and search platforms. Yet not even Facebook, the world’s greatest data-gathering organization, seems able to produce genuine figures. In October, small advertisers filed suit against the social-media giant, accusing it of covering up, for a year, its significant overstatements of the time users spent watching videos on the platform (by 60 to 80 percent, Facebook says; by 150 to 900 percent, the plaintiffs say). According to an exhaustive list at MarketingLand, over the past two years Facebook has admitted to misreporting the reach of posts on Facebook Pages (in two different ways), the rate at which viewers complete ad videos, the average time spent reading its “Instant Articles,” the amount of referral traffic from Facebook to external websites, the number of views that videos received via Facebook’s mobile site, and the number of video views in Instant Articles.

Can we still trust the metrics? After the Inversion, what’s the point? Even when we put our faith in their accuracy, there’s something not quite real about them: My favorite statistic this year was Facebook’s claim that 75 million people watched at least a minute of Facebook Watch videos every day — though, as Facebook admitted, the 60 seconds in that one minute didn’t need to be watched consecutively. Real videos, real people, fake minutes.
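The counting rule behind that claim can be shown with a toy tally — the function name and segment data are hypothetical, not Facebook’s actual accounting: a viewer whose watch time merely sums to 60 seconds across scattered fragments counts as having “watched a minute,” even if no single stretch came close.

```python
# Hypothetical illustration of a "watched at least one minute" metric
# in which the 60 seconds need not be consecutive.

def watched_a_minute(segments, threshold=60):
    """Count a viewer if their watch-time segments merely sum to the threshold."""
    return sum(segments) >= threshold

# Twelve scattered five-second glances, never more than 5s in a row:
print(watched_a_minute([5] * 12))  # True -- counted as a one-minute viewer

# One continuous 45-second view falls short:
print(watched_a_minute([45]))      # False
```

Under a rule like this, twelve stray glances outrank one attentive viewing — “real videos, real people, fake minutes,” reduced to a single sum.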