The Much-Needed and Sane Congressional Office That Gingrich Killed Off and We Need Back

Our technological choices are becoming ever more complex. Don’t you think our Senators and Representatives need some nonpartisan help?
Oct 26 2012

On October 13, 1972, in an act of bipartisanship and scientific literacy that we would be shocked to see come out of Congress today, the Technology Assessment Act was put into law. This act created the Office of Technology Assessment (OTA), an organization responsible for providing Congress with authoritative and unbiased reports on a wide range of present and emerging issues in science and technology.

When the OTA was established we were rocketing into — well, space, but also into — an age of ubiquitous, pervasive, fast-growing, and complex technologies that simultaneously promised us great benefits and great risks. 

The Cold War brought on a persistent panic over national security. Information technologies were reshaping the way that people connected and interacted. And 10 years earlier Silent Spring, the book that kickstarted the environmental movement, helped sensitize the public to hazards surrounding pollution, pesticides, and general technological control over nature.

All of this (and more) made obvious the inextricable coupling between policy and science. As such, Congress recognized that it could not afford to wander blindly forward without an organization that would bridge technical expertise and political decision-making.

Yet, capture by a particular political party posed a real threat to the OTA’s success and authority. To prevent this from happening, it was overseen by the “Technology Assessment Board,” which was made up of 13 total members: a non-voting director, six senators (three each from the minority and majority party), and six representatives (three each again).

In addition, OTA was known to provide a range of policy options in its reports, but without advocating for a specific one. This allowed policy-makers in Congress to weigh the choices themselves, and helped add to the OTA’s non-partisan persona.

Throughout its existence it released over 750 studies on an impressive range of topics, spanning the environment (acid rain, climate change, and resource use), national security (technology transfer to China and bioterrorism), health (disease and medical-waste management), and social issues (workplace automation and how technology affects certain social groups).

However, despite the OTA’s pragmatic style, attention to societal impact, and the international praise lavished on its thorough and accessible reports, the 1980 book Fat City: How Washington Wastes Your Taxes argued that the OTA was redundant and unnecessary. This signaled the beginning of its long, politically charged demise.

More political unease followed when the OTA released a controversial 1984 report that all but called one of President Reagan’s pet projects — the space-based missile system, the Strategic Defense Initiative (SDI) — a wishful fantasy. This report was followed by two additional studies, released in 1985 and 1988, that were even more in-depth and just as damning. The 1988 report noted that the SDI had a noticeable possibility of ending up as a “catastrophic failure.” 

All of this led up to the OTA’s final death knell in 1995, when it was placed on the Gingrich Republicans’ altar of slashed budgets. In a 2005 article from the Bulletin of the Atomic Scientists titled “Requiem for an Office” [PDF], Chris Mooney describes how defunding the OTA was as much a political performance as it was a way of making room for new, ideology-friendly science advisory roles:

In OTA’s absence, however, the new Republican majority could freely call upon its own favorable scientific “experts” and rely upon more questionable and self-interested analyses prepared by lobbyists, think tanks, and interest groups. A 2001 comment by Gingrich, explaining the reason OTA was killed, pretty much said it all: “We constantly found scientists who thought what they were saying was not correct.”

Though the OTA has been defunded for 17 years, there has been vocal support from many prominent scholars and politicians for either re-funding it or establishing a similar method of technology assessment. For example, Representative Rush Holt wrote an op-ed in Wired a few years ago that argued for “restoring a once robust science resource to its rightful place.” And the Woodrow Wilson International Center for Scholars released a 2010 report titled “Reinventing Technology Assessment: A 21st-Century Model” that drew on lessons learned from the OTA as a jumping-off point for creating a contemporary method of assessment.

David H. Guston, a leading scholar on the OTA and co-director of the Consortium for Science, Policy & Outcomes as well as Director of the Center for Nanotechnology in Society, told me that the ideas behind it still hold relevance today for two main reasons.

“The first, to provide the Congress with an analytical capacity in issues of science and technology, is evergreen” because the legislative branch is often at a disadvantage in its possession of technical expertise and advice when compared to the executive branch. The OTA served as a way of filling this void.

Guston described the second reason as the ability to, “look toward and even over the horizon on scientific and technical issues and help decision makers understand and make policies for emerging technologies like synthetic biology or military drones, or infrastructural challenges like climate adaptation, or resource management issues like fisheries, is perhaps even more important than it was when OTA was chartered 40 years ago, because of the vast scope and increasing pace of knowledge production and technological change.”

The OTA holds an important place in history, and as technology becomes increasingly complex, socially embedded, and wrapped up in policy, we should look to it for lessons about how to apply these techniques in the present and future.


How Journalists Covered the Rise of Mussolini and Hitler

Reports on the rise of fascism in Europe were not the American media’s finest hour
By John Broich
Dec 13 2016

How to cover the rise of a political leader who’s left a paper trail of anti-constitutionalism, racism and the encouragement of violence? Does the press take the position that its subject acts outside the norms of society? Or does it take the position that someone who wins a fair election is by definition “normal,” because his leadership reflects the will of the people?

These are the questions that confronted the U.S. press after the ascendance of fascist leaders in Italy and Germany in the 1920s and 1930s.

A leader for life

Benito Mussolini secured Italy’s premiership by marching on Rome with 30,000 blackshirts in 1922. By 1925 he had declared himself leader for life. While this hardly reflected American values, Mussolini was a darling of the American press, appearing in at least 150 articles from 1925 to 1932, most of them neutral, bemused, or positive in tone.

The Saturday Evening Post even serialized Il Duce’s autobiography in 1928. Acknowledging that the new “Fascisti movement” was a bit “rough in its methods,” papers ranging from the New York Tribune to the Cleveland Plain Dealer to the Chicago Tribune credited it with saving Italy from the far left and revitalizing its economy. From their perspective, the post-WWI surge of anti-capitalism in Europe was a vastly worse threat than Fascism.

Ironically, while the media acknowledged that Fascism was a new “experiment,” papers like The New York Times commonly credited it with returning turbulent Italy to what it called “normalcy.”

Yet some journalists like Hemingway and journals like the New Yorker rejected the normalization of anti-democratic Mussolini. John Gunther of Harper’s, meanwhile, wrote a razor-sharp account of Mussolini’s masterful manipulation of a U.S. press that couldn’t resist him.

The ‘German Mussolini’

Mussolini’s success in Italy normalized Hitler’s success in the eyes of the American press, which, in the late 1920s and early 1930s, routinely called him “the German Mussolini.” Given Mussolini’s positive press reception in that period, it was a good place from which to start. Hitler also had the advantage that his Nazi party enjoyed stunning leaps at the polls from the mid-’20s to the early ’30s, going from a fringe party to winning the largest share of parliamentary seats in free elections in 1932.

But the main way that the press defanged Hitler was by portraying him as something of a joke. He was a “nonsensical” screecher of “wild words” whose appearance, according to Newsweek, “suggests Charlie Chaplin.” His “countenance is a caricature.” He was as “voluble” as he was “insecure,” stated Cosmopolitan.

When Hitler’s party won influence in Parliament, and even after he was made chancellor of Germany in 1933 – about a year and a half before seizing dictatorial power – many American press outlets judged that he would either be outplayed by more traditional politicians or that he would have to become more moderate. Sure, he had a following, but his followers were “impressionable voters” duped by “radical doctrines and quack remedies,” claimed The Washington Post. Now that Hitler actually had to operate within a government the “sober” politicians would “submerge” this movement, according to The New York Times and Christian Science Monitor. A “keen sense of dramatic instinct” was not enough. When it came time to govern, his lack of “gravity” and “profundity of thought” would be exposed.

In fact, The New York Times wrote after Hitler’s appointment to the chancellorship that success would only “let him expose to the German public his own futility.” Journalists wondered whether Hitler now regretted leaving the rally for the cabinet meeting, where he would have to assume some responsibility.

Yes, the American press tended to condemn Hitler’s well-documented anti-Semitism in the early 1930s. But there were plenty of exceptions. Some papers downplayed reports of violence against Germany’s Jewish citizens as propaganda like that which had proliferated during the previous World War. Many, even those who categorically condemned the violence, repeatedly declared it to be at an end, showing a tendency to look for a return to normalcy.

Journalists were aware that they could only criticize the German regime so much and maintain their access. When a CBS broadcaster’s son was beaten up by brownshirts for not saluting the Führer, he didn’t report it. When the Chicago Daily News’ Edgar Mowrer wrote that Germany was becoming “an insane asylum” in 1933, the Germans pressured the State Department to rein in American reporters. Allen Dulles, who eventually became director of the CIA, told Mowrer he was “taking the German situation too seriously.” Mowrer’s publisher then transferred him out of Germany, fearing for his life.

By the later 1930s, most U.S. journalists realized their mistake in underestimating Hitler or failing to imagine just how bad things could get. (Though there remained infamous exceptions, like Douglas Chandler, who wrote a loving paean to “Changing Berlin” for National Geographic in 1937.) Dorothy Thompson, who judged Hitler a man of “startling insignificance” in 1928, realized her mistake by mid-decade when she, like Mowrer, began raising the alarm.


Geeky license plate earns hacker $12,000 in parking tickets

[Note:  This item comes from friend David Rosenthal.  See also:  DLH]

A California man’s vanity license plate backfires spectacularly.
Aug 13 2019

The relationship between Americans and their automobiles is a complicated one. More than mere transport, cars can become extensions of one’s personality—think of stereotypes about drivers of a particular model like a Corvette, for instance. Since cars are mass-produced, it’s natural that people want to personalize them. Sometimes it’s covering them with every bit of chromed plastic you can find at JC Whitney. Sometimes it’s plastering them in stickers. And sometimes, it might just be a personalized number plate.

The rules for personalized plates vary depending on the state in which you’re registering your car. These can foster creativity, but today we have a cautionary tale from California, which reveals the risks of being too creative. It’s the story of a security researcher known as Droogie, who presented his experience at the recent DEF CON conference in Las Vegas. Droogie decided his new vanity plate should read “NULL.” While he did this mainly for the giggles, he told the audience that there was an ulterior motive, as reported by Mashable:

“I was like, ‘I’m the shit,'” he joked to the crowd. “‘I’m gonna be invisible.’ Instead, I got all the tickets.”

Droogie’s hope was that the new plate would exploit California’s DMV ticketing system in a similar manner to the classic xkcd “Bobby Tables” cartoon. With any luck, the DMV’s ticket database would see “NULL” and consign any of his tickets to the void. Unfortunately, the exact opposite happened.

First, Droogie got a parking ticket, incurred for an actual parking infraction—so much for being invisible. Then, once a particular database of outstanding tickets had associated the license plate NULL with his address, it sent him every other ticket that lacked a real plate. The total came to $12,049 worth of tickets. Droogie told the DEF CON audience that he received little sympathy from either the California DMV or the Los Angeles Police Department, both telling him to just change his plate to something else. That remains something he refuses to do.
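The failure mode is worth spelling out. A plausible reconstruction, assuming the ticketing database recorded unreadable or missing plates as the literal text "NULL" rather than as a proper SQL NULL (the table and column names here are invented for illustration):

```python
import sqlite3

# Hypothetical reconstruction of the bug: tickets with no readable plate
# are stored with the *string* "NULL" instead of a real SQL NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, plate TEXT, fine REAL)")

conn.executemany(
    "INSERT INTO tickets (plate, fine) VALUES (?, ?)",
    [
        ("NULL", 35.0),     # Droogie's one actual parking infraction
        ("NULL", 60.0),     # unreadable plate, stored as the text "NULL"
        ("NULL", 95.0),     # another unreadable plate
        ("7ABC123", 40.0),  # unrelated car
        (None, 120.0),      # a properly stored SQL NULL, for contrast
    ],
)

# Looking up tickets for the vanity plate "NULL" sweeps in every record
# whose missing plate was stored as that string.
rows = conn.execute(
    "SELECT fine FROM tickets WHERE plate = ?", ("NULL",)
).fetchall()
print(len(rows), sum(f for (f,) in rows))  # 3 190.0
```

Note that the row stored with a genuine SQL NULL is not matched: in SQL, the comparison `NULL = 'NULL'` evaluates to unknown, not true. Had the system stored missing plates correctly, Droogie's plate would have matched only his own ticket.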


Advertisers Blacklisting News, Other Stories with “Controversial” Words Like “Trump”

[Note:  This item comes from friend David Rosenthal.  DLH]

By Yves Smith
Aug 16 2019

It’s no longer paranoid to say that “they” are out to kill news. First it was the Internet almost entirely displacing classified ads, which had accounted for roughly half of newspaper industry revenues in the US. The Internet also turned most people, save those who are now oldsters, off print newspapers, even though nothing is so efficient to scan, taking with it higher subscription rates and display ads. Then Facebook and Google sucked most online advertising revenues to themselves.

To add insult to injury, Google implemented algos hostile to smaller sites, first targeting those that did what Google deemed to be too much aggregation, like our daily Links feature. Google deemed those sites to be “low quality”. One wonders if the real issue was that they competed with Google News. Then Google downgraded sites it deemed not to be “authoritative,” whacking not only many left and right leaning sites but even The Intercept. Facebook’s parallel action was to change its search and newsfeed algos, supposedly to combat fake news, but also hurting left-leaning publishers.

Now, as the Wall Street Journal reports, many major advertisers have created blacklists, nixing ad placements that appear next to or in stories with headlines using naughty words like “bomb,” amounting to a partial or total ban on news content. It isn’t just fluffy feel-good brands that want to steer clear of controversy. Startlingly, even some financial services companies like Fidelity want to stay away from hot words like “Trump,” even though “Trump” appears regularly in business news headlines, such as ones discussing his China trade spat, his tax cuts, his deregulatory efforts, and, today, his interest in buying Greenland.

It appears advertisers just want us to take our Soma and shop rather than know about anything in the world at large. One wonders if words like “climate” and “strike” are on some blacklists. From the Journal:

Like many advertisers, Fidelity Investments wants to avoid advertising online near controversial content. The Boston-based financial-services company has a lengthy blacklist of words it considers off-limits.

If one of those words is in an article’s headline, Fidelity won’t place an ad there. Its list earlier this year, reviewed by The Wall Street Journal, contained more than 400 words, including “bomb,” “immigration” and “racism.” Also off-limits: “Trump.”….

“Political stories are, regardless of party affiliation, not relevant to our brand,” a Fidelity spokesman said in a written statement. The company also avoids several other topics that it says don’t align with published content about business and finance…

Integral Ad Science Inc., a firm that ensures ads run in content deemed safe for advertisers, said that of the 2,637 advertisers running campaigns with it in June, 1,085 brands blocked the word “shooting,” 314 blocked “ISIS” and 207 blocked “Russia.” Almost 560 advertisers blocked “Trump,” while 83 blocked “Obama.”

The average number of keywords the company’s advertisers were blocking in the first quarter was 261. One advertiser blocked 1,553 words, it said.
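Mechanically, this kind of keyword blocking is crude: if any word on the brand’s blacklist appears in a headline, the ad placement is rejected. A minimal sketch of that logic (the word list and tokenization rule are assumptions for illustration; real brand-safety vendors layer on far more elaborate classifiers):

```python
# A minimal sketch of headline keyword blocking, as the Journal describes it.
def blocked(headline: str, blacklist: set[str]) -> bool:
    """Return True if any blacklisted keyword appears in the headline."""
    words = {w.strip('.,!?:;"\'').lower() for w in headline.split()}
    return not words.isdisjoint(blacklist)

# A few entries modeled on the Fidelity list quoted above.
fidelity_style_list = {"bomb", "immigration", "racism", "trump"}

print(blocked("Trump Announces New Tariffs on Chinese Goods", fidelity_style_list))  # True
print(blocked("Five Ways to Save for Retirement", fidelity_style_list))              # False
```

Note how blunt the match is: a business story mentioning the Trump tax cuts is blocked just as readily as a political exposé, which is why such lists amount to a partial ban on news content.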

Notice how obituaries are on the verboten list for many advertisers. So would be articles discussing deaths of despair and many on the opioid epidemic. 

The Journal points out that advertisers have long had blacklists, but in the past they weren’t as widely used and were narrower. Then a 2017 Times of London story exposed how many advertisers were running ads on YouTube channels with hate speech, thanks to automated ad buys on online ad marketplaces. Up sprang a new mini-industry of “brand safety” firms that have generated blacklists so far-reaching as to imperil already-strained publishers:

Online news publishers are feeling the impact, from smaller outlets to large players such as USA Today-owner Gannett Co., the Washington Post and the Journal, according to news and ad executives.

The ad-blacklisting threatens to hit publications’ revenue and is creating incentives to produce more lifestyle-oriented coverage that is less controversial than hard news…

Consumer-products company Colgate-Palmolive Co., sandwich chain Subway and fast-food giant McDonald’s Corp. are among the many companies blocking digital ad placements in hard news to various degrees, according to people familiar with those companies’ strategies.

Some companies are creating keyword blacklists so detailed as to make almost all political or hard-news stories off-limits for their ads. “It is de facto news blocking,” said Megan Pagliuca, chief data officer at Hearts & Science, an ad-buying firm owned by Omnicom Group Inc.

The Guardian said some advertisers have banned the word “Brexit”. Advertisers like Subway claimed they wanted their ads to be associated with things like “positivity”. I don’t think eating fast food that makes dodgy pretenses about being healthy is positive, but to each his own.


Britain’s infrastructure is breaking down. And here’s why no one’s fixing it

[Note:  This item comes from friend David Rosenthal.  DLH]

There’s no lobby for public assets like the National Grid. That goes too for our libraries, youth clubs, parks and pubs
By Aditya Chakrabortty
Aug 14 2019

Whole swaths of Britain experience a blackout and the country lights up with fury. Cabinet ministers, the press, and members of the public rightly demand answers: why was Newcastle airport plunged into darkness? Who is responsible for rail services being halted for hours? Threats are issued of a whopping fine, an official inquiry, heads rolling. Days of rage for a power cut of less than an hour.

When it works, infrastructure is invisible. Point out the crumbliness, by all means, and lament the dangerous compromises – but as long as the wretched system judders on, voters shrug and politicians look the other way. Until the day the bridges collapse, the trains seize up and the lights no longer come on. By which time it is too late for anything but blame in 24-point headlines.

Between these two extremes lies a much rarer phenomenon, which blights Britain today. We are right in the middle of an infrastructure breakdown – we just haven’t named it yet. You’ll know what I mean when we list the component parts. More than 760 youth clubs have shut across the UK since 2012. A pub closes every 12 hours. Nearly 130 libraries were scrapped last year, and those that survive in England have lopped off 230,000 opening hours.

Each of the above is a news story. Each stings a different group: the books trade, the real-ale aficionados, the trade unions. But knit them together and a far darker picture emerges. Britain is being stripped of its social infrastructure: the institutions that make up its daily life, the buildings and spaces that host friends and gently push strangers together. Public parks are disappearing. Playgrounds are being sold off. High streets are fast turning to desert. These trends are national, but their greatest force is felt in the poorest towns and suburbs, the most remote parts of the countryside, where there isn’t the footfall to lure in the businesses or household wealth to save the local boozer.

When I am out reporting it is not uncommon to go into a suburban postcode short of money yet still bustling with people – but the banks have nearly all cleared out, the church has gone and all that’s left of the last pub is an empty hulk. The private sector has buggered off, the state is a remote and vengeful god who dispenses benefits or sanctions, and the “big society” never made it out of the pages of a report from a Westminster thinktank. I’ve seen this in the suburbs of London and in the valleys of south Wales, and the word that most comes to mind is “abandoned”.

Politicians bemoan the loss of community, but that resonant word is not precise enough. A large part of what’s missing is social infrastructure. It can be public or private. It is often slightly dog-eared and usually overlooked. But when it vanishes, the social damage can be huge.

The American sociologist Eric Klinenberg lists some in his recent book, Palaces for the People: “People reduce the time they spend in public settings and hunker down in their safe houses. Social networks weaken. Crime rises. Older and sick people grow isolated. Younger people get addicted to drugs … Distrust rises and civic participation wanes.” A New York University professor, Klinenberg’s observations hold as true for Brexit Britain as they do for Trump’s America. How often have you read about a grandmother found dead in her own home, with no one popping by for days? How many news stories do you read about teenagers experiencing mental illness as they compare themselves to the images on their screens? And how many times have you complained that everyone is so stuck in their own bubble that politics is hopelessly polarised?

In ripping out our social infrastructure, we are outraging a wisdom that goes back centuries and spans countries. Millions of Britons will spend part of this summer on a plaza or a piazza or people-watching on the public square outside Paris’s Centre Pompidou. The architectural historian Shumi Bose points out that library designs proliferated during the Enlightenment, alongside blueprints for monuments “to the exercise of the sovereignty of the people”. During the second world war, the Mass Observation collective wrote of the British pub: “Once a man has bought or been bought his glass of beer, he has entered an environment in which he is participant, rather than spectator.”


In order to understand the brutality of American capitalism, you have to start on the plantation.

[Note:  This item comes from friend Mike Cheponis.  DLH]

By Matthew Desmond
Aug 14 2019

A couple of years before he was convicted of securities fraud, Martin Shkreli was the chief executive of a pharmaceutical company that acquired the rights to Daraprim, a lifesaving antiparasitic drug. Previously the drug cost $13.50 a pill, but in Shkreli’s hands, the price quickly increased by a factor of 56, to $750 a pill. At a health care conference, Shkreli told the audience that he should have raised the price even higher. “No one wants to say it, no one’s proud of it,” he explained. “But this is a capitalist society, a capitalist system and capitalist rules.”

This is a capitalist society. It’s a fatalistic mantra that seems to get repeated to anyone who questions why America can’t be more fair or equal. But around the world, there are many types of capitalist societies, ranging from liberating to exploitative, protective to abusive, democratic to unregulated. When Americans declare that “we live in a capitalist society” — as a real estate mogul told The Miami Herald last year when explaining his feelings about small-business owners being evicted from their Little Haiti storefronts — what they’re often defending is our nation’s peculiarly brutal economy. “Low-road capitalism,” the University of Wisconsin-Madison sociologist Joel Rogers has called it. In a capitalist society that goes low, wages are depressed as businesses compete over the price, not the quality, of goods; so-called unskilled workers are typically incentivized through punishments, not promotions; inequality reigns and poverty spreads. In the United States, the richest 1 percent of Americans own 40 percent of the country’s wealth, while a larger share of working-age people (18-65) live in poverty than in any other nation belonging to the Organization for Economic Cooperation and Development (O.E.C.D.).

Or consider worker rights in different capitalist nations. In Iceland, 90 percent of wage and salaried workers belong to trade unions authorized to fight for living wages and fair working conditions. Thirty-four percent of Italian workers are unionized, as are 26 percent of Canadian workers. Only 10 percent of American wage and salaried workers carry union cards. The O.E.C.D. scores nations along a number of indicators, such as how countries regulate temporary work arrangements. Scores run from 5 (“very strict”) to 1 (“very loose”). Brazil scores 4.1 and Thailand, 3.7, signaling toothy regulations on temp work. Further down the list are Norway (3.4), India (2.5) and Japan (1.3). The United States scored 0.3, tied for second to last place with Malaysia. How easy is it to fire workers? Countries like Indonesia (4.1) and Portugal (3) have strong rules about severance pay and reasons for dismissal. Those rules relax somewhat in places like Denmark (2.1) and Mexico (1.9). They virtually disappear in the United States, ranked dead last out of 71 nations with a score of 0.5.

Those searching for reasons the American economy is uniquely severe and unbridled have found answers in many places (religion, politics, culture). But recently, historians have pointed persuasively to the gnatty fields of Georgia and Alabama, to the cotton houses and slave auction blocks, as the birthplace of America’s low-road approach to capitalism.

Slavery was undeniably a font of phenomenal wealth. By the eve of the Civil War, the Mississippi Valley was home to more millionaires per capita than anywhere else in the United States. Cotton grown and picked by enslaved workers was the nation’s most valuable export. The combined value of enslaved people exceeded that of all the railroads and factories in the nation. New Orleans boasted a denser concentration of banking capital than New York City. What made the cotton economy boom in the United States, and not in all the other far-flung parts of the world with climates and soil suitable to the crop, was our nation’s unflinching willingness to use violence on nonwhite people and to exert its will on seemingly endless supplies of land and labor. Given the choice between modernity and barbarism, prosperity and poverty, lawfulness and cruelty, democracy and totalitarianism, America chose all of the above.

Nearly two average American lifetimes (79 years) have passed since the end of slavery, only two. It is not surprising that we can still feel the looming presence of this institution, which helped turn a poor, fledgling nation into a financial colossus. The surprising bit has to do with the many eerily specific ways slavery can still be felt in our economic life. “American slavery is necessarily imprinted on the DNA of American capitalism,” write the historians Sven Beckert and Seth Rockman. The task now, they argue, is “cataloging the dominant and recessive traits” that have been passed down to us, tracing the unsettling and often unrecognized lines of descent by which America’s national sin is now being visited upon the third and fourth generations.

They picked in long rows, bent bodies shuffling through cotton fields white in bloom. Men, women and children picked, using both hands to hurry the work. Some picked in Negro cloth, their raw product returning to them by way of New England mills. Some picked completely naked. Young children ran water across the humped rows, while overseers peered down from horses. Enslaved workers placed each cotton boll into a sack slung around their necks. Their haul would be weighed after the sunlight stalked away from the fields and, as the freedman Charles Ball recalled, you couldn’t “distinguish the weeds from the cotton plants.” If the haul came up light, enslaved workers were often whipped. “A short day’s work was always punished,” Ball wrote.

Cotton was to the 19th century what oil was to the 20th: among the world’s most widely traded commodities. Cotton is everywhere, in our clothes, hospitals, soap. Before the industrialization of cotton, people wore expensive clothes made of wool or linen and dressed their beds in furs or straw. Whoever mastered cotton could make a killing. But cotton needed land. A field could only tolerate a few straight years of the crop before its soil became depleted. Planters watched as acres that had initially produced 1,000 pounds of cotton yielded only 400 a few seasons later. The thirst for new farmland grew even more intense after the invention of the cotton gin in the early 1790s. Before the gin, enslaved workers grew more cotton than they could clean. The gin broke the bottleneck, making it possible to clean as much cotton as you could grow.

Enslaved workers felled trees by ax, burned the underbrush and leveled the earth for planting. “Whole forests were literally dragged out by the roots,” John Parker, an enslaved worker, remembered. A lush, twisted mass of vegetation was replaced by a single crop. An origin of American money exerting its will on the earth, spoiling the environment for profit, is found in the cotton plantation. Floods became bigger and more common. The lack of biodiversity exhausted the soil and, to quote the historian Walter Johnson, “rendered one of the richest agricultural regions of the earth dependent on upriver trade for food.”

As slave labor camps spread throughout the South, production surged. By 1831, the country was delivering nearly half the world’s raw cotton crop, with 350 million pounds picked that year. Just four years later, it harvested 500 million pounds. Southern white elites grew rich, as did their counterparts in the North, who erected textile mills to form, in the words of the Massachusetts senator Charles Sumner, an “unhallowed alliance between the lords of the lash and the lords of the loom.” The large-scale cultivation of cotton hastened the invention of the factory, an institution that propelled the Industrial Revolution and changed the course of history. In 1810, there were 87,000 cotton spindles in America. Fifty years later, there were five million. Slavery, wrote one of its defenders in De Bow’s Review, a widely read agricultural magazine, was the “nursing mother of the prosperity of the North.” Cotton planters, millers and consumers were fashioning a new economy, one that was global in scope and required the movement of capital, labor and products across long distances. In other words, they were fashioning a capitalist economy. “The beating heart of this new system,” Beckert writes, “was slavery.”

Perhaps you’re reading this at work, maybe at a multinational corporation that runs like a soft-purring engine. You report to someone, and someone reports to you. Everything is tracked, recorded and analyzed, via vertical reporting systems, double-entry record-keeping and precise quantification. Data seems to hold sway over every operation. It feels like a cutting-edge approach to management, but many of these techniques that we now take for granted were developed by and for large plantations.


Elon Musk’s Neuralink: Both an evolution and a plan for radical change

Elon Musk’s Neuralink: Both an evolution and a plan for radical change
Neuralink will probably fail in interesting and worthwhile ways.
Aug 13 2019

When Elon Musk first started talking about launching a brain-computer interface company, he made a number of comments that set expectations for what that idea might entail. The company, he said, was motivated by his concerns about AI ending up hostile to humans: providing humans with an interface directly into the AI’s home turf might prevent hostilities from developing. Musk also suggested that he hoped to avoid any electrodes implanted in the brain, since that might pose a barrier to adoption.

At his recent public launch of the company (since named Neuralink), worries about hostile AIs did get a mention—but only in passing. Instead, we got a detailed technical description of the hardware behind Neuralink’s brain-computer interface, which would rely on surgery and implanted hardware. In the process, Neuralink went from something in the realm of science fiction to a company that would be pushing for an aggressive evolution of existing neural-implant hardware.

Those changes in tone and topic are a sign that Musk has been listening to the people he hired to build Neuralink. So, how precisely is Neuralink pushing the envelope on what we can already do in this space? And does it still veer a bit closer to science fiction in some aspects?

The big picture

Before taking a look at the individual components that Neuralink announced recently, let’s start with an overview of what the company hopes to accomplish technology-wise. The plan is to access the brain via a hole less than eight millimeters across. This small hole would allow Neuralink to implant an even smaller (4mm x 4mm) chip and its associated wiring into the brain. The chip will get power from, and communicate with, some wireless hardware located behind the ear, much like current cochlear implants.

Inside the brain, the chip will be connected to a series of small threads that carry electrodes to the relevant area, where they can listen in on the electrical activity of neurons. These threads will be put in place using a surgical robot, which allows the surgeon to insert them in a manner that avoids damaging blood vessels.

The chip will take the raw readings of neural activity and process them into a much more compact form that preserves key information, making it easier for the wireless hardware to transmit back across the skull. Electrical impulses can also be sent to the neurons via the same electrodes, stimulating brain activity. Musk thinks that it would be safe to insert as many as 10 of these chips into a single brain, though Neuralink will obviously start testing with far fewer.
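Neuralink hasn’t published the details of its on-chip processing, but a standard way to compact raw neural recordings is threshold-based spike detection: instead of streaming every voltage sample, keep only the times at which the signal crosses a spike threshold. The sketch below is a toy illustration of that general idea, with all names and parameters hypothetical, not Neuralink’s actual scheme:

```python
# Toy sketch of threshold-based spike detection, a common way to compress
# raw neural recordings. All names and thresholds here are hypothetical;
# Neuralink's actual on-chip processing has not been published.

def detect_spikes(samples, threshold):
    """Return the indices where the signal first crosses below `threshold`.

    Keeping only event times, rather than every raw sample, is what makes
    the data stream compact enough for a low-bandwidth wireless link.
    """
    spikes = []
    below = False
    for i, value in enumerate(samples):
        if value < threshold and not below:  # falling edge: a new spike event
            spikes.append(i)
            below = True
        elif value >= threshold:             # back above threshold: re-arm
            below = False
    return spikes

# A toy trace: mostly baseline noise, with two large negative deflections
# of the kind an extracellular electrode records when a nearby neuron fires.
trace = [0.1, -0.05, 0.2, -3.0, -2.5, 0.0, 0.1, -2.8, 0.05]
print(detect_spikes(trace, threshold=-1.0))  # -> [3, 7]
```

Nine raw samples reduce to two event indices here; over thousands of channels sampled continuously, that kind of reduction is what makes wireless transmission plausible at all.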

All of that is an evolution of some of the existing work on brain-computer interfaces. But the details behind some of these features provide a better sense of how Neuralink is pushing the field forward.

The robot

The Neuralink introduction included a video of the brain during surgery, revealing how the wrinkly organ constantly shifts with breathing and blood flow. This makes implanting electrodes a challenge, especially since much of the brain is laced with blood vessels that the electrodes could easily puncture. Plus, due to their incredibly small size, the electrodes themselves are susceptible to damage.

The robot keeps a surgeon in charge, but it turns the process of electrode implantation into something closer to a video game. Using a microscope integrated into the robot, a surgeon is given a static view of the underlying brain, thanks to software that compensates for the pulsing and shifting. With the static view, implanting the electrodes becomes something like a point-and-click activity: the surgeon selects a location, and the robot inserts the electrode there while compensating for any ensuing movement of the underlying tissue. Although the video made the insertion method look like a violent stab, the hardware protects the electrodes from damage at this point.
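Neuralink hasn’t described its stabilization software, but the basic idea behind such a “static view” is image registration: estimate how far each frame has shifted relative to a reference, then subtract that shift out. A minimal one-dimensional sketch of shift estimation by cross-correlation, with all names hypothetical:

```python
# Toy sketch of motion estimation by cross-correlation, the basic idea
# behind stabilizing a moving image. This is a 1-D illustration only;
# Neuralink's actual software has not been described.

def estimate_shift(reference, frame, max_shift):
    """Return the integer shift of `frame` relative to `reference` that
    maximizes their overlap correlation (1-D signals)."""
    best_shift, best_score = 0, float("-inf")
    n = len(reference)
    for s in range(-max_shift, max_shift + 1):
        # Correlate the two signals at candidate offset s, ignoring
        # samples that fall outside the frame.
        score = sum(
            reference[i] * frame[i + s]
            for i in range(n)
            if 0 <= i + s < n
        )
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

ref = [0, 0, 1, 5, 1, 0, 0, 0]
moved = [0, 0, 0, 0, 1, 5, 1, 0]  # the same feature, shifted right by 2
print(estimate_shift(ref, moved, max_shift=3))  # -> 2
```

A real system would need two-dimensional, sub-pixel registration running at video rate, but the principle is the same: estimate the tissue’s motion against a reference, then compensate for it so the surgeon sees a stationary target.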

This method certainly has the potential to make electrode implantation safer, in part by minimizing the risk of blood-vessel damage. But let me be clear: while the electrodes are small enough that they’re not dramatically larger than the neurons they interact with, there’s still the potential for damage to those neurons or their support cells during the electrode insertion, as well as some disruption of the connections among neurons. That potential may be lowered by the robot, but it’s not going away.

One other issue that the robot doesn’t obviously solve is that several of the images displayed during the Neuralink introduction showed the chips being located somewhere other than where the electrodes were targeted. There’s certainly enough play in the wiring of the electrodes to allow a bit of distance between the two, but it’s hard to understand how this can be managed with a single, small surgical incision.

The electrodes

In existing systems, the electrodes are their own distinct hardware component, but Neuralink is seeking to change this. The company hopes to do so by producing the metal portion of the electrodes as it’s building layers of metal into the chips used for processing the electrode data. This provides some real advantages, as the process technology used there already operates at the sort of fine scales that make structuring the electrodes easy.

This setup would also do away with any bulky connector hardware currently needed to link electrodes with the rest of the system—they’re already part of it. Presumably, Neuralink will manufacture chips with electrodes of different lengths to allow for flexibility in the implantation process.

In use, multiple electrodes will be combined into a single “thread,” with polymer layers providing insulation to avoid cross-talk. Additional polymer layers will protect the thread from the environment of the brain, which Vanessa Tolosa of Neuralink described as “harsh.” The electrode and polymer materials were both chosen to limit inflammatory and other immune responses.

Overall, this part of Neuralink’s approach seemed solid, although a full evaluation will have to wait for longer-term studies of a thread’s safety and useful lifetime inside an actual brain. Scar development was a real problem with early electrodes made by others, but further development has limited this problem. Presumably, Neuralink has already learned from others here.