Is a ‘Netflix effect’ killing prestige films?

By Steven Zeitchik
Nov 29 2018

Netflix may be great for independent-minded filmmakers.

But it’s bad for a lot of the companies that produce independent films — and maybe the film business as a whole.

At least that’s the argument quietly being advanced by executives in parts of the movie industry — specifically the parts that produce and distribute the upscale independent movies that seize the public imagination this time of the year.

These executives are looking at some tough box-office numbers for their movies and pointing the finger at Netflix, which for the first time this year has dived into a prestige-film pool at which it once only sunbathed. The subject was on the mind of many industry leaders at the New York-based Gotham Awards earlier this week (where Netflix films were very much in contention). One even had a word for it: the “Netflix Effect.” As in, “it’s really difficult to open movies because of the Netflix Effect.”

Netflix, as your queue might already tell you, is going hard on the prestige-film business this year. That business is characterized by movies of a more distinguished pedigree, which come out in the fourth quarter in the hope of landing awards and critical attention, and thus eyeballs. (For the purpose of this story we’re counting such films from both independent firms such as A24 and studio specialty divisions such as Fox Searchlight, but not the big studios, which operate at a different budget level and barely make these movies anymore anyway).

In past years, Netflix had one or two such titles: “The Meyerowitz Stories,” “Mudbound,” “Beasts of No Nation.” But this year, the streamer is really going for it, financing, producing, distributing, campaigning for or otherwise getting involved with some of the most decorated filmmakers around. It has hired a large staff, including one of the industry’s most respected awards consultants.

The company has already released five such films this quarter, including Tamara Jenkins’s fertility dramedy “Private Life” and Paul Greengrass’s far-right thriller “22 July,” with Alfonso Cuaron’s awards front-runner “Roma” still to come. And that’s yielding what rival executives say is a certain effect.

The hypothesis is basically this: Because Netflix is now releasing these kinds of movies and then making them available to us in the convenience of our homes, it’s deterring us from doing what we’ve long done, which is come out and pay for similar movies in theaters.

Essentially, it’s a cannibalization argument. Netflix may be nobly investing in these movies. Greengrass, who also made “Captain Phillips” and “United 93” but switched to Netflix for his latest work, said at the Gothams that Netflix is showing “tremendous support of difficult films.” But the availability of these kinds of films at home helps ensure that people don’t want to leave the house.

It’s as though everybody is a press person with access to their own online screening room. And we know how often those freeloaders pay for movie tickets.

(It should be noted that Netflix is releasing a few of these movies in theaters. But don’t be fooled by the headlines — the releases are of very limited scope. The company continues to hold to a principle that films should be on its service before or at the same time as theaters, a proposition most theater chains resist.)

If the argument for a Netflix effect is true, it would give the lie to the idea, put forth by the streamer and its filmmakers, that it is adding to and enhancing the market. In fact, it would make the opposite case: that Netflix’s releases could push the companies that have been making these movies for years to the economic brink.

The question is, is the contention really accurate? Are people less likely to come out to new upscale movies because titles of similar quality are suddenly available on Netflix?


How Restaurants Got So Loud

Fashionable minimalism replaced plush opulence. That’s a recipe for commotion.
Nov 27 2018

Let me describe what I hear as I sit in a coffee shop writing this article. It’s late morning on a Saturday, between the breakfast and lunch rushes. People talk in hushed voices at tables. The staff make pithy jokes amongst themselves, enjoying the downtime. Fingers clack on keyboards, and glasses clink against wood and stone countertops. Occasionally, the espresso machines grind and roar. The coffee shop is quiet, probably as quiet as it can be while still being occupied. Even at its slowest and most hushed, the average background noise level hovers around 73 decibels (as measured with my calibrated meter).

That’s not dangerous—noise levels become harmful to human hearing above 85 decibels—but it is certainly not quiet. Other sounds that reach 70 decibels include freeway noise, an alarm clock, and a sewing machine. But it’s still quiet for a restaurant. Others I visited in Baltimore and New York City while researching this story were even louder: 80 decibels in a dimly lit wine bar at dinnertime; 86 decibels at a high-end food court during brunch; 90 decibels at a brewpub in a rehabbed fire station during Friday happy hour.
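Because the decibel scale is logarithmic, the gaps between these readings are larger than they look: each 10 dB step is a tenfold increase in sound intensity. A minimal sketch of that arithmetic (the helper name is mine, not from the article):

```python
# Decibels are logarithmic: a difference of d dB corresponds to
# a 10**(d/10) ratio in sound intensity.

def intensity_ratio(db_a, db_b):
    """How many times more intense a db_a sound is than a db_b sound."""
    return 10 ** ((db_a - db_b) / 10)

# The 90 dB brewpub versus the 73 dB coffee shop:
print(round(intensity_ratio(90, 73), 1))  # roughly a 50x intensity ratio

# And an 85 dB room (the harm threshold) versus a 75 dB one:
print(intensity_ratio(85, 75))  # a full 10 dB step: 10x the intensity
```

So the brewpub at happy hour is not “17 points” louder than the coffee shop; it carries roughly fifty times the sound energy.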

Restaurants are so loud because architects don’t design them to be quiet. Much of this shift in design boils down to changing conceptions of what makes a space seem upscale or luxurious, as well as evolving trends in food service. Right now, hard high-end surfaces connote luxury: think of the slate and wood of restaurants such as The Osprey in Brooklyn or Atomix in Manhattan.

This trend is not limited to New York. According to Architectural Digest, mid-century modern and minimalism are both here to stay. That means sparse, modern decor; high, exposed ceilings; and almost no soft goods, such as curtains, upholstery, or carpets. These design features are a feast for the eyes, but a nightmare for the ears. No soft goods and tall ceilings mean nothing is absorbing sound energy, and a room full of hard surfaces serves as a big sonic mirror, reflecting sound around the room.
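The effect of stripping out soft goods can be quantified with Sabine’s classic reverberation formula (standard room acoustics, not something the article cites): RT60 = 0.161 × V / A, where V is room volume in cubic meters and A is total absorption, i.e. each surface’s area times its absorption coefficient. The room dimensions and coefficients below are illustrative guesses, not measurements of any real restaurant:

```python
# Sabine's formula: RT60 = 0.161 * V / A, the time in seconds for
# sound to decay by 60 dB. A = sum of (surface area * absorption
# coefficient) over the room's surfaces.

def rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 10 m x 8 m dining room with 4 m ceilings:
# floor (80 m^2), ceiling (80 m^2), walls (144 m^2); volume 320 m^3.
areas = [80, 80, 144]

hard = rt60(320, [(a, 0.05) for a in areas])  # bare wood, slate, glass
soft = rt60(320, [(a, 0.45) for a in areas])  # carpet, drapes, acoustic tile

print(f"hard surfaces: {hard:.1f} s, soft surfaces: {soft:.1f} s")
```

Under these assumed coefficients, the all-hard room rings for over three seconds while the treated room decays in well under half a second, which is roughly the difference between a sonic mirror and a space where conversation is possible.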

The result is a loud space that renders speech unintelligible. Now that it’s so commonplace, the din of a loud restaurant is unavoidable. That’s bad for your health—and worse for the staff who work there. But it also degrades the thing that eating out is meant to cultivate: a shared social experience that rejuvenates, rather than harms, its participants.

Luxury didn’t always mean loud, and there are lessons to be learned from the glamorous restaurants of the past, including actual mid-century-modern eateries. From the 1940s through the early 1990s, fine-dining establishments expressed luxury through generous seating, plush interiors, and ornate decor. But more important, acoustic treatments themselves were a big part of that luxury.

Surfaces that today’s consumers now consider old-fashioned were still relatively new and exciting in the interwar and postwar periods. Just as stainless-steel tabletops, slate-tile floors, and exposed ductwork seem au courant today, so did wall paneling and drop ceilings with acoustic tiles in the 1950s and ’60s.

Architects also had different conceptions of what ideal work and leisure spaces should sound like. In the early to mid-20th century, designers were startled to discover that they might have some control over the aural impression of a physical space. Just as automobiles and kitchen appliances were seen as technological solutions to problems of everyday life, so ambient noise shifted from a symbol of progress in the machine age to a problem it produced—one that demanded a solution.

Early acoustics materials focused on absorbing sound—soaking up sonic energy rather than reflecting it. That approach produced its own idiosyncratic soundscape. As the science historian Emily Thompson explains in her book The Soundscape of Modernity, absorptive materials removed reverberation, producing “clear and direct” sound. “In a culture preoccupied with noise and efficiency,” Thompson writes, “reverberation became just another form of noise, an unnecessary sound that was inefficient and best eliminated.”

Absorptive design found its way first into schools and offices, where acoustics products were marketed as essential to creating quieter interiors and thus more efficient and less distraction-prone workers (or students). These products were advertised as “sound-conditioning” devices that would purify an environment of “unnatural” sounds. In catalogs for commercial and home interiors, sound-absorptive surfaces were linked directly to comfort, sophistication, and luxury.

Today’s interior designs are often seen as throwbacks to classic mid-century-modern spaces—sparse and sleek, with hardwood floors and colorful Danish chairs with tapered legs seated beside long, light-colored wood tables. The contemporary revival of this style tends to highlight these features to excess. However, photographs of restaurants from the 1950s through the 1970s reveal that interiors were opulent in the more luxurious lounges and supper clubs. Trends that today’s diners associate with luxury, such as hard surfaces and open kitchens, were, in mid-century, mainly relegated to lowbrow spaces such as cafés, cafeterias, and diners. The finest eateries—such as French and specialty restaurants, exclusive lounges, and cocktail bars—were the most highly ornamented and plush. Even high-modernist interiors made extensive use of soft goods, including cloth tablecloths, heavy drapes, carpeted floors, and upholstered seating. Across the board, mid-century restaurants had low ceilings, often with acoustic ceiling tiles.


Madagascar women jailed for crimes male relatives are accused of

Nov 30 2018

Women in Madagascar say they are being jailed for crimes their male relatives are accused of.
100 Women has had access to Madagascan prisons to speak to women who have been detained for months – and sometimes years – after being told they were “accomplices” or that they should have known what their husbands, brothers or sons were doing.

Video: 5:12 min

An Anti-Vaxxer’s New Crusade

[Note:  This item comes from friend Janos Gereben.  DLH]

Dr. David Ayoub used to be active in the anti-vaccination movement. Now he’s challenging mainstream science again—as an expert witness for accused child abusers.
By David Armstrong
Nov 27 2018

This article is a collaboration between The New Yorker and ProPublica.

On the morning of April 19, 2016, Melanie Lilliston received an urgent call from the Little Dreamers day-care center, in Rockville, Maryland. Her six-month-old daughter, Millie, was being rushed to the hospital. Doctors there found that Millie had fractured ribs, facial bruises, and a severe brain injury. Melanie watched as her daughter was loaded onto a helicopter for emergency transport to Children’s National medical center, in Washington, D.C., where doctors discovered more injuries: a fractured leg and arm, and bleeding in her eyes. Millie died three days later.

The day-care operator, Kia Divband, told police that Millie had started choking while drinking a bottle of milk and lost consciousness. The Montgomery County medical examiner, however, determined that her injuries were caused by blunt force. Investigators discovered, on Divband’s phone and computer, Internet searches for “broken bones in children” and “why are bone fractures in children sometimes hard to detect.” A former employer of Divband’s told them that, the day before Millie was hospitalized, Divband had called to inquire about a job, and a baby could be heard wailing in the background. Divband told him the baby wouldn’t stop crying and that “he just couldn’t take it anymore,” the former boss recalled. Divband was arrested and charged with fatally abusing Millie.

At Divband’s trial, last year, a radiologist named David Ayoub testified for the defense. Ayoub, who is a partner in a private radiology practice in Springfield, Illinois, told jurors that he had reviewed X-rays and other medical records, and had concluded that Millie had rickets, a rare condition that causes fragile bones. The disorder, which is usually brought on by a prolonged and severe lack of Vitamin D, could explain Millie’s injuries, Ayoub said.

Seeking to cast doubt on Ayoub’s credibility, the prosecutor brought up a different issue. Was it true, she asked, that Ayoub believed that Gavi, the Vaccine Alliance, a charity funded by the Bill and Melinda Gates Foundation to increase vaccination rates in poor countries, was committing genocide? “That’s right,” Ayoub said.

The prosecutor asked if Ayoub believed that Gavi—along with the World Health Organization, the Gates Foundation, and UNICEF—was using vaccinations to force sterilization on people in Third World countries. “Yes, that’s my belief,” Ayoub said.

As evidence, he cited a report from 1972 by a commission headed by the philanthropist John D. Rockefeller III and a 1974 study overseen by Henry Kissinger, who was the Secretary of State at the time, warning about the dangers of population growth. It’s “no leap of faith” to believe that vaccination is being used to carry out this agenda, Ayoub said.

The prosecutor also questioned Ayoub about a speech he delivered in 2005, in which he said that his views on vaccination—including his belief that it has contributed to a rise in autism—put him in a “fringe group,” and even in the “fringe of that fringe.” Ayoub acknowledged making the statement. “Thinking that vaccines were associated with autism, you’re clearly a minority view if you’re a physician,” Ayoub testified. “If you think it’s done intentionally for nefarious purposes, you’re clearly another level of—you know—different.”

In an e-mail, Ayoub said that he did not mean to accuse the Alliance or the Gates Foundation of intentional genocide, though he realized that his 2005 lecture might give that impression. “I was concerned by confirmed sporadic reports that some vaccines distributed in third-world countries contained fertility-reducing substances,” he said. “Regardless of whether this was deliberate, careless, unintentional or a cost-cutting measure, I felt that there was a potential for abuse and that this should be investigated.”

In the past decade, Ayoub, who is fifty-nine, has become one of the country’s most active expert witnesses on behalf of accused child abusers. He estimates that he has testified in about eighty child-abuse cases in the United States, Sweden, and the United Kingdom. He has consulted or written reports in hundreds more.

Prior to his child-abuse work, Ayoub was a prominent supporter of a movement that blames the rise in autism—a neurological and developmental disorder that starts in early childhood—on vaccinations that contain mercury, aluminum, or other substances. These claims are mostly dismissed by scientists, but they have nonetheless spurred a burgeoning worldwide “anti-vaxxer” movement, which has fuelled a decline in vaccination rates. Both positions reflect a deep suspicion of government and mainstream medicine and a rising backlash against scientific consensus in an era when misinformation quickly spreads online.

Ayoub, in a series of interviews, said that his criticism of vaccines is no longer a significant part of his work, and has no bearing on his credibility as a witness in child-abuse cases. (The Divband trial ultimately ended in a mistrial, after jurors could not agree on a verdict. Prosecutors later retried the case and Divband was convicted on child-abuse charges and sentenced to fifty years in prison; Ayoub did not testify in the second trial.) Ayoub said that his testimony in each abuse case is based on a careful review of the medical evidence. He simply wants to see justice done and does not charge for his services as an expert witness, he said. “Parents are being accused and families torn apart based on fractures and/or other boney irregularities that are in fact attributable to bone fragility, not abuse,” he said in an e-mail. If rickets, Vitamin D deficiency, and other explanations are not addressed, he added, “parents cannot receive fair trials, and families will be destroyed based on a misunderstanding of the radiology and pathology.”


Why we stopped trusting elites

The credibility of establishment figures has been demolished by technological change and political upheavals. But it’s too late to turn back the clock.
By William Davies
Nov 29 2018

For hundreds of years, modern societies have depended on something that is so ubiquitous, so ordinary, that we scarcely ever stop to notice it: trust. The fact that millions of people are able to believe the same things about reality is a remarkable achievement, but one that is more fragile than is often recognised.

At times when public institutions – including the media, government departments and professions – command widespread trust, we rarely question how they achieve this. And yet at the heart of successful liberal democracies lies a remarkable collective leap of faith: that when public officials, reporters, experts and politicians share a piece of information, they are presumed to be doing so in an honest fashion.

The notion that public figures and professionals are basically trustworthy has been integral to the health of representative democracies. After all, the very core of liberal democracy is the idea that a small group of people – politicians – can represent millions of others. If this system is to work, there must be a basic modicum of trust that the small group will act on behalf of the much larger one, at least some of the time. As the past decade has made clear, nothing turns voters against liberalism more rapidly than the appearance of corruption: the suspicion, valid or otherwise, that politicians are exploiting their power for their own private interest.

This isn’t just about politics. In fact, much of what we believe to be true about the world is actually taken on trust, via newspapers, experts, officials and broadcasters. While each of us sometimes witnesses events with our own eyes, there are plenty of apparently reasonable truths that we all accept without seeing. In order to believe that the economy has grown by 1%, or to find out about the latest medical advances, we take various things on trust; we don’t automatically doubt the moral character of the researchers or reporters involved.

Much of the time, the edifice that we refer to as “truth” is really an investment of trust. Consider how we come to know the facts about climate change: scientists carefully collect and analyse data, before drafting a paper for anonymous review by other scientists, who assume that the data is authentic. If published, the findings are shared with journalists in press releases, drafted by university press offices. We expect that these findings are then reported honestly and without distortion by broadcasters and newspapers. Civil servants draft ministerial speeches that respond to these facts, including details on what the government has achieved to date.

A modern liberal society is a complex web of trust relations, held together by reports, accounts, records and testimonies. Such systems have always faced political risks and threats. The template of modern expertise can be traced back to the second half of the 17th century, when scientists and merchants first established techniques for recording and sharing facts and figures. These were soon adopted by governments, for purposes of tax collection and rudimentary public finance. But from the start, strict codes of conduct had to be established to ensure that officials and experts were not seeking personal gain or glory (for instance through exaggerating their scientific discoveries), and were bound by strict norms of honesty.

But regardless of how honest parties may be in their dealings with one another, the cultural homogeneity and social intimacy of these gentlemanly networks and clubs has always been grounds for suspicion. Right back to the mid-17th century, the bodies tasked with handling public knowledge have always privileged white male graduates, living in global cities and university towns. This does not discredit the knowledge they produce – but where things get trickier is when that homogeneity starts to appear to be a political identity, with a shared set of political goals. This is what is implied by the concept of “elites”: that purportedly separate domains of power – media, business, politics, law, academia – are acting in unison.

A further threat comes from individuals taking advantage of their authority for personal gain. Systems that rely on trust are always open to abuse by those seeking to exploit them. It is a key feature of modern administrations that they use written documents to verify things – but there will always be scope for records to be manipulated, suppressed or fabricated. There is no escaping that possibility altogether. This applies to many fields: at a certain point, the willingness to trust that a newspaper is honestly reporting what a police officer claims to have been told by a credible witness, for example, relies on a leap of faith.

A trend of declining trust has been underway across the western world for many years, even decades, as copious survey evidence attests. Trust, and its absence, became a preoccupation for policymakers and business leaders during the 1990s and early 2000s. They feared that shrinking trust led to higher rates of crime and less cohesive communities, producing costs that would be picked up by the state.


Why the U.S. Can’t Solve Big Problems

The system of government gets in the country’s way.
By Julian E. Zelizer
Nov 28 2018

The federal government released a devastating report last week documenting the immense economic and human cost that the U.S. will incur as a result of climate change. It warns that the damage to roads alone will add up to $21 billion by the end of the century. In certain parts of the Midwest, farms will produce 75 percent less corn than today, while ocean acidification could result in $230 billion in financial losses. More people will die from extreme temperatures and mosquito-borne diseases. Wildfires will become more frequent and more destructive. Tens of millions of people living near rising oceans will be forced to resettle. The findings put the country on notice, once again, that doing nothing is a recipe for disaster.

Yet odds are that the federal government will, in fact, do nothing. It’s tempting to blame inaction on current political conditions, like having a climate change denier in the White House or intense partisan polarization in Washington. But the unfortunate reality is that American politicians have never been good at dealing with big, long-term problems. Lawmakers have tended to act only when they had no other choice.

It took a brutal Civil War to end slavery. Bankers avoided regulation until the financial system totally collapsed in the early 1930s. Americans saw southern police brutality on their television sets before civil-rights legislation could get through Congress. Widespread dissatisfaction with the health-care system has resulted in only a patchwork solution (the Affordable Care Act). Mass shootings have still not yielded effective gun control.

Why does America so often play catch-up?

The problem, I submit, is America’s system of government.

The separation of powers, which ensures that no single part of the government can ever achieve unified control of the policymaking process, has been a blessing and a curse. It prevents tyranny but creates veto points for politicians who, for whatever reason, wish to stop federal solutions to long-term challenges. Opponents driven by the desire to defend the status quo can always find different bases in the government from which to pursue their agenda and block forward-looking legislation.

Even when there is substantial majority support for tackling big problems, such as gun violence and climate change today, political minorities who disagree with their neighbors can count on the system to help them. There are a lot more people in California (where climate legislation is popular) than West Virginia (where the coal industry still dominates government), but both states send two representatives to the U.S. Senate. Smaller, rural states—whose residents may be less likely to endorse regulation of industry—are disproportionately powerful in the Electoral College.

Not only is the American government separated and fragmented, but private interest groups hold tremendous sway. Through lobbying and campaign contributions, outside actors like the Koch brothers can make it painful for politicians to support beneficial, even popular policies—including climate change regulation—that would hurt their private interests. When President Carter pushed for a bold energy conservation program in the late 1970s, he ran directly into fossil fuel industry representatives who had little appetite for what he was selling.

American anti-intellectualism stands in the way of change, too. The historian Richard Hofstadter famously accused Americans of harboring “resentment of the life of the mind, and those who are considered to represent it.” The cultural suspicion of expertise has only become worse since 1963, when Hofstadter published Anti-Intellectualism in American Life; politicians now, including the president, feel no shame at all about dismissing expert opinion.  

Perhaps as influential as anti-intellectualism is anti-statism: the resistance to strong government, and accompanying confidence in the private marketplace, which hampers lawmakers’ ability to mobilize support for the large-scale regulations or programs needed to tackle big challenges.

One last obstacle is American Exceptionalism—the notion that the U.S. is immune from the same kinds of problems that face other comparable countries. There is a misplaced sense of confidence that the scariest predictions just won’t come to pass here; the U.S. will always find a way to avoid the disasters other nations face. Somehow America’s scientists and business leaders will figure a way out. The belief in American Exceptionalism also pushes many American leaders to resist the kind of international agreements—such as the Kyoto Protocol and the Paris Climate Agreement—that are the path to real progress. Those who feel that America is different from, and superior to, the rest of the world are reluctant to concede that it can’t do whatever it wants, on its own.


U.S. life expectancy declines again, a dismal trend not seen since World War I

By Lenny Bernstein
Nov 29 2018

Life expectancy in the United States declined again in 2017, the government said Thursday in a bleak series of reports that showed a nation still in the grip of escalating drug and suicide crises.

The data continued the longest sustained decline in expected life span at birth in a century, an appalling performance not seen in the United States since 1915 through 1918. That four-year period included World War I and a flu pandemic that killed 675,000 people in the United States and perhaps 50 million worldwide.

Public health and demographic experts reacted with alarm to the release of the Centers for Disease Control and Prevention’s annual statistics, which are considered a reliable barometer of a society’s health. In most developed nations, life expectancy has marched steadily upward for decades.

“I think this is a very dismal picture of health in the United States,” said Joshua M. Sharfstein, vice dean for public health practice and community engagement at the Johns Hopkins Bloomberg School of Public Health. “Life expectancy is improving in many places in the world. It shouldn’t be declining in the United States.”

“After three years of stagnation and decline, what do we do now?” asked S.V. Subramanian, a professor of population health and geography at Harvard’s T.H. Chan School of Public Health. “Do we say this is the new normal? Or can we say this is a tractable problem?”

Overall, Americans could expect to live 78.6 years at birth in 2017, down a tenth of a year from the 2016 estimate, according to the CDC’s National Center for Health Statistics. Men could anticipate a life span of 76.1 years, down a tenth of a year from 2016. Life expectancy for women in 2017 was 81.1 years, unchanged from the previous year.

Drug overdoses set another annual record in 2017, cresting at 70,237 — up from 63,632 the year before, the government said in a companion report. The opioid epidemic continued to take a relentless toll, with 47,600 deaths in 2017 from drugs sold on the street such as fentanyl and heroin, as well as prescription narcotics. That was also a record number, driven largely by an increase in fentanyl deaths.

Since 1999, the number of drug overdose deaths has more than quadrupled. Deaths attributed to opioids were nearly six times greater in 2017 than they were in 1999.

Deaths from legal painkillers did not increase in 2017. There were 14,495 overdose deaths attributed to narcotics such as oxycodone and hydrocodone and 3,194 from methadone, which is used as a painkiller. Those totals were virtually identical to the numbers in 2016. The number of heroin deaths, 15,482, also did not rise from the previous year.

Robert Anderson, chief of the mortality statistics branch at the Center for Health Statistics, said the leveling off of prescription drug deaths may reflect a small impact from efforts in recent years to curb the diversion of legal painkillers to users and dealers on the streets. Those measures include prescription drug monitoring programs that help prevent substance abusers from obtaining multiple prescriptions by “doctor shopping.”

Others noted programs that may also have helped: The overdose antidote naloxone has been made more widely available in many places; Rhode Island has made efforts to educate substance abusers as they leave jail, a time when they are particularly vulnerable to overdose; and Vermont and other states have bolstered treatment programs. States that have expanded their Medicaid programs are also able to offer more treatment for users.

Anderson said provisional data for the first four months of 2018 show a plateau and possibly a small decline in drug overdose deaths.


Global food system is broken, say world’s science academies

Radical overhaul in farming and consumption, with less meat eating, needed to avoid hunger and climate catastrophe
By Damian Carrington
Nov 28 2018

The global food system is broken, leaving billions of people either underfed or overweight and driving the planet towards climate catastrophe, according to 130 national academies of science and medicine across the world.

Providing a healthy, affordable, and environmentally friendly diet for all people will require a radical transformation of the system, says the report by the InterAcademy Partnership (IAP). This will depend on better farming methods, wealthy nations consuming less meat and countries valuing food which is nutritious rather than cheap.

The report, which was peer reviewed and took three years to compile, sets out the scale of the problems as well as evidence-driven solutions. 

The global food system is responsible for a third of all greenhouse gas emissions, which is more than all emissions from transport, heating, lighting and air conditioning combined. The global warming this is causing is now damaging food production through extreme weather events such as floods and droughts.

The food system also fails to properly nourish billions of people. More than 820 million people went hungry last year, according to the UN Food and Agriculture Organisation, while a third of all people did not get enough vitamins. At the same time, 600 million people were classed as obese and 2 billion overweight, with serious consequences for their health. On top of this, more than 1bn tonnes of food is wasted every year, a third of the total produced.

“The global food system is broken,” said Tim Benton, professor of population ecology at the University of Leeds, who is a member of one of the expert editorial groups that produced the report. He said the cost of the damage to human health and the environment was much greater than the profits made by the farming industry.

“Whether you look at it from a human health, environmental or climate perspective, our food system is currently unsustainable and given the challenges that will come from a rising global population that is a really [serious] thing to say,” Benton said.

Reducing meat and dairy consumption is the single biggest way individuals can lessen their impact on the planet, according to recent research. And tackling dangerous global warming is considered impossible without massive reductions in meat consumption. 

Research published in the journal Climate Policy shows that at the present rate, cattle and other livestock will be responsible for half of the world’s greenhouse gas emissions by 2030, and that to prevent this will require “substantial reductions, far beyond what are planned or realistic, from other sectors”.

“It is vital [for a liveable planet] that we change our relationship with meat, especially with red meat. But no expert in this area is saying the world should be vegan or even vegetarian,” said Benton.

Rearing cattle and other livestock causes the same carbon emissions as all the world’s vehicles, trains, ships and planes combined. “We have spent 30 to 40 years investing quite heavily in fuel efficiency in the transport sector,” said Benton. “We need to do something similarly radical in the farming sector, and the scope for doing that by changing the way we raise the animals is much smaller than the scope we have by changing our diets.”

The IAP report notes that in poorer countries meat, eggs and dairy can be important in providing concentrated nutrients, especially for children. It also says other things livestock can provide should be taken into consideration, such as leather, wool, manure, transport and plough pulling.


‘Facebook has a black people problem’: Black ex-employee spotlights race issues in public memo

By Eli Rosenberg
Nov 27 2018

Mark Luckie, a digital strategist and former journalist, says he accepted the job offer from Facebook reluctantly.

At first, he didn’t want to move to Silicon Valley from Atlanta, where he had been living, but he said his fiancé was able to persuade him, telling him that the job presented an opportunity to make a difference on the influential social network.

“I was really excited. Facebook is an amazing company that reaches a lot of people,” Luckie, 35, said in an interview with The Washington Post. “I didn’t plan to leave.”

But as a black employee, he became disillusioned. After about a year at the company, he decided to quit. And before his last day in mid-November, he wrote a long memo that he sent to the company’s staff. The memo is in the news this week after Luckie made it public on — where else? — Facebook.

Luckie argues that the company is “failing its black employees and its black users,” excluding them from events and from the important work that guides Facebook’s service.

Facebook, in a statement, said it is “doing all we can to be a truly inclusive company.”

Luckie, who worked as an editor at The Washington Post for about two years until 2012, notes statistics that demonstrate that black users are one of the more engaged demographics on Facebook, with more using the service to communicate with family (63 percent) and friends (60 percent) than Facebook users on average.

“Black people are driving the kind of meaningful social interactions Facebook is striving to facilitate,” he wrote.

But Luckie argues the experiences of black users on Facebook are “far from positive,” citing a report from the investigative outlet Reveal that documented instances in which posts from black people have been removed as “hate speech.”

“Underrepresented groups are being systematically excluded from communication,” he wrote. “You can see this reflected in everything from the guest lists of Facebook’s external programs, the industry events the company has historically sponsored, the creators and influencers who appear in Explore tabs on Instagram, the power users who are verified on the platforms, and more.”

Luckie also spotlights the company’s low number of black employees, part of a long-standing problem with the representation of minorities at tech companies. The company announced over the summer that 4 percent of its employees were black, but only 1 percent of its technical employees and 2 percent of those in leadership roles were black.

The number of black workers in technical jobs at the eight largest tech companies has inched up, to 3.1 percent in 2017 from 2.5 percent in 2014, according to Bloomberg, and the issue remains central to discussions about the future of the industry. Blacks make up just 3 percent of the employees at the top 75 companies in Silicon Valley, according to the Equal Employment Opportunity Commission.

“Although incremental changes are being made, the fact remains that the population of Facebook employees doesn’t reflect its most engaged user base,” Luckie wrote. “In some buildings, there are more ‘Black Lives Matter’ posters than there are actual black people.”

His experiences living in Silicon Valley were tinged with racism, as well, he said. He declined to name the city he lived in on the Peninsula but said that he was randomly stopped by the police twice — once while waiting for a food delivery outside his apartment complex, and another time while walking from the Facebook shuttle stop to his home.

“I nodded and didn’t make any sudden movement and thankfully the encounters were brief,” he said.

Luckie, who previously worked for Twitter and Reddit, also alleged discrimination at Facebook toward black employees.

“Facebook’s disenfranchisement of black people on the platform mirrors the marginalization of its black employees. In my time at the company, I’ve heard far too many stories from black employees of a colleague or manager calling them ‘hostile’ or ‘aggressive’ for simply sharing their thoughts,” he wrote. “A few black employees have reported being specifically dissuaded by their managers from becoming active in the [internal] Black@ group or doing ‘Black stuff,’ even if it happens outside of work hours. Too many black employees can recount stories of being aggressively accosted by campus security beyond what was necessary.”


China baby gene editing claim ‘dubious’

Significant doubts have emerged about claims from a Chinese scientist that he has helped make the world’s first genetically edited babies.
By Michelle Roberts, Health editor, BBC News online
Nov 26 2018

Prof He Jiankui says the twin girls, born a few weeks ago, had their DNA altered as embryos to prevent them from contracting HIV.

His claims, filmed by Associated Press, are unverified and have sparked outrage from other scientists, who have called the idea monstrous. 

Such work is banned in most countries. 

Future generations 

Gene editing could potentially help avoid heritable diseases by deleting or changing troublesome coding in embryos.

But experts worry meddling with the genome of an embryo could cause harm not only to the individual but also future generations that inherit these same changes.

And many countries, including the UK, have laws that prevent the use of genome editing in embryos for assisted reproduction in humans.

Scientists can do gene editing research on discarded IVF embryos, as long as they are destroyed immediately afterwards and not used to make a baby.

‘Designer babies’

But Prof He, who was educated at Stanford in the US and works from a lab in the southern Chinese city of Shenzhen, says he used gene-editing tools to make twin baby girls, known as “Lulu” and “Nana”.

In a video, he claims to have eliminated a gene called CCR5 to make the girls resistant to HIV should they ever come into contact with the virus. 

He says his work is about creating children who would not suffer from diseases, rather than making designer babies with bespoke eye colour or a high IQ. 

“I understand my work will be controversial – but I believe families need this technology and I’m willing to take the criticism for them,” he says in the video.

‘Highly treatable’

However, several organisations, including a hospital, linked to the claim have denied any involvement.

The Southern University of Science and Technology in Shenzhen said it had been unaware of the research project and would now launch an investigation.

And other scientists say if the reports are true, Prof He has gone too far, experimenting on healthy embryos without justification. 

Prof Robert Winston, Emeritus Professor of Fertility Studies and Professor of Science and Society at Imperial College London, said: “If this is a false report, it is scientific misconduct and deeply irresponsible.

“If true, it is still scientific misconduct.”

Dr Dusko Ilic, an expert in stem cell science at King’s College London, said: “If this can be called ethical, then their perception of ethics is very different to the rest of the world’s.”

Dr Ilic argues that HIV is highly treatable and that if the infection is kept under control with drugs, then there is almost no risk of the parents passing it on to the baby anyway. 

Too risky

Prof Julian Savulescu, an expert in ethics at the University of Oxford, said: “If true, this experiment is monstrous. The embryos were healthy – no known diseases. 

“Gene editing itself is experimental and is still associated with off-target mutations, capable of causing genetic problems early and later in life, including the development of cancer. 

“This experiment exposes healthy normal children to risks of gene editing for no real necessary benefit.”

Scientists say baby gene editing may one day be justifiable, but that more checks and measures are needed before allowing it.