Save Your Money: Vast Majority Of Dietary Supplements Don’t Improve Heart Health or Put Off Death

Jul 16 2019

In a massive new analysis of findings from 277 clinical trials using 24 different interventions, Johns Hopkins Medicine researchers say they have found that almost all vitamin, mineral and other nutrient supplements or diets cannot be linked to longer life or protection from heart disease.

Although they found that most of the supplements or diets were not associated with any harm, the analysis showed possible health benefits only from a low-salt diet, omega-3 fatty acid supplements and possibly folic acid supplements for some people. Researchers also found that supplements combining calcium and vitamin D may in fact be linked to a slightly increased stroke risk.

Results of the analysis were published on July 8 in Annals of Internal Medicine.

Surveys by the Centers for Disease Control and Prevention show that 52% of Americans take at least one vitamin or other dietary/nutritional supplement daily. As a nation, Americans spend $31 billion each year on such over-the-counter products. An increasing number of studies – including this new one from Johns Hopkins – have failed to prove health benefits from most of them.

“The panacea or magic bullet that people keep searching for in dietary supplements isn’t there,” says senior author of the study Erin D. Michos, M.D., M.H.S., associate director of preventive cardiology at the Ciccarone Center for the Prevention of Cardiovascular Disease and associate professor of medicine at the Johns Hopkins University School of Medicine. “People should focus on getting their nutrients from a heart-healthy diet, because the data increasingly show that the majority of healthy adults don’t need to take supplements.”

For the current study, the researchers used data from 277 randomized clinical trials that evaluated 16 vitamins or other supplements and eight diets for their association with mortality or heart conditions including coronary heart disease, stroke, and heart attack. Altogether, they included data gathered on 992,129 research participants worldwide.

The vitamin and other supplements reviewed included: antioxidants, β-carotene, vitamin B-complex, multivitamins, selenium, vitamin A, vitamin B3/niacin, vitamin B6, vitamin C, vitamin E, vitamin D alone, calcium alone, calcium and vitamin D together, folic acid, iron and omega-3 fatty acid (fish oil). The diets reviewed were a Mediterranean diet, a reduced saturated fat (less fats from meat and dairy) diet, modified dietary fat intake (less saturated fat or replacing calories with more unsaturated fats or carbohydrates), a reduced fat diet, a reduced salt diet in healthy people and those with high blood pressure, increased alpha linolenic acid (ALA) diet (nuts, seeds and vegetable oils), and increased omega-6 fatty acid diet (nuts, seeds and vegetable oils). Each intervention was also ranked by the strength of the evidence as high, moderate, low or very low risk impact.

The majority of the supplements including multivitamins, selenium, vitamin A, vitamin B6, vitamin C, vitamin E, vitamin D alone, calcium alone and iron showed no link to increased or decreased risk of death or heart health.

In the three studies of 3,518 people that looked at a low-salt diet in people with healthy blood pressure, there were 79 deaths. The researchers say that they found a 10% decrease in the risk of death in these people, which they classified as a moderate associated impact.

Of the five studies in which 3,680 participants with high blood pressure were put on a low-salt diet, they found that the risk of death due to heart disease decreased by 33%, as there were 674 heart disease deaths during the study periods. They also classified this intervention as moderate evidence of an impact.

Forty-one studies with 134,034 participants evaluated the possible impact of omega-3 fatty acid supplements. In this group, 10,707 people had events such as a heart attack or stroke indicating heart disease. Overall, these studies suggested that supplement use was linked to an 8% reduction in heart attack risk and a 7% reduction in coronary heart disease compared to those not on the supplements. The researchers ranked evidence for a beneficial link to this intervention as low.


Amazon deforestation accelerating towards unrecoverable ‘tipping point’

Data confirms fears that Jair Bolsonaro’s policy encourages illegal logging in Brazil
By Jonathan Watts
Jul 25 2019

Deforestation of the Brazilian Amazon has surged above three football fields a minute, according to the latest government data, pushing the world’s biggest rainforest closer to a tipping point beyond which it cannot recover.

The sharp rise – following year-on-year increases in May and June – confirms fears that president Jair Bolsonaro has given a green light to illegal land invasion, logging and burning.

Clearance so far in July has hit 1,345 sq km, a third higher than the previous monthly record under the current monitoring system, based on the Deter B satellite, which started in 2015.

With five days remaining, this is on course to be the first month for several years in which Brazil loses an area of forest bigger than Greater London.
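The "three football fields a minute" comparison can be sanity-checked from the figures above. This is a rough sketch only: the pitch area of about 7,140 square metres and the assumption that roughly 24 days of July had elapsed at publication are mine, not the article's.

```python
# Rough check of the deforestation rate, using the July clearance figure.
CLEARED_KM2 = 1345      # forest cleared so far in July (from the article)
DAYS_ELAPSED = 24       # assumption: ~24 days of July elapsed at publication
PITCH_M2 = 7_140        # assumption: area of a standard full-size football pitch

cleared_m2 = CLEARED_KM2 * 1_000_000
minutes = DAYS_ELAPSED * 24 * 60
pitches_per_minute = cleared_m2 / (minutes * PITCH_M2)
print(f"{pitches_per_minute:.1f} football pitches per minute")
```

Even with these crude assumptions the rate comes out well above three pitches a minute, consistent with the article's "surged above three football fields a minute."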

The steady erosion of tree cover weakens the role of the rainforest in stabilising the global climate. Scientists warn that the forest is in growing danger of degrading into a savannah, after which its capacity to absorb carbon will be severely diminished, with consequences for the rest of the planet.

“It’s very important to keep repeating these concerns. There are a number of tipping points which are not far away,” said Philip Fearnside, a professor at Brazil’s National Institute of Amazonian Research. “We can’t see exactly where they are, but we know they are very close. It means we have to do things right away. Unfortunately that is not what is happening. There are people denying we even have a problem.”

It may also complicate ratification of Brazil’s biggest ever trade deal with the European Union if EU legislators decide the South American nation is not keeping its side of the bargain, which includes a commitment to slow deforestation in line with the Paris climate agreement.

The official numbers from the National Institute for Space Research are a growing embarrassment to Bolsonaro, who has tried to fob them off as lies and criticised the head of the institute. Earlier this week, the president insisted the numbers should be screened by the ministry of science and technology and shown to him before being made public so that he did not get “caught with his pants down”.

This has raised fears that the data could be vetted in future rather than automatically updated online each day as is currently the case.

In his first seven months in power, Bolsonaro, who was elected with strong support from agribusiness and mining interests, has moved rapidly to erode government agencies responsible for forest protection.

He has weakened the environment agency and effectively put it under the supervision of the agricultural ministry, which is headed by the leader of the farming lobby. His foreign minister has dismissed climate science as part of a global Marxist plot. The president and other ministers have criticised the forest monitoring agency, Ibama, for imposing fines on illegal land grabbers and loggers. The government has also moved to weaken protections for nature reserves, indigenous territories and zones of sustainable production by forest peoples and invited businesspeople to register land counter-claims within those areas.

This has emboldened those who want to invade the forest, clear it and claim it for commercial purposes, mostly in the speculative expectation it will rise in value, but also partly for cattle pastures, soya fields and mines.


Neuroscientists decode brain speech signals into written text

Study funded by Facebook aims to improve communication with paralysed patients
By Ian Sample
Jul 30 2019

Doctors have turned the brain signals for speech into written sentences in a research project that aims to transform how patients with severe disabilities communicate in the future.

The breakthrough is the first to demonstrate how the intention to say specific words can be extracted from brain activity and converted into text rapidly enough to keep pace with natural conversation.

In its current form, the brain-reading software works only for certain sentences it has been trained on, but scientists believe it is a stepping stone towards a more powerful system that can decode in real time the words a person intends to say.

Doctors at the University of California, San Francisco took on the challenge in the hope of creating a product that allows paralysed people to communicate more fluidly than using existing devices that pick up eye movements and muscle twitches to control a virtual keyboard.

“To date there is no speech prosthetic system that allows users to have interactions on the rapid timescale of a human conversation,” said Edward Chang, a neurosurgeon and lead researcher on the study published in the journal Nature.

The work, funded by Facebook, was possible thanks to three epilepsy patients who were about to have neurosurgery for their condition. Before their operations went ahead, all three had a small patch of tiny electrodes placed directly on the brain for at least a week to map the origins of their seizures.

During their stay in hospital, the patients, all of whom could speak normally, agreed to take part in Chang’s research. He used the electrodes to record brain activity while each patient was asked nine set questions and read aloud from a list of 24 potential responses.

With the recordings in hand, Chang and his team built computer models that learned to match particular patterns of brain activity to the questions the patients heard and the answers they spoke. Once trained, the software could identify almost instantly, and from brain signals alone, what question a patient heard and what response they gave, with an accuracy of 76% and 61% respectively.
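The pattern-matching step can be illustrated with a toy decoder. This is purely a sketch: the study used far more sophisticated models on real high-density cortical recordings, while the "electrode" features below are synthetic and the nearest-centroid classifier is a stand-in of mine, not the team's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 "questions", each evoking a characteristic activity pattern
# across 16 simulated electrodes, plus trial-to-trial noise.
n_electrodes, n_classes, n_train_trials = 16, 3, 40
templates = rng.normal(size=(n_classes, n_electrodes))

def simulate_trial(question, noise=0.5):
    """One simulated recording: the question's pattern plus noise."""
    return templates[question] + rng.normal(scale=noise, size=n_electrodes)

# "Training": average recorded trials per question into a learned template.
learned = {q: np.mean([simulate_trial(q) for _ in range(n_train_trials)], axis=0)
           for q in range(n_classes)}

def decode(activity):
    # Nearest-centroid rule: pick the question whose template is closest.
    return min(learned, key=lambda q: np.linalg.norm(activity - learned[q]))

# Evaluate on fresh simulated trials.
n_test = 50
correct = sum(decode(simulate_trial(q)) == q
              for q in range(n_classes) for _ in range(n_test))
accuracy = correct / (n_classes * n_test)
```

On this easy synthetic task the decoder is near-perfect; the real difficulty in the study lies in the noisiness and overlap of genuine neural signals, which is why its accuracies were 76% and 61%.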

“This is the first time this approach has been used to identify spoken words and phrases,” said David Moses, a researcher on the team. “It’s important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate.”

Though rudimentary, the system allowed patients to answer questions about the music they liked; how well they were feeling; whether their room was too hot or cold, or too bright or dark; and when they would like to be checked on again.

Despite the breakthrough, there are hurdles ahead. One challenge is to improve the software so it can translate brain signals into more varied speech on the fly. This will require algorithms trained on a huge amount of spoken language and corresponding brain signal data, which may vary from patient to patient.

Another goal is to read “imagined speech”, or sentences spoken in the mind. At the moment, the system detects brain signals that are sent to move the lips, tongue, jaw and larynx – in other words, the machinery of speech. But for some patients with injuries or neurodegenerative disease, these signals may not suffice, and more sophisticated ways of reading sentences in the brain will be needed.

While the work is still in its infancy, Winston Chiong, a neuroethicist at UCSF who was not involved in the latest study, said it was important to debate the ethical issues such systems might raise in the future. For example, could a “speech neuroprosthesis” unintentionally reveal people’s most private thoughts?


Researchers Develop Technology To Harness Energy From Mixing Of Freshwater And Seawater

[Note:  This item comes from reader Randall Head.  DLH]

By Scott Snowden
Jul 30 2019

Researchers from Stanford University in California have developed an affordable, durable technology that could harness energy generated from mixing freshwater with seawater.

Outlined in a paper recently published in the American Chemical Society’s journal ACS Omega, they suggest that this “blue energy” could make coastal wastewater treatment plants energy-independent.

“Blue energy is an immense and untapped source of renewable energy,” Kristian Dubrawski, a postdoctoral scholar in civil and environmental engineering at Stanford and study coauthor, said in a statement. “Our battery is a major step toward practically capturing that energy without membranes, moving parts or energy input.”

The researchers tested a prototype of the battery, monitoring its energy production while flushing it with alternating hourly exchanges of wastewater effluent from the Palo Alto Regional Water Quality Control Plant and seawater collected nearby from Half Moon Bay. Over 180 cycles, battery materials maintained 97 percent effectiveness in capturing the salinity gradient energy.

The technology could work in any location where fresh and saltwater intermix, but wastewater treatment plants offer a particularly valuable case study. Wastewater treatment is energy-intensive, accounting for about three percent of the total US electrical load. The process – essential to community health – is also vulnerable to power grid shutdowns. Making wastewater treatment plants energy independent would not only cut electricity use and emissions but also make them immune to blackouts – a major advantage in places such as California, where recent wildfires have led to large-scale outages.

Every cubic meter of freshwater that mixes with seawater produces about 0.65 kilowatt-hours of energy – enough to power the average US household for about 30 minutes. Globally, the theoretical amount of recoverable energy from coastal wastewater treatment plants is about 18 gigawatts – enough to power more than 1,700 homes for a year.
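The household arithmetic checks out. In this sketch the average US household draw of about 1.2 kilowatts is my assumption (derived from typical annual consumption figures), not a number from the article.

```python
ENERGY_PER_M3_KWH = 0.65   # from the article: energy per cubic metre mixed
AVG_HOUSEHOLD_KW = 1.2     # assumption: ~10,500 kWh/year average US home

minutes_powered = ENERGY_PER_M3_KWH / AVG_HOUSEHOLD_KW * 60
print(f"~{minutes_powered:.0f} minutes of average household use")
```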

The Stanford group’s battery isn’t the first technology to succeed in capturing blue energy, but it’s the first to use battery electrochemistry instead of pressure or membranes. If it works at scale, the technology would offer a simpler, more robust and cost-effective solution.

The process first releases sodium and chloride ions from the battery electrodes into the solution, making the current flow from one electrode to the other. Then, a rapid exchange of wastewater effluent with seawater leads the electrode to reincorporate sodium and chloride ions and reverse the current flow. Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging. In effect, the salinity difference itself recharges the battery, so it discharges and recharges continuously without any external input of energy.

While lab tests showed power output is still low per electrode area, the battery’s scale-up potential is considered more feasible than previous technologies due to its small footprint, simplicity, constant energy creation and lack of membranes or instruments to control charge and voltage. The electrodes are made with Prussian blue, a material widely used as a pigment and in medicine that costs less than $1 a kilogram, and polypyrrole, a material used experimentally in batteries and other devices, which sells for less than $3 a kilogram in bulk.


Alaska’s sweltering summer is ‘basically off the charts’

By Matthew Cappucci, Juliet Eilperin, Andrew Freedman and Brady Dennis
Jul 31 2019

Steve Perrins didn’t see the lightning, but he couldn’t miss the smoke that followed.

It was around dinnertime on July 23 at Alaska’s oldest hunting lodge, nestled in the wilderness more than 100 miles northwest of Anchorage. What began as a quiet evening at the Rainy Pass Lodge soon turned frantic as Alaska’s latest wildfire spread fast.

The Alaska National Guard soon evacuated 26 people and two dogs by helicopter from the lodge, which serves as a checkpoint for the Iditarod Trail Sled Dog Race.

The fire came within a half-mile of the lodge. In the days that followed, Perrins and his family housed and fed dozens of federal and state firefighters who rushed to contain the blaze — one of many raging across Alaska.

“It’s the hottest summer we’ve had, ever,” said Perrins, who began working at the lodge in 1977.

The nation’s 49th state is warming faster than any other, having heated up more than 2 degrees Celsius (3.6 degrees Fahrenheit) over the past century — double the global average. And parts of the state, including its far northern reaches, have warmed even more rapidly in recent decades.

This trend, driven in part by the burning of fossil fuels, is transforming the nation’s only Arctic state. Scientists around the world, including in the U.S. government, predict the warming will continue unless countries drastically reduce their greenhouse gas emissions in coming years.

Temperatures have been above average across Alaska every day since April 25. None of the state’s nearly 300 weather stations have recorded a temperature below freezing since June 28 — the longest such streak in at least 100 years. On Independence Day, the temperature at Ted Stevens Anchorage International Airport hit 90 degrees for the first time on record.

More than 2 million acres have gone up in flames across the state as thousands of firefighters have worked to contain wildfires. Stores have sold out of fans and ice. Moose have been spotted seeking respite in garden sprinklers.

Alaska, which logged its warmest June on record, now seems destined to register not only its warmest July but also its warmest month.

“Usually if you were to break this sort of record, you’d do it by a sliver of a degree,” said Brian Brettschneider, a climatologist and research associate at the International Arctic Research Center. He said that the state is on course to shatter the record by more than one degree Fahrenheit.

The combination of relentless high pressure, extremely warm sea surface temperatures and high humidity is “basically off the charts,” Brettschneider said.

The entire Arctic is suffering under extreme temperatures. In Siberia, sweeping wildfires are sending smoke thousands of miles away and lofting dark soot particles onto the vulnerable Arctic ice cover. Arctic sea ice is melting at an alarming pace and could break the 2012 record. In addition, the weather system that caused last week’s heat wave in Western Europe has now settled above the Atlantic side of the Arctic, accelerating surface-ice melting in Greenland.

Mark Parrington, a senior scientist with the Copernicus Atmosphere Monitoring Service in Europe, said that through July 28, wildfires in the Arctic region, including Siberia and Alaska, had emitted 125 megatons of carbon dioxide — the highest of any year to date since such monitoring began in 2003.

“We’re seeing something exceptional this year,” Parrington said, even though the acreage burned in Alaska is not yet a record.

Even as researchers in Alaska are working to capture climate change’s impact on the region, sharp cuts by Gov. Mike Dunleavy (R) to the state’s education budget threaten to trigger an exodus of some of the very scientists who are trying to explain the unprecedented changes that residents are experiencing.

“I think it will lead to many of the best Arctic scientists in the UA system [leaving] the state,” Christopher Arp, an associate research professor at the University of Alaska at Fairbanks Water and Environmental Research Center, said in an email. “Having scientists live where they do research is very important in my view, so I think that will have a negative impact on Arctic research that will be very challenging to reverse.”

Meanwhile, this summer’s heat has transformed Alaska’s landscape and waterways, affecting humans and wildlife alike.


Our planet is in crisis. But until we call it a crisis, no one will listen

[Note:  This item comes from friend Jock Gill.  DLH]

We study disaster preparedness, and ‘climate change’ is far too mild to describe the existential threat we face
By Caleb Redlener, Charlotte Jenkins and Irwin Redlener
Jul 31 2019

When Senator Kamala Harris was asked about climate change during the Democratic debate in June, she did not mince words. “I don’t even call it climate change,” she said. “It’s a climate crisis.”

She’s right – and we, at Columbia University’s National Center for Disaster Preparedness, wish more people would call this crisis what it is.

The language we use to refer to the climate crisis has changed over time, often due to political pressures. In 1975, the geophysicist Wallace S Broecker published the first major paper on planetary heating – Climatic Change: Are We on the Brink of a Pronounced Global Warming? – and for a while the term “global warming” was the most common. But in the decades following, politicians and members of the media began to use the softer, more euphemistic term “climate change” to describe changing weather and atmospheric conditions.

That wasn’t an accident. In the early years of George W Bush’s first term as president, scientists were actually making serious progress in establishing overwhelming evidence that we were, in fact, facing a global crisis. Public opinion on climate change was shifting; Americans were curious about how worried they should be by the damage being done to our atmosphere.

Enter Frank Luntz, a renowned Republican pollster and strategist. Luntz was concerned that the Republican party was losing the communications battle. He advised Republicans to cast doubt on scientific consensus on the dangers of greenhouse gases and to publicly hammer home a message of uncertainty.

In 2002, Luntz wrote a memo to Bush urging him and the rest of his party to use the term “climate change” instead of “global warming”. Climate change sounded “less frightening”, he pointed out, “like you’re going from Pittsburgh to Fort Lauderdale”.

Luntz succeeded. “Climate change” began to eclipse “global warming” in the American vernacular, downplaying its menacing predecessor.

Technically, the term climate change makes sense. The climate is, indeed, changing. But the term is far too mild to describe the existential threat to the planet that these changes pose. Jeffrey Sachs, former director of the Earth Institute at Columbia University, agrees. “We should have a term that emphasizes the incredible cost and dangers,” he told us. “We do not need to be shy about this.”

Under the administration of Donald Trump, the situation is even worse. Now, instead of re-framing language about the climate crisis, Republican officials simply remove references to it entirely and put pressure on researchers and analysts who disagree.

It is time to update the language we use to better reflect the seriousness of global heating. “People do not understand the scale and pace of the climate emergency,” Jamie Henn, strategy and communications director for an international climate campaign, told us. “This is not an issue with one future date where we will start to see effects. We may hit tipping points at any time in which we will see immediate problems.”

There is no longer any doubt that climate change is an unprecedented planetary emergency. And the terms we use to describe this crisis must deliberately reflect an appropriate sense of urgency.


Computers that can see

By Benedict Evans
Jul 19 2019

One of the basic mechanics of tech over the past few decades has been that we tend to start with special-purpose components, but then over time we replace them with general-purpose components. When performance is expensive, optimizing on one task gets you a better return, but then economies of scale, Moore’s Law and so on mean that the general purpose component overtakes the single-purpose component. There’s an old engineering joke that a Russian screwdriver is a hammer  – “just hit it harder!” – and that’s what Moore’s Law tends to do in tech. “Never mind your clever efficiency optimizations, just throw more CPU at it!”

You could argue this is happening now with machine learning – “instead of complex hand-crafted rules-based systems, just throw data at it!” But it’s certainly happening with vision. The combination of a flood of cheap image sensors coming out of the smartphone supply chain with computer vision based on machine learning means that all sorts of specialized inputs are being replaced by imaging plus ML. 

The obvious place to see this is in actual physical sensors – there are all sorts of industrial applications where people are exploring using vision instead of some more specialised sensing system. Where is that equipment? Has that task been done? Is there a flood on the floor of the warehouse? Many of these things don’t necessarily *look* like vision problems, but now they can be turned into one.

This is also, of course, the debate between Elon Musk and the autonomy community around LIDAR. A priori, you should be able to drive with vision alone, without all these expensive impractical special-purpose sensors – after all, people do. Just throw enough images and enough data at the problem (“hit it harder!”) and we should be able to make it work. Pretty much everyone does actually agree in theory – the debate is about how long it will take for vision to get there (consensus = ‘not yet’), and how hard it is to solve all the other components of autonomy. Giving the car a perfect model of the world around it, with or without LIDAR, is not the only problem to solve. 

This also overlaps with two other current preoccupations of mine: how can we get software to be good at recommendation and discovery, and how does machine learning move internet platforms away from being mechanical Turks? 

So far the internet has been very good at giving us what we already know we want, either in logistics (Amazon) or search (Google). It’s been much worse at knowing what we might like, without any explicit request. And to the extent that we can do this, we need people to tell it first. You have to buy a lot of stuff on Amazon or like a lot of things on Instagram or Spotify for your recommendations to be any good. Meanwhile, these systems often lack any real understanding of what the data you’re interacting with really is – hence all the jokes about Amazon recommendations – “Dear Amazon, I bought a refrigerator but I’m not collecting them – don’t show me five more”. In all of this, the user is being treated as a mechanical Turk – the system doesn’t know or understand, but people do, so find ways to get people to tell it (this is also what PageRank did).
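That cold-start problem is visible in even the simplest collaborative filter. The sketch below uses invented users and items and a bare co-occurrence score; it stands in for, and is far cruder than, anything Amazon or Spotify actually run.

```python
import numpy as np

# Rows = users, columns = items; 1 = purchased/liked, 0 = no signal.
items = ["fridge", "toaster", "novel", "cookbook", "lamp"]
history = np.array([
    [1, 1, 0, 1, 0],   # user 0: kitchen things
    [1, 0, 0, 1, 1],   # user 1
    [0, 0, 1, 0, 1],   # user 2: books and decor
    [0, 1, 1, 1, 0],   # user 3
])

def recommend(user_vector):
    # Score each item by how often it co-occurs with what the user has.
    scores = history.T @ history @ user_vector
    scores[user_vector > 0] = -1          # don't re-recommend owned items
    return items[int(np.argmax(scores))]

# A user whose only signal is "bought a fridge" gets a kitchen co-purchase:
print(recommend(np.array([1, 0, 0, 0, 0])))   # → cookbook
```

With a single signal the system can only echo co-purchase statistics; it has no notion of what a fridge *is*, which is exactly the gap that machine-learned understanding of the items themselves could close.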

Now, suppose I post five photos of myself and Mr Porter knows what clothes to recommend, without my having to buy anything first, or go through any kind of onboarding? Suppose I wave my smartphone at my living room, and 1stDibs or Chairish know what lamps I’d like, without my having to spend days browsing, liking or buying across an inventory of thousands of items? And what happens if a dating app actually knows what’s in the photos? No more swiping – just take a selfie and it tells you what the match is. Seven or eight years ago this would have been science fiction, but today it’s ‘just’ engineering, product and route to market.


We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive

By Ryan Gallagher, Ludovica Jona
Jul 26 2019

They call it the Silent Talker. It is a virtual policeman designed to strengthen Europe’s borders, subjecting travelers to a lie detector test before they are allowed to pass through customs.

Prior to your arrival at the airport, using your own computer, you log on to a website, upload an image of your passport, and are greeted by an avatar of a brown-haired man wearing a navy blue uniform.

“What is your surname?” he asks. “What is your citizenship and the purpose of your trip?” You provide your answers verbally to those and other questions, and the virtual policeman uses your webcam to scan your face and eye movements for signs of lying.

At the end of the interview, the system provides you with a QR code that you have to show to a guard when you arrive at the border. The guard scans the code using a handheld tablet device, takes your fingerprints, and reviews the facial image captured by the avatar to check if it corresponds with your passport. The guard’s tablet displays a score out of 100, telling him whether the machine has judged you to be truthful or not.

A person judged to have tried to deceive the system is categorized as “high risk” or “medium risk,” depending on the number of questions they are found to have falsely answered. Our reporter — the first journalist to test the system before crossing the Serbian-Hungarian border earlier this year — provided honest responses to all questions but was deemed to be a liar by the machine, with four false answers out of 16 and a score of 48. The Hungarian policeman who assessed our reporter’s lie detector results said the system suggested that she should be subject to further checks, though these were not carried out.

Travelers who are deemed dangerous can be denied entry, though in most cases they would never know if the avatar test had contributed to such a decision. The results of the test are not usually disclosed to the traveler; The Intercept obtained a copy of our reporter’s test only after filing a data access request under European privacy laws.

The virtual policeman is the product of a project called iBorderCtrl, which involves security agencies in Hungary, Latvia, and Greece. Currently, the lie detector test is voluntary, and the pilot scheme is due to end in August. If it is a success, however, it may be rolled out in other European Union countries, a potential development that has attracted controversy and media coverage across the continent.

IBorderCtrl’s lie detection system was developed in England by researchers at Manchester Metropolitan University, who say that the technology can pick up on “micro gestures” a person makes while answering questions on their computer, analyzing their facial expressions, gaze, and posture.

An EU research program has pumped some 4.5 million euros into the project, which is being managed by a consortium of 13 partners, including Greece’s Center for Security Studies, Germany’s Leibniz University Hannover, and technology and security companies like Hungary’s BioSec, Spain’s Everis, and Poland’s JAS.

The researchers at Manchester Metropolitan University believe that the system could represent the future of border security. In an academic paper published in June 2018, they stated that avatars like their virtual policeman “will be suitable for detecting deception in border crossing interviews, as they are effective extractors of information from humans.”


Busing Worked in Louisville. So Why Are Its Schools Becoming More Segregated?

By John Eligon
Jul 28 2019

LOUISVILLE, Ky. — When she saw the news images of angry white mobs pelting school buses with rocks and bottles, Sherlonda Lewis was glad that she was not among the black students being bused to a school in a white neighborhood.

It was 1975, and Louisville had initiated a court-ordered effort to integrate its public schools by busing students out of their racially segregated communities. As a high school senior that year, Ms. Lewis was exempt from being bused from her predominantly black neighborhood of Smoketown in central Louisville. Having seen the violent resistance, she considered herself lucky.

“I didn’t think it would last,” Ms. Lewis, 60, said of the busing plan.

Little did she know, that same integration program would go on to be widely embraced by members of the community, educating three generations of her family.

While some desegregation plans faltered in the face of white resistance, Louisville’s has proved remarkably resilient. It has survived riots and court rulings, skeptical superintendents and conservative lawmakers, making Jefferson County Public Schools, which includes Louisville, one of the nation’s most racially integrated districts.

But if Louisville is proof that busing can work when there is the political will to have an integrated school system, its community is now grappling with what happens when that political will starts to dry up.

These tensions — coming at a time when the nation is once again battling over the effectiveness of school integration — are the latest development in a series of changes that, in recent decades, have steadily chipped away at Louisville’s original integration plan.

A recent survey commissioned by the district showed dwindling support for the plan and a decreased interest in diversity among parents. Struggling schools and a yawning achievement gap between black and white students are drawing more attention these days than the benefits of maintaining racially integrated classrooms.

As the district’s schools slowly become more segregated, officials are considering more reforms that will almost certainly increase segregation.

The state’s Department of Education proposed taking over the district last year after finding myriad problems, from financial mismanagement to flaws in the desegregation program, known as the student assignment plan. State officials agreed to give district leaders until next year to carry out reforms.

“Right now, we’re doing our best to fight back Jim Crow and Jane Crow Jr.,” said Delquan Dorsey, Ms. Lewis’s son, who works as the district’s community engagement coordinator. “We know separate but equal doesn’t work.”

There are dozens of school districts across the country like Louisville that continue to follow desegregation plans, whether court ordered or not, with supporters often pointing to research that suggests the black-white achievement gap narrows where integration is fully accepted. And yet opposition has never been very far behind.

In the past two decades, dozens of affluent, mostly white communities have tried to secede from diverse school districts to form their own. A conservative law firm filed a lawsuit last year to challenge a decades-old system that helped desegregate public schools in Hartford. A current lawsuit in Minnesota argues that the state’s school system is unconstitutionally segregated.

Louisville’s integration program has existed since 1975, when a court order merged city schools with suburban ones. The year before, a similar plan in Detroit was struck down by the United States Supreme Court.

Both Louisville and Detroit were about 20 percent black and equally racially segregated at the time, according to a report by Myron Orfield Jr., the director of the Institute on Metropolitan Opportunity at the University of Minnesota. But in the decades that followed, Detroit’s schools became overwhelmingly black and underperforming as white residents fled for suburban enclaves.

Louisville is now part of a countywide school system of roughly 100,000 students that is 42 percent white, 37 percent black and 12 percent Hispanic. About half of its black students, and two-thirds of all students, attend integrated schools, according to Will Stancil, a research fellow at the institute who defined integrated as having a population between 20 percent and 60 percent nonwhite.

By 2011, black students in Louisville were twice as likely to score “proficient” on math and reading tests as those in Detroit, Mr. Orfield found.

Janet Pinkston, who is white, was bused to duPont Manual High School, which performed better academically than the school she otherwise would have attended. It was her first meaningful exposure to black people.

“The people who went through it, like I did, saw that it has some value,” said Ms. Pinkston, 57, a freelance writer. “They saw that it has a halo effect. It does change your life.”


Can This Ancient Greek Medicine Cure Humanity?

Can This Ancient Greek Medicine Cure Humanity?
There’s fresh interest in a fabled shrub on the Aegean island of Chios
By Frank Bruni
Jul 26 2019

CHIOS, Greece — Over my 54 years, I’ve pinned my hopes on my parents, my teachers, my romantic partners, God.

I’m pinning them now on a shrub.

It’s called mastic, it grows in particular abundance on the Greek island of Chios, and its resin — the goo exuded when its bark is gashed — has been reputed for millenniums to have powerful curative properties.

Ancient Greeks chewed it for oral hygiene. Some biblical scholars think the phrase “balm of Gilead” refers to it. It has been used in creams to reduce inflammation and heal wounds, as a powder to treat irritable bowels and ulcers, as a smoke to manage asthma. I’m now part of a clinical trial in the United States to determine if a clear liquid extracted from mastic resin can, through regular injections, repair ravaged nerves.

That would have profound implications for millions of Alzheimer’s patients, stroke survivors — and me. The vision in my right eye was ruined by a condition that devastated the optic nerve behind it, and I’m at risk of the same happening on the left side, in which case I wouldn’t be able to see a paragraph like this one.

Will a gnarly evergreen related to the pistachio tree save me? That’s unclear. But in the meantime, I thought I should hop on a plane and meet my medicine.

Chios has just 50,000 or so year-round residents. It lies much closer to Turkey than to the Greek mainland. And there’s no separating its history from that of mastic.

In the 1300s and 1400s, when Chios was governed by the Republic of Genoa, the punishment for stealing up to 10 pounds of mastic resin was the loss of an ear; for more than 200 pounds, you were hanged. The stone villages in the southern part of the island, near the mastic groves, were built in the manner of fortresses — with high exterior walls, only a few entrances and labyrinthine layouts — to foil any attempts by invaders to steal the resin stored there.

Today there’s fresh interest in mastic — which is a tree or a shrub, depending on the individual plant’s size — as pharmaceutical companies and supplement manufacturers scour the natural world for overlooked or underutilized wonders: sprouting, blooming or oozing remedies developed in the largest laboratory of all. Might something more than superstition explain the spell cast by mastic over time?

“This tree has been selected by humans for 3,000 years,” Leandros Skaltsounis, a professor of pharmacology at the University of Athens, told me when I visited Chios in early July. “We’ve always known that mastic is good for health. Now we’re learning the reasons. It has huge potential.”

I ran into Skaltsounis beside the dusty construction site for a new building to accommodate technicians and equipment dedicated to studying (and, ideally, validating) mastic’s various applications. He had come to Chios for the project’s official blessing, and stood among more than a dozen business executives and scientists who listened as a bearded, black-robed Greek Orthodox priest sang hymns and prayed that the work done here would end suffering far and wide.

It’s a lot to ask of a plant. But then it’s hardly an unprecedented request. Many indispensable medicines can be traced back to the earth’s forests and fields: another reason to protect and nurture them a whole lot better than we do. Although we now use a synthetic version of aspirin, it was originally made from a compound found in the bark of the willow tree and its kin. Hippocrates reputedly prescribed chewing such bark or drinking tea brewed with it for pain.

The cancer drug taxol, the malaria drug artemisinin, the opiate morphine and much more are the bequests of bark, leaves, flowers, berries, herbs or roots, some of which captured the attention of modern scientists because ancient folk healers venerated them. 

There’s a formal name for the quest to find more drugs like these — bioprospecting — and scientists involved in it frequently pore through old tomes for clues to where in nature they should look. They know that we’ve only scratched the surface of what’s out there.

They know, too, that what we’ve already discovered — mastic resin, for example — may be able to do more than we’ve asked of it. That’s why scores of Americans with my vision impairment, known as Naion, are injecting a translucent amalgam of selected compounds in the resin — or a placebo of cottonseed oil — into our thighs or bellies twice weekly for six months. I have no idea which group I fall into or whether my stint as a human pincushion is helping me. Three months in, I haven’t experienced any improvement.

The drug is the raison d’être of an Israeli biotech start-up, Regenera Pharma, built on an Iraqi émigré’s research. In animal tests and two small-scale human studies, Regenera established that it was safe and showed enough promise in restoring neural function that the Food and Drug Administration blessed the larger trial that I’m in, which will involve nearly 250 people with Naion at a dozen sites in the United States.

We’re perfect test subjects, because we have just one discrete neural function to monitor — vision. Either we correctly read more letters on an eye chart or we don’t. But Naion is rare, affecting only about one in 10,000 Americans, so we’re only a small fraction of the market that Regenera is after. If the drug, RPh201, works, it or its derivatives could be useful for an array of neurological or neurodegenerative disorders. But that’s a big if.