Can Behavioral Science Help in Flint?
An unusual team of White House scientists works through the final days of the Obama Administration.
By Sarah Stillman
January 23, 2017 Issue
A week after Donald Trump’s election, a thirty-year-old cognitive scientist named Maya Shankar purchased a plane ticket to Flint, Michigan. Shankar held one of the more unorthodox jobs in the Obama White House, running the Social and Behavioral Sciences Team, also known as the President’s “nudge unit.” When she launched the team, in early 2014, it felt, Shankar recalls, “like a startup in my parents’ basement”—no budget, no mandate, no bona-fide employees. Within two years, the small group of scientists had become a staff of dozens—including an agricultural economist, an industrial psychologist, and “human-centered designers”—working with more than twenty federal agencies on seventy projects, from fixing gaps in veterans’ health care to relieving student debt. Usually, the initiatives had, at their core, one question: Could the growing body of knowledge about the quirks of the human brain be used to improve public policy?
For months, Shankar had been thinking about how to bring behavioral science to bear on the problems in Flint, where a crisis stemming from lead contamination of the drinking water had stretched on for almost two years. She wondered if lessons from the beleaguered city could inform the Administration’s approach to the broader threat posed by lead across America—in pipes, in paint, in dust, and in soil. “Flint is not the only place poisoning kids,” Shankar said.
In recent years, behavioral science has become a voguish field. In 2002, the Israeli psychologist Daniel Kahneman won a Nobel Prize in Economic Sciences for his work with a colleague, Amos Tversky, exploring the peculiarities of human decision-making in the face of uncertainty. (Their collaboration is the subject of a popular new book by Michael Lewis, “The Undoing Project: A Friendship That Changed Our Minds.”) A basic premise of the discipline they’d helped to create was that human judgment is bias-prone, susceptible to the cognitive equivalent of optical illusions. As a result, small tweaks of presentation or circumstance could make a major difference: if a judge rendered a decision about granting parole just before a meal, an inmate’s odds of a favorable outcome dipped to near zero; just after the judge ate, the chances rose to around sixty-five per cent. Grocers had learned that they could sell twice as much soup if they placed a sign above their cans reading “limit of 12 per person.”
But, for all the field’s potential, its advances seemed mostly to have served the private sector, where they often veered toward sly consumer coercion. A prominent exception was the “nudge,” a notion advanced by the legal scholar Cass R. Sunstein, now at Harvard Law School, and the University of Chicago behavioral economist Richard Thaler in their 2008 best-seller, “Nudge: Improving Decisions About Health, Wealth, and Happiness.” They stressed the role of “choice architecture”: the countless factors that coalesce around a given decision, shaping outcomes in crucial, if barely visible, ways; those factors, they argued, could be rearranged. Sunstein and Thaler described the concept with public policy very much in mind. The subtle context in which we make choices, they theorized, could and should be stacked in favor of the social good. In the public sector, this meant gently nudging citizens toward certain choices through techniques, such as automatic enrollment and reminder prompts, that take into account the fact that most of us, as Thaler told me, are “more like Homer Simpson than like Albert Einstein.”