Upstream: The Quest to Solve Problems Before They Happen

Book Notes

Reading Notes on Preventative Interventions

In a Nutshell: Upstream is a book about prevention: not just mitigating the damage of problems after they happen (downstream), but addressing the underlying drivers so that the problem no longer occurs or is much less severe.

  • This type of work is hard but important in just about every field: operations, climate, city planning, health care, safety, etc.
  • The key to this kind of work is first ensuring people actually recognize the problem exists and have convincing evidence that it's a serious issue.
  • Then you have to get a diverse set of stakeholders working together on the problem, establish rigorous ways of evaluating progress along the way, and experiment with lots of strategies to address the problem.

The Author: Dan Heath is a Senior Fellow at Duke University’s CASE Center, which studies and shares knowledge on building successful social enterprises. He is the co-author, with his brother Chip, of some of my favorite books of all time - Made to Stick, Switch, Decisive, and The Power of Moments. Their books are always insightful, funny, and full of memorable examples and concrete strategies.

Here are some of my favorite ideas from the book. All quotes are from Upstream/Dan Heath unless specified.

Downstream efforts (aka "fighting fires") are visible while upstream work is not.

Heath uses the example of two police officers: one stays highly visible and makes drivers more careful, while the other pops out of the shadows to ticket drivers making illegal turns. One prevents accidents and comes back empty-handed; the other returns with a stack of tickets.

That’s one reason why we tend to favor reaction: Because it’s more tangible. Downstream work is easier to see. Easier to measure. There is a maddening ambiguity about upstream efforts.

But nearly everyone agrees that doing upstream work is really important.

When different political groups, from black Democrats to white Republicans, were asked how they might allocate spending to best promote health in the US, they ended up with nearly the same distribution: roughly one-third going to the formal health care system and the rest to things like healthy food, affordable housing, and childcare.

So, even as we engage in fierce fights with people across the aisle, we’re all secretly in agreement about how our spending should be allocated. Across the political spectrum, we think the best way to “buy health” is to invest two-thirds of our money into systems that make people healthy (food, housing, etc.) and one-third into systems that heal sick people. To say it a different way, for every $1 we spend on downstream health care, most of us think it would be wise to spend $2 upstream.

Upstream work can easily backfire

There are countless examples of reasonable prevention strategies that fail:

  • When the Mexican government limited car driving based on odd or even license plate numbers to reduce pollution, people ended up buying additional cars that polluted even more
  • When the British government tried to handle the cobra problem in India by putting a bounty on cobra heads, some people started cobra farms to make money, worsening the problem
  • Environmentalists on a particular island have been playing whack-a-mole with various invasive species first released 50+ years ago, but every intervention affects the ecosystem in a sort of "I know an old lady who swallowed a fly" kind of way.

But that desire for control—I can mold this situation to my desires—can also tempt us to act in situations that we don’t fully grasp. We tinker with systems we barely understand, stumbling into a maze of unintended consequences. There’s no doubt that our noble efforts to make the world better can very easily make the world worse.
Good intentions guarantee nothing.

Tackling problems upstream requires systems thinking and a great deal of collaboration

In the early 2000s, Chicago Public Schools (CPS) made a concerted effort to raise students' academic performance while keeping kids from dropping out. Over the decade from 2008 to 2018, an estimated 30,000 additional students earned a diploma, resulting in $10 billion in additional lifetime earnings.

The story of CPS’s success foreshadows many of the themes we’ll explore in the book. To succeed upstream, leaders must: detect problems early, target leverage points in complex systems, find reliable ways to measure success, pioneer new ways of working together, and embed their successes into systems to give them permanence.

To solve a problem you have to acknowledge its existence

The situation often starts with people not seeing or admitting there's a problem. Heath calls this "problem blindness" (he's great at coining phrases). Examples:

  • Pro sports teams are sometimes blind to the fact that injuries can be prevented with targeted physical therapy
  • Workplaces were blind to sexual harassment before the term was actually coined in 1975.
  • Parents used to ride with babies in their laps before car seats became mandatory (after a very long political battle) and no one saw a problem with it.

Farley cast about intentionally for a term—a label—that would capture these shared experiences, and she settled on sexual harassment. She later wrote in the New York Times, “Working women immediately took up the phrase, which finally captured the sexual coercion they were experiencing daily. No longer did they have to explain to their friends and family that ‘he hit on me and wouldn’t take no for an answer, so I had to quit.’ What he did had a name.”

Prevent tunneling by adding slack & sync-up time

When you're overwhelmed with problems, it can be hard to get out of short-term thinking, so you're constantly reacting and putting out the next fire. Heath gives the example of hospital nurses who are constantly struggling to deal with unexpected problems for their individual patients and don't have time to flag or address the underlying issues because they're so narrowly focused.

The psychologists Eldar Shafir and Sendhil Mullainathan, in their book Scarcity, call this “tunneling”: When people are juggling a lot of problems, they give up trying to solve them all. They adopt tunnel vision. There’s no long-term planning; there’s no strategic prioritization of issues.
Some hospitals, for instance, create slack with a morning “safety huddle” where staffers meet to review any safety “near-misses” from the previous day—patients almost hurt, errors almost made—and preview any complexities in the day ahead. A forum like that would have been the perfect place for a nurse to mention, “The security ankle bands keep falling off the babies!”
The safety huddle isn’t slack in the sense of idle time. Rather, it’s a guaranteed block of time when staffers can emerge from the tunnel and think about systems-level issues. Think of it as structured slack: A space that has been created to cultivate upstream work. It’s collaborative and it’s disciplined.

Often big problems are tackled through lots of smaller interventions

In the '90s, Iceland had a huge problem with teen drinking and drug use. In surveys, a scary fraction of Icelandic kids reported having been drunk or high the day before, which finally spurred leaders in health, education, and government into action.

Most of them boil down to having better ways for teens to spend their time: by participating in sports and extracurricular activities, or simply by hanging out more with their parents. (Interestingly, research suggests the quantity of time spent matters more than quality—which was not altogether welcome news for many Icelandic parents, Sigfúsdóttir reported.) In short, a teenager’s discretionary hours are finite, so a well-behaved hour can crowd out a badly behaved one.
In other words, we shouldn’t fight teenagers’ instinct to “get high.” Instead, we should give them safer ways to get high. The campaign leaders had already known that kids needed better ways to spend their time—that was a classic protective factor—but Milkman’s insight added some nuance. Teens don’t just need more activities of any kind, they need activities with natural highs: games, performances, workouts, exhibitions. Activities that compel them to take physical or emotional risks.

Holistic interventions mean pulling a lot of people together to surround the problem

Preventive interventions often require a new kind of integration among splintered components. To succeed in upstream efforts, you need to surround the problem, meaning you need to attract people who can address all the key dimensions of the issue. In Iceland, the campaign leaders engaged the teenagers and almost all the major influences on them: parents, teachers, coaches, and others. Each one had something critical to contribute.

Focus on leverage points

When dealing with systems, you have to find the points where you have leverage, meaning your effort to change the system will result in significant and lasting results. This can be seen very clearly when trying to prevent urban gun violence.

“Very often you read these reports and you think, ‘I just cannot believe that someone is dead because of this,’ ” said Pollack, the public policy professor. Pollack emerged from his research with a new mental model of what was causing violent deaths. “We’re the University of Chicago, so we have to have equations,” he said. “My fundamental equation is a couple of young guys plus impulsivity, maybe plus alcohol, plus a gun, equals a dead body.”
All of those are potential leverage points: moderating impulsivity or reducing alcohol consumption or restricting access to guns. The next question becomes: Can you identify an intervention that could plausibly accomplish one of those goals?

Scaling up successful small-group interventions is hard

Around the leverage point of moderating impulsivity, a man named Anthony Di Vittorio created a program called BAM ("Becoming a Man") that seemed really effective at helping young men manage their emotions (and was something they were excited to do).

Tony D introduced a tradition called the “check-in” at the beginning of each session. He’d arrange the young men in a circle—there were usually 8 to 10 in each class—and ask each to reflect briefly on how he was doing that day: physically, emotionally, intellectually, and spiritually. At first, the young men were reluctant. Skeptical. Tony D would goad them for a one-word answer, at least: mad, sad, or glad. With time, they began to open up. They saw it was safe to share problems, to talk about their pain or their anger. By the end of the semester, it had become one of their favorite activities—the one time in the school day when they could lower their guards and just be themselves.

In 2009 they received funding to scale the program up to 18 schools, going beyond just Tony D himself and training other facilitators to do the work. Subjecting BAM to rigorous evaluation was risky: people will fund unproven programs, but they won't fund programs that research has shown don't work.

Among the students who participated in BAM, arrests were down 28% versus the control group. Violent-crime arrests were cut practically in half (down 45%). In the room, jaws dropped. Pollack said it was “one of the greatest moments of my entire career. They had no idea what the results were going to be. Because they see—in the kids that they work with—they see a lot of tragedy. A kid is shot. People fail. People get arrested. What they never got to see is what would have happened if they hadn’t been there.”

Stop asking whether an intervention is going to "pay for itself"

When a real problem is right in front of us, we spend the money and never ask whether we "have to do it" (e.g., a hurricane tears down 10 blocks, or a drug has a dangerous side effect and needs to be recalled). Heath argues that one of the biggest enemies of successful upstream interventions is the idea of ROI: it's so hard to prove that any specific intervention made the difference, even when collectively they made a big one.

One of the most baffling and destructive ideas about preventive efforts is that they must save us money. Discussions of upstream interventions always seem to circle back to ROI: Will a dollar invested today yield us more in the long run? If we provide housing to the homeless, will it pay for itself in the form of fewer social service needs? If we provide air conditioners to asthmatic kids, will the units pay for themselves via fewer ER visits?
These aren’t irrelevant questions—but they aren’t necessary ones, either. Nothing else in health care, other than prevention, is viewed through this lens of saving money. Your neighbor with the heroic all-bacon diet—when he finally ends up needing heart bypass surgery, there’s literally no one who is going to ask whether he “deserves” the surgery or whether the surgery is going to save the system money in the long haul.

An egregious example comes from New Orleans, where hurricane-planning exercises built around a hypothetical "Hurricane Pam" had their follow-up sessions canceled in the same year that real-life Hurricane Katrina hit.

No single training, no matter how ingenious, is sufficient to prepare for a catastrophe. IEM, the contractor that invented Hurricane Pam, had planned multiple additional exercises in 2005 to push the work forward. “But in a breathtaking display of penny-wise planning,” the authors of Disaster wrote, “FEMA canceled most of the follow-up sessions scheduled for the first half of 2005, claiming it was unable to come up with money for the modest travel expenses its own employees would incur to attend. FEMA officials have since said that the shortfall amounted to less than $15,000.”
FEMA said no to $15,000. Congress ultimately approved more than $62 billion in supplemental spending for rebuilding the Gulf Coast areas demolished by Katrina. It’s the perfect illustration of our collective bias for downstream action. To be fair, no amount of preparation was going to stop the Gulf Coast from being damaged by a Category 5 hurricane. But the proportions are so out of whack: We micromanage thousands or millions in funds in situations where billions are at stake.

Use early warning signs to respond faster and smarter

Sometimes the most upstream you can get is simply detecting sooner that the problem is about to happen. Heath gives examples around domestic violence (an abusive husband is let out of jail), school shooters (social withdrawal, suicidal ideation), and 911 first responders who are deployed in areas that historically have had more 911 calls at certain times of the day or year.

The real-time location of all the ambulances is pinpointed on the maps, and each one is surrounded by a halo that shows the area it could reach within 10 minutes. When a 911 call comes in, the closest ambulance to the emergency is deployed. Then all the other nearby ambulances shift their locations dynamically in order to fill the hole left by the deployed ambulance.
This is the model of an early-warning story: Data warns us of a problem we wouldn’t have seen otherwise—say, needing ambulances deployed closer to nursing homes at mealtimes. And that predictive capacity gives us the time to act to prevent problems. Northwell paramedics can’t stop people from suffering cardiac arrest, but they can stop some of those people from dying.
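
To make the mechanics concrete, here is a minimal Python sketch of the "dispatch the closest unit, then shift the others to fill the hole" logic the quote describes. The fleet, coordinates, and function names are all illustrative assumptions, not Northwell's actual system.

```python
# Minimal sketch of dynamic ambulance deployment (illustrative, not Northwell's system).
import math

# Standby posts chosen so each unit's "halo" covers its area (made-up coordinates).
fleet = {"A1": (0.0, 0.0), "A2": (4.0, 0.0), "A3": (8.0, 0.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dispatch(call_xy):
    """Send the closest ambulance to the emergency and return its vacated post."""
    unit = min(fleet, key=lambda u: dist(fleet[u], call_xy))
    post = fleet.pop(unit)
    return unit, post

def rebalance(vacated_post):
    """Shift the nearest remaining idle unit toward the hole left by the dispatch."""
    if fleet:
        mover = min(fleet, key=lambda u: dist(fleet[u], vacated_post))
        fleet[mover] = vacated_post  # restore coverage of the vacated area

unit, post = dispatch((3.5, 1.0))
rebalance(post)
print(f"{unit} dispatched; repositioned fleet: {fleet}")
```

The same shape generalizes to the early-warning idea: historical call data picks the standby posts (e.g., near nursing homes at mealtimes), and the dispatch/rebalance loop keeps every area inside a 10-minute halo.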

When measuring success metrics for upstream interventions, watch out for "ghost victories"

Since long-term outcomes (like freshmen's 4-year graduation rates) take too long to measure on their own, it's necessary to use shorter-term metrics like GPA or advancement to the next grade as proxies. But proxies can be flawed in at least three ways that are important to watch out for. Heath uses the example of a baseball team measuring home runs as a proxy for having a winning season.

In the first kind of ghost victory, your measures show that you’re succeeding, but you’ve mistakenly attributed that success to your own work. (The team applauds itself for hitting more home runs—but it turns out every team in the league hit more, too, because pitching talent declined.)
The second is that you’ve succeeded on your short-term measures, but they didn’t align with your long-term mission. (The team doubled its home runs but barely won any more games.)
And the third is that your short-term measures became the mission in a way that really undermined the work. (The pressure to hit home runs led several players to start taking steroids, and they got caught.)
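
As a hedged sketch of how you might guard against the first ghost victory, compare your improvement against a baseline rather than against your own past in isolation. All numbers below are invented for the example.

```python
# Illustrative check for ghost victory #1: did we really outperform,
# or did the whole league improve? Numbers are made up.
our_hr = {"last_year": 150, "this_year": 180}
league_avg_hr = {"last_year": 140, "this_year": 170}

our_gain = our_hr["this_year"] - our_hr["last_year"]
league_gain = league_avg_hr["this_year"] - league_avg_hr["last_year"]

# Improvement beyond the league-wide trend (a difference-in-differences).
excess_gain = our_gain - league_gain
if excess_gain <= 0:
    print("Ghost victory: the league improved as much as we did.")
else:
    print(f"Real signal: we beat the league trend by {excess_gain} home runs.")
```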

Gaming the metrics is a huge risk with upstream efforts, and if not prevented it corrupts the mission

An astonishing example of this came from the NYPD in the 1990s, which used a crime-measurement system called CompStat to evaluate precincts and their commanders. The system drove officers and chiefs to game the numbers.

“If your crime numbers are going in the wrong direction, you are going to be in trouble. But some of these chiefs started to figure out, wait a minute, the person who’s in charge of actually keeping track of the crime in my neighborhood is me. And so if they couldn’t make crime go down, they just would stop reporting crime."
“And they found all these different ways to do it. You could refuse to take crime reports from victims, you could write down different things than what had actually happened. You could literally just throw paperwork away. And so [the chief] would survive that CompStat meeting, he’d get his promotion, and then when the next guy showed up, the number that he had to beat was the number that a cheater had set. And so he had to cheat a little bit more.…

Use paired metrics to prevent/reduce gaming

At Meta, data scientists often use something called "guardrail metrics" or "health metrics" that explicitly watch for teams gaming the system. I enjoyed learning that this concept traces back to Andy Grove (who invented so much of tech management, my god).

They used what Andy Grove, the former CEO of Intel, called “paired measures.” Grove pointed out that if you use a quantity-based measure, quality will often suffer. So if you pay your janitorial crew by the number of square feet cleaned, and you assess your data entry team based on documents processed, you’ve given them an incentive to clean poorly and ignore errors, respectively. Grove made sure to balance quantity measures with quality measures.
Note that the researchers who assessed CPS used this pairing: They balanced a quantity metric (number of students graduating) with quality ones (ACT scores, AP class enrollments).
In New York City in 2017, NYPD finally added some complementary measures to CompStat: questions for local citizens that measure how safe they feel and how much they trust the police.
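
Here is a minimal sketch of what a paired-measure check might look like in code: a quantity metric only "counts" if its paired quality guardrail holds. The field names and thresholds are my own assumptions, not Grove's or Meta's.

```python
# Hedged sketch of Grove-style paired measures: pair each quantity metric
# with a quality guardrail. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class PairedMetric:
    name: str
    quantity: float       # e.g., documents processed this week
    quality: float        # e.g., fraction processed without errors
    quality_floor: float  # guardrail: minimum acceptable quality

    def verdict(self) -> str:
        if self.quality < self.quality_floor:
            return (f"{self.name}: guardrail FAILED "
                    f"(quality {self.quality:.0%} < floor {self.quality_floor:.0%})")
        return f"{self.name}: OK ({self.quantity:,.0f} units at {self.quality:.0%} quality)"

metrics = [
    PairedMetric("data_entry_docs", quantity=1200, quality=0.92, quality_floor=0.95),
    PairedMetric("janitorial_sqft", quantity=50000, quality=0.97, quality_floor=0.90),
]
for m in metrics:
    print(m.verdict())
```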

Experimentation is critical because lots of things will not work

Because upstream problems are tough to tackle, you have to build in time and space to run experiments, with the expectation that many of them will fail.

How can you know in advance which strand of common sense to trust? We usually won’t. As a result, we must experiment. “Remember, always, that everything you know, and everything everyone knows, is only a model,” said Donella Meadows, the systems thinker. “Get your model out there where it can be shot at. Invite others to challenge your assumptions and add their own.… The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error.”
Has an intervention been tried before that’s similar to the one we’re contemplating (so that we can learn from its results and second-order effects)? Is our intervention “trial-able”—can we experiment in a small way first, so that the negative consequences would be limited if our ideas are wrong? Can we create closed feedback loops so that we can improve quickly? Is it easy to reverse or undo our intervention if it turns out we’ve unwittingly done harm? If the answer to any of these questions is no, we should think very carefully before proceeding.

Lasting change takes time

It's frustrating to think that such important changes take so long, but upstream interventions require changing ingrained habits, processes, and systems, and they are often met with resistance. Heath's case studies are filled with long battles.

“Be impatient for action but patient for outcomes.” That’s a quote from Maureen Bisognano, the president emerita of the Institute for Healthcare Improvement, and it struck me as the perfect motto for upstream efforts. The world is full of groups who engage in lofty discussions—and feel virtuous doing so—but never create meaningful change. Change won’t come without action.
They started by picking a fight they thought they could win: making schools smoke-free. “Even tobacco farmers didn’t want their kids to smoke,” said Herndon. For years, they won tough victories at the local level—persuading school boards, one at a time, to banish smoking. By 2000, they’d convinced 10% of the state’s school districts to go tobacco-free. Think of it: It took her team a full decade to succeed in one-tenth of the state’s districts. And this was supposed to be the easy fight. That’s stamina.

Start small then scale

You can't solve a problem at scale if you don't know how to solve it for individuals. If we can't reliably make one person, family, or team healthier, safer, or more effective, it will be very hard to help a whole country or company.

Macro starts with micro. When we think about big problems, we’re forced to grapple with big numbers. What would it take to solve problems for 1,000 people? Your first instinct might be to say: We’ll have to think about the big picture, because we can’t very well intervene individually with 1,000 people. But that notion, as it turns out, is exactly wrong. Notice how often the heroes in this book actually organized their work on a name-by-name basis.
The lesson is clear: You can’t help a thousand people, or a million, until you understand how to help one. That’s because you don’t understand a problem until you’ve seen it up close. Until you’ve “gotten proximate” to the problem, as we explored in the chapter on leverage points.

Feedback loops help experimenters iterate along the way

In addition to running experiments, preventative interventions need short feedback cycles. Having a real-time scoreboard is worth more than a single powerful intervention because it allows more people to collaborate toward a better outcome.

Favor scoreboards over pills. I believe the social sector has been misled by a bad mental model: that running social interventions is a bit like distributing pills. First, you formulate a great “drug”: Maybe it’s a mentoring program or a behavioral therapy or a job-training model. Then you conduct a randomized control trial (RCT) of the “drug,” and if it proves effective, you attempt to spread it far and wide.
Feedback loops spur improvement. And where those loops are missing, they can be created. Imagine if, in the mastectomy situation, photos of surgical scars were taken automatically at patients’ follow-up visits, and those photos were sent back to the surgeons along with a comparison set of their peers’ work. (Or even more radically, imagine if the comparison set was shared with patients before their procedures, as an input into their choice of surgeon.)