You’ve probably seen a lot written lately about tech safety: the notion that we should make our software environments as safe to work in as our physical workplaces. If you haven’t, it’s definitely worth a read. Josh Kerievsky has conjured up a useful analogy for the protective measures we should be implementing in our development environments. From his initial article on the topic:
> Tech safety is a driving value, ever-present, like breathing.
>
> It is not a process or technique and it is not a priority, since it isn’t something that can be displaced by other priorities.
>
> Valuing tech safety means continuously improving the safety of processes, codebases, workplaces, relationships, products and services.
>
> It is a pathway to excellence, not an end unto itself.
>
> It influences what we notice, what we work on and how our organization runs.
It’s a noble concept and I wholeheartedly believe that it has merit. However, there’s an issue with the idea of Tech Safety lurking below the surface, and it’s called “risk homeostasis”. Risk homeostasis, a theory developed by Gerald Wilde, is the idea that we don’t save risk; we consume it. In other words, when we implement something to make our lives safer, we use it to justify riskier behavior in other areas of our lives, and so on the whole we’re no safer than we were before. There is no better overview of the concept of risk homeostasis than Malcolm Gladwell’s 1996 New Yorker article, “Blowup”. (You can also find it in What the Dog Saw, a collection of his best articles.) In the article, he examines the cultural and sociological factors that contributed to disasters like the Challenger explosion and Three Mile Island. At the top of the list: risk homeostasis.
A few examples:
- Studies have shown that Diet Coke does not help people lose weight. On the contrary: the supposed calorie “savings” are subconsciously used as an excuse to eat other high-calorie foods.
- When we lower our monthly expenses by, say, paying off a car loan, do we take that amount and put it in a savings account or find another way to spend it?
- Gladwell cites a study of taxi drivers in his article. Drivers whose cars were equipped with ABS were shown to drive more recklessly than those whose cars weren’t, supposedly because they “consumed” the risk savings provided by their anti-lock braking systems.
Don’t get me wrong, I believe that Tech Safety is a good idea. We need to protect our teams from the perils of fragile code. We owe it to our customers to protect them from bugs. But the safety systems we create, according to the risk homeostasis theory, are just going to give us permission to increase our risky behavior in other areas. We’re going to use the protections we’ve enabled not to make us safer, but to make us faster. That’s what the taxi drivers did, isn’t it?
So what’s the solution?
I think the first step is recognition. It’s human nature to equalize our risk tolerance. When we create our safety systems, let’s not treat them as the solution, but as the first iteration of a better system.
The next step is to find the balance for each of our safety checks. Our safety systems need to capture and isolate risk not only within the areas they’re intended to protect, but also in the areas where our appetite for risk might leak. Not just checks, but balances too. And checks-and-balances is where Agile shines. We protect against mishandled requirements with acceptance criteria and a clear definition of done (check), and prevent information silos with collective ownership (balance).
I’d like to hear your thoughts. How else can we protect our safety systems from risk homeostasis?