
Are you too data-driven?

How to leverage your intuition and make decisions

(even when the data aren’t clear)

 

I work with a lot of data-driven organizations.

 

Whether it’s finance geeks, business owners, software developers, or engineers, a lot of people who connect with my work self-identify as “data-driven.”

 

Don’t get me wrong, using data to make decisions is fantastic.

 

But, sometimes, it’s not enough.

 

At times, you may just need more data. You need to run an experiment longer, run a bigger survey, or just look more deeply at the data that you already have.

 

But, other times, you need a different approach; you need to break out of analysis paralysis.

 

Here are three approaches that you can use to make better decisions, even when embedded in a culture that celebrates data.

 

Broaden your perspective

 

I wrote recently about Chris Argyris’s ladder of inference. The ladder is a mental model of how we process data and make decisions. Its key power is making us aware of the unconscious filtering and interpreting of data that we all do.

And that’s the key — even in data-driven cultures, the data we have access to are biased — we’re looking at them because someone, somewhere, decided that we were going to take the XYZ approach to measuring something.

 

What we measure becomes the water in which we swim.

 

That’s OK — but the water often has a current that reflects preexisting compromises. For example, an engineering manager at a tech firm might be incentivized to have her team close bug tickets swiftly. Open tickets may draw the attention (and ire?) of more senior leaders.

 

All of that influences her behavior in a particular way. If a deeper dive into a section of code would forestall future problems but would leave a ticket open longer, then an engineer may not be incentivized to make that investment — even if it would be good for the long-term health of the software.

 

In cases like this, using intuition and conversation to think through example problems can help us rethink our approaches. It can help us broaden what we measure and see more clearly what’s in front of us.

 

Realize that “the map is not the territory”

 

I’ve written here about how the environment we operate in changes whether intuition is a source of bias or a superpower to make great decisions. In an environment with a lot of feedback, we can learn from our decisions and improve our intuition.

 

Even in the most data-driven cultures, the map is not the territory. Data can never faithfully or fully represent the world as it is, whether that’s because of the way informal networks promote technology adoption or because work actually gets done differently than the written procedure describes. That means that no matter how data-driven a culture is, data can’t be the only input for decisions.

 

As a result, many data-driven cultures struggle to incorporate intuitive solutions, even when they’re developed by experienced people drawing on their broad perspectives.

 

In these cases, rather than stake out and advocate a position, use intellectual judo to design an experiment:

 

I think we’d get newer developers up to speed faster if we paired them with formal mentors. Can we try an experiment where we do that for half of our starting developers and measure how much they deploy in the first three months?

 

These kinds of techniques work particularly well in big companies where you can benchmark internally. If you don’t have that option, you can still define smaller experiments where you predict specific outcomes and reflect on whether you met your expectations.
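To make the “measure how much they deploy” step concrete, here is a minimal sketch of comparing the two groups. The names, group sizes, and deploy counts are invented for illustration — the original piece doesn’t prescribe any particular tooling or numbers:

```python
# Hypothetical sketch: compare first-90-day deploy counts for new developers
# who were randomly assigned a formal mentor vs. those who weren't.
# All data below are made up for illustration.
from statistics import mean

deploys = {
    "mentored":   [14, 9, 17, 11, 13],   # deploys in the first three months
    "unmentored": [8, 12, 7, 10, 6],
}

for group, counts in deploys.items():
    print(f"{group:>10}: mean deploys = {mean(counts):.1f} (n={len(counts)})")

# A real analysis would also look at variance and sample size before
# concluding that mentorship made the difference.
```

Even a rough comparison like this turns an intuition (“mentors help people ramp up faster”) into a prediction you can check and discuss.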

 

Use HPPOs judiciously

 

When the data fail, the default way of making decisions often follows the HPPO approach — relying on the highest-paid person’s opinion.

 

That can be OK — sometimes a leader really does need to just make a decision. But groups often implicitly slide into this strategy. People might share their views, but subtly shape their input to match what they think a leader wants to hear.

 

That’s a mistake.

 

Instead, when you’re going to fall back on the highest-paid person’s opinion, make that explicit! A leader should proactively seek input while letting everyone know that she will ultimately be making the call. In this way, you can invite constructive dissent and empower everyone in the group to contribute.

 

That’s particularly useful when the data don’t show a clear path forward.

 

What about you? What approaches do you use to avoid analysis paralysis and make decisions, even when the data aren’t clear?

 

Subscribe here and let’s keep the conversation going…


WARNING: ABSOLUTELY NO BROWN M&M’s

Van Halen’s secret to avoiding disaster

I first heard the story on This American Life, but it seemed apocryphal.

When the band Van Halen was touring, the story goes, they required a bowl of M&M’s as snacks — with the brown ones removed.

For years, if the band’s requirement wasn’t discounted as a rumor, it was viewed as an absurd rock star excess. The band’s capriciousness demanded the removal of brown M&M’s from the bowl of candy, along with snacks like herring cream cheese and essentials (Tupelo honey and KY Jelly, for example).

Well, it turns out that it’s true! And there’s a really good reason for it.

Here’s David Lee Roth, the band’s lead vocalist, from his autobiography (as quoted on Snopes.com):

We’d pull up with nine eighteen-wheeler trucks, full of gear, where the standard was three trucks, max. And there were many, many technical errors — whether it was the girders couldn’t support the weight, or the flooring would sink in, or the doors weren’t big enough to move the gear through.

… there was so much equipment, and so many human beings to make [a show] function. So just as a little test, in the technical aspect of the rider, it would say “Article 148: There will be fifteen amperage voltage sockets at twenty-foot spaces, evenly, providing nineteen amperes …” This kind of thing. And article number 126, in the middle of nowhere, was: “There will be no brown M&M’s in the backstage area, upon pain of forfeiture of the show, with full compensation.”

So, when I would walk backstage, if I saw a brown M&M in that bowl … well, line-check the entire production. Guaranteed you’re going to arrive at a technical error. They didn’t read the contract. Guaranteed you’d run into a problem. Sometimes it would threaten to just destroy the whole show. Something like, literally, life-threatening.

It’s brilliant.

One of the things that we write about in Meltdown (read a free sample here) is that it’s hard to observe what’s actually going on in many complex systems, whether that’s because of the hidden features of abstract things (like correlations between complex financial products) or the difficulty of observing physical systems (you can’t send someone into the core of a nuclear power plant or to the bottom of the ocean to inspect a wellhead).

But transparency is often the antidote to complexity: even when we can’t simplify our systems, we can make things more visible. For example, banks that used a unified system to quantify their exposures (compared with those that emailed spreadsheets around) had a better understanding of their risk in the lead-up to the 2008 financial crisis.

That’s akin to what Van Halen did with the brown M&M’s: they found a simple signal that revealed the hidden risks they were facing.

It’s what I do with my consulting clients: we seek clear indicators for how our work is going and how we’re changing a team’s behaviors. I do it with my coaching clients: we create indicators that reveal the hidden beliefs that motivate actions, particularly the beliefs that work against someone’s stated goal.

What about you? How do you try to inject transparency into your systems?

Subscribe here and let’s keep the conversation going…


Making space for risk (where youth and wisdom meet)

I came to the conclusion the other day that my child does not, in fact, have brain damage.

 

This despite outward signs of impairment: difficulty staying on task, emotional volatility, inability to read and write fluently.

 

In his defense, he is a seven-year-old.

 

This realization started as we were reading a bedtime story together. T is working on his reading skills and had arrived at a challenging sentence.

 

We do a lot of work around developing a growth mindset in our family. But staying with something difficult is still really hard.

 

In that moment, T backed off from the challenge and redirected himself to anything else: rolling around on a yoga ball, jumping off the bed, looking out of the window, and opining on the merits of various construction strategies in Minecraft.

 

For whatever reason, I was able to detach from my usual drive toward bedtime and just observe him. It was lovely.

 

What I remembered in that moment was: while T doesn’t have brain damage, he also doesn’t have the same brain that you and I, dear reader, possess.

 

His brain lacks a robust prefrontal cortex (PFC). This part of the brain drives what neuroscientists call executive function: the ability to pay attention, to plan, to see the consequences of behavior, and to control impulses, among other things.

 

From the perspective of a parent, this can be… how to put it… infuriating.

 

But it’s also tremendously powerful. As the PFC develops, kids get better at focus and attention. But until about age 25, it remains underdeveloped. As a result, adolescents are willing to take many more risks.

 

Or, in the words of science: “Due to immature functional areas in the prefrontal cortex, adolescent teens may take part in risk-seeking behavior including unprotected sex, impaired driving, and drug addiction.”

 

Wow, that sounds bad! Why would evolution set us up to take a bunch of risks? Let’s turn, again, to science:

Risk-taking serves as a means of discovery about oneself, others, and the world at large. The proclivity for risk-taking behavior plays a significant role in adolescent development, rendering this a period of time for both accomplishing their full potential and vulnerability… risk-taking behavior is a normal and necessary component of adolescence. [emphasis mine]

 

I’ll take this one level beyond the data by adding my own hypothesis: evolution sets us up to take risks—not just because it helps us to develop as individuals, but because it pushes us to try new things as a society.

 

I’ve written before about how change is built around threshold effects. As we observe others participating in an activity—whether that is adopting a new technology or marching for equal rights—we’re more likely to do it ourselves.

 

Segments of the population that are more risk-seeking are likelier to be first movers who spur change for us all.

 

This is a powerful observation. Through an organizational lens, it suggests that innovation happens when we respect and incorporate the views of our most junior members. But hierarchy tamps down their views. Instead, change is usually a mandate from the top that’s pushed down.

 

It’s not that every change suggested by a junior hire will be a good one. It’s that some percentage of their ideas should be considered and tried, rather than relegated to the “that’s not how things are done here” suggestion box.

 

This jibes with my experience. In doing innovation work with organizations, one of my guiding beliefs is that groups already possess the wisdom to create their own change.

 

It’s a matter of creating a structure and a space for discussion. It’s a matter of creating parity between senior leaders (who often have great ideas) and voices that may not otherwise have a forum to contribute (who often have great ideas).

 

It’s a matter of figuring out ways to prioritize and test those ideas.

 

And it’s a matter of thinking carefully about the people not in the room. Who among them will an idea resonate with? Who will resist? And who will be on the fence but will move if they understand the benefits of a new approach?

 

How have you seen change work (or not work) in your organization(s)?

 

Subscribe here and join the conversation.


Three ways to hone the superpower that will transform the way you make decisions

Whenever groups are at loggerheads over a decision, it makes me think that a system is involved.

 

Whether it’s a question of public policy or a strategy for a complex project, when I notice entrenched opposing views, I sense that we haven’t gotten to the underlying issue.

 

Take work like software development or legal contract review. One leader, Rachel, might want to create a team to handle the work in-house, while her colleague Susan feels equally strongly that it should be outsourced.

 

Each can, of course, muster arguments to support their decision.

 

Rachel: “Handling it in-house will be higher quality!”

 

Susan: “It will be far cheaper if we outsource it!”

 

This is clearly a straw man dialogue, but it highlights something that many discussions of this type have in common: Rachel and Susan aren’t even talking to each other about the same aspect of the work. One is focused on quality, the other on cost.

 

Often, these kinds of discussions are resolved with a “HPPO” approach to decision-making; we fall back on the Highest Paid Person’s Opinion.

 

A great decision, however, draws on a deep understanding of the underlying system that highlights the real compromises — and the pseudo compromises that are actually opportunities for innovation.

 

So, what’s the secret to understanding a system?

 

It’s the superpower of curiosity. We have to prioritize being curious over being right.

 

That’s hard! Here are three ways to make it easier.

 

1. Listen deeply.

 

I wrote last week about the gift of listening and resting in uncertainty. This often isn’t the right strategy with a peer, but it can drive insight with your customers (who may be end users or colleagues from a different division that your solution will ultimately serve). Listening deeply can help you tease out the underlying factors that are important.

 

2. Combine advocacy with inquiry.

 

With peers, we already have a starting point: views and experience of our own that we can draw from. In these cases, we can combine advocacy with inquiry.

 

One of my favorite tools for this is Chris Argyris’s ladder of inference. Chris was a Harvard Business School professor for many years.


The ladder of inference lets you share both beliefs and facts: “I think we should pursue outsourcing. Here are the facts about costs that I’m seeing. What do you see?”

It’s a subtle change, but linking beliefs to objective observations is helpful.

The shift is based on the idea that we take a lot of implicit steps to form our beliefs — filtering data, applying cultural lenses and biases to it, and then using the result to drive our decisions. Balancing advocacy and inquiry lets us make those steps more explicit (and lets us better understand our own and our colleagues’ mental models of the world).

 

3. Add structure.

 

As you know, I’m a fan of structure. To unpack seemingly dichotomous choices, I like the structure that my friends Roger Martin and Jennifer Riel developed in their book Creating Great Choices.

 

The Creating Great Choices framework helps teams flesh out extreme positions (outsource everything vs. bring all work in-house, for instance) to see the underlying benefit of each option.

 

In articulating the underlying jobs that the models do, we can see if there are ways of combining them into a third, integrated solution. It might be that we triage our contract work to outsource most contracts, but send important ones to an in-house specialist. In a software context, we might use outsourced developers, but integrate them into our sprint process as if they were in-house.

 

Roger and Jennifer’s framework is particularly useful for articulating bigger strategy decisions. It’s on my mind right now because I’m using it with a group that’s trying to decide which parts of their service should be centrally controlled and which should be delegated to business units within their large organization.

 

That’s three approaches to understanding a system — but there are many more. What are the skills you use to deepen your understanding?

 

Subscribe here and join the conversation.


Are you listening?

Some notes on feeling our way forward.

 

Normally, I take the space here to think and write about something related to business, systems, or what’s happening in the world at large.

 

But, right now, that doesn’t seem appropriate.

 

There are things to say. About, for example, how citizen complaints against officers might be used by police departments in the same way that the aviation and chemical industries improve by observing everyday near misses. About how policy choices implicitly (and explicitly) strengthen the systems that unequally distribute power in our society.

 

But this isn’t the time to break these things down on an intellectual level. The raw emotion in the world right now overwhelms me and lots of others.

 

It’s not time for the thinking mind to work alone.

 

Instead, I want to write something about listening deeply.

 

Deep listening is the process of suspending our stories while we attend to others.

 

Rather than listening to confirm our worldview, extract facts, or prepare our case, we listen to see and understand the person we’re talking with.

 

We listen without thinking ahead to our own counterarguments. We listen without judging the experience of others. We listen to witness our fellow humans.

 

On the wider stage, this gets harder. Political leaders have messages behind their messages, and online discussions tend toward the extreme. Both are often speaking not to communicate but to win. In these settings, we should be skeptical.

 

But in small groups, in person (or on Zoom), we can listen deeply. We can connect with our friends, family, and colleagues and give them the gift of assuming positive intent—that we are all showing up to do our best with the tools we have.

 

At work, it shows up when we recognize that our colleagues also want to solve problems worth solving. At home, it shows up when our kids scream at us—and we can step back and realize that they just need a snack.

 

One of the guiding principles I have rested on in the last couple of years is negative capability.

 

It’s a term coined by the poet John Keats. He used it to describe the ability to sit in confusion and uncertainty in pursuit of what he called beauty; I think of it as the ability to keep our need for a tidy answer at bay. It’s an incredibly difficult practice, one that a lot of entrepreneurs and business owners I know rely on because they sit in uncertainty every day.

 

To see and witness, to move forward without knowing where we’re going. To sit with a problem without trying to make it right — not because it doesn’t deserve to be solved, but because our efforts would be like those of a drowning person tiring themselves out in the waves.

 

It’s the ability to listen and respond to the world as it is, rather than as we think it should be.

 

It’s a muscle we need to use a lot more these days.

 

Subscribe here and join the conversation.


Trouble making decisions? How to use predetermined criteria to simplify the decision-making process

Hi — just a note as a short preamble. I wrote this piece before the death of George Floyd and the widespread response that followed, including destruction and protests not far from where I live in Seattle.

There’s a lot to absorb and reflect on. My focus, particularly as a privileged white American man, is on listening.

I’ll still be writing things and having a dialogue with you, dear reader, but for now, I’m going to hold off on the next pieces of the intuition mini-series that this piece is a part of.


 

One of the challenges of making decisions in this complex world is that we might not even know that our decisions are bad.

 

The human brain is so good at filtering out data that undermines our conclusions that we may never recognize that we could have taken a better path.

 

Fortunately, there’s a solution to this challenge: structure.

 

One broad class of strategies is to create feedback loops. We can break down a problem into more manageable bits that we can test, looking for validation of an idea and capturing feedback.

 

The best teams create feedback loops around their own work, reflecting on how they’re performing — and changing what they do as a result.

 

Reflection helps us get better over time (as do some of the strategies I wrote about here), but there’s another class of problems these approaches don’t work well on: decisions we make only once, at a single point in time.

 

Decisions like buying a house, hiring someone, or choosing a project.

 

Or, as it turns out, whether a patient should get an X-ray to see if a hurt ankle is sprained or broken.

 

For a long time, physicians diagnosing a hurt ankle were misled by symptoms that didn’t actually matter, like swelling. On the whole, they ordered many more X-rays than they needed to, using them as a sort of diagnostic safety net.

 

But those X-rays cost money — a lot of money when added up across everyone with an injured ankle — and exposed patients to unnecessary radiation. Doctors also missed severe fractures, skipping X-rays even when they were needed. They relied on their gut instincts, but they never got enough feedback to improve those instincts.

 

In the early 1990s, a team of Canadian physicians set out to change things. They ran a study to identify the factors that really mattered. The data showed that, by using only four criteria, doctors could cut the number of X-rays by a third but still catch every severe fracture.

 

Four simple questions turned every doctor into an expert diagnostician. They asked about pain, age, weight-bearing, and bone tenderness.

 

Simple and predetermined, these criteria did much better than doctors’ intuition.

 

The physicians had the benefit of a rigorous study, but using criteria can be powerful even without a big dataset.

 

Think about how we typically pick who should run an important, high-risk project. We might consider a pool of potential project managers, intuitively compare them, and then make a choice. But that would be letting our gut feelings lead us astray.

 

Instead, we should develop criteria based on the project. First, we determine the essential skills that the project manager will need to be successful, and what it would look like to be great, OK, or poor at those skills. Then, as we compare potential candidates, we rate them with a 1, 0, or −1 for each of those criteria.

 

If we’re working with a group to make the hiring decision, we can independently score each person and then average the results. This gives us a numerical representation of the overall strength of each candidate. Something like this:

[Figure: predetermined-criteria decision-making matrix]
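As a rough illustration of the averaging step — not a tool from the book, and with candidate names, raters, and ratings invented for the example — here’s how independently scored criteria might be rolled up into a single number per candidate:

```python
# Hypothetical example: average each rater's -1 / 0 / +1 ratings per candidate.
# scores[candidate][rater] is a list of ratings, one per predetermined criterion.
from statistics import mean

scores = {
    "Candidate A": {"rater_1": [1, 0, 1, -1], "rater_2": [1, 1, 0, 0]},
    "Candidate B": {"rater_1": [0, -1, 1, 0], "rater_2": [1, 0, 0, -1]},
}

for candidate, raters in scores.items():
    # Average within each rater first, then across raters.
    overall = mean(mean(ratings) for ratings in raters.values())
    print(f"{candidate}: overall strength = {overall:+.2f}")
```

The numbers themselves matter less than the discipline: everyone scores the same criteria independently before the group compares notes.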

I use this approach in my personal and professional life all the time; when writing Meltdown, in fact, my co-author András and I used it to evaluate the different failures we wanted to write about.

 

I’ve also used it in coaching to help clients make decisions; I worked with one couple to help them choose a school for their daughter.

 

What big decisions are you working with? Would establishing a simple set of criteria help you move forward more confidently?

 

Subscribe here and join the conversation.

 

Parts of this post were drawn from my book Meltdown, written with András Tilcsik. Used with the permission of the publisher, Penguin Press. Copyright © 2018 by Chris Clearfield and András Tilcsik.
