A lil' rant about assumption mapping


Hello hello, Tom & Corissa here,

We used to be called Trigger Strategy but now we're Crown & Reach. Thank you for welcoming us into your inbox.

Today we're getting all hot and bothered about assumption mapping.

First though, a spot of housekeeping:

  • NEW! Join us for Multiverse Mapping Live on Thursday 11th July. This is live mapping for a real person's real business situation, no safety net. Get your free ticket
  • Our next "what the heck's goin' on in tech?" live sense-making session will be Thursday 18th July. Sign up here
  • Our newsletter, podcast and YouTube channel are audience-supported efforts. Want to help keep it all alive, weird and free? Buy us a coffee

Once upon a time, at a well-known challenger bank, a senior designer ran an assumption mapping workshop.

He knew the score. He'd facilitated plenty of workshops, designed plenty of experiments, and he was damned good at his job.

The feature was exciting – the kind of release that everyone knew the users would be hungry for. The team was sharp AF, and they took pride in challenging their ideas before shipping.

The workshop went well. Assumptions were mapped. Healthy discussion ensued. Risks were flagged and actions agreed. A few weeks later, the feature launched.

It flopped.

The team were gutted. And shocked! They'd done everything by the book. What went wrong?

Here's the problem:

The assumptions that matter most – the ones that could derail your project – are the ones you don't recognise as assumptions. They're too central, too sacred. They feel like facts.

  • "Of course people want to track their spending. Who wouldn't?"
  • "Obviously, donors will shift their giving once they see the data!"
  • "Naturally, it's got to have AI in it!"

Philosopher Imre Lakatos would call these your "hard core" assumptions.

If you challenge the hard-core assumptions, you call into question the entire project ... maybe even the need for your team to exist.

Imagine an organisation has assumed it needs a mobile app, and hired you onto its mobile app team as a specialist developer.

Then one day you uncover evidence in the data that there's really no point building a mobile app at all. It's clear to you that a mobile app is going to be net negative, both for the users and for the company's bottom line.

What are you going to do?

  • Use your big jagged data blade to saw through the branch you and your mobile app teammates are sitting on?
  • Say the quiet part out loud? (You already know what happens ... now you're seen as a risk.)
  • Hmm. Wouldn't it be easier if you hadn't even considered that assumption?

Look, skating around the hard-core assumptions is normal. In most companies, it's risky to challenge assumptions. Even at the less risky end, it's socially awkward to be "that person" bringing everyone down. At the fiercer end, there's a performance review waiting just round the corner with "not a team player" stamped on it in red ink.

Sir, where are your underpants?

In the classic fairy tale The Emperor's New Clothes, con artists trick the emperor into believing they've made him a dazzling new suit out of magic cloth. Smart, worthy people can see the magic cloth. Only fools can't.

Word of this magic spreads, and nobody's willing to admit that they can't see the magic cloth for fear of losing status. Nobody, that is, except a small child – someone as yet unswayed by status games and social pressure.

There's a version of that story where the child in question meets a sticky end at the hands of everyone they made look foolish.

It's a crude comparison, but by golly, doesn't it feel familiar?

This doesn't only happen in corporations either.

Lakatos's model dates from the 1960s. It wasn't about corporate games; it was about research programmes run by actual scientists – people whose whole job is to shoot down assumptions with data.

And that's assuming you can even spot the important assumptions in the first place. Maybe you're so horrified by the emperor parading around in the nip that you don't notice that the big bad wolf has escaped from Little Red Riding Hood and is sneaking up on the crowd.

Just plug and unplug. Easy!

Let's say you really can spot the deadliest assumptions and list them without social repercussions. Let's say everyone's bought in.

Then we get to a deeper problem with assumption mapping. It tends to treat assumptions like modular units you can unplug from the initiative and test in isolation. As if you're working on an electronic circuit: you unplug one of the resistors, pop it in a resistor-testing machine, and put it back if it's good. Or grab a different resistor if it's bad.

But organisational assumptions aren't modular. They're load-bearing narrative threads entangled with culture, incentives, emotions and more. You can't lift one out without bringing a heaping mess of others along for the ride.

Even if you could just unweave one and pop it in an assumption-testing machine, you'd find it behaves differently in the machine. Gah! What you actually end up testing looks like the assumption you listed, but it isn't.

"Of course people want to track their spending" ... so the team trials a new colour-coded spending category. Engagement nudges up 2%, and they report that they've "validated" the assumption.

It feels tidy, it feels logical, it feels safe. And they invest another couple of years into spending-tracking that, it turns out, nobody wanted after all. Hey, they made some really pretty graphs along the way, though.

There are loads of stories of this kind of thing happening. We bet you've lived through some.

Don't try to write down the unwritedownable

So what do you do when the riskiest assumptions are the ones you can’t name or isolate?

You come at the thing sideways. Obliquely.

Try:

  • mapping behaviours instead of beliefs
  • expressing the implications of networks of assumptions and testing those safely
  • making it a positive game to try ideas that go against the grain
  • generating signals that expose more attractive assumptions
  • allowing people to quietly drop assumptions without losing face
  • treating strategy as a living pattern rather than a fixed plan or set of pillars.

There are methods for doing this. Some we've come up with at Crown & Reach (like Multiverse Mapping). Some you've probably invented yourself without ever naming them.

These methods tend to look messier and less clinical than assumption mapping workshops and hypothesis validation spreadsheets, which can make them harder for some orgs to tolerate. And they sometimes create results and answers you can't predict (or control). But when they work, they can reweave new narratives through the messy bundle, and sometimes even melt the hard-core assumptions away.

How about you? What oblique methods have you tried? What was it like? We'd love to hear from you.

Tom & Corissa x

Buy us a coffee (£5.00)

We publish loads of articles, podcasts and videos for free. If you've found what we've shared helpful and you'd like to... Read more

Crown & Reach, Suite A, 82 James Carter Road, Mildenhall, IP28 7DE
Unsubscribe · Preferences
