Thinking about experimentation maturity

Over the years, I've developed a mental model for understanding where organisations are in their experimentation journey. It's not original—versions of this exist all over the industry—but I find it useful for diagnosing what a team actually needs versus what they think they need.

The stages, roughly

Nascent. The organisation runs occasional experiments, usually in response to specific requests. There's no consistent process, tools are basic, and results are shared informally if at all.

Emerging. A dedicated team or person owns experimentation. There's a testing tool in place, some process for prioritisation, and regular reporting. But experiments are still mostly tactical, and impact is inconsistent.

Established. Experimentation is integrated into product development. Multiple teams run experiments independently. There's a learning repository, a hypothesis framework, and clear ownership at the executive level.

Optimised. Experimentation informs strategy, not just tactics. The organisation has moved beyond A/B testing to more sophisticated methods—Bayesian approaches, multi-armed bandits, causal inference. Learning compounds systematically.
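To make the "beyond A/B testing" point concrete, here's a minimal sketch of one such method: a Thompson-sampling multi-armed bandit for a binary conversion metric. The arm names, rates, and round counts are illustrative, not taken from any real deployment:

```python
import random

def thompson_sample(successes, failures, rng=random):
    """Pick the arm whose Beta-sampled conversion rate is highest."""
    best_arm, best_draw = None, -1.0
    for arm in successes:
        # Beta(s + 1, f + 1) posterior, i.e. a uniform prior updated
        # with the successes and failures observed so far
        draw = rng.betavariate(successes[arm] + 1, failures[arm] + 1)
        if draw > best_draw:
            best_arm, best_draw = arm, draw
    return best_arm

def run_bandit(true_rates, rounds=5000, seed=42):
    """Simulate a bandit against known (hypothetical) conversion rates."""
    rng = random.Random(seed)
    successes = {arm: 0 for arm in true_rates}
    failures = {arm: 0 for arm in true_rates}
    for _ in range(rounds):
        arm = thompson_sample(successes, failures, rng)
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures
```

Unlike a fixed-split A/B test, the bandit shifts traffic toward the better-performing variant as evidence accumulates, which is the sense in which learning compounds rather than restarting with each test.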

Why this matters

The most common mistake I see is teams trying to skip stages. An organisation at the nascent stage invests in sophisticated tooling designed for the optimised stage. It doesn't work, and they conclude that "experimentation isn't right for us."

What they needed was simpler: a consistent process, a few visible wins, and time to build institutional muscle.

Meeting teams where they are

When I start working with a new organisation, the first thing I try to understand is where they actually are—not where they think they are or where they want to be. The interventions that help at each stage are different:

At nascent, you need quick wins and simple process. Prove the value before investing in infrastructure.

At emerging, you need to shift from reactive testing to proactive learning. This usually means a hypothesis framework and better prioritisation.

At established, the challenge becomes scaling without losing quality: building capability across teams and creating systems for institutional memory.

At optimised, the focus shifts to strategic integration. Using experimentation insights to inform product and business strategy, not just feature releases.

A caveat

I'm wary of maturity models in general. They can create a false sense that progress is linear, or that the goal is always to reach the "highest" stage. That's not quite right.

What matters is fit: whether your experimentation capability matches your organisation's needs. A small startup might be perfectly well-served at the emerging stage. A large enterprise might need optimised capabilities just to keep up with the complexity of its business.

The model is a diagnostic tool, not a destination.