What’s working, what’s hard, what we’re still learning
The view three years into this transition.
The Trailblazer recognition lands three years into this transition, at a point where some things are clearer to me than they were and some things are murkier. What follows is the view from here.
What is clearly working.
Coverage has replaced hours as the way we measure the work. We have built strong internal AI capabilities and a deep appetite for innovation across the team. We have created mechanisms that expand both the team’s knowledge and its partnerships with stakeholders.
Day to day, that shows up as patterns across data instead of transactions, signals that surface earlier than the old model allowed, and partners who engage us before their decisions are locked in. Those are the wins the previous four essays describe.
The team built this. Three years of learning new work and doing it at the same time, bringing creativity and judgment to questions the profession had not answered yet. The lighthouse is possible because of what they were willing to become while the light was being built.
What is clearly hard.
The human piece. Capability transitions are not fast, and people who built careers on one set of skills do not always want to reinvent themselves for another. Thriving and leaving happen side by side in the same function, and leaders who pretend otherwise are not leading.
Governance keeping pace with capability. We can surface signal in hours that used to take months. Our decision processes still move on the old timeframe. When accountability, escalation paths, and decision rights are designed for retrospective certainty, forward-looking signal can stall. The capability can outrun the infrastructure around it.
Measuring what matters. Decisions not made, risks not escalated, quality improvements that compound quietly over time: these are where the real value lives. They are also difficult to count. We are still building the vocabulary and the discipline for what success looks like when prevention and earlier action are the product.
What we are still learning.
The biggest open question for us is how to build mechanisms that translate insight into shared understanding and durable decision clarity at scale.
In my third essay I set up the metaphor: the old model was a flashlight; the new one is a lighthouse. We know the lighthouse is possible; we have seen it work. What we are still learning is how to run it by default, without depending on specific people to carry it.
Access to collective intelligence is much greater and faster than it was. That means we see more, and we are required to interpret more. Systems can connect data and surface patterns. The work of deciding what those patterns mean, when they matter, and how the organization should respond still lives in people.
We had a valuable session recently with Derek and Laura Cabrera on exactly this question: how to build mechanisms that bring systems thinking to life so teams can form insight together instead of in parallel. It crystallized something I had been wrestling with. The tools are necessary; the insights live in the human layer above them, and that layer has to be designed. It shows up in forums, shared language, escalation models, and repeatable moments where weak signals are examined before they become obvious failures.
Since that session, my leadership team and I have been reevaluating our structure and our mechanisms: how we design the capabilities we need for success. Their DSRP framework (distinctions, systems, relationships, perspectives) has become part of how we work the whiteboard together. What I did not expect was seeing the teams pick it up on their own. That is what new operating muscle looks like when it starts to form.
Consider a concrete example. Investigators see a trend across their cases. Enterprise risk sees a widening gap in a control. Auditors see recurring issues in a specific process. None of these signals alone crosses an old threshold. Too often, they stay disconnected, each rational on its own. An AI agent may flag the pattern across them. Once it does, the questions people have to answer are the ones that actually drive resourcing and response: do we need more coverage here, or less? Is this a one-off, or a pattern? Are the signals strengthening or weakening over time? It still takes people to understand the dependencies, apply judgment, and decide whether the moment calls for monitoring, intervention, or action. That is what the lighthouse is meant to enable, and it does not yet happen by default.
Designing that human layer at scale, across a function the size of ours, is the work we are doing this year and next. The bottleneck is operating muscle. If you have a framework that does this well, I want to hear about it.
What the role of audit looks like five years out is not yet visible. The function is in motion. The last three years have taught me that seeing earlier is only the first step. Helping organizations understand what they are seeing, in time to act, is the harder and more important one. I expect the next three years to be just as instructive.