Essay · March 2026

Working with clarity when AI is in the loop

What changes when collective intelligence becomes instant.

Until very recently, the leader who assembled the full picture faster controlled the meeting; information-gathering was a career-differentiating skill, and an MBA was effectively a credential in doing it well.

That advantage collapsed in the time it takes to write a prompt. What used to take a team three weeks now takes an afternoon; the person whose career was built on being great at gathering discovers the gathering is now the cheap part.

The harder collapse is in the rhythm around decisions. The weekly one-on-one, the monthly leadership team review, the quarterly offsite: these existed because the pace of information moving through an organization was slow. You needed time built in for alignment, and you needed ritual checkpoints for shared understanding. The decision calendar was paced to the speed of information because that was the binding constraint.

AI removed the binding constraint. The pace of information is now effectively instant; the decision calendar has not caught up. Weekly reviews arrive after the question has already cycled through three iterations in three different teams; the alignment meeting covers a question the market moved past yesterday.

Speed gets the headlines. What actually changes is the access. For the first time, a single person has collective intelligence on tap: the aggregated patterns of millions of writers, analysts, researchers, and practitioners, available in seconds. The leader who imagines what becomes possible with that access is the one who wins the next decade. The default move is bolting AI onto the existing workflow; the harder work is rebuilding the workflow around the access.

McKinsey Global Institute's 2023 analysis of generative AI estimates the potential economic value at $2.6 trillion to $4.4 trillion annually across sixty-three business use cases. The same research found that activities absorbing sixty to seventy percent of employees' current time could be significantly augmented or automated by current generative AI. The gap between what AI makes possible and what most organizations have captured is the work of the next decade.

It cuts the other way too. Taking the output and running with it, without questioning it, is how people drive their cars off piers following GPS directions. The AI produces something confident, clean, and typically correct in shape; it will be wrong in specifics that matter, and it will not tell you which specifics those are. It will give you the average answer; it will not flag what is unique about your situation; it will hallucinate the detail you have no independent way to verify. Blind trust at this speed takes you somewhere you did not mean to go, in seconds, with no friction to slow the error.

A 2023 Harvard Business School field experiment with BCG consultants found that on tasks inside AI's capability frontier, consultants using GPT-4 delivered roughly thirty-eight percent higher-quality work than colleagues without it. On tasks outside the frontier, AI-augmented consultants were nineteen percentage points less likely to produce correct answers than those working without AI. The users could not easily tell which side of the frontier they were on. The same tool that lifts performance on the work you understand can sink performance on the work you do not, and the error mode looks like confidence.

The operator discipline is trust but verify: interrogate the output before you pass it on.

What data is this actually built from, and how current? What assumptions are baked in that the AI did not surface? What would a person with twenty years in this specific domain see that an AI would not? Where is this likely wrong in ways I would not catch without looking? What context is missing that would change the answer?

Those questions catch errors, but that is not their main purpose. Each one is a lever that opens new ground. You start with what the AI produced; you probe the edges of it; you discover the assumption it was built on; and in that probing you begin to see what else is possible that the AI, working from averages, could not have proposed. The verification is also where the imagination happens.

This is where the information-gathering leader can still win. The craft of gathering is commoditized; the craft of judging what has been gathered is not; the craft of reimagining what becomes possible when gathering is effectively free is the one that differentiates now. Senior operators have lived context the AI cannot replicate: they know which sources are reliable, which patterns recur, and which questions to ask that do not appear in any prompt template. Paired with AI, that context stops being a career record and becomes a multiplier.

Early-career operators have their own edge, and it is not a smaller one. They are not unlearning a prior workflow; they hit the discipline curve faster. They ask the questions a senior would skip because the senior assumes the answer. And the compounding is on their side: the operator who builds the verification habit three years into a career carries it across every year that follows.

The future of knowledge work is imagination applied to collective intelligence, powered by the curiosity and clarity that keep the car off the pier.

Continue the conversation: I read and reply to reactions on LinkedIn.

Julia Denman is Chief Risk and Audit Officer at Microsoft and a director on The Clorox Company's board. Her book, The Clarity Quotient, publishes early 2027.


