I got scared that AI would take my job. Then I read two things that changed everything.

A few days ago, I had a moment of genuine fear about AI. Not a passing thought — a proper gut-punch of: "At this rate of progress...what if I'm becoming irrelevant?"

It might be low-key embarrassing to admit that...but in context it's also fascinating; I've spent the better part of two decades helping leaders and organisations navigate complex change. I study this stuff. I write about it. I bore people at dinner parties about it.

(Just kidding...I don't get invited to dinner parties anymore)

And here I was — emotionally rattled by a tool...that has no emotions whatsoever.

There's something worth sitting with in that juxtaposition.

But then I read two things that shifted everything about what I thought I "knew" about AI — and I think they're just as important for the leaders and organisations I work with as they were for me.

Nobody wants to hear this...

Courtesy of The Australian, a recent analysis of 164,000 workers — covering over 443 million hours of work across more than 1,100 employers — found something that should stop every AI evangelist in their tracks.

AI isn't reducing workloads. It's intensifying them.

Workers who adopted AI tools saw their time on email and messaging more than double. Their use of business management software rose 94 per cent.

But the time spent on focused, uninterrupted work...the kind required for complex problem-solving, strategy, and creative thinking...fell 9 per cent.

The research captures the dynamic simply: capacity freed up by AI gets immediately repurposed into doing more work. And not necessarily better work.

One researcher described it as "a sense of momentum": AI makes additional tasks feel easy and accessible, so people keep going. It's the organisational equivalent of feeling good mid-workout, so you go harder and longer than ever before.

I've experienced this first hand; shout out to Rovo by Atlassian, which helped me build a complex program of work in Jira in a four-hour whirlwind — work that would typically have taken a fortnight!

If you've been reading my work for a while, you'll recognise the question I posed in Beyond 'Making Your Bed' — the seductive pull of busyness over effectiveness. The research indicates we're doing more...but are we doing what matters?

Then I read something that reframed the whole thing.

George Sivulka, writing for a16z (one of the most influential venture capital firms in the world), made an argument I haven't been able to stop thinking about...because I had already lived the experience. Heck, I built a career around it!

Sivulka opens with a question: AI just made every individual 10x more productive. No company became 10x more valuable as a result. Where did the productivity go?

His answer reaches back to the 1890s.

When electricity arrived, textile mills in New England ripped out their steam engines and installed electric motors. Same machines. New power source.

For thirty years, almost nothing changed in terms of output.

It wasn't until the 1920s — when factories completely redesigned themselves, with assembly lines, individual motors in every piece of equipment, and entirely different roles for workers (quick shout out to Taylorism for all the fellow management nerds reading this) — that electrification produced the returns everyone had expected.

The lesson, as Sivulka puts it: we've swapped the motor. We have not redesigned the factory.

What age are we really living in?

Sivulka draws a distinction that I think is the most important framing for any leader dealing with AI right now: Individual AI versus Institutional AI.

Individual AI makes people more productive. Institutional AI makes organisations more productive. And productive individuals...do not automatically make productive firms.

We have all seen it: without a coordination layer, more productive individuals can create chaos — everyone with their own ChatGPT habits, their own prompting styles, outputs that don't connect to anyone else's. An org chart might exist, but the actual flow of AI-generated work tells a different story entirely.

In truth, my temporary AI panic was a little disingenuous; I recently wrote a piece on why AI projects underperform, using Lencioni's Five Dysfunctions as the lens. The technology is rarely the problem. The organisation is. Organisations are led by people. People require continual investment.

What bridges Individual AI and Institutional AI? Sivulka names it directly: change management.

Needless to say...I like the guy!

He points to the resilience of the (admittedly controversial) Palantir as a case study in what he calls "process engineering" — encoding firm processes into agents and actualising the change management required to put them into action.

A top-tier investment bank chose one platform over a major AI lab specifically because the lab's team didn't understand the domain they were deploying into. The technology was arguably equivalent. The change capability was not.

"Glad your panic attack is over...now mine is just starting..."

A few things I'd offer, from the intersection of these pieces and the work I do every day:

Stop measuring AI adoption. Start measuring AI coordination. The ActivTrak data shows 80 per cent of employees now use AI tools at work. That tells you almost nothing useful. The question is: are those tools pulling in the same direction? Or are you measuring busyness and calling it progress?

Cutting my teeth in mining and manufacturing 20 years ago, the True North for productivity and alignment was "make work visual": the discipline of drawing out processes and work instructions on physical whiteboards "where the work happens". The discipline today is the same, just more digital.

The people problem always precedes the technology problem. Whether it's trust (do your people share enough good data for AI to actually work?), coordination (are your processes designed for AI-augmented work or just AI-adjacent work?), or bias (are your senior leaders using AI to confirm what they already believe?) — every one of these is a human and cultural challenge first.

(Irony: both of the articles behind this piece were sent to me by two of the brightest people I know, each of whom has made a career out of engaging with others. Clearly that skillset is still red hot.)

Redesign the factory in modules, not all at once. Oxford professor Bent Flyvbjerg has spent his career studying why big things fail. His finding: only 0.5% of major projects deliver on time, on budget, and on benefits. The ones that succeed share a common trait — they're built modularly. His question for every transformation: "What's your LEGO?" What's the repeatable unit you can deliver, learn from, and build on?

Organisations that try to redesign everything simultaneously will join the 99.5%. Those that find their LEGO — a team, a process, a workflow — and get that right first, then replicate it, will not.

This applies directly to AI transformation. Don't boil the ocean. Find one place where AI genuinely changes an outcome, make it work properly, document it properly...and then build from there.

A final coherent thought.

Here's what I keep coming back to, and it's the subject of an article I'm already writing.

Sivulka's argument is essentially that organisations need AI that serves their specific capabilities — not generic productivity tools sprayed across the enterprise.

There's a concept from strategy literature called the Coherence Premium — the measurably higher returns earned by companies that identify 3–6 interlocking capabilities, make them genuinely world-class, and align everything to them. The market, it turns out, pays a premium for focus and coherence over breadth.

Just over a year ago I spoke at a Change Management Institute event and declared that this concept had been living rent-free in my head. It still does, and I think it poses the most important strategic question for any leader right now: which of your distinctive capabilities does AI actually supercharge? And are you building those, or are you just adding tools?

More thoughts on this to come.

Back to my fear for a moment.

What I realised, sitting with that gut-punch, is that my fear wasn't really about AI. It was about the familiar anxiety of complex change — What does this mean for me? Am I ready? Do I still have something to offer?

Every leader I've worked with in the last two years has felt some version of that. Most won't say it out loud.

The irony is that an unemotional tool is triggering some of the most profound emotional responses I've seen in organisations. That's signal, not noise.

The leaders who'll navigate this well aren't the ones who suppress the fear or outsource the problem to their IT team.

They're the ones who treat AI adoption the way any complex change deserves to be treated: with honesty about where the organisation actually is, curiosity about what needs to genuinely shift, and the patience to redesign the factory — not just swap the motor.

That, I'm confident about. Great leadership ain't going anywhere!

Grateful to George Sivulka at a16z for the textile mill framework — one of the clearest pieces of thinking I've read on this topic. To Ray A. Smith's reporting in The Australian on the ActivTrak research, which grounded the conversation in data worth taking seriously. And to Bent Flyvbjerg, and the discipline of modularity, which deserves to be in every transformation leader's toolkit.