The essential leadership concept behind why so many hyped AI projects are underperforming
AI is no longer just the playground of technologists and futurists. It's in nearly every digital interaction and in the inboxes of C-suite leaders everywhere. But for all the hype, one thing is increasingly clear:
Your AI projects will move only as fast as your organisation’s data governance maturity.
But before you stuff your technology portfolio with new tools, know this: data governance isn't a tech problem. It's a trust problem.
This realisation was affirmed for me in a recent conversation with the brilliant Dr Jen Frahm. We were riffing on the proposition of a large language model (LLM) for change impacts - only to be sobered by the fact that the data is there, but it's not coming to you...
Locked away in silos, held tight by teams, or sitting under layers of compliance and hesitation.
Why, in my opinion? Because to give up data is to give up control. It means exposing your assumptions, your performance, your blind spots, and the mundane work you or your team haven't done. You only share that kind of vulnerability with people you trust.
No trust = no data.
No data = no AI impact.
It doesn’t matter how slick your models are, or how many prompt workshops you’ve run. AI doesn’t just rely on data or user capability - it feeds on the institutional memory, workflows, and assumptions of an organisation.
That makes it political. That makes it personal.
If the foundational conditions aren’t right, your AI effort will stall at the same layer every transformation does: people. More specifically, the behaviours of the leaders. And for the record, I'm not alone in this thinking.
Back to the bottom of the pyramid
The Five Dysfunctions model by Patrick Lencioni remains an accessible and compelling guide for prompting better leadership behaviours - even as those behaviours now play out in digital ecosystems, not just meeting rooms or physical infrastructure.
Here's my proposition for you:
Can we shift our thinking so that AI projects are not technical deployments, but team dynamics exercises in disguise?
My focus on change, leadership and transformation helps me see that behind every struggling algorithm is a series of missed conversations. And under every failed AI pilot is a group of people who weren't brave enough to have a 'crucial conversation' (in the classic definition) about what success looked like.
Let’s get back to the bottom of the Lencioni pyramid—but through an AI lens:
1. Absence of Trust (The root of the root) Every AI use case starts with a plea for data. A common misunderstanding is that it's "the more the better", but more realistically it's "the better, the better": data quality ahead of data quantity. And data quality takes work and resources.
People fundamentally don't give up data when they don't trust the requestor - or the motive. Data exposes the work, and importantly, aligned to Lencioni's explanation of this fundamental layer, exposing the work (or the lack of it) is about being vulnerable.
People will only share that exposure and be vulnerable with those they believe will use it responsibly. And if you're asking them to put in the work to help with data quality, when they don't trust you? You don't need to be an AI Engineer to predict the outcome on that one...
2. Fear of Conflict In high-trust teams, disagreement is respectful, constructive and fuels progress. In low-trust teams, everyone nods and then resists behind the scenes - or worse, actively undermines. If your AI steering group meetings feel too polite (or worse...they are being avoided!), then you’re in trouble. You want the data scientists questioning the business owners, the frontline staff poking holes in assumptions, and the Execs willing to debate the ethics. That’s where the real progress is.
3. Lack of Commitment No trust, no conflict... no commitment. When decisions are made without real debate, nobody really owns them. They just execute the bare minimum. If your AI roadmap feels sluggish, maybe it’s not a capability or funding issue - it’s that nobody actually believes in the decisions being made.
4. Avoidance of Accountability In a world of generative AI, where outputs are probabilistic and evolving, holding people to account becomes fuzzier. But in high-trust teams, people hold themselves to account. Not because of fear, but because of pride in the work. AI initiatives need clear roles, open debriefs / retrospectives, and genuine collaboration on blockers and misalignments.
5. Inattention to Results AI promises outcomes—efficiency, insight, personalisation. But if you’re not measuring results that matter (and sharing them transparently), the effort becomes theatre. High-trust teams are laser-focused on impact. Not vanity metrics, but real business value and real human outcomes.
Putting trust into the technology budget
Having witnessed this up close recently, I am further convinced that trust is what drives a hyper-digital, AI-enabled organisation. It’s the one variable that will determine whether your AI efforts are scalable... or superficial. Speedy...or stymied.
If you've built your leadership muscle on principles like Lencioni's model - heart-led, relationship-oriented, based on open dialogue and mutual respect for agreed standards - this is your moment. But you need to be overt about it. The AI revolution won't wait for trust to form quietly in the background. You need to lead with it, name it, and rebuild it...for the open data sharing and collective ownership of the work ahead. And there will be work! Some of it rather mundane.
Organisational life is clearly changing. But continually cultivating trust is still a must.
...I wonder if there is an AI for that already?
Professional update: I'm back in the market for consulting work on all things strategic change, leadership development and wider transformation. Look forward to chatting soon!