The Organizational Debt Collector

AI didn't create your organization's knowledge problem. It just sent the bill. The practitioners best equipped to fix it have been doing this work for years — without naming it, and without charging for it at the level it deserves.

TL;DR: AI adoption is failing in large enterprises because organizations have spent years accumulating a different kind of debt — undocumented decisions, tribal knowledge, hollowed-out organizational health functions, and documentation graveyards. AI can't infer what humans always could. The debt is being called in. And the practitioners best equipped to help organizations pay it down have been doing fragments of this work for their entire careers — without naming it, and without charging for it at the level it deserves.

AI adoption is failing in large enterprises. Not because the technology doesn't work. Because the organizations deploying it don't know themselves well enough to tell it what they want.

Nobody documented how decisions actually got made. Process flows lived in the heads of the three people who had been there long enough to know. Change management happened at the project level, which meant every initiative competed for the same pool of attention without anyone tracking what the cumulative load was doing to the organization underneath. The function that used to ask the hard questions about organizational health got absorbed into HR and repositioned as a compliance operation. And the documentation that did exist — the lessons learned, the process maps, the decision logs — got filed somewhere nobody could find it and left to rot.

The gaps were always there. They were survivable when humans were the only ones who had to navigate them. Humans infer. They ask around. They remember what happened last time. They know, without being told, that when this company says it values speed and quality equally, it means speed.

AI cannot infer. And that changes everything.

The Progression Nobody Finished

To understand why, it helps to know where enterprise AI has been — and where it stalled.

The AI and technology community has been working through a predictable maturation arc — one that most enterprises entered with confidence and are now somewhere in the middle of, wondering why the results don't match the promise.

The first phase was about craft. Getting useful output from a model meant learning how to ask well — structuring instructions, iterating on phrasing, developing the personal skill of working a prompt until the response was actually usable. That value stayed with the individual. You got better at it. Your results improved.

The second phase was about architecture. Serious enterprise deployments stopped thinking about individual prompts and started thinking about the information environment surrounding the system — what knowledge it could access, how that knowledge was structured, whether the right context was in the right place at the right time. This is where most mature AI programs sit today.

The third phase is where deployments keep breaking. It's the question nobody built an answer for: what does this organization actually want? Not the stated goals in the strategy deck. The real tradeoffs. When speed conflicts with quality, which one wins here? When efficiency conflicts with relationship, what does this organization actually choose? What are the decision boundaries that a human employee absorbs over years of watching how things actually get done?

That knowledge has to be explicit now. The system can't infer it. It can't ask around. It can't draw on institutional memory it was never given. When it hits the gap, it doesn't adapt. It fails — or worse, it optimizes for the wrong thing with perfect efficiency.
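Made concrete, "explicit organizational intent" can be as simple as structured records a system queries instead of infers. A minimal illustrative sketch, using the speed-versus-quality example above; the `Tradeoff` record, `INTENT` list, and `resolve` helper are hypothetical names for illustration, not any real product's API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Tradeoff:
    """One explicit decision boundary: when two values conflict,
    which one wins, and in what context the ruling applies."""
    value_a: str
    value_b: str
    winner: str
    context: str

# Hypothetical intent records for the tradeoffs named above.
INTENT: List[Tradeoff] = [
    Tradeoff("speed", "quality", winner="speed",
             context="customer-facing releases"),
    Tradeoff("efficiency", "relationship", winner="relationship",
             context="key-account escalations"),
]

def resolve(a: str, b: str, context: str) -> Optional[str]:
    """Return the documented winner for a value conflict, or None.

    None is the intent vacuum: a human employee would infer or ask
    around; an automated system has nothing to fall back on.
    """
    for t in INTENT:
        if {t.value_a, t.value_b} == {a, b} and t.context == context:
            return t.winner
    return None
```

The point of the sketch is the `None` branch: every conflict the organization never wrote down is a place where a human would improvise and a system simply stalls or guesses.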

What the System Finds When It Looks

Here's what an AI system encounters when it gets deployed into a typical large enterprise today:

Change management has been running at the project level for years. Each initiative has its own stakeholder map, its own communication plan, its own adoption metrics. Nobody has ever mapped what all of those initiatives are doing simultaneously to the organization's capacity for change. So the system has no way to know that the workforce it's being asked to help is already at the edge of what it can absorb. It can't read exhaustion. It can't read the accumulated weight of the fourth major transformation in three years.

OD (organizational development) isn't in the room. The function that used to do this work — mapping culture, surfacing the gap between stated values and actual behavior, asking leadership the questions nobody else would ask — moved to the consultant's invoice years ago. The organizations that could afford it got it. Everyone else got a business partner with a checklist. So when the AI system needs to understand how this organization actually makes decisions under pressure, there's no internal function positioned to answer. The knowledge exists. It's just not in any form the system can use.

The people who knew things are gone. Mass layoffs over the past several years didn't just reduce headcount. They eliminated institutional memory. The employees who remained learned, rationally, that information is a form of job security. Process knowledge gets hoarded. Decision rights go undocumented. The organizational chart says one thing about who approves what. The reality is something else entirely, known only to the people currently in those roles.

And the documentation that does exist is nearly unusable. SharePoint graveyards. Lessons learned that were captured once and never retrieved. RAID logs from projects that ended three years ago. Retros that happened in a meeting and died there. The organizational record is a ruin.

The AI system arrives and needs explicit, structured, machine-readable organizational intent. It finds a debt that has been accumulating for years and was never called in — until now.

The Debt Was Always There

The intent vacuum that's breaking AI deployments is not a new problem. It is the same problem change management practitioners have been working around, papering over, and absorbing for the entire arc of the modern discipline.

Every stakeholder interview where you asked an executive to define success and got five different answers from five different people — that was organizational debt.

Every impact assessment where you discovered that nobody had mapped decision rights for the process you were trying to change — that was organizational debt.

Every change project that launched without a portfolio view of what else was landing on the same population at the same time — that was organizational debt compounding.

Every time a skilled practitioner spent the first month of an engagement just figuring out how the organization actually worked versus how the org chart said it worked — that was organizational debt, and the practitioner was paying it down with their own time and expertise, usually without naming it as such.

The difference now is that AI systems can't absorb that debt the way human practitioners can. They can't infer. They can't ask around. They can't draw on twenty years of pattern recognition to fill in what nobody wrote down. When the system hits the gap, it doesn't adapt. It fails, or worse — it optimizes for the wrong thing with perfect efficiency.

The debt is being called in. The question is who's equipped to help organizations pay it.

The Work You've Already Been Doing

Change practitioners have been doing fragments of intent engineering for their entire careers. They just haven't been calling it that, and they haven't been selling it at the level it deserves.

Stakeholder analysis isn't just a communication planning tool. It's organizational mapping — who holds real authority versus formal authority, where resistance will come from and why, what the organization is actually afraid of underneath the stated concerns. That's intent-relevant knowledge. It describes how the organization makes decisions under pressure.

Impact assessment isn't just a checklist for who gets trained. At its best, it surfaces the gap between how a process is documented and how it actually runs — the workarounds, the judgment calls, the undocumented exceptions that keep the system functional. That's exactly the tribal knowledge that has to become explicit for AI systems to operate safely.

Change readiness work — the real version, not the survey — requires practitioners to develop a working model of the organization's actual capacity, culture, and constraints. That model is organizational intent, partially assembled.

The practitioners who stayed close to this work — who never let the templates fully replace the thinking — have been building something more valuable than they've been charging for. The market just didn't have a name for it. It does now.

Intent engineering is organizational archaeology with a business case attached. It's the work of making what organizations know about themselves explicit, documented, and usable — by humans and, increasingly, by the systems working alongside them.

The Revaluation

The commoditization of change management hit the template-sellers first. If your value proposition was a methodology, a framework, a deck that looked the same at every client, that was always going to compress. The templates got cheaper. Then they got free. Then AI started generating them on demand.

What didn't compress was the capability to walk into an organization and understand it — really understand it — faster and more clearly than the people who work there every day. To surface the gaps between the stated strategy and the operational reality. To make the implicit explicit. To ask the question that stops the room: when your two top priorities conflict, which one actually wins?

That capability is not a methodology. It can't be templated. It requires years of organizational pattern recognition, a tolerance for ambiguity, and the specific skill of getting people to say out loud what they've only ever thought.

It is also, precisely, what AI adoption now requires at scale.

Organizations are about to spend significant money on AI systems that will fail — not because the technology doesn't work, but because they've never done the organizational work that would make the technology safe to deploy. The intent vacuum is real. The debt is real. And the practitioners who can help pay it down are already in the building. They just stopped charging for the most valuable thing they do.

ChangeGuild: Power to the Practitioner™
