
How to Use AI Without Losing Your Edge

AI can speed up change work, but it can also quietly erode the thinking behind it. New research suggests overreliance on AI creates “cognitive debt,” weakening judgment, recall, and sensemaking if we’re not intentional about how we use it.

This is your brain on AI.
TL;DR: A new study suggests that heavy reliance on AI for writing reduces cognitive engagement and memory retention. For change practitioners, this matters. AI can accelerate work, but if misused, it quietly erodes judgment, synthesis, and ownership—the very skills our profession depends on. The solution isn’t using less AI. It’s using it differently.

The Convenience Trap

A recent research paper made waves in AI and academic circles for an uncomfortable reason.

Researchers asked participants to write short essays under three conditions:

  • with no assistance
  • using a search engine
  • using a large language model (LLM)

They didn’t just measure output quality. They measured brain activity.

The finding was striking:
Participants who relied on AI showed lower cognitive engagement, weaker recall of what they wrote, and less evidence of deep processing. The researchers described this as an accumulation of “cognitive debt.”

Not burnout.
Not laziness.
Debt.

Work got done. But understanding didn’t accumulate.

If you work in change, transformation, or strategy, that should make you uncomfortable.

Why This Matters for Change Practitioners

Change work is not mechanical work.

It relies on:

  • sensemaking
  • pattern recognition
  • contextual judgment
  • narrative construction
  • political and emotional intelligence

These are thinking-first skills.

And they are exactly the skills most vulnerable to degradation when AI is used as a shortcut instead of a partner.

The risk isn’t that AI will replace practitioners.

The risk is that practitioners will slowly lose the very cognitive muscle that makes them valuable.

Cognitive Debt, Explained Simply

Think of cognitive debt the way we think of technical debt.

You can:

  • move faster now
  • skip some hard thinking
  • let a tool fill in the gaps

But later, you pay for it:

  • weaker intuition
  • poorer recall
  • less confidence defending your work
  • reduced ability to improvise under pressure

Over time, this compounds.

The research suggests that when people start with AI rather than finish with it, their brains engage less deeply with the problem itself.

And in change work, the problem is the work.

Where Practitioners Are Most at Risk

Based on how most people are using AI today, the danger zones are predictable.

1. First-Draft Thinking

Letting AI generate:

  • stakeholder analyses
  • change narratives
  • comms strategies
  • impact assessments

…before you’ve done your own synthesis.

This feels efficient.
It is also where the deepest cognitive outsourcing happens.

2. Pattern Substitution

AI is very good at producing plausible patterns.

It is not good at knowing:

  • what matters politically
  • what’s sensitive culturally
  • what has already failed in this organization

Those insights come from lived experience and judgment, not prompts.

3. False Fluency

When AI writes cleanly, people often mistake polish for clarity.

But clarity without comprehension is fragile.

You see it later when:

  • you can’t explain your own deck
  • your logic falls apart under questioning
  • stakeholders push back and you don’t know why

The Right Way to Use AI in Change Work

The answer is not “use AI less.”

It’s use AI later.

Here’s a model that works.

Step 1: Think First (Analog Brain On)

Before touching AI:

  • Write the ugly version
  • Sketch the logic
  • Bullet your assumptions
  • Note what you don’t understand yet

This is where cognition happens.

Messy is good.

Step 2: Use AI as a Challenger, Not an Author

Now bring in AI to:

  • pressure-test your thinking
  • identify gaps
  • suggest alternative framings
  • surface counterarguments

Prompt it like a sparring partner, not a ghostwriter.

Example:

“Here’s my draft thinking. Where is this weak, incomplete, or overly simplistic?”

Step 3: Reclaim Ownership

Before anything goes out:

  • rewrite in your own voice
  • re-sequence the logic
  • remove anything you don’t fully understand
  • ask: Could I explain this without notes?

If the answer is no, you’re borrowing intelligence instead of building it.

A Simple Rule of Thumb

If AI saves you thinking time, you’re probably doing it wrong.
If AI sharpens your thinking, you’re doing it right.

The goal is not speed.
The goal is better judgment per unit of effort.

What This Means for the Future of the Profession

Change practitioners who thrive in the AI era will:

  • Think before they prompt
  • Use AI to stress-test, not substitute
  • Build personal frameworks instead of borrowing generic ones
  • Maintain strong narrative and synthesis skills
  • Treat cognition as a professional asset, not a cost center

Those who don’t may still produce deliverables—but they’ll struggle when things get messy, political, or ambiguous.

And change work is always messy.

Final Thought

AI is not making us less capable.

But it is revealing who was thinking deeply to begin with.

If you want to future-proof your practice, the answer isn’t resisting AI. It’s protecting the part of your work that only a thinking human can do.

ChangeGuild: Power to the Practitioner™

Now What?

  • Think before you prompt.
    Even a rough outline or a few handwritten notes forces your brain to engage with the problem. If you start with AI, you skip the cognitive work that builds judgment. Use the tool after you’ve formed a point of view, not instead of one.
  • Use AI as a challenger, not a creator.
    The most valuable prompts ask what you missed, what assumptions you’re making, or how your thinking could fail. When AI generates the work for you, learning stops. When it pushes back on your thinking, learning accelerates.
  • Pressure-test your own understanding.
    Before sharing anything AI-assisted, ask yourself if you could explain it clearly without notes. If you can’t, the insight isn’t yours yet. That’s a signal to slow down, not ship faster.
  • Practice unaided thinking on purpose.
    Set aside time where AI is off and thinking is manual. Write badly. Sketch ideas. Talk through problems. This isn’t inefficiency—it’s maintenance. Cognitive strength fades when it isn’t exercised.
  • Protect what actually makes you valuable.
    In change work, value comes from judgment, framing, and sensemaking—not speed or volume. AI can amplify those skills, but only if you keep them sharp. Otherwise, it quietly replaces them.

Frequently Asked Questions

What is “cognitive debt” in the context of AI?
Cognitive debt refers to the mental cost of repeatedly outsourcing thinking to tools like AI. When people rely on AI to generate ideas, structure arguments, or do initial reasoning, they often retain less understanding and build weaker mental models over time. The work gets done, but the learning doesn’t accumulate.

Is using AI making people less intelligent?
Not inherently. The research suggests the issue isn’t AI itself, but how it’s used. When AI replaces early-stage thinking, it reduces engagement and recall. When it’s used to challenge or refine thinking, it can actually deepen understanding.

Should change practitioners stop using AI altogether?
No. That would be unrealistic and counterproductive. The goal is not avoidance, but intentional use. AI works best as a thinking partner, not a substitute for judgment, sensemaking, or experience.

What kinds of tasks are most at risk of cognitive debt?
Tasks that involve synthesis, framing, or decision-making are the most vulnerable. Examples include stakeholder analysis, change narratives, strategy documents, and executive communications. These require understanding, not just output.

How can I tell if I’m relying too much on AI?
A simple test: if you struggle to explain or defend the work without looking at it, you probably didn’t internalize the thinking. That’s a sign the tool did more of the work than you did.

Does this mean AI reduces productivity in the long run?
Not necessarily. AI can dramatically improve efficiency. The risk is when efficiency replaces comprehension. The most effective practitioners use AI to accelerate insight, not bypass it.

What’s the safest way to use AI in change work?
Start with your own thinking. Use AI to test, challenge, or expand it. Then take ownership of the final output. If you can explain it clearly without the tool, you’re using AI the right way.

