Feb 22, 2026

AI Isn’t a Hammer. That’s the Problem

Morten Jensen
7 minute read


A Guardian column this week asks whether AI dependency is eroding our ability to think. It’s the right question — but the answer treats AI like just another tool. It isn’t.


Today’s Guardian published a letter [1] from someone worried about their partner’s AI use. He’s in the top 0.3% of ChatGPT users worldwide, has ADHD and uses AI for everything — work, life, decisions — even when a more straightforward option would do the job better. She’s concerned he can’t think for himself anymore.

The expert advice is thoughtful. Ask what gaps AI is filling. Recognise the dependency compassionately. Find the underlying anxiety.

It’s not bad advice. But it doesn’t go far enough.

Where the article falls short

The column treats AI the way psychology might treat any problematic dependency: identify the emotional need, address the root cause, restore healthy functioning. It’s a reasonable approach, but it assumes AI is something passive that you develop a relationship with, the way you might with any other technology or habit. AI isn’t like those technologies. A search engine doesn’t learn your patterns and adapt to your thinking style. A spreadsheet doesn’t draft your strategy or challenge your assumptions. And neither of them writes code that mostly works but occasionally fails in ways that require both deep expertise and critical thinking to catch.

AI — particularly the conversational AI that hundreds of millions of people now use daily — is not a tool in any conventional sense. It responds, adapts, remembers your preferences and gets better at anticipating what you need. It sits somewhere between tool, colleague and advisor without being any of those things. And the trajectory is towards it becoming more capable and more embedded in how we work — not less.

The article asks whether the boyfriend’s dependency is a symptom of anxiety. That may be part of it. But I think the more useful question is: in a world where this kind of cognitive partnership is available to everyone, what does it mean to maintain your own expertise — and your own agency? And whose responsibility is it to figure that out?

This is bigger than individual vulnerability

It would be easy to read that article and think this is a niche problem — something that affects certain personality types or people with specific challenges. But every knowledge worker I know — myself included — is negotiating some version of this right now.

The question isn’t whether AI is useful. It obviously is. The question is what happens to our capabilities over time when we routinely delegate thinking to a system that can do it faster. And what happens to the collective body of knowledge and critical thinking when millions of people do this simultaneously.

The research on this is still young, but it is no longer speculative [3][4].

In January 2026 Anthropic (the company behind Claude, which we use extensively) published a randomised controlled trial [2] with software developers learning a new programming library with and without AI assistance. Developers who used AI scored 17% lower on comprehension tests. On debugging, the gap was wider still. And they didn’t finish significantly faster.

What made the study interesting was not the headline finding but the detail underneath. The researchers identified six distinct patterns of AI interaction and three of them preserved learning outcomes. The difference wasn’t whether developers used AI. It was how. Those who asked for explanations, posed conceptual questions or wrote their own code first and then sought understanding kept their skills. Those who delegated the thinking — effectively saying “just do it for me” — learned almost nothing.

Interestingly, the broader research suggests that higher education and professional experience offer some protection — older and more educated users tend to maintain deeper thinking habits even with heavy AI use [4]. But protection is not immunity.

Researchers have described this as the “exoskeleton” effect [2]. AI enhances your performance while you’re wearing it. Take it off and you may find you’re weaker than before you put it on.

For anyone in a technical or advisory profession, this is worth paying attention to. If I’m reviewing AI-generated infrastructure designs but my own ability to reason about those architectures has quietly atrophied, I’m not augmented. I’m exposed. And so are my clients.

The relationship question

When psychologists discuss AI dependency they tend to reach for existing frameworks: behavioural addiction, cognitive offloading, anxiety-driven coping. These are useful lenses but they were designed for technologies that are fundamentally passive — things that deliver a stimulus and wait for you to come back.

AI doesn’t wait passively. It engages. It remembers. It adapts. Research [5] suggests it can fulfil basic psychological needs — competence, autonomy and even a form of relatedness — while potentially undermining the very capacities it appears to support. One framework describes a continuum from “healthy scaffolding” where AI promotes growth through to “risky overload” where it begins to dictate rather than facilitate. But even that understates the challenge. Training wheels are inert. They don’t get better at anticipating where you want to go. They don’t make cycling without them feel unnecessarily difficult by comparison.

We need better mental models for this. Not because the existing psychology is wrong but because the thing we’re relating to is genuinely new.

What I’m doing about it

I don’t have a framework to sell. I have a set of questions I’ve started asking myself.

What do I want to remain good at? Not everything — that’s neither possible nor necessary. But there are skills and capabilities that define my professional value. Deep technical reasoning, architectural thinking, the ability to see what a client actually needs beneath what they’re asking for. If I let AI handle those routinely, I’m trading long-term capability for short-term efficiency.

Am I engaging or delegating? The Anthropic study [2] suggests this is the critical variable. When I use AI as a thinking partner — challenging my reasoning, explaining tradeoffs, stress-testing assumptions — the interaction builds capability. When I use it as a doing machine — “just write the code” or “just draft the email” — it may erode it. The distinction feels small in the moment but compounds over time.

What happens if nobody does the hard thinking anymore? AI is trained on the accumulated output of people who struggled with problems and developed expertise the slow way. But that body of knowledge was never neutral — it carries the biases, cultural assumptions and blind spots of the people who produced it. Those distortions have historically been corrected by critical thinking: questioning received wisdom, noticing what’s missing. If we delegate that work to AI, we don’t just risk the knowledge getting thinner. We risk losing the capacity to correct what’s already wrong with it.

The conversation we should be having

The Guardian column ends by suggesting the boyfriend’s AI use is probably driven by anxiety and that the root cause needs to be found. That may be true for him. But the more important conversation isn’t about individual behaviour. It’s about a collective challenge that every professional, every business and every educator is now facing.

AI is not a hammer. It’s not a crutch. It’s a new kind of cognitive relationship. And we are all — right now — establishing the habits and patterns that will determine whether it makes us more capable or less.

The research suggests the answer depends almost entirely on how intentionally we approach it. Nobody else is going to manage this for us. Each of us needs to decide: which skills do I protect? Where do I insist on doing the hard work myself? And where do I gratefully hand things over, knowing the tradeoff I’m making?

I don’t think the answers are obvious yet. But not asking the questions is probably the worst place to start.


Morten Jensen is the Managing Director of Virtuability, an AWS consulting firm specialising in cloud architecture and AI integration for enterprise clients. He has been a regular AI user since 2023.

References:

[1] Barbieri, A., “I’m worried my boyfriend’s use of AI is affecting his ability to think for himself” (The Guardian, February 2026)

[2] Shen & Tamkin, “How AI Impacts Skill Formation” (arXiv, January 2026). See also Anthropic’s summary.

[3] Kim, “From Algorithm Aversion to AI Dependence: Deskilling, Upskilling, and Emerging Addictions in the GenAI Age” (Consumer Psychology Review, 2026)

[4] Gerlich, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” (Societies, 2025)

[5] “Cognitive Offloading or Cognitive Overload? How AI Alters the Mental Architecture of Coping” (Frontiers in Psychology, 2025)
