In our article “AI First Means: New Skills for Everyone”, we explored the additional competencies that emerge when people work productively with AI: prioritisation, quality control, tool selection, and above all, maintaining oversight. In short: AI First means that everyone needs to acquire new skills to collaborate effectively with one or even multiple AI systems in a professional environment.
One point we didn’t address there: AI First does not mean that existing human capabilities become obsolete overnight or are entirely replaced by AI. Quite the opposite. Those who deploy AI unreflectively and rely on it exclusively risk a gradual loss of competence – a phenomenon known as deskilling.
From our perspective, the central point is this: responsibility for the quality of results always remains with humans. AI delivers outputs – but the interpretation, critical review, and final decision are and remain human tasks. For this to work, we must continue to be capable of fulfilling these tasks in the future.
Deskilling or the Calculator Problem
Most of us know this phenomenon from everyday life: mental arithmetic. Since the first calculator – and later smartphones – our ability to quickly estimate figures in our heads has noticeably declined. Anyone asked to compare two percentage values spontaneously or make a rough projection in a meeting today instinctively reaches for their phone. The ability to intuitively gauge orders of magnitude has grown rusty for many of us.
This isn’t individual failure. It’s a fundamental principle of human cognition: what we don’t use, atrophies. What we train regularly stays sharp – and may even improve.
This same principle applies to working with AI systems – except that the consequences in a professional context can be far more significant than a miscalculated sum.
What We Observe in Ourselves When Working with AI
At NordAGI, we work with various AI tools daily. What we’ve noticed: reaching for AI is remarkably easy given its availability and the quality of its outputs. Unfortunately, this means certain thought processes get replaced rather than purposefully supported.
There’s more to it: AI results are only ever as good as the prompt that guides them. A superficial prompt leads to a superficial answer. Those who provide too little context out of convenience or ask the wrong questions receive mediocre results at best – and in the worst case, don’t even realise it.
Example 1: Research and Synthesis
Anyone who regularly needs to get an overview of extensive studies, whitepapers, or technical articles knows the temptation: load everything into NotebookLM, have it summarised, carry on working. Efficient? Yes. But complete?
The question is: do we actually grasp what the authors want to communicate in a nuanced way? Do we pick up the subtle undertones – what’s written between the lines? Or do we receive a sterile summary that hits the core but misses those quieter signals?
There’s a difference between deconstruction and compression. Those who only have content summarised by AI deconstruct it – reducing it to extractable facts. Those who work through content themselves compress it – and can later expand it again to understand what the authors really meant to express.
This doesn’t mean AI summaries are fundamentally unsuitable. For an initial overview, for prioritising sources, or for quickly categorising large volumes of documents, they’re excellent. But for content that will feed into discussions, decisions, or your own publications, it’s worth skimming the originals. At minimum, you should have read and understood the central passages yourself – if only to be able to argue at eye level in a discussion.
Example 2: Numbers and Gut Feeling
A second example concerns handling data. Imagine you receive an extensive table with financial metrics, market data, or project KPIs. The quick route: load the table into an AI tool and have a summary generated.
This works – until it doesn’t.
What gets lost is the gut feeling that emerges when you look at numbers yourself. That intuitive pause when something doesn’t add up. When a metric is off by an order of magnitude. When ratios don’t seem plausible.
Here too: AI does what you ask it to. “Summarise this” is a different brief than “Check whether these numbers are plausible”. And even if the AI could detect inconsistencies – would we still recognise them if we permanently outsource this verification?
Ultimate responsibility for accuracy remains with humans. We can only bear this responsibility if we retain the ability to verify.
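The verification described above can also be partly encoded rather than left entirely to gut feeling. A minimal sketch in Python of such plausibility checks – the column names (“revenue”, “cost”, “margin_pct”) and thresholds are illustrative assumptions, not taken from any specific dataset:

```python
# Minimal plausibility checks for a table of financial metrics.
# Column names and thresholds are illustrative assumptions only.

def plausibility_flags(rows):
    """Return human-readable warnings for rows whose numbers don't add up."""
    flags = []
    for i, row in enumerate(rows):
        revenue, cost, margin_pct = row["revenue"], row["cost"], row["margin_pct"]
        # Ratio check: the reported margin should match revenue and cost.
        if revenue > 0:
            implied = (revenue - cost) / revenue * 100
            if abs(implied - margin_pct) > 1.0:
                flags.append(f"row {i}: margin {margin_pct}% vs implied {implied:.1f}%")
        # Order-of-magnitude check: cost wildly exceeding revenue is a red flag.
        if cost > 10 * revenue:
            flags.append(f"row {i}: cost is more than 10x revenue")
    return flags

data = [
    {"revenue": 1200, "cost": 900, "margin_pct": 25.0},  # internally consistent
    {"revenue": 1000, "cost": 400, "margin_pct": 6.0},   # implied margin is 60%
]
print(plausibility_flags(data))  # → ["row 1: margin 6.0% vs implied 60.0%"]
```

Checks like these don’t replace looking at the numbers yourself – they are a complement, a codified version of the intuitive pause described above.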
The Real Danger of AI Adoption: An Imperceptible Loss of Capabilities
The treacherous thing about deskilling is its imperceptible nature. It doesn’t happen overnight. It happens in small steps, each of which appears rational on its own:
- Text summaries: Why should I read this study myself when AI can summarise it in two minutes?
- Metrics overview: Why should I work through an extensive table when the tool gives me the key insights?
- Generating text: Why should I write this text myself when AI produces a usable draft?
Each of these approaches is rationalisable in isolation. Combined, they lead to certain cognitive abilities – concentrated reading, critical numerical understanding, independent formulation – being trained less. Abilities that are already under pressure from social media and constant connectivity. And what gets trained less, atrophies.
The Real Questions When Using AI Regularly
The question isn’t just “Could I do this without AI?” The more relevant questions are:
How much do I trust the AI’s results? Do I trust that the AI has performed the critical thinking that I could have done? Or should have?
Have I checked the sources? Tools like NotebookLM show references in the original documents. Do I use this option to verify whether the summary captures the essence – or do I blindly continue working with the result?
Who’s actually doing the thinking here? AI tools assist with thinking. But the thinking itself – the interpretation, the evaluation, the conclusion – must come from humans. Otherwise, everyone would ultimately be replaceable. Inspiration, original thinking, the unexpected connection: these don’t yet come from machines.
These questions lead to an uncomfortable realisation: those who permanently delegate the thinking work don’t just lose abilities – they lose what makes them valuable as knowledge workers.
Upskilling and Deskilling: Two Sides of the Same Coin
The solution isn’t to abandon AI. That would be neither realistic nor sensible. The solution is to design AI usage consciously.
Concretely, this means:
Become aware of your core competencies. Which abilities are central to your role, your expertise, your value creation? These abilities should be actively maintained – even if AI could theoretically take them over.
Distinguish between delegation and augmentation. Some tasks can be entirely handed over to AI. Others should be worked on together with AI, preserving your own cognitive contribution. The decision about what belongs where isn’t technical – it’s strategic.
Preserve the capacity for deep work. Occasionally write a text without AI support. Conduct an analysis without tool assistance. Consciously make time for concentrated, deep work. Not on principle, but as training – to maintain the ability to deliver high-quality results even without technological support.
Conclusion
AI First requires new skills for everyone. But equally important is the willingness to keep learning continuously – and not just in using new tools.
Factual knowledge can be retrieved faster today than ever before. What’s gaining value: the ability to learn, to experiment, to stay curious. Knowing how to explore new topics, how to contextualise information, how to question critically. This meta-competence can only be delegated to AI to a limited extent.
The more routine tasks shift to AI, the more valuable the distinctly human abilities become: critical judgement, contextual understanding, the sense for inconsistencies – and the ability to think clearly even when no tool is available.
The challenge is to develop both simultaneously: building new competencies for working with AI while maintaining existing core competencies. This requires awareness of the risks of deskilling – and the willingness to occasionally resist convenience.
Would you like to strategically implement AI in your daily business operations?
Do you handle a multitude of tasks in your company’s day-to-day operations and wonder which of them could be simplified with targeted AI support? In a no-obligation strategy session, we’d be happy to introduce you to the NordAGI approach.