Jagged Intelligence: Why AI Isn’t a Silver Bullet Yet – And Why That’s Precisely Why You Should Start Now

At the 2026 World Economic Forum in Davos, Demis Hassabis, Nobel laureate and CEO of Google DeepMind, used a term that captures the central paradox of today’s AI systems: “Jagged Intelligence” – uneven capability across different tasks. The term was coined by Andrej Karpathy in 2024 and describes how AI systems are brilliant in some areas, surprisingly weak in others. This pattern explains why AI projects fail when companies expect to “set it and forget it” – and why organizations that learn to work with AI’s strengths AND weaknesses now are building a real competitive advantage. Today’s tools already deliver powerful leverage when used correctly. Those who wait for AI to become “perfect” don’t just miss productivity gains – they miss the learning curve that other companies are already navigating, and will have to start from scratch.

What Hassabis Means by “Jagged Intelligence”

In January 2026, Demis Hassabis spoke with Bloomberg journalist Emily Chang at the World Economic Forum in Davos about the future of artificial intelligence. The conversation covered the expected topics: AGI timelines, robotics, China’s position in the global AI race. But one concept stood out – one far more relevant to businesses than speculation about hypothetical superintelligent machines:

“I call it jagged intelligence – we’re very good at certain things and very poor at other things. And if you want to offload or delegate an entire task to an agent, rather than having what we have today, which are more like assistive programs, you’re going to need a lot more consistency across the board.”

— Demis Hassabis, CEO Google DeepMind, Davos 2026

Today’s AI systems have an uneven capability profile. In some domains, they dramatically outperform humans. In others, they fail in ways that surprise even experts. This is precisely what business leaders need to understand – because those who grasp this pattern can design AI initiatives that actually succeed.

Jagged Intelligence in Practice

If you’ve worked with AI tools like ChatGPT, Claude, or Gemini, you’ve likely experienced this firsthand. On one hand, an AI system can precisely summarize a 50-page report in seconds – a task that would take humans hours to complete. The system understands context, recognizes structure, distills key insights. The results are impressive.

On the other hand, the same system can “hallucinate” when creating original content – inventing facts that sound plausible but are simply wrong. Not because the system is “lying,” but because it lacks the necessary context. Without clear parameters, without relevant background information, AI sometimes produces convincingly written nonsense.

The difference? When summarizing, the AI has everything it needs: the complete text. When generating content, it often lacks the context that humans intuitively bring. This isn’t a random bug – it’s a pattern. And understanding this pattern is the first step toward successful AI adoption.

The Temptation to Wait

A natural response to “Jagged Intelligence” would be: wait until the systems improve. Until they work consistently. Until AGI arrives.

Hassabis estimates there’s roughly a 50 percent chance of achieving AGI – AI systems matching or exceeding all human cognitive capabilities – by 2030. But concluding “let’s just wait until 2030” would be a strategic mistake, for several reasons:

Nobody knows when the consistency problems will be solved. Hassabis himself says it’s “an open question whether a new architecture or new breakthrough is needed.” That could be 2030. Or 2040. Or solved in ways we don’t expect.

Today’s tools already deliver enormous leverage – when used correctly. Those who learn to work with AI’s strengths and weaknesses now are building competencies that will remain relevant with better systems. More importantly: these acquired competencies and experiences will create even greater leverage as AI improves.

Those who wait lose – not because competitors “automate faster,” but because those competitors are building a knowledge and experience advantage in working with AI.

Hassabis puts it directly: “Learning to learn is the most important thing. How quickly can you adapt to new situations, absorb new information using the tools that we have.”

There’s also a psychological factor: the feeling that there’s “still plenty of time” is deceptive. AI developments over the past two years have shown how rapidly the field moves. Those who think they have three more years will discover in three years that others used that time.


What Works Instead: Systematic Enablement

The answer to “Jagged Intelligence” isn’t less AI, and it isn’t waiting – it’s smarter deployment of available AI systems. This starts with several shifts in perspective:

From “AI as Tool” to “AI as Sparring Partner”

Most companies use AI for repetitive tasks: summarizing text, drafting emails, preparing data. That’s legitimate, but it captures only a fraction of the potential.

The real strength of current AI systems lies elsewhere: they’re excellent conversation partners for complex questions. They can challenge assumptions, provide counterarguments, reveal blind spots. Not because they’re “smarter” than humans – but because they think differently and don’t need to navigate political considerations.

A CEO who only uses AI for text optimization is leaving potential on the table. A CEO who uses AI as a sparring partner for strategic decisions actually experiences the leverage that AI offers.

From Individual Users to AI-Capable Teams

The second element is scaling. An executive who personally works with Claude or ChatGPT has an advantage. A company where entire teams are AI-capable has a competitive advantage.

This requires more than tool training. It requires clear guidelines (what can AI be used for, what not?), quality assurance (how is AI output verified?), knowledge sharing (which prompts work, which don’t?), and ultimately a culture where AI use is encouraged rather than considered “cheating.”

Without this systematic approach, exactly the problems Hassabis describes emerge: inconsistent results, loss of trust, disappointed expectations. AI implementation is therefore always change management as well – an aspect we’ve explored in depth in a separate article.

From One-Off Projects to Continuous Learning

AI systems evolve in months, not years. What’s a weakness today may be solved tomorrow. Companies therefore don’t need a one-time “AI implementation” but a continuous learning and development process.

Concretely, this means: regular reviews (what’s working, what isn’t?), experimentation spaces (where can new use cases be tested?), and feedback loops (how do insights flow back to the team?).

Part of the learning process is understanding which tools suit which tasks. When your only tool is a hammer, every problem looks like a nail. Those who understand the spectrum of current AI tools – from language models to image AI to specialized analysis tools – can choose the right tool for each task and build a toolkit precisely tailored to their organization’s needs.

What’s becoming increasingly relevant: tools like Claude Cowork that integrate AI directly into desktop workflows point the direction. Not AI as a separate system, but as an embedded component of daily work.

The Real Question: How Quickly Can Your Organization Learn to Work Productively with AI?

Hassabis was asked what people should do in the face of the AI revolution. His answer was telling: not “which skills to learn” or “which jobs to avoid” – but “Learning to learn. How quickly can you adapt.”

The same applies to organizations. The question isn’t: “Which AI tool should we buy?” The question is: “How quickly can our organization learn to work effectively and productively with these tools?”

Companies that build this capability now will benefit from every future AI development – whether AGI arrives in 2030 or 2040 or never.

Companies that wait won’t know what to do even with perfect tools – and will face a steep learning curve that others have already climbed.

[Infographic: Jagged Intelligence – Understanding the AI Paradox. A capability profile chart, “The Uneven Intelligence Profile of Today’s AI,” rates current systems from high to low across Summarising, Analysing, Sparring, Factual Accuracy, Context Knowledge, Writing Code, and Consistency, with the strategic advice to leverage strengths and mitigate weaknesses. The graphic repeats the Hassabis quote on “learning to learn” and summarises the three shifts in perspective: from tool to sparring partner, from individual users to AI-capable teams, and from one-off projects to continuous learning.]

Conclusion: Working With the Unevenness, Not Against It

“Jagged Intelligence” isn’t a flaw to be overcome. It’s the current reality – a reality we can work with effectively once we’re aware of it.

And paradoxically, this is precisely where the opportunity lies. While many companies wait for the “perfect” AI assistant, you can start now: deliberately leveraging current systems’ strengths, knowing and mitigating their weaknesses, enabling your team to distinguish between the two, building a learning culture that grows stronger with each new development.

This isn’t a revolutionary “Big Bang” – it’s systematic AI implementation. But this work will determine who counts among the AI winners in three years – and who is still waiting.

Source: Bloomberg’s full interview with Demis Hassabis at the 2026 World Economic Forum in Davos.

Want to learn more about your AI journey?

Sound interesting? Let’s discuss how we can support you on your AI journey.

Book a no-obligation strategy call here.