Why most business AI initiatives stall


Most organisations aren’t failing at AI - they’re simply discovering how hard it is to move from experimentation to real impact. A recent MIT study shows that while AI pilots are everywhere, only a small fraction of business AI tools ever make it into sustained use. The reason isn’t lack of ambition or technology, but the fact that many tools don’t learn, don’t retain context, and don’t embed into real workflows. This article explores why AI adoption feels so confusing right now, what actually differentiates systems that stick, and why understanding the questions people ask can be more strategically valuable than the answers AI generates.

If you’re trying to make sense of AI in your organisation right now, you’re not behind - you’re experiencing what most businesses are.

A recent MIT NANDA report, “The GenAI Divide: State of AI in Business 2025”, puts hard numbers behind what many leaders quietly feel: AI experimentation is everywhere, but real, sustained business impact is rare.

In fact, the report finds that only around 5% of task-specific, embedded AI tools make it into long-term production use.

This gap between experimentation and real value isn’t a failure of ambition or intelligence. It’s a sign that most AI tools were never designed to become real organisational systems in the first place.

Why AI feels inevitable - and yet underwhelming

Right now, many organisations are:

  • Running AI pilots or proofs of concept.

  • Sitting through endless demos.

  • Watching competitors announce “AI initiatives”.

  • Feeling pressure to act, without clarity on what actually works.

The result is often confusion rather than confidence.

According to MIT, this isn’t because the underlying models aren’t powerful enough. The core problems are more structural.

Most AI tools struggle to scale because they:

  • Don’t retain organisational context.

  • Don’t learn from real usage.

  • Aren’t embedded into day-to-day workflows.

  • Don’t clearly tie to accountable outcomes.

They can answer questions - sometimes impressively - but they rarely become systems organisations are comfortable relying on.

The real differentiator isn’t intelligence - it’s learning

One of the most important (and overlooked) findings in the MIT report is this:

AI tools that don’t learn over time almost always stall.

Many tools generate responses, but:

  • They don’t remember what users asked previously.

  • They don’t adapt to different roles, locations, or domains.

  • They don’t improve as understanding deepens.

This matters most in complex environments - regulation, compliance, safety, standards, policy - where context and accuracy matter more than creativity.

In these settings, what organisations actually need are learning systems, not just AI interfaces.
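
To make that distinction concrete, here is a minimal, hypothetical sketch (not something described in the MIT report) of what "retaining context" can mean in practice: each user's role, domain, and recent questions are kept and folded into the next prompt, rather than every question starting from scratch. The class and field names are assumptions chosen for readability, not a reference design.

```python
from dataclasses import dataclass, field


@dataclass
class UserContext:
    """Hypothetical per-user context: role, domain, and prior questions."""
    role: str
    domain: str
    history: list[str] = field(default_factory=list)


class LearningAssistant:
    """Minimal sketch of an assistant that carries context between
    interactions instead of treating every question as a blank slate."""

    def __init__(self) -> None:
        self._contexts: dict[str, UserContext] = {}

    def register(self, user_id: str, role: str, domain: str) -> None:
        self._contexts[user_id] = UserContext(role=role, domain=domain)

    def build_prompt(self, user_id: str, question: str) -> str:
        ctx = self._contexts[user_id]
        previous = "; ".join(ctx.history[-3:]) or "none yet"
        ctx.history.append(question)  # retained for later interactions, not discarded
        return (
            f"Role: {ctx.role} | Domain: {ctx.domain}\n"
            f"Previous questions: {previous}\n"
            f"Current question: {question}"
        )


assistant = LearningAssistant()
assistant.register("u42", role="compliance officer", domain="workplace safety standards")
assistant.build_prompt("u42", "Which clauses changed in the latest revision?")
print(assistant.build_prompt("u42", "Does that affect our audit checklist?"))
```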

Why leaders see the value - but adoption still stalls

The report highlights a pattern that will feel familiar to many organisations:

  • Executives understand the strategic potential of AI.

  • Decisions are delegated for execution.

  • Momentum slows or stops.

This isn’t about capability. It’s about incentives.

The middle layers tasked with execution are often:

  • Rewarded for stability, not experimentation.

  • Wary of introducing new systems.

  • Already overloaded with competing priorities.

As a result, AI initiatives that address strategic issues can lose momentum unless they are clearly low-risk, low-friction, and well-aligned with existing workflows.

This is also why the report finds that AI solutions introduced via trusted referrals and partnerships succeed far more often than those sold cold.

A shift in thinking: why questions matter more than answers

Most AI discussions focus on answers: How accurate are they? How fast? How impressive?

But the organisations seeing real value are paying attention to something else entirely:

The questions.

Over time, the questions users ask reveal:

  • What people actually care about.

  • Where they are confused.

  • What knowledge is missing or poorly communicated.

  • What themes are emerging across customers, members, or staff.

This “question intelligence” becomes strategically valuable:

  • It deepens understanding of users.

  • It highlights gaps and risks early.

  • It allows responses to become more relevant and better targeted over time.

Crucially, this only works if the system is designed to learn from those questions - not discard them.
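
As a rough illustration of what "learning from questions" could involve, the sketch below logs each question together with who asked it and from where, then counts recurring keywords to hint at emerging themes. The field names, keyword counting, and stopword list are simplifying assumptions; a production system would more likely use clustering or topic modelling over a real data store, with appropriate privacy controls.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical in-memory log; in practice this would be a database or
# analytics pipeline rather than a Python list.
question_log: list[dict] = []


def log_question(question: str, role: str, location: str) -> None:
    """Record each question together with who asked it and from where."""
    question_log.append({
        "question": question,
        "role": role,
        "location": location,
        "asked_at": datetime.now(timezone.utc),
    })


def recurring_themes(top_n: int = 5) -> list[tuple[str, int]]:
    """Very rough theme detection: count repeated keywords across questions.
    A real system would use clustering or topic modelling instead."""
    stopwords = {"the", "a", "an", "is", "are", "do", "does",
                 "what", "how", "to", "we", "our", "in"}
    words = Counter(
        word
        for entry in question_log
        for word in entry["question"].lower().replace("?", "").split()
        if word not in stopwords
    )
    return words.most_common(top_n)


log_question("What changed in the working-at-heights standard?", "site manager", "Auckland")
log_question("Does the new standard change our induction checklist?", "HR advisor", "Wellington")
print(recurring_themes())  # "standard" surfaces as a recurring theme
```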

What the MIT report suggests organisations should look for

Based on the research, AI tools that make it into sustained use tend to share a few characteristics:

  • They are bounded to trusted, relevant information.

  • They learn from usage, rather than treating every interaction in isolation.

  • They adapt to context - role, location, domain, history.

  • They provide visibility into what users are asking, not just what the system says.

  • They reduce risk and rework, rather than creating new uncertainty.

This is particularly important in high-stakes, information-dense environments where “almost right” is often not good enough.
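
To show what "bounded to trusted, relevant information" might mean mechanically, here is a deliberately crude sketch: answers are only drawn from an allow-listed set of sources, and when nothing trusted matches well enough, the question is flagged rather than answered. The overlap score, threshold, and document names are illustrative assumptions, not a recommended implementation.

```python
# Illustrative only: a toy allow-list of trusted sources and a crude
# keyword-overlap score. Real systems would use proper retrieval,
# citations, and review workflows.
TRUSTED_SOURCES = {
    "privacy-policy-v3": "personal data retention and disposal requirements",
    "hs-manual-2024": "incident reporting and notifiable event procedures",
}


def answer_from_trusted_sources(question: str, min_overlap: int = 2) -> str:
    q_terms = set(question.lower().rstrip("?").split())
    best_doc, best_score = None, 0
    for doc_id, text in TRUSTED_SOURCES.items():
        score = len(q_terms & set(text.lower().split()))
        if score > best_score:
            best_doc, best_score = doc_id, score
    if best_score < min_overlap:
        # "Almost right" is not good enough: surface the gap instead of guessing.
        return "No trusted source covers this - flag for a subject-matter expert."
    return f"Answer drawn from {best_doc} (overlap score {best_score})."


print(answer_from_trusted_sources("What are our retention requirements for personal data?"))
print(answer_from_trusted_sources("Can we sponsor a visa for a contractor?"))
```

The refusal path is the point: surfacing a gap for a person to resolve is part of how these systems reduce risk and rework rather than creating new uncertainty.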

A more useful way to think about AI initiatives that stall

When AI initiatives stall, the instinct is often to diagnose the wrong problem - model choice, vendor selection, or whether the organisation is “moving fast enough”.

The MIT report suggests a more useful lens.

Instead of asking “Is this tool impressive?”, organisations that make progress tend to ask:

  • Does this system retain context, or does every interaction start from scratch?

  • Does it improve as people use it, or does it stay essentially the same?

  • Does it help us understand what users are actually asking, or only generate answers?

  • Does it fit naturally into how work already happens?

Seen through this lens, many stalled initiatives aren’t failures - they’re tools that were never designed to grow beyond experimentation.

This is where the gap between AI demos and AI systems becomes visible.

A final thought

If AI feels harder to operationalise than expected, that’s not a failure - it’s a signal.

The organisations that succeed won’t be the ones running the most experiments, but the ones that:

  • Treat AI as a learning capability.

  • Focus on real workflows.

  • Pay attention to what users are actually asking.

  • Choose systems designed to improve with use.

That’s where durable value - and real confidence in an AI roadmap - actually comes from.