From pilots to proof: How to scale artificial intelligence

Organisations can make sure AI is more than just a bubble by discovering how it’s already being used inside the business.


In summary

  • AI has been broadly embraced at board level, but rhetoric often fails to live up to reality 
  • Real but small-scale success stories, pilots and adoption are often mistaken for organisational transformation 
  • The gap between expectations and reality threatens to undermine the technology’s credibility and organisations’ enthusiasm 
  • But unearthing unauthorised and informal uses of AI could not only reduce risks but also unlock opportunities to identify scalable use cases  

In many businesses, confidence in AI seems inversely proportional to its concrete achievements.

Organisations say they are embracing the technology. It features prominently in board discussions and corporate strategies. Conversations have increasingly shifted from whether to adopt AI, to when it can scale.   

But there is growing tension between the rhetoric and measurable evidence of adoption, productivity impacts and operational change. Recent research suggests that while executive enthusiasm remains high, enterprise-wide AI adoption often plateaus, with tangible returns falling below expectations.

In many cases, there’s a growing gap between belief in AI and the reality in large, complex organisations operating at scale. 

Technology success stories and the illusion of scale

There are several reasons for this.  

One is that much executive confidence in AI is shaped by exposure to success stories. These are real and often impressive, showcasing genuinely innovative use cases and step changes in personal productivity or efficiency.

The problem is not that these examples exist. Many successful rollouts of AI are likely to start with such pilots or small-scale adoptions. But these early wins are often mistaken for evidence of widespread transformation that has already taken place.

What is rarely visible is how exceptional these cases are, and how much informal effort they rely on. They are typically the product of the persistence, trial and error of individuals or small teams. That can disguise how difficult they are to replicate across large organisations with legacy systems, embedded controls, established cultures and ways of working, and multiple lines of defence.

Many of these pockets of innovation bring valuable, albeit limited, gains in productivity, efficiency or quality. But they're often misinterpreted as proof of organisational transformation and maturity. Over time, this creates an illusion of scale that is hard to challenge, even when the evidence shows that day-to-day operations remain largely unchanged.

The foregrounding of these success stories is encouraged by senior executives’ enthusiasm for AI, which is quickly noticed inside organisations.

The response within organisations is often swift and well-intentioned. AI labs, innovation hubs, centres of excellence and pilot programmes flourish; trials are launched; new policies and principles are published. Generative AI assistants and other tools are rolled out at pace.  

These activities give AI even more visibility and, perhaps fairly, allow organisations to say that they are investing in AI and exploring its potential. But in many cases, they sit alongside unchanged processes, incentives and operating models. The work is additive rather than transformative. 

The result is that organisations are busy with AI without being changed by it. From the top, leaders receive reports of progress, but the operational reality below remains unmoved. Or, at least, the formal processes and procedures do. 

Pockets of innovation bring valuable gains in productivity, efficiency or quality, but they’re often misinterpreted as proof of organisational transformation.

Informal adoption and invisible AI use

None of this is to deny that AI is having an impact. But the most common form of AI adoption is informal, largely independent of official programmes and often taking place out of sight.

Individuals and small teams are experimenting with AI, just as they do outside the workplace. They’re using tools to draft documents, analyse data, triage information and automate repetitive tasks.  

Often, they’re harnessing consumer-grade tools or AI features embedded in existing software platforms. Many organisations are discovering that a significant proportion of AI use occurs within tools they license, but without explicit approval or control. It simply harnesses capabilities enabled by software vendors. 

In practice, this means organisations may know which platforms are connected, but not how AI is used, what data is processed, or how outputs are informing decision making. For boards and executive teams, this raises familiar but unresolved questions around data protection, accountability, model risk and organisational assurance. 

There is both a risk and a lost opportunity in this.  

The data protection, governance and cyber security risks are increasingly obvious. But the lack of visibility also works against the successful adoption of AI at scale in another way.

Such activity – unofficial, unmeasured and disconnected from formal AI projects – is driven by personal initiative, interests and irritants rather than organisational strategy. Individuals are usually best placed to identify the use cases that could make a difference. But ironically, the people learning the most about how AI affects their work are frequently the least visible to central teams.

When AI use sits outside formal governance structures, it is rarely evaluated for quality or risk, and its potential for scaling goes unexamined. Valuable learning is lost, and potentially powerful use cases never make the transition from individual workarounds to enterprise capabilities.

Organisations may try to eliminate unauthorised use, but they would usually do better to seek a clear view of what is happening on the ground so that they can distinguish between productive innovation and unmanaged exposures. 

The people learning the most about how AI affects their work are frequently the least visible to central teams.

From stories to stats: Evidencing AI

Addressing the gap between AI rhetoric and reality is important for both businesses and the future of the technology. The real risk of an AI bubble is not that AI lacks value, but that organisations believe they are further along the adoption curve than they really are.

When the anticipated benefits of AI adoption do not materialise, its credibility suffers. Boards begin to ask why productivity has not shifted, why issues remain unresolved and why efficiencies remain elusive. The risk is not one of dramatic failure, but of a slow erosion of trust in the technology and the organisational structures built around it.

Closing this gap requires a shift in focus. Rather than relying on programme-level reporting or success stories, organisations need evidence of how AI is actually used. 

In practice, this means developing visibility into real use cases across the business, including those occurring outside formal AI initiatives and within existing software platforms. It also means understanding which uses are delivering value, which introduce risk, and which represent opportunities to scale responsibly. 

This is where structured insight matters. Tools such as AI Score, an AI governance platform we partner with, can help organisations surface actual AI usage patterns, understand what data is shared, and identify where informal adoption is already taking place. Combined with a clear AI readiness, risk and maturity framework, this allows leaders to move beyond assumptions and engage with AI adoption as it really exists, not as it’s reported.

Organisations that want AI to be more than a talking point don’t need another pilot or lab; they need a grounded, evidence-based understanding of current behaviour, capability and exposure. They can use that to prioritise where AI should be supported, how it should be governed and where it can be scaled.

Those that do this well will be better positioned to turn scattered experimentation into sustainable advantage, ensuring AI isn’t a bubble for them. 

Let S&W help you move from ambition to evidence

Contact us to find out how we can help turn your AI potential into measurable impact.

We help organisations move beyond AI stories by using data-driven insight to understand real AI use, identify what is worth scaling, and do so safely and responsibly.