Insights

You have to play by the rules to win: AI governance and observability

Mark Hendry

AI is a winner at the Olympics, but Italy’s new law to tame the technology reflects just one reason why every business exploiting AI needs to focus on its controls.


In summary

  • Rising regulatory interest and business imperatives are increasing the importance of AI governance 
  • Governance is more than a compliance issue and can promote efficiency, resilience and innovation 
  • Observability is often the missing link in AI governance, providing the insights on model and user behaviour required for control 
  • Technology, people and processes all play a part in ensuring the observability required and making good governance a reality 

As athletes compete in the Winter Olympics in Milano Cortina this week, the event is supported by artificial intelligence like never before.  

As one technology writer explained ahead of the games, “AI will be deployed across every aspect of the Milano Cortina 2026 Winter Olympics, from athlete performance tracking and timekeeping through to real-time television broadcasting and onsite security.” 

But in the host country, Italy, there’s increasing concern about the technology. Last September, it became the first EU member state to pass a national artificial intelligence law. The law introduces requirements for human oversight, governance, access for minors and copyright protections that aim to bring “innovation back within the perimeter of the public interest, steering AI toward growth, rights and full protection of citizens”, according to the government.  

It’s just another sign that, although increasing application of the technology will be a key trend this year, AI governance will be a critical challenge all businesses will face. 

AI governance: Good for business

Regulations are a driver for good governance, and not just in Italy. The UK’s outcome-based regulatory model and the EU’s AI Act take significantly different approaches, but there’s at least one similarity: firms that proactively govern their use of AI will be better placed to meet regulatory expectations as they evolve.  

It’s far from the only benefit, either. AI governance is more than just a question of compliance. It is increasingly a strategic capability, an innovation enabler, a resilience reinforcer and a protector of trust:

  • Promoting operational resilience, with governance frameworks helping firms detect and respond to model drift, data degradation and other systemic risks 
  • Boosting stakeholder confidence by meeting growing expectations of transparency and accountability around AI from investors and clients, as well as regulators 
  • Unleashing innovation, with clear governance not constraining but enabling individuals and firms to safely experiment with AI while preserving appropriate control and oversight 

Without proper oversight, AI can introduce bias, cybersecurity risks, data protection and privacy issues, and potentially other legal liabilities. With proper oversight, the technology just might live up to its promise.   

AI and observability

Control without visibility is an illusion. AI observability is, too often, the missing link in governance. It refers to the ability to monitor, understand and explain how AI systems behave in real time, and it’s a critical capability in controlling how AI is used and operates.  

By enabling continuous oversight and adaptation, observability is critical for devising, implementing and enforcing AI controls and guardrails. It transforms AI governance from an aspiration into a reality, from a static checklist into a real-time capability. 

Without effective insight, AI governance is illusory, providing a false comfort and sense of control. Observability provides that insight and a host of other benefits:  

  • Model performance tracking to detect when models deviate from expected behaviour or degrade over time 
  • Explainability to provide clarity on why models make particular decisions – essential for regulatory compliance and user trust 
  • Auditability, providing the evidence and insights needed for internal reviews or external investigations 
  • Early identification of problems in AI systems, including poor user behaviours, to enable rapid intervention and resolution 
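As an illustration, the model performance tracking described above can start as something very simple: comparing live metrics against an agreed baseline and flagging deviations for review. The metric names, baseline values and tolerance below are hypothetical, a minimal sketch rather than a production monitoring design.

```python
# Minimal sketch of a model-drift check, for illustration only.
# Metric names, baseline values and the tolerance are hypothetical.

def check_drift(baseline: dict, live: dict, tolerance: float = 0.05) -> list:
    """Return a list of alerts for metrics that have drifted beyond tolerance."""
    alerts = []
    for metric, expected in baseline.items():
        observed = live.get(metric)
        if observed is None:
            alerts.append(f"{metric}: no live reading")
        elif abs(observed - expected) > tolerance:
            alerts.append(f"{metric}: {expected:.2f} -> {observed:.2f}")
    return alerts

baseline = {"accuracy": 0.92, "positive_rate": 0.30}
live = {"accuracy": 0.84, "positive_rate": 0.31}
print(check_drift(baseline, live))  # accuracy has drifted; positive_rate has not
```

In practice the value lies less in the check itself than in what happens next: alerts like these feed the rapid intervention and audit trails the bullets above describe.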

Crucially, observability enables organisations to see how their governance policies and controls work in practice and adjust them as required. It shifts governance from a speculative strategy to a data-driven practice that reduces risk and unlocks new opportunities for collaboration and innovation. 

In sectors where governance is particularly crucial and regulators’ expectations are highest, such as financial services, healthcare and legal technology, good observability is a powerful support for, and source of, trust, efficiency and market differentiation. 

Bringing it together: Observable reality

There is no single solution to achieving good observability. It is the product of technology, processes, protocols and people: 

  • Observability platforms provide the required real-time insights into model and user behaviour 
  • Cross-functional governance committees should bring together legal, compliance, technology and business leaders to make the rules for and oversee AI use 
  • AI usage registers can be used to track the use of all AI tools across the organisation, whether in-house, third-party or informal  
  • Model audit protocols should provide for regular review of AI models’ fairness, transparency and performance 
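To make the usage-register idea concrete, a register can be as basic as a structured list of tools with an accountable owner, a sourcing label and a last-review date. The record fields and example entries below are hypothetical, sketched only to show how informally adopted ("shadow AI") tools can be surfaced for review.

```python
# Illustrative sketch of an AI usage register; field names and entries are hypothetical.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    owner: str           # accountable business function
    sourcing: str        # "in-house", "third-party" or "informal"
    last_reviewed: str   # date of last model audit, ISO format

register = [
    AIToolRecord("contract-summariser", "Legal", "third-party", "2025-11-01"),
    AIToolRecord("code-assistant", "Technology", "informal", "2025-09-15"),
]

# Flag informally adopted ("shadow AI") tools for governance review
shadow = [r.name for r in register if r.sourcing == "informal"]
print(shadow)
```

Even a register this simple gives a governance committee something to oversee: a single view of which tools exist, who owns them and which ones have slipped outside formal controls.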

Working together, these can give the essential insights into how the AI works, how people are using it and what its outputs are. To be effective, however, governance and observability must be deeply embedded into the organisation’s strategy, operations and culture.  

Top tips

  • Lead from the top

    AI oversight must be a strategic priority rather than a compliance issue, championed by boards and executives.  

  • Collaborate across functions

    Legal, compliance, risk, technology and business functions must work together. Siloed governance approaches will fail in the face of complex AI systems.

  • Invest in technology

    Observability platforms and model monitoring tools are usually prerequisites for the real-time insights and explainability required. 

  • Set expectations

    Clear policies and training should define acceptable practice and foster responsible AI use, including for shadow AI, where employees informally adopt tools.

  • Keep learning

    Regular reviews, scenario testing, and stakeholder feedback will ensure governance and observability evolve as the technology does.  

Real control for AI

Find out more about our consulting services for taking charge of technology.