
Is your business actually ready for AI - or just hoping it is?

Most UK SME leaders know they need to take AI seriously. Very few know where they genuinely stand. Here is how to find out.

 

By Paul Noon OBE | AI Adviser and Founder, The Professor-AI | 12 min read


Here is a question worth sitting with for a moment: if a board member asked you to describe your organisation's AI readiness right now - specifically, not vaguely - what would you actually say?

 

I ask this because I have had this conversation with enough UK SME leaders to know that the honest answer, in most cases, is some version of: "We're doing a few things, we think we're probably okay, but if I'm being honest, I haven't really mapped it out."

 

That answer is more common than most people admit. And it is not a failure of ambition or intelligence - it is a failure of diagnostic infrastructure. Most organisations have never been given a clear framework for assessing AI readiness, so they are navigating by feel, reacting to news cycles, copying competitors, and hoping the combination adds up to something coherent.

 

It usually does not.

 

This post sets out what AI readiness actually means - not as a vague aspiration but as a structured, assessable condition across five specific dimensions. By the end of it you will have a clearer picture of where your organisation actually stands. What you do with that picture is up to you.

 

 

Why "we're exploring AI" is not an answer

 

The phrase I hear most often from SME leaders when asked about their AI position is some variation of "we're exploring it" or "we're actively looking at the options." Both phrases do a lot of heavy lifting for very little substance.

 

Exploring AI is not a strategy. It is a posture - and a comfortable one, because it requires nothing specific and commits to nothing measurable. The problem is that while your organisation is exploring, the AI risk landscape is not standing still. Staff are already using free consumer AI tools on company data. Your competitors - or at least some of them - have moved beyond exploration. And the regulatory environment around AI use is tightening, quietly and unevenly, in ways that most SME leaders are not tracking.

 

"The organisations that will struggle most with AI are not the ones who moved too fast. They are the ones who never got past comfortable ambiguity."

 

The antidote to comfortable ambiguity is a clear assessment. Not a consultant's jargon-heavy maturity model, not a vendor's self-serving questionnaire - a straightforward, honest appraisal of where you actually are across the dimensions that matter.

 

[Infographic: the five dimensions of AI readiness - Strategy, Data, People, Governance, Tools - each scored 1 to 5.]

 

The five dimensions of AI readiness

 

AI readiness is not a single thing. It is a composite of five distinct conditions, each of which needs to be in reasonable shape before AI adoption can be either effective or safe. Weakness in any one dimension creates a specific type of failure, and the failure modes differ markedly.

 

1. Strategy and Leadership

 

Whether the organisation has a stated position on AI, who owns it, and whether senior leaders understand enough to make informed decisions.

 

The diagnostic question: "Who in this organisation has the authority and the knowledge to make your most important AI decision - and do they know it is them?"

 

2. Data and Infrastructure

 

What data the organisation holds, in what condition, and whether the infrastructure can support AI use without creating new problems.

 

The diagnostic question: "If you tried to use your customer data to train or inform an AI system tomorrow, what would stop you - and are those blockers known or unknown?"

 

3. People and Culture

 

The current level of AI literacy across the workforce, where the enthusiasm and the resistance are, and whether the culture is capable of absorbing change.

 

The diagnostic question: "Name the person in your organisation most likely to drive AI adoption forward - and the person most likely to quietly undermine it. Do you know what to do with both of them?"

 

4. Governance and Risk

 

Whether policies exist governing AI use, whether risks have been identified and assessed, and whether the organisation is managing its AI exposure or simply hoping it will be fine.

 

The diagnostic question: "How many of your staff are using free consumer AI tools on company data right now - and does your organisation have a policy that covers it?"

 

5. Tools and Workflows

 

What AI tools are already in use - officially or otherwise - whether they are delivering measurable value, and what the highest-value opportunities for expansion look like.

 

The diagnostic question: "What is the ROI evidence for the AI tools your organisation is currently using - and has anyone actually calculated it?"

 

Notice what is not in this framework: technical architecture, machine learning models, developer capability, API integrations. Those things matter eventually, for some organisations. They are not where most UK SMEs need to start - and starting there is one of the most common and expensive mistakes in AI adoption.

 

 

What each dimension failure looks like in practice

 

Every dimension has a characteristic failure mode. Knowing which failure mode is most relevant to your organisation is the beginning of useful diagnosis.

 

When Strategy and Leadership are weak

 

AI decisions happen reactively - triggered by a supplier's pitch, a competitor's announcement, or a news article that one of the directors forwarded. There is no framework for evaluating those decisions consistently. Different parts of the organisation make different AI choices, in isolation, with no one coordinating the picture.

 

Risk this creates: AI investment that cannot be defended to the board because no one can explain why particular tools were chosen or what they were expected to achieve. Diffuse spending with no measurable outcome and no strategic coherence.

 

When Data and Infrastructure are weak

 

The organisation buys AI tools and then discovers the data needed to run them effectively does not exist, or exists in a form that cannot be used. Systems are not connected. Customer records are inconsistent. GDPR compliance is assumed rather than evidenced. The AI tool sits on top of a data foundation that cannot support it.

 

Risk this creates: AI outputs that are unreliable because they are built on poor data - and the unreliability is not immediately visible, so wrong decisions get made with false confidence. In regulated sectors, potential compliance exposure.

 

When People and Culture are weak

 

AI tools get deployed to staff who have not been prepared for them. Some staff use them enthusiastically, without guidance, creating informal, ungoverned usage patterns. Others resist them and find quiet ways around them. Neither group has what they need to use AI well.

 

Risk this creates: AI investment that fails to deliver the productivity gains it promised because adoption is inconsistent and ungoverned. Over time, the cynicism this creates makes the next wave of AI adoption harder to land.

 

When Governance and Risk are weak

 

This is the dimension where I see the most exposure, most consistently. Staff are using tools such as ChatGPT, Copilot, and other free AI products in their daily work. In many organisations, this is happening without any policy framework, any assessment of what data those tools process, or any understanding of what the AI provider does with that data.

 

The question most leaders do not want to answer: Has your finance team ever pasted client financial data into ChatGPT? Has your HR team used an AI writing tool to draft correspondence involving employee information? If you do not know the answer with confidence, that is itself the answer.

 

When Tools and Workflows are weak

 

The organisation is not extracting the value from the AI tools it already has. Microsoft Copilot licences are sitting unused or underused. Tools have been trialled and quietly abandoned when they did not immediately deliver. There is no process for evaluating AI tools consistently - or for knowing when to stop using one.

 

Risk this creates: continued spend on tools with no demonstrable return, and no evidence base for deciding which tools to scale, fix, or retire.

 

 

A rough self-assessment

 

Before you read further, it is worth taking 60 seconds to form a working hypothesis about where your organisation stands. For each of the five dimensions, give yourself an honest score from 1 to 5. A score of 1 means no meaningful activity and a significant unaddressed risk. A score of 5 indicates a documented, consistently applied approach that would withstand external scrutiny. Most UK SMEs score between 1.5 and 2.5 on most dimensions.

 

To make that concrete, check how many of these statements are true for your organisation:

 

  1. We have a written document or agreed position that states our organisation's approach to AI - not just a general intention, but an actual document.

  2. We know exactly what personal or sensitive data our staff are processing through AI tools, and we have assessed the GDPR implications.

  3. We have provided structured AI training to staff - not just access to tools, but guidance on how and when to use them responsibly.

  4. AI has been a formal agenda item at the board or senior leadership level in the past six months, with a recorded outcome.

  5. We can point to at least one AI-driven workflow change in the last 12 months that delivered a measurable, evidence-based improvement.


If you checked three or fewer of those statements, your organisation has meaningful gaps that carry real commercial and regulatory risk. That does not make you unusual - it makes you a representative UK SME in 2026. But it does mean the "we're exploring it" posture is costing you more than you might think.

 

 

The governance gap is the most urgent - and the most overlooked

 

Of the five dimensions, governance is the one I spend the most time on with advisory clients - because it is the one where the gap between perceived risk and actual risk is largest.

 

Most senior leaders believe their organisation's AI risk exposure is low because they have not formally adopted any AI tools at scale. What they have not accounted for is the informal adoption that is already underway, invisibly, across their workforce.

 

Consumer AI tools are free, powerful, and being used by your staff right now. Some of that use is entirely benign. Some of it involves the processing of personal data, commercially sensitive information, or client records through platforms whose data handling practices are at best unclear and at worst non-compliant with your GDPR obligations.

 

"Your AI governance risk is not a future problem. If your staff have access to free AI tools and no policy governing their use, it is a current problem - it is simply an unexamined one."

 

The fix is not technically complex. An AI use policy does not require a team of lawyers or a six-figure implementation project. It requires a clear-eyed assessment of what is currently happening, a documented position on what is and is not acceptable, and a communication programme that ensures staff understand it. Most organisations can do this in a matter of weeks with the right guidance.

 

What stops them is not complexity. It is not knowing where to start, and not having a clear enough picture of their own current position to know what the policy needs to cover.

 

 

What a genuine readiness assessment tells you that this blog cannot

 

The framework above is useful for forming a working hypothesis about where your organisation stands. It is not a substitute for a structured, independent diagnostic that goes into the specifics of your business.

 

Here is what a proper AI readiness assessment adds that a self-scored framework cannot:

 

Specific evidence, not impressions. A self-assessment is only as accurate as the self-awareness of the person completing it. A structured diagnostic draws on specific, evidence-based answers - the actual tools in use, the actual policies in existence or not, the actual data governance arrangements. The difference between "we probably have that covered" and "here is our GDPR data processing register" is significant.

 

Named risks, not generic cautions. A self-assessment might flag that governance is a concern. A proper audit names the specific risk: "Your current practice of using a particular tool to process client financial data is likely incompatible with your data processing obligations under your client contracts." The specificity is what makes it actionable.

 

A prioritised roadmap, not a list. The output of a proper assessment is not a list of everything the organisation should do about AI - that list is always too long and always paralyses action. It is a prioritised sequence: three specific actions, ordered by impact and feasibility, that will make the most difference in the next 90 days.

 

Worth asking yourself: if your board asked you to provide a one-page AI readiness summary next month, would you be able to produce one that was specific, evidenced, and defensible? If the answer is no - that is the gap the assessment fills.

 

 

Where most UK SMEs actually are right now

 

Based on the organisations I have worked with and assessed in the past year, here is a rough picture of where most UK SMEs with 50 to 300 employees currently sit.

 

On Strategy and Leadership, most score between 1.5 and 2.5. There is awareness and some informal discussion, but very few have a documented position or clear ownership. On Data and Infrastructure, the picture varies considerably by sector - but data quality problems are near-universal. On People and Culture, there is typically a significant bimodal distribution: a small number of AI enthusiasts doing a great deal, and a larger number of staff with minimal awareness and no guidance.

 

On Governance and Risk - the dimension where I would argue the most urgent attention is needed - the majority of organisations score between 1 and 2. Policies are absent or inadequate. Informal tool use is extensive and ungoverned. The risk exposure is real and immediate.

 

On Tools and Workflows, there is typically more activity than leaders realise - but it is scattered, uncoordinated, and often delivers less value than it could because it sits outside any strategic framework.

 

The aggregate picture is of organisations that are not behind because they are unaware or incompetent. They are behind because nobody has given them a clear, practical, specific picture of where they are and what to do about it - and because "we're exploring" has been an acceptable substitute for that picture for too long.

 

 

Know where you actually stand

 

The AI Readiness Audit is a structured five-dimensional diagnostic for UK SMEs. You receive a board-ready report, named risks, and a prioritised 90-day roadmap - delivered in 10 working days. Starting from £2,500.

 

View the full audit details and pricing: www.theprofessor.info/aireadiness

 

 


 


Professor Paul Noon

Paul Noon OBE is an AI adviser and the founder of The Professor-AI, advising UK SMEs and senior leadership teams on AI strategy, governance, and adoption. He was previously Deputy Vice-Chancellor at Coventry University Group. He publishes regularly on AI readiness and governance at theprofessor.info.

 
 
 
