Every CEO I talk to asks some version of the same question: “Are we ready for AI?”

And every enterprise consulting firm has an answer for them. It involves a six-figure engagement, a three-month assessment phase, and a 60-page maturity model that looks beautiful in a board presentation and changes nothing about how the company operates.

Here is the thing. If you are a growth-stage company doing $5M to $100M in revenue, you do not need a Gartner maturity model. You do not need a McKinsey framework. You need to answer 20 specific questions, understand what the answers mean, and know what to do next.

That is what this assessment does. I am publishing the full framework here, not behind a form, not gated, not abbreviated. Because the framework itself is not the hard part. Implementation is the hard part. And the companies that actually implement are the ones that started with a clear picture of where they stand.

Why Most AI Readiness Assessments Are Useless

I have reviewed a dozen AI readiness frameworks from the major consulting firms and tech vendors. They share three problems.

They are built for the wrong company. Enterprise maturity models assume you have a data science team, a chief data officer, a dedicated AI budget, and an 18-month timeline. If you are a 50-person company, those assumptions make the entire framework irrelevant. You do not have 18 months. You need to know what to do this quarter.

They measure the wrong things. “Rate your data maturity on a scale of 1 to 5.” What does that mean? A 3? What do I do with a 3? These frameworks produce scores that go in a presentation but never produce action. The output is a number, not a next step.

They gate the value behind the engagement. Most readiness assessments are designed to produce one conclusion: “you need to hire us.” The assessment is the sales tool, not the deliverable. The actual value, the implementation plan, only comes after you sign the contract.

What you need instead is a framework that asks specific questions, produces actionable answers, and tells you exactly what to fix and in what order. That is what I am going to give you.

The 5 Dimensions of AI Readiness

After running assessments for companies across healthcare, professional services, manufacturing, and insurance, I have found that AI readiness comes down to five dimensions. Miss any one of them and your first AI project will stall.

Dimension 1: Workflow Clarity

Can you describe, in specific terms, what your team does every day?

This sounds basic. It is not. Most companies I work with cannot map their daily workflows with any precision. They know the outputs (reports get sent, clients get serviced, campaigns get launched) but not the inputs and steps that produce those outputs.

AI automates workflows, not job titles. If you cannot describe the workflow, you cannot automate it. This is the single most common gap I see, and it is the easiest to fix.

What “good” looks like: Your marketing lead can tell you, “Every Monday I spend 45 minutes pulling data from Google Analytics and our ad platforms, then 90 minutes writing a client summary, then 30 minutes formatting and sending it.” That is a workflow. That is automatable.

What “not ready” looks like: “We do client reporting.” That is a function, not a workflow. You need the steps before AI can help.
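
To make the contrast concrete, here is what that Monday reporting workflow looks like captured as data instead of a job description. This is a minimal sketch in Python; the field names and times are illustrative, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str    # what happens, in plain language
    minutes: int        # typical time spent
    inputs: list[str]   # where the information comes from

# The Monday client-reporting workflow from above, as structured data.
client_reporting = [
    Step("Pull traffic and ad data", 45, ["Google Analytics", "ad platforms"]),
    Step("Write client summary", 90, ["pulled data", "last week's report"]),
    Step("Format and send report", 30, ["summary draft", "client email list"]),
]

total = sum(s.minutes for s in client_reporting)
print(f"Weekly cost: {total} minutes across {len(client_reporting)} steps")
```

Once a workflow exists at this level of detail, you can reason about which steps to automate. "We do client reporting" gives you nothing to work with.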

Dimension 2: Data Accessibility

Can you get at the information AI needs to do the work?

AI is only as good as the data it can access. That does not mean you need a data warehouse or a clean data lake. It means the information your team uses to make decisions needs to be accessible, not locked in someone’s head or buried in disconnected tools.

What “good” looks like: Your client data lives in a CRM. Your financial data lives in QuickBooks or a similar platform. Your marketing data lives in Google Analytics and your ad platforms. These systems have APIs or export capabilities. The data exists in a form that tools can read.

What “not ready” looks like: Your client history lives in one person’s email inbox. Your pricing decisions are based on “what we charged last time” with no record of what that was. Key information exists only as tribal knowledge.
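
The practical test for this dimension is simple: can a script reach the data at all? Here is a minimal sketch, assuming a cloud CRM with a token-authenticated REST API. The endpoint, token, and field names are placeholders, not any real product's API.

```python
import requests

API_BASE = "https://api.example-crm.com/v1"  # placeholder endpoint
TOKEN = "your-api-token"                     # placeholder credential

resp = requests.get(
    f"{API_BASE}/contacts",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"updated_since": "2025-01-01", "limit": 100},
    timeout=30,
)
resp.raise_for_status()

for contact in resp.json().get("contacts", []):
    print(contact.get("name"), contact.get("last_activity"))
```

If the equivalent of this ten-line pull is possible against your systems, you pass. If the answer is "that data is in someone's inbox," you have found your gap.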

Dimension 3: Decision Patterns

Which decisions in your business are repeatable, and which require genuine human judgment?

This is where the Dispatch/Prep/Yours/Skip framework matters most: Dispatch tasks AI handles end to end, Prep tasks AI drafts for a human to finish, Yours tasks stay with human judgment, and Skip tasks are not worth doing at all. AI handles pattern-based decisions well: classify this email by priority, route this support ticket to the right team, flag this invoice as an outlier. It does not handle judgment-based decisions well: should we fire this client, should we pivot our strategy, should we hire for this role.

What “good” looks like: You can sort your team’s daily decisions into categories. “These 15 decisions follow the same pattern every time. These 5 decisions require context and judgment.” The pattern-based decisions are candidates for AI. The judgment decisions stay with humans.

What “not ready” looks like: Everything feels like a judgment call. Nobody has mapped which decisions are truly unique and which just feel unique because they have never been documented.
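
Documenting those criteria does not require special tooling. Here is a minimal sketch of routine decision rules written down as data, using the four categories above. The keywords and rationales are illustrative; the point is that pattern-based decisions can be expressed as rules at all.

```python
RULES = [
    # (trigger keywords, category, rationale)
    (["invoice", "receipt"],  "Dispatch", "same inputs, same routing every time"),
    (["proposal", "quote"],   "Prep",     "AI drafts, a human customizes"),
    (["complaint", "cancel"], "Yours",    "requires judgment and relationship context"),
    (["newsletter", "fyi"],   "Skip",     "no decision worth making"),
]

def classify(subject: str) -> str:
    s = subject.lower()
    for keywords, category, _rationale in RULES:
        if any(k in s for k in keywords):
            return category
    return "Yours"  # anything unmatched defaults to human review

print(classify("Quote request for Q3 campaign"))  # -> Prep
```

If your team can fill in a table like RULES for their common requests, the decisions are pattern-based. If every row turns into "it depends," you are looking at genuine judgment calls, or at patterns nobody has documented yet.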

Dimension 4: Tool Infrastructure

Do you have the basic technology stack that AI tools can plug into?

You do not need enterprise software. You need the basics: a CRM (even a simple one), cloud-based file storage, an email system with API access, and ideally an automation platform like Make or Zapier. The current generation of AI tools connects to standard business software. But if your operations run on paper forms and desktop-only applications, there is a gap to close first.

What “good” looks like: Your core business tools are cloud-based and support integrations. You use Google Workspace or Microsoft 365. You have a CRM. Your team is comfortable with software tools generally.

What “not ready” looks like: Critical processes run on spreadsheets stored on a local desktop. Your team uses email as a database. Key information lives in paper files or legacy systems with no integration capabilities.
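
The integration bar is lower than it sounds. Most automation platforms trigger workflows from an inbound webhook, so the test is whether your tools can send one HTTP request. A minimal sketch, with a placeholder URL and payload:

```python
import requests

# Platforms like Make and Zapier generate a webhook URL that kicks off
# a workflow when it receives a request. This URL is a placeholder.
WEBHOOK_URL = "https://hooks.example.com/catch/12345/abcde"

payload = {
    "event": "new_lead",
    "name": "Jane Doe",
    "source": "website form",
}

resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
resp.raise_for_status()
```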

Dimension 5: Cultural Willingness

Will your team actually use AI if you implement it?

This is the dimension everyone skips and the one that kills more AI projects than any technical limitation. If your team sees AI as a threat rather than a tool, if leadership is not visibly championing the initiative, or if there is no one willing to own the first pilot, the technology does not matter.

What “good” looks like: At least one person on the team is excited about AI and willing to be the internal champion. Leadership has communicated that AI is about amplifying the team, not replacing it. The team is open to changing how they work.

What “not ready” looks like: The team is anxious about AI replacing their jobs. Leadership wants “AI strategy” but has not communicated why or what it means for individuals. Nobody has volunteered (or been designated) to own the initiative.

The Assessment: 20 Questions

Score each question from 1 (strongly disagree) to 5 (strongly agree). Be honest. Optimistic scores produce useless results.

Workflow Clarity (Questions 1 to 4)

  1. I can describe, step by step, the top 5 most time-consuming tasks my team does each week.
  2. I know how many hours per week each team member spends on recurring, pattern-based tasks versus unique, judgment-based work.
  3. If a key team member left tomorrow, someone could document their daily workflow within a day.
  4. I have at least 3 specific tasks that I know are repetitive, high-volume, and follow a predictable pattern.

Data Accessibility (Questions 5 to 8)

  5. Our core business data (clients, financials, operations) lives in systems that can be accessed by other tools.
  6. We do not have critical business information that exists only in one person’s head or email inbox.
  7. Our team can pull a report on key business metrics without asking IT or waiting more than a few minutes.
  8. Our client and operational data is reasonably current, not months out of date.

Decision Patterns (Questions 9 to 12)

  9. I can identify at least 5 decisions my team makes daily that follow a consistent pattern (same inputs, same logic, same output).
  10. My team spends more time on pattern-based tasks than on tasks requiring genuine creative or strategic judgment.
  11. We have documented criteria for routine decisions (how to prioritize emails, how to classify requests, how to route issues).
  12. When a new team member starts, we can explain “here is how we handle X” for most common situations.

Tool Infrastructure (Questions 13 to 16)

  13. Our core business tools are cloud-based and support integrations or API access.
  14. We use (or could use) an automation platform like Make, Zapier, or similar.
  15. Our team is generally comfortable learning new software tools.
  16. We could give an AI tool access to our email, calendar, CRM, and file storage without major security concerns.

Cultural Willingness (Questions 17 to 20)

  17. At least one person on the team has experimented with AI tools (ChatGPT, Claude, etc.) for work tasks.
  18. Leadership has communicated that AI is a priority and explained what it means for the team.
  19. The team would welcome a tool that handles their most tedious recurring tasks.
  20. We have (or could identify) an internal champion who would own the first AI implementation.

Scoring and What It Means

Add up your scores across all 20 questions. Maximum possible: 100.
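
If you want the arithmetic done for you, here is a minimal scoring sketch. The answers list is an example; replace it with your own 1-to-5 scores in question order. It also prints the per-dimension sub-scores, which matter more than the total: each dimension maxes out at 20, and anything below 12 is a gap to address.

```python
DIMENSIONS = [
    "Workflow Clarity", "Data Accessibility", "Decision Patterns",
    "Tool Infrastructure", "Cultural Willingness",
]

# Example answers: questions 1-4, 5-8, 9-12, 13-16, 17-20 in order.
answers = [3, 2, 4, 3, 2, 2, 3, 4, 3, 3, 2, 4, 4, 4, 3, 3, 2, 1, 4, 2]
assert len(answers) == 20 and all(1 <= a <= 5 for a in answers)

print(f"Total: {sum(answers)}/100")
for i, name in enumerate(DIMENSIONS):
    score = sum(answers[i * 4:(i + 1) * 4])
    flag = "  <- gap to close" if score < 12 else ""
    print(f"  {name}: {score}/20{flag}")
```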

75 to 100: Ready to implement. You have the workflows documented, the data accessible, the tools in place, and the team willing. Do not spend another month assessing. Start building your first AI workflow this week. Pick the highest-volume Dispatch-level task in your marketing function and automate it. Read the implementation playbook and start with Week 1.

55 to 74: Ready with minor gaps. You are close. Look at which dimension scored lowest. If it is Workflow Clarity, spend one week mapping your top 10 recurring tasks before you start. If it is Data Accessibility, identify the 2 to 3 data sources that need to be connected or migrated. If it is Cultural Willingness, leadership needs to have the “what AI means for us” conversation before implementation begins. Fix the gap, then start.

35 to 54: Foundation work needed. This is where most growth-stage companies land on their first assessment. You are not behind. You just have specific things to fix first. Look at each dimension’s score individually:

  • Workflow Clarity below 12: Spend two weeks documenting your team’s daily operations. Have each person track their time for one week. This exercise alone produces insights worth the effort.
  • Data Accessibility below 12: Audit where your business data lives. Identify the information that is trapped in email inboxes, local files, or tribal knowledge. Start migrating to accessible systems.
  • Decision Patterns below 12: Run the Dispatch/Prep/Yours/Skip classification on your team’s tasks. This reveals how many decisions are actually pattern-based versus how many feel pattern-based.
  • Tool Infrastructure below 12: Evaluate your tech stack. Cloud migration does not have to be a six-month project. Start with the tools your team uses most.
  • Cultural Willingness below 12: This is the most important gap to close. Have an honest conversation with your team about what AI will and will not change. Show them examples of AI handling their tedious work, not replacing their judgment.

Below 35: Start with the basics. You have foundational work to do, but do not let that discourage you. Companies at this level often make the fastest progress because the improvements are straightforward. Document workflows, connect data sources, get the team comfortable with cloud tools. Most companies move from this range to the 55+ range in 60 to 90 days with focused effort.

What “Not Ready” Actually Means

Here is what most readiness assessments get wrong: they treat “not ready” as a stop sign. It is not. It is a to-do list.

I have never worked with a company that was a zero on every dimension. Even the least prepared organizations have pockets of readiness: one team member who already uses AI tools, one process that is well-documented, one data source that is clean and accessible.

The path from “not ready” to “first AI workflow running” is shorter than most people think. Here is what it typically looks like:

Weeks 1 to 2: Document. Map your top 10 recurring workflows. Have your team track their time. Identify the high-volume, pattern-based tasks. This is the Workflow Clarity gap, and closing it takes days, not months.
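
The time-tracking step needs nothing fancier than a shared spreadsheet exported to CSV. Here is a minimal sketch that turns a week of entries into a ranked list of automation candidates; the file name and column names are illustrative.

```python
import csv
from collections import Counter

# Assumes a shared log with columns: person, task, minutes.
minutes_by_task = Counter()
with open("time_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        minutes_by_task[row["task"]] += int(row["minutes"])

# The highest-volume recurring tasks are your first candidates.
for task, minutes in minutes_by_task.most_common(10):
    print(f"{task}: {minutes / 60:.1f} hours/week")
```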

Weeks 3 to 4: Connect. Get your data into accessible systems. This might mean migrating from a desktop spreadsheet to a cloud CRM, connecting your email to an automation platform, or simply creating shared drives instead of local folders. Close the Data Accessibility gap.

Weeks 5 to 6: Classify. Run the Dispatch/Prep/Yours/Skip framework on every task you documented. This closes the Decision Patterns gap and gives you a prioritized list of what to automate first.

Weeks 7 to 8: Build. Pick one workflow. The highest-volume Dispatch or Prep task you identified. Build the AI automation. Not a pilot program. Not a proof of concept. A real workflow that runs in your actual business starting tomorrow.

That is 60 days from “not ready” to “first AI workflow live.” Not theoretical. I have seen this timeline play out across multiple clients and industries.

What “Ready” Looks Like in Practice

Ready does not mean perfect. It means you have enough clarity, data, and willingness to start with one focused implementation and learn from it.

Here is what I have seen at companies that scored in the 60+ range and moved directly to implementation:

A healthcare services company identified that their patient intake coordinators spent 3 hours per day on email triage and scheduling coordination. Workflow was clear, data was in their EHR system, decisions were pattern-based (route this type of inquiry to this coordinator). They built an AI triage system in two weeks. Time savings: 2.5 hours per coordinator per day.
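
For a sense of scale, the core of a triage step like that can be genuinely small. Here is a minimal sketch using the Anthropic Python SDK; the model name, queue labels, and prompt are assumptions, the inbox feed and routing live elsewhere, and a real system would log every decision and send low-confidence cases to a human.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def triage(subject: str, body: str) -> str:
    """Route an inquiry to one of a fixed set of queues."""
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any current model
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                "Route this inquiry to exactly one queue: "
                "SCHEDULING, BILLING, CLINICAL, or OTHER. "
                "Reply with the queue name only.\n\n"
                f"Subject: {subject}\n\n{body}"
            ),
        }],
    )
    return msg.content[0].text.strip()

print(triage("Reschedule my appointment", "Can I move Thursday to next week?"))
```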

A professional services firm mapped their proposal process: 6 hours per proposal, mostly assembling boilerplate from previous proposals and customizing for the prospect. Their data was in Google Drive and their CRM. The team was willing. They built an AI-assisted proposal drafting system that reduced assembly time to 90 minutes. Quality stayed the same because a senior partner still reviewed every proposal, but the Prep work was handled.

An insurance agency cluster documented their renewal processing workflow: pull policy data, compare rates, draft renewal letters, follow up. Pattern-based from start to finish. They automated the data pull, comparison, and draft letter, cutting a 4-hour-per-renewal process to 45 minutes of review and customization.

In each case, the company did not try to “transform with AI.” They picked one workflow, built the automation, measured the results, and expanded from there. That is the implementation-first approach that actually produces results.

Running the Assessment Yourself

You can run this assessment today. Here is how to get the most out of it.

Get the right people in the room. You need someone who knows the daily operations (not just the org chart), someone who knows the tech stack and data infrastructure, and someone who makes decisions about where time and money go. In a growth-stage company, this might be three people or it might be one person who wears all three hats.

Score honestly. The temptation is to score optimistically, especially on Cultural Willingness. Resist that. An honest 42 is more useful than an optimistic 65, because the honest score shows you exactly what to fix. The optimistic score leads to an implementation that stalls when the real gaps surface.

Look at dimension scores, not just the total. A total score of 55 could mean “solid across the board” or it could mean “excellent on four dimensions and terrible on one.” The dimension breakdown is where the action items live.

Do not use this as an excuse to delay. The point of the assessment is to clarify what to do next, not to justify waiting. If you score a 45, the answer is not “we should assess more.” The answer is “here are the two specific gaps to close, and here is the 60-day plan to close them.”

Next Steps

You have the framework. You can score yourself right now. But if you want someone to run the assessment with you, interpret the results, and build the implementation plan for what comes after, that is what I do.

The guided version of this assessment takes two sessions. In the first session, I walk through the 20 questions with your team and dig into the specifics behind each score. In the second session, I present the findings with a prioritized implementation roadmap: what to fix, what to build first, and what the first 90 days look like.

The assessment itself is free. Get in touch if you want the guided version.

If you want to start on your own, here is the recommended reading order: Why Marketing Is the Best Starting Point for AI Integration gives you the beachhead strategy for where to implement first. AI Integration for Business Operations gives you the full implementation playbook. And Why Most AI Consulting Fails helps you avoid the consultants who will waste your time and money.

Frequently Asked Questions

How do I know if my company is ready for AI?

Evaluate five dimensions: workflow clarity (do you know what your team actually does?), data accessibility (can you get at the information AI needs?), decision patterns (which decisions are repeatable?), tool infrastructure (do you have the basic tech stack?), and cultural willingness (will your team use it?). Most growth-stage companies score between 35 and 54 on their first assessment, which means they have a short list of specific gaps to close before the first pilot, not that they need to wait.

What is an AI readiness assessment?

An AI readiness assessment evaluates how prepared your organization is to implement AI. Unlike enterprise maturity models from Gartner or McKinsey, a practical AI readiness assessment for growth-stage companies focuses on five dimensions: workflow clarity, data accessibility, decision patterns, tool infrastructure, and cultural willingness. The goal is not a score for a board deck. It is a clear picture of what to fix before you start and where to start first.

How long does an AI readiness assessment take?

A self-assessment takes 60 to 90 minutes with the right people in the room: someone who knows daily operations, someone who knows the tech stack, and someone who makes decisions about time and money. A guided assessment with a consultant typically takes two sessions and produces a prioritized implementation plan alongside the score.

What should I do if my company is not ready for AI?

Not ready does not mean do not start. It means fix two or three specific things first. The most common fixes: document your workflows, connect your data sources, and identify one internal champion to own the first pilot. Most companies go from not ready to a first AI workflow live in roughly 60 days of focused effort.

Do small businesses need AI readiness assessments?

Yes, but not the enterprise kind. Gartner maturity models and McKinsey frameworks are designed for organizations with dedicated AI teams and seven-figure budgets. Growth-stage companies need a simpler assessment focused on practical questions: do you know your workflows, can you access your data, and will your team adopt the tools? The assessment should take an hour, not a quarter.

What is a good AI readiness score?

On a 100-point scale, most growth-stage companies score between 35 and 54 on their first assessment. A score of 75 or above means start implementing immediately. A score of 55 to 74 means close one or two gaps first. Below 55 means foundational work is needed, but even that typically takes 60 to 90 days to resolve.