Why Readiness Determines Outcomes
The majority of enterprise AI projects fail. The post-mortems rarely blame the models — GPT-4, Claude, and Gemini are capable enough for most business tasks. What fails is the foundation underneath: data that isn't where it needs to be, processes that were never actually documented, teams that lack the authority to make decisions, and organizations that bought the pitch without asking the hard questions first.
"Most AI failures are preparation failures. The technology worked exactly as designed. The surrounding system wasn't ready for it."
This checklist is twelve questions. Honest answers take about an hour. That hour is worth more than the first three months of an unprepared deployment.
Score as you go: one point for each "yes." The scoring guide is at the end.
Data Readiness
Agents run on data. If the data is wrong, inaccessible, or stale, the agent will be wrong, stuck, or outdated — regardless of how sophisticated its reasoning is.
1. Is your data accessible via API or structured export? Can a software system query the data your agent will need without a human exporting it manually? If your team downloads a CSV from a dashboard every Monday to feed into a process, that process is not agent-ready. The data needs a programmatic path.
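As a rough test of what "programmatic path" means, you can classify each data source the agent will touch. This is a minimal sketch with invented field names (`access`, `manual_step`), not a standard schema:

```python
def is_agent_ready(source: dict) -> bool:
    """A source is agent-ready only if software can query it
    without a human exporting anything by hand."""
    programmatic = source.get("access") in {"api", "database", "structured_export"}
    return programmatic and not source.get("manual_step", False)
```

A Monday-morning CSV download fails this test even if the underlying data is perfect: `is_agent_ready({"access": "dashboard", "manual_step": True})` is `False`.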
2. Is your data structured enough to parse reliably? Structured means machine-readable without interpretation: database tables, JSON responses, CSV with consistent schema. Semi-structured (emails, PDFs, free-text notes) can work, but adds parsing overhead and failure modes. If more than 40% of the agent's inputs are unstructured documents, plan for an extraction layer before the agent.
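One way to estimate where you stand against that 40% threshold is to sample the agent's inputs and check how many parse without interpretation. A standard-library sketch, treating valid JSON or consistent multi-column CSV as structured (the heuristic is illustrative, not exhaustive):

```python
import csv
import json
from io import StringIO

STRUCTURED_THRESHOLD = 0.60  # below this, plan for an extraction layer first

def parses_as_structured(payload: str) -> bool:
    """True if the payload is machine-readable without interpretation."""
    try:
        json.loads(payload)
        return True
    except ValueError:
        pass
    # Treat multi-column CSV with a consistent row width as structured.
    rows = list(csv.reader(StringIO(payload)))
    return len(rows) > 1 and len({len(r) for r in rows}) == 1 and len(rows[0]) > 1

def needs_extraction_layer(samples: list[str]) -> bool:
    """True if more than 40% of sampled inputs are unstructured."""
    structured = sum(parses_as_structured(s) for s in samples)
    return structured / len(samples) < STRUCTURED_THRESHOLD
```

Run this over a representative sample of real inputs, not the clean examples from the demo.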
3. Do you have data governance in place? Who owns each data source? Who can grant access? What's the process for an agent to be credentialed against a production system? Governance gaps become blockers when you move from demo to deployment. If the answer to "who owns this data?" is "we're not sure," resolve that before building.
4. Is your data fresh enough for real-time decisions? If your agent is making recommendations based on yesterday's data (or last week's), will that be accurate enough? Freshness tolerance varies by use case: a daily summary report can tolerate a 12-hour lag, but a customer escalation agent cannot. Know your freshness requirement before you build.
"Stale data doesn't break agents visibly. It just makes them confidently wrong."
Process Readiness
AI agents automate processes. If the process isn't defined, the agent will make it up — and not always in ways you want.
5. Is the workflow documented well enough to hand to a new employee? Not "could someone figure it out by watching," but "could someone follow the written instructions without asking questions?" If you can't write that documentation, you can't build an agent. Start with the documentation.
6. Are there clear, measurable success criteria? What does "done correctly" look like? If the answer requires judgment that varies by reviewer, the agent will fail evaluation inconsistently. Success criteria should be specific: "invoice processed with correct GL code and amount within 2% of invoice total" is evaluable. "Handled the invoice correctly" is not.
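The invoice criterion is evaluable precisely because it can be written as code. A minimal sketch — the field names `gl_code` and `amount` are invented for illustration:

```python
def invoice_eval(result: dict, expected_gl_code: str, invoice_total: float) -> bool:
    """The example criterion above: correct GL code, and amount within
    2% of the invoice total. No reviewer judgment required."""
    within_tolerance = abs(result["amount"] - invoice_total) <= 0.02 * invoice_total
    return result["gl_code"] == expected_gl_code and within_tolerance
```

If you cannot write a function like this for your workflow, the success criteria are not yet specific enough to evaluate an agent against.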
7. Is there a clear owner for this process? Someone needs to be accountable for the agent's outputs — to approve changes to its behavior, review escalations, and make the call when the agent is wrong. If process ownership is diffuse or contested, agent governance will be a permanent source of friction.
8. Do you have a defined failure path? What happens when the agent can't complete a task? Who gets the escalation? How fast? What context do they need? Agents will fail — on edge cases, on upstream API outages, on inputs they haven't seen before. The failure path needs to be as designed as the success path.
"The question is never whether the agent will fail. It's whether you've designed for the failure."
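"Designed failure" can be as simple as returning a structured handoff instead of a bare exception. A sketch, assuming a single agent step per task; the owner name and SLA default are placeholders:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Escalation:
    task_id: str
    reason: str        # why the agent stopped
    owner: str         # who gets the handoff
    sla_minutes: int   # how fast they must pick it up
    context: dict = field(default_factory=dict)  # what they need to resume

def run_with_failure_path(task_id: str, attempt: Callable[[], Any],
                          owner: str = "ops-oncall",
                          sla_minutes: int = 30) -> Any:
    """Run one agent step; on any failure, hand back a structured
    escalation so the handoff is designed, not improvised."""
    try:
        return attempt()
    except Exception as exc:
        return Escalation(task_id=task_id, reason=str(exc),
                          owner=owner, sla_minutes=sla_minutes,
                          context={"error_type": type(exc).__name__})
```

The design choice is that failure produces an artifact — who, why, how fast, with what context — answering the four questions above in code rather than in an incident channel.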
Team Readiness
Technology is the easy part. People and authority are where implementation stalls.
9. Do you have an AI champion with real influence? Not a title — someone who will fight for the project when it hits its inevitable rough patch at month two. Champions with influence can unblock access, defend budget, and course-correct scope. Champions without influence watch the project die in committee.
10. Is leadership aligned on what success looks like in 6 months? "Aligned" means a shared written definition, not a nod in a meeting. If the CPTO expects a 40% reduction in processing time and the business unit head expects headcount reduction and the finance team expects cost neutrality, you will fail all three expectations. Align on one primary metric before you start.
11. Can your team work effectively with AI outputs? Humans who review agent outputs need to understand what the agent is doing well enough to catch systematic errors — not just obvious failures. This is a skill gap in most teams today. Budget for it: a two-hour training on "how to review AI output critically" is worth more than an extra sprint of development.
12. Do you have budget authority for at least 12 months? AI agent deployments mature in months three through nine, not month one. Projects that get reviewed for ROI at the three-month mark — before the agent has accumulated enough production data to improve — get cancelled at exactly the wrong time. Confirm you have runway before you start.
"The organizations that succeed with AI agents are the ones that treat month one as a baseline, not a deliverable."
Scoring
Count your "yes" answers.
10–12: You're ready. Start with one high-value workflow, build tight evals, and ship. The foundation is there.
7–9: Almost ready — fix the gaps first. Identify which "no" answers are blockers (data accessibility, process ownership, budget authority) versus nice-to-haves. Fix the blockers before you build. Running a pilot while the foundation is unfinished creates technical debt that's harder to unwind than a delayed start.
Under 7: Invest in the foundation. This isn't a failure — it's an accurate diagnosis. The organizations that try to skip foundation work and deploy agents anyway typically spend 12 months undoing the damage. Use the next quarter to fix data accessibility, document your highest-value workflows, and secure alignment on success criteria. Then revisit this checklist.
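The three tiers reduce to a simple mapping, if you want to embed the checklist in a readiness survey:

```python
def readiness_tier(yes_count: int) -> str:
    """Map a count of 'yes' answers (0-12) to the checklist's tiers."""
    if not 0 <= yes_count <= 12:
        raise ValueError("expected a count between 0 and 12")
    if yes_count >= 10:
        return "ready: start with one high-value workflow"
    if yes_count >= 7:
        return "almost ready: fix the blockers first"
    return "invest in the foundation, then revisit"
```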
One more thing: this checklist is a starting point, not a complete audit. The enterprises we've seen succeed with AI agents share one trait beyond the twelve questions above — they treat the first deployment as something to learn from, not something to get right. Build that expectation into your organization before you build your first agent.