Download the checklist
Use the written version in internal procurement, architecture review, or leadership discussions before you approve a pilot.
A surprising amount of AI buying still happens backward. A vendor shows a strong demo, internal enthusiasm builds, and the commercial conversation starts from capability rather than operating reality. That is exactly how businesses end up with expensive pilots that never become reliable workflow infrastructure.
The first reframe is simple: most buyers should not start by asking whether the AI looks impressive. They should start by asking what kind of system it actually is, what commercial problem it is supposed to improve, and what risk or operating burden sits around the model.
1. Start with the commercial problem, not the category
“We need an AI tool” is not a buying case. Neither is “the market is moving.” A serious purchase starts with one explicit workflow or decision problem.
- What decision or workflow should improve?
- What is broken today?
- What measurable business outcome would make this worth funding?
- Who owns that result after go-live?
If those answers are weak, the buying process is probably happening too early.
2. Ask what the product really is
This is where many teams stay too shallow. “It uses GPT” or “it is fine-tuned” is not enough.
You need to understand whether the product is mainly:
- a fine-tuned model for a narrow behavior
- a retrieval layer over your content
- an orchestration layer around a major model provider
- a thin wrapper around an API call
These are not merely semantic distinctions. They change data flow, control, defensibility, switching cost, and how much vendor substance you are actually paying for.
3. Force the data-flow conversation early
Leaders often assume that if the feature sits inside a known platform, the data risk is understood. That assumption is weak.
Before approving the product, clarify:
- where input data goes
- whether a third-party model provider receives it
- whether prompts and outputs are stored or logged
- who can access those logs
- whether any layer uses data for training
- which subprocessors are involved
If the answers are vague, the buying process is not ready to progress.
4. Check permissions and failure modes, not just features
A polished interface is not the same thing as a controlled business system. For knowledge tools, copilots, and agents, you need to ask:
- Does the system respect document-level permissions?
- Can it surface content users should not see?
- What happens when confidence is low?
- Does it abstain, escalate, or hallucinate?
- Can a bad answer be traced after an incident?
The strongest vendors can explain their failure modes clearly. Weak ones retreat into demo language.
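One way to make these questions concrete is to ask how the vendor's retrieval layer would express them in code. The sketch below is illustrative only, with hypothetical names throughout: it filters retrieved chunks by document-level permissions and abstains when nothing confident survives, rather than answering anyway.

```python
# Hypothetical sketch: document-level permissions plus a confidence gate
# in a retrieval pipeline. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str
    score: float          # retrieval confidence, 0..1
    allowed_groups: set   # groups permitted to read the source document

def answerable_context(chunks, user_groups, min_score=0.35):
    """Drop chunks the user may not see, then apply a confidence gate."""
    visible = [c for c in chunks if c.allowed_groups & user_groups]
    confident = [c for c in visible if c.score >= min_score]
    if not confident:
        return None  # abstain / escalate instead of hallucinating
    return confident

chunks = [
    Chunk("hr-001", "Salary bands...", 0.9, {"hr"}),
    Chunk("pol-007", "Travel policy...", 0.8, {"all-staff", "hr"}),
]
ctx = answerable_context(chunks, {"all-staff"})
print([c.doc_id for c in ctx])  # only the document this user may see
```

A vendor who cannot describe where this filtering happens in their stack, and what the system does when `answerable_context` comes back empty, has not thought about failure modes.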
5. Model the real cost, not only the licence
Some AI products are cheap to start and expensive to run. That cost growth often appears only after the workflow is live.
- How does cost scale with production volume?
- What happens with long documents, retries, or heavy context?
- How much human review is still required?
- What internal support or monitoring burden remains?
Licence cost is visible. Review cost, exception handling, and usage growth are usually where the business case degrades.
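A back-of-envelope model makes this degradation visible before go-live. Every figure below is an assumption to be replaced with your own numbers; the point is the shape of the curve, not the values.

```python
# Illustrative cost model: licence cost vs. the run costs that scale
# with volume. All figures and parameter names are assumptions.
def annual_cost(docs_per_month, licence_per_year=20_000,
                tokens_per_doc=6_000, price_per_1k_tokens=0.01,
                retry_rate=0.15, review_minutes_per_doc=2,
                review_cost_per_hour=60):
    docs_per_year = docs_per_month * 12
    effective_docs = docs_per_year * (1 + retry_rate)  # retries re-bill tokens
    model_cost = effective_docs * tokens_per_doc / 1000 * price_per_1k_tokens
    review_cost = docs_per_year * review_minutes_per_doc / 60 * review_cost_per_hour
    return licence_per_year + model_cost + review_cost

# At low volume the licence dominates; at high volume, human review does.
print(round(annual_cost(500)))     # → 32414: licence is most of the bill
print(round(annual_cost(20_000)))  # → 516560: review is most of the bill
```

Run with your own volumes and review times: if the review term dominates, the licence negotiation is not where the business case lives.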
6. Separate build-vs-buy from vendor-vs-vendor
Many teams compare vendors before they have even decided whether the workflow belongs in off-the-shelf software at all. Sometimes the right answer is a vendor product. Sometimes it is a lighter OpenAI API workflow with tighter internal control. Sometimes the workflow is not stable enough to automate seriously yet.
The correct question is not “should we buy AI?” It is “where should the critical logic, risk, and operating ownership live?”
7. Use a go / no-go gate
The final decision should not be vague enthusiasm. It should be one of five outcomes:
- proceed now
- proceed with controls
- run a contained pilot first
- re-scope the workflow before buying
- reject
That keeps the commercial decision explicit and protects teams from sleepwalking into avoidable complexity.
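The gate can even be written down as explicit logic so the decision is recorded rather than implied. The criteria and their ordering below are one possible example, not a prescription:

```python
# Hypothetical sketch: the five-outcome gate as explicit, auditable logic.
# Which checks map to which outcome is an illustrative choice.
def gate(outcome_owner, data_flow_clear, cost_modeled, workflow_stable):
    if not workflow_stable:
        return "re-scope the workflow before buying"
    if not outcome_owner:
        return "reject"
    if not data_flow_clear:
        return "run a contained pilot first"
    if not cost_modeled:
        return "proceed with controls"
    return "proceed now"

print(gate(outcome_owner=True, data_flow_clear=True,
           cost_modeled=False, workflow_stable=True))
# → proceed with controls
```

The value is not the code; it is that every "proceed" carries a named reason that can be revisited after go-live.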
Conclusion
The most useful AI buying question is not “which tool has AI?” It is “which system can improve this workflow without creating hidden cost, governance risk, or fragile dependency?”
Use the checklist, push the vendor harder than they expect, and force the workflow conversation before the software conversation gets too far ahead.
Need a second set of eyes before you commit?
I help leadership, finance, operations, and BI teams challenge AI vendor claims and decide whether the right next step is buy, pilot, build, or reject.
Discuss AI software diligence