Garbage in, garbage out: why your AI pilot cohort is everything

You've been burned before. A consultant who promised the world, took the fee, and left you with a report nobody read. A software rollout that looked great in the demo and died on contact with real work.

So this time with AI, you are starting with a pilot.  
Smart move.

Fact: the people in your pilot matter more than the tool you pick.

The thing about AI adoption is that it's not a tools thing, it's a people thing. Your AI adoption success begins and ends with how your people use it. And your pilot success hinges on who you choose for it.

The number one thing not to do?

Fill it with enthusiasts.

Seems like a logical move. Pick the enthusiasts, the lovers of the shiny and new. They'll really want AI to succeed. They'll move fast, stay positive, and the feedback will come back strong.

Except you'll learn nothing useful.

All you're really doing when you fill an AI training pilot with enthusiasts is measuring how much eager people enjoy something they were already excited about.

It doesn't measure what happens when you give it to a sceptic, or a busy site manager. You'll have no precedent for that, and you won't be ready for the issues they bring because you won't have encountered them.

Who should you put into a pilot?

The best pilot group has three qualities.

1. They're willing and have not been press-ganged into it.

2. They do the same work as most people in the company.

3. They sit inside a workflow where you can actually measure what changes.

A single department, end-to-end, is far more useful than a horizontal slice across the whole business. One team. One manager who can see before-and-after. One set of coherent use cases rather than random tests with people who don't work together.

What to avoid

1. Senior management only. They'll be genuinely interested while they're in the room. Then they'll walk back to their desks and not touch it again for a fortnight. Their data will be thin. Their attention, elsewhere.

2. Younger staff only, on the assumption they'll just figure it out. They won't; not in any way you can measure. They won't have enough lived experience of 'how we do things around here' to judge what will actually work.

3. Wildly mixed seniority in the same group. A principal engineer and a junior administrator are operating in entirely different worlds. The group dynamics will distort the data before you've even started analysing it.

4. People who are mid-crisis. A team firefighting a major project deadline will default to what they know. The pilot becomes one more imposition. Results will suffer - not because AI underperformed, but because nobody had the headroom to try.

5. People with a strong stake in the status quo. A long-serving process owner whose professional identity is built around a manual system they designed will comply minimally. Their presence skews everything around them.

Choosing your pilot cohort is like selecting a jury

Fans of The Lincoln Lawyer know the score. The lawyer doesn't just take whoever shows up; he studies who's in the room, understands their biases, their openness, their resistance, and constructs a group that will actually surface the truth.

Your pilot cohort works the same way. Get it wrong and you don't get useful data. Stack it with enthusiasts and you'll see great satisfaction scores that tell you nothing about real adoption. Populate it with sceptics and you'll manufacture failure.

The right cohort is a deliberate cross-section: some who love tech, some who are cautious, and some who hate it. At the AI Institute we love converting sceptics into enthusiasts, because if AI works for the person who didn't want it to, that's a powerful internal case study. That's the person colleagues will believe.

That's why we help you think through who goes in the room before anyone sends a calendar invite.

How we built our pilot process

Before any training begins, we use a voice-activated AI interview tool to take a baseline. We ask your team directly - what's slowing you down? Where does the work pile up? What part of the job quietly drains the day?

It's not so much a survey as a conversation, at scale, that surfaces the real problems - not just the ones people think you want to hear.

Then we design the course material around what comes back. The workbooks, the sessions, the video recordings - all of it is built around solving the problems your specific team named.

This is also why we don't pursue accreditation. Accredited programmes require curricula to be signed off months in advance, which means locking in content before you've spoken to a single person on the receiving side. We'd rather build something that actually fits client needs than frame a certificate around a problem we haven't diagnosed yet.

Then we measure again at the end.

Same tool. Same questions. Different answers and a clear, documented picture of what shifted.

That's what before-and-after actually looks like.

An AI pilot is a rational risk management tool.

But a poorly designed AI pilot trades one risk for another by generating data that feels like proof and isn't. That's why it's so important to choose the pilot group carefully. Define success upfront and measure it properly.

AI optimised summary

A practical guide for construction firms running AI pilots. Covers who to include, who to avoid, and how to design a cohort that produces data you can actually trust - including how the AI Institute measures baseline pain points before training begins and tracks what shifts at the end.
