We recently brought a group of TA leaders together for our first event, AI for TA Teams: A how-to guide for your hiring funnel. The idea was to hear from people who use AI every day and get a clear picture of what’s working for them.
Shiv Brodie (Metaview), Amandeep Shergill (Automattic) and Cait Mallinson (Hyperexponential) walked through real examples from their own teams, including sourcing agents that work from unstructured kickoff notes, workflow automation that cuts days out of role openings, and screening tools that reduce a thousand CVs to something manageable in minutes.
A few things stood out:
↪ Regular experimentation keeps teams sharp.
↪ People learn faster when they try things themselves.
↪ AI is most useful when it frees people to do the work that actually needs a human.
This guide pulls those ideas into one place, so you can take the same steps in your own team, whether you’re exploring your first use case or refining the ones you already have.
We’ll start where the panel started: picking the right problem to solve.
#1. Pick the right TA problems for AI to solve

Our panellists started from the same place: a part of the hiring process that had become unmanageable.
For Shiv Brodie at Metaview, it was sourcing. Hiring was ramping up, she was the only recruiter, and she’d already squeezed everything she could out of traditional LinkedIn searching. “Agent mode was fine, but it wasn’t as good as I would be,” she said. That was the moment she knew she needed something that could work while she wasn’t online.
Cait Mallinson at Hyperexponential faced a very different pain point. Role openings took two or three weeks. Information came from Slack, email, and direct messages. Some roles even went live without interview questions or a skills assessment. The problem was consistency. So she rebuilt the process around a simple trigger in Zapier and a Glean agent. “From four prompts, our agent builds the job description, assessment and interview plan in two minutes.” That alone removed days of back-and-forth and gave the talent team clean, complete briefs every time.
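To picture what that intake-to-brief automation looks like under the hood, here’s a minimal sketch. Everything in it is hypothetical: call_agent stands in for whatever agent platform you use (Glean, in Cait’s case), and the trigger payload is invented. The point is the shape: one structured request in, four artefacts out.

```python
# Hypothetical sketch of an intake-to-brief automation.
# call_agent is a placeholder, not a real Glean or Zapier API.

def call_agent(prompt: str, context: dict) -> str:
    """Send a prompt plus role context to your agent platform."""
    raise NotImplementedError("wire this to your own agent")

def open_role(request: dict) -> dict:
    """Turn a hiring manager's request into a complete role brief."""
    prompts = {
        "job_description": "Draft a job description for this role.",
        "skills_assessment": "Design a skills assessment for this role.",
        "interview_plan": "Create a structured interview plan.",
        "kickoff_summary": "Summarise the brief for the talent team.",
    }
    # One request in, four artefacts out; no chasing missing
    # details across Slack, email and direct messages.
    return {name: call_agent(p, context=request) for name, p in prompts.items()}

# Example trigger payload, e.g. from a form or webhook:
request = {
    "title": "Senior Backend Engineer",
    "team": "Platform",
    "must_haves": ["Python", "distributed systems"],
}
# brief = open_role(request)  # would produce all four artefacts
```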
Then there was volume. Automattic regularly sees 1,000 applications for a single role, and at one point received 1,200 CVs over a weekend. As Amandeep Shergill put it, “One person cannot go through all that.” His team tested several screening tools because they wanted something that learned from human decisions. One platform reduced a thousand-CV slate to two hundred in ten to fifteen minutes. “But you still need the human in the loop.” Tools helped triage, but recruiters made the final judgment calls.
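The triage pattern is easy to sketch too. Below, score_cv is an invented placeholder rather than any vendor’s model; the design point it illustrates is that the tool only narrows the slate, while recruiters keep the final judgment calls.

```python
# Triage with a human in the loop, sketched. score_cv is a
# hypothetical stand-in for whatever screening model you pilot.

def score_cv(cv: dict) -> float:
    """Placeholder relevance score in [0, 1]; swap in your tool."""
    return len(set(cv["skills"]) & {"python", "sql"}) / 2

def triage(cvs: list[dict], keep: int = 200) -> list[dict]:
    """Narrow a large slate to a shortlist a human can review."""
    ranked = sorted(cvs, key=score_cv, reverse=True)
    return ranked[:keep]  # recruiters make the calls on these

cvs = [{"name": f"candidate_{i}",
        "skills": ["python"] if i % 3 else ["python", "sql"]}
       for i in range(1000)]
print(len(triage(cvs)), "CVs surfaced for human review")
```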
When you hear these stories side by side, it’s clear each leader picked a problem that was measurable. Hours lost. Incomplete role briefs. Thousands of inbound CVs. If you can measure it, you can test against it.
💡 So if you’re unsure where to begin, look for the workflow that regularly gets pushed aside because nobody has the time to fix it. That’s usually your best starting point.
#2. Run a smart pilot

When teams talk about “testing new tools,” it often sounds more organised than it is. The reality is usually a bit chaotic: too many vendors, not enough baselines, and a lot of enthusiasm with no structure around it. Automattic learned that the hard way.
Amandeep explained it plainly. “We never just go with one. We build our own success framework, not the vendor’s, and if we’re not seeing results, we cut it off.” His team reached that point after running a pilot where the tool looked promising on paper, and even more promising in the vendor survey results, but didn’t hold up in practice. When he asked the recruiters privately if they’d pay for it, the answer was no. That’s when they decided their own criteria had to come first.
Before any trial starts, they log the current time spent and the current quality of output. They agree what “good” should look like. They test for a few weeks. They check in weekly, or sometimes twice a week, to capture what’s happening on the ground. If the trend isn’t improving, they stop early. “If it isn’t trending to success, get rid of it.”
They also negotiate longer unpaid trials when they can. “Because we’ve got big brands, it’s easier to get a two-month pilot,” Amandeep said. But even then, the team has to show up with structure. A two-month pilot without clear criteria is still just two months of noise.
Cait added a useful distinction when describing her own approach. Automation handled the workflow (the reminders, the Jira tickets, the Teams channels), but the AI agent handled the heavy lift: writing structured job briefs, designing assessments, creating interview plans. It’s a small point, but it helps teams stay clear about what they’re testing and why. If a workflow is broken because of missing information, you test automation. If the work itself needs to be produced, you test an agent.
One thing worth noting is the strain pilots can put on a team. Amandeep mentioned that switching tools every two months gets tiring, especially in a small TA function. Clear success criteria shorten the pilot window and stop the team getting stuck in endless testing cycles.
A good pilot should give you clarity on:
- How long the workflow took before
- How long it takes now
- Whether people actually enjoy using the tool
- Whether it made the work easier to do
If you can answer those questions honestly, you’re already ahead of most teams.
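As a rough illustration of what a success framework of your own can look like, here’s a small sketch. The baseline, target and weekly ramp are invented numbers; the idea is just to log the baseline before the trial starts, check the trend weekly, and stop early when it isn’t heading toward success.

```python
# Illustrative pilot tracker. All numbers are made up; the point
# is measuring against YOUR baseline, not the vendor's survey.

from dataclasses import dataclass

@dataclass
class PilotCheck:
    week: int
    hours_per_role: float   # time the workflow takes this week
    team_would_pay: bool    # the private question, not the vendor survey

BASELINE_HOURS = 12.0       # measured before the trial started
TARGET_SAVING = 0.30        # want at least 30% time saved by week 4

def keep_going(check: PilotCheck) -> bool:
    """Continue only if the pilot is trending toward success."""
    saving = 1 - check.hours_per_role / BASELINE_HOURS
    on_track = saving >= TARGET_SAVING * (check.week / 4)
    return on_track and check.team_would_pay

print(keep_going(PilotCheck(week=2, hours_per_role=9.0, team_would_pay=True)))
```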
#3. Win over your team

Getting people to use AI is often a behaviour problem. The panel made that clear without saying it directly. Adoption only sticks when people see the benefit for themselves or when the organisation signals that it’s expected.
At Automattic, things moved quickly once the CEO stepped in. For weeks, recruiters and engineers pushed back. Some didn’t trust the tools. Others weren’t convinced AI would help them do their jobs better. Then came the line that changed everything: AI usage would be considered in performance reviews. “Everyone adopted overnight,” said Amandeep Shergill. HR and Legal, who had been cautious, immediately switched gears and started looking for ways to support AI use instead of blocking it.
Hyperexponential took a different path. They built momentum from the inside. Cait is part of an internal AI council made up of 15 AI champions from across the organisation. Each person has 20% of their time protected to experiment, build small tools and share learnings. Wednesdays became hackathon days. Thursdays were for training and building with the talent team. It created a steady rhythm of progress. “We started with team pain points,” Cait said. “Once they saw their capacity increase, they wanted more.”
A subtle point she made is worth repeating. The champions came from engineering, people operations and other teams. It meant new ideas didn’t compete for attention; they spread naturally because colleagues saw familiar faces working with the tools, not just the AI enthusiasts.
Both stories show how different cultures make change happen. One uses a top-down signal to unlock movement. The other builds curiosity gradually, anchored in proof. Neither approach is better in theory. It depends on your organisation’s appetite for pace and how ready your teams are to experiment.
💡 The common thread is that proof comes first. Fix one real problem. Show the before and after. Help people see what becomes possible once that friction disappears. Once you have that, adoption feels less like a push and more like the obvious next step.
#4. Keep score properly

Most teams say they measure the impact of AI. Few can actually point to numbers that mean anything. The panel’s experience made something very clear: if you don’t track the before, you’ll never be able to prove the after.
At Hyperexponential, Cait’s team had to build their own way of measuring progress because the usual reporting didn’t capture where delays actually happened. “It’s hard to measure time savings without prior baselines,” she said. The solution was to track trigger points: when a hiring manager raises a request, when the Jira ticket gets created, when a role goes live, when the process closes. These timestamps revealed where work slowed down long before AI entered the picture. Once the Glean agent and automations were in place, they could finally see the difference in black and white.
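The mechanics are refreshingly simple. Here’s a sketch with invented timestamps and event names; the only requirement is logging a time at each trigger point so the gaps between them become visible.

```python
# Trigger-point tracking, sketched. Event names and dates are
# invented; map them to your own ATS or Jira events.

from datetime import datetime

events = {
    "request_raised": datetime(2024, 3, 1, 9, 0),
    "ticket_created": datetime(2024, 3, 4, 14, 0),
    "role_live":      datetime(2024, 3, 12, 10, 0),
    "process_closed": datetime(2024, 4, 2, 17, 0),
}

milestones = list(events.items())
for (start_name, start), (end_name, end) in zip(milestones, milestones[1:]):
    days = (end - start).total_seconds() / 86400
    print(f"{start_name} -> {end_name}: {days:.1f} days")
```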
For Amandeep at Automattic, quality matters most: “I care about speaking to the best people fast.” With over a thousand applicants for many roles, volume alone made traditional metrics meaningless. His team focused on two things:
• time to surface the strongest 5 to 10%
• whether the shortlists generated better conversations
He also paid attention to how the team felt about the tools. Vendor surveys often painted a rosy picture, but private conversations told the truth. If the recruiters didn’t enjoy using something, it didn’t stay.
Another useful point: both teams measured adoption through behaviour, not sentiment. How many roles used the new workflow? How often did people skip steps? How many times did someone revert back to a manual process? Those patterns were far more revealing than satisfaction scores.
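None of this needs special tooling to get started. A sketch, with hypothetical field names you’d map onto whatever your ATS or workflow tool already logs:

```python
# Behaviour-based adoption metrics, sketched. Each record is one
# role; the fields are invented examples of what you might log.

roles = [
    {"used_new_workflow": True,  "steps_skipped": 0, "reverted_to_manual": False},
    {"used_new_workflow": True,  "steps_skipped": 2, "reverted_to_manual": False},
    {"used_new_workflow": False, "steps_skipped": 0, "reverted_to_manual": True},
]

total = len(roles)
adoption = sum(r["used_new_workflow"] for r in roles) / total
reversions = sum(r["reverted_to_manual"] for r in roles)
skipped = sum(r["steps_skipped"] for r in roles)

print(f"adoption: {adoption:.0%}, reversions: {reversions}, steps skipped: {skipped}")
```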
The takeaway: measure what the work actually looks like. Capture the moments when things move forward. Track where they slow down. And check whether the tool changed any of that. If it didn’t, you’ll know immediately. If it did, you’ll have evidence you can take to your CFO without padding the story.
#5. Keep humans in the loop

Every speaker had examples of where AI helped, but they also had stories that made it clear where the line needs to stay firmly human.
Amandeep shared one of the clearest examples. At one point, Automattic relied on fully AI-written outreach. It was fast and scalable, but the impact wasn’t there. “Human personalisation took response rates from 7% to 40%,” he said. The team kept the AI-generated structure because it saved time, but they always rewrote the final message in their own voice. Candidates could tell the difference immediately.
Shiv at Metaview saw the same pattern in a different place. After years of living in LinkedIn messages and email sequences, she noticed something surprising: “Picking up the phone is cutting through again with candidates.” The more AI-generated content candidates received, the more powerful an actual conversation became.
Cait raised another important point. AI is becoming part of the candidate experience whether companies like it or not. Candidates are using tools to prepare, draft interview responses and refine examples. Her view was straightforward. “We want to make it a fair playing field,” she said. Her team started building candidate-facing tools so people could prepare well without feeling like they were on uneven ground.
💡 A common theme runs through all of this. AI can handle scale, structure and speed. Humans handle judgment, connection and context. Interviews, phone calls, calibration sessions and final decisions all sit on the human side of the line. When teams respect that split, AI strengthens the hiring process instead of flattening it.
#6. What’s being built next

None of the speakers treated AI adoption as something they’d “completed”. Each team already had new projects underway, mostly driven by gaps they uncovered once the first wave of automation landed.
At Hyperexponential, Cait Mallinson has been developing a real-time interview assessment agent internally. “It listens in real time, prompts follow-ups, grades response quality automatically, and helps newer interviewers dig deeper.” The vision isn’t to replace interviewers. It’s to make sure interviews aren’t shallow, repetitive or dependent on whoever happens to be in the room. There’s also a longer-term benefit she’s exploring: linking interview quality with eventual hiring outcomes so the team can spot where decisions drift.
Shiv at Metaview is working on something simple but high-impact. Interview transcripts highlighted patterns she’d been sensing for a while, such as repeat questions, unclear explanations, and inconsistencies across interviewers. She’s using that data to build a self-serve FAQ bot for candidates. It answers the questions she’s asked constantly, using content automatically pulled from transcripts and past interviews. It saves candidates time and takes low-value admin off her plate without compromising the candidate experience.
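A bot like that can start small. The sketch below matches incoming questions against a handful of FAQ entries with fuzzy string matching; the entries are invented, and in practice they’d be mined from transcripts the way Shiv describes, with low-confidence questions still routed to a human.

```python
# Minimal candidate FAQ bot. Entries are invented; real ones
# would be extracted from interview transcripts.

from difflib import SequenceMatcher

faq = {
    "what does the interview process look like":
        "Three stages: recruiter screen, technical interview, team panel.",
    "is the role remote":
        "Yes, fully remote, with optional office access.",
}

def answer(question: str, threshold: float = 0.5) -> str:
    """Return the best-matching answer, or escalate to a human."""
    q = question.lower()
    best = max(faq, key=lambda k: SequenceMatcher(None, q, k).ratio())
    if SequenceMatcher(None, q, best).ratio() < threshold:
        return "Good question, I'll pass it to the recruiter."
    return faq[best]

print(answer("Is this role remote?"))
```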
Amandeep is thinking about where recruiters spend their time. “We’re leaning into Talent Advisory 2.0,” he said, meaning more strategic work and fewer manual tasks. He’s already experimenting with internal tooling to make that change happen. One project he mentioned could reduce 60 hours of design-team workload to almost nothing. He isn’t an engineer, but AI has given him the ability to prototype tools himself and pass the working concept over to engineering once the value is proven.
What’s interesting about all three examples is how grounded they are. They’re practical improvements to interviews, candidate clarity and the recruiter’s workload. Each one solves a problem the team already feels. That’s usually a good sign you’re moving in the right direction.
Wrapping up
After spending time with the panel, one thing becomes clear fairly quickly. AI becomes genuinely useful in hiring when teams stop talking about it in the abstract and start treating it like any other operational tool. Pick a real problem. Test something small. Keep what works. Drop what doesn’t. Then repeat.
The companies winning with AI in talent aren’t automating recruiters out but making them better equipped. Every example in this guide reinforces that. Sourcing becomes lighter. Role openings get unblocked. High-volume screening stops swallowing entire days. Interviewing becomes more consistent. And recruiters get their time back for the work that actually needs a person.
There’s no perfect sequence or universal playbook. What the panel showed is that progress comes from momentum. When you build one successful workflow, you earn the right to build the next. Stakeholders trust the process a little more. The team becomes more confident experimenting. The tools get better because the feedback gets sharper.
If you’re starting now, keep it small, structured and honest. Track the before and after. Involve your team early. And remember the line that came up more than once during the event: humans decide, always.
Scede supports teams through this kind of work every day. If you want help shaping your first pilot or pushing an existing workflow further, we’d be happy to talk.