How We Streamlined our Hiring Process (And Stopped Missing Great Candidates)

We're hiring, and despite working with a top-tier recruitment partner who does the heavy lifting, I still have to review resumes, select from a shortlist of candidates, and prepare for and conduct interviews. It's necessary work, but it's time-consuming – especially if we're interviewing the wrong candidates.

And here's the thing that I've noticed recently: the resumes seem to have gotten harder to evaluate.

Candidates are obviously leveraging AI to tailor their resumes for every single role. No surprise, but it means that as a hiring manager it's even more important to filter out the buzzwords that mirror our advert and check for genuinely relevant experience. I needed a way to cut through the noise and see what mattered.

So I built an AI Assistant to help.

The Setup: Creating the Assistant (5 Minutes)

The beautiful part about QuivaWorks is how accessible this is for non-technical people. I didn't need to write code or understand machine learning. I used the 'Create with AI' feature and simply described what I wanted—like I was delegating the work to a smart colleague.

Here's what I told it:

> "Configure this agent to be an expert at recruitment for key hires at QuivaWorks. It should be able to write role descriptions, job adverts and review resumes that are received from candidates against role adverts/descriptions and help evaluate good fit without any bias.
>
> When reviewing resumes, it should include a summary table that provides a review of duration in roles (shortest, longest, average), number of years work experience and of that how many considered to be relevant. As part of the resume review, provide a very concise summary of relevant experience with brief citation against the key role requirements. Provide some sort of ranking of 'fit' against the role description and overall rating."

QuivaWorks then configured the entire assistant for me—setting up the instructions with an evaluation framework and the output format. I reviewed the detailed instructions it generated, made a handful of tweaks to match our exact hiring philosophy, and done. Start to finish: about 5 minutes.

The key part of those tweaks? Making sure the assistant evaluated candidates on experience and qualifications only - never demographics or unconscious bias triggers. I got it to anonymise the summary.
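For the curious, the tenure summary I asked for in the prompt boils down to simple arithmetic over parsed role dates. Here's a minimal sketch in Python of what that calculation looks like – the role data and field names are hypothetical, since the real assistant extracts these from the resume itself:

```python
from datetime import date

# Hypothetical parsed roles: (title, start, end) as extracted from a resume.
roles = [
    ("Support Analyst", date(2015, 1, 1), date(2017, 6, 1)),
    ("Product Manager", date(2017, 6, 1), date(2021, 3, 1)),
    ("Senior PM",       date(2021, 3, 1), date(2024, 3, 1)),
]

def years(start, end):
    """Duration of a role in fractional years."""
    return (end - start).days / 365.25

durations = [years(start, end) for _, start, end in roles]

# The summary-table fields requested in the prompt.
summary = {
    "shortest_role_years":    round(min(durations), 1),
    "longest_role_years":     round(max(durations), 1),
    "average_role_years":     round(sum(durations) / len(durations), 1),
    "total_experience_years": round(sum(durations), 1),
}
print(summary)
```

Judging how many of those years are *relevant* to the role is where the assistant's judgment comes in; the table above is just the mechanical part.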

The Real Test: Does It Actually Work?

Setup is one thing. Trusting the output is another. That was my real question before we could rely on it.

I needed to test it with real data to establish a baseline. So I grabbed the two most recent resumes from our recruiter's shortlist (good candidates, but not standouts) and reviewed them manually. Then I dug out the resume of our preferred candidate - someone we'd already interviewed and knew was genuinely strong.

I fed all three into the assistant with a simple prompt: "Review these resumes against this role description." Then I pasted in the job description text.

What came back in under 1 minute:

- A detailed summary of each resume with specific examples of how their experience matched the role requirements
- A fit rating (out of 10) for each candidate
- Key strengths and potential gaps clearly mapped
- Interview focus areas for each person
- A formatted comparison table breaking down each candidate against core role dimensions
- A clear recommendation of which candidate was the best fit

The results were uncannily accurate. Our preferred candidate scored highest—with the exact same focus areas we'd identified during our interviews. The two other candidates flagged similar gaps to what I'd noted during my own reviews. Details I'd caught, the assistant caught too. And in some cases, the assistant surfaced relevant experience I'd glossed over.

But more importantly: the assistant saved me from wasting interview time on candidates who weren't strong fits. I tested this theory by looking at candidates I'd interviewed and found mediocre. The assistant's analysis, applied retroactively, had flagged them as "Moderate Fit" with specific reservations. Those resumes had passed my initial scan. The assistant would have caught them earlier.

I ran this a few more times with additional resumes. Same pattern. The assistant isn't just useful - it is reliable.

What Made This Actually Work

There were three reasons this worked as well as it did:

1. Ease of creation and iteration. The 'Create with AI' feature in QuivaWorks meant I didn't need to be a prompt engineer. I described what I wanted in plain language, and the system handled the complexity of translating that into actual instructions. Then, when I wanted to refine something, I just tweaked the instructions - no rebuilding from scratch.

2. Accuracy on the details that matter. This was critical. Resume review relies on extracting specific information from PDFs and Word documents - dates, job titles, descriptions of responsibilities. With many AI tools, this is where hallucinations creep in. QuivaWorks' document handling is built to extract information without guessing or inventing details. That accuracy meant I could trust the analysis.

3. Team adoption and expansion. After I shared the assistant as a Team resource in QuivaWorks, a colleague saw the potential and took it further. Using the 'Improve with AI' feature, they added instructions for generating interview questions tailored to each candidate's background, real-time interview note-taking, and post-interview analysis. The team took shared ownership of the assistant and evolved it.

These enhancements were valuable, because once we had structured interview questions tied to the resume analysis, we were better prepared. And once we had real-time note-taking and post-interview summaries, we weren't scrambling to write things up from memory. We had a clean record of what we'd discussed, what we'd learned, and why we'd made decisions.

It went from a resume review tool to a complete recruitment pipeline tool in a matter of minutes. That's the real power here: it's not just that I could build it. It's that my team could collaborate on it, improve it, and own it together.

The Real Impact

Time-wise, this is saving me roughly 20 minutes per candidate we interview, plus around an hour per candidate we might have interviewed unnecessarily. That doesn't sound dramatic until you're juggling two or three hiring processes simultaneously. It adds up quickly.

But the bigger impacts are:

- Consistency. The evaluation framework is the same for candidate #1 and candidate #25. And the assistant flags the same types of gaps and strengths that I would - meaning I'm not missing great candidates because I'm rushing through a stack of resumes the recruiter shortlisted.

- Better interview decisions. Because the assistant has already done a thorough, structured assessment upfront and prepared questions and key focus areas, I walk into interviews knowing exactly what I need to dig into.

- Reduced unconscious bias. The assistant evaluates based on skills and experience alone and even anonymises the results.

- Clear records. When we make an offer or pass on a candidate, we have documentation of why. Not gut-feel. Evidence. That matters both for consistency and for defensibility.

Could You Benefit From This?

If you're hiring and spending time on resume review, interview prep and write-ups, this is worth exploring. Especially if you're managing multiple hiring processes or evaluating candidates across different roles where consistency matters.

We've made the Recruitment Assistant available in the QuivaWorks Marketplace. You can add it to your own account and try it for free. See if it surfaces the same gaps and strengths you'd identify manually. Test it with candidates you've already interviewed - compare the assistant's analysis to your own notes.

Then decide if this is something worth building into your hiring process. Would love to hear your feedback.

Try the Recruitment Assistant