The day AI stopped being magic and started doing work, with Ilan Frank
A studio conversation with Checkr’s Ilan Frank about starting from pain, earning trust, and why adding AI led to hiring more people
Recently I recorded a podcast episode with my good friend Ilan Frank. I asked Ilan to come in because he’s built through a few different climates. SAP. Slack. Airtable. Now Checkr. People who’ve shipped under different constraints tend to separate signal from theater. They know where the bodies are buried and where the leverage is.
We opened on the winter of 2022–2023, when ChatGPT landed and every roadmap started to wobble. Ilan remembered the feeling. “It felt very existential at the time.” He was at Airtable then, and the team had two options: use AI to help people build Airtable apps faster, or hand customers an AI toolset inside the apps they already had.
They chose the toolset. The power showed up. The use case did not. Users stared at it like a CNC machine when all they needed was a hammer. The output was impressive. The job to be done was fuzzy. So users didn’t take to it.
Ilan’s takeaway now is simple: start where something already hurts, not where something might be cool. Build a very small thing with a few customers and watch real usage. If nothing sticks in two or three months, change course.
Where pain was obvious
Checkr lives in a world where the problem does not need a pitch. Background checks at scale are messy. Different courts. Different formats. A lot of typos and free text. In a single year the system saw “two million unique charge types.” Not because America invented two million new crimes, but because humans typed “Burglry,” “Burg.,” and every variant you can imagine.
This is a perfect place for machines. Learn the pattern. Normalize the mess. Attach a confidence score. Pull a human when you are unsure. The surprise was not speed or cost. It was accuracy. “It is more accurate than humans doing the same work,” Ilan told me, and they validated it side by side. Still, they kept people in the loop because a background check is not a place to win theoretical debates. “AI is non-deterministic,” he said, so the system has spot checks, exception queues, and thresholds that route to humans.
That pairing matters. Scale from models. Guardrails and trust from people.
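If you want to picture the shape of that pairing, here is a minimal sketch of the confidence-plus-fallback pattern Ilan described, in Python. Everything in it, the names, the threshold, the data shapes, is my own illustration and an assumption, not Checkr's actual system.

```python
# Illustrative only: a toy version of the "confidence score + human fallback" pattern.
# None of these names or thresholds come from Checkr; they are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class ChargeResult:
    raw_text: str       # e.g. "Burglry", "Burg.", free text from a court record
    normalized: str     # canonical charge type the model mapped it to
    confidence: float   # model's confidence in that mapping, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.95  # hypothetical cutoff; below this, a person reviews it

def route(result: ChargeResult, human_queue: list) -> str:
    """Auto-accept confident normalizations; send the rest to an exception queue."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.normalized
    human_queue.append(result)  # a reviewer picks this up later
    return "PENDING_HUMAN_REVIEW"

# Usage: the model normalizes a typo-ridden charge, and the threshold decides
# whether it ships as-is or waits for a person.
queue = []
print(route(ChargeResult("Burglry", "burglary", 0.98), queue))  # -> "burglary"
print(route(ChargeResult("Burg.", "burglary", 0.62), queue))    # -> "PENDING_HUMAN_REVIEW"
```

The point of the pattern is not the threshold value; it is that the human queue is a first-class part of the product, not an afterthought.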
When trust becomes the product
After classification, the team tried something bolder. An AI explainer inside the report that turned the raw court text into plain English. Click. Read. Understand. This let the adjudicators (those who decide what to do with the results of the check) easily understand the data in front of them. For a moment it felt like the right kind of magic. Then customers realized a model wrote it.
“Right away we basically had to pull that back,” Ilan said. The risk was not abstract. Hiring decisions ride on this text. The fix was slower on purpose. Every new “charge + county” explanation goes through human review with criminologists before it reaches a customer. Ilan was direct about the trade. “It would be a lot more convenient if we could just go straight from prompt to customer, but we work in a very regulated space, and we want to be very careful.”
Product velocity feels great. Product trust pays the bills.
The plot twist: AI created jobs
My favorite moment was the least expected one. “We add AI to our product and then we end up actually hiring more humans.” New capability created new work. Reviewers. Domain experts. QA for model outputs. The company grew because the product grew. Ilan’s wider view: “Productivity just creates jobs.”
I buy it. We have seen the smaller version of this. When a feature finally pulls its weight, it drags new responsibilities into the light. Support grows. Success grows. Quality bars rise. You hire.
Even at Solid we do something similar: most of the work is done by AI and other algorithms, but to raise quality we actually hire analysts, humans who review the output before it reaches a customer.
The long tail that breaks your reputation
We talked about the part of the experience most teams ignore: the last 1 percent. Delays from mismatched documents. A form in the wrong language. Employment verification that needs three phone calls. You can have a great median, but a bad P99 trains your customers to expect pain.
Checkr is pushing on exactly that. Ilan said employment verification will get “considerably faster” with more instant signals coming online by year’s end. On the candidate side, he wants a helper that nudges people to answer the question they were actually asked, not a ghostwriter. Fewer stalls. Clearer stories. More fairness for both sides.
He also flagged what they sometimes see candidates upload: fake résumés and doctored proofs. AI makes these easier to generate, so the problem is getting worse. The fraud fight is no longer theoretical, and Checkr plans to build capabilities that help customers deal with it reliably.
What I took from the conversation
If you are steering an AI roadmap, here is the spine I heard:
Start where it hurts your users now. Not where a demo looks good.
Put people next to the model in the places where trust carries the weight.
Measure reality with a few customers. Let usage, not opinions, choose the path.
Expect to hire when it works. New capability makes new work.
And if you only keep one line from this episode, keep this one: “It felt very existential at the time,” but the teams that got through it treated AI like a material, not a miracle.
Want to hear the full episode? YouTube, Spotify and Apple Podcasts.