Onboarding Your AI: how to roll out learning software like Solid in the real world
Rolling out AI that learns your data is less about switching on features and more about teaching a new teammate, together.
Imagine a new senior analyst joining your company.
They are smart. They have seen a lot of different data stacks. They know SQL, modern BI tools, even your industry.
On day one, they still know almost nothing about your business.
You do not measure their success after a two-hour orientation and a couple of docs. You expect a ramp-up: first week, first month, first quarter. You give them access to your tools, let them shadow people, and you watch how quickly they start making good decisions with your data.
That is exactly how you should think about rolling out an AI-based technology like Solid.
Solid is a piece of software, but it behaves much more like a new team member than like a static SaaS feature. It has to learn your environment. It will be wrong sometimes. It will get better. And the way you structure that learning journey is what makes the POC and rollout work or fail.
In this post I want to talk about that journey.
How Solid learns your world
When we plug Solid into an organization, it does not start from a blank screen. It starts reading. A lot.
Solid automatically pulls in things like:
Your data warehouse schema
SQL query logs
BI models, dashboards and how they are used
Data documentation and wikis (most organizations have very little of this, and that’s fine)
JIRA tickets that talk about data issues or definitions
Your public website and help center
And usually a few more odd places where context is hiding
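To make that concrete, here is a minimal sketch of how you might inventory those sources before a rollout. The structure and names (context_sources, the connector types) are illustrative assumptions for this post, not Solid’s actual configuration format.

```python
# Hypothetical inventory of the "breadcrumbs" a tool like Solid might read.
# The names and connector types are invented; the point is the breadth of sources.
context_sources = {
    "warehouse_schema": {"type": "snowflake", "databases": ["ANALYTICS", "RAW"]},
    "query_logs": {"type": "snowflake_query_history", "lookback_days": 90},
    "bi": {"type": "looker", "include_usage_stats": True},
    "docs": {"type": "confluence", "spaces": ["DATA"]},  # sparse docs are fine
    "tickets": {"type": "jira", "projects": ["DATA", "ANALYTICS"]},
    "public_site": {"type": "web_crawl", "start_urls": ["https://example.com/help"]},
}

def summarize(sources: dict) -> None:
    """Print a quick inventory of what the AI will get to read."""
    for name, cfg in sources.items():
        print(f"{name}: {cfg['type']}")

if __name__ == "__main__":
    summarize(context_sources)
```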
If you read Your Data’s Diary: Why AI Needs to Read It, you have seen this idea before. Your data stack is already full of tiny breadcrumbs that explain how the business actually thinks. Solid treats those as training material.
From that, the platform can usually get to something like 70 percent understanding on its own:
It can map core entities and tables
It can understand how business concepts manifest themselves in the data
It can see which dashboards actually matter
It can infer common metrics and how they are calculated
That initial pass is fast. It is also imperfect.
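As a rough illustration of what that imperfect first pass can produce, a draft metric might come out with a confident definition, or with explicit open questions attached. The structure below is invented for this post, not Solid’s internal representation.

```python
from dataclasses import dataclass, field

@dataclass
class DraftMetric:
    """A metric as the AI inferred it from schema, query logs, and dashboards."""
    name: str
    sql: str
    confidence: float  # how sure the inference is, 0.0-1.0
    open_questions: list = field(default_factory=list)

# Inferred confidently, because the same pattern shows up all over the query logs:
monthly_revenue = DraftMetric(
    name="monthly_revenue",
    sql="SELECT date_trunc('month', paid_at), sum(amount) FROM payments GROUP BY 1",
    confidence=0.9,
)

# Inferred with gaps that only a human can close -- the "last 30 percent":
active_customers = DraftMetric(
    name="active_customers",
    sql="SELECT count(DISTINCT customer_id) FROM events  -- WHERE clause unclear",
    confidence=0.55,
    open_questions=[
        "Does 'active' mean any event, or only billable usage?",
        "Are trial accounts included?",
    ],
)
```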
The last 30 percent is where the real journey starts.
The 70 percent rule (and why it is a feature, not a bug)
Traditional SaaS trained us to think in binaries:
Does the feature work or not?
Is the integration live or not?
Is the dashboard correct or not?
With learning software like Solid, that mental model breaks.
If you wait for “100 percent accuracy” before you let anyone touch the system, you will never ship. At the same time, if you treat every wrong answer as a catastrophic bug, your team will lose trust before the AI even gets a chance to learn.
In The Ghost in the Machine: How Solid drastically accelerates semantic model generation, we showed how much of the semantic model can be auto-generated if you give the AI enough signals. But even there, the goal is not perfection out of the box. The goal is to shortcut the boring 70 percent so that humans can focus on the nuanced 30 percent.
That 30 percent is made of things like:
Subtle business rules that only live in someone’s head
Edge cases in how a metric is defined for one region or one product line
Legacy decisions that everyone hates but the business still relies on
Political constraints about which numbers are “official”
No model can guess these reliably without you.
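For illustration, once one of those rules finally gets written down, it might look something like this. The metric, the region, and the override are all made up; the point is that a person has to supply them.

```python
# A hypothetical, human-authored override for a metric whose definition differs
# by region -- exactly the kind of rule no amount of reading can infer on its own.
GROSS_MARGIN_EXPRESSIONS = {
    # Default: shipping is not part of cost of goods sold.
    "default": "revenue - cogs",
    # Legacy decision: the DE entity books shipping inside COGS, so it has to be
    # added back to keep margins comparable across regions.
    "region:DE": "revenue - cogs + shipping_cost",
}

def margin_expression(region: str) -> str:
    """Return the expression to use for gross margin in a given region."""
    return GROSS_MARGIN_EXPRESSIONS.get(f"region:{region}", GROSS_MARGIN_EXPRESSIONS["default"])
```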
Which brings us to the part most teams underestimate.
The customer is part of the product loop now
In the old SaaS world, the vendor built features, QA tested them, then threw them over the fence. You, as the customer, mostly validated integration and performance.
With Solid, the customer is inside the learning loop. That is not a nice-to-have; it is the design.
A typical journey looks something like this:
Ingestion and auto-learning
Solid connects to your stack, reads everything it can, and builds an initial semantic understanding.
Expert pass from Solid’s team
Our own data and product specialists review what the AI learned. We correct obvious mistakes, tune prompts, and add domain patterns we have seen across customers. This takes the model from “raw” to “respectable”.
Guided “stump the AI” sessions with your team
This is where it gets fun. Your analysts and business stakeholders start asking questions they truly care about. They try to break the system. When Solid gets something wrong, they mark it, correct it, and we treat that feedback as gold.
Focused correction and pattern learning
Every correction is not just a patch. It is a pattern. If we learn that “active customer” has a very specific meaning in your world, we propagate that everywhere (see the sketch after these steps). This is where our approach in AI for AI: how to make “chat with your data” attainable within 2025 kicks in: using AI to fix data for AI.
Rollout to broader users
When the core flows are stable, we expand to more use cases and more teams. Each new group brings new questions, which means more learning.
Continuous refinement
Data changes, business changes, incentives change. So the model keeps learning. But at this point, you are improving an already useful teammate, not babysitting a toddler.
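To make the correction step concrete, here is a minimal sketch of treating one piece of feedback as a pattern that propagates everywhere, rather than as a one-off patch. The classes and the simple string replacement are illustrative assumptions, not how Solid works internally.

```python
from dataclasses import dataclass

@dataclass
class Correction:
    """One piece of human feedback captured during a stump-the-AI session."""
    term: str    # the business concept that was misunderstood
    wrong: str   # what the AI assumed
    right: str   # what the business actually means

def propagate(correction: Correction, glossary: dict, metrics: dict) -> None:
    """Apply one correction everywhere the concept appears, not just in one answer."""
    # 1. Update the shared glossary so future questions use the right definition.
    glossary[correction.term] = correction.right
    # 2. Re-derive every metric definition that leaned on the old assumption.
    for name, definition in metrics.items():
        if correction.wrong in definition:
            metrics[name] = definition.replace(correction.wrong, correction.right)

glossary: dict = {}
metrics = {"active_customers": "count customers with any event in the last 7 days"}

propagate(
    Correction(
        term="active customer",
        wrong="any event in the last 7 days",
        right="a billable API call in the last 30 days",
    ),
    glossary,
    metrics,
)
print(metrics["active_customers"])
# -> count customers with a billable API call in the last 30 days
```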
The key shift is this: your organization is now a co-teacher of the AI.
If you treat feedback as “extra work”, you will resent the process. If you treat it as “we are teaching a very fast learner who will keep that knowledge forever and share it with everyone”, the investment makes sense.
Redefining what “POC success” means
Most POCs still look like classic software projects: a few weeks of integration, a checklist of features, a final demo.
For learning software, that is not enough. You are not just checking “does it run.” You are checking:
Coverage – Did the system actually learn the important part of your world, not just a toy example?
Accuracy and trust – When it answers, is it reliable enough that people would act on it?
Learning velocity – When something is wrong, how quickly can you teach it the right behavior so it sticks?
If you only optimize for the demo, you will overfit to a narrow happy path. If you optimize for learning velocity, you get something that keeps getting better after the POC is over.
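One way to keep learning velocity honest is to track the same benchmark across teaching rounds and watch the slope, not just the latest score. A tiny sketch, with made-up numbers:

```python
# Hypothetical benchmark scores after each round of corrections.
# What matters is how fast the number moves, not only where it ends up.
rounds = [
    ("after auto-learning only", 0.70),
    ("after the vendor expert pass", 0.78),
    ("after the first stump-the-AI session", 0.84),
    ("after the second correction round", 0.88),
]

for (label_prev, prev), (label_now, now) in zip(rounds, rounds[1:]):
    print(f"{label_now}: {now:.0%} ({now - prev:+.0%} vs. {label_prev})")
```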
What actually happens in a Solid POC
The POCs we run at Solid are a good example of how testing AI-driven software is different. The deck we share with customers breaks this down very simply. A Solid POC has four main moves.
Align on scope and questions
We pick one business domain and one or two real use cases. Together we define success: which metrics matter, and which business questions Solid should be able to answer confidently. You give us a starter set of around ten “golden questions” that represent reality, not a made-up demo.
Turn Solid on and let it read
For about two weeks, Solid connects to your warehouse, SQL logs, BI, docs, Jira or Slack, and sample data where needed. It auto-builds documentation and semantic models and usually gets you to about 70% accuracy without anyone writing specs or YAML.
Build a shared benchmark and test
From your initial ten questions, Solid generates more variations and uses them as a benchmark. Your team and ours test the answers directly in tools like Snowflake Cortex Analyst, Genie, or BI. When something is off, we fix the model, glossary, or context and re-run the benchmark. The goal for the POC is very concrete: reach about 85% accuracy on that benchmark, in the tools your users will actually use (a sketch of such a benchmark loop follows these steps).
Go live with real users and expand
Once the first domain clears the benchmark, we invite a small group of business users to ask their own questions and give feedback. Then we expand to more users and more domains, reusing everything the AI and your team already learned. Over the first month you usually end up with AI-ready documentation, working semantic models for at least one key domain, and a live feedback loop that keeps accuracy improving.
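Conceptually, the benchmark loop from the third move is simple: run the golden questions and their variations against the tools your users will actually use, compare to agreed expected answers, and measure the pass rate against the roughly 85% bar. In this sketch, ask is a stand-in for whichever tool answers the question; it is an assumption for illustration, not a real API.

```python
from typing import Callable

TARGET_ACCURACY = 0.85  # the POC bar described above

def run_benchmark(
    questions: list[tuple[str, str]],  # (question, expected answer) pairs
    ask: Callable[[str], str],         # stand-in for the tool users will actually query
) -> float:
    """Return the share of benchmark questions answered correctly."""
    failures = []
    for question, expected in questions:
        answer = ask(question)
        # Naive string comparison; in practice you would compare numbers or
        # generated SQL, but the shape of the loop stays the same.
        if answer.strip().lower() != expected.strip().lower():
            failures.append((question, expected, answer))
    accuracy = 1 - len(failures) / len(questions)
    for question, expected, answer in failures:
        # Every miss becomes something to teach: fix the model, the glossary,
        # or the context, then re-run the benchmark.
        print(f"MISS: {question!r} -> expected {expected!r}, got {answer!r}")
    print(f"accuracy: {accuracy:.0%} (target {TARGET_ACCURACY:.0%})")
    return accuracy
```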
Under the hood, there are weekly sessions, 1:1 time with our product team, and very explicit milestones, but the essence is simple: define the questions, let the system learn, and then stress-test it together.
Aligning expectations between vendor and customer
For this to work, vendor and customer have to agree on a few things up front:
We are onboarding a learner, not installing a feature. There will be a ramp. Day-one value is real, but it is not the full picture.
Your people are part of the loop. We bring the platform and our playbooks. You bring the meaning of “active customer,” “churn,” or “qualified lead” in your world.
Feedback is first-class. Marking an answer as wrong and correcting it is not “extra work.” It is literally how your institutional knowledge gets encoded into the system.
If we are clear on that, the relationship feels less like vendor vs. buyer and more like a joint team trying to level up a new colleague.
This is the new shape of software
Zooming out, this is where a lot of AI products are heading.
Old software shipped features. You configured them a bit and then mostly hoped nothing changed too much.
New software ships learning systems. They arrive with opinions, read everything they can, then adapt to the specifics of your company. The real question is not “does it have feature X” but “how quickly can it learn Y.”
That requires a different rollout playbook and a different mindset from both sides.
Bringing it back to your next POC
If you are about to start a POC for any AI that needs to understand your data, a few simple questions can keep you honest:
Have we agreed this is a learning journey, not a one-off demo?
Do we know which real business domain and questions we are using as the test?
Are we giving the system enough “breadcrumbs” to read: schemas, logs, docs, tickets, dashboards?
Do we have named people who will teach it and review answers?
Are we measuring not just accuracy, but how fast that accuracy improves?
If most of these are a “yes,” you are not just running another POC. You are onboarding an AI teammate that will keep getting better at your business long after the slideware is gone.

