Agentic analytics, for real: how Solid is already doing what Gartner describes
Gartner’s Market Guide for Agentic Analytics sounds like a roadmap for the future of data and AI. For our customers, it is mostly a description of what they are already doing today.
When I read Gartner’s new Market Guide for Agentic Analytics, I had a very specific reaction.
This is not some far-off vision of how analytics might work one day. It is a pretty good description of what we are already building and deploying with customers.
And if you want a concrete example, we have a public case study with SurveyMonkey that shows these ideas in production today:
SurveyMonkey x Solid Case Study
What Gartner actually means by “agentic analytics”
Let me translate the report into four simple ideas.
1. Start from specific, repeatable scenarios
Agentic analytics is not “ask anything in a chat box” (we have already discussed why that is not the important use case).
The successful patterns start from well-defined, high-value workflows, for example:
Weekly business reviews
Account health and renewals
Campaign performance
Operational anomaly detection and follow up
These are things that already happen, on a schedule, with some manual collection of numbers and spreadsheets.
Gartner’s point is that you get real value when agents make these workflows faster and better, not when you show a cool one-time demo.
2. Semantics and policy are the foundation
The report is very clear. If you point agents at raw tables and column names and hope they “figure it out,” you are going to get confident nonsense.
Organizations that succeed:
Invest in semantic layers, ontologies and knowledge graphs for their important domains.
Align metric definitions across tools, dashboards and teams.
Make sure agents and models are grounded in that shared layer.
In Journey we have been calling this the “context layer” between your warehouse and your AI. Posts like “The Two Souls of a Semantic Layer” and “Behind the scenes: how we think about semantic model generation” are essentially deep dives into this point.
3. Human and AI roles need to be explicit
Gartner spends a lot of time on delegation, escalation and human review.
The questions they ask are simple:
Which tasks can an agent do alone?
Which tasks must always be reviewed by a human?
When should an agent suggest rather than act?
The answer will be different for forecasting revenue, sending emails to customers, or building a one-off dashboard for a team. The key is to make the rules explicit instead of letting each team invent its own.
4. Governance and cost are first-class concerns
Agentic systems can run expensive queries and call expensive models.
That means you need:
Observability into what agents are doing.
Guardrails around which data they can touch.
Controls on how much they can spend for a given type of insight.
Gartner talks about this as managing the “cost per useful insight” rather than just tracking model usage. It is a good mental model.
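To make the “cost per useful insight” idea concrete, here is a minimal sketch in Python. The `InsightLedger` class, the budget figure and the per-run costs are all hypothetical, not anything from the Gartner report or Solid’s product; the point is simply that spend is tracked per workflow and divided by outcomes a human actually accepted, not by raw model calls.

```python
from dataclasses import dataclass, field

@dataclass
class InsightLedger:
    """Tracks spend for one agent workflow and computes cost per useful insight."""
    budget_usd: float
    runs: list = field(default_factory=list)  # (cost_usd, was_useful) per agent run

    def record(self, cost_usd: float, was_useful: bool) -> None:
        # was_useful means a human accepted the output, not just that the agent ran.
        self.runs.append((cost_usd, was_useful))

    @property
    def total_spend(self) -> float:
        return sum(cost for cost, _ in self.runs)

    @property
    def cost_per_useful_insight(self):
        useful = sum(1 for _, ok in self.runs if ok)
        return self.total_spend / useful if useful else None

    def within_budget(self) -> bool:
        return self.total_spend <= self.budget_usd

ledger = InsightLedger(budget_usd=50.0)
ledger.record(1.2, was_useful=True)
ledger.record(0.8, was_useful=False)   # spend that produced nothing usable still counts
ledger.record(2.0, was_useful=True)
print(round(ledger.cost_per_useful_insight, 2))  # 2.0
print(ledger.within_budget())                    # True
```

Notice that the failed run still adds to the numerator: that is exactly the difference between tracking model usage and tracking cost per useful insight.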
So that is the theory. Now let us talk about how Solid is positioned inside that world.
How we position Solid in the agentic analytics landscape
When people ask what Solid is, I usually describe it like this:
Solid is the context layer between your data and your AI systems.
More concretely, Solid:
Learns semantic models automatically from your warehouse, query history, BI and other sources.
Keeps those models governed, tested and up to date as things change.
Exposes that logic through APIs and MCP so that any agent, LLM or workflow can use it.
This is not a generic chatbot. It is an enabler for all of your AI projects that leverage your structured data.
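As a hedged illustration of what “exposing that logic through APIs” can look like from an agent’s point of view, here is a tiny sketch. The payload shape, field names and the `metric_query` helper are invented for this post, not Solid’s actual API; the idea is that the agent asks for a governed metric by name instead of writing raw SQL.

```python
# Hypothetical sketch: an agent requesting a governed metric from a context layer.
import json

def metric_query(metric: str, dimensions: list, time_grain: str) -> str:
    """Build a request body an agent could send to a semantic-layer API."""
    payload = {
        "metric": metric,          # resolved against the governed metric catalog
        "dimensions": dimensions,  # only dimensions the caller is allowed to see
        "time_grain": time_grain,
    }
    return json.dumps(payload, sort_keys=True)

body = metric_query("weekly_active_accounts", ["region"], "week")
print(body)
```

The same request could just as naturally arrive over MCP from any agent or workflow; what matters is that the metric definition lives in one governed place, not in the prompt.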
Here is how that maps to Gartner’s four themes.
Semantic foundations, automated
We agree with Gartner that semantic and policy alignment is foundational. The problem is that almost nobody has time to build and maintain a semantic layer manually.
That is why Solid:
Mines query logs, BI dashboards and recurring reports to see how the business already uses data.
Auto-generates candidate semantic models that reflect real joins, metrics and entities.
Lets humans review and refine those models instead of starting from a blank file.
We wrote about this in “The Ghost in the Machine: How Solid drastically accelerates semantic model generation” and in “End-to-end: generating semantic models for Snowflake Cortex Analyst/Intelligence in two weeks”.
The result is very similar to what Gartner describes: a selective semantic layer for the most important domains, aligned with how the business actually works.
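To show the flavor of mining query logs, here is an intentionally naive sketch. A real pipeline would use a proper SQL parser and far more signal; the regex, the sample log and the counters below are illustrative only.

```python
# Illustrative only: ranking tables and joins by how often they appear in a query log.
import re
from collections import Counter
from itertools import combinations

query_log = [
    "SELECT ... FROM orders o JOIN customers c ON o.customer_id = c.id",
    "SELECT ... FROM orders o JOIN customers c ON o.customer_id = c.id "
    "JOIN regions r ON c.region_id = r.id",
    "SELECT ... FROM sessions",
]

table_counts = Counter()
join_counts = Counter()
for sql in query_log:
    tables = re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, flags=re.IGNORECASE)
    table_counts.update(tables)
    # Count co-occurring table pairs as candidate joins for the semantic model.
    join_counts.update(frozenset(p) for p in combinations(sorted(set(tables)), 2))

print(table_counts.most_common(2))  # orders and customers dominate
print(join_counts.most_common(1))   # the orders<->customers pair is the backbone
```

Even this toy version surfaces the right starting point: the tables and joins the business already leans on are the ones worth modeling first.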
Analysts first, “AI for everyone” second
In “Stop saying ‘Garbage In, Garbage Out’, no one cares”, we argued that the business will not wait until all your data is perfect before they try AI. You need a safe way to get value now and improve quality over time.
Our approach is to start with the analysts and data team:
Give them an AI copilot that understands their warehouse and their metrics.
Let them supervise and correct what the AI does.
Only then open up the same capabilities to broader audiences in a governed way.
This is very close to Gartner’s picture of humans as supervisors and orchestrators of agentic systems rather than bystanders.
Embedded, not “sidecar chat”
The value of a “chat with your data” solution is extremely limited. That should NOT be your goal.
Solid is designed to show up where work is already happening:
Inside recurring analytics rituals like QBRs and weekly reviews.
Right next to existing BI tools and documents.
Triggered by events in the data, not only by user questions.
This maps to what Gartner calls conversational and perceptive analytics. We just think of it as embedding AI where it can actually help people do their jobs.
How we run pilots in an “agentic” way
The other part of the Gartner guide that felt familiar was their advice to “start with concrete scenarios” and “treat pilots as the first step toward production, not proofs of concept.”
That is almost exactly how we run pilots.
1. Pick one high value, bounded scenario
We never start a pilot with “ask anything about your data.”
We start with something like:
Help marketing run weekly campaign reviews without manual data collection.
Give customer success a reliable view of at-risk customers before renewals.
Make it trivial for product managers to answer their five most common usage questions.
These scenarios have three things in common:
They happen all the time.
People are already doing them, but with a lot of manual effort and spreadsheets.
There is clear business value in making them faster and more reliable.
This is very close to Gartner’s advice to anchor agentic analytics initiatives in specific analytical scenarios and measure cycle time and decision quality, not just “AI usage.”
2. Build the semantic slice that matters
Once we have a scenario, we narrow down the data to the actual slice that powers it.
Solid then:
Learns from historical queries and dashboards which tables and joins really matter.
Proposes a semantic model with the relevant entities, metrics and relationships.
Generates human readable documentation so business stakeholders can sanity check it.
You do not need a company-wide semantic layer on day one. You need a good semantic model for the scenario in front of you, one that can then grow step by step.
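Here is a hypothetical sketch of what such a scenario-sized semantic model might look like. The field names, tables and the `describe` helper are invented for illustration, not Solid’s actual format; the takeaway is that one scenario needs only a small, reviewable slice.

```python
# A minimal, hypothetical shape for a per-scenario semantic model.
campaign_review_model = {
    "scenario": "weekly_campaign_review",
    "entities": {
        "campaign": {"table": "marketing.campaigns", "key": "campaign_id"},
        "spend": {"table": "marketing.ad_spend", "key": "spend_id"},
    },
    "relationships": [
        {"from": "spend", "to": "campaign", "on": "campaign_id"},
    ],
    "metrics": {
        "total_spend": "SUM(spend.amount_usd)",
        "cost_per_click": "SUM(spend.amount_usd) / NULLIF(SUM(spend.clicks), 0)",
    },
}

def describe(model: dict) -> str:
    """Render human-readable documentation so stakeholders can sanity-check it."""
    lines = [f"Scenario: {model['scenario']}"]
    lines += [f"- entity {name} <- {spec['table']}"
              for name, spec in model["entities"].items()]
    lines += [f"- metric {name} = {expr}"
              for name, expr in model["metrics"].items()]
    return "\n".join(lines)

print(describe(campaign_review_model))
```

A model this size is something a business stakeholder can actually read end to end, which is what makes the review-and-refine loop practical.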
3. Define human and AI roles up front
Before any agent does anything beyond drafting and suggesting, we agree with the customer on clear guardrails.
For a given workflow we decide:
What the agent is allowed to do automatically.
What requires explicit human approval.
What the agent is not allowed to touch at all.
For example:
An agent may be allowed to draft commentary for a business review slide, but not to send that slide to an executive without someone reading it.
An agent may flag unusual patterns in the data, but not create or modify alerts in production systems.
Anything that touches pricing, discounts or regulated metrics always goes through human review.
This is our practical version of the delegation and escalation frameworks Gartner describes.
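As an illustration of making those rules explicit rather than tribal knowledge, here is a small sketch. The action names and autonomy levels are hypothetical, not a real Solid policy format; the useful property is that unknown actions default to the safest option.

```python
# Hedged sketch: delegation rules as an explicit policy table, not tribal knowledge.
from enum import Enum

class Autonomy(Enum):
    AUTO = "agent may act automatically"
    REVIEW = "requires explicit human approval"
    FORBIDDEN = "agent must not touch this"

POLICY = {
    "draft_review_commentary": Autonomy.AUTO,
    "send_slide_to_executive": Autonomy.REVIEW,
    "flag_data_anomaly": Autonomy.AUTO,
    "modify_production_alerts": Autonomy.FORBIDDEN,
    "change_pricing_metric": Autonomy.REVIEW,  # regulated metrics always get a human
}

def allowed_without_human(action: str) -> bool:
    # Unknown actions fall back to the most restrictive level by design.
    return POLICY.get(action, Autonomy.FORBIDDEN) is Autonomy.AUTO

print(allowed_without_human("draft_review_commentary"))  # True
print(allowed_without_human("send_slide_to_executive"))  # False
```

Writing the table down is the point: it turns “when should an agent suggest rather than act” into something a team can review, version and audit.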
4. Measure outcomes, not just demos
Finally, we define what success looks like in terms that matter to the team.
Typical success metrics include:
Reduction in analyst time spent on repetitive requests.
Faster turnaround for key decisions (for example weekly reviews prepared in hours instead of days).
Fewer “shadow” spreadsheets and one-off queries.
Reuse of the semantic model in new workflows after the pilot.
The goal is that a pilot creates a durable asset, not just a one time demo. The semantic models, guardrails and workflows we build in the first month are designed so that they can power the next set of use cases.
A quick example from the field
One short quote from our public case study captures the spirit of what “agentic analytics” feels like when it works.
In that case study, Meenal Iyer explains:
“Our goal is for AI to remain dependable. Teams should be able to use it with confidence, knowing the answers reflect how the business actually works.”
This is exactly why we care so much about semantics, governance and human-in-the-loop design. Agentic analytics only matters if teams can depend on it.
If you want to see the full story, including the impact on accuracy, time to production and maintenance, you can download the case study here:
Download the case study
Where we are going next
Gartner also talks about “perceptive analytics” and agents that:
Monitor events and changes continuously.
Understand goals and constraints.
Adapt their behavior based on feedback.
In this blog we explored similar ideas in “Vibe Analytics: The new era of data experiences”. The short version is that we want AI to feel less like a tool you occasionally consult and more like a reliable colleague who taps you on the shoulder when something important happens.
A lot of our current roadmap is about:
Learning not just from schemas, but from how people actually use data over time.
Bridging the “two souls” of the semantic layer, so that governance and insight are not in conflict.
Making it easy for customers to go from one well defined scenario to a network of agentic workflows that reuse the same trusted foundation.
If you care about agentic analytics, here is what I would do next
If your leadership is asking “What is our agentic analytics strategy?” or “How do we move beyond pilots?”, here is a simple plan.
Read the Gartner Market Guide
Grab it here and read it with your own organization in mind:
Analyst Report: Gartner Market Guide for Agentic Analytics
Read the case study in parallel
Look at how the ideas in the report show up in a real deployment:
SurveyMonkey x Solid Case Study
Skim a few Journey posts to see how we think about this internally
You will see the same themes repeated again and again:
Start from concrete scenarios.
Invest in semantics where it matters.
Keep humans in the loop by design.
Treat governance and cost as product features, not afterthoughts.
That is what Gartner calls “agentic analytics.” For us, it is just how we build. We’d love to hear from you.

