Why Enterprise AI Struggles with Data, and What Leaders Should Do About It
On March 25, I hosted a webinar with Meenal Iyer and Solomon Kahn on where enterprise AI breaks down, what actually works, and how data leaders should think about trust, context, and business impact.

We hosted the webinar on a topic I hear about almost daily: everyone wants AI on top of their data, but very few teams are fully happy with the results. I got amazingly positive feedback on it and wanted to share the highlights with you.
I had the pleasure of leading the discussion with Meenal Iyer of SurveyMonkey and data leader Solomon Kahn. We spoke candidly about what’s working, what’s failing, and where data leaders should focus if they want AI to drive real value inside the enterprise.
If you’d like to watch the full webinar, you can do that here.
A few ideas stood out to me.
1. AI literacy does not happen in a slide deck
One of my favorite parts of the discussion was Meenal’s story about trying to bring AI into her organization.
Her first instinct was a very reasonable one: train the team. Create an internal AI academy. Upskill people. Teach the concepts.
It did not work the way she hoped.
What did work was much more hands-on. She had the team do a mini AI hackathon, focused on real problems in their own workflows. Suddenly, the conversation changed. People were no longer thinking abstractly about models, prompts, or which tool was hottest that week. They were thinking about friction in their day-to-day work, and whether AI could actually remove it.
That shift matters.
I keep seeing the same pattern across companies: people understand AI much better once they build with it. Not when they hear about it. Not when they read another thought piece on LinkedIn. When they actually try to solve something with it.
This is also very much in line with what I wrote in Building an AI-powered Intelligent Enterprise, where I shared more of Meenal’s broader journey and how her team has been evolving from experimentation toward production.
2. The problem is not “the data” alone. It’s the missing context around it
Another major theme in the webinar was one I think many teams learn the hard way.
You cannot just put a GPT on top of a dataset and expect magic.
Even if the dataset is clean.
Even if the columns are named well.
Even if there is a semantic layer underneath it.
That still does not mean the AI understands the business.
It does not know the internal logic behind the metrics. It does not know which edge cases matter. It does not know which definitions changed last quarter, which exceptions live in Confluence, which nuance is buried in Jira, or what your executives actually mean when they use a certain term.
Without that context, AI fills in the blanks on its own. That is where hallucinations and bad decisions start showing up.
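To make the point concrete, here is a minimal sketch of what "giving the AI the missing context" can look like in practice. Everything here is hypothetical: the metric names, definitions, and caveats are invented for illustration, and in a real system they would come from your semantic layer, documentation, and the tribal knowledge buried in places like Confluence and Jira.

```python
# Hypothetical sketch: metric names, definitions, and caveats below are
# invented for illustration, not taken from any real system.

METRIC_CONTEXT = {
    "active_users": {
        "definition": "Users with at least 1 session in the trailing 28 days.",
        "caveat": "Definition changed in Q3; earlier figures are not comparable.",
    },
    "churn_rate": {
        "definition": "Cancelled subscriptions / subscriptions at period start.",
        "caveat": "Excludes accounts paused under the retention-offer program.",
    },
}

def build_prompt(question: str, metrics: dict) -> str:
    """Assemble an LLM prompt that pairs the user's question with the
    business definitions and caveats the model cannot infer from raw data."""
    lines = [
        "You are answering questions about our business data.",
        "Use ONLY the definitions below; flag anything they do not cover.",
        "",
    ]
    for name, info in metrics.items():
        lines.append(f"- {name}: {info['definition']} Caveat: {info['caveat']}")
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)

prompt = build_prompt("Why did active_users dip last quarter?", METRIC_CONTEXT)
```

The model now knows that a Q3 definition change exists before it tries to explain a dip, instead of filling in that blank on its own.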
This connects directly to what I wrote in “Sorry for the mess”. Enterprise data is messy. Everyone’s is. The answer is not to wait for some mythical moment where everything is perfectly documented and pristine. The answer is to build systems that can understand the reality of how businesses actually work.
3. “100% accuracy” is usually the wrong conversation
A big part of the Q&A ended up centering on accuracy, and understandably so.
If an AI system is 92% accurate, what do you do with the other 8%? Can you trust it? Should you use it for decision-making? Is that good enough?
My view is that we often hold AI to a strange standard.
Human analysts are not 100% accurate either. In many cases, they get to a high-confidence answer through back-and-forth, clarification, validation, and iteration. AI should be judged with similar realism. Not every use case requires perfection. Some do. Many do not.
The more useful question is: what is the use case, what is the risk, and what level of validation is needed?
For financial reporting, compliance, and tightly audited workflows, you still want highly controlled systems and dashboards. For exploratory work, data literacy, pattern finding, and helping people navigate large datasets, AI can already be very valuable even if it is not perfect.
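One way to operationalize that question is a simple routing policy: decide up front what risk tier a use case sits in, and let that tier determine how much validation an AI answer needs before anyone acts on it. The tiers and thresholds below are invented for illustration; any real policy would be set with your compliance and business stakeholders.

```python
# Hypothetical sketch: risk tiers and the 0.9 confidence threshold are
# invented for illustration, not a recommended standard.
from enum import Enum

class Risk(Enum):
    HIGH = "high"      # financial reporting, compliance, audited workflows
    MEDIUM = "medium"  # operational decisions that merit human review
    LOW = "low"        # exploration, pattern finding, data literacy

def route_answer(risk: Risk, confidence: float) -> str:
    """Decide how an AI-generated answer is handled, given the use case's
    risk tier and the system's self-reported confidence."""
    if risk is Risk.HIGH:
        # AI can assist, but controlled systems and dashboards decide.
        return "use governed dashboard"
    if risk is Risk.MEDIUM and confidence < 0.9:
        return "human review"
    return "serve with caveats"

route_answer(Risk.LOW, 0.8)  # exploratory work: useful even if imperfect
```

The point is not the specific thresholds; it is that "is 92% good enough?" has no single answer, only an answer per tier.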
That is one of the reasons I still believe throwing away BI is a bad idea. AI and BI should work together. One is not replacing the other anytime soon.
During the conversation I also brought up Waymo’s safety work as an analogy. The standard is not “never make a mistake.” The standard is whether the system performs well enough, consistently enough, in the context it is being used for.
4. Data leaders should be measured on business impact, not just on perfection
Solomon made a point that I think many data leaders feel deeply.
Data teams are often only noticed when something is wrong.
When the number is right, silence.
When the dashboard works, silence.
When the business gets what it needs, silence.
But when something breaks, or a metric is off, or an answer is confusing, that is when everyone suddenly remembers the data team exists.
That is a very hard way to operate.
The better framing is business impact. Are you helping the organization close deals, reduce churn, improve workflows, support better decisions, and move faster with more confidence? That is the real scoreboard.
This also ties back to something I wrote last year in Nobody cares about the efficiency of the data analyst. Efficiency is nice. Happier analysts are nice. But the main thing the business cares about is results. If AI helps deliver those results, people will forgive a lot. If it does not, no amount of cleverness will save it.
5. Start small, but start
The closing advice from the panel was refreshingly practical.
Do not boil the ocean.
Start with a smaller domain. Start with a use case where the data is relatively well understood. Start with a workflow where the upside is clear. Let people experiment. See what breaks. See what works. Learn quickly.
There is far too much pressure right now for companies to show they “have an AI strategy.” That pressure often creates giant top-down initiatives that look impressive in a deck and disappoint in reality.
A smaller, grounded, iterative approach is much less glamorous. It is also much more likely to get you somewhere real.
Final thought
If there is one message I would want people to walk away with, it is this:
Enterprise AI is not struggling because there is not enough excitement. It is struggling because real business data is nuanced, fragmented, and deeply contextual.
The teams that succeed will not be the ones with the flashiest demo.
They will be the ones that help AI understand how their business actually works, earn trust over time, and stay relentlessly focused on business outcomes.
Thanks again to Meenal and Solomon for such a sharp and honest discussion.
If you missed the webinar, you can watch it here. And if you want more on Meenal’s journey, this earlier post is a good place to start.