It’s not AI. It’s transformation enabled through AI.
In our Season 2 kickoff of Building with AI: Promises and Heartbreaks, Todd James (ex-Fidelity, ex-Kroger/84.51°, now Aurora Insights) shares what it really takes to scale analytics and deliver real EBITDA.
I kicked off Season 2 with Todd James. Todd has the kind of resume that makes you wonder if this guy is real: Coast Guard officer, Deloitte, 15 years at Fidelity, then leading data and AI at Kroger and 84.51°, and now running Aurora Insights, where he works directly with boards and executive teams on turning AI into real enterprise value.
I can assure you, he is real.
This episode had so much great content that we had to split it into two parts. Today, we're sharing Part One, available on YouTube, Spotify, and Apple Podcasts.
Below are the ideas I keep thinking about, plus a few practical takeaways if you are trying to make AI matter in a real organization.
1) The line that should kill most AI roadmaps
Todd said it in a way that simplifies what we’ve heard before:
“It’s not AI. It’s transformation enabled through AI.”
If you want to “play around with AI,” Todd’s point is that anyone can do it. You can fund pilots, do proof-of-concepts, and declare victory. But if you want material outcomes, including real EBITDA contribution, you are changing how the business operates.
That shift is why a lot of programs stall. A pilot can live in a corner. Transformation cannot.
This connects to a theme we’ve written about before on Journey: efficiency stories often feel good internally, but they rarely map cleanly to the P&L without operational change. If you missed it, this one is the blunt version: Nobody cares about the efficiency of the data analyst.
2) The 70/30 rule of enterprise AI
One of Todd’s most practical frameworks is how a senior data and AI leader should spend their time.
Roughly:
30% on the technical and programmatic mechanics
70% on the organizational dynamic: business alignment, incentives, trust, mobilization
I don't think most people realize that's the real ratio for success. It can sound strange, but it tells you what the job actually is.
A lot of leaders still treat AI as an engineering upgrade. Todd treats it as an operating model upgrade that happens to use AI.
3) Fragmented analytics is the silent killer of “strategic AI”
Todd talked about walking into environments where analytics is everywhere, and value is nowhere.
Not because the people are bad. Because the system is bad:
Teams are embedded and optimized locally
Prioritization is fragmented
Definitions are inconsistent
“Impact” is measured inside the analytics org, not in the business
If you have this setup, AI will not fix it. AI will amplify it.
This is also why the “chat with your data” wave is simultaneously inevitable and underwhelming. If the underlying data reality is messy, the chatbot just makes it easier to generate confident nonsense.
We've hit this from a few angles on Journey, and the common thread is the same: AI is an interface. The hard part is the system underneath.
4) Centralizing is not the hard part. Proving it works is.
Todd described reorganizing analytics in a way that will feel familiar to anyone who has tried to centralize talent.
There is always a moment where leaders feel like something was taken away from them. They lost “their” analysts. They lost control. They lost speed.
So Todd framed it as a race: reorganize quickly, then demonstrate that the leaders who “lost resources” will get better outcomes through the new model.
This is one of those dynamics that sounds obvious, but almost nobody treats it with the urgency it deserves. If you centralize and take a year to show impact, you will be reversed.
5) A perfect story about humans and automation: “I manage a micro bot.”
Todd told a story from a deployment where they involved the people who would actually be impacted (which is, by the way, a rare and crucially important move).
One woman told him:
“I used to be a transaction processor. Now I manage a micro bot. It makes me better. I make it better. And I get to do more of what I really love, which is creating great experiences for our customers.”
That is the actual win condition for most enterprise AI.
Not “we replaced a team.”
Not “we reduced headcount.”
Not “we automated analysis.”
It is: people move up the value chain, and the organization redesigns the work so that improvement compounds.
It also ties to a point that came up through the episode: as agents get real, more people will become managers of AI units. If your system is not legible, if it is a black box, nobody will trust what they are managing.
6) “AI is just another club in the bag”
Todd had a great metaphor I think more leaders should adopt:
“AI is no different. It’s just another club in the bag.”
Meaning: stop treating AI like a religion. It is a tool. A powerful one, yes. But still a tool.
The practical implication is important. You should not start with “where can we use AI?” You should start with “where does the business get stuck, where are decisions bottlenecked, and where is judgment repetitive enough that prediction helps?”
If your AI strategy starts with the model and ends with the business case, you are going to have a lot of demos and very few outcomes.
7) Why 84.51° could move fast (and why most incubations die)
I asked Todd about 84.51° and why it exists as a separate entity from Kroger.
First, the fun fact:
“It’s Longitude of Cincinnati. Think longitudinal analysis.”
Second, the more important insight: he explained why you do not “plant it in the mothership.”
“The last thing in the world you do is plant that in the mothership and put it on the core platforms and make it part of the corporate hierarchy. It’ll fail quickly.”
This is one of those quotes that should be printed and taped to the wall of every “AI Center of Excellence.”
If your “fast team” is forced to inherit the slowest platform constraints, governance cycles, and decision latency of the core enterprise, it will become slow too. Separation is not a branding choice. It is a survival choice.
8) The investor litmus test: “revenue per employee”
Near the end, Todd shared something interesting: investment analysts are trying to figure out who is serious about AI versus who is just talking about it.
One metric they are watching is “revenue per employee,” with the logic that real transformation should show up as either more sales or more efficiency per unit of sales.
I like this as a forcing function, but it comes with obvious caveats (you can “improve” it by cutting headcount). The point is not that it is perfect. The point is that serious stakeholders are trying to measure reality, not vibes.
This is also a nice reminder: if you cannot explain your AI program in the language of outcomes, someone else will ask the question for you.
A practical checklist I pulled from this conversation
If you are trying to get AI out of the pilot phase and into “this changed how we operate,” here’s what I would pressure-test:
Is the CEO meaningfully involved? Not “supportive,” involved.
Do we have 1-3 business outcomes we are willing to reorganize around?
Do we have a prioritization mechanism that business leaders trust?
Can we prove impact quickly enough to survive the org immune system?
Are the people impacted involved early, not after the build?
Is the system legible enough that humans can manage the automation?
If the answer to most of these is “not really,” the good news is you can still do pilots. The bad news is you should stop pretending pilots equal transformation.
If there’s one meta-takeaway from this episode, it’s this: most organizations do not need a better model. They need a better willingness to change.
That is what “transformation enabled through AI” actually means, and Todd is one of the few who knows it.