GenAI prototypes are everywhere, but getting real systems into production remains painfully hard. Eduardo Ordax, Principal GTM GenAI at AWS, joins us to unpack why. We explore culture as the biggest blocker, the challenges of scaling non-deterministic LLM systems, and what separates companies that are genuinely succeeding with AI from those stuck in experimentation mode.
Key Topics Discussed
- Why culture, not technology, is the biggest barrier to AI adoption
- How outdated business models clash with non-deterministic LLM behavior
- Why companies get stuck in “prototype purgatory”
- The operational reality of deploying agents, memory, and constantly evolving model stacks
- What changes when you architect around LLMs vs traditional web apps
- The types of companies currently winning with AI (and why)
- The rapid evolution of roles and why “everyone becomes a developer”
- How individuals can stay relevant as the pace of AI change accelerates
How did you get into AI, and what are you working on today?
I’m a computer engineer, and I’ve been in this space for a long time. Around eight to ten years ago, when I was at Vodafone, I started working on what we called big data and analytics—it wasn’t really “AI” back then. From there I jumped fully into AI.
I even worked as a data scientist for a short time, but honestly, code and I were never good friends. With low-code tools, things changed, and for the last four years at AWS I've been helping companies, customers, and startups all over the world adopt artificial intelligence. That's my story.
What are the main challenges companies face when adopting AI?
There are many, but the biggest one is people—culture. AI changes how we do business, but companies try to use it with the same traditional business models. That doesn’t work.
Some people still think AI is just a trend. Others want to put AI everywhere but don’t have the skills or mindset for it. Even though leadership now pushes for “AI-first,” if there’s no value, skills, or mentality, it won’t matter.
So the core blocker is culture: understanding that AI works differently and requires a different way of thinking.
Can you give an example of old business models clashing with AI?
Take AI agents. Everyone wants to implement agents because they can automate support or internal processes. But AI is non-deterministic—by nature it will fail sometimes.
Companies say, “But what if it fails 1–2% of the time?” Well, that’s how AI works today. When you run old processes alongside new AI-driven ones and expect the same determinism, you won’t get the business value you’re looking for.
It requires a mindset shift.
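One common way teams absorb that 1–2% failure rate in practice is to wrap the agent call in a validate-and-retry guard instead of assuming a single deterministic answer. The sketch below is illustrative, not from the conversation: `flaky_agent` is a hypothetical stand-in for a real LLM call, and the validator simply checks that the output parses as JSON.

```python
import json
import random

def flaky_agent(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM/agent call.

    Succeeds roughly 70% of the time, mimicking non-deterministic output.
    """
    return '{"status": "ok"}' if random.random() < 0.7 else "not json"

def is_valid_json(text: str) -> bool:
    """Accept the output only if it parses as JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def run_with_guardrail(agent, prompt, validate, retries=5):
    """Call a non-deterministic agent until its output validates, or give up."""
    last = None
    for _ in range(retries):
        last = agent(prompt)
        if validate(last):
            return last
    raise RuntimeError(f"no valid output after {retries} attempts: {last!r}")

result = run_with_guardrail(flaky_agent, "summarize this ticket", is_valid_json)
print(result)
```

The point of the pattern is the mindset shift Eduardo describes: the process is designed around occasional failure (validate, retry, escalate) rather than expecting the old deterministic behavior.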
Why do companies struggle to move from prototypes to production?
It’s funny because every year we say, “This was the year of experimentation—next year is production.” And every year the tech changes again: new models, agents, memory, tool use, computer use… so people keep experimenting.
This isn’t new. In traditional ML we had the same thing—models were a tiny piece of the larger system with pipelines, monitoring, retraining, observability, etc. GenAI is the same: the LLM is only a small part.
But the real issue is people. You need forward-deployed engineers, DevOps, SREs, data engineers, and architects—the folks who know how to make things production-ready. And then you need to avoid heavy vendor lock-in because the landscape shifts fast.
Moving five agents into production is easy. Moving 500 reliably is completely different.
How different is it to architect around an LLM compared to a normal web application?
With a website, you know exactly what you’re building. With LLMs, we still can’t fully explain how they behave. Even top researchers say models sometimes do things they can’t explain.
That’s the challenge: non-determinism and scalability. Using AI to generate code is great—I love it because I always hated syntax—but if you rely too much on AI, you start losing control over the system you’re building.
And when you run models across multiple GPUs, the randomness increases. So the uncertainty gets amplified.
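One concrete reason for that extra randomness (my illustration, not Eduardo's): floating-point addition is not associative, so when a parallel reduction across GPUs sums the same numbers in a different order, the result can differ slightly, and those tiny differences can snowball through sampling.

```python
# Same three numbers, two grouping orders, two different answers.
# 1e16 has a floating-point spacing of 2, so adding 1.0 to it is lost.
a = (1e16 + -1e16) + 1.0   # cancel first, then add 1.0 -> 1.0
b = (1e16 + 1.0) + -1e16   # 1.0 is absorbed by 1e16    -> 0.0
print(a, b)  # 1.0 0.0
```

A multi-GPU run does not guarantee a fixed reduction order, which is one reason even temperature-zero inference may not be bit-for-bit reproducible.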
Which types of companies are succeeding with AI, and why?
It’s not about size—it’s about how the company was built. Digital natives do well. Software companies are doing extremely well because they don’t treat AI as the goal—they use AI to improve their products. You don’t care whether Salesforce uses AI behind the scenes; it just makes the product better.
Startups too, because they’re building with AI from day one.
And big enterprises in finance, insurance, and healthcare are succeeding because they invested heavily in data. Good data pipelines, governance, lakes—so moving from data to AI is natural for them.
The biggest opportunity right now isn’t on the model layer—it’s on the application layer.
Why aren’t more AI tools showing up in developers’ toolboxes?
Because the tools that succeed are the ones built on top of already-strong products. Developers use AI-enhanced features inside tools they already trust.
It’s mostly incremental improvements rather than groundbreaking productivity shifts. We’ll see whether new AI-native startups can create real value—that part is still unclear.
Many people feel behind the curve with AI. What’s your advice?
The World Economic Forum says 80–90% of roles will evolve. That means most jobs won’t disappear, but they’ll change.
Every day I get messages from people saying, “I feel behind.” My advice is: it’s never too late to learn how to use these tools. Everyone will need to understand AI the same way everyone learned to use the internet or a computer.
In the future, everyone becomes a kind of developer—not writing production code, but able to build or prototype things. The gap between builders and designers is getting thinner.
And the pace is insane. With previous technologies, adoption took decades. With AI, if you’re out for a month, you’re already behind.
Where to Find Eduardo
- LinkedIn: Eduardo Ordax
- Location: Based in Madrid; frequently travels across Europe, the US, and the Middle East for AI engagements