

September 22, 2025
8 mins
The panel warned: the opportunity is massive, but without observability, security, and strategy, the regrets will be real.
Agentic AI is moving fast. The energy in the industry today feels a lot like the early cloud days, or even the early web. Everyone is rushing to ship something, to stake a claim in the agent-driven future.
That urgency came through loud and clear during our recent panel in San Francisco with Aarush Sah (Head of Evals at Groq), Gregg Mojica (Co-Founder & CEO at Alloy Automation), Gourav Nagar (Head of Information Security at Upwind Security), and Sylvain Kalache (Head of Rootly AI Labs).
As Gregg put it:
“There’s this huge rush to go implement agents… but most people cannot articulate what they actually want. They just know they need to launch an agent.”
The demand is undeniable. The excitement is contagious. But if there was one theme uniting this conversation, it was caution. The opportunity is massive, but if we don’t slow down and build responsibly, the regrets will be real.
The technical heart of this discussion was the shift from REST APIs to MCP (Model Context Protocol). REST has long been the backbone of connectivity in enterprise software. Now, MCP is emerging as the connective tissue for the new era of AI agents.
Gregg, who has lived through this evolution at Alloy, reminded us not to get caught up in declaring REST dead:
“The death of REST APIs is greatly exaggerated. REST APIs aren’t actually going anywhere… many MCP servers today are literally built on top of REST APIs. So they’re not going anywhere anytime soon. I think the direction the world is headed is REST APIs and MCP will live side by side.”
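Gregg's point, that many MCP servers are literally built on top of REST APIs, can be sketched in a few lines. This is a conceptual illustration, not the MCP SDK: the tool name, the registry shape, and the stubbed REST client are all assumptions for the sketch.

```python
import json

# Hypothetical REST client: in a real service this would issue an HTTP GET.
# Stubbed here so the sketch is self-contained and runs offline.
def rest_get_order(order_id: str) -> dict:
    return {"id": order_id, "status": "shipped"}

# An MCP server exposes "tools" that a model invokes by name. This registry
# mimics that shape: each tool simply delegates to an existing REST call.
TOOLS = {
    "get_order_status": lambda args: rest_get_order(args["order_id"]),
}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a tool call the way an MCP server might, returning JSON text."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return json.dumps(TOOLS[name](arguments))
```

The REST layer does the real work; the MCP layer is a thin, model-friendly façade over it, which is why the two can coexist.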
That coexistence matters because the future won’t just be one agent calling one deterministic system. We’re heading toward agents talking to other agents. Imagine a workforce of thousands of digital employees that need to communicate, coordinate, and hand off tasks.
That sounds powerful. But it raises questions of trust and governance. As Gregg admitted: “Observability is really critical. A lot of people are just basically tacking on OAuth to MCP and calling that authentication. Whether that’s the solution long term or not, TBD.”
In other words, we’re still bolting the seatbelts onto the car while flooring the accelerator.
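What “tacking on auth plus observability” looks like in miniature: check a bearer token and record every attempt, allowed or denied, before a tool runs. The token values, agent names, and in-memory log are illustrative assumptions, not a real OAuth flow.

```python
import time

AUDIT_LOG: list[dict] = []  # in production this feeds your observability stack
VALID_TOKENS = {"token-abc": "billing-agent"}  # illustrative, not real OAuth

def call_tool_with_auth(token: str, tool: str, args: dict) -> dict:
    """Verify a bearer token and audit every attempt before running the tool."""
    agent = VALID_TOKENS.get(token)
    # Log first, so denied attempts are visible too.
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent,
        "tool": tool,
        "allowed": agent is not None,
    })
    if agent is None:
        raise PermissionError("unrecognized token")
    return {"tool": tool, "ran_as": agent}
```

Logging before the allow/deny decision is the point: without that record, you cannot answer “which agent tried to do what” after an incident.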
Gourav brought the perspective of someone who has seen this story play out before. He compared today’s AI adoption to the early rush into the cloud: speed first, security later.
“Security is an afterthought for sure, just like we saw early in the cloud days,” he said. Companies raced to move workloads into the cloud, only to be blindsided by breaches and compliance failures. Only after painful incidents did best practices mature.
The same pattern is happening in AI. Today, most enterprises can’t answer basic questions about their AI stack: which models are running, what data those models can reach, and which agents hold which permissions.
Without those answers, the principle of least privilege (a cornerstone of security) goes out the window.
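Least privilege for agents can be as simple as a deny-by-default allowlist: each agent may call only the tools it needs, and anything unlisted is rejected. The agent and tool names below are hypothetical.

```python
# Illustrative per-agent allowlists: each agent gets only the tools it needs.
AGENT_PERMISSIONS = {
    "support-agent": {"read_ticket", "post_reply"},
    "billing-agent": {"read_invoice"},
}

def authorize(agent: str, tool: str) -> None:
    """Deny by default: an unknown agent or an unlisted tool is rejected."""
    if tool not in AGENT_PERMISSIONS.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
```

The inverse default, allow everything and block known-bad, is exactly how least privilege goes out the window.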
Worse, the attackers aren’t waiting. “All the tools available to us are also available to the attackers,” Gourav warned. “They’re also building MCP servers, they’re also using the protocols.”
That’s a sobering reminder. We talk about democratizing AI, but democratization cuts both ways. Just as open-source transformed software development, it also armed bad actors. The same will happen with MCP and agent frameworks unless enterprises bake security in from the start.
If Gourav’s warning was about what we don’t know, Aarush’s was about what we think we know but shouldn’t trust.
Right now, the AI industry loves leaderboards. Whether it’s MMLU, Math 500, or S-code, companies race to post higher scores and declare their models “state of the art.” But Aarush didn’t pull punches:
“We are optimizing for the wrong signals. Labs will build very powerful synthetic data flywheels around those specific evals to achieve higher scores. And that sort of nullifies the purpose of using these evals as a good metric of how good a model is.”
Evals, evaluation frameworks for models, can and should be more than leaderboards. Done right, they’re the compass enterprises need to navigate the messy tradeoffs between quality, cost, and robustness.
Aarush outlined the real opportunity:
“The best case scenario is to catch issues before they end up in main production, and that’s where evals come in,” he explained.
The kicker: many companies are finding smaller, cheaper models outperform frontier models like GPT-5 on their actual workloads. “It’s always a tradeoff between quality and cost,” Aarush said. “A lot of open-source models balance that better than the frontier ones do.”
That’s a hard truth in a market obsessed with “frontier” everything. Sometimes the best model isn’t the biggest, it’s the right-sized one.
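The quality-versus-cost tradeoff Aarush described can be made concrete with a toy eval harness: score each candidate on your own task set, then pick the cheapest model that clears your quality bar. The task set, prices, and canned model answers are all stand-ins so the sketch runs offline.

```python
# Toy eval set: (prompt, expected answer) pairs drawn from your real workload.
EVAL_SET = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]

MODELS = {
    # name: (illustrative cost per call in dollars, canned answers)
    "frontier-xl": (0.030, {"2+2": "4", "capital of France": "Paris", "3*3": "9"}),
    "small-open":  (0.002, {"2+2": "4", "capital of France": "Paris", "3*3": "9"}),
}

def run_eval(name: str) -> tuple[float, float]:
    """Return (accuracy on the eval set, total cost of running it)."""
    cost_per_call, answers = MODELS[name]
    correct = sum(answers.get(q) == expected for q, expected in EVAL_SET)
    return correct / len(EVAL_SET), cost_per_call * len(EVAL_SET)

def cheapest_passing(quality_bar: float) -> str:
    """Pick the cheapest model whose eval accuracy meets the bar."""
    passing = [(run_eval(n)[1], n) for n in MODELS if run_eval(n)[0] >= quality_bar]
    return min(passing)[1]
```

When both models clear the bar on your workload, the harness picks the cheaper one, which is precisely how teams discover that a small open model beats a frontier model on price-for-quality.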
Sylvain brought the reliability lens, and it fit perfectly with the panel’s warnings. In incident management, we’ve learned the dangers of reactivity without guardrails. Teams firefought their way through outages until observability, runbooks, and proactive testing made resilience possible.
AI is on the same trajectory. As Sylvain said,
“At the end of the day, a lot of the code generated by AI is inspired by human-written code. So most of the bugs we’re seeing are similar to what humans would do. But there are a few new types of bugs emerging, and we need visibility to catch them.”
That’s a critical insight: AI won’t just introduce the same problems faster, it will introduce different problems we haven’t seen before.
Sylvain also pointed out the importance of transparency. Some tools, like Claude’s code generator, tag pull requests with “co-authored by Claude.” That signals to engineers: check this PR more carefully. Reliability isn’t just about tooling, it’s about building the culture to handle new risks responsibly.
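That co-authorship signal is easy to automate. A minimal sketch of a CI-side check: scan commit trailers in Git’s standard “Co-authored-by:” format and flag AI co-authors for closer human review. The list of AI author names to watch for is an assumption.

```python
def needs_extra_review(commit_message: str) -> bool:
    """Flag commits whose trailers mark AI co-authorship for closer review.

    Uses Git's standard "Co-authored-by:" trailer convention; the set of
    AI author markers below is an illustrative assumption.
    """
    ai_markers = ("claude", "copilot", "chatgpt")
    for line in commit_message.lower().splitlines():
        line = line.strip()
        if line.startswith("co-authored-by:") and any(m in line for m in ai_markers):
            return True
    return False
```

A check like this could gate a required-reviewer rule in CI, turning the cultural norm (“check this PR more carefully”) into an enforced one.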
The most striking moment of the panel was the closing question: What will companies regret about AI in 2026?
Each panelist answered from their own lens, but the pattern was impossible to ignore: every answer pointed at the same gaps in security, evaluation, and observability.
That chorus tells us exactly where the danger lies. We’re not facing unknown unknowns. We can already see the cracks forming.
The verdict was unanimous: “The opportunity is massive, but if we don’t slow down and build responsibly, the regrets will be real.”
So where does that leave us?
The opportunity in AI is enormous. MCP and agents will change how enterprises operate, just as REST and cloud did before. But history tells us that without guardrails, speed becomes fragility.
Security, evals, and reliability are the seatbelts of this new era. They won’t slow the car down; they’ll keep us safe while we floor the accelerator.
As Gourav said: “We are here as a seatbelt to make sure that you arrive safely.”
The challenge for enterprises isn’t whether to build agents. It’s whether to build them responsibly. That means baking security in from the start, treating evals as a compass rather than a leaderboard, and investing in the observability to catch failures before they reach production.
AI is not just another tool. It’s becoming business infrastructure. And infrastructure demands trust.
The opportunity is massive. But if we don’t slow down and build responsibly, the regrets will be real.