

March 31, 2025
A practical framework for evaluating AI tools based on four core pillars: Accuracy, Transparency, Adaptability, and Agentic capabilities.
With so many shallow new AI announcements popping up every week, my immediate reaction as a skeptical engineer is often an eye-roll.
As an SRE leader, I’ve witnessed firsthand how rushing to adopt the first generation of copilot tools led to frustration within engineering teams. Early-stage AI copilots generated enthusiasm that soon plateaued and gave way to disengagement, for a few reasons. The second generation of these copilot tools, however, has improved significantly: many now address those early gaps and introduce capabilities that go beyond code assistance.
We are now seeing a wave of excitement around AIOps tools that promise to operate as an SRE copilot, or even autopilot. How can you evaluate which ones will actually be useful to your team?
My previous experience onboarding code copilot tools taught me to approach AIOps evaluation with a structured framework. In this guide, I outline my approach to evaluating AIOps and agentic AI tools based on four critical pillars: Accuracy, Transparency, Adaptability, and Agentic capabilities, along with tactics for evaluating each and what to watch out for.
Accuracy

Accuracy is a fundamental requirement: 49% of organizations cite false positives as a major challenge in AIOps adoption, contributing to alert fatigue among IT teams.
If an AIOps platform promises to enhance incident detection, automate responses, or provide root cause analysis, then it must demonstrate a high success rate in achieving those goals.
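One concrete tactic during a trial is to have responders label each AIOps detection as a real incident or noise, then measure what share of fired alerts were genuine. A minimal sketch (the alert log and field names here are hypothetical, not from any specific tool):

```python
# Hypothetical trial log: each AIOps detection, labeled after the fact
# by responders as a true incident (True) or a false positive (False).
alerts = [
    {"id": "a1", "true_incident": True},
    {"id": "a2", "true_incident": False},
    {"id": "a3", "true_incident": True},
    {"id": "a4", "true_incident": False},
]

def precision(alerts):
    """Fraction of fired alerts that were real incidents."""
    if not alerts:
        return 0.0
    hits = sum(1 for a in alerts if a["true_incident"])
    return hits / len(alerts)

def false_alert_share(alerts):
    """Fraction of fired alerts that were noise (1 - precision).

    Strictly speaking this is the false-discovery share of alerts,
    since during a trial you only observe alerts that actually fired.
    """
    return 1.0 - precision(alerts) if alerts else 0.0

print(f"precision: {precision(alerts):.0%}")       # 50%
print(f"false alerts: {false_alert_share(alerts):.0%}")  # 50%
```

Tracking these two numbers week over week during an evaluation makes the "high success rate" claim testable rather than anecdotal.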
Typically you want the “acceptance rate” for AIOps (or copilot) tools to start above 80% and improve to over 90% within six months.
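The acceptance-rate thresholds above are easy to monitor if you record each suggestion as accepted or rejected. A minimal sketch of a sliding-window tracker (the event stream and window size are illustrative):

```python
from collections import deque

def rolling_acceptance(events, window=100):
    """Yield the acceptance rate over a sliding window of recent suggestions.

    `events` is an iterable of booleans: True if the engineer accepted
    the tool's suggestion, False if they rejected or ignored it.
    """
    recent = deque(maxlen=window)
    for accepted in events:
        recent.append(accepted)
        yield sum(recent) / len(recent)

# Illustrative stream: four of the last five suggestions were accepted.
events = [True, True, False, True, True]
rates = list(rolling_acceptance(events, window=5))
print(f"current acceptance rate: {rates[-1]:.0%}")  # 80%
```

Plotting this rate over the first six months of adoption tells you whether the tool is trending toward the 90% mark or stalling.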
Transparency

A recent G2 report found that AIOps software adoption is at 56%, compared to an average of 70% for other IT infrastructure and management software, with transparency cited as a key concern.
For engineers to trust an AIOps system, they must have visibility into how the tool arrives at its conclusions. A tool that offers clear, verifiable reasoning will drive better adoption and confidence among SREs.
AIOps tools that build trust through transparency, with a developer experience that lets engineers validate each recommendation, will gain wider acceptance than the rest.
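When comparing vendors, it helps to check whether each recommendation carries its evidence and reasoning, not just a conclusion. A sketch of what an auditable recommendation record might look like (this shape is my illustration, not any vendor's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An auditable recommendation: conclusion plus the trail behind it."""
    summary: str        # what the tool proposes
    evidence: list      # pointers to the metrics/logs it examined
    reasoning: str      # plain-language chain from evidence to conclusion
    confidence: float   # tool's own confidence, 0.0-1.0

# Hypothetical example an engineer could validate before acting.
rec = Recommendation(
    summary="Roll back the latest deploy on checkout-svc",
    evidence=["5xx error-rate panel", "deploy marker at 14:02 UTC"],
    reasoning="5xx rate rose sharply within minutes of the deploy marker.",
    confidence=0.82,
)

for item in rec.evidence:
    print(f"verify: {item}")
```

A tool that surfaces only the `summary` field forces blind trust; one that surfaces all four lets an SRE verify the reasoning in minutes.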
Adaptability

AIOps is a field that continues to evolve rapidly, with a few key players competing to define the next-generation market. This presents an opportunity to select a vendor that not only meets your current needs but can adapt and evolve alongside your organization.
Choosing a vendor that treats you as a partner can maximize long-term value and adoption.
By selecting a vendor that prioritizes adaptability, you ensure your AIOps solution remains aligned with your evolving infrastructure and operational needs.
Agentic capabilities

Agentic AI promises that the tool doesn’t just analyze past data; it learns and adapts to your specific environment. Unlike traditional AI models that provide static recommendations, agentic AI evolves, deepening its contextual understanding over time.
If a tool genuinely exhibits agentic behaviour, it should become more valuable as it continuously refines its understanding of your infrastructure.
The AIOps landscape is evolving rapidly, and evaluation criteria will continue to shift as capabilities advance. However, Accuracy, Transparency, Adaptability, and Agentic capabilities remain foundational pillars for assessing these tools today. While no framework is static, this rubric provides a good starting point for making informed decisions in an evolving space.