

If we look at the history of computing, traditional products have always boiled down to three core components: input → system → output. This process is entirely predictable. Users control the input. The system follows a predefined logic—rules, workflows, and constraints—to generate the final output.
Take a simple example: someone searches for cycling shoes. In a traditional search engine like Google, the system scans the internet and returns the most relevant links to buy cycling shoes. Straightforward, controlled, predictable.
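To make that contrast concrete, here is a deliberately tiny TypeScript sketch of the traditional model. The catalog and the keyword-scoring rule are invented for illustration; the point is that the same query always flows through fixed logic to the same output.

```typescript
// Traditional model: input -> system -> output. Same query, same result.
interface Product {
  name: string;
  tags: string[];
  url: string;
}

// Hypothetical catalog, for illustration only.
const catalog: Product[] = [
  { name: "Road cycling shoes", tags: ["cycling", "shoes"], url: "https://example.com/road-shoes" },
  { name: "Indoor cycling shoes", tags: ["cycling", "shoes", "indoor"], url: "https://example.com/indoor-shoes" },
  { name: "Protein bars", tags: ["nutrition", "fitness"], url: "https://example.com/protein-bars" },
];

// Predefined logic: rank purely by keyword overlap. No interpretation of intent.
function search(query: string): Product[] {
  const terms = query.toLowerCase().split(/\s+/);
  return catalog
    .map((product) => ({
      product,
      score: terms.filter((term) => product.tags.includes(term)).length,
    }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score)
    .map(({ product }) => product);
}

// "cycling shoes" always returns the same links, in the same order.
console.log(search("cycling shoes"));
```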
Designing for AI is different. It introduces a fourth element: interpretation.
input → system → interpretation → output
Now, the system tries to understand the user’s intent—not just return results.
Using the same example, if you're searching for cycling shoes, the AI might respond very differently depending on the inferred intent. If you're shopping, it could suggest cheaper or better-performing alternatives, or recommend beginner-friendly gear. But if your goal is to get fit or lose weight, it might instead suggest protein bars, workout plans, or even a motivational quote.
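Here is a rough sketch of what that extra step looks like in code. The intent categories, the stubbed classifier, and the canned responses are all hypothetical; in a real product the interpretation would come from a model, but the shape of the flow is the same: infer intent first, then decide what to output.

```typescript
// AI model: input -> system -> interpretation -> output.
// The intents and canned responses below are invented for illustration.
type Intent = "shopping" | "fitness" | "unknown";

// Stub for the interpretation step. In a real product this would be
// a model call; here it's hard-coded so the sketch stays self-contained.
function inferIntent(query: string, recentActivity: string[]): Intent {
  if (recentActivity.some((entry) => entry.includes("workout"))) return "fitness";
  if (query.includes("shoes")) return "shopping";
  return "unknown";
}

// The inferred intent, not just the query, decides what the output looks like.
function respond(query: string, recentActivity: string[]): string {
  switch (inferIntent(query, recentActivity)) {
    case "shopping":
      return "Beginner-friendly cycling shoes, plus cheaper alternatives.";
    case "fitness":
      return "A training plan and protein bars to go with those rides.";
    default:
      return "The most relevant links for your search.";
  }
}

// Same query, different context, different output.
console.log(respond("cycling shoes", []));
console.log(respond("cycling shoes", ["logged a workout yesterday"]));
```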
That interpretive layer introduces unpredictability, and that unpredictability brings new challenges for design.
To navigate these challenges, we’ve started evolving our process. Design and engineering now happen in near-parallel. A typical kickoff begins by identifying the core features of the AI product, mapping what needs to be figured out on the technical side, and flagging where design can support.
The goal is to understand the system’s constraints and capabilities from day one.
Throughout the process, designers sit in on engineering syncs, helping define how the model works: how components are named, how the schema is shaped, and what terminology we use. We brainstorm and iterate in real time—mocking ideas side by side with engineers as the system comes to life.
Eventually, design and AI converge into something tangible in the UI. That’s when we begin internal testing and get feedback from early beta users. This loop feeds the next iteration.
Because outputs are dynamic, our design process has to be dynamic too. Static Figma frames don’t cut it anymore. Instead, we rely on interactive prototypes wired to live models, watching real data shape real experiences.
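As a rough illustration of what "wired to live models" means, here is a minimal TypeScript sketch. The endpoint, payload, and response shape are hypothetical, not our actual setup; the idea is simply that the prototype renders whatever the model returns rather than a hand-picked static state.

```typescript
// Hypothetical prototype screen wired to a live model endpoint.
// The URL and response shape are invented; swap in whatever your model serves.
interface ModelResponse {
  summary: string;
  suggestedActions: string[];
}

async function fetchModelOutput(prompt: string): Promise<ModelResponse> {
  const res = await fetch("https://internal.example.com/prototype-model", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Model call failed: ${res.status}`);
  return (await res.json()) as ModelResponse;
}

// The prototype renders whatever the model returns, so designers see the
// real permutations instead of a single idealized Figma state.
async function renderPrototypeCard(prompt: string): Promise<string> {
  const output = await fetchModelOutput(prompt);
  return [`Summary: ${output.summary}`, ...output.suggestedActions.map((a) => `- ${a}`)].join("\n");
}

renderPrototypeCard("Summarize last night's paging incident").then(console.log);
```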
At Rootly, we’re pushing past the limits of today’s design tools. We’ve been trying out vibe-coding tools like Cursor, V0, and Figma Make to prototype in higher fidelity. Our principle is simple: the more realistic the prototype, the better the feedback—and the more confident we feel in our decisions.
Live prototypes also help us see the permutations—use cases and outputs—that would have gone unexplored in a traditional flow. This lets us design shared patterns across those variations. It also makes the system’s unpredictability visible, forcing us to rethink how we model the product itself.
This reshapes the design process entirely. We don’t hand off polished designs for others to build—we design while the system is being built.
It’s also worth saying: AI is powerful, but it’s not a universal solvent. I’ve found it most helpful when the output is dynamic, the problem space is still taking shape, and we need to see how real data and its permutations shape the experience.
But when you’re tweaking typography, finalizing layouts, or designing something simple and predictable? Traditional tools are often faster and clearer.
This is just the beginning of our journey into AI-native design. As tools evolve, so will our workflow. But the principle stays the same: experiment with new tools, adapt your process, and keep iterating. You won’t know what works until you build with it.
I hope this gives other teams the courage to explore, experiment, and find the creative rhythm that works best for them.