
AI can boost developer productivity, if used right, with Justin Reock, Deputy CTO at DX

🧠 Flow state fan
📚 Productivity researcher
🎤 Speaker
⚙️ DevOps aficionado

Listen on Spotify and Apple Podcasts!


Justin Reock, Deputy CTO at DX, is a leading voice in developer productivity and platform engineering. His latest research offers a practical guide to AI-Assisted Engineering, covering meta-prompting, multi-model adversarial prompting, and test case generation, drawing on extensive experience from top engineering teams.

Measuring the Impact of Generative AI on Developer Productivity

It’s not enough to say, “oh, our developers are saving a few hours a day on coding tasks thanks to AI.” Without a comprehensive way of measuring how AI impacts developer productivity, your team risks saving those hours of coding, yes, but not reinvesting them in anything meaningful.

That’s why DX’s platform uses a combination of qualitative and quantitative data that goes beyond surface-level metrics. DX correlates the usage of AI tools with business objectives, such as how much more effort developers can dedicate to new features.
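To make that idea concrete, here is a minimal sketch, with entirely hypothetical field names, baselines, and numbers (this is not DX's actual model), of pairing a qualitative signal (self-reported hours saved with AI tools) with a quantitative one (tracked feature-work hours) to check whether saved hours actually get reinvested:

```python
from dataclasses import dataclass

@dataclass
class DeveloperWeek:
    ai_hours_saved: float   # qualitative: self-reported in a weekly survey
    feature_hours: float    # quantitative: hours tracked on new feature work

BASELINE_FEATURE_HOURS = 20.0  # hypothetical pre-AI average per developer-week

def reinvestment_rate(weeks: list[DeveloperWeek]) -> float:
    """Share of reported AI time savings that shows up as extra feature work."""
    saved = sum(w.ai_hours_saved for w in weeks)
    extra = sum(max(w.feature_hours - BASELINE_FEATURE_HOURS, 0.0) for w in weeks)
    return min(extra / saved, 1.0) if saved else 0.0

sample = [
    DeveloperWeek(ai_hours_saved=4.0, feature_hours=22.0),
    DeveloperWeek(ai_hours_saved=3.0, feature_hours=25.0),
]
print(f"Reinvestment rate: {reinvestment_rate(sample):.0%}")
```

The point of a measure like this is the gap it exposes: if the reinvestment rate stays low, the hours saved on coding are leaking into meetings, toil, or idle slack rather than business outcomes.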

Crucially, Justin challenges the misconception that AI’s role in developer productivity is to reduce effort by lowering cognitive load. “Flow state usually happens in a very high state of cognitive load,” Justin explains. And while developers and teams often report time savings from tools like Copilot, at the organizational level the improvement might not be significant.

The Risks of Misusing Productivity Metrics

Productivity metrics can make things worse if you’re not looking at the right ones. For example, Justin warns against misusing metrics like PR throughput. “If you’re interpreting that metric at the individual level, and you get rid of those developers, a lot of times those engineers were the resources that were enforcing higher quality.” These developers may not be pushing the most code, but they might be mentoring others or maintaining high standards. Removing them degrades the team’s output quality, which in turn means the reliability of its services declines.

He also cautions against over-optimization and gaming behavior. Used incorrectly, metrics create a culture where developers are tempted to do whatever the numbers ask of them, boosting figures rather than outcomes. Justin offers a better path: use comprehensive, continuous readiness metrics instead of checkbox compliance. The key is using data to uncover friction and support improvement, not to punish or rank individuals.

Evolving Beyond DORA: Modern Frameworks for Measuring Engineering Performance

While DORA metrics laid the foundation for engineering performance measurement, Justin argues that the framework only tells part of the story. That’s where the SPACE framework comes in. SPACE is an acronym for Satisfaction, Performance, Activity, Collaboration, and Efficiency, and it’s about how the dimensions affect one another. Instead of prescribing specific metrics, SPACE helps teams interpret and balance what really matters. Justin explains how researchers categorized productivity statements by these dimensions to assess whether changes to one had ripple effects on others, like performance or efficiency.
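As an illustration of that balancing act, the sketch below (the metric names and dashboard are hypothetical, not prescribed by SPACE) tags each tracked metric with the SPACE dimension it primarily reflects and flags any dimension the dashboard has no signal for:

```python
from enum import Enum

class Space(Enum):
    SATISFACTION = "Satisfaction"
    PERFORMANCE = "Performance"
    ACTIVITY = "Activity"
    COLLABORATION = "Collaboration"
    EFFICIENCY = "Efficiency"

# Hypothetical dashboard: metric name -> SPACE dimension it primarily reflects
dashboard = {
    "pr_throughput": Space.ACTIVITY,
    "change_failure_rate": Space.PERFORMANCE,
    "review_turnaround_time": Space.COLLABORATION,
    "time_in_flow_per_day": Space.EFFICIENCY,
}

covered = set(dashboard.values())
missing = [dim.value for dim in Space if dim not in covered]
if missing:
    # Here the dashboard tracks plenty of Activity but nothing on Satisfaction,
    # exactly the kind of imbalance SPACE is meant to surface.
    print(f"Unbalanced dashboard, no signal for: {', '.join(missing)}")
```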

Justin introduces the DX Core 4 as a distilled and portable solution: a modern framework that combines the measurable action of DORA, the balance of SPACE, and the human factors of DevX. The message is clear: metrics need to evolve as engineering practices mature. And if you're doing DORA, make sure you're doing it right, but remember that the framework is 10 years old.

How Developer Experience Ties Into Velocity and Reliability

Traditional DevOps thinking, Justin argues, treats engineers like factory workers and misses their human complexity. DevOps borrows its philosophy from the factory floor, but when you have highly opinionated knowledge workers, you have to consider what’s going on inside their heads.

Platform engineering steps in here by adding product management and developer experience into the equation.

Developer satisfaction isn’t a soft metric; it’s a critical performance driver. “Developer joy equals developer productivity,” Justin says. If developers “are not spending time on interesting work, then we can predict a lot of their overall satisfaction.” He ties this back to platform teams taking responsibility for the inner loop: “Operations teams are suddenly stewards of developer experience, there’s a shared responsibility which increases the overall psychological safety of developers.”

High-performing companies are already leading here: “Netflix, DoorDash, LinkedIn are all companies known for making really good investments in their platform.” And with frameworks like the DX Core 4, others can follow suit, focusing on metrics that reflect how engineers actually thrive.