Unlocking the Link Between AI Usage and Software Performance: What DORA Metrics Reveal for the Whole Tech Ecosystem
- SlashData Team
- Jun 11
- 4 min read
Artificial intelligence is transforming how software is built, deployed, and maintained. Yet, questions persist: Are AI-assisted tools really improving software delivery performance? Do high-performing teams use them more? And what lessons can the rest of the tech industry learn from these trends? These and more insights are being uncovered as part of the Developer Nation Series, currently in its 29th edition - and counting!
SlashData's latest research report, USAGE OF AI ASSISTANCE BETWEEN DORA PERFORMANCE GROUPS, tackles these questions by comparing the usage of generative AI tools across DORA performance groups (the industry-standard framework for measuring software delivery performance). From elite performers to lower-performing teams, the report investigates where AI helps, where it doesn't, and how these insights can inform not just developers, but marketing, product, HR, and executive leadership alike.

Why This Matters for Everyone in Tech
This isn’t just a developer story. Understanding the impact of AI tooling at a performance level informs:
- Marketing on how to position productivity-enhancing technologies
- HR and Talent Acquisition on what skill sets to recruit and where gaps may appear
- Product Leaders on prioritising AI capabilities in internal tools or user-facing platforms
- Executives on investment decisions and operational strategy
In short, this is market research for tech that drives value across the entire business ecosystem.

AI Tools Alone Don’t Improve Lead Time
Across all DORA performance groups, AI-assisted coding tools like GitHub Copilot showed minimal impact on lead time for code changes. AI adoption did not give even elite performers a significant edge over lower-performing teams on this metric.
Why? Because lead time is more influenced by internal processes, team coordination, and review cycles than by the speed of code writing alone.
Implication:
Organisations investing in AI should balance tool deployment with process refinement. Without the latter, AI becomes a speed bump, not a shortcut.
AI Boosts Deployment Frequency Among Elite Performers
Elite teams that deploy code frequently are also the highest adopters of AI-assisted development tools: 47% of elite performers use these tools, versus only 29% of low performers.
Why it matters:
Teams that ship often may be more open to experimentation and continuous improvement. These environments are fertile ground for AI adoption because they allow quick iteration and feedback.
Implication for product teams:
If you’re building AI features into dev tools or platforms, target high-performance environments first. They’re more likely to adopt and validate your innovations.
AI Chatbots Help Restore Service - But Come With Tradeoffs
When it comes to time to restore service, AI chatbots like ChatGPT were more commonly used by elite performers (50%) than by low performers (42%). These tools help developers recall information, identify fixes, and reduce downtime.
However, an interesting contradiction emerges: elite teams also have a high proportion of developers who don’t use any AI tools at all. This could reflect a reliance on well-documented, deterministic processes that work faster than AI in critical moments.
Implication for operations and support:
AI tools are helpful, but they must complement - not replace - structured incident response frameworks.
Low AI Usage Correlates with Lower Failure Rates
Among the most striking insights: elite performers on the change failure rate metric are the least likely to use AI-assisted development tools. Just 31% of elite performers use them, compared to more than 40% of the other groups.
Why? AI-generated code often lacks contextual awareness. If poorly understood or inadequately reviewed, it can increase the risk of service impairments and rollbacks.
Implication for engineering and quality teams:
A higher volume of AI usage doesn’t always mean better outcomes. Vetting, review, and knowledge-sharing processes remain vital.
The Industry Factor: Why SaaS and Regulated Sectors Differ
SlashData’s research also notes that SaaS companies have higher adoption of AI tools - but also higher failure rates. This is likely due to a culture of rapid deployment and lower tolerance for delay.
By contrast, financial services, energy, and government sectors showed the lowest AI adoption and the highest proportion of elite performers. These industries typically require rigorous testing, governance, and audit trails - all of which demand caution when using generative AI.
Insight for leadership:
Don’t blindly chase AI adoption. Tailor usage to your industry’s risk profile, regulatory environment, and tolerance for failure.
Bringing It Together: AI in Context, Not in Isolation
This report is a reminder that generative AI isn’t a silver bullet. Its success depends on the context in which it’s used: the team culture, the processes in place, and the performance goals driving development.
For decision-makers across the tech ecosystem, these findings are a call to action:
- Use AI adoption data to refine your hiring strategy
- Align product development with proven high-performance behaviours
- Set realistic expectations for AI-driven outcomes
- Invest in complementary capabilities: documentation, QA, and developer education
SlashData’s DORA-based analysis of AI adoption helps demystify where AI actually moves the needle in software performance - and where it doesn’t. More importantly, it shows that market research for tech is not just for engineers or analysts. It’s for anyone who wants to build smarter strategies, stronger teams, and more resilient products.
To explore more AI adoption data and performance insights, visit https://www.slashdata.co/free-resources.
You can also read more on the Developer Nation Series by accessing our blog library.