75% of professional developers are using AI-assisted tools: Insights on Developer Tools Usage and Measuring AI ROI
- SlashData Team
- 14 min read
This is a transcript of the key highlights from the live webinar on software development trends in Q1 2026. You can watch the full presentation in the following video.
Natasa Ljikar: Welcome to today’s session. We’ll be going over insights on developer tools usage and measuring AI tools ROI.
SlashData is a technology-focused analyst firm exploring the broader software development space as well as evolving AI technologies.
The data we’ll be discussing comes from SlashData’s biannual Omnibus Global Developer Surveys, specifically the latest Q1 2026 Developer Nation survey. In addition to the topics we’re covering today, the survey also explores cloud, mobile, web, games, and other areas, capturing a snapshot of the broader software development ecosystem.
The survey has a global reach, with over 12,400 valid responses from software developers in 95 countries.
I’d like to pass the mic to Bleona Bicaj, Principal Research Consultant and Product Strategist at SlashData.
The state of AI in software development in Q1 2026
Bleona Bicaj: Today I’ll be sharing data about the state of AI in software development.
Over the past two years, AI has transformed from a technology developers were curious about into something embedded in the everyday workflows of the majority of professional developers.
But using AI and quantifying its benefits are two very different things. That’s one of the things we wanted to explore in the most recent wave of our Developer Nation survey.
I’m going to walk you through two separate research findings that paint a picture of where the developer community stands right now with AI, both in terms of what they’re actually doing with these tools and whether they can prove that these tools are working.
By the end of this webinar, you’ll have three key takeaways:
- a clear picture of AI adoption among professional developers,
- evidence about the measurement gap that most organizations are facing,
- and an introduction to a new benchmarking product that directly addresses the questions technology leaders are asking about AI developer tools.
AI adoption among professional developers
This first report comes from our latest Global Developer Survey, where we asked over 10,000 professional developers a straightforward question:
Do you use or work with ML, AI models, tools, APIs, or services? And if so, in which of the following ways?
The data gives us a clear snapshot of where the market stands right now.
The majority of professional developers (75%) are using AI-assisted tools in some form, and almost half (45%) are adding AI functionality or developing AI models.
Within that 75% using AI-assisted tools, there are three distinct ways developers are engaging with AI, and they represent fundamentally different relationships with the technology.
First, 53% of professional developers use AI-assisted tools outside the coding environment, such as AI chatbots or agents like ChatGPT, Claude, or other LLMs integrated into their workflow to get answers to coding questions.
This is lightweight, low-friction adoption. Developers are not building with AI; they are mainly consulting with it.
Second, 42% of these developers are using AI-assisted development tools or agents integrated into the coding environment itself.
We’re talking about GitHub Copilot, JetBrains AI, Amazon Q Developer — tools that live in the IDE, understand the code base, and surface suggestions in real time.
This is a deeper level of integration because the AI is no longer just a side tool, but part of the development loop.
Third, a quarter of developers are using AI tools to generate creative assets for projects such as images, diagrams, documentation, or other non-code assets.
This is more niche, but it signals that AI is being used not just for code generation, but for the full spectrum of what goes into shipping software.
Now, 45% of developers report either adding AI functionality to their applications or developing AI models and infrastructure. That’s a different category entirely because these developers aren’t just using AI as a productivity tool; they’re actually building with it.
Within this group, we also see another split.
Around a third of developers are adding AI functionality to their applications either through fully managed AI services or APIs, or self-managed local AI models.
In the managed case, they’re leveraging the service provider’s model rather than training or hosting their own. This is fast, managed, and abstracts away much of the infrastructure complexity.
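To make that split concrete, here is a minimal sketch in Python contrasting the two integration styles: calling a fully managed AI service over its API versus sending the same request to a self-managed model served on your own infrastructure. The endpoint URLs, model names, and environment variable are illustrative assumptions, not providers or tools named in the survey.

```python
import os
import requests

def ask_model(base_url: str, api_key: str | None, model: str, prompt: str) -> str:
    """Send a single chat request to an OpenAI-compatible completions endpoint."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    resp = requests.post(f"{base_url}/chat/completions", json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

prompt = "Explain the difference between a mutex and a semaphore."

# Fully managed service: the provider hosts, scales, and updates the model.
managed_answer = ask_model(
    base_url="https://api.openai.com/v1",   # illustrative managed provider
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4o-mini",
    prompt=prompt,
)

# Self-managed local model: same request shape, but the model runs on your own
# infrastructure via any runtime that exposes an OpenAI-compatible endpoint.
local_answer = ask_model(
    base_url="http://localhost:8000/v1",    # e.g. a self-hosted inference server
    api_key=None,
    model="llama-3.1-8b-instruct",          # illustrative local model name
    prompt=prompt,
)
```

The request shape is identical in both cases; what changes is who operates the model and the infrastructure behind the endpoint.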
The remaining developers represent increasingly specialist activities such as customizing pre-trained AI models and fine-tuning hyperparameters.
These are the developers building the foundation that the first group of tool users depend on.
For the purposes of today’s discussion, we’re going to focus on this first, largest group: developers using AI-assisted tools, because this is the group experiencing the most immediate pressure to justify investment, the group most likely to be affected by organization-wide decisions around AI tooling, and the group we’ll be diving into with our new benchmarking product.
The 75% using these tools represent the market we need to understand in detail.
Before we dive into those results, I want to show how we got to this adoption level.
This chart tracks developer involvement with AI over the past two years, from Q1 2024 to Q1 2026.
The story is one of consolidation at the top and migration toward more sophisticated use cases.
When it comes to using AI-assisted tools, this has grown from 61% in Q1 2024 to 75% in Q1 2026.
That is steady, consistent growth over two years, with incremental gains in every survey wave.
We’re not seeing early adopters jumping in and then plateauing. This continued growth tells us that these tools are sticky and solving real problems for developers.
Adding AI functionality or developing AI models also shows an upward trend. It started at about 32% two years ago and is now at 45%.
The one thing that is declining is the share of developers saying that they don’t use or work with ML, AI models, tools, APIs, or services at all.
Two years ago, that was about 28% of the market, and today it’s down to 12%.
The holdouts still exist, but they’re clearly a shrinking minority.
Measuring AI ROI
This brings us to the second report and the harder question of how technology leaders know that this investment is actually working and bringing productivity gains.
AI has long been embedded in technology organizations, powering things like search engines and recommendation algorithms. But something has shifted in the past two years: it’s no longer a background technology. It’s visible, strategic, and expensive.
Generative AI chatbots, coding assistants, and enterprise automation tools have moved to the center of business planning.
As AI investment scales, so does the pressure to justify it to the board and to finance.
Boards want evidence, finance teams want numbers, and developers caught in the middle are discovering that believing AI works and being able to prove it are two very different things.
The second report examines how developers in leadership roles — or technology leaders — are experiencing and evaluating AI value: how they rate what it delivers, whether they measure it, and how rigorous or formal those measurements are in practice.
The findings come from more than 2,000 professional developers working in leadership positions, drawn from SlashData’s 31st Global Developer Survey.
Today we’re providing an overview of the headline findings.
For a full deep dive, including breakdowns by role, sector, region, and agentic AI maturity level, we prepared a more comprehensive premium report called The State of AI ROI Measurement in Software Teams.
Since we’re now focusing only on professional developers in leadership roles, I want to set the scene by saying that 80% of them use AI-assisted tools, which is five percentage points higher than professional developers overall.
From that group, 75% rate AI tools as valuable or extremely valuable relative to the cost and effort required.
This is a big number, so it deserves both celebration and scrutiny.
On one hand, it reflects a genuine shift in the market. It confirms that AI tooling has become core infrastructure for most software engineering organizations, and the people close to that transition — the engineering leaders using these tools every day — are overwhelmingly positive.
27% even describe the benefits as far exceeding the cost and effort.
But confidence and evidence are not the same thing, and the broader market context makes that distinction urgent.
For example, a study from S&P Global found that the share of companies abandoning the majority of their AI initiatives before reaching production surged from 17% to 42% in a single year.
Somewhere between pilot and production, the reality of the work (data quality challenges, integration complexity, and unclear ROI) caught up with the enthusiasm.
Gartner also predicted that at least 30% of generative AI projects will be abandoned after proof of concept, citing three primary causes:
- poor data quality,
- unclear business value,
- inadequate risk controls.
This unclear business value is the issue we’re examining today.
When we asked technology leaders directly, “Do you measure the impact or ROI of the AI tools, models, or services that your teams use?”, 88% said yes.
On the surface, that’s reassuring. An overwhelming majority is actively tracking AI value, and only 12% seem to be making decisions about expensive tools based on trial and error.
But when you look past this binary of measuring versus not measuring, a more complicated picture emerges.
We can see that 39% of technology leaders who are measuring AI ROI are doing so through formal or automated processes: things like regular KPI tracking, integrated dashboards, and automated reporting systems.
This is the gold standard. Metrics are being collected continuously, tracked systematically, and when there’s a question about AI value, the data is already there.
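As an illustration of what formal, automated measurement can look like in practice, here is a minimal sketch of a scheduled collection job: it pulls a few AI-tooling KPIs from a hypothetical internal metrics endpoint and appends them to a longitudinal log that a dashboard can read. The endpoint, field names, and file path are assumptions for illustration only.

```python
import csv
import datetime
import pathlib
import requests

METRICS_URL = "https://metrics.internal.example.com/ai-tooling/summary"  # hypothetical endpoint
LOG_PATH = pathlib.Path("ai_tooling_kpis.csv")

def collect_daily_kpis() -> None:
    """Fetch today's AI-tooling KPIs and append them to the longitudinal log."""
    data = requests.get(METRICS_URL, timeout=30).json()
    row = {
        "date": datetime.date.today().isoformat(),
        "suggestion_acceptance_rate": data.get("acceptance_rate"),  # hypothetical field
        "active_licensed_users": data.get("active_users"),          # hypothetical field
        "median_pr_cycle_time_hours": data.get("pr_cycle_time"),    # hypothetical field
    }
    write_header = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(row)

if __name__ == "__main__":
    # Run from a scheduler (cron, a CI pipeline) so tracking never depends on
    # someone remembering to trigger it before a quarterly review.
    collect_daily_kpis()
```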
Another 41% describe their approach as defined but manual.
They have structure — quarterly reviews, internal surveys, even periodic conversations between developers and engineering leads.
There’s a framework, but it depends on someone to trigger it.
So when it’s time for the quarterly review, someone organizes the conversation, gathers the feedback, and documents the outcome.
It’s rigorous in intent, but episodic in execution.
Then we have 17% operating informally or ad hoc, through occasional discussions and subjective impressions.
There’s no consistent tracking.
With this, I want to reframe that 88% headline.
Eighty-eight percent of organizations believe they are measuring AI value, but only 39% of those measuring rely on processes that don’t depend on someone remembering to initiate them.
If your organization sits in that manual middle — and many do — it’s worth understanding what this means operationally.
Manual measurement, at 41%, often involves structured efforts like quarterly reviews or team surveys.
But it also carries structural weaknesses that make it a poor foundation for high-stakes investment decisions like AI tooling.
First, it is vulnerable to deprioritization.
When delivery pressure rises, when teams are shipping a critical feature or fighting a production issue, the quarterly AI review is often the first thing that slips.
The measurement cadence breaks down precisely when it’s most needed: during periods of change or uncertainty, when the organization is deciding whether to double down on AI or pull back.
Second, it is susceptible to recency bias.
Informal and periodic processes tend to weigh the most recent and most visible interactions disproportionately.
A high-profile failure shortly before a review, such as a hallucination in an AI suggestion, shapes the assessment more than months of quiet incremental productivity gains that were never explicitly tracked.
Third, manual reviews rarely produce the evidence that finance teams find convincing.
We’ve been running interviews with software engineering leaders about the future of developer teams and how ROI measurement is happening within teams.
We find that formality, actual numbers, and longitudinal data are often lacking.
Pressure from the board keeps increasing, so it’s paramount that teams are able to provide that data.
Gartner noted that a major challenge for organizations is justifying substantial investment for productivity enhancement, which can be difficult to translate directly into financial benefits.
Historically, CFOs have not been comfortable investing in indirect future value. Without longitudinal data connected to business outcomes, making the case for that investment is nearly impossible.
Among teams with no measurement in place, 59% rate AI as valuable and 13% rate it as not valuable.
Among teams that are measuring in some form, 78% rate it as valuable, compared to 4% finding it not valuable.
So the difference between non-measurers and measurers is a 19 percentage point gap in perceived value.
When we isolate teams measuring formally, those with automated processes, the figure rises further to 85% rating AI as valuable.
The gap is huge.
The intuitive explanation is straightforward: measurement captures value. Teams that track metrics are able to see what AI is actually delivering, so they rate it more highly.
But there might be something deeper happening.
We pose another hypothesis: measurement doesn’t just capture value, it helps create it.
When teams are tracking metrics consistently, they also use AI tools more deliberately. They start routing tasks toward AI where it demonstrably helps and avoiding it where it doesn’t.
They start to develop better practices. They correct for the asymmetry between the spectacular failure that sticks in memory and the hundreds of routine successes that don’t.
Systematic tracking also creates feedback loops.
You measure, you see what’s working, you adjust, and then you measure again.
The process gets tighter and the value increases.
Measurement doesn’t just answer the question, “Is AI working?” It also changes team behavior in ways that make the answer more likely to be yes.
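As a minimal sketch of that feedback loop, assuming a team already logs per-task-type outcomes (the task categories, metric, and threshold below are invented for illustration), the tracked data itself can decide where AI assistance is encouraged in the next cycle:

```python
from statistics import mean

# Hypothetical tracked outcomes: minutes saved per task (negative = rework cost
# exceeded the time saved), grouped by task type.
tracked_minutes_saved = {
    "boilerplate_generation": [18, 22, 15, 30, 25],
    "unit_test_scaffolding": [12, 9, 14, 11],
    "legacy_refactoring": [-20, 5, -35, -10],  # AI suggestions caused rework here
}

def recommend_ai_usage(outcomes: dict[str, list[float]], threshold: float = 5.0) -> dict[str, bool]:
    """Recommend AI assistance only for task types with a clear measured benefit."""
    return {task: mean(saved) >= threshold for task, saved in outcomes.items()}

# Each review cycle: measure, adjust the routing, then measure again under the new routing.
print(recommend_ai_usage(tracked_minutes_saved))
# {'boilerplate_generation': True, 'unit_test_scaffolding': True, 'legacy_refactoring': False}
```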
Organization size and measurement maturity
We also find that the clearest organization-size-related divide in our data set sits at the informal end of the measurement spectrum.
Among freelancers and organizations with up to 100 employees, 25% rely on occasional discussions and subjective impressions.
That’s more than double the rate of enterprises with more than 1,000 employees and notably higher than midsize firms.
The flip is true at the formal end.
46% of enterprises have KPI tracking, dashboards, or automated monitoring in place, compared to 41% of midsize firms and only 30% of smaller organizations.
That’s a 16-point gap between the smallest and largest organizations.
We also see that every organization type has landed in the manual middle in roughly equal proportion, regardless of resources or scale.
That consistency tells us that this manual tier doesn’t function as a stepping stone toward formal measurement.
It is more like a default state that organizations settle into and tend to stay in.
For smaller organizations, the practical priority is not building dashboard infrastructure from scratch, but escaping the informal tier entirely.
Moving from ad hoc impressions to even one defined manual process closes the most consequential gap in the data.
For midsize organizations already in the manual tier, the formal measurement gap relative to enterprises is the number worth closing, because it connects directly to the value perception gap between those groups.
If that gap is closed, the organization is not just improving its measurement process, but likely also improving how senior leaders perceive the value of AI investments.
What this means for technology leaders
The question is no longer whether to adopt AI.
The question is whether organizations have built the internal capabilities to know what that adoption is actually worth.
The data suggests that most have not, at least not in a formal way.
The gap between claiming to measure and measuring formally and rigorously is wide, and it carries real consequences for the quality of evidence available to senior leaders.
When a board or CFO asks, “Is this AI investment paying off?”, the answer depends almost entirely on whether the organization has systematic data or quarterly impressions.
When AI initiatives start being abandoned across the industry, boards will ask even harder questions.
Organizations with measurement frameworks in place will have answers. Others will be scrambling.
Without systematic tracking, it’s difficult to tell whether Copilot is saving more time than Q Developer, whether investment in an agentic AI platform is actually reducing toil, or whether the organization is paying for a tool that nobody is using effectively or to its full potential.
The key point is that knowing how your measurement practices compare to the market is the starting point for building that capability.
AI Developer Tool Benchmark
We’ve been listening to questions from the market about which tools are actually delivering, what’s worth the investment, how they’re being used, and what’s driving adoption.
We’ve developed something that is a direct response to that.
We’ve recently introduced the AI Developer Tool Benchmark, a new research product specifically designed to give technology leaders the data they need to make informed decisions about AI developer tooling.
Whether you’re a vendor building and selling AI products or a buyer looking to choose the right tool for the team or for your products, this data will be useful.
We launched a pilot version in Q4 last year to pressure-test the product in the market and gathered extensive feedback from clients.
In April, we’ll be launching the first official benchmark, covering more than 2,400 professional developers worldwide.
The benchmark covers:
- an overview of 20 AI developer tools scored on adoption, usage intensity, and satisfaction among 2,400 professional developers,
- how developers actually work — whether they use a single tool or stack multiple tools together, how they make tooling decisions, and what’s driving preference,
- task priorities and satisfaction, including code generation, debugging, and documentation,
- cost and pricing evaluation, including how developers perceive value relative to cost,
- decision drivers and trust perceptions,
- productivity and measurable impact, including whether developers perceive AI tools as making them more productive in the sense of saving time,
- and a deep-dive section that changes each wave based on client feedback and market interest.
This time, we decided to focus on AI agents, mainly the agentic mode of the AI coding tools that we ask about.
This is fundamentally different from using them simply as coding assistants.
We built this section to understand how teams are experimenting with agents, the level of autonomy they’re providing, what blockers they’re hitting, and where they see opportunity.
Q&A
Natasa Ljikar: You mentioned that 42% of companies are abandoning AI initiatives, but 80% of tech leaders say that AI is valuable.
That seems contradictory.
Are the leaders saying it’s valuable the same ones whose companies are abandoning initiatives? Or is there a different group of people making those decisions?
Bleona Bicaj: The 42% abandoning is coming from an external source, and the 80% is coming from ours.
The 80% are engineering leaders and developers who are using AI tools and finding them valuable in their day-to-day work.
The 42% abandonment rate that S&P Global is tracking refers to projects that got greenlit at the board or executive level, often broader AI transformation initiatives, not just developer tools.
So you can have engineering leaders genuinely finding Copilot valuable while the company’s enterprise AI data pipeline project — something much larger and more complex — gets abandoned because the data quality wasn’t there or the business case fell apart during implementation.
Developer-level adoption and enterprise-level success are two different problems.
That’s exactly why measurement matters. If you’re only measuring at the team level — “my engineers like this tool” — you’re missing the executive or strategic question of whether this is solving the business problem at scale.
And that’s where the abandoned projects struggle.
Natasa Ljikar: You talked about how manual measurement is vulnerable to deprioritization and recency bias.
But doesn’t the act of doing a quarterly review, even if it’s manual and imperfect, create accountability?
Isn’t that better than nothing?
Bleona Bicaj: Manual measurement is genuinely better than nothing.
That quarterly review does create accountability internally.
The problem isn’t that it exists; the problem is what you can’t see with it, and that’s where the gap between manual and formal processes lies.
When you’re doing quarterly reviews, you’re documenting past impressions.
What you actually need is a more real-time signal about what’s working and what isn’t: more up-to-date data, so you can adjust now rather than in three months when the next review comes around.
Teams in the manual tier often end up doing the measurement work twice.
They do the quarterly review, report what they find, and then when the CFO asks follow-up questions three months later, they scramble to do a special investigation because the quarterly review data wasn’t structured to answer those new questions.
Formal measurement systems are expensive upfront, but they answer both the questions you knew you’d have and the ones you didn’t.
Natasa Ljikar: How is value typically measured in organizations?
Bleona Bicaj: This is something we’re trying to answer through interviews.
In our AI Developer Tool Benchmark study, we typically ask developers to tell us about time saved, for example, or PR merge rates — things that are easier to quantify and that people are measuring through KPIs.
But value is very subjective.
From the discussions we’ve been having with software engineering leaders, many are trying to include the subjective, human factor as well.
They are trying to understand how developers themselves perceive the help they’re receiving from these AI developer tools — not just in terms of productivity, but also how they feel about them potentially replacing their roles.
So this is something that may affect psychological safety, which we keep hearing about in interviews.
Value is being interpreted differently across organizations.
There isn’t one exact answer yet, but it’s something we’re exploring through the Future of Developer Teams interviews, with the aim of getting both the quantitative and qualitative sides.
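As an illustration of the more quantifiable end of that spectrum, here is a minimal sketch that pulls recently closed pull requests from the GitHub REST API and computes a merge rate and median time-to-merge. Comparing such figures before and after an AI tool rollout is one possible signal a team might track, not the benchmark’s methodology.

```python
import datetime
import requests

def pr_merge_stats(owner: str, repo: str, token: str, days: int = 30) -> dict:
    """Compute merge rate and median time-to-merge for recently closed pull requests."""
    since = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=days)
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 100},
        headers={"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()

    def ts(value: str) -> datetime.datetime:
        return datetime.datetime.fromisoformat(value.replace("Z", "+00:00"))

    recent = [pr for pr in resp.json() if ts(pr["closed_at"]) >= since]
    merged = [pr for pr in recent if pr["merged_at"] is not None]
    hours_to_merge = sorted(
        (ts(pr["merged_at"]) - ts(pr["created_at"])).total_seconds() / 3600 for pr in merged
    )
    return {
        "closed_prs": len(recent),
        "merge_rate": len(merged) / len(recent) if recent else None,
        "median_hours_to_merge": hours_to_merge[len(hours_to_merge) // 2] if hours_to_merge else None,
    }
```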
Natasa Ljikar: Have you run any causal impact studies measuring the impact of AI on coding productivity?
Bleona Bicaj: That is what we’re working toward with the AI Developer Tool Benchmark, which is a quarterly product that we keep improving.
If someone is interested in something that the benchmark doesn’t carry in a given quarter, we can add questions in other quarters and make it a richer product.
In terms of trust- and performance-related metrics, we can provide answers for the 20 AI developer tools we’re measuring.
Natasa Ljikar: If any other questions arise, please feel free to reach out to us.
You will also receive the recording of this session in your inbox, along with the report when it’s ready.
Bleona Bicaj: Thank you.