
Search Results


  • 14 software developer trends & insights you need to know in Q1 2026

    The AI transformation currently taking over the software development industry has already shown that many aspects of this revolution are here to stay. Amidst rapidly changing landscapes, we have always found solace in the reality of data, sourced through best-in-class research.

The software/AI industry is drowning in noise: hype cycles, vendor spin, conflicting “evidence”, model benchmarks that don't reflect real-world use, and adoption statistics that are basically marketing. Executives in this space have been burned repeatedly by analysis that turned out to be extrapolation dressed up as data. CTOs, CPOs, and VPs of Engineering make daily build/buy/partner decisions, set AI adoption strategies, and place platform bets. These are high-stakes, high-regret decisions, and the cost of acting on bad analysis is enormous.

We’re proud to be able to tell them what's actually happening: how software gets built, what developers need, what teams prioritise, and how AI is actually adopted. With that in mind, this article is a “highlight reel” of the top findings we discovered over the past few months. A new, updated batch of insights is coming within the next few weeks. Join the newsletter to get updated first. Here’s what you need to know, now (with sources, because we’re into insights, not clickbait). If you want to know something very specific, we're here for you.

Artificial Intelligence in software development highlights in Q1 2026

AI on Edge: An on-device focus

What we found: Smartphones and tablets are rapidly becoming edge AI targets, driving demand for NPU-optimised on-device models.

Source: Integration of AI into edge devices

Edge devices are becoming an increasingly important way for artificial intelligence (AI) to reach end users, from smartphones and laptops to wearables, industrial machines, and connected vehicles. 
This report aims to understand how developers are currently integrating AI models into edge devices and where the main opportunities to reduce friction lie. Based on a global survey of professional software developers who reported building or implementing AI functionality in the 30th edition of our global Developer Nation survey, the analysis details the widespread usage of edge AI among these developers, regional differences, the devices they target, the approaches they use, and the main challenges they face when deploying models on the edge.

AI in Game Development

What we found: Over half of game developers fear AI will further reduce job opportunities amid an already fragile industry marked by widespread layoffs.

Source: The State of Game Development 2025

In this report, we take a look at today’s landscape of game development. We examine who game developers are, the technologies, engines, and programming languages they rely on, the platforms they target, and the types of games they create. The report also explores how game developers perceive the impact of AI in the industry, shedding light on both the opportunities and the challenges it introduces.

AI tool usage across professional developers (full report free to access)

What we found: As of Q3 2025, ChatGPT and GitHub Copilot lead in adoption and satisfaction as AI-assisted coding tools among professional developers, reinforcing their position as the safest bets for large-scale rollouts.

Source: Choosing the right AI coding tools for your team

The rapid rise of AI-assisted coding tools marks a pivotal moment in software development. What began as experimental add-ons has quickly evolved into a crowded market of products, each claiming to boost productivity and transform workflows. Yet with so many options and so much noise, it can be difficult to know which tools are truly delivering value. 
By examining adoption, satisfaction, and trust-related attributes such as accuracy, support, and security, this report provides a data-driven benchmark of which AI coding tools developers are embracing and which they rate most highly. The analysis reveals where usage aligns with satisfaction, where trust is earned through consistent delivery, and where gaps remain between expectations and reality. For engineering leaders deciding which tools to integrate and scale across their teams, the insights in this report help distinguish the coding tools that enable productivity from those still struggling to meet developer needs.

AI Blockers: why developers don’t build GenAI apps

What we found: Most developers remain open to using generative AI if key concerns are addressed. Stronger privacy and security controls are the top confidence drivers, especially for those facing data and compliance barriers.

Source: Understanding the reluctance towards building generative AI applications

The aim of this report is to understand what prevents developers from integrating generative AI functionality into their applications and what could increase their confidence to do so. For vendors of generative AI platforms and APIs, these findings highlight the areas where developers most need reassurance and support, from robust data protection and clear documentation to seamless integration paths.

Agentic AI in software projects (full report free to access)

What we found: Agentic AI is moving beyond the experimental stage. Of those integrating AI into their applications, half have already deployed agentic AI architectures to production.

Source: The state of agentic AI adoption in software projects

Agentic AI is emerging as one of the most transformative shifts in how companies design and deploy intelligent systems. 
This mini-report analyses insights from over 8,400 professional developers to help CTOs and engineering leaders navigate the rapidly evolving agentic AI landscape and make informed architecture and use case decisions. We explore how the implementation of agentic AI varies by company size and project type, the types of agentic architectures being deployed to production, and the use cases developers are targeting.

Programming language communities and software developer population size

There are 48.4 million developers around the world

“How Many Developers Are There in the World?” is our most frequently asked question here at SlashData, from Product and Marketing people who want to measure adoption, executives who care about their Total Addressable Market (TAM), and software industry journalists and enthusiasts. To help them all with their goals, we happily share this number and update it as new data becomes available. Go ahead and confidently use this number in your pitch, BoD presentation, or article. We follow a strict methodology to ensure that this is the most accurate estimate you can get.

Developer Population Trends Tracking Page

JavaScript is the most popular language for software development (full report free to access)

What we found: As of Q3 2025, JavaScript was the largest language community, with approximately 27M developers worldwide.

Source: Sizing programming language communities

Programming languages sit at the heart of the software development ecosystem, shaping not only the kinds of projects developers work on but also the communities they become part of. For product executives, understanding language adoption is more than an academic exercise, as it directly informs decisions about which SDKs, APIs, and platform features to prioritise. Choosing the right languages to support can expand the reach of your platform, lower barriers for developers, and ultimately drive product adoption. 
Assessing how widely used a programming language is and estimating the size of each language community in absolute terms remains a challenge. The estimates presented here are based on two key data sources. First is our independent estimate of the global number of software developers, which we have been publishing for more than eight years. Second is our large-scale surveys, which reach tens of thousands of developers every six months.

A look into DevOps

DevOps: Lack of standardisation is connected to less security

What we found: Organisations without DevOps standardisation show two to three times lower rates of integrating security practices into their CI/CD pipelines.

Source: Impact of Platform Strategies on Security Practices in Software Development

This report examines the security practices that developers integrate into their CI/CD pipelines, with a particular focus on how platform standardisation approaches influence which security tools see adoption and success. In this report, platform standardisation refers to organisation-wide standardisation strategies for DevOps practices, and we categorise platform configurations into five distinct groups: specialised internal developer platforms (IDPs), dedicated teams or individuals responsible for developer experience, unified systems for managing DevOps processes, curated lists of approved tools, and organisations engaging in none of these approaches. This report is based on data from SlashData’s 30th edition of the Developer Nation survey and represents the adoption patterns of more than 4,700 professional developers using CI/CD pipelines.
Company size and industry shape deployment strategies (full report free to access)

What we found: As organisations grow, two overarching strategies for backend DevOps maturity emerge: some empower their developers to use a wide range of advanced technologies effectively, while others abstract infrastructure away behind internal development platforms, leading developers to prioritise business needs.

Source: Benchmarking backend and cloud technology strategies

This report examines cloud and server-side technology adoption patterns across organisation sizes and industry sectors, revealing insights that challenge conventional wisdom about technology maturity. We explore how multi-environment strategies evolve with organisational scale, why container adoption varies across company sizes, and how platform teams create infrastructure capabilities that are frequently invisible to their developers. Through analysis of deployment strategies, modern architecture adoption, and industry-specific technology leadership, we provide IT executives with frameworks for evaluating their technology strategies against relevant peer organisations rather than generic industry trends. The findings reveal that successful technology adoption depends less on following best practices and more on aligning technology choices with organisational capabilities, industry requirements, and strategic priorities.

Cloud updates you should know in Q1 2026

Cloud-native development (update coming in March 2026)

What we found: There are 15.6M cloud native developers, of which 9.3M are backend developers.

Source: State of Cloud Native Development Q3 2025 (full report free to access)

This report explores the current state and scale of cloud native development in Q3 2025. It provides estimates of the cloud-native developer population among backend developers, among those working in machine learning or AI, and across the entire developer population. 
The report also covers the popularity of different cloud native technologies and approaches among backend developers, revealing the sophistication path organisations often go through. In addition, it explores trends in cloud deployment approaches, as well as the technologies developers are using in their backend or cloud development processes and services. We also provide estimates of the proportion of cloud nativeness across different types of development (e.g. mobile, desktop, DevOps, etc.).

Data residency: Compliance in practice

What we found: Collaboration between developers and legal teams is the leading challenge developers cite.

Source: Data Residency Compliance Challenges and Organisational Responsibility

This report provides an examination of how organisations are coping with data residency compliance in practice. It explores the primary challenges developers face when building compliant services, how these challenges vary by region and organisation size, and where responsibility for compliance tasks falls within organisations. The analysis reveals significant regional differences in both the nature of compliance challenges and how organisations structure accountability, offering insights into how cloud service providers (CSPs) should design their compliance offerings and which capabilities matter most to customers in different markets.

FinOps beyond cost-cutting (full report free to access)

What we found: Mid-sized organisations lead in FinOps adoption, likely due to scaling cloud complexity. Budget monitoring and reporting are the most common FinOps activities, highlighting the importance of visibility into cloud spending.

Source: The State of FinOps in 2025

Cloud spending can become one of the largest operational expenses for tech companies. 
It is often unpredictable due to elastic consumption models, hundreds of services and pricing models, and decentralised purchasing by developer teams (particularly in large companies). Cloud financial management (FinOps) sits at the intersection of finance, engineering, and product, ensuring that cloud resources are used efficiently and that cloud spending aligns with business value, not just cost-cutting. In this report, we examine insights from over 6,300 professional developers working for companies with at least 2 employees who use cloud services. We’ll explore the adoption rate of FinOps practices among developer teams and how it varies by company size and region. Additionally, we’ll cover how teams implement FinOps by looking at the specific practices they have embraced. The report is designed to help technical leaders benchmark their organisations against industry peers and make informed decisions about where to focus their FinOps efforts.

AR, VR and IIoT software developer trends

AR and VR: AR/VR practitioner numbers remain stable

What we found: There are approximately five million AR/VR practitioners worldwide, a figure that has remained relatively stable over the past two years. 83% of AR/VR practitioners are leveraging AI across multiple use cases, from coding to content creation.

Source: The State of AR/VR Development 2025

This report provides a detailed examination of today’s XR landscape. It explores how many practitioners there are, how they participate in the ecosystem, the types of projects they are building, and the platforms they target. It also investigates how XR practitioners are leveraging AI and which other technologies make up their stack. Finally, the report looks at the main challenges XR practitioners face today and looks ahead to the future of the AR and VR industries, capturing XR practitioners’ and other developers’ perspectives on its direction for the next decade. 
IIoT onboarding is frictionless

What we found: First IIoT development board onboarding is largely frictionless, as more than 65% of professional developers involved in IIoT projects find “setting up the hardware” and “running a basic project” easy.

Source: IIoT accessibility

This report explores how developers begin their IIoT journey: which development boards they start with, how they experience onboarding across tasks and ecosystems, and how these patterns differ by professional status, experience level, and region. The findings highlight where the industry is lowering technical barriers and where better documentation, community support, or learning pathways are still needed. Insights come from the 30th edition of the Developer Nation survey, which ran from June to August 2025 and reached 830 developers worldwide involved in IIoT projects.

After 20+ years of researching the software industry, we have a huge (HUGE) data library we can tap into to answer your questions. Our analysts are subject-matter experts on software development topics and can foresee trends and help you power your strategy with evidence. Let's dive into your priorities together. Get in touch.

About the author

Stathis Georgakopoulos, Marketing Manager at SlashData

Stathis leads SlashData's marketing activities and product marketing and loves building helpful content that turns complex research into practical decisions. He focuses on setting the table for launches and campaigns, and has a soft spot for content marketing and terrible puns.

  • What game developers actually think about AI

    The games industry has faced significant turbulence in recent years, marked by widespread layoffs, reduced investment, and declining market confidence. Earlier this month, Google’s Project Genie announcement triggered a sharp drop in several major game stocks, including Unity, Roblox, and Take-Two, further highlighting the broader uncertainty surrounding the industry’s direction. Against this backdrop, AI has emerged as both a widely adopted tool and a highly contested topic. Player communities have pushed back visibly: review-bombing titles suspected of using AI-generated art, criticising Ubisoft’s AI-powered NPC system, and prompting Valve to update Steam’s policies to require developers to disclose AI-generated content following sustained community pressure. In some cases, studios have even cancelled projects and publicly committed to avoiding AI altogether.

76% of professional game developers are currently using AI to assist with coding or generate creative assets

Despite this backlash, the data shows that AI adoption is already mainstream among game developers. According to our Q3 2025 survey of more than 2,000 game developers, 66% are currently using AI to assist with coding or generate creative assets. Among professional game developers, that figure rises to 76%. In this blog post, we’ll explore how game developers perceive AI’s impact across several key dimensions. The full findings, along with insights into the platforms developers target, the engines they use, and the types of games they build, are available in The State of Game Development 2025 report.

AI accelerates game production and might help indies rival big studios, but raises alarms over shrinking career opportunities

So how do game developers themselves evaluate AI’s impact? Beyond public backlash, our data reveals a more nuanced perspective from within the industry. The most widely shared perception is that AI accelerates the game development process. 
Over two-thirds (68%) of game developers agree with this statement, highlighting how AI is reducing friction from concept to execution. This reflects AI’s role in accelerating coding workflows (e.g. boilerplate code, debugging, and troubleshooting), as well as in enabling faster prototyping and iteration for assets.

68% of game developers agree that AI accelerates the game development process.

A closely related finding is that 62% of game developers believe AI will make it easier for indie developers and smaller studios to compete with large publishers. However, indie developers themselves are the least convinced. Only 58% agree, compared to over 70% of those working for publishers or large studios. Indie developers might recognise that while AI can amplify their capabilities, it also scales the advantages of well-resourced studios, enabling them to produce more content, iterate faster, and optimise performance at greater scale. Moreover, many indie game developers might face their biggest challenges in areas like distribution, visibility, and marketing, which remain largely beyond AI’s scope. When it comes to career opportunities, just over half (55%) of game developers believe that AI will reduce the number of roles and opportunities available in the industry. This concern sits within a broader context of instability across the tech sector, one that has disproportionately affected games. The relationship between AI adoption and employment uncertainty remains debated. On the one hand, AI can augment productivity and create demand for new hybrid skill sets. On the other hand, it risks displacing entry-level responsibilities, as automation absorbs many of the structured, repetitive tasks that once served as gateways for junior developers. 
As seen in the Stanford Digital Economy study, for jobs with high AI exposure, such as IT and software engineering, employment has been steadily declining for early-career professionals while increasing for more seasoned ones. If this pattern extends to game development, the industry may face a structural challenge: fewer entry points for newcomers, combined with growing demand for senior talent to oversee, integrate, and validate AI systems.

Game developers believe AI enhances player experience, while noting bugs and creativity risks

Despite the backlash from some players towards games that use AI, 62% of game developers believe that integrating AI improves the overall player experience. From adaptive difficulty systems and more responsive AI-powered NPCs to personalised storylines and dynamic environments, AI is viewed as a tool that can potentially enable richer, more immersive, and more reactive gameplay. However, confidence is lower among developers in creative roles (art, asset production, audio), where 56% agree. Concerns about originality further illustrate this divide. Overall, 52% of game developers believe AI poses a threat to creative originality, rising to 59% among those involved in creative activities. Many of these creative practitioners might fear that AI-generated content, trained on similar datasets and optimised for popular aesthetics, leads to homogenised content that prioritises speed and scale over originality, making games feel increasingly alike. For many game developers, the drive for efficiency risks dulling the diversity and individuality that define great games if AI is adopted without strong creative direction. There are also technical reservations. Although most developers acknowledge AI’s productivity benefits, 53% agree it increases the risk of bugs or unpredictable behaviour in games. Unlike traditional rule-based systems, AI models can behave in ways that are difficult to fully anticipate, test, or reproduce. 
This unpredictability can lead to broken dialogue trees, erratic NPC behaviour, balance issues, or edge-case logic loops that only emerge under specific player interactions. As a result, while AI can enhance immersion, it can also introduce new layers of systemic complexity that demand stronger oversight, validation processes, and design safeguards. Taken together, the findings in this blog post suggest that AI adoption in the game development industry is widely perceived as beneficial, but not without meaningful trade-offs. While AI is transforming workflows and accelerating production, it also raises concerns about shrinking career opportunities, creative homogenisation, and technical unpredictability. Ultimately, AI’s role in game development will be shaped not only by what the technology makes possible, but by the strategic decisions developers make about how, and how far, to integrate it.

Dive deeper into the game development world. Explore what is shaping the industry with the help of our analysts and 20+ years of software development data. Book a call with Natasa and Petro.

About the author

Álvaro Ruiz Cubero, Research Manager, SlashData

Álvaro is a market research analyst with a background in strategy and operations consulting. He holds a Master’s in Business Management and believes in the power of data-driven decision-making. Álvaro is passionate about helping businesses tackle complex business challenges and make strategic decisions that are backed by thorough research and analysis.

  • Rapid growth in edge AI developers and where the opportunity lies

    Edge devices are becoming an increasingly important way for artificial intelligence (AI) to reach end users, from smartphones and laptops to wearables, industrial machines, and connected vehicles. Running models directly on these devices can improve responsiveness, support offline or low-connectivity scenarios, and reduce the need to transmit sensitive data to the cloud. At the same time, doing more on the device introduces new constraints around compute, power, storage, and how data privacy and security are managed in practice. At the infrastructure level, recent industry analysis points to hundreds of billions of dollars being spent on edge computing over the next few years [ref1] and several trillion dollars of cumulative investment in AI-driven compute capacity by 2030 [ref2]. For anyone building hardware, frameworks, or platforms for AI at the edge, understanding how developers fit into this picture is essential. Here, we use SlashData’s latest Developer Nation data to estimate the size and growth of professional developers integrating AI into edge devices, and where this work is concentrated. Find a deeper analysis in the full report.

11 million professional edge AI developers worldwide and growing

As of Q3 2025, we estimate that there are around 38.4 million professional developers worldwide. Of these, 29% (11 million) report building or integrating AI functionality on projects that target direct implementation into edge devices. We refer to this group as edge AI developers. Our data shows that edge AI is therefore already a substantial part of the AI developer ecosystem, rather than a niche reserved for early adopters.

There are currently around 38.4 million professional developers worldwide. Of these developers, 29% (11 million) report building or integrating AI functionality on projects that target direct implementation into edge devices. 
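The sizing above, and the scenario forecasts that follow, reduce to simple share-and-growth arithmetic, which is easy to sanity-check. A minimal sketch, using the figures quoted in this post; the helper names are ours for illustration, not part of any SlashData tooling:

```python
# Back-of-the-envelope arithmetic behind the sizing and forecast figures.
# All input numbers come from the post itself.

def segment_size(population_m: float, share: float) -> float:
    """Size of a developer segment (in millions) from a population and a share."""
    return population_m * share

def project(current_m: float, annual_growth: float) -> float:
    """Project a population one year out at a given annual growth rate."""
    return current_m * (1 + annual_growth)

edge_ai_now = segment_size(38.4, 0.29)   # ~11.1M edge AI developers today
conservative = project(11.0, 0.30)       # 14.3M by late Q3 2026
optimistic = project(11.0, 0.64)         # ~18.0M here; the post's 18.1M likely starts from an unrounded base

print(f"now: {edge_ai_now:.1f}M, conservative: {conservative:.1f}M, optimistic: {optimistic:.1f}M")
```

The small discrepancy in the optimistic figure illustrates why such projections should be read as rounded estimates rather than precise counts.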
We forecast that this population will see substantial annual growth, even under conservative assumptions. In our conservative scenario, the number of edge AI developers rises by 30% to 14.3 million by late Q3 2026. In the optimistic scenario, this figure reaches 18.1 million, a 64% increase. In both cases, the pool of developers integrating AI into edge devices is a moving target rather than a static market. As such, vendors should plan for a larger, more diverse edge AI audience in the near term. For technology leaders, there are three clear implications:

  • Treat edge AI as a strategic focus area with dedicated product planning and clear ownership, rather than as an add-on to cloud-only AI initiatives.

  • Act early to capture default status with developers by using the coming year of growth to position your products, APIs, and hardware platforms as the natural choice for teams starting or expanding edge AI work.

  • Track edge AI separately from broader AI efforts, so that usage, community engagement, and revenue for edge-specific offerings are visible in their own right and can inform investment decisions.

Regional hotspots for edge AI development

As of Q3 2025, edge AI activity is concentrated in three major hubs. North America and Western Europe account for 3.1 million and 2.9 million edge AI developers, respectively, while the Greater China area forms a third major centre at about 2.4 million. By Q3 2026, these regions are projected to grow by 30% to 60%, making them the highest-priority markets for advanced edge AI offerings, where both scale and absolute growth are strongest. The Middle East and Africa (MEA) and South Asia present notable but smaller markets, each with 800 thousand professional edge AI developers. However, we see major opportunities in our optimistic forecast, with both regions potentially reaching 1.4 million each by late Q3 2026. 
Vendors looking to grow in these two regions may benefit from lowering barriers to first deployments by offering accessible hardware options, opinionated tooling, and strong implementation support. South America presents a more extreme case, where the focus on edge AI development is significantly lower than in other regions. As such, penetrating this market may require a longer-term commitment, with particular emphasis on education, partnerships, and solutions that clearly demonstrate value under tighter resource constraints. At the same time, there is considerable interest and clear indications of increased activity over the next 12 months. This combination of low current penetration and rising intent points to significant headroom for growth for vendors prepared to invest early and build a presence over time.

Edge AI is already a mainstream developer activity with clear room to grow

Taken together, these findings show that edge AI is already a mainstream developer activity with clear room to grow, rather than an early-stage experiment. There are already 11 million professional developers working on AI functionality for edge devices worldwide, with expected annual growth between 30% and 64%. North America, Western Europe, and the Greater China area lead in both scale and growth, marking them as the three natural priority markets for edge AI offerings. Meanwhile, the Middle East & Africa, South Asia, and South America represent smaller markets with headroom for investment. Building tooling for edge AI? Access our full report, which breaks down device targets, integration patterns, and adoption barriers.

About the author

Nikita Solodkov, Principal Research Consultant at SlashData

Nikita Solodkov is a multidisciplinary researcher with a particular interest in using data-driven insights to solve real-world problems. He holds a PhD in Physics and has over five years of experience in data analytics and research design.

  • Decoding the cultural bias in your data

    Why a satisfaction score of 6 in Japan might be better than an 8 in China

If you’ve ever looked at the same satisfaction question broken down by country and thought, “Why are these numbers so wildly different?” – you’re not alone. In global research, interpreting data responsibly is one of the hardest parts. At SlashData, we run developer studies across regions year after year (including our rolling Developer Program Benchmarking (DPB), where we help vendors identify concrete improvement paths for their developer programs). One pattern shows up consistently: the way respondents use rating scales is deeply cultural. A satisfaction heatmap promises a unified view of performance. Executives can scan rows and columns for the green of success or the red of failure. Yet, once that data spans continents, it often conceals as much as it reveals. As we will see throughout this post, a growing body of research suggests that the standardised metric of customer satisfaction is often just a map of cultural biases. Without a cultural lens, your global heatmap isn't just a distorted mirror – it’s a dangerous map that can lead to strategic missteps, from the misallocation of resources to the unfair penalisation of high-performing regional teams. For simplicity, in this blog, we will assume a satisfaction scale from 0 to 10. Treat global satisfaction scores as directly comparable, and you can end up misallocating budget, fixing markets that aren’t broken, or missing early warning signals in markets that look healthy.

The "Optimists" vs the "Sceptics" (and the Japan paradox)

A quick glance at our DPB vendor satisfaction cuts often reveals a geographic divide. North American respondents are frequently at the high end. We also see a cluster of high-scoring Southeast and East Asian markets – especially the Philippines, Vietnam, Indonesia, and China. 
On the other hand, Japan consistently shows up as more conservative in its ratings, and we often see the same tougher grading in parts of Western Europe, including Germany and the Netherlands. This is not just a SlashData thing. Ipsos specifically flags that the Philippines, Indonesia, and Vietnam give high scores, while other Asian markets provide much lower scores, explicitly including Japan in the low-scoring set. And SurveyMonkey’s cross-country NPS study shows just how dramatic this can be on a simple 0-10 “recommend” scale: Japan is the lowest of the markets they studied, while the United States and Canada are much higher, and the Netherlands sits well behind many other countries. If taken at face value, this data would suggest that developer programmes are thriving in North America but failing to impress in Western Europe. But does this align with actual retention rates? Often, the answer is no. This disconnect is likely driven by a few powerful cultural forces, like optimism bias and polite agreement on the one hand, and scepticism on the other.

High satisfaction scores can coexist with low developer retention, especially across regions.

North America & Emerging Asia are “optimists” when it comes to surveys

Research indicates that in cultures prioritising social harmony, such as some Asian markets (often correlated with high collectivism), respondents are predisposed to be agreeable. For high-scoring Asian markets (like the Philippines, Vietnam, China, Indonesia), one driver is often what researchers call agreement and harmony effects: in some cultures, direct negative feedback is less comfortable, and respondents can lean toward more socially acceptable, relationship-preserving answers. Some markets systematically use the top end of the scale more than others – enough to reorder league tables without any real change in underlying experience. In high-agreement environments, a score of 8-9 can be closer to “fine” than “fanatically loyal”. 
The danger is overconfidence. In the US and Canada, high scores are often driven by Optimism Bias. Culturally, there is a tendency to view things in a positive light. “Good” is often rated as “Great”. As noted in global NPS studies by SurveyMonkey, American respondents consistently score higher than their European counterparts for the same service levels. Western Europe & Japan are the “sceptics” when it comes to surveys On the other side of the chart, we see Western Europe (e.g. Germany, the Netherlands) and Japan hovering at the bottom of the scale. Western Europe is home to the “Dutch effect”, or sober grading. In these cultures, hyperbole is viewed with suspicion. A score of 10 is reserved for perfection – a standard almost no B2B service achieves. Japan is the ultimate outlier in our data. Despite being geographically close to the high-scoring Asian nations, it often produces the lowest satisfaction scores in the world. Multiple other studies reveal this pattern: in Japan, service expectations are famously high. A minor friction point that an American consumer might forgive is often punished harshly by Japanese consumers in surveys. Importantly, lower scores don’t automatically mean unhappiness. It often means that top ratings are reserved for rare perfection, and the cultural norm is to score more conservatively. The cultural bias disconnect: confusing ratings with retention The biggest error a vendor can make is assuming a linear, universal relationship between these scores and churn. In harder-grading markets like Japan and some of Western Europe, a customer who rates you a 6 or 7 can still behave like a loyal one. If you treat every sceptic as a problem, you risk wasting resources fixing relationships that aren’t broken – or worse, annoying stable customers with constant nudges to “rate us a 10.” In high-scoring markets, the risk flips. If the baseline is inflated, then a drop from 9 to 8 might not be just noise – it can be your early warning signal. 
And if the culture discourages direct complaints, customers can look satisfied right up until they churn – your first visible signal can be cancellation, not feedback. Adding a culturally intelligent framework to your research To navigate this landscape, vendors and clients must move beyond raw comparisons and adopt a relative, culturally intelligent framework. Benchmark intranationally, not internationally Stop asking, “Why is our German market less satisfied than the Chinese market?” The correct question is, “How does our German score compare to the German benchmark?” If the regional average is a 6.5 and you score a 7.0, you are likely a market leader. Create country indices that normalise scores against the local average to reveal true performance. This is where tailored services like our Developer Program Benchmarking shine – we help you normalise scores against local averages. Adjust your internal thresholds In “stoic” markets like parts of Western Europe and Japan, a 7 or 8 can be a genuine win. In high-scoring markets like some East Asian countries, treat anything below the local norm as a potential signal – especially if it’s trending down. Use ranking over rating Where applicable, complement ratings with questions that force trade-offs. Ranking-style questions (“Which programme is best?”) are harder to game through polite scale use than “rate each one 0-10,” and they often reveal the competitive truth hiding beneath the friendliness. Conclusion A red Japan does not necessarily mean a failing program, nor does a green China guarantee a secure future. By interpreting survey data through a cultural lens – acknowledging the sceptics, the polite promoters, and the scale's structural biases – you get closer to the true voice of the customer. 
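As a rough illustration of the intranational benchmarking idea described earlier, a country index can be computed by normalising each market's score against its local benchmark. The sketch below is purely illustrative: the function name and all benchmark numbers are hypothetical, not SlashData data.

```python
# Hypothetical sketch: express each market's score relative to its own
# local benchmark, so 100 means "on par with the local norm".

def country_index(score: float, local_avg: float) -> float:
    """Normalise a satisfaction score against its country's local average."""
    return round(score / local_avg * 100, 1)

# Illustrative local benchmarks and observed scores (made-up numbers).
local_benchmarks = {"Germany": 6.5, "Japan": 6.0, "USA": 8.2}
your_scores = {"Germany": 7.0, "Japan": 6.3, "USA": 8.0}

for country, score in your_scores.items():
    idx = country_index(score, local_benchmarks[country])
    print(f"{country}: raw {score}, index {idx}")
```

On these made-up numbers, Germany's raw 7.0 sits above its local norm (index 107.7) while a US 8.0 sits slightly below its own (index 97.6) – the opposite of what a raw-score heatmap would suggest.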
The goal is not to standardise your customers, but to refine the analysis so your data predicts what users will actually do next. Are you worried that your retention strategy is based on skewed heatmaps? We can audit your satisfaction data to separate real performance issues from cultural noise. Contact us today! About the author Bleona Bicaj - Principal Research Consultant Bleona is a research consultant, enthusiastic about product strategy and behavioural science. She holds a Master’s in Economic and Consumer Psychology. With more than 6 years of professional experience as an analyst, she has worked across quantitative and qualitative research studies, turning complex data into clear narratives that inform better products, smarter investments, and long-term growth.

  • The Two Branches of DevOps Standardisation

Throughout the development world, we are seeing two competing approaches to DevOps maturity: developer empowerment and business focus. Both models aim to increase developer velocity, ship more secure code, and respond quickly to feedback and demands, but they take diametrically opposed approaches to doing so. In this article, we explore both approaches: where each excels, what challenges they create, and how they manifest in real development teams. Drawing on data from SlashData's 30th Developer Nation Survey (which reached more than 10,000 developers globally in summer 2025), we'll show how these philosophical differences translate into concrete security practice adoption patterns, and why organisations should choose based on their specific context rather than industry trends. Developer Empowerment, Autonomy and Visibility Those who follow the developer empowerment model focus on ensuring their developers are knowledgeable, informed, and have autonomy and visibility over their DevOps processes. Organisations adopting this approach typically value developer satisfaction and retention highly. This model bets that recognising experienced developers' desire to control their toolchains, and their resistance to imposed limitations, will create happier developers who are free to experiment. Organisations can provide guidance, approved vendor lists, or internal documentation, but ultimately they leave the decision to the ground-floor developers. The challenge here is consistency. While individual developers or teams may have high visibility into their processes and build deep familiarity with security practices, practices vary between teams, and that inconsistency can create blind spots in the organisation-wide security posture. 
Adding to this challenge, knowledge can become siloed within teams, with successful approaches not being shared with others. At its worst, developers who lack security experience can find their autonomy becoming a liability rather than an asset. However, while a decentralised approach to security risks gaps, it also allows developers to react very quickly to new vulnerabilities without waiting on a central platform team. In our data, this includes developers who are provided a curated list of tools for their selection and configuration (34% of professional developers). This group shows slightly higher adoption of IDE security checks (32%), pre-commit hooks (20%), and container scanning (28%) integrated into their CI/CD pipelines, as they are selecting the tools they interact with during development. Business Focus: Abstraction and Efficiency The other approach is business-focused, where the goal is to abstract away concerns about security, infrastructure, deployment, and other DevOps processes behind an internal developer platform (IDP) or a controlled list of tooling configured for them (27% of professional developers). This aims to allow developers to focus on addressing business needs and their core responsibilities, rather than having to consider wider aspects of the software development lifecycle. This approach emerges from different organisational priorities, including consistency at scale, meeting compliance requirements, or protecting specific business interests, even if it means constraining developers' choices. This becomes especially true for companies with hundreds or thousands of developers, where complete heterogeneity of tooling can create maintenance headaches. For organisations that want to prioritise developer time on product differentiation, or that need to onboard developers rapidly, a centralised process supports both goals. 
In practice, this can manifest as developers interacting with an IDP through abstracted interfaces. When a developer deploys to staging, they may not know whether this instruction triggers Kubernetes, ECS, or Cloud Run behind the scenes. Within this approach, security checks happen automatically in the pipeline; developers see the results but don’t necessarily configure the checks themselves. Among these developers, we see higher rates of SCA (29%), DAST (26%), and IAST (27%) practices built into CI/CD pipelines, because these happen behind the scenes and benefit from highly centralised platforms. However, despite the benefits to organisations and developers, these systems risk creating ‘black box’ problems. If developers don’t understand what is happening behind the abstraction, they can become less effective at debugging and develop a shallower understanding of security practices. Additionally, platform teams risk becoming bottlenecks, with every new tool or feature request consuming platform team time. This can leave developers unable to work, or push them towards shadow IT, compromising the goals of centralised security practices. The False Choice Neither approach is inherently better or worse than the other. Every few years, thought leaders emerge to declare that development teams should shift left or shift right as the ‘correct’ way to do development, or to unlock previously unimaginable benefits. However, the reality is that simply shifting doesn’t actually do anything; it is the processes, practices, and culture within organisations and development teams that have the largest impact, and centralising or decentralising are just mechanisms to achieve this. 
What matters instead is for organisations to consider the factors that motivate them and the capabilities they actually need: faster feedback loops, comprehensive security coverage, developer satisfaction, or operational reliability. Some of these benefit from centralisation, others from distribution, and organisations frequently blend aspects of both to meet their specific needs. What to consider when choosing a DevOps approach Rather than asking 'which approach is better?', organisations should ask 'what does our context demand?'. Consider:
Organisational size and growth trajectory: A 50-person startup might start with curated lists, knowing they'll need an IDP at 500 people
Team security maturity: Less experienced teams may need more guardrails; senior teams may resent them
Regulatory requirements: Financial services or healthcare often require centralised control and audit trails
Cultural values: Does your organisation optimise for innovation speed or operational consistency?
Platform team capacity: Building an IDP requires sustained investment – do you have the people and time?
Your choice isn't permanent. Many organisations start with developer autonomy and gradually centralise as they scale. Others go the opposite direction, decentralising after realising their IDP became a bottleneck. The key is being intentional about the trade-offs you're making and regularly reassessing whether your approach still serves your needs. Our team of analysts can help you decide on the best option, using concrete data to help your decision-making. Let’s talk and find the solution that works for you. About the author Liam Bollmann-Dodd Principal Market Research Consultant at SlashData Liam is a former experimental antimatter physicist, and he obtained a PhD in Physics while working at CERN. He is interested in the changing landscape of cloud development, cybersecurity, and the relationship between technological developments and their impact on society.

  • Happy New Year!

With AI taking centre stage these days, we thought we'd take a moment to step out from behind the algorithms. And say something simple: Thank you. Thank you for trusting the brains and hearts behind SlashData to help you make sense of the ever-expanding universe of AI and data. 🎇 From all of us, wishing you a joyful, curious, and very Happy New Year! 🥳 Happy New Year from Alex, Álvaro, Andreas, Berkol, Bleona, David, Evgenia, Jed, Liam, Maria, Máté, Mina, Natasa, Nikita, Petro, Sarah and Stathis! ❤️

  • How to harness AI Agents without breaking security

We are entering a new era in which AI doesn’t just generate content, it acts. AI agents, capable of perceiving their environment, making decisions, and taking autonomous actions, are beginning to operate across the enterprise. Unlike traditional Large Language Models (LLMs) that work within a confined prompt-response loop, agents can research information, call APIs, write and execute code, update records, orchestrate workflows, and even collaborate with other agents, all with little to no human supervision. The excitement and hype surrounding AI agents is understandable. When designed and implemented correctly, these agents can radically streamline operations, eliminate tedious manual tasks, accelerate service delivery, and redefine how teams collaborate. McKinsey predicts that agentic AI could unlock between $2.6 trillion and $4.4 trillion annually across more than sixty enterprise use cases. Yet, this enthusiasm masks a growing and uncomfortable truth. Enterprises leveraging agentic AI face a fundamental tension: the trade-off between utility and security. An agent can only deliver real value when it’s entrusted with meaningful control, but every additional degree of control carries its own risks. With agents capable of accessing sensitive systems and acting autonomously at machine speed, organisations risk creating a new form of insider threat (on steroids), and many are not remotely prepared for the security risks that agentic AI introduces. The vast majority of leaders with cybersecurity responsibilities (86%) reported at least one AI-related incident from January 2024 to January 2025, and fewer than half (45%) feel their company has the internal resources and expertise to conduct comprehensive AI security assessments. Rushing to deploy digital teammates into production before establishing meaningful security architecture has a predictable result. 
Gartner now forecasts that more than 40% of agentic AI projects will be cancelled by 2027, citing inadequate risk controls as a key reason. This blog post covers the risks that pose the greatest challenges for organisations building or adopting AI agents today and how to minimise them, enabling technical leaders and developers to make informed, responsible decisions around this technology. Harness the power of agentic AI with our analysts' help. Talk to an analyst here. The dark side of AI agents Rogue actions and the observability gap Traditional software behaves predictably. Given the same inputs, it produces the same outputs. Understanding results and debugging is therefore a matter of tracing logic, replicating conditions, and fixing the underlying error. However, agentic AI breaks this paradigm. Agents do not follow deterministic paths, meaning their behaviour isn’t always repeatable even with identical inputs, and complex, emergent behaviours can arise that weren’t explicitly programmed. Worse, most systems that agents interact with today lack any understanding of why an agent took a particular action. Traditional observability wasn’t designed to understand why a request happened, only that it did. This creates a profound observability gap, where organisations can’t understand or replay an agent’s decision sequence. A minor change in context, memory, or input phrasing can lead to an entirely different chain of tool calls and outputs. As a result, traditional debugging techniques collapse. When something goes wrong, teams are often left guessing whether the issue came from the underlying model, the agent design, an external dependency, a misconfigured tool, corrupted memory, or adversarial input. This problem is exacerbated by the degree of autonomy an agent has, as the longer an agent operates independently and the more steps it takes without human oversight, the larger the gap between intention and action can become. 
Without robust audit logs designed for agentic systems, organisations can’t reliably answer fundamental questions such as: What did the agent do? Why did it choose those actions? What data did it access? Which systems did it interact with? Could the behaviour repeat? Expanded attack surface and agents as a new insider threat When you give an AI agent the ability to act, particularly across internal systems, you effectively create a new privileged user inside your organisation. Too often, this user is granted broad, overly generous permissions, disregarding the principle of least privilege, a cornerstone of cybersecurity. Teams often grant generous permissions because restrictions seem to “block the agent from being helpful”. However, as highlighted earlier in this post, every added degree of autonomy or access carries its own risks. Your “highly efficient digital teammate” can very quickly become a potent insider threat. Granting agents broad access and permissions to internal documents, systems, repositories, or databases dramatically expands an organisation's attack surface, especially when these agents interact with external services. If an attacker succeeds in injecting malicious instructions through poisoned data, manipulated content, compromised memory, tampered tools, or adversarial prompts, the agent can unknowingly carry out harmful actions on the attacker’s behalf. It may leak sensitive information, modify records, escalate privileges, execute financial transactions, trigger unwanted workflows, or expose data to external systems. The danger compounds in multi-agent environments, where one agent’s compromised output can cascade into others, amplifying the impact of even small vulnerabilities. Agentic drift Agents operate in dynamic environments, learn, adapt, and evolve. Over time, this evolution can lead to agentic drift . An agent that performs well today might degrade tomorrow, producing less accurate or entirely incorrect results. 
Many factors can influence this, such as updates to underlying models, changes to inputs, changes to business context, system integrations, or agent memory. Because drift often emerges gradually, organisations may not notice until the consequences are significant, especially for agents interacting with external stakeholders (e.g. customer service agents) or operating in multi-agent workflows, where drift can cause cascading failures. Moreover, because AI agents are inherently goal-driven, a form of drift can emerge in which agents optimise for the metrics they can observe, rather than the ones humans intended. This leads to specification gaming, where agents find undesirable shortcuts that technically satisfy the objective while undermining policy, ethics, or safety. For example, an agent tasked to “reduce task completion time” may quietly eliminate necessary review steps; an agent configured to “increase customer satisfaction” might disclose information it shouldn’t; or a coding agent tasked to “fix errors” might make changes that violate security or compliance constraints. How to build agents safely The risks of agentic AI are significant, but the solution is not to avoid agents altogether. The value is too great, and the competitive pressure is too high. Instead, organisations must treat agentic AI as a new class of enterprise technology, requiring its own security model, governance structures, and operational rigour. As the saying goes, “a chain is only as strong as its weakest link”. Don’t introduce a weaker one. To position your organisation to harness the full potential of agentic AI safely, it’s essential to understand how to mitigate these risks. Establish a rigid command hierarchy. To ensure accountability, AI agents must operate under a clearly defined chain of command where human supervision is technically enforced. Every agent should have one or more designated controllers whose directives are distinguishable from other inputs. 
This distinction is crucial because agents process vast amounts of untrusted data (such as emails or web content) that can contain hidden instructions designed to hijack the system (prompt injection). Therefore, the security architecture must prioritise the controller’s voice and system prompts above all other noise. Furthermore, for high-stakes actions, such as deleting important datasets, sharing sensitive data, authorising financial transactions, or modifying security configurations, explicit human confirmation should always be required (“human-in-the-loop”). Enforce dynamic, context-aware limitations.  Security teams must move beyond broad, static permissions and instead enforce strict, purpose-driven limits on what agents can do. Agents’ capabilities must adapt dynamically to the specific context of the current workflow, extending the traditional principle of least privilege. For example, an agent tasked with doing online research should be technically blocked from deleting files or sharing data, regardless of its base privileges. To achieve this, organisations require robust authentication and authorisation systems designed specifically for AI agents, with secure, traceable credentials that allow administrators to review an agent’s scope and revoke permissions at any time. Ensure observability of reasoning and action.  Transparency is the only way to safely integrate autonomous agents into enterprise workflows. To ensure agents act safely, their operations must be fully visible and auditable. This requires implementing a logging architecture that captures more than just the final result. It must record the agent’s chain of thought, including the inputs received, reasoning steps, tools used, parameters passed, and outputs, enabling organisations to understand why an agent made a specific decision. 
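To make the last two principles concrete, here is a minimal Python sketch of what task-scoped tool permissions combined with an audit log could look like. Everything here is a hypothetical illustration, not any real framework's API: the names AgentAuditLog and run_tool, the tool registry, and the example tools are all assumptions.

```python
import time

class AgentAuditLog:
    """Records every tool call: inputs, the agent's stated reason, and output."""
    def __init__(self):
        self.entries = []

    def record(self, tool, params, reason, output):
        self.entries.append({
            "timestamp": time.time(),
            "tool": tool,
            "params": params,
            "reason": reason,              # the agent's stated reasoning step
            "output": repr(output)[:200],  # truncated for log hygiene
        })

def run_tool(tool_name, params, reason, *, task_scopes, registry, log):
    """Execute a tool only if it is within the current task's scope."""
    # Context-aware limitation: the tool must be allowed for *this* task,
    # regardless of the agent's base privileges (least privilege, per task).
    if tool_name not in task_scopes:
        log.record(tool_name, params, reason, "DENIED: outside task scope")
        raise PermissionError(f"{tool_name} not permitted for this task")
    output = registry[tool_name](**params)
    log.record(tool_name, params, reason, output)  # full decision trail
    return output

# Usage: a research task is scoped to web_search only.
log = AgentAuditLog()
registry = {
    "web_search": lambda query: f"results for {query}",
    "delete_file": lambda path: f"deleted {path}",
}

result = run_tool("web_search", {"query": "edge AI"}, "gather sources",
                  task_scopes={"web_search"}, registry=registry, log=log)

try:
    run_tool("delete_file", {"path": "report.txt"}, "clean up workspace",
             task_scopes={"web_search"}, registry=registry, log=log)
except PermissionError:
    pass  # the out-of-scope call is blocked, and the denial is still logged
```

Note that the denied call still produces an audit entry, so a reviewer can later answer "what did the agent try to do, and why?" even for actions that were blocked.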
Crucially, this data cannot remain buried in server logs; it should be displayed in an intuitive interface that allows controllers to inspect the agent's behaviour in real time. Organisations that fail to invest early in these foundations may find themselves facing a new generation of incidents: faster, more powerful, and more opaque than anything their current security posture was designed to handle. The next wave of innovation will not be driven by models that generate text, but by systems that take action. Is your organisation ready for what those actions entail? At SlashData, we can help you navigate the challenges of implementing and scaling agentic AI systems by providing data-backed evidence and insights on how developers successfully create agentic AI workflows, avoiding common pitfalls along the way. About the author Alvaro Ruiz Cubero, Market Research Analyst, SlashData Álvaro is a market research analyst with a background in strategy and operations consulting. He holds a Master’s in Business Management and believes in the power of data-driven decision-making. Álvaro is passionate about helping businesses tackle complex strategic business challenges and make strategic decisions that are backed by thorough research and analysis.

  • Agentic AI has moved from lab to production, ChatGPT and GitHub Copilot are the leaders, says AI analyst firm SlashData

Manchester, 3/11/2025 SlashData has released new findings revealing the real-world adoption of AI in late 2025. As early adopters and reliable predictors of technology trends, developers provide a window into where AI is heading next. Based on their responses, SlashData highlights three trends transforming the AI landscape: agentic AI goes mainstream, AI coding tool preferences, and Gen AI adoption blockers. AI coding tools: ChatGPT and Copilot dominate ChatGPT (64%) and GitHub Copilot (49%) lead in adoption and satisfaction among professional developers using AI coding tools. JetBrains AI shows low adoption and high satisfaction, signalling a growth opportunity. Adoption varies by experience: “Satisfaction with ChatGPT drops notably among experienced developers, as they appear less happy with its accuracy, scalability, and ease of use compared to newcomers” says Bleona Bicaj, Senior Market Research Analyst at SlashData Agentic AI goes live: half of adopters already in production 50% of professional developers adopting AI functionality have already deployed Agentic AI into production, marking the end of the experimental era. Text generation, summarisation or translation (28%) is the top use case for Agentic AI. AR/VR and IoT projects lead adoption. Reliability and security concerns might be slowing the adoption of agentic AI in backend systems. “Large enterprises’ governance complexity may be neutralising their resource advantages in agentic AI deployment” says Alvaro Ruiz Cubero, Market Research Analyst at SlashData Data privacy & security fears slow down AI rollout Organisations face two core hurdles: privacy risks that delay approval and quality concerns that undermine developer trust, as only 25% of professional developers are currently building applications powered by Generative AI. 
“Organisations must prioritise enterprise-level safeguards to prevent projects from stalling under compliance reviews.” urges Nikita Solodkov, Market Research and Statistics Consultant at SlashData Full analysis and 29 charts instantly available to all through the SlashData Research Space. The insights come from 12,000 developers surveyed in Q3 2025. The six State of Developer Nation reports cover AI, FinOps, Cloud and Language communities. About SlashData SlashData is an AI analyst firm. For 20 years, we have been working with top Tech brands like Google, Microsoft and Meta. We track software technology trends to empower industry leaders to make product and marketing investment decisions with clarity and confidence, and drive the world forward with technology.

  • From Hype to Data in Q4 2025: 6 developer signals on Agentic AI, Cloud, FinOps and language communities to break through the noise

You don’t need another hype post. No one does. What the Tech world needs are the clear signals developers are actually sending: where adoption is real (and measurable), where it stalls, and how to present this at a board level. Developer Signals, Not Vendor Noise The latest State of the Developer Nation (DN30) series from SlashData gives you that edge across: Agentic AI architectures being implemented The AI coding tools developers rely on The barriers to adopting Generative AI applications The current stage of Backend/Cloud Sizing the language communities FinOps in 2025 Responses from 12,000 developers are combined into 6 in-depth reports, filled with data and analyst commentary. The insights within, curated by our analysts, experts in their field, will help you make go/no-go decisions faster and with confidence. Think developer sentiment, adoption curves, regional differences, and tech maturity, not guesswork. Below is a quick, exec-ready tease of what’s inside each report and how to dig deeper. What’s New in AI, According to Developers AI coding tools: concentration + clear satisfaction leaders Only 20% of professional developers currently use AI-assisted coding tools, and usage is heavily concentrated in ChatGPT (~65% of AI-tool users) and GitHub Copilot (49%). Both also top satisfaction (CSAT 78 each), with JetBrains AI close behind on 76 despite only ~10% adoption – a classic high-satisfaction/low-awareness opportunity. Attribute-level scores explain why: ChatGPT leads on ease of use and setup; Copilot wins on integration and in-IDE workflow fit. Insights Source: Which AI coding tools do professional developers rely on? Agentic AI: single-agent now, multi-agent building blocks next Among developers who’ve implemented agentic AI in the past six months, 56% ship single-agent systems, while 44% use multi- or hybrid-agent designs. 
Text generation/summarisation/translation is the top use case (~28%), with multi-agent setups over-indexing on tasks like multimedia creation, web retrieval, and database querying — building blocks for orchestration.  Adoption varies by context: immersive (AR/VR/games) and IoT projects lead; backend and web services lag, where reliability/security constraints make autonomous agents a tougher sell.  Insights Source: The state of agentic AI adoption in software projects GenAI barriers: privacy first, then quality, skills and ROI 77% of developers not adding GenAI cite specific blockers. The top is data privacy/security (22%), with budget (16%), limited expertise (15%), output quality (14%), and integration complexity (13%) close behind.  As company size rises, privacy and compliance hurdles climb too.  Source: Barriers to adopting generative AI in applications Backend & Cloud: Hybrid Peaks Mid-Size; Private Cloud Scales with Risk Larger organisations are more likely to use private cloud, driven by security and compliance, while hybrid cloud adoption peaks in mid-sized companies and drops at the very large and very small.  Multi-vendor strategies remain the norm across sizes; smaller firms average 3.8 cloud providers vs. 3.3 for enterprises. Optimisation over consolidation.  Look at sector patterns: financial services lead on containers (40%) and orchestration (21%), while  AI model/service companies top MLaaS usage (29%).  One nuance worth watching: container usage dips at 501–1,000-employee “large businesses”. While we might generally expect container usage to increase as organisations grow and they have a greater need for the flexibility and scalability of containers, this low container adoption instead gives us insight into how platform teams are changing the developer experience and removing direct interaction with specific technologies. 
Insights Source: Benchmarking Backend and Cloud Technology Strategies FinOps: Wide Adoption, Clear Regional Spread Two in three developers say their teams practise FinOps (66%), with mid-sized organisations leading as cloud bills and complexity bite. Regionally, adoption is highest in the Greater China Area (88%) and strong in North America (73%), while South America trails at 22% – signalling big upside for early movers in emerging markets. Visibility (budget monitoring/reporting) is the common entry point. Insights source: State of FinOps in 2025 Programming Language Communities: Scale, Momentum, and Who Leads JavaScript remains the largest community (~26.9M) with Python (24.4M) now ahead of Java (23.1M). Over the last year, JavaScript usage dipped from 61% to 56% – maturity, not a collapse. Momentum stories: C++ adds 7.6M developers over two years, expanding across embedded, desktop, games, even web and ML. Ruby doubles to 4.9M in the same period. Experience curves matter: Python skews earlier-career; PHP and C# adoption rises with tenure – languages often “learned on the job” inside established stacks. Why this matters For CTOs & Heads of AI: De-risk platform bets. Align agentic AI architecture choices to today’s real use cases; prioritise privacy, evaluation pipelines, and governance to unblock GenAI adoption. For Product Managers, PMMs and DevRel: Position to developer reality. Back the tools and languages developers actually rate and use; target regions and segments where FinOps and cloud maturity shift the buying criteria. Next step: Talk to an analyst for a briefing and a go/no-go view for your roadmap or AI rollout. Or access all State of the Developer Nation insights if you want to drill into charts, regions, and cohorts yourself, in the SlashData Research Space: Which AI coding tools do professional developers rely on? 
The state of agentic AI adoption in software projects
Sizing programming language communities
State of FinOps in 2025
Benchmarking Backend and Cloud Technology Strategies
Barriers to adopting generative AI in applications

About the author
Stathis Georgakopoulos, Product Marketing Manager at SlashData
Stathis leads product marketing and loves building helpful content that turns complex research into practical decisions. He focuses on setting the table for launches and campaigns, and has a soft spot for content marketing.

  • Navigating AI Tech Trends with confidence and clarity

If you have been following SlashData for a while, you know that we not only track the latest technology trends, but are also early adopters ourselves. Now we are taking one more step forward.

SlashData has been tracking the developer ecosystem and economy for 20 years. We have been analysing the current state of software development, predicting software trends, and benchmarking industry leaders. All of this through expert analyst insights, backed by solid data. SlashData's reputation has been built on understanding developers and technology through research, including population sizing, tool adoption, and ecosystem trends.

In the age of AI, developers are the drivers of technology trends. Today, developers adopt technology first, followed by builders (aka vibe coders), and then by the rest of enterprise users and the world. The exponential evolution and adoption of AI tech has created enormous uncertainty in the world. We are here to provide confidence and clarity.

Our next step for SlashData is to become the trusted analyst firm specialising in AI technology, helping tech companies make the right decisions when adopting it. We focus on helping clients navigate AI technology decisions with confidence and clarity, through analyst guidance validated by data. This is more than a change in branding or service lines. It is a strategic shift, a recommitment with a sharpened focus. We are doubling down on delivering not only where AI tech is heading, but also why and how, always backed by rigorous, data-validated analysis. Andreas Konstantinou (SlashData founder) is returning to the role of CEO to lead us through this shift in strategy and to work with a technology he is deeply passionate about.

Why we are shifting our resources to serve AI tech right now

AI is the fastest-adopted technology on the planet, and also the most disruptive. It is the most profound change the industry and the world have ever experienced.
AI updates dominate the news, media, your timeline, Slack messages, and hallway conversations. Here are the core reasons we believe the world and businesses need us to take this step now:

Explosion of AI options & fragmentation. New models, platforms, tools, frameworks, and deployment choices are multiplying rapidly, and what works in one context doesn't work in another. Without an analyst lens, many organisations are overwhelmed by choices and risk making the wrong investments, losing credibility and opportunity in the race for AI adoption.

Gap between hype and reality. Vendors, online communities, influencers, media, and even some internal teams often overpromise on what AI can do. The real effects (performance, cost, security, ethics, maintainability, scalability) can diverge wildly. Organisations need grounded, evidence-based guidance to separate signal from noise.

High stakes in adoption. AI decisions are no longer just technical decisions. They affect strategy, operations, governance, risk, and customer trust. Poor choices can lead to compliance violations, security incidents, ethical lapses, or wasted budget. The "analyst + data" combination helps mitigate those risks: our analysts provide an expert outlook, backed by solid data points.

What SlashData can do for your AI needs

Strategic AI technology roadmaps. We'll help you choose which models, platforms, infrastructure, and partners make sense for your goals.
Vendor and product benchmarking. Compare capabilities, performance, costs, and trade-offs in real-world conditions.
Use-case validation. Before investing heavily, validate which AI use cases are likely to deliver the ROI you are aiming for.
Regular data-grounded trend & forecast reports. Not just "what is" but "what is coming," and what it means for you.

Building on our strengths

If you've worked with us, you know that we have been tracking developer trends for two decades.
We know that developer trends are the early signal of which AI platforms and technologies will win; developers sit at the heart of AI. Our proven track record of working with industry leaders is a testament to that. We have worked with teams that push the world forward at Google, Microsoft, CD Foundation, Cisco, Dell, DigitalOcean, Intel, Linux, Meta, Okta, Qualcomm, SAP, Sony, Stripe, and many more.

Additionally, our insights are:
Elevated and validated by data: developer population sizing, adoption curves, performance metrics, competitive benchmarking, and usage patterns all substantiate our analysis.
Accessible: clear language and insightful framing for both technical and non-technical stakeholders.
Trusted: we have been tracking developer trends and the technology landscape for 20 years, and we know how to track AI in a way that brings value and reduces friction.

Let's see how we can work together. Talk to our analysts.

  • Integrating AI into cloud infrastructure and processes is a key priority for one-third of cloud decision-makers in Europe and the US

Cloud computing continues to reshape how organisations manage infrastructure, data, and digital services. As adoption accelerates, data privacy, residency, and compliance have gained prominence alongside ongoing concerns around performance, cost, and security. The increasing complexity of regulatory requirements and the diversity of cloud deployment models underscore the need for organisations to balance innovation with risk management and operational efficiency.

The 2025 Cloud Landscape in Europe and the US

We worked together with UpCloud to research European and US organisations and examine current cloud service providers' (CSPs) adoption trends, key challenges, and future priorities across both regions. Together, we produced a report where we:
Analyse the landscape of cloud provider usage, distinguishing between organisations that rely solely on US-based providers, European providers, or a combination of both.
Explore where these organisations choose to store their data physically.
Examine the various cloud deployment models adopted by organisations of different sizes and the key factors they consider when choosing a cloud service provider.
Look at the challenges organisations encounter with their current cloud environments.
Analyse the motivations behind adopting or avoiding European CSPs.
Look to the future, outlining the main organisational priorities for cloud services and infrastructure and exploring how organisations are integrating AI-related practices into their operations.

This latter part is what we will present in this article, so keep reading. Access and dive into the full report here. You can find a short summary of our methodology at the end. UpCloud and SlashData will also discuss the findings in a webinar, which you can watch here.
Future cloud services and infrastructure priorities

Exploring organisational priorities around cloud services and infrastructure reveals that AI integration will be the main focus over the next two years. One in three organisations identifies it as one of their main priorities. The trend is far more pronounced in the US, where 40% call AI integration a priority, compared to only 28% in Europe, highlighting the US's stronger investments and more aggressive approach to AI integration. Additionally, those in management positions (such as CEOs, CTOs, or tech leads) are substantially more likely to identify AI integration as a key priority (36%) compared to others (27%), suggesting that leadership sees AI as a strategic lever for transformation, competitive advantage, and operational efficiency.

Integrating AI into cloud infrastructure and processes is a key priority for one-third of cloud decision-makers

Following closely are objectives to improve scalability (32%) and performance (30%). This highlights the need for infrastructure that can flexibly support growth and deliver consistently high performance, enabling organisations to remain agile and resilient in a context of rapid change.

How organisations are supporting AI workloads

To understand how organisations are supporting AI workloads and integrating AI into cloud infrastructure, we asked cloud decision-makers about the status of several key practices within their organisations. For each practice listed, around half of the organisations have already implemented it or are currently in the process of doing so. Training developers on cloud-based AI tools and infrastructure (56%) and adopting AI platforms and services from cloud vendors (55%) are the most widespread activities, highlighting the need both for upskilling teams and for leveraging specialised AI solutions to facilitate adoption.
More than half (51%) have also incorporated, or are in the process of incorporating, AI-specific security and compliance measures, further emphasising the concerns surrounding data protection and regulatory obligations. One thing is clear: few organisations are opting out. For all these activities, only a small minority (11–14%) report having no plan to implement them, indicating strong industry momentum. This widespread engagement signals that AI integration is no longer limited to early adopters or specific sectors; it is becoming a mainstream priority as organisations face increasing pressure to innovate, improve efficiency, and remain competitive.

Where does the data come from

The findings of this report are based on data collected from an online survey designed, hosted, and fielded by SlashData in May 2025. The survey reached 300 professionals in Europe (55%) and the US (45%) who are involved in the selection and purchase of cloud services in organisations with at least five employees.

About the author
Alvaro Ruiz Cubero, Market Research Analyst, SlashData
Álvaro is a market research analyst with a background in strategy and operations consulting. He holds a Master's in Business Management and believes in the power of data-driven decision-making. Álvaro is passionate about helping businesses tackle complex business challenges and make strategic decisions backed by thorough research and analysis.

  • How developers, sales and marketing professionals use Generative Artificial Intelligence in 2025 

    This is the transcript of our latest live session “Artificial Intelligence in Tech: usage, adoption and challenges in 2025” which you can watch in the following video. Intro & welcome  Moschoula Hi everybody, welcome back to SlashData's webinar series for 2025. For those who aren't familiar with us and are joining for the first time, SlashData is a market research firm active in the technology community for nearly 20 years. We serve the technology community, helping companies make data-backed, high-impact decisions with confidence. We help you understand your customers, your users, and your decision-makers, and understand how to do everything from product design to marketing strategies with data. We will continue this series throughout the year, so stay tuned and join our newsletter to get invited to the next ones. For housekeeping, before I hand off to our featured speakers, we will be open for questions. The live chat that you should see to the right of your screen is available, and we will be reading through that at the end of the presentation. We have two senior analysts here with us today: Bleona Bicaj and Alvaro Ruiz. They will address the most topical subject we are all dealing with and learning more about each day—AI and tech usage, adoption, and challenges. Without further ado, I'll hand it over to Alvaro. Alvaro Ruiz Hello everyone, and welcome again. I'm Alvaro from the research team at SlashData. Today, in this first half of the webinar, we will explore how developers are working with AI and integrating it into their applications. Here’s a quick overview of what we’ll cover. First, we’ll look at how developers are actually working with ML and AI—whether that’s using AI tools in development workflows, adding AI functionality to applications, or building AI models. 
Next, focusing on the second group—those adding AI functionality to applications—we’ll explore the types of models they’re using and do a deep dive on open and open-source models to understand why developers choose to use them and the challenges they face. Finally, we’ll look at the type of AI functionality developers are adding to their apps—generative versus non-generative—and how the proportion of developers adding GenAI functionality varies based on experience, region, and company size. Developers using AI tools in their workflows According to our data from the 27th edition of our Developer Nation survey, fielded in Q3 2024, about two-thirds of developers are already using AI tools in their development workflows. The most common use case is AI chatbots for coding questions, with 46% of developers doing this, followed by 32% using AI-assisted development tools like GitHub Copilot. Another 21% use AI to generate creative assets for their projects, such as 3D models. When it comes to adding AI functionality directly to applications, 21% of developers are doing so—15% through fully managed AI services or APIs, and 10% using self-managed or local models. Finally, 15% of developers are involved in creating AI models themselves—customizing with their own data, building and training models, or fine-tuning hyperparameters. That leaves only about a quarter of developers not yet working with AI, highlighting just how integrated AI has become in software development. For the rest of this presentation, we’ll focus on those adding AI functionality to their applications. In the next presentation, Bleona will share insights on those using AI tools in development workflows. How developers bring Artificial Intelligence into their applications Now let’s take a closer look at how developers are bringing AI into their applications. Here, we see the types of AI models developers use. 
Of the 21% of developers adding AI functionality, 66% indicate they use open or open-source AI models, which equates to around 6.3 million developers. As these are the most popular types of AI models, we’re going to explore developers’ experiences using them. It’s worth noting that while 58% within this 66% rely exclusively on open and open-source models, a substantial portion—42%—also use in-house or proprietary models. Use cases of developers adding AI models to their applications Now, moving on to use cases—modern AI models are opening up a world of possibilities. We asked developers what kinds of AI features they’re building using these models.  Here’s what we found: Text generation leads with 37% of developers using open or open-source AI for this. Right behind are conversational interfaces such as chatbots at 36% and text summarisation at 34%. This is no surprise, as natural language processing powers many of today’s most useful AI features—from creating content to streamlining customer support. But the story doesn’t end there. Developers are also using AI for predictive analytics (30%) and personalisation or recommendation systems (29%). Image generation is equally popular at 29%, reflecting demand for creative, visual AI tools. Many other functionalities also show substantial adoption, highlighting how AI is shaping the next generation of smart applications. Why developers use open source models Now let’s explore why developers use open or open-source models. Top reasons include ease of integration, customisation, flexibility, and belief in the open-source model—cited by 34% of developers. This shows that developers want models that fit into their workflows and can be adapted to their needs. Community support is another major factor, cited by 33%, tied closely to the open-source philosophy—developers can share knowledge, get help quickly, and contribute improvements. No licensing costs (26%) and transparency (25%) are also key. 
Developers gain visibility into how models work, which is critical for trust, compliance, and addressing ethical concerns. Other reasons, each cited by fewer than 25%, include algorithm suitability, alignment with organisational values, and avoiding vendor lock-in. However, using these models comes with challenges. According to our data, 86% of developers using open or open-source models face at least one challenge. Top among these is security and privacy, cited by 25%. Developers must ensure that AI models don’t compromise user privacy or create vulnerabilities. Finding the right model is another major issue (23%), especially for those adding conversational interfaces, where this rises to 29%. These use cases generate added complexity, as models may not meet the nuanced needs of conversational AI. Other challenges include ensuring accuracy (21%), lack of specialised support (19%), and difficulties with fine-tuning or customisation (19%). Many developers also cite limited training resources, knowledge gaps, or compatibility issues (18%). To complement this, we asked developers why they avoid open or open-source models. The top reasons closely match the challenges discussed earlier. However, 19% say they opt for managed services simply because they’re more convenient. And 25% avoid open or open-source models due to security and privacy concerns. While this doesn’t mean open-source models are inherently insecure, they may lack the guarantees offered by proprietary solutions. Now, for the last part of the presentation, let’s see what types of AI functionality developers are adding and profile those using GenAI. Types of AI functionality developers are adding and who the developers using GenAI are According to new Q1 2025 data, 25% of developers are now adding AI functionality to applications, up from 21% in Q3 2024. This shows rapid growth. 
Breaking this down, 20% of developers are adding generative AI, while 11% are adding non-generative AI for tasks like analysis, prediction, or classification. Looking at experience levels, developers with less than a year of experience are least likely to build GenAI-powered apps—only about 1 in 10 have done so. This makes sense, as newcomers are often focused on learning the basics. In contrast, developers with 6 to 10 years of experience lead the way at 26%, followed by those with 3 to 5 years at 23%. These mid-career professionals have built enough expertise to handle complex projects and are often tasked with experimenting with new technologies. Interestingly, adoption drops among developers with over 10 years of experience—only 17% are adding GenAI features. Many senior developers focus more on oversight, refining workflows, or mentoring. Regionally, North America leads with 27% of developers integrating GenAI, thanks to its strong tech ecosystem and funding environment. Eastern Europe and South America have the lowest rates, at 11% and 12%, respectively. Contributing factors include weaker infrastructure and economic barriers. Looking only at professional developers, company size also plays a role. Freelancers and those at very small companies are least likely to integrate GenAI—13% and 16%, respectively, likely due to limited resources. Mid-sized companies show the highest adoption at 29%, striking the right balance of resources and agility. At large enterprises, adoption drops to 24%, likely due to legacy systems, regulatory concerns, or segmented team responsibilities. So that's all for today. We'll take your questions during the Q&A session at the end of the webinar. And now I'll hand it over to Bleona to cover how developers—and non-developers—are using AI in their daily work.

The users of Artificial Intelligence in 2025

Bleona Bicaj
Thank you, Alvaro.
I'm Bleona, and I’m also part of the research team here at SlashData. Now that Alvaro has walked us through the builder side of AI-enabled apps, let’s switch to the people who use them. I’ll open with a snapshot of how developers are working with AI-assisted coding tools. These figures are not from our most recent data set, so think of them as a baseline—we’ll be collecting fresh numbers soon. According to our data, 32% of developers are already using tools like GitHub Copilot, DeepCode, or Source3. Looking at experience, those new to software development are least likely to use these tools—only 22% of those with under a year of experience. That’s not surprising, since beginners tend to be cautious about suggestions they can’t yet debug. But usage rises quickly. By the six-year mark, it reaches 37% as productivity starts to matter more than practice. It levels off and even dips for developers with 16+ years of experience—28%. These veterans may be more selective or focused on tasks like architecture or mentoring, which don’t benefit as much from code generation. One use case clearly dominates: code generation, reported by 55% of AI tool users. The more seasoned the developer, the stronger the uptake—75% of developers with 16+ years rely on AI to generate code, compared to 37% of those with less than a year. When asked for their top three reasons to adopt AI tools, 51% mentioned increased productivity. That priority grows with seniority, as senior developers handle larger projects and more responsibility. Related reasons—like automating repetitive or time-consuming tasks—follow the same pattern, resonating most with experienced professionals. As I said, this is just a snapshot. We’re collecting new data and will share updates soon. How decision makers use Generative Artificial Intelligence Now, shifting from engineers to decision-makers—earlier this year, in January, we interviewed 10 leaders in large tech companies (five in the U.S. 
and five in Europe), all heading marketing or sales teams. We focused on these functions to explore how GenAI is reshaping non-technical work. For marketing and sales, GenAI’s promise is clear: it can amplify human effort and streamline operations. Over the past few years, these teams have used GenAI for content creation, customer support, and lead generation. But they’re also learning its limitations. In our interviews, we asked:  Why did you introduce GenAI? What tasks does it handle today? What benefits or risks are you seeing?  Their responses gave us a well-rounded picture of GenAI’s impact. Interest in GenAI spiked as soon as the wider industry started talking about its potential. Early adopters launched pilot projects 2–3 years ago to streamline workflows, deepen engagement, and extract better insights from data. As one sales strategy manager put it:  “GenAI became a topic in our company ever since OpenAI came into existence and the world started talking about it.” However, other firms moved more recently, potentially encouraged by a new generation of easier and more capable tools. Whatever the timing, we can see a very clear pattern. What began as a small-scale experiment has now shifted to the strategic core of many organisations. And for most of these organisations, Gen AI is no longer just an optional R&D project—it is a priority for staying competitive in this rapidly evolving digital market. Across every interview, three motives came up again and again for using Gen AI: greater efficiency, relief from repetitive work, and sharper decision-making. Sales teams turned to AI for lead qualification, customer segmentation, and personalised outreach—tasks that once took hours but now convert faster with far less manual effort. How marketing teams use generative AI  Marketing teams use Gen AI to generate blogs, social posts, and email campaigns at scale, while also keeping tone and quality consistent, which is very important for marketing firms in particular. 
Most organisations began with small, low-risk pilots, trialing tools like ChatGPT, Gemini, or Copilot before committing at an enterprise level. As one sales enablement manager told us: “One or two years ago, we started playing around with co-pilots to author materials both internally and externally within a very small group. Based on our input, we decided to implement a pilot throughout the company.” This start-small-and-then-scale approach was pretty common—launching Gen AI in one team, learning fast, and only then extending it across the organisation. When we asked where Gen AI shows up in day-to-day work, three main buckets emerged. We have certain use cases for sales, others for marketing, and some that span both. How sales teams use generative AI  Sales teams are using Gen AI to zero in on high-potential leads, automate initial outreach, and tailor follow-up messages. It also helps crunch historical data for sharper forecasting and takes care of routine tasks like drafting sales briefs or updating the CRM. That way, sales representatives can spend more time building relationships and closing deals. Marketing tells a similar story. Gen AI drafts blog posts, social copy, and even edits images in minutes while keeping brand tone intact. Marketers feed Gen AI campaign data to fine-tune and personalise their messages, and they rely on it for quick-turn visuals such as infographics or short videos. Some tasks, however, cut across both functions. AI tools record and summarise client calls, draft email replies, and generally serve as an idea sparring partner during planning sessions. By offloading these repetitive jobs, sales and marketing teams alike can redirect their time toward higher-value strategies, creativity, and high-level conversations. Gen AI is proving to be a genuine workflow changer. Across all of our interviews, sales and marketing leaders highlighted four recurring payoffs: speed, personalisation, cost control, and sharper decisions. 
Generative AI: time-saving, personalising and cost-saving Starting with time saved—AI tools summarise reports, draft sales briefs, and generally clear away low-value admin work in minutes rather than hours. One sales manager put it this way: “Something that used to take two hours now takes 20 minutes.” Yes, there are accuracy issues, but for many tasks, AI dramatically improves efficiency. The result is more calendar space for strategy, creativity, and client conversations. Then there’s hyper-personalised content. By crunching customer data on the fly, Gen AI tailors ads, emails, and pitch decks to smaller segments—but at volume. A marketing manager said, “With more and more use, AI is starting to learn the tone of our brand and how we communicate to our audience. Now I barely need to tweak anything.” Sales teams see the same benefit: targeted messages that land better and convert faster. Next, we have cost savings. A major upside of Gen AI is straightforward savings. Marketing leaders told us they are trimming agency fees, especially around media planning and creative production, because AI now builds assets and places ads in real time. One head of marketing said, “We’ve cut down on agency costs significantly because AI allows us to automate creative production and ad placements in real time.” Sales teams see a similar impact—AI automates lead generation, keeps the CRM updated, and sharpens forecasts, reducing manual effort and freeing budget for higher-value activities. Better, faster decision-making—Gen AI not only automates but also improves the quality of choices. AI-driven analytics pull insights from live data rather than just last quarter’s reports, so strategy adjusts in real time. Automated transcripts and summary notes capture meetings, customer calls, and performance reviews verbatim. AI removes corporate amnesia “AI removes corporate amnesia,” one head of marketing told us. 
“It records exactly what was said, reducing confusion and ensuring clarity in decision-making.” By reducing human error and preserving a reliable record, Gen AI supports compliance and provides a solid data-driven foundation for next steps. Adoption is no longer the question—scaling is. Most companies we spoke with have Gen AI running in at least one part of the business. Yet rolling it out to new use cases is proving tricky, and the obstacles after the pilot stage tend to be the same. Trust sits at the top of that list. Gen AI still hallucinates—producing confident-sounding but incorrect answers. In data-driven roles where accuracy is key, employees hesitate to act on or even vet AI output. A director of sales operations said, “AI generates outputs that sound highly convincing but aren’t always factually correct. The challenge is that someone might trust this information without verifying it.” Security and privacy are also major concerns. Many firms handle sensitive or proprietary data, and sending that to external AI services raises the risk of leaks, third-party access, and compliance breaches. As a result, some limit AI use to low-risk tasks, while others are building in-house models or imposing strict governance frameworks. A sales enablement manager noted, “We handle a lot of sensitive data. We can’t afford to upload proprietary information into an external AI system without knowing how the data is secured.” A third roadblock is know-how. Some employees experiment readily with Gen AI, while others hesitate due to lack of trust or uncertainty about value. Without formal upskilling, adoption stalls—people don’t feel confident using AI in daily work. Companies that invested early in structured training saw faster uptake and a smoother transition. A marketing expert observed, “There is still a large portion not adopting or unwilling to adopt. It’s more about lack of education and confidence. 
Training needs to happen across the organization." Even with skills in place, standardization is another hurdle. In many firms, one team races ahead with AI while another sticks to manual workflows. The lack of clear company-wide guidelines—when to use AI, how to validate output, who signs off—keeps adoption uneven and dilutes the impact. An advanced analytics manager said, "Some staff are pushing these tools, while others don't care. It's not affecting our roles much yet, but when AI catches up, we'll need to rethink training." There is also a clear pattern: more senior or experienced staff are usually more hesitant to adopt AI.

The future of Generative AI for sales and marketing professionals

Looking ahead, most leaders see Gen AI as an augmenter—not a replacement. Its value lies in speeding up routine work, lightening admin loads, and sharpening decisions, while humans supply context, judgment, and ethics. Adoption will expand across business functions, but human oversight will remain essential. Customer-facing roles, especially in sales, illustrate this well. AI can qualify leads and send automated follow-ups, but complex negotiations and relationship building still need human intuition. In other words, AI handles high-volume, low-value touchpoints—humans own the moments that matter. A global alliance lead from a sales team said, "I could see a world where maybe smaller deals have an AI rep selling to small clients, but for now, sales reps are still necessary." According to them, we haven't reached the point of replacement. Accuracy remains a top priority. Teams are refining models with better training data and validation loops. This "human-in-the-loop" approach builds trust and tamps down hallucinations. Beyond the tech, companies need to re-engineer workflows and invest in upskilling so people and AI can work side by side. Organizations that invest in training and thoughtful integration will capture the biggest gains.
Those that don’t risk stalled adoption and employee pushback. In short, Gen AI’s future is as a productivity multiplier and strategic ally—if businesses strike the right balance between automation and human strengths like creativity, critical thinking, and relationship management. A head of marketing said, “AI isn’t going away—it’s only becoming more embedded in how we work. The key is using it responsibly and strategically.” The goal is to work alongside AI—not let it replace us.

Key takeaways from professionals in sales and marketing using Generative AI

Now let me wrap up with five key takeaways:

From trial to strategy – Two years ago, Gen AI was an experiment. Today, it’s on the board agenda. The shift from pilot to priority happened faster than any tech we’ve tracked.
Sales and marketing see the fastest wins – Leads qualified, campaigns drafted—often six times faster. Agency spend drops as creative moves in-house.
Trust and security still matter – Every AI output still needs a human eye. Many firms prefer private models to keep data locked down.
Skills gap is the choke point – Tech only scales as fast as people’s skills. Without structured training, adoption stalls.
The future equals augmentation – Gen AI takes the high-volume, low-judgment tasks, but humans stay accountable for complex decisions.

These are the headline lessons from the report. Ready for your questions.

Q&A from the audience

Moschoula: Thank you so much, Bleona. Thank you, Alvaro, as well. Here’s a question for Alvaro: Do you think that finding the right model for the job will remain a barrier to AI adoption, or will it decrease over time?

Alvaro Ruiz: Good question—it could go either way. The open-source AI ecosystem is expanding rapidly. There are now hundreds of models, architectures, and fine-tuned variants, making it hard for developers—especially less experienced ones—to identify the best fit. If growth continues like this, it might get harder. But as the ecosystem matures, we may see more user-friendly platforms and automated model selection tools, making it easier. So, I think the proportion of developers citing this as a barrier will decrease, but it won’t vanish, especially for niche domains or junior developers.

Moschoula: Thank you, Alvaro. Now a question for Bleona. If AI boosts productivity by six times, doesn’t that reduce the need for more staff?

Bleona Bicaj: That’s a fair point, and it came up in interviews too. But the six-fold boost mainly applies to the mechanical parts of a task—drafting boilerplate code, summarising meetings, first-draft copywriting. Instead of making roles redundant, it frees people to work on backlogs of higher-value tasks—shipping more features, localizing campaigns, strategic conversations. Since we must review AI output for accuracy and bias, the reclaimed time is repurposed, not cut. AI removes busywork, not brain work.

Moschoula: Yes, I’ve seen that too. One more question for Alvaro: Are developers specialising in just one or two use cases, or are they integrating multiple functionalities?

Alvaro Ruiz: According to our data, developers are using, on average, 3.8 out of 14 AI functionalities. So yes, most are working with multiple use cases.

Moschoula: Last question for Bleona—what training formats are delivering the fastest results?

Bleona Bicaj: Companies use several formats, but the most effective is a blended approach: short self-paced modules for the basics, followed by live workshops with real tasks, then reinforced through monthly micro-sessions and collaboration with co-workers. The last part—peer support—proved especially helpful. Companies that relied only on written guidelines or video courses saw slower adoption.

Moschoula: That’s really helpful. Thank you both, Alvaro and Bleona. We look forward to the next session—stay tuned for announcements in our newsletter. And let us know if there are topics you want us to cover.

Bleona Bicaj: Thank you. Bye.
Alvaro Ruiz: Thank you. Bye.

  • Inside Technology Trends: AI Chatbots & Network APIs

    What role are artificial intelligence and network APIs playing in shaping today’s tumultuous digital landscape? How much traction have AI chatbots gained in the last six months? What does the adoption of network APIs look like among developers? You can watch the webinar anytime.

    What You’ll Learn

    This exclusive webinar offers deep insights into key technology trends. Together we will look at the most recent data, provided by real developers, and dive into the world of AI chatbots and network APIs. Our expert speakers shed light on key findings, including:

    The Rise of AI Chatbots for Problem-Solving

    Which roles and industries are experiencing the fastest growth in AI chatbot usage?
    How has the overall adoption of AI chatbots changed over the past six months across different user groups?
    How does AI chatbot usage differ between professionals, hobbyists, and students?
    What regional trends are emerging in AI chatbot adoption?
    What specific challenges or needs are prompting beginners and non-technical users to increasingly turn to AI chatbots for problem-solving?

    Network APIs: The New Oil in the 5G Economy

    What percentage of developers use network APIs?
    Which regions have higher adoption rates of network APIs?
    In what types of projects are developers who use network APIs involved?
    What functionalities of network APIs do developers use?

    Join the Conversation

    Don’t miss this opportunity to stay ahead of emerging technology trends. Gain insights into AI chatbot growth, network API adoption, and more—straight from industry experts. Register now and be part of the discussion! Live Q&A: Ask our analysts your questions.
    Exclusive Reports

    Each topic addressed in this webinar is backed by a free-to-access, in-depth report: The Rise of AI Chatbots for Problem-Solving and Network APIs: The New Oil in the 5G Economy.

    Meet the Experts

    Our presenters bring a wealth of experience in market research and technology analysis:

    Álvaro Ruiz, Research Manager
    Álvaro is a market research analyst with a background in strategy and operations consulting. He holds a Master’s in Business Management and believes in the power of data-driven decision-making. Álvaro is passionate about helping businesses tackle complex business challenges and make strategic decisions backed by thorough research and analysis.

    Bleona Bicaj, Senior Market Research Analyst
    Bleona is a behavioral specialist, enthusiastic about data and behavioral science. She holds a Master’s degree from Leiden University in Economic and Consumer Psychology. She has more than 6 years of professional experience as an analyst in the data analysis and market research industry.

    Hosted by Moschoula Kramvousanou, SlashData CEO

    Where the data come from

    The insights presented in this webinar come from the global, independent State of Developer Nation survey and its 27th wave, which reached over 9,000 respondents worldwide. As one of the most comprehensive independent studies of developers across mobile, desktop, industrial IoT, consumer electronics, cloud, game development, AR/VR, and machine learning, this survey, and the expert analysis that follows it, provides essential perspectives on the evolving tech landscape.

    Reducing bias

    To eliminate the effect of regional sampling biases, we weighted the regional distribution across nine regions by a factor determined by the regional distribution and growth trends identified in our Developer Nation research.
    To minimise other important sampling biases across our outreach channels, we weighted the responses to derive a representative distribution of technologies used and developer segments. Using ensemble modelling methods, we derived a weighted distribution based on data from independent, representative channels, excluding the channels of our research partners to eliminate sampling bias due to respondents recruited via those channels. Each of the separate branches (industrial IoT, consumer electronics, third-party app ecosystems, cloud, embedded, and augmented and virtual reality) was weighted independently, and the results were then combined.
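The regional weighting step described above boils down to a standard post-stratification calculation: each response is re-weighted so the sample's regional mix matches an estimated target distribution. A minimal sketch, using illustrative region names and figures (not SlashData's actual data or code):

```python
def regional_weights(sample_counts, target_shares):
    """Return a per-response weight for each region so that the
    weighted sample matches the target regional distribution."""
    total = sum(sample_counts.values())
    weights = {}
    for region, count in sample_counts.items():
        observed_share = count / total          # share of raw responses
        weights[region] = target_shares[region] / observed_share
    return weights

# Hypothetical raw response counts and estimated population shares
sample = {"North America": 400, "Europe": 300, "Asia": 300}
target = {"North America": 0.25, "Europe": 0.30, "Asia": 0.45}

w = regional_weights(sample, target)
# Applying the weights makes the weighted regional shares equal the targets
weighted = {region: sample[region] * w[region] for region in sample}
```

With these numbers, North America is down-weighted (0.625) because it is over-represented in the raw sample, while Asia is up-weighted (1.5); the same idea extends to weighting by technology used or developer segment.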

  • Why NVIDIA dominates despite low developer program scores

    In the competitive landscape of technology vendors, developer programs are often seen as essential for building robust ecosystems. Our Developer Program Benchmarking research consistently reveals a puzzling phenomenon: NVIDIA’s developer program has scored lower than the average across all vendors we benchmark, in both engagement and satisfaction, for two consecutive years. Yet the company maintains strong leadership in market capitalisation, having recently hit a record high in shares. This paradox highlights a broader industry insight: dominance doesn’t always stem from developer program polish. Instead, it can come from a holistic ecosystem strategy. In this blog, we explore what has worked for NVIDIA and what other vendors, particularly silicon-focused players such as AMD, Intel, and Qualcomm, can learn from its model.

    The CUDA ecosystem

    NVIDIA’s most significant developer engagement lever is not its formal program, but the CUDA (Compute Unified Device Architecture) ecosystem. Launched in 2006, CUDA has become the gold standard for GPU programming in AI, HPC, and scientific computing. It’s a comprehensive ecosystem of libraries, including cuDNN for deep learning and cuBLAS for linear algebra, along with deep integrations with frameworks such as PyTorch and TensorFlow. This makes CUDA not only powerful but also incredibly sticky. Developers and researchers who build with it rarely look elsewhere, because switching means losing access to the world’s most mature and optimised GPU platform. What really sets CUDA apart is its network effect. It’s taught in universities, required in job postings, and baked into the workflows of thousands of startups and research labs. According to NVIDIA, over 4.5 million developers now use CUDA, up from 1.8 million in 2020. That’s a 150% increase in just a few years.
    That growth is self-reinforcing: more users mean better community support, more shared code, and more third-party tools, an ecosystem momentum few competitors have matched. However, this community-driven approach can also present strategic vulnerabilities. NVIDIA has limited control over the developer experience, support quality, or messaging within this ecosystem. Much of this knowledge transfer happens through informal channels or community groups, rather than optimised pathways. Silicon vendors like AMD and Intel, by comparison, have struggled to build similarly mature software ecosystems around their hardware offerings.

    University partnerships and training

    NVIDIA has strategically invested in academic partnerships that create a continuous pipeline of developers already familiar with its technology. Its partnership with the University of Florida is a prime example: a $70 million initiative that resulted in the HiPerGator 3 supercomputer, powered by NVIDIA DGX SuperPOD systems. Beyond infrastructure, this collaboration includes curriculum development and access to the latest GPU tools, embedding NVIDIA’s technology directly into teaching and research pipelines. This effort is mirrored in the NVIDIA Deep Learning Institute (DLI) University Ambassador Program. The program equips faculty with cloud-based GPU labs and ready-made teaching kits to deliver hands-on training in CUDA and AI. Rather than relying on documentation or forums, NVIDIA meets students where they are, inside classrooms, with real tools and real use cases. This early career intervention is one of NVIDIA’s most successful developer strategies, and one that bypasses traditional program metrics entirely. For other vendors, especially those with strong hardware portfolios but weaker developer engagement, replicating this academic integration could yield significant returns in loyalty and talent development.
    A key advantage for other vendors is the ability to combine this intervention strategy with a superior formal developer program, which accelerates developers’ success and advocacy once they enter the workforce.

    NVIDIA’s full-stack integration strategy

    Beyond chips and training, NVIDIA’s edge lies in owning the full AI stack, from hardware to software to networking. Unlike competitors who sell only silicon, NVIDIA delivers integrated systems, such as the DGX SuperPOD and AI Factory reference architectures, which combine GPUs, NVLink switches, SDKs like TensorRT, and orchestration tools like NVIDIA Run:AI. These aren’t just hardware bundles; they’re turnkey solutions that enterprises can drop into production environments with minimal configuration. This vertical integration creates seamless workflows and performance optimisations that generic silicon providers can’t easily match. Competitors like AMD and Intel largely remain focused on component-level sales, often relying on third-party or open-source tooling to complete the developer stack. The result is a fragmented experience that can frustrate developers and delay deployments. NVIDIA’s approach, by contrast, offers plug-and-play performance for production AI environments, which shortens time-to-value and raises switching costs. While the technology integration is seamless, the developer experience of learning, troubleshooting, and optimising can rely heavily on informal community support if developers are not actively involved in a partner university program. Competitors exploring full-stack integrations can leverage their more comprehensive developer programs to provide effective documentation, responsive support networks, and clear migration guides.
    Strategic implications for technology vendors

    NVIDIA’s success despite low developer program satisfaction scores highlights a fundamental industry lesson: true developer loyalty stems not from polished portals or responsive forums, but from building a cohesive and indispensable ecosystem. This includes proprietary SDKs, full-stack integration, academic partnerships, and hands-on training, all of which create long-term reliance and lower the barrier to entry for developers. Developer programs support developers and encourage long-term engagement, but ecosystems draw people in. For silicon vendors and technology leaders seeking to expand their developer base, this means rethinking developer engagement as a long-term ecosystem investment rather than a series of touchpoints. A well-supported, fully integrated platform, even if it’s not the most performant, can win developer mindshare by helping teams ship faster and with more confidence. For shareholders, the implication is clear: ecosystem depth is not just a differentiator, but a strategic advantage. Those who build ecosystems, not just programs, will define the next era of technological leadership.

    Do you know which drivers make your company successful and your audience happy? Let’s explore them together. Schedule a call with our experts.

    About the author

    Bleona Bicaj, Senior Market Research Analyst
    Bleona Bicaj is a behavioural specialist, enthusiastic about data and behavioural science. She holds a Master’s degree from Leiden University in Economic and Consumer Psychology. She has more than 7 years of professional experience as an analyst in the data analysis and market research industry.
