
Search Results

931 results found

Blog Posts (613)

  • The Two Branches of DevOps Standardisation

    Throughout the development world, we are seeing two competing approaches to DevOps maturity: developer empowerment and business focus. Both models aim to increase developer velocity, ship more secure code, and respond quickly to feedback and demands, but they take diametrically opposed approaches to doing so.

In this article, we explore both approaches: where each excels, what challenges they create, and how they manifest in real development teams. Drawing on data from SlashData's 30th Developer Nation Survey (which reached more than 10,000 developers globally in summer 2025), we'll show how these philosophical differences translate into concrete security practice adoption patterns, and why organisations should choose based on their specific context rather than industry trends.

Developer Empowerment: Autonomy and Visibility

Organisations that follow the developer empowerment model focus on ensuring their developers are knowledgeable, informed, and have autonomy and visibility over their DevOps processes. These organisations typically place a high value on developer satisfaction and retention. The model bets that recognising experienced developers' desire to control their toolchains, and their resistance to imposed limitations, will create happier developers who are free to experiment. The organisation can provide guidance, approved vendor lists, or internal documentation, but ultimately it leaves the decision to the ground-floor developers.

The challenge with this is consistency. While individual developers or teams may have high visibility into their own processes and build deep familiarity with security practices, practices vary between teams, and the lack of consistency can create blind spots in the organisation-wide security posture.
Adding to this challenge, knowledge can become siloed within teams, with successful approaches not being shared. At its worst, developers who lack security experience can find their autonomy becoming a liability rather than an asset. However, while a decentralised approach to security risks gaps, it also allows developers to react very quickly to new vulnerabilities without having to wait on a central platform team.

In our current examination, this includes developers who are provided a curated list of tools for their own selection and configuration (34% of professional developers). These developers show slightly higher adoption of IDE security checks (32%), pre-commit hooks (20%), and container scanning (28%) integrated into their CI/CD pipelines, as they are selecting the tools that they interact with during development.

Business Focus: Abstraction and Efficiency

The other approach is business-focused: the goal is to abstract away concerns about security, infrastructure, deployment, and other DevOps processes behind an internal developer platform (IDP) or a controlled list of tooling configured for developers (27% of professional developers). This allows developers to focus on addressing business needs and their core responsibilities, rather than having to consider wider aspects of the software development lifecycle. The approach emerges from different organisational priorities: consistency at scale, meeting compliance requirements, or protecting specific business interests, even if that means constraining developers' choices. This becomes especially true for companies with hundreds or thousands of developers, where complete heterogeneity of tooling can create maintenance headaches. For organisations that want to focus developer time on product differentiation, or that need to onboard developers rapidly, a centralised process supports both goals.
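To make the business-focused model concrete: a platform team configures the pipeline's security gate once, and developers only see pass or fail results. The sketch below is a minimal, hypothetical illustration in Python; the severity levels, threshold, and `Finding` structure are our own assumptions, not part of the survey data.

```python
from dataclasses import dataclass

# Severity levels a scanner (SCA, DAST, IAST, ...) might report,
# ordered from least to most severe. Illustrative only.
SEVERITIES = ["low", "medium", "high", "critical"]

@dataclass
class Finding:
    tool: str       # e.g. "sca", "dast" -- hypothetical tool names
    severity: str   # one of SEVERITIES
    detail: str

def gate(findings, threshold="high"):
    """Centralised pipeline gate: fail the build if any finding
    meets or exceeds the severity threshold set by the platform team."""
    limit = SEVERITIES.index(threshold)
    blocking = [f for f in findings if SEVERITIES.index(f.severity) >= limit]
    return len(blocking) == 0, blocking

# A developer's build surfaces results without the developer
# having configured any of the scanners themselves.
ok, blocking = gate([
    Finding("sca", "medium", "outdated transitive dependency"),
    Finding("dast", "critical", "reflected XSS on /search"),
])
print("build passed:", ok)                  # build passed: False
print("blocking findings:", len(blocking))  # blocking findings: 1
```

The key design point is that the threshold lives with the platform team, not in each repository, which is exactly the trade-off between consistency and developer control discussed here.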
In practice, this can manifest as developers interacting with an IDP through abstracted interfaces. When a developer deploys to staging, they may not know whether the instruction triggers Kubernetes, ECS, or Cloud Run behind the scenes. Within this approach, security checks happen automatically in the pipeline: developers see the results but don't necessarily configure the checks themselves. Among these developers, we see higher rates of software composition analysis (SCA, 29%), DAST (26%), and IAST (27%) built into CI/CD pipelines, because these practices run behind the scenes and benefit from highly centralised platforms.

However, despite the benefits to organisations and developers, these systems risk creating 'black box' problems. If developers don't understand what is happening behind the abstraction, they become less effective at debugging and develop a shallower understanding of security practices. Platform teams also risk becoming bottlenecks, with every new tool or feature request consuming platform team time. This can leave developers unable to work, or push them towards shadow IT, compromising the goals of centralising security practices.

The False Choice

Neither approach is inherently better or worse than the other. Every few years, thought leaders declare that development teams should shift left or shift right as the 'correct' way to do development, or to unlock previously unimaginable benefits. The reality is that simply shifting doesn't, by itself, do anything; it is the processes, practices, and culture within organisations and development teams that have the largest impact, and centralising or decentralising are just mechanisms to achieve it.
What matters is for organisations to consider which capabilities they actually need: faster feedback loops, comprehensive security coverage, developer satisfaction, or operational reliability. Some of these benefit from centralisation, others from distribution, and organisations frequently blend aspects of both to meet their specific needs.

What to consider when choosing a DevOps approach

Rather than asking 'which approach is better?', organisations should ask 'what does our context demand?'. Consider:

  • Organisational size and growth trajectory: a 50-person startup might start with curated lists, knowing they'll need an IDP at 500 people
  • Team security maturity: less experienced teams may need more guardrails; senior teams may resent them
  • Regulatory requirements: financial services or healthcare often require centralised control and audit trails
  • Cultural values: does your organisation optimise for innovation speed or operational consistency?
  • Platform team capacity: building an IDP requires sustained investment. Do you have the people and time?

Your choice isn't permanent. Many organisations start with developer autonomy and gradually centralise as they scale. Others go the opposite direction, decentralising after realising their IDP became a bottleneck. The key is being intentional about the trade-offs you're making and regularly reassessing whether your approach still serves your needs. Our team of analysts can help you decide on the best option, using concrete data to support your decision-making. Let's talk and find the solution that works for you.

About the author

Liam Bollmann-Dodd, Principal Market Research Consultant at SlashData

Liam is a former experimental antimatter physicist who obtained a PhD in Physics while working at CERN. He is interested in the changing landscape of cloud development, cybersecurity, and the relationship between technological developments and their impact on society.

  • Happy New Year!

    With AI taking centre stage these days, we thought we'd take a moment to step out from behind the algorithms. And say something simple: Thank you. Thank you for trusting the brains and hearts behind SlashData to help you make sense of the ever-expanding universe of AI and data. 🎇 From all of us, wishing you a joyful, curious, and very Happy New Year! 🥳 Happy New Year from Alex, Álvaro, Andreas, Berkol, Bleona, David, Evgenia, Jed, Liam, Maria, Máté, Mina, Natasa, Nikita, Petro, Sarah and Stathis! ❤️

  • How to harness AI Agents without breaking security

    We are entering a new era in which AI doesn’t just generate content, it acts. AI agents, capable of perceiving their environment, making decisions, and taking autonomous actions, are beginning to operate across the enterprise. Unlike traditional Large Language Models (LLMs) that work within a confined prompt-response loop, agents can research information, call APIs, write and execute code, update records, orchestrate workflows, and even collaborate with other agents, all with little to no human supervision. The excitement and hype surrounding AI agents is understandable. When designed and implemented correctly, these agents can radically streamline operations, eliminate tedious manual tasks, accelerate service delivery, and redefine how teams collaborate. McKinsey predicts that agentic AI could unlock between $2.6 trillion and $4.4 trillion annually across more than sixty enterprise use cases. Yet, this enthusiasm masks a growing and uncomfortable truth. Enterprises leveraging agentic AI face a fundamental tension: the trade-off between utility and security. An agent can only deliver real value when it’s entrusted with meaningful control, but every additional degree of control carries its own risks. With agents capable of accessing sensitive systems and acting autonomously at machine speed, organisations risk creating a new form of insider threat (on steroids), and many are not remotely prepared for the security risks that agentic AI introduces.

The vast majority of leaders with cybersecurity responsibilities (86%) reported at least one AI-related incident from January 2024 to January 2025, and fewer than half (45%) feel their company has the internal resources and expertise to conduct comprehensive AI security assessments. Rushing to deploy digital teammates into production before establishing meaningful security architecture has a predictable result.
Gartner now forecasts that more than 40% of agentic AI projects will be cancelled by 2027, citing inadequate risk controls as a key reason. This blog post covers the risks that pose the greatest challenges for organisations building or adopting AI agents today and how to minimise them, enabling technical leaders and developers to make informed, responsible decisions around this technology. Harness the power of agentic AI with our analysts' help. Talk to an analyst here.

The dark side of AI agents

Rogue actions and the observability gap

Traditional software behaves predictably. Given the same inputs, it produces the same outputs. Understanding results and debugging is therefore a matter of tracing logic, replicating conditions, and fixing the underlying error. However, agentic AI breaks this paradigm. Agents do not follow deterministic paths, meaning their behaviour isn’t always repeatable even with identical inputs, and complex, emergent behaviours can arise that weren’t explicitly programmed. Worse, most systems that agents interact with today lack any understanding of why an agent took a particular action. Traditional observability wasn’t designed to understand why a request happened, only that it did. This creates a profound observability gap, where organisations can’t understand or replay an agent’s decision sequence. A minor change in context, memory, or input phrasing can lead to an entirely different chain of tool calls and outputs. As a result, traditional debugging techniques collapse. When something goes wrong, teams are often left guessing whether the issue came from the underlying model, the agent design, an external dependency, a misconfigured tool, corrupted memory, or adversarial input. This problem is exacerbated by the degree of autonomy an agent has: the longer an agent operates independently and the more steps it takes without human oversight, the larger the gap between intention and action can become.
Without robust audit logs designed for agentic systems, organisations can’t reliably answer fundamental questions such as:

  • What did the agent do?
  • Why did it choose those actions?
  • What data did it access?
  • Which systems did it interact with?
  • Could the behaviour repeat?

Expanded attack surface and agents as a new insider threat

When you give an AI agent the ability to act, particularly across internal systems, you effectively create a new privileged user inside your organisation. Too often, this user is granted broad, overly generous permissions, disregarding the principle of least privilege, a cornerstone of cybersecurity. Teams often grant generous permissions because restrictions seem to “block the agent from being helpful”. However, as highlighted earlier in this post, every added degree of autonomy or access carries its own risks. Your “highly efficient digital teammate” can very quickly become a potent insider threat. Granting agents broad access and permissions to internal documents, systems, repositories, or databases dramatically expands an organisation's attack surface, especially when these agents interact with external services. If an attacker succeeds in injecting malicious instructions through poisoned data, manipulated content, compromised memory, tampered tools, or adversarial prompts, the agent can unknowingly carry out harmful actions on the attacker’s behalf. It may leak sensitive information, modify records, escalate privileges, execute financial transactions, trigger unwanted workflows, or expose data to external systems. The danger compounds in multi-agent environments, where one agent’s compromised output can cascade into others, amplifying the impact of even small vulnerabilities.

Agentic drift

Agents operate in dynamic environments; they learn, adapt, and evolve. Over time, this evolution can lead to agentic drift. An agent that performs well today might degrade tomorrow, producing less accurate or entirely incorrect results.
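A logging schema built for agentic systems can be designed around exactly these questions. The following Python sketch is a minimal illustration of one possible audit record; the field names and example values are our own assumptions, and a production schema would add tamper-evidence, retention policies, and access controls.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AgentActionRecord:
    """One auditable step taken by an agent, capturing not just
    what happened but the context needed to explain why."""
    agent_id: str
    action: str                                          # what did the agent do?
    reasoning: str                                       # why did it choose this action?
    data_accessed: list = field(default_factory=list)    # what data did it access?
    systems_touched: list = field(default_factory=list)  # which systems did it interact with?
    inputs_digest: str = ""                              # summary/hash of inputs, to study repeatability
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        # sort_keys makes records diffable across runs
        return json.dumps(asdict(self), sort_keys=True)

# Hypothetical example record for a customer-support agent.
record = AgentActionRecord(
    agent_id="support-agent-07",
    action="refund_issued",
    reasoning="customer reported duplicate charge; policy RF-12 matched",
    data_accessed=["orders/10432"],
    systems_touched=["billing-api"],
)
print(record.to_json())
```

Because every record pairs the action with the reasoning and the data touched, replaying or explaining an agent's decision sequence becomes a query over these records rather than guesswork.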
Many factors can influence this, such as updates to underlying models, changes to inputs, changes to business context, system integrations, or agent memory. Because drift often emerges gradually, organisations may not notice until the consequences are significant, especially for agents interacting with external stakeholders (e.g. customer service agents) or operating in multi-agent workflows, where drift can cause cascading failures. Moreover, because AI agents are inherently goal-driven, drift can take the form of agents optimising for the metrics they can observe, rather than the ones humans intended. This leads to specification gaming, where agents find undesirable shortcuts that technically satisfy the objective while undermining policy, ethics, or safety. For example, an agent tasked to “reduce task completion time” may quietly eliminate necessary review steps; an agent configured to “increase customer satisfaction” might disclose information it shouldn’t; or a coding agent tasked to “fix errors” might make changes that violate security or compliance constraints.

How to build agents safely

The risks of agentic AI are significant, but the solution is not to avoid agents altogether. The value is too great, and the competitive pressure is too high. Instead, organisations must treat agentic AI as a new class of enterprise technology, requiring its own security model, governance structures, and operational rigour. As the saying goes, “a chain is only as strong as its weakest link”. Don’t introduce a weaker one. To position your organisation to harness the full potential of agentic AI safely, it’s essential to understand how to mitigate these risks.

Establish a rigid command hierarchy. To ensure accountability, AI agents must operate under a clearly defined chain of command where human supervision is technically enforced. Every agent should have a designated controller (or controllers) whose directives are distinguishable from other inputs.
This distinction is crucial because agents process vast amounts of untrusted data (such as emails or web content) that can contain hidden instructions designed to hijack the system (prompt injection). Therefore, the security architecture must prioritise the controller’s voice and system prompts above all other noise. Furthermore, for high-stakes actions, such as deleting important datasets, sharing sensitive data, authorising financial transactions, or modifying security configurations, explicit human confirmation should always be required (“human-in-the-loop”).

Enforce dynamic, context-aware limitations. Security teams must move beyond broad, static permissions and instead enforce strict, purpose-driven limits on what agents can do. Agents’ capabilities must adapt dynamically to the specific context of the current workflow, extending the traditional principle of least privilege. For example, an agent tasked with doing online research should be technically blocked from deleting files or sharing data, regardless of its base privileges. To achieve this, organisations require robust authentication and authorisation systems designed specifically for AI agents, with secure, traceable credentials that allow administrators to review an agent’s scope and revoke permissions at any time.

Ensure observability of reasoning and action. Transparency is the only way to safely integrate autonomous agents into enterprise workflows. To ensure agents act safely, their operations must be fully visible and auditable. This requires implementing a logging architecture that captures more than just the final result. It must record the agent’s chain of thought, including the inputs received, reasoning steps, tools used, parameters passed, and outputs, enabling organisations to understand why an agent made a specific decision.
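The first two controls, context-scoped permissions and human confirmation for high-stakes actions, can be sketched together. This is a minimal, hypothetical illustration in Python: the task contexts, tool names, and the high-stakes set are our own assumptions, and a real system would enforce these checks outside the agent process, at the authorisation layer.

```python
# Tools each task context may use: least privilege, scoped per workflow.
# These context and tool names are illustrative, not from any real platform.
ALLOWED_TOOLS = {
    "online_research": {"web_search", "read_page"},
    "billing_support": {"read_order", "issue_refund"},
}

# Actions that always require explicit human confirmation.
HIGH_STAKES = {"issue_refund", "delete_dataset", "share_external"}

def authorise(context: str, tool: str, human_confirmed: bool = False) -> bool:
    """Allow a tool call only if it is in scope for the current task
    context, and (for high-stakes tools) a human has confirmed it."""
    if tool not in ALLOWED_TOOLS.get(context, set()):
        return False  # out of scope for this workflow, regardless of base privileges
    if tool in HIGH_STAKES and not human_confirmed:
        return False  # human-in-the-loop gate
    return True

# A research agent cannot delete datasets, whatever its base privileges...
print(authorise("online_research", "delete_dataset"))      # False
# ...and even an in-scope refund waits for human confirmation.
print(authorise("billing_support", "issue_refund"))        # False
print(authorise("billing_support", "issue_refund", True))  # True
```

The design choice worth noting is that the default is denial: a tool absent from the context's allowlist is refused, which keeps newly added tools out of reach until someone deliberately scopes them in.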
Crucially, this data cannot remain buried in server logs; it should be displayed in an intuitive interface that allows controllers to inspect the agent's behaviour in real time. Organisations that fail to invest early in these foundations may find themselves facing a new generation of incidents: faster, more powerful, and more opaque than anything their current security posture was designed to handle. The next wave of innovation will not be driven by models that generate text, but by systems that take action. Is your organisation ready for what those actions entail? At SlashData, we can help you navigate the challenges of implementing and scaling agentic AI systems by providing data-backed evidence and insights on how developers successfully create agentic AI workflows, avoiding common pitfalls along the way.

About the author

Álvaro Ruiz Cubero, Market Research Analyst, SlashData

Álvaro is a market research analyst with a background in strategy and operations consulting. He holds a Master’s in Business Management and believes in the power of data-driven decision-making. Álvaro is passionate about helping businesses tackle complex strategic business challenges and make strategic decisions that are backed by thorough research and analysis.

View All

Other Pages (318)

  • AI Coding Tools Benchmark | Competitive Technology Landscape Tech Market Research

    AI Coding Tools Benchmark: how AI coding tools compare in terms of key performance indicators. Access the Full Report

About this Report

Artificial Intelligence (AI) has now become a core infrastructure component of modern software engineering, reshaping everything from how code is written to how software teams deliver value. In this report, we benchmark the rapidly evolving landscape of AI coding assistants, agents, and AI-native IDEs, otherwise referred to as AI coding tools. By doing this, we provide a clear path to understanding not just which tools are leading, but how they are fundamentally changing how developers work. The goal is to offer buyers clarity on where AI delivers measurable impact, and to highlight for vendors where the strongest opportunities and gaps exist.

This report presents data from our inaugural AI Coding Tools Benchmark. The results were collected in Q4 2025 from a global panel of 837 professional developers who use AI coding tools. We benchmarked 16 of the most prominent AI coding tools, selected for their market impact and technological relevance. They are, in alphabetical order: Aider, Amazon Q Developer, Claude Code, Cline, Cursor, Firebase Studio, Gemini Code Assist, GitHub Copilot, GitLab Duo, JetBrains AI, Mistral Code, OpenAI Codex, Replit, Sourcegraph Amp, Tabnine, and Windsurf.

Key Questions Answered

  • Which AI coding tools are developers actually using, and how deeply are they integrated into day-to-day workflows?
  • How satisfied are developers with these tools, particularly on the coding tasks that matter most?
  • How much measurable productivity uplift do these tools deliver, in terms of PR throughput and weekly time saved?
  • How does developer trust translate into real behaviour: acceptance of AI-generated code versus ongoing manual review and oversight?
Methodology

In Q4 2025, SlashData designed and ran a global, online survey to study how professional developers who rely on AI technologies for their coding work engage with AI coding assistants, agents, and AI-native IDEs. We conducted the analysis presented in this report based on data collected from 800+ respondents across more than 20 countries worldwide.

Questions? Let's talk!

  • Competitive Technology Landscape | Tech Market Research | SlashData

    The Competitive Technology Landscape tracks the performance of competing technologies in awareness, adoption and developer satisfaction. Get developers to use your product: the Competitive Technology Landscape looks at your offering, your competition and the market, and helps you understand where you stand.

What it tracks

  • Awareness
  • Adoption
  • Developer satisfaction

It allows you to understand which developers prefer what you are offering, the reasons why, and how you can fine-tune it to invest in what matters most to developers. You can also track your offering’s awareness and benchmark it against the market. All these insights come directly from developers around the world.

What it answers

  • How many developers are aware of, and how many are using, your and your competitors' solutions?
  • Which solutions are developers adopting? Which are they abandoning?
  • For what reasons is each solution rejected and adopted?
  • Which tool aspects matter the most to developers? How do they score the solutions they use based on these attributes?
  • How do competing solutions compare in terms of developer satisfaction score and NPS?
  • What are the key weaknesses and strengths of each solution?
Explore our latest research

10 December 2025 AI Coding Tools Benchmark MORE 7 October 2025 AI-assisted coding tools Competitive Technology Landscape Report Q3 2025 MORE 20 March 2025 AI-assisted coding tools Competitive Technology Landscape Report Q1 2025 MORE 1 September 2024 Cloud-based development environments Market Landscape Report Q3 2024 MORE 1 July 2024 Payment APIs Market Landscape Report Q1 2024 MORE 1 May 2024 CI/CD tools Market Landscape Report Q1 2024 MORE 1 November 2023 Test Automation/Management Tools Market Landscape Report Q3 2023 MORE 1 June 2023 3rd Party Payment APIs Market Landscape Report Q1 2023 MORE 1 June 2023 Application Security Testing Market Landscape Report Q1 2023 MORE 1 March 2023 Application Performance Monitoring Market Landscape Report Q3 2022 MORE

Do you want to effectively talk to developers in a specific sector or understand their needs? LET'S TALK

  • Analyst Developer Insights | AI analysts and developer research | SlashData

    Developer Ecosystem Insights are a collection of reports focused on analysing the technology industry and showcasing trends in Web apps, Mobile apps, Desktop apps, Cloud / backend services, AR/VR, Games, IoT, ML/AI & Data Science, Embedded software, Apps/extensions for 3rd-party platforms, DevOps and more! Explore our latest research Mobile Desktop Web Cloud/Backend AR/VR Games IoT ML/AI & Data Science Embedded Software 3rd-party Platforms DevOps Mobile 22 May 2025 How and where to reach developers? MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 October 2024 Landscape of network APIs MORE 1 June 2024 Landscape of mobile games development MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 January 2024 Mobile developers population forecast MORE 1 December 2023 Which content types do developers value? MORE 1 December 2023 Measuring developer productivity MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 April 2023 How and where to reach mobile developers MORE 1 March 2023 Observability in software development MORE 1 March 2023 A spotlight on blockchain developers MORE 1 February 2023 Applications and shift-left security MORE 6 November 2025 Building on the blockchain in 2025 MORE 22 May 2025 How and where to reach developers? MORE 4 February 2025 Understanding Progressive Web App Developers in 2025 MORE 1 October 2024 Landscape of network APIs MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 December 2023 Which content types do developers value? 
MORE 1 December 2023 Measuring developer productivity MORE 1 November 2023 Landscape of Web3 development MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 March 2023 A spotlight on blockchain developers MORE 1 March 2023 Observability in software development MORE 1 February 2023 The landscape of web development and framework usage MORE 1 February 2023 Applications and shift-left security MORE Web 31 October 2025 Data Residency Compliance Challenges and Organisational Responsibility MORE 22 May 2025 How and where to reach developers? MORE 8 May 2025 Multicloud adoption experience MORE 1 October 2024 Landscape of network APIs MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 June 2024 Hardware architecture - Optimised coding among backend developers MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 May 2024 Journey to cloud-native maturity MORE 1 February 2024 Segmenting the backend developer population MORE 1 December 2023 Which content types do developers value? MORE 1 December 2023 Measuring developer productivity MORE 1 November 2023 Landscape of Web3 development MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 August 2023 Backend developer population forecast 2024 MORE 1 July 2023 Who's integrating sustainable software engineering principles? 
MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 July 2023 Challenges in multi-cloud deployment MORE 1 June 2023 How and where to reach cloud developers MORE 1 March 2023 Observability in software development MORE 1 March 2023 A spotlight on blockchain developers MORE 1 February 2023 Applications and shift-left security MORE 1 December 2022 Emerging practices in MLops / DataOps MORE 1 November 2022 Landscape of cloud-native development MORE Cloud/Backend 30 October 2025 The State of AR/VR Development 2025 MORE 22 May 2025 How and where to reach developers? MORE 1 November 2024 Which technologies are used in AR and VR projects? MORE 1 October 2024 Landscape of network APIs MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 January 2024 How AR/VR practitioners monetise their projects MORE 1 December 2023 Measuring developer productivity MORE 1 December 2023 Which content types do developers value? MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 August 2023 AR/VR developers & creators population forecast MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 March 2023 Observability in software development MORE 1 March 2023 A spotlight on blockchain developers MORE 1 February 2023 Applications and shift-left security MORE AR/VR 18 November 2025 The State of Game Development 2025 MORE 22 May 2025 How and where to reach developers? MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 October 2024 Landscape of network APIs MORE 1 June 2024 Game developer population forecast MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 December 2023 Measuring developer productivity MORE 1 December 2023 Which content types do developers value? 
MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 July 2023 Landscape of game developers MORE 1 March 2023 A spotlight on blockchain developers MORE 1 March 2023 Observability in software development MORE 1 February 2023 Applications and shift-left security MORE Games 29 October 2025 IIoT accessibility MORE 22 May 2025 How and where to reach developers? MORE 1 October 2024 Landscape of network APIs MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 July 2024 Living on the edge MORE 1 June 2024 IoT developer population forecast MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 December 2023 How and where to reach IoT developers MORE 1 December 2023 Which content types do developers value? MORE 1 December 2023 Networking in IoT applications MORE 1 December 2023 Measuring developer productivity MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 March 2023 A spotlight on blockchain developers MORE 1 March 2023 Wearable Device Developers and their Platform Choices MORE 1 March 2023 Observability in software development MORE 1 February 2023 Applications and shift-left security MORE IoT 20 November 2025 Agentic AI architectures: Adoption, use cases, protocols, and frameworks MORE 13 November 2025 Understanding the reluctance towards building generative AI applications MORE 22 May 2025 How and where to reach developers? 
MORE 29 April 2025 Benchmarking of fully-managed generative AI services/APIs MORE 30 January 2025 The state of machine learning and data science MORE 1 November 2024 Trust, risk, and security management in AI MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 October 2024 Landscape of network APIs MORE 1 May 2024 ML developer population forecast MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 February 2024 How and where to reach data scientists and ML/AI developers MORE 1 December 2023 Measuring developer productivity MORE 1 December 2023 Which content types do developers value? MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Concerns, challenges, and use of third-party APIs MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 May 2023 Types of data ML/AI devs work with MORE 1 March 2023 Observability in software development MORE 1 March 2023 A spotlight on blockchain developers MORE 1 February 2023 Applications and shift-left security MORE 1 December 2022 Emerging practices in MLops / DataOps MORE ML/AI & Data Science 22 May 2025 How and where to reach developers? MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 October 2024 Landscape of network APIs MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 December 2023 Measuring developer productivity MORE 1 December 2023 Which content types do developers value? MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Who's integrating sustainable software engineering principles? 
MORE 1 July 2023 Concerns, challenges, and use of third-part APIs MORE 1 May 2023 Embedded developers population forecast MORE 1 March 2023 Observability in software development MORE 1 March 2023 A spotlight on blockchain developers MORE 1 February 2023 Applications and shift-left security MORE Embedded Software 22 May 2025 How and where to reach developers? MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 October 2024 Landscape of network APIs MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 December 2023 Which content types do developers value? MORE 1 December 2023 Measuring developer productivity MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Who's integrating sustainable software engineering principles? MORE 1 July 2023 Concerns, challenges, and use of third-part APIs MORE 1 March 2023 Observability in software development MORE 1 March 2023 A spotlight on blockchain developers MORE 1 February 2023 Applications and shift-left security MORE 3rd-party Platforms 24 June 2025 Motivations behind various DevOps practices MORE 22 May 2025 How and where to reach developers? MORE 1 November 2024 Dependency Management in Software Development MORE 1 October 2024 Landscape of network APIs MORE 1 October 2024 Developers’ experience with integrating AI functionality MORE 1 June 2024 Software supply chain management in organisations MORE 1 May 2024 How developers use AI-assisted development tools MORE 1 December 2023 Measuring developer productivity MORE 1 December 2023 Which content types do developers value? MORE 1 October 2023 How developers integrate generative AI into their apps MORE 1 July 2023 Concerns, challenges, and use of third-part APIs MORE 1 July 2023 Who's integrating sustainable software engineering principles? 
Dive into the trends

Developer Ecosystem Insights is a collection of reports focused on analysing and sharing insights and trends across these development areas: Web apps, Mobile apps, Desktop apps, Cloud / backend services, AR/VR, Games, IoT, ML/AI & Data Science, Embedded software, Apps/extensions for 3rd-party platforms, DevOps, and more!

The insights

Developer Ecosystem Insights offers key insights on:
- Who and where developers are
- How to reach developers
- What motivates developers
- Developer communities
- How they are expected to evolve
What it answers

SlashData’s Developer Ecosystem Insights offers key insights on:
- Developer language usage and communities
- Where developers go for information
- What types of content developers prefer
- The landscape of a technology area (e.g. Blockchain)
- Challenges/friction points in a specific technology area (e.g. challenges in multi-cloud, or in using 3rd-party APIs)
- Population sizing

Do you want to effectively talk to developers in a specific sector or understand their needs? LET'S TALK
