Search Results
Blog Posts (611)
- How to harness AI Agents without breaking security
We are entering a new era in which AI doesn’t just generate content, it acts. AI agents, capable of perceiving their environment, making decisions, and taking autonomous actions, are beginning to operate across the enterprise. Unlike traditional Large Language Models (LLMs) that work within a confined prompt-response loop, agents can research information, call APIs, write and execute code, update records, orchestrate workflows, and even collaborate with other agents, all with little to no human supervision.

The excitement and hype surrounding AI agents are understandable. When designed and implemented correctly, these agents can radically streamline operations, eliminate tedious manual tasks, accelerate service delivery, and redefine how teams collaborate. McKinsey predicts that agentic AI could unlock between $2.6 trillion and $4.4 trillion annually across more than sixty enterprise use cases.

Yet this enthusiasm masks a growing and uncomfortable truth. Enterprises leveraging agentic AI face a fundamental tension: the trade-off between utility and security. An agent can only deliver real value when it’s entrusted with meaningful control, but every additional degree of control carries its own risks. With agents capable of accessing sensitive systems and acting autonomously at machine speed, organisations risk creating a new form of insider threat (on steroids), and many are not remotely prepared for the security risks that agentic AI introduces.

The vast majority of leaders with cybersecurity responsibilities (86%) reported at least one AI-related incident from January 2024 to January 2025, and fewer than half (45%) feel their company has the internal resources and expertise to conduct comprehensive AI security assessments. Rushing to deploy digital teammates into production before establishing meaningful security architecture has a predictable result: Gartner now forecasts that more than 40% of agentic AI projects will be cancelled by 2027, citing inadequate risk controls as a key reason.

This blog post covers the risks that pose the greatest challenges for organisations building or adopting AI agents today and how to minimise them, enabling technical leaders and developers to make informed, responsible decisions around this technology.

Harness the power of agentic AI with our analysts' help. Talk to an analyst here.

The dark side of AI agents

Rogue actions and the observability gap

Traditional software behaves predictably. Given the same inputs, it produces the same outputs. Understanding results and debugging is therefore a matter of tracing logic, replicating conditions, and fixing the underlying error. Agentic AI, however, breaks this paradigm. Agents do not follow deterministic paths, meaning their behaviour isn’t always repeatable even with identical inputs, and complex, emergent behaviours can arise that weren’t explicitly programmed. Worse, most systems that agents interact with today lack any understanding of why an agent took a particular action. Traditional observability wasn’t designed to understand why a request happened, only that it did. This creates a profound observability gap, where organisations can’t understand or replay an agent’s decision sequence. A minor change in context, memory, or input phrasing can lead to an entirely different chain of tool calls and outputs. As a result, traditional debugging techniques collapse.
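To make the gap concrete, below is a minimal Python sketch of the kind of step-level trace a conventional request log never captures. It assumes a hypothetical agent loop; the names (AgentStep, record_step) are illustrative, not any particular framework’s API.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class AgentStep:
    """One decision point in an agent run: what it saw, chose, and did."""
    run_id: str
    step: int
    observed_input: str   # the context the agent acted on
    chosen_action: str    # e.g. "call_tool:search_documents" or "final_answer"
    action_args: dict
    result_summary: str
    timestamp: float = field(default_factory=time.time)

def record_step(log_file, step: AgentStep) -> None:
    # Append-only JSONL, so a run's full decision sequence can be
    # reconstructed and audited later, unlike request-level logs.
    log_file.write(json.dumps(asdict(step)) + "\n")

# Hypothetical usage: two runs with identical inputs can still produce
# different step sequences; the trace is what lets you compare them.
with open("agent_trace.jsonl", "a") as f:
    record_step(f, AgentStep(
        run_id=str(uuid.uuid4()), step=0,
        observed_input="Summarise Q3 incident reports",
        chosen_action="call_tool:search_documents",
        action_args={"query": "Q3 incident reports"},
        result_summary="12 documents retrieved",
    ))
```

Persisting every decision point this way is what later makes questions like “why did the agent choose that action?” answerable at all.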
When something goes wrong, teams are often left guessing whether the issue came from the underlying model, the agent design, an external dependency, a misconfigured tool, corrupted memory, or adversarial input. This problem is exacerbated by the degree of autonomy an agent has: the longer an agent operates independently and the more steps it takes without human oversight, the larger the gap between intention and action can become. Without robust audit logs designed for agentic systems, organisations can’t reliably answer fundamental questions such as: What did the agent do? Why did it choose those actions? What data did it access? Which systems did it interact with? Could the behaviour repeat?

Expanded attack surface and agents as a new insider threat

When you give an AI agent the ability to act, particularly across internal systems, you effectively create a new privileged user inside your organisation. Too often, this user is granted broad, overly generous permissions, disregarding the principle of least privilege, a cornerstone of cybersecurity. Teams often grant generous permissions because restrictions seem to “block the agent from being helpful”. However, as highlighted earlier in this post, every added degree of autonomy or access carries its own risks. Your “highly efficient digital teammate” can very quickly become a potent insider threat.

Granting agents broad access and permissions to internal documents, systems, repositories, or databases dramatically expands an organisation's attack surface, especially when these agents interact with external services. If an attacker succeeds in injecting malicious instructions through poisoned data, manipulated content, compromised memory, tampered tools, or adversarial prompts, the agent can unknowingly carry out harmful actions on the attacker’s behalf. It may leak sensitive information, modify records, escalate privileges, execute financial transactions, trigger unwanted workflows, or expose data to external systems. The danger compounds in multi-agent environments, where one agent’s compromised output can cascade into others, amplifying the impact of even small vulnerabilities.

Agentic drift

Agents operate in dynamic environments; they learn, adapt, and evolve. Over time, this evolution can lead to agentic drift. An agent that performs well today might degrade tomorrow, producing less accurate or entirely incorrect results. Many factors can influence this, such as updates to underlying models, changes to inputs, changes to business context, system integrations, or agent memory. Because drift often emerges gradually, organisations may not notice until the consequences are significant, especially for agents interacting with external stakeholders (e.g. customer service agents) or operating in multi-agent workflows, where drift can cause cascading failures.

Moreover, because AI agents are inherently goal-driven, drift can take the form of agents optimising for the metrics they can observe rather than the ones humans intended. This leads to specification gaming, where agents find undesirable shortcuts that technically satisfy the objective while undermining policy, ethics, or safety. For example, an agent tasked to “reduce task completion time” may quietly eliminate necessary review steps; an agent configured to “increase customer satisfaction” might disclose information it shouldn’t; or a coding agent tasked to “fix errors” might make changes that violate security or compliance constraints.
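Drift of this kind is easier to catch when agents are re-scored continuously against a fixed, human-graded evaluation set rather than trusted indefinitely. Below is a minimal sketch of such a check; the evaluation harness, scores, and thresholds are assumptions for illustration, not a prescribed method.

```python
from statistics import mean

def check_for_drift(score_history: list[float], new_score: float,
                    window: int = 10, tolerance: float = 0.05) -> bool:
    """Flag when an agent's evaluation score falls below its recent baseline."""
    if len(score_history) < window:
        return False  # not enough history yet for a stable baseline
    baseline = mean(score_history[-window:])
    return new_score < baseline - tolerance

# Hypothetical usage: scores from a fixed evaluation set, re-run daily
# against the live agent configuration (model, tools, memory included).
history = [0.91, 0.90, 0.92, 0.91, 0.89, 0.90, 0.91, 0.92, 0.90, 0.91]
if check_for_drift(history, new_score=0.82):
    print("Agent drift detected: route to human review before next run")
```

Because drift can stem from a model update as easily as from memory corruption, the check deliberately scores the whole agent configuration rather than the model alone.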
How to build agents safely

The risks of agentic AI are significant, but the solution is not to avoid agents altogether. The value is too great, and the competitive pressure is too high. Instead, organisations must treat agentic AI as a new class of enterprise technology, requiring its own security model, governance structures, and operational rigour. As the saying goes, “a chain is only as strong as its weakest link”. Don’t introduce a weaker one. To position your organisation to harness the full potential of agentic AI safely, it’s essential to understand how to mitigate these risks.

Establish a rigid command hierarchy. To ensure accountability, AI agents must operate under a clearly defined chain of command where human supervision is technically enforced. Every agent should have one or more designated controllers whose directives are distinguishable from other inputs. This distinction is crucial because agents process vast amounts of untrusted data (such as emails or web content) that can contain hidden instructions designed to hijack the system (prompt injection). Therefore, the security architecture must prioritise the controller’s voice and system prompts above all other noise. Furthermore, for high-stakes actions, such as deleting important datasets, sharing sensitive data, authorising financial transactions, or modifying security configurations, explicit human confirmation should always be required (“human-in-the-loop”).

Enforce dynamic, context-aware limitations. Security teams must move beyond broad, static permissions and instead enforce strict, purpose-driven limits on what agents can do. Agents’ capabilities must adapt dynamically to the specific context of the current workflow, extending the traditional principle of least privilege. For example, an agent tasked with doing online research should be technically blocked from deleting files or sharing data, regardless of its base privileges. To achieve this, organisations require robust authentication and authorisation systems designed specifically for AI agents, with secure, traceable credentials that allow administrators to review an agent’s scope and revoke permissions at any time.

Ensure observability of reasoning and action. Transparency is the only way to safely integrate autonomous agents into enterprise workflows. To ensure agents act safely, their operations must be fully visible and auditable. This requires implementing a logging architecture that captures more than just the final result. It must record the agent’s chain of thought, including the inputs received, reasoning steps, tools used, parameters passed, and outputs, enabling organisations to understand why an agent made a specific decision. Crucially, this data cannot remain buried in server logs; it should be displayed in an intuitive interface that allows controllers to inspect the agent's behaviour in real time. (A minimal sketch combining these three controls appears at the end of this post.)

Organisations that fail to invest early in these foundations may find themselves facing a new generation of incidents: faster, more powerful, and more opaque than anything their current security posture was designed to handle. The next wave of innovation will not be driven by models that generate text, but by systems that take action. Is your organisation ready for what those actions entail?

At SlashData, we can help you navigate the challenges of implementing and scaling agentic AI systems by providing data-backed evidence and insights on how developers successfully create agentic AI workflows, avoiding common pitfalls along the way.
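To make the three controls above concrete, here is a minimal Python sketch of a tool-call gate that enforces workflow-scoped allowlists, asks a human controller to confirm high-stakes actions, and writes an audit record for every call. Every name in it (WORKFLOW_TOOLS, HIGH_STAKES, gated_call, and the injected confirm/execute callables) is hypothetical; treat it as a shape for these controls, not an implementation.

```python
import json
import time

# Workflow-scoped allowlists: least privilege tied to the task at hand,
# not to the agent's base credentials.
WORKFLOW_TOOLS = {
    "online_research": {"web_search", "read_document"},
    "records_update": {"read_document", "update_record"},
}

# Actions that always require explicit human confirmation.
HIGH_STAKES = {"delete_dataset", "share_external", "transfer_funds",
               "update_record"}

def gated_call(workflow: str, tool: str, args: dict,
               confirm, execute, audit_log) -> object:
    """Run a tool call only if it is in scope for the current workflow,
    human-confirmed when high-stakes, and recorded in an audit trail."""
    if tool not in WORKFLOW_TOOLS.get(workflow, set()):
        raise PermissionError(f"{tool!r} is out of scope for {workflow!r}")
    if tool in HIGH_STAKES and not confirm(tool, args):
        raise PermissionError(f"controller rejected high-stakes call {tool!r}")
    result = execute(tool, args)
    # Audit record per call: who (workflow), what (tool), with which args.
    audit_log.write(json.dumps({
        "ts": time.time(), "workflow": workflow, "tool": tool, "args": args,
    }) + "\n")
    return result
```

Under a gate like this, a research-workflow agent simply lacks the ability to call update_record, no matter what a prompt-injected instruction asks of it.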
About the author

Álvaro Ruiz Cubero, Market Research Analyst, SlashData

Álvaro is a market research analyst with a background in strategy and operations consulting. He holds a Master’s in Business Management and believes in the power of data-driven decision-making. Álvaro is passionate about helping businesses tackle complex business challenges and make strategic decisions that are backed by thorough research and analysis.
- Agentic AI has moved from lab to production; ChatGPT and GitHub Copilot are the leaders, says AI analyst firm SlashData
Manchester, 3 November 2025. SlashData has released new findings revealing the real-world adoption of AI in late 2025. As early adopters and reliable predictors of technology trends, developers provide a window into where AI is heading next. Based on their responses, SlashData highlights three trends transforming the AI landscape: agentic AI going mainstream, AI coding tool preferences, and GenAI adoption blockers.

AI coding tools: ChatGPT and Copilot dominate

ChatGPT (64%) and GitHub Copilot (49%) lead in adoption and satisfaction among professional developers using AI coding tools. JetBrains AI shows low adoption and high satisfaction, signalling a growth opportunity. Adoption varies by experience: “Satisfaction with ChatGPT drops notably among experienced developers, as they appear less happy with its accuracy, scalability, and ease of use compared to newcomers,” says Bleona Bicaj, Senior Market Research Analyst at SlashData.

Agentic AI goes live: half of adopters already in production

50% of professional developers adopting AI functionality have already deployed agentic AI into production, marking the end of the experimental era. Text generation, summarisation, or translation (28%) is the top use case for agentic AI. AR/VR and IoT projects lead adoption. Reliability and security concerns might be slowing the adoption of agentic AI in backend systems. “Large enterprises’ governance complexity may be neutralising their resource advantages in agentic AI deployment,” says Álvaro Ruiz Cubero, Market Research Analyst at SlashData.

Data privacy & security fears slow down AI rollout

Organisations face two core hurdles: privacy risks that delay approval and quality concerns that undermine developer trust, as only 25% of professional developers are currently building applications powered by generative AI. “Organisations must prioritise enterprise-level safeguards to prevent projects from stalling under compliance reviews,” urges Nikita Solodkov, Market Research and Statistics Consultant at SlashData.

The full analysis and 29 charts are instantly available to all through the SlashData Research Space. The insights come from 12,000 developers surveyed in Q3 2025. The six State of Developer Nation reports cover AI, FinOps, cloud, and language communities.

About SlashData

SlashData is an AI analyst firm. For 20 years, we have been working with top tech brands like Google, Microsoft, and Meta. We track software technology trends to empower industry leaders to make product and marketing investment decisions with clarity and confidence, and drive the world forward with technology.
- From Hype to Data in Q4 2025: 6 developer signals on Agentic AI, Cloud, FinOps and language communities to break through the noise
You don’t need another hype post. No one does. What the tech world needs are the clear signals developers are actually sending: where adoption is real (and measurable), where it stalls, and how to present this at board level.

Developer Signals, Not Vendor Noise

The latest State of the Developer Nation (DN30) series from SlashData gives you that edge across:
- Agentic AI architectures being implemented
- The AI coding tools developers rely on
- The barriers to adopting Generative AI applications
- The current state of Backend/Cloud
- Sizing the language communities
- FinOps in 2025

Responses from 12,000 developers are combined into 6 in-depth reports, filled with data and analyst commentary. The insights within, curated by our analysts, experts in their field, will help you make go/no-go decisions faster and with confidence. Think developer sentiment, adoption curves, regional differences, and tech maturity, not guesswork. Below is a quick, exec-ready tease of what’s inside each report and how to dig deeper.

What’s New in AI, According to Developers

AI coding tools: concentration + clear satisfaction leaders

Only 20% of professional developers currently use AI-assisted coding tools, and usage is heavily concentrated in ChatGPT (~65% of AI-tool users) and GitHub Copilot (49%). Both also top satisfaction (CSAT 78 each), with JetBrains AI close behind on 76 despite only ~10% adoption: a classic high-satisfaction/low-awareness opportunity. Attribute-level scores explain why: ChatGPT leads on ease of use and setup; Copilot wins on integration and in-IDE workflow fit.

Insights source: Which AI coding tools do professional developers rely on?

Agentic AI: single-agent now, multi-agent building blocks next

Among developers who’ve implemented agentic AI in the past six months, 56% ship single-agent systems, while 44% use multi- or hybrid-agent designs. Text generation/summarisation/translation is the top use case (~28%), with multi-agent setups over-indexing on tasks like multimedia creation, web retrieval, and database querying: building blocks for orchestration. Adoption varies by context: immersive (AR/VR/games) and IoT projects lead; backend and web services lag, where reliability/security constraints make autonomous agents a tougher sell.

Insights source: The state of agentic AI adoption in software projects

GenAI barriers: privacy first, then quality, skills and ROI

77% of developers not adding GenAI cite specific blockers. The top is data privacy/security (22%), with budget (16%), limited expertise (15%), output quality (14%), and integration complexity (13%) close behind. As company size rises, privacy and compliance hurdles climb too.

Insights source: Barriers to adopting generative AI in applications

Backend & Cloud: Hybrid Peaks Mid-Size; Private Cloud Scales with Risk

Larger organisations are more likely to use private cloud, driven by security and compliance, while hybrid cloud adoption peaks in mid-sized companies and drops at the very large and very small. Multi-vendor strategies remain the norm across sizes; smaller firms average 3.8 cloud providers vs. 3.3 for enterprises. Optimisation over consolidation. Look at sector patterns: financial services lead on containers (40%) and orchestration (21%), while AI model/service companies top MLaaS usage (29%). One nuance worth watching: container usage dips at 501–1,000-employee “large businesses”.
While we might generally expect container usage to increase as organisations grow and need the flexibility and scalability of containers, this low container adoption instead gives us insight into how platform teams are changing the developer experience and removing direct interaction with specific technologies.

Insights source: Benchmarking Backend and Cloud Technology Strategies

FinOps: Wide Adoption, Clear Regional Spread

Two in three developers say their teams practice FinOps (66%), with mid-sized organisations leading as cloud bills and complexity bite. Regionally, adoption is highest in the Greater China Area (88%) and strong in North America (73%), while South America trails at 22%, signalling big upside for early movers in emerging markets. Visibility (budget monitoring/reporting) is the common entry point.

Insights source: State of FinOps in 2025

Programming Language Communities: Scale, Momentum, and Who Leads

JavaScript remains the largest community (~26.9M), with Python (24.4M) now ahead of Java (23.1M). Over the last year, JavaScript usage dipped from 61% to 56%: maturity, not a collapse. Momentum stories: C++ adds 7.6M developers over two years, expanding across embedded, desktop, games, even web and ML. Ruby doubles to 4.9M in the same period. Experience curves matter: Python skews earlier-career; PHP and C# adoption rises with tenure; these are languages often “learned on the job” inside established stacks.

Insights source: Sizing programming language communities

Why this matters

For CTOs & Heads of AI: De-risk platform bets. Align agentic AI architecture choices to today’s real use cases; prioritise privacy, evaluation pipelines, and governance to unblock GenAI adoption.

For Product Managers, PMMs and DevRel: Position to developer reality. Back the tools and languages developers actually rate and use; target regions and segments where FinOps and cloud maturity shift the buying criteria.

Next step: Talk to an analyst for a briefing and a go/no-go view for your roadmap or AI rollout. Or access all State of the Developer Nation insights in the SlashData Research Space if you want to drill into charts, regions, and cohorts yourself:
- Which AI coding tools do professional developers rely on?
- The state of agentic AI adoption in software projects
- Sizing programming language communities
- State of FinOps in 2025
- Benchmarking Backend and Cloud Technology Strategies
- Barriers to adopting generative AI in applications

About the author

Stathis Georgakopoulos, Product Marketing Manager at SlashData

Stathis leads product marketing and loves building helpful content that turns complex research into practical decisions. He focuses on setting the table for launches and campaigns, and has a soft spot for content marketing.
Other Pages (317)
- AI & Developer Research Industry Reports | SlashData
Industry and technology market reports that are free to access and download, sharing key insights on trending technology and software development topics. Insights with instant access for your decision-making. Analyst insights powered by developers around the world. Each insights report dives into a key trending topic.

Explore our latest research
- 11 December 2025: AI Coding Tools Benchmark
- 20 November 2025: Developers in the age of AI
- 14 November 2025: Barriers to integrating generative AI in applications
- 14 November 2025: The state of agentic AI adoption in software projects
- 14 November 2025: Choosing the right AI coding tools for your team
- 11 November 2025: CNCF Technology Radar Q3 2025
- 10 November 2025: State of Cloud Native Development Q3 2025
- 21 October 2025: The State of FinOps in 2025
- 21 October 2025: Sizing programming language communities
- 21 October 2025: Benchmarking backend and cloud technology strategies
- 12 September 2025: 2025 Cloud Landscape in Europe and the US
- 7 May 2025: Usage of AI assistance between DORA performance groups
- 7 May 2025: Challenges organisations face in software development projects
- 6 May 2025: The developers behind generative AI applications
- 6 May 2025: Sizing programming language communities
- 6 May 2025: How and why developers engage with emerging technologies
- 6 May 2025: How technology practitioners use social media
- 16 April 2025: The state of cloud operations and management in 2025 and the impact of AI
- 4 April 2025: CNCF Technology Radar
- 10 March 2025: Generative AI for Business: Success, Challenges and the Future
- 11 February 2025: State of Development Environments
- 1 December 2024: Profiling of technology professionals working at startups
- 29 November 2024: CNCF Technology Landscape Radar
- 1 November 2024: The rise of AI-chatbots for problem-solving
- 1 November 2024: Network APIs: The new oil in the 5G economy
- 1 November 2024: Sizing programming language communities Q3 2024
- 1 November 2024: What developers think about their teams
- 1 November 2024: How developers build AI-enabled applications
- 1 May 2024: How and why developers engage with emerging technologies
- 1 May 2024: Threats in software supply chain management
- 1 May 2024: How happy are developers with their jobs?
- 1 May 2024: How developers interact with AI technologies
- 1 May 2024: Profiling of new ML/AI developers
- 1 May 2024: Sizing programming language communities Q1 2024
- 1 April 2024: State of Continuous Integration and Continuous Delivery Report 2024
- 13 March 2024: How Silicon Developers help developers build AI solutions
- 1 February 2024: Maturity of Software Supply Chain Security Practices 2024
- 1 November 2023: 25th edition - State of the Developer Nation
- 1 September 2023: Developer Perceptions of Distributed Cloud
- 1 September 2023: The State of WebAssembly 2023
- 1 July 2023: Designing for success
- 1 June 2023: 2023 state of data management solutions for digital natives
- 1 June 2023: The state of developer happiness
- 1 May 2023: 24th edition - State of the Developer Nation
- 1 May 2023: State of Continuous Delivery Report 2023
- 1 May 2023: Building and Developing on Salesforce Report 2023
- 1 February 2023: Securing the enterprise
- 1 November 2022: NGINX State of App and API Delivery Report
- 1 November 2022: Developers & Shift-left Security
- 1 October 2022: 23rd edition - State of the Developer Nation

Can’t find what you are looking for? Get in touch and we will be happy to help.

Case Studies

Explore real-life scenarios and how we helped our clients access key market information. We include the process, what SlashData brought to the table, and the results they achieved.
- AI Coding Tools Benchmark | Free Industry Reports

How OpenAI Codex, Claude Code, and Cursor compare in terms of adoption, engagement, satisfaction, and productivity gains among software developers.

This report is a lighter, free-to-access version of a much more comprehensive benchmark of AI coding tools, and focuses on three prominent products: OpenAI Codex, Claude Code, and Cursor. All three are actively used by professional developers and represent distinct design philosophies: Codex as a cloud-first coding agent, Claude Code as a conversational and context-rich partner, and Cursor as an AI-native IDE. The analysis is based on a global online survey designed and distributed by SlashData in Q4 2025, reaching 800+ professional developers who write code with the help of AI coding assistants, agents, or AI-native IDEs. Respondents reported which AI coding tools they use, how intensively they use them, how satisfied they are with them, and what productivity gains they see, both in terms of pull request (PR) throughput and time saved each week.
Key Questions Answered
- Which AI coding tools are most widely adopted by professional developers, and what does this reveal about market maturity and competitive positioning?
- How deeply are developers integrating each tool into their daily workflows?
- How satisfied are developers with each tool in general and across core tasks in particular (writing new code, modifying existing code, and debugging), and how does this influence long-term adoption?
- To what extent do AI coding tools improve measurable outcomes such as PR throughput and weekly time savings, and which tools deliver the strongest returns?
- How efficiently does each tool translate engagement into productivity?
- What does this mean for vendors aiming to improve their products and buyers seeking to maximise value from their engineering teams?

Methodology

In Q4 2025, SlashData designed and ran a global, online survey to study how professional developers who rely on AI technologies for their coding work engage with AI coding assistants, agents, and AI-native IDEs. The analysis presented in this report is based on data collected from 800+ respondents across more than 20 countries worldwide. All SlashData surveys are monitored and cleaned to ensure the highest standards of retained responses. Our proprietary cleansing is designed to mitigate and remove opportunistic, fraudulent, and bot responses. Consisting of multiple criteria formulated around logic rules, speed, consistency, and response-taking behaviour, this holistic assessment is key to ensuring the highest degree of data quality.
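The cleansing criteria themselves are proprietary and not disclosed in the report. Purely as an illustration of the kind of rule-based filtering the methodology describes (speed, consistency, response-taking behaviour), here is a hypothetical Python sketch; the field names and thresholds are invented for the example.

```python
# Illustrative only: SlashData's actual cleansing criteria are proprietary.
def keep_response(resp: dict, median_secs: float) -> bool:
    """Drop survey responses that look like speeding, straight-lining,
    or internal inconsistency."""
    # Speeding: completed far faster than the median respondent.
    too_fast = resp["duration_secs"] < 0.3 * median_secs
    # Straight-lining: identical answers across a long rating grid.
    ratings = resp["grid_answers"]  # e.g. a block of 1-5 scale answers
    straight_lined = len(ratings) >= 8 and len(set(ratings)) == 1
    # Logic rule: answers that contradict each other.
    inconsistent = resp["years_experience"] > resp["age"]
    return not (too_fast or straight_lined or inconsistent)
```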
- Free Resources and Data | SlashData
Giving back is in our DNA, so we always serve the world through our strength: data and insights.




