
Building trust in AI: How technology managers tackle security and risk management

  • Writer: Bleona Bicaj
  • Apr 3
  • 5 min read

Updated: Apr 4

AI is transforming industries at an incredible pace, but its power brings significant security risks. From adversarial attacks to data breaches, companies must be prepared to protect the AI-powered applications they build. So how do technology managers approach security and risk management in AI? Which practices are becoming standard, and who is leading the charge?


Our recent research sheds light on how organisations secure their AI systems, as reported by technology professionals in leadership positions, and reveals some notable gaps. This blog post draws on a larger report covering trust, risk, and security management in AI, narrowing the focus to data collected from 569 professionals in management positions at tech companies: tech/engineering team leads; CIOs, CTOs, and IT managers; and CEOs/senior management. They answered questions about trust, risk, and security management in AI in the 27th edition of our global Developer Nation Survey, fielded in Q3 2024.


How are organisations protecting their AI-powered applications?


AI security risks range from adversarial attacks and data breaches to model manipulation. To mitigate these threats, organisations deploy various protective measures. Companies are mainly investing in AI-specific security tools and technologies (33%) and encryption tailored for AI data (31%) to stay ahead of potential threats.


Regular AI security audits (29%), staff training on AI security risks (29%), and data privacy management for AI (28%) are also common practices among organisations. However, not every organisation has made AI security a priority. While 82% of technology professionals report their company uses at least one mitigation strategy, 10% admit they have no AI-specific risk management in place, and another 8% simply don’t know what their company is doing to address security risks.

Nearly one in five technology leaders either have no AI risk strategy or don’t know if their organisation has one
[Data graph: practices used to protect AI-powered applications against AI-specific security threats]

Who is driving AI security efforts within organisations?


Security is no longer just the responsibility of IT teams. CIOs/CTOs/IT managers and senior executives (including CEOs) report nearly identical adoption rates of AI-specific security practices, at 86% and 85%, respectively. This suggests that AI security is recognised as both a technical challenge and a business priority at the leadership level. However, tech and engineering team leads lag behind, with only 72% reporting the implementation of AI security practices within their organisations. This gap points to a disconnect between leadership’s security policies and awareness at the development level rather than to differing priorities: team leads may have less visibility into company-wide AI security strategies, which could explain the lower reported adoption.

Among technology professionals, tech and engineering team leads report lower awareness of AI security practices within their organisations
[Data graph: share of technology leaders who report that their organisation uses AI-specific security practices, by role]

Company size matters: Are smaller firms falling behind in AI security?


Company size plays a significant role in how AI security is handled. Managers in large enterprises (i.e., companies of more than 1,000 employees), with their expansive resources and dedicated security teams, report the highest adoption rate of AI security practices, with 90% implementing such measures. Managers in medium-sized businesses (51-1,000 employees) follow closely at 86%, but those in small businesses (up to 50 employees) lag far behind at just 64%.


This gap isn’t just about awareness; it’s about priorities and resources. Large enterprises are far more likely than small businesses to conduct AI-specific penetration testing (32% vs. 10%), regular security audits (34% vs. 18%), and threat intelligence and risk assessments (28% vs. 14%). With tighter budgets and fewer specialised security personnel, smaller companies often struggle to allocate resources for AI-specific protections, relying instead on broader cybersecurity measures that may not fully address AI-related risks.


However, medium-sized companies take the lead over both small and large companies in certain AI security practices. They are ahead in the adoption of AI-specific security tools, with 40% using them compared to 33% of large companies and just 20% of small businesses. Similarly, 35% of medium-sized businesses have data privacy management solutions tailored for AI, surpassing large enterprises at 27% and small businesses at 16%. This suggests that medium-sized companies, while lacking the vast resources of large corporations, may be more agile in adopting emerging security technologies, striking a balance between strategy and execution.

While large enterprises have the budgets and teams to prioritise AI-specific protections, small businesses struggle to keep up, leaving them more vulnerable to AI-related threats

[Data graph: share of technology leaders who report that their organisation uses AI-specific security practices, by company size]

How does AI security vary across development types?


AI security is far from uniform across industries. Each sector faces unique challenges shaped by the nature of its AI applications, the volume of data it processes, and the potential risks associated with AI-driven automation. While some industries have embraced AI security as a fundamental requirement, others are lagging, whether due to a lack of awareness, lower perceived risks, or resource constraints.


At the forefront of AI security adoption are managers involved in consumer electronics (96%), augmented reality (95%), and industrial IoT (95%) projects. Managers in these industries prioritise security not just because of regulatory pressures but also due to the inherent risks of their AI-driven operations. Teams building consumer electronics and IoT devices, which process vast amounts of real-time personal and behavioural data, place a heavy focus on robust encryption and access control. In fact, 45% of managers in this sector report implementing these protective measures to prevent data breaches and adversarial attacks.


Augmented reality (including mixed reality applications) goes even further, with 59% of managers reporting using encryption measures tailored specifically for AI data. This emphasis likely stems from the fact that AR systems often involve real-time spatial data processing, biometrics, and interactive user engagement, making them highly sensitive to security threats.


However, not all sectors demonstrate the same level of urgency when it comes to AI security. Backend services fall significantly behind, with only 69% of managers working on backend services reporting that their organisation has AI-specific security measures in place. The remaining managers are either unsure whether their company has any AI security practices at all (16%) or confirm that their organisation has no such measures in place (15%). This lack of adoption suggests that backend service providers may still be relying on traditional cybersecurity approaches, underestimating the distinct vulnerabilities that AI-powered applications introduce.

Industries handling sensitive consumer data, like consumer electronics and IoT, lead in AI security adoption. However, backend services may be underestimating AI-specific risks by relying on traditional cybersecurity measures.

[Data graph: share of technology leaders who report that their organisation uses AI-specific security practices, by type of development project]

How does experience impact AI security awareness?


One of the more unexpected findings in AI security management is that less-experienced managers are more likely than their seasoned counterparts to report AI security measures in their companies. Managers with fewer than two years of experience in software development report the highest adoption rate within their organisations, at 90%, while for those with over a decade of experience, the figure drops to 74%.


This decline could indicate that organisations with more experienced managers rely more on traditional cybersecurity approaches rather than AI-specific frameworks. While awareness levels remain consistent across experience groups, companies led by seasoned professionals may be slower to adapt their security strategies to evolving AI threats. As AI risks become more sophisticated, ensuring that security measures keep pace will require continuous evaluation and adaptation at the organisational level.

Organisations with more experienced managers may be slower to adopt AI-specific security frameworks, potentially relying more on traditional cybersecurity approaches

[Data graph: share of technology leaders who report that their organisation uses AI-specific security practices, by experience in software development]

Want to dig deeper? This post only scratches the surface of how tech leaders approach AI security and risk management. For a more comprehensive view, check out our full report on Trust, Risk, and Security Management in AI, where you’ll find deeper insights into how organisations build trustworthy AI systems and where critical gaps still exist. You can also explore AI and related topics further on SlashData’s blog.


Questions or feedback? We’d love to hear from you. Whether you’re looking to collaborate, dig into our data, or simply chat, feel free to contact us.


About the author

Bleona Bicaj, Senior Market Research Analyst

Bleona Bicaj is a behavioural specialist, enthusiastic about data and behavioural science. She holds a Master’s degree in Economic and Consumer Psychology from Leiden University and has more than six years of professional experience as an analyst in the data analysis and market research industry.

