
Love and hate: tech pros overwhelmingly like AI agents but view them as a growing security risk

  • Nearly half of IT teams don’t fully know what their AI agents are accessing daily
  • Enterprises love AI agents, but also fear what they’re doing behind closed digital doors
  • AI tools now need governance, audit trails, and control just like human employees

Despite growing enthusiasm for agentic AI across businesses, new research suggests that the rapid expansion of these tools is outpacing efforts to secure them.

A SailPoint survey of 353 IT professionals with enterprise security responsibilities has revealed a complex mix of optimism and anxiety over AI agents.

The survey reports that 98% of organizations intend to expand their use of AI agents within the coming year.

AI agent adoption outpaces security readiness

AI agents are being integrated into operations that handle sensitive enterprise data, from customer records and financials to legal documents and supply chain transactions. Yet 96% of respondents said they view these very agents as a growing security threat.

One core issue is visibility: only 54% of professionals claim to have full awareness of the data their agents can access, leaving nearly half of enterprise environments in the dark about how AI agents interact with critical information.
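
To make that visibility gap concrete, here is a minimal sketch of an access inventory for agent identities. The AgentIdentity record, scope names, and report format are illustrative assumptions, not anything drawn from SailPoint's survey or any specific product.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy inventory of AI agent identities and the data
# scopes each one has been granted. A real deployment would pull this from
# an identity provider or IAM system rather than hard-coded records.

@dataclass
class AgentIdentity:
    name: str
    owner: str                              # the human team accountable for the agent
    granted_scopes: set = field(default_factory=set)

AGENTS = [
    AgentIdentity("invoice-bot", "finance", {"financial_records", "vendor_contracts"}),
    AgentIdentity("support-assistant", "customer-ops", {"customer_records"}),
]

def access_report(agents):
    """Print which sensitive data each agent can reach, so none of it stays in the dark."""
    for agent in agents:
        scopes = ", ".join(sorted(agent.granted_scopes)) or "none"
        print(f"{agent.name} (owned by {agent.owner}) can access: {scopes}")

if __name__ == "__main__":
    access_report(AGENTS)
```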

Compounding the problem, 92% of those surveyed agreed that governing AI agents is crucial for security, but just 44% have an actual policy in place.

Furthermore, eight in ten companies say their AI agents have taken actions they weren’t meant to – this includes accessing unauthorized systems (39%), sharing inappropriate data (33%), and downloading sensitive content (32%).

Even more troubling, 23% of respondents admitted their AI agents have been tricked into revealing access credentials, a potential goldmine for malicious actors.

One notable insight is that 72% believe AI agents present greater risks than traditional machine identities.

Part of the reason is that AI agents often require multiple identities to function efficiently, especially when integrated with high-performance AI tools or systems used for development and writing.

Calls for a shift to an identity-first model are growing louder, with SailPoint and others arguing that organizations need to treat AI agents like human users, complete with access controls, accountability mechanisms, and full audit trails.
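
As a rough illustration of that identity-first idea (not SailPoint's actual implementation), the sketch below treats an agent like a human user: every action is checked against explicit grants and recorded in an audit trail. The grant names and the perform_action helper are hypothetical.

```python
import datetime

# Hypothetical sketch: an AI agent gets the same treatment as a human user --
# explicit access grants and a full audit trail of everything it attempts.

AUDIT_LOG = []  # in practice this would be an append-only store, not an in-memory list

AGENT_GRANTS = {
    "invoice-bot": {"read:financial_records", "write:payment_drafts"},
}

def perform_action(agent_id: str, action: str) -> bool:
    """Allow the action only if it is explicitly granted; log every attempt either way."""
    allowed = action in AGENT_GRANTS.get(agent_id, set())
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
    })
    return allowed

# An unintended action (the kind eight in ten companies reported) is denied and recorded.
print(perform_action("invoice-bot", "read:financial_records"))    # True
print(perform_action("invoice-bot", "download:customer_records")) # False
```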

AI agents are a relatively new addition to the business space, and it will take time for organizations to fully integrate them into their operations.

“Many organizations are still early in this journey, and growing concerns around data control highlight the need for stronger, more comprehensive identity security strategies,” SailPoint concluded.

This post was originally published by TechRadar via RSS feed.