Agentic AI Presents New Cybersecurity Concerns Amid Ramped-Up Adoption
Agentic AI could circumvent current security tools and practices, some experts say.
(Ian Swanson, CEO, Protect AI)
Agentic AI is to 2025 what GenAI was to 2024. It’s the latest artificial intelligence trend, and organizations are adopting agentic AI as part of their digital transformation goals.
According to the 2025 Connectivity Benchmark Report by MuleSoft and Deloitte Digital, 93 percent of IT leaders report intentions to introduce autonomous AI agents within the next two years, and nearly half have already done so.
While many extol the benefits of agent-based AI, such as giving midmarket organizations complex capabilities that have typically been available only to large enterprises, some experts are sounding the alarm about the security problems it introduces.
OpenAI researchers released a whitepaper about the security implications of agentic AI in December 2023. They acknowledged the transformational power of agentic AI: “AI researchers and companies have recently begun to develop increasingly agentic AI systems ... that adaptably pursue complex goals using reasoning and with limited direct supervision.”
But they also issued caution about agentic AI security:
“As the agenticness of AI systems increases, hard-coded restrictions may cease to be as effective, especially if a given AI system was not trained to follow these restrictions and thus may seek to achieve its goals by having the disallowed actions occur. An AI agent could circumvent a hard-coded restriction by causing another party to take the action on the system’s behalf, while hiding the resulting potential impact from the user. For instance, an agent could send an email—an allowed action—to a non-user human that convinces said human to take the disallowed action. System deployers can bound this problem by ensuring strong network controls to limit agents’ interactions with the outside world except through monitored channels. Agentic systems could also be sandboxed in order to prevent systems with cybersecurity capabilities from escaping these controls (especially during development when a system’s capabilities are uncertain), but current sandboxing systems may not be well-suited to effectively bound highly-capable AI agents,” the researchers concluded in their whitepaper.
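The "monitored channels" the researchers describe can be made concrete. Below is a minimal Python sketch of an egress gateway that lets an agent reach only pre-approved hosts and logs every attempt; the host names and the gated_request function are illustrative assumptions, not drawn from any particular agent framework.

```python
# Minimal sketch of the "monitored channels" idea from the OpenAI paper:
# every outbound action an agent takes must pass through a gateway that
# enforces an allowlist and logs the attempt. All names here are
# illustrative, not from any specific agent framework.
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-egress")

ALLOWED_HOSTS = {"api.internal.example.com", "tickets.example.com"}  # hypothetical

def gated_request(url: str, payload: dict) -> dict:
    """Allow the agent to reach only pre-approved hosts, logging every attempt."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        log.warning("BLOCKED egress to %s", host)
        raise PermissionError(f"Agent egress to {host} is not on the allowlist")
    log.info("ALLOWED egress to %s", host)
    # ... perform the actual HTTP call here, e.g., with requests or httpx
    return {"status": "sent", "host": host}
```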
Malwarebytes also warned that “new AI agents could hold people for ransom in 2025.”
Additionally, more products aimed at securing agentic AI are coming to market. Recently, Splashtop, which offers endpoint security, announced its latest offering, Autonomous Endpoint Management, to help combat threats related to agentic AI.
MES Computing spoke with Ian Swanson, the CEO of Protect AI, which offers AI and machine learning security, about agentic AI and cybersecurity. Swanson is also the former global head of AI/ML at AWS and Oracle, and has testified before Congress about securing AI systems.
Can you describe agentic AI?
We can think of AI agents as ... a form of artificial intelligence that carries out tasks and moves through systems and processes that can live anywhere.
What are some of the security concerns around agentic AI?
If we are allowing AI in these agentic workflows to automate the decision-making process, there could be exploits at that point.
I’ll give an example of a risk, and perhaps a mitigation. The risk is that an agent makes an uncontrolled or unexpected decision that might lead to a security failure. For example, an AI agent carrying out automated incident response tasks incorrectly shuts down a critical production server and causes downtime. The AI thought something wrong was happening, but it made an unexpected decision, and maybe it shut down something that was super critical.
Now, the mitigation there is you can have [a] human in the loop, but you can also have tools like what Protect AI offers that can monitor these actions, that can put checks and balances in there to make sure that [AI is] acting appropriately. So again, the risk was uncontrolled or unexpected decisions by the AI, and we have to figure out how we best mitigate that so it doesn’t do something that it should not do.
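In practice, the human-in-the-loop mitigation Swanson describes can be as simple as pausing destructive actions for approval while letting low-risk actions proceed. The sketch below illustrates the idea in Python; the action names and approval mechanism are hypothetical, not Protect AI's implementation.

```python
# Minimal sketch of a human-in-the-loop check: destructive actions (like
# shutting down a server) are paused for approval, while low-risk actions
# proceed automatically. Action names and the approval prompt are
# illustrative assumptions.
DESTRUCTIVE_ACTIONS = {"shutdown_server", "delete_volume", "revoke_credentials"}

def execute_action(action: str, target: str, approver=input) -> str:
    if action in DESTRUCTIVE_ACTIONS:
        answer = approver(f"Agent wants to {action} on {target}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"DENIED: {action} on {target} held for human review"
    # ... dispatch the action to the real system here
    return f"EXECUTED: {action} on {target}"

# The incident-response agent from Swanson's scenario would be stopped
# here before taking a critical production server offline.
print(execute_action("shutdown_server", "prod-db-01"))
```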
Is there any advice that you can offer midmarket IT leaders about securing agentic AI?
What I would first say is agentic AI is very new. Even the largest companies in the world, the biggest banks in the world, are just starting to build out, deploy and put into production agentic AI workflows. So that’s, you know, number one.
With that, the midmarket needs to become very educated on what agentic AI is and what its risks are. The second step is you need to have visibility. If you get these systems in place where automated processes are being run by artificial intelligence, we need to have visibility. We need to have a ledger. We need to know the actions that [it] is carrying out. If we don’t have visibility, and we don’t have the ability to audit, then we can’t really understand if it’s doing what it should.
So, my first advice for the midmarket is, number one, understand what agentic AI is. Number two, I would test and evaluate, and I would make sure that I have, number one, visibility; number two, auditability; and then number three, built-in security controls.
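The visibility and auditability Swanson describes amount to keeping an append-only ledger of agent actions that can be reviewed later. Here is a minimal Python sketch of that idea; the JSON-lines file and field names are illustrative choices, not a specific product's format.

```python
# Minimal sketch of the "ledger" idea: an append-only record of every
# action an agent carries out, so those actions can be audited later.
# The file location and field names are illustrative assumptions.
import json
import time

LEDGER_PATH = "agent_actions.jsonl"  # hypothetical location

def record_action(agent_id: str, action: str, target: str, outcome: str) -> None:
    """Append one auditable entry per agent action."""
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "target": target,
        "outcome": outcome,
    }
    with open(LEDGER_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_action("ir-agent-01", "restart_service", "web-frontend", "success")
```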