We’re building AI security on the fly

I spent the first part of the week at the RSA cybersecurity conference in San Francisco, and after talking to a bunch of smart people and watching a slew of panels featuring CISOs from some major enterprise organizations, I’ve concluded that AI security is very much a work in progress.
As we enter an agentic world, where software could potentially undertake multiple tasks, cross systems and share data, we still need to figure out how to keep companies safe from the myriad problems that could arise, such as prompt injection attacks, privilege escalation, malicious code generation and data leaks.
The fact is that there are no easy answers. Cybersecurity companies are working hard to come up with solutions, of course, and there were announcements aplenty flooding RSA, but I learned that we’re essentially still building AI security infrastructure as we go.
Permissions challenges
For starters, when you have agents operating autonomously, there are going to be huge identity and authorization problems. That’s because unlike traditional software, which typically operates with static, isolated access rights, AI agents can dynamically delegate tasks to other agents. The question becomes: how do you track and control these permissions as agents interact and move around an organization?
Phil Venables, who until recently served as CISO for Google Cloud, addressed this issue during a panel discussion this week. He pointed out that while agent-based systems offer efficiency, their ability to delegate tasks introduces a number of possible cascading risks.

“So an agent will have an identity and have a set of permissions that are granted to it by the person or thing that's driving that agent,” he said. “But then, unlike other privileged protocols, that agent is then going to have to have the ability to delegate to another agent, which may still delegate to another agent because these are networks.” Venables admitted that everyone is still trying to figure out how to make this work in a secure way.
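One control that comes up in these discussions is permission attenuation: letting an agent hand off work, but only with a subset of its own permissions, so access can shrink but never grow along a delegation chain. Here is a minimal sketch of that idea; the `Agent` class and `delegate` method are purely illustrative, not the API of any real framework.

```python
# Hypothetical sketch of permission attenuation for delegating agents.
# The Agent class and its scopes are illustrative, not a real product's API.

class Agent:
    def __init__(self, name: str, scopes: frozenset[str]):
        self.name = name
        self.scopes = scopes  # permissions granted by whoever drives this agent

    def delegate(self, name: str, requested: set[str]) -> "Agent":
        # A delegate may only receive a subset of the delegator's scopes,
        # so permissions can narrow but never escalate down the chain.
        granted = self.scopes & frozenset(requested)
        missing = set(requested) - granted
        if missing:
            raise PermissionError(f"{self.name} cannot grant {sorted(missing)}")
        return Agent(name, granted)

# A person grants the root agent its permissions...
root = Agent("triage-bot", frozenset({"read:tickets", "read:logs", "write:tickets"}))
# ...which delegates a narrower slice to a sub-agent.
helper = root.delegate("log-reader", {"read:logs"})
# Attempting to escalate fails: the root never held "admin:users".
# root.delegate("rogue", {"admin:users"})  # raises PermissionError
```

The hard part Venables points to is not this local check but doing it across a network of agents owned by different teams and vendors, where no single component sees the whole chain.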
Reducing complexity
One idea is to reduce the level of complexity related to these agents by limiting the functionality they have. Lee Klarich, chief product officer at Palo Alto Networks, thinks that the hype might be getting ahead of the reality of how agents are likely to work in the enterprise. He believes agents operating inside large organizations are going to require some level of predictability and trust before companies will deploy them widely.
“We need to take the promise of AI and the coolness of agentic, but we need to be able to combine it with automation, integrations and other things like that in order to actually deploy something that works and that will be trusted,” he told FastForward.
He gives the example of quarantining a host after a security incident: you write an automation that describes how to do it, then point the AI to the script. “Then I say to the AI agent, anytime you want to quarantine a host, execute this automated action,” he said. That way, every time the agent quarantines a host, it does it in the same way. “I'm still leveraging AI to come up with the plan. The plan is that I need to respond to this security incident, where one of the steps might be to quarantine a host,” he said.
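The pattern Klarich describes can be sketched in a few lines: the model proposes a plan, but every step must map to a pre-written, reviewed automation, so execution is deterministic. All function names here are hypothetical illustrations, not Palo Alto Networks code.

```python
# Minimal sketch, assuming a fixed allowlist of automations: the AI may plan,
# but any "quarantine host" step runs the same pre-written script every time.

def quarantine_host(host_id: str) -> str:
    # Pre-written automation: the exact, reviewed steps, identical on every run.
    steps = [f"isolate {host_id} from network",
             f"snapshot {host_id} disk",
             f"open incident ticket for {host_id}"]
    return "; ".join(steps)

# The only actions the agent is permitted to execute.
APPROVED_ACTIONS = {"quarantine_host": quarantine_host}

def execute_plan(plan: list[tuple[str, str]]) -> list[str]:
    # An AI-generated plan is a list of (action, target) pairs; anything
    # outside the allowlist is rejected rather than improvised.
    results = []
    for action, target in plan:
        if action not in APPROVED_ACTIONS:
            raise ValueError(f"unapproved action: {action}")
        results.append(APPROVED_ACTIONS[action](target))
    return results

print(execute_plan([("quarantine_host", "web-07")]))
```

The design choice is the point: the model's creativity is confined to planning, while the blast radius of execution stays bounded by what humans wrote and reviewed in advance.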
Taking smaller bites
That ties into the way Salesforce is thinking about agents. Salesforce chief trust officer Brad Arkin, who had previous stints at Cisco and Adobe, says the goal for his team when creating agents is to break a complex workflow down into more manageable actions and turn those smaller actions into agents, rather than the entire workflow.
“So we are starting small and achievable, so you can then build out from there. But there's an art to figuring out where you start and which piece you snap off of the broader workflow to implement first,” he said. Once they figure that out and achieve success, they can build from there, but keeping it small helps them maintain more control over what the agent can do safely and successfully.

To say it’s early days for this technology would be a gross understatement. What I learned this week is that the brightest minds in tech are still trying to figure this all out, and these are big, complex problems that will take time to solve. Yet without trustworthy security frameworks in place, this simply won’t work. It can’t.
As Venables pointed out in his panel, however, it’s like the early days of any new technology over the last few decades, and we'll figure it out. “It's like the beginning of the web. It's like the beginning of mobile. It's like the very beginning of the cloud. There's a lot of good stuff going on, but everybody's still building the frameworks,” he said.
~Ron