Security leaders know AI is inevitable, but struggle with trust issues

While artificial intelligence is deep in the hype cycle, enterprise security leaders recognize it’s rapidly becoming an operational reality inside their organizations, and they have to deal with it. That was the message at a CISO panel sponsored by boldstart, ivp and Acrew Capital this week in San Francisco ahead of the annual RSA security conference.
Panelists included Elastic CISO Mandy Andress, Cloudflare CISO Grant Bourzikas and former Atlassian CISO Bala Sathiamurthy. Boldstart partner Shomik Ghosh and Acrew Capital’s Asad Khaliq hosted.
AI is clearly here to stay, and Cloudflare’s Bourzikas said companies have little choice but to start embracing it. “You're going to fall behind your competition because they're going to use it,” he said.
A real tension comes into play, however: even though these security pros recognize that AI isn’t going anywhere, they still have to figure out how to make it work from a security perspective. That can manifest in several ways, from using AI to enhance their own security tooling, to helping their companies implement AI solutions operationally, to defending against bad guys who will be using AI to up their game.
All of this is made more difficult by the fact that the target is constantly moving, with new tools and techniques arriving at a ridiculous pace. Meta CISO Josh Saxe, speaking at another RSA event this week, said that the rapid rate of change makes it particularly challenging for security professionals to keep companies safe. “It is very hard to know how we can [define] AI security today, let alone how AI will be if we're going to skate towards where the puck is going a year from now, given how fast the field is changing.”
Protecting the code base
It’s no coincidence that AI coding tools like Cursor and Windsurf are growing so fast. Developers are embracing the productivity boost that comes from coding with the help of AI, a phenomenon that Sathiamurthy likens to shadow IT: people are going to use these tools whether they’re sanctioned or not. He says it’s up to the CISO to set boundaries around usage without blocking the tools altogether.
“You know that [some] people are taking your code base and dumping it into Cursor, and if you go and try to stop it, it's going to be really hard as CISO. So we have to find ways to enforce boundaries and have mechanisms like hosting a Cursor server,” he said.
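Sathiamurthy didn’t detail the mechanics, but the boundary-setting he describes is often implemented as an egress policy: allow coding-tool traffic only to a sanctioned, self-hosted endpoint, and flag source code headed anywhere else. The Python sketch below is purely illustrative; the hostnames, file extensions, and decision rules are assumptions, not anything described on the panel.

```python
# Illustrative egress-policy check: allow AI coding-tool traffic only to a
# sanctioned, self-hosted endpoint, and flag code uploads headed elsewhere.
# Hostnames, extensions, and rules are hypothetical, not from the panel.

from urllib.parse import urlparse

SANCTIONED_HOSTS = {"cursor.internal.example.com"}   # hypothetical self-hosted server
UNSANCTIONED_HOSTS = {"public-ai-tool.example.com"}  # hypothetical public AI endpoint
SOURCE_EXTENSIONS = (".py", ".go", ".java", ".ts", ".rs")


def egress_decision(url: str, uploaded_filenames: list[str]) -> str:
    """Return 'allow', 'deny', or 'audit' for an outbound request."""
    host = urlparse(url).hostname or ""
    uploads_code = any(name.endswith(SOURCE_EXTENSIONS) for name in uploaded_filenames)

    if host in SANCTIONED_HOSTS:
        return "allow"    # sanctioned destination: always fine
    if host in UNSANCTIONED_HOSTS and uploads_code:
        return "deny"     # source code leaving for an unsanctioned AI tool
    if uploads_code:
        return "audit"    # unknown destination: log for security review
    return "allow"


if __name__ == "__main__":
    print(egress_decision("https://public-ai-tool.example.com/chat", ["auth.py"]))   # deny
    print(egress_decision("https://cursor.internal.example.com/chat", ["auth.py"]))  # allow
```

The point isn’t the specific rules; it’s that an “audit” path gives the CISO visibility without the blunt instrument of an outright block.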
Andress is being cautious with AI coding tool usage at Elastic, and is shying away from the idea of vibe coding. “Elastic is a heavily, heavily engineering-driven organization. There is no desire for vibe coding. We're not engineering if we're vibe coding,” Andress said.
Are we ready for automated remediation?
But AI isn’t only showing up in how developers code. It’s also starting to play a bigger role in how companies respond to incidents. CISOs need to weigh whether automated remediation or a rules-based approach is better, but Sathiamurthy says it’s not an either/or question.
“I think what's going to happen is AI will remove all the current work, but humans will still steer the output. So I would think it will be important to know what we should delegate to auto remediation and what we should not, and I think that's going to evolve over time,” he said.
Bourzikas sees a multi-model approach to automated remediation, where the hard part is connecting the right data to the models. The goal is a system that gives an answer within an acceptable probability of being right, whatever that threshold is for your organization or particular use case.
“We have enough data on the internet that we could train anything. I think we have to get to a place where that data is cleaner and we can build those pipelines and then start connecting data. So if you have a SOC (security operations center) use case and you think it could be suspicious, we'll go pull more data and analyze it,” he said.
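A minimal Python sketch of the enrichment loop Bourzikas describes: when an alert looks suspicious, pull more context and re-score it, and only auto-remediate above a confidence threshold, which also echoes Sathiamurthy’s point about deciding what to delegate. Every threshold, score, and data source here is hypothetical.

```python
# Illustrative SOC triage loop: enrich suspicious alerts with more data,
# auto-remediate only above a confidence threshold, otherwise escalate to a
# human. Thresholds, scores, and data sources are all hypothetical.

from dataclasses import dataclass, field


@dataclass
class Alert:
    source_ip: str
    initial_score: float               # 0.0 (benign) .. 1.0 (malicious)
    context: dict = field(default_factory=dict)


SUSPICIOUS = 0.5        # worth enriching (hypothetical threshold)
AUTO_REMEDIATE = 0.9    # confident enough to act without a human (hypothetical)


def enrich(alert: Alert) -> float:
    """Pull extra context and return a revised score (stubbed data sources)."""
    # In a real pipeline these would be lookups against threat intel feeds,
    # asset inventory, auth logs, etc.
    alert.context["threat_intel_hit"] = alert.source_ip.startswith("203.0.113.")
    alert.context["failed_logins"] = 14 if alert.context["threat_intel_hit"] else 1

    revised = alert.initial_score
    if alert.context["threat_intel_hit"]:
        revised += 0.3
    if alert.context["failed_logins"] > 10:
        revised += 0.15
    return min(revised, 1.0)


def triage(alert: Alert) -> str:
    if alert.initial_score < SUSPICIOUS:
        return "close"                 # not worth enriching
    score = enrich(alert)              # suspicious: go pull more data
    if score >= AUTO_REMEDIATE:
        return "auto-remediate"        # e.g., isolate the host
    return "escalate-to-analyst"       # humans still steer the output


if __name__ == "__main__":
    print(triage(Alert("203.0.113.7", initial_score=0.6)))   # auto-remediate
    print(triage(Alert("198.51.100.2", initial_score=0.6)))  # escalate-to-analyst
```

The escalation path keeps a human steering the output whenever the model’s confidence falls in the gray zone between suspicious and certain.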
Looking beyond marketing hype
CISOs clearly need help when it comes to providing AI security, and they may look to vendors, especially startups with innovative ideas. But they are wary of hype and overreach. “If you reach out basically saying that your product can fix all of our problems and do everything that we need to do – that turns me off,” Andress said.
It’s hard for CISOs to know if tools work the way they are being pitched. Sathiamurthy says he looks to peers for help. “If I'm looking for anything, I'm talking to a CISO who is already either using it or knows about it because their input is most valuable,” he said.
For Andress, it comes down to trust. “Can I trust the vendors and companies that I'm working with, and will they help my organization be successful in whatever security practices we're trying to implement?” she said.
Bourzikas speaks to 150-200 CISOs a year as part of his job, so he has a good sense of how the AI landscape is evolving, and for him it comes down to basics and reducing complexity. “I think that probably the number one thing I hear is about reducing complexity. Even my company is reducing complexity and trying to shrink the number of vendors in the environment. I think that will be a very common theme in the coming year,” he said.
AI is here to stay and it will be up to CISOs to find ways to bring it into the organization safely, regardless of the use case. That was the loud and clear message from these security professionals at this week’s panel.
Featured photo by Stephen Harlan on Unsplash