AI security risks

Today’s AI coding tools enable just about anyone to develop applications. Of course, we’ve had this capability since the 1980s with early tools like HyperCard giving non-programmers the ability to build simple applications. But it was only over the last decade that low-code and no-code platforms began to take off, making it much easier for people without formal software development training to create fully functional applications.
On its face, it seems like a good thing to democratize a skill that has historically required deep training. But with AI coding tools making it easier than ever to create applications, a question arises: Just because you can develop software, should you? The problem is that inexperienced programmers don’t know what they don’t know. And sometimes even highly skilled users run into problems with AI-generated code.
As an example, last year I had the pleasure of interviewing AI and robotics pioneer Rodney Brooks, who is currently CTO and cofounder of the warehouse robotics startup Robust.ai. He also spent 10 years running MIT CSAIL, the university’s AI lab, and he helped cofound iRobot, the company behind the Roomba robot vacuum. In other words, he has an incredible depth of knowledge around AI and robotics.
He told me a story about programming what he characterized as “some obscure stuff where he needed help,” so he turned to an LLM-based coding tool for assistance. He said in some instances he found the tools extremely useful, but he also encountered some troubling errors, including one case where the LLM made up a non-existent library.

“So I tried to compile it, and it said that it couldn’t find this particular procedure. So I asked, ‘where is this procedure,’ and it said, ‘Oh, I’m sorry, it doesn’t exist.’ So it hallucinated the perfectly named procedure, which fit with all the other procedures, only it didn’t exist,” he told me.
Worse than that, he related that some of these hallucinated library names had started to proliferate in the real world: hackers create packages under those names, fill them with malicious code and put them on GitHub for unsuspecting and inexperienced programmers to find and download.
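To make that risk a little more concrete, here’s a minimal sketch, assuming a Python project and the public PyPI JSON API, of the kind of sanity check that can catch a made-up dependency before it gets installed. The package names below are purely illustrative, and existence on the index alone is not proof of safety, since a squatted name can still carry malicious code.

```python
# Minimal sanity check before trusting an LLM-suggested dependency:
# does the package name actually exist on the index? (Sketch only;
# assumes the public PyPI JSON API. Existence is not proof of safety,
# since a squatted name can still carry malicious code.)
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: the suggested name isn't published


if __name__ == "__main__":
    # Hypothetical names for illustration only.
    for name in ("requests", "definitely-not-a-real-helper-lib"):
        status = "found" if exists_on_pypi(name) else "not found"
        print(f"{name}: {status}")
```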
Recognizing that things change quickly when it comes to LLMs and coding tools, I wondered whether there had been security improvements since May 2024, when I interviewed Brooks. I found a report published this year by Veracode, a cybersecurity company, examining the security of the code these tools produce. It was a good news/bad news kind of situation.
Secure or not secure
What Veracode found was that the majority of the AI-generated code created in its tests, 55%, was secure. That’s inarguably good news, but the flip side is that 45% was not. “The study analyzed 80 curated coding tasks across more than 100 large language models (LLMs), revealing that while AI produces functional code, it introduces security vulnerabilities in 45% of cases,” the company noted in a statement.
That’s all well and good if you go in with your eyes wide open and have tools and procedures in your coding pipeline (assuming you have one) that can test for vulnerabilities in both AI-generated and human-generated code. (It’s worth noting that boldstart portfolio company Snyk has coding security tools designed to catch this kind of problem.)
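For illustration, here is a rough sketch of what such a pipeline gate might look like. It uses the open-source Bandit scanner for Python purely as a stand-in, since I can’t speak to the specifics of any one vendor’s tooling; the same pattern applies whether your pipeline runs Snyk or another scanner, and the exact commands and output format will depend on the tool you use.

```python
# Hypothetical CI gate: scan the repository (AI-generated and
# human-generated code alike) and fail the build if the scanner
# reports any findings. Bandit ("pip install bandit") is used here
# only as a stand-in for whatever SAST tool your pipeline relies on.
import json
import subprocess
import sys


def scan(path: str = ".") -> int:
    # "bandit -r <path> -f json" recursively scans Python files and
    # prints machine-readable results to stdout.
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    findings = report.get("results", [])
    for f in findings:
        print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")
    return len(findings)


if __name__ == "__main__":
    count = scan()
    print(f"{count} potential issue(s) found")
    sys.exit(1 if count else 0)
```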

The problem, in my view, comes when hobbyists who aren’t aware of these risks build applications blindly. We’ve seen hallucinations bite other professions, too: lawyers have cited hallucinated cases in court, and research papers have included hallucinated references.
We all should be aware by now that LLMs are tools that can assist us, but are not infallible. They make mistakes and make things up. If you don’t go in knowing that, you could be unwittingly putting your company at risk as you create content of any kind, whether that be a report, a picture or a piece of software.
A recent Stack Overflow developer survey found that 46% of developers distrust the accuracy of AI tools. “More developers actively distrust the accuracy of AI tools (46%) than trust it (33%), and only a fraction (3%) report ‘highly trusting’ the output,” the report found. Interestingly, it also found that the more experienced the programmer, the higher the level of distrust.
“Experienced developers are the most cautious, with the lowest ‘highly trust’ rate (2.6%) and the highest ‘highly distrust’ rate (20%), indicating a widespread need for human verification for those in roles with accountability,” according to the report. Given what we know, that’s probably a reasonable stance.
Houston, do we have a problem?
The risk posed by an amateur coder is a potent example of a much larger challenge for businesses: the rise of “Shadow AI,” where employees use powerful, unsanctioned tools without IT oversight. While it’s difficult to find documented cases of hobbyists or non-professional programmers introducing malicious code into an organization and causing real data loss, a recent report from IBM on the consequences of Shadow AI tells us that there can be a real dollar cost when people use unsanctioned tools inside companies.
When employees bring their own AI tools, the risks rise. According to Troy Bettencourt, global partner and head of IBM X-Force, who helped author the report, companies with a high level of Shadow AI often have to deal with a much costlier aftermath when a breach happens.
Whether the breach happens because of bad code or some other kind of misuse, the end result can still be painful for companies. “Once attackers gain access, it’s no longer just about visibility—it’s about losing control. And without that control, our most sensitive data could be exposed in systems we don’t even know exist,” Bettencourt said.

One way to solve that problem is by providing tools employees want to use, while putting policies in place to guide AI usage. “We also saw a striking number of companies reporting that they had no AI governance policies. Creating these policies is critical to keep shadow solutions at bay,” he said.
Qualtrics CISO Assaf Keren says security pros need to stay on top of this, and it’s a huge challenge, whether it involves coding or any other generative AI use case. “Security teams need to know which tools and features employees want to use, which ones people are already using and how frequently, and they need to have a strong understanding of any new capabilities that are coming,” Keren said. Once they have that visibility and understanding, he says that the focus can shift to building the pathways and guardrails that make it easier for people to use these tools within a set of established rules.
None of this is easy, of course, but when people use these tools without controls, especially to build applications, things can and will go wrong. The paradox is that the easier these tools are to use, the bigger the security challenge is likely to be.
~Ron
Featured photo by Hiroshi Kimura on Unsplash