Why Oracle’s AI chief builds AI internally first

Back in the mid-2000s, as we began to see the shift to cloud computing, many companies were slow to adjust. Some took years. Heck, I remember Andy Jassy giving a keynote address at AWS re:Invent in 2019, well into the cloud computing era, where he implored his customers to move faster. “It’s easy to go a long time dipping your toe in the water if you don’t have an aggressive goal,” he said.
That was then. Today, we have AI and there is a lot of pressure out there to move faster, much faster. Yet moving quickly is not something enterprise IT departments generally feel comfortable doing. Roger Barga, the head of artificial intelligence and machine learning at Oracle, who previously spent seven years at AWS, understands the similarities and the differences between the two tectonic technological shifts.
The difference this time is that you can’t wait a decade to see how everything shakes out. “The pace of change that I’m seeing happen right now is orders of magnitude faster than I’ve ever seen before, and it’s forcing companies to move at a speed which they are both not accustomed to and uncomfortable with,” Barga told FastForward.
I spoke to Barga at the MIT Sloan CIO Symposium in Cambridge, MA in May about the pace of change and how Oracle is helping customers navigate it.
The dogfooding principle
Barga believes that AI projects don’t stall because of the technology itself, but because they fail to take into account the audience using them. “Successful projects fit naturally into existing workflows and augment what people are already doing, rather than disrupting or completely transforming their work,” Barga said.
One way Oracle does this is by building AI functionality into the tools its customers already use. “They’re already using our Oracle databases and NetSuite, so we’re putting the AI there to meet them where they’re at,” he said.
He says that Oracle also uses the same tools internally that it sells to customers. It starts by building for that internal audience first and working out the kinks before delivering the products to customers.

“Throughout my career, I have built services that first served our internal needs. You have a very active audience willing to give you feedback and engage with you very early on,” he said. Employees tend to give frank feedback, helping development teams learn a great deal before a product ever ships to customers.
“Just as we're building for, supporting, and listening to our internal customers, we’ve found that what they need often aligns with what our external customers want too,” he said.
Of course, before you roll anything out, you need to make sure it’s secure.
The security and governance question
Barga acknowledges that enterprise IT leaders are in a tough position. They still have to play by certain rules. As I wrote in an April ForwardThinking commentary, in spite of AI, there are no shortcuts when it comes to IT fundamentals. “Your company still has to adhere to the same fundamental principles of enterprise computing, whether it’s agents or any other shiny new technology, and there’s just no getting around that.”
If you’re training these models on your company’s most critical data, you have to take precautions, and CIOs certainly recognize this. It’s one of those pesky problems that tends to slow large companies down. You can’t simply forge ahead because the madding crowd is doing it, but you can’t stand still either.
Barga says one way to guard against the risks of moving too fast is by investing in “durable infrastructure,” the underlying tooling that protects all of this information. This is especially true with the growing use of agents.
“For me, it’s about centralization, a single agent control surface with guardrails. The guardrails should ideally be agentic in themselves. They should have corporate best practices for what data should be going out, what services should be consulted as part of a workflow,” he said. That means if someone tries to use a document marked as mission critical, there will be processes in place to protect it: to confirm it’s appropriate to use, that PII isn’t exposed, that internal policy is followed, and that legal and regulatory requirements are met.
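To make the idea concrete, here is a minimal sketch of what a centralized guardrail check might look like: every agent request to use a document passes through one set of policy checks before the content is released. All names and policies below are invented for illustration; this is not an Oracle API, and a real control surface would enforce far more rules.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # Hypothetical metadata a governance layer might track per document.
    doc_id: str
    classification: str   # e.g. "public", "internal", "mission_critical"
    contains_pii: bool
    text: str

@dataclass
class GuardrailDecision:
    allowed: bool
    reasons: list = field(default_factory=list)

def check_document_access(doc: Document, agent_clearance: str) -> GuardrailDecision:
    """Single control surface: every policy check lives in one place."""
    reasons = []
    # Policy 1: mission-critical documents require elevated clearance.
    if doc.classification == "mission_critical" and agent_clearance != "elevated":
        reasons.append("classification: mission-critical data requires elevated clearance")
    # Policy 2: never let PII flow out through an agent workflow.
    if doc.contains_pii:
        reasons.append("pii: document contains personally identifiable information")
    return GuardrailDecision(allowed=not reasons, reasons=reasons)

# Usage: a public document passes; a sensitive one is blocked with reasons.
public_doc = Document("d1", "public", False, "quarterly roadmap summary")
secret_doc = Document("d2", "mission_critical", True, "customer records")
print(check_document_access(public_doc, "standard").allowed)   # True
print(check_document_access(secret_doc, "standard").reasons)
```

The design point is the centralization Barga describes: because agents can only reach documents through this one gate, adding or tightening a policy is a change in one place rather than in every workflow.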
CIOs are being put in a difficult position when it comes to implementing AI inside their organizations. They can’t say no to everything as some did with the advent of cloud computing, but they have to be pragmatic about the need to move quickly, while protecting the enterprise. It’s a tough balance, but finding an effective and safe way to implement this technology may be the best shot at staying competitive.
Featured photo courtesy of Oracle.