Red Hat’s CTO sees AI as next step for company’s open approach
Chris Wright has worked at Red Hat in various roles for more than 20 years. When he started, there was just a single product: Red Hat Enterprise Linux (RHEL). Over his two-decade tenure, the company has expanded far beyond that. Today, while RHEL remains a core product, the company’s focus has shifted to hybrid cloud and, increasingly, AI.
The hybrid strategy began to take shape about a decade ago as customers started distributing workloads across multiple cloud providers and on-prem data centers. Red Hat positioned itself to help manage the complexity associated with that.
“I'd say, starting about 10 years ago, we thought of the world as being hybrid, and we started articulating this vision for open hybrid cloud — with open meaning open source, open ecosystems and open APIs — and hybrid cloud meaning cloud infrastructure that spans enterprise data centers [and multicloud deployments],” Wright told FastForward.
IBM announced its $34 billion acquisition of Red Hat in 2018 and closed the deal in 2019. Although Red Hat operates as a quasi-independent business unit inside Big Blue, it still has to coordinate with the mothership when it comes to setting strategy and determining how it fits into the broader corporate agenda.
That requires a nuanced leadership approach, one that takes into account a shifting market increasingly focused on AI, IBM’s place in that market, and how Red Hat can play to its strengths to deliver products and services that make everyone happy.
Taking AI personally
Every company has had to get up to speed when it comes to AI, and Red Hat is no different, especially with IBM making it such a strong focus. The company has been exploring multiple ways to get there.
As a leader charged with incorporating AI into the way people work, Wright takes a hands-on approach, playing with various tools before expecting his staff to start incorporating them into their workflows. “So from a personal point of view, I like to stay connected to the leading edge of technology. So that means I play with tools and technology, and stay abreast of what's happening,” he said.
For staff, the company started with education, then let engineers get comfortable working with AI in low-stakes situations. “We started with learning and experimentation, then we moved into applying that to something practical in our daily workflow, so that you're actually taking what you learned and putting it into use,” Wright said.
Don’t reinvent the wheel
Unlike a lot of companies that are layering AI tooling on top of their existing platforms, Red Hat is taking a more pragmatic approach, Wright says. The idea is to extend the same set of tools people have already been using so that they can support AI.
“We're very interested in leveraging all of the capabilities that sit lower in the stack, whether it's Linux or Kubernetes, and extending them to meet AI requirements,” he said. “We're just extending what's already there in terms of API capabilities and that gateway API, while adding some notions of what it means to support AI traffic through that same gateway.”
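Wright doesn’t spell out the implementation, but the idea of carrying AI traffic over an existing gateway can be sketched with the Kubernetes Gateway API. The example below is a hypothetical illustration, not Red Hat’s actual configuration: the gateway, service and namespace names are placeholders, and it uses the official kubernetes Python client to register an HTTPRoute that steers OpenAI-style inference calls to a model server.

```python
# Hypothetical sketch: routing inference traffic through an existing
# Kubernetes Gateway API gateway. All names below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

# An HTTPRoute that sends OpenAI-style inference calls ("/v1/...")
# through the shared gateway to a model-serving backend.
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "inference-route", "namespace": "ai"},
    "spec": {
        "parentRefs": [{"name": "shared-gateway"}],  # the gateway already in place
        "rules": [
            {
                "matches": [{"path": {"type": "PathPrefix", "value": "/v1"}}],
                "backendRefs": [{"name": "model-server", "port": 8000}],
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="gateway.networking.k8s.io",
    version="v1",
    namespace="ai",
    plural="httproutes",
    body=http_route,
)
```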

To that end, Red Hat is building support for agents, MCP (Model Context Protocol) servers and model delivery. MCP servers let AI agents securely access and interact with data, files, APIs and other systems. Wright sees this coming together in the Red Hat toolset around inferencing. “Our view is that inferencing is the production runtime for AI. A model doesn't do something useful for you until it’s got an API and it’s serving content. That content is served through inferencing,” he said.
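That framing, inferencing as a production runtime behind an API, is easy to make concrete. The sketch below assumes a self-hosted inference server that exposes an OpenAI-compatible API (a common convention for model serving); the endpoint URL, key and model name are hypothetical placeholders, not anything Red Hat ships.

```python
# Minimal sketch: once a model is served, using it is just an API call.
# The base_url, api_key and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical self-hosted endpoint
    api_key="unused-for-local-servers",   # many local servers ignore the key
)

response = client.chat.completions.create(
    model="my-served-model",  # whatever model id the server registered
    messages=[{"role": "user", "content": "Summarize what MCP servers do."}],
)
print(response.choices[0].message.content)
```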
Red Hat tries to use open source solutions whenever possible. For inferencing, it has adopted an open source tool called vLLM to help run AI models quickly and efficiently, whether on one computer or across many. vLLM is designed to handle lots of requests from different users at the same time, making it a good fit for complex systems, especially ones that use Kubernetes to manage apps across servers.
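For a feel of what that looks like in code, here is a minimal sketch of vLLM’s offline batch-inference API in Python. The model id is just an example; any model vLLM supports could stand in, and serving the same model behind an HTTP API is a separate deployment step.

```python
# A minimal sketch of vLLM's offline batch-inference API.
# The model id is an example; substitute any supported model.
from vllm import LLM, SamplingParams

prompts = [
    "Explain Kubernetes in one sentence.",
    "What does 'hybrid cloud' mean?",
]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# vLLM schedules these prompts with continuous batching, which is what
# lets one server handle many concurrent users efficiently.
llm = LLM(model="facebook/opt-125m")

for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)
```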
Working with IBM
Whatever it does, Red Hat has to coordinate with IBM, but it still operates reasonably independently. Wright describes the relationship as almost like a partnership with an ISV. “We have a very strong relationship with IBM, because they have a whole product portfolio that runs on top of RHEL and OpenShift, so that kind of relationship is really important,” he said.
In general, engineers at Red Hat aren’t interacting with IBM on a regular basis, but AI works a little differently: there, Red Hat is trying to take advantage of IBM’s expertise. “We have a much more collaborative development model when it comes to AI. We work together, we look at different research activities together. We've moved work from IBM research into Red Hat and productized it,” he said.
Red Hat is in an unusual position: being part of a big company gives it advantages on several levels, from sales channels to research and tooling, but it still needs to maintain its independence for its customers. While AI has changed things, Red Hat is sticking to the tried-and-true principles of open source, hybrid cloud and containerization, and making them work within an AI framework.
Featured image courtesy of Red Hat.