
We all want to know that the work we do makes a difference. Right now, the Red Hat OpenShift AI team is making their mark in an industry in the midst of rapid evolution. They’re solving complex problems, innovating in new ways and making a tangible impact on users and businesses by helping them solve one of their biggest artificial intelligence (AI) and machine learning (ML) headaches: deploying AI/ML in their business in a way that’s safer, more secure and adds value.

Getting AI models deployed and into production requires a lot of different moving parts. This team helps businesses navigate this journey, making AI model training, deployment and monitoring more seamless and scalable across the hybrid cloud and the edge. We sat down with engineering director Catherine Weeks to dive into what the team is working on and the exciting open roles they have available.

Tell us a bit about your journey at Red Hat so far, and how that led you into the world of AI.

I’m currently an engineering director in the OpenShift AI team, and have been at Red Hat for 11 years.

My journey began in user experience, where I was brought on as the first UX professional for the middleware team. The role evolved as Red Hat expanded its user experience focus, and a dedicated user experience engineering team emerged, which I joined and eventually managed.

Because I have a background in computer science, I’d always approached UX from a technical perspective, which later influenced my decision to shift towards an engineering role. I joined an engineering team focused on OpenShift as a senior manager, and then later moved into a director role, where I led an engineering team focused on delivering OpenShift user consoles.

This team operated separately from the core OpenShift team, and as the company's strategy evolved, I advocated for the reintegration of our team with their respective products (effectively phasing out my own role!). I started to explore where I could contribute next within Red Hat, and this happened to coincide with the emerging AI boom. Having previously collaborated with a senior director in OpenShift AI, I took the opportunity to join his team. I wanted to support the team's growth and help navigate the boom we expected to see in the AI sector.

How long have you been on the team, and what has been your most rewarding experience working on OpenShift AI?

I joined the team in June 2023, less than a year ago, and my arrival coincided with an increased focus on OpenShift AI as a product offering. Because of that, I knew that the team’s fast growth and the subsequent change management was going to be both challenging and rewarding, and so far, that has been true.

We have grown from about 40 people to over 100 since I joined the team. This meant we had to rethink pretty much everything—from how we organize as teams, to how we handle our work, and what we expect from everyone, especially our management team, and our technical leads and architects. We've been navigating through this while quickly scaling up our product from the early adoption phase to a more mature stage.

But the best part of this journey has got to be the energy and enthusiasm I'm surrounded by every day in this team - it’s just incredible. It makes the challenging parts worth it.

With the rapid advancements in the field, how do you stay innovative and adaptable in your role?

As I was relatively new to the AI world when I stepped into the team, I had to quickly get up to speed with the industry, the technologies, and the market dynamics. Given the pace of innovation in the industry, our plans can shift at a moment's notice. This is really exciting and, admittedly, also overwhelming.

Luckily, we have a lot of team members who are really plugged in to what's happening in the industry, and so we lean heavily on our collective knowledge. I also spend a lot of time consuming articles, YouTube videos, press announcements and anything else I can to absorb more information around where things are heading in the AI space.

What advice would you give to someone wanting to move into the AI space at Red Hat, based on your own experiences and lessons learned?

The process of building and managing AI models, compared to traditional applications, is not actually as different as people think. A lot of the techniques used to organize and deploy these models on scalable, secure, production-ready infrastructure are actually the same. And I think this is Red Hat's value here. We know how to take application workloads from early development through production-ready code, and help companies make the journey to deploying them in production in a way that’s enterprise-hardened and ready. We now need to do that for models, which is exactly what we’re doing with OpenShift AI.

So, what I’m saying is that moving into the AI space doesn’t necessarily mean you need to start from square one. The skills and knowledge you've acquired, whether as a developer or through industry experience, can now be applied to manage a new type of workload. The nuances of model workloads, such as the challenge of accessing GPU compute, can be tackled within that larger framework.

How would you then explain in brief what OpenShift AI is, and how it helps our customers?

OpenShift AI allows organizations to take their data, use it to train models, serve those models into production and scale their usage alongside the applications they’re deploying. So really, in short, we’re enabling our customers to create, build and deploy AI models.

I want to give two examples of use cases we’re seeing in the industry. Firstly, there’s a machine learning use case where, for example, a financial institution wants to assess whether fraudulent charges are taking place. They can train their model on what fraudulent charges typically look like and use machine learning models to predict future fraudulent charges far more accurately than a static program. Many customers are building models that help them solve problems more effectively in this way.
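To make the fraud example concrete, here is a deliberately minimal sketch (not OpenShift AI code, and the transaction data is entirely hypothetical): a nearest-centroid classifier that learns what typical legitimate and fraudulent charges look like from labeled history, instead of relying on a hand-written static rule.

```python
# Toy illustration of learning fraud patterns from labeled examples.
# Each hypothetical transaction is [amount_usd, distance_from_home_km].

def centroid(rows):
    """Average each feature across a set of transactions."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical labeled history: small local purchases vs. large remote ones.
legit = [[12.0, 1.0], [30.0, 2.0], [8.0, 0.5]]
fraud = [[900.0, 500.0], [1200.0, 800.0], [700.0, 650.0]]

legit_center, fraud_center = centroid(legit), centroid(fraud)

def predict(tx):
    """Label a transaction by whichever learned centroid it sits closer to."""
    if distance(tx, fraud_center) < distance(tx, legit_center):
        return "fraud"
    return "legit"

print(predict([25.0, 3.0]))      # a small local charge -> "legit"
print(predict([1000.0, 700.0]))  # a large distant charge -> "fraud"
```

A production model would use far richer features and a real training pipeline, but the shape of the idea is the same: the decision boundary comes from historical data rather than a fixed program.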

The second use case, and where a lot of the industry buzz is happening right now, is around foundation models and large language models (LLMs). Many of our customers are exploring how they can utilize these kinds of models to better their businesses. They have knowledge about their business that nobody else has, and might want to use it to enhance their customer support function, for example. But they aren’t sure how to take that leap towards training a model with their company's knowledge, and using it to answer questions about their products or services. And so that's where you could also use OpenShift AI to help with the myriad of ways you can retrain or supplement the foundation model with your data to meet the needs of your particular business.
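One common way to supplement a foundation model with company knowledge, rather than retraining it, is retrieval augmentation: relevant snippets from private documents are found first and prepended to the user's question. This conceptual sketch (not an OpenShift AI API; the documents and the keyword-matching retriever are illustrative stand-ins for a real vector search) shows the prompt-assembly step:

```python
# Conceptual retrieval-augmented prompting sketch.
# Hypothetical private support documents a company might own.
docs = {
    "returns": "Products can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question, corpus):
    """Naive keyword retrieval: keep snippets sharing a word with the question.
    A real system would use embeddings and a vector database instead."""
    words = set(question.lower().split())
    return [text for text in corpus.values()
            if words & set(text.lower().split())]

def build_prompt(question, corpus):
    """Prepend the retrieved company knowledge to the user's question."""
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long does shipping take?", docs)
# `prompt` would then be sent to the foundation model of your choice,
# which answers grounded in the retrieved context.
```

The key design point is that the foundation model itself stays unchanged; the business-specific knowledge travels in the prompt, which is much cheaper than retraining.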

Why should someone work on AI at Red Hat as opposed to elsewhere?

Easy! Red Hat has always believed in open, and we think the future of AI also needs to be open.

We are actively working on creating open source tooling so our customers can enjoy the same security and support when they work with AI models as they do on, say, Red Hat Enterprise Linux (RHEL).

We know what a thriving, open community truly should look like and what you need to do in order to nurture such a community. So I think Red Hat is uniquely positioned to be successful in this space.

Paint us a picture of what a typical day on the OpenShift AI team looks like. What are some of the features the team is currently working on?

We release OpenShift AI every couple of weeks, so our teams are constantly innovating on new features that get released really quickly - and we get feedback on them from our customers pretty quickly too. A typical day really involves building the latest features that we're focusing on in typical agile sprint cycles. Often, we're collaborating with IBM and IBM Research on the ideation and development of those features.

The world is changing so quickly with AI that we're constantly building these big segments of functionality to add into the product, and evolving the core components of the OpenShift AI experience as well.

For example, when our customers are working with models, we want to make it a very connected experience. So we're building a model registry that connects the different parts of the experience and carries a model through that flow - from getting a model ready for usage to serving it out to production clusters to be used with applications.

We’re also developing what’s known as a "feature store", working upstream with a community called Feast. This will allow businesses to create data sets, tune those data sets for their business and store them for future use, saving them a lot of time.
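To illustrate the idea behind a feature store (this is a conceptual sketch in plain Python, not the Feast API): teams register a curated, named feature set once, and then both training jobs and online serving retrieve exactly the same data by name, instead of each team rebuilding it from scratch.

```python
# Minimal conceptual model of what a feature store provides.
class FeatureStore:
    def __init__(self):
        self._features = {}

    def register(self, name, rows):
        """Store a curated feature set under a stable, shareable name."""
        self._features[name] = rows

    def get(self, name):
        """Retrieve the same features for model training or online serving."""
        return self._features[name]

store = FeatureStore()

# A data team registers a hypothetical feature set once...
store.register("customer_spend_30d", {"alice": 120.0, "bob": 45.5})

# ...and a training pipeline and a serving endpoint both read it by name,
# guaranteeing training and inference see identical feature definitions.
training_rows = store.get("customer_spend_30d")
online_value = store.get("customer_spend_30d")["bob"]
```

A real feature store such as Feast adds versioning, point-in-time correctness and separate offline/online storage backends, but the time saving comes from this same reuse-by-name pattern.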

Are there any particular skills, specializations or certifications that are particularly valuable for candidates interested in working in the OpenShift AI team?

Right now, we're looking for both engineers and quality assurance engineers with experience building out MLOps workflows. We're keen on having these two groups collaborate closely as a single team. It's crucial for us to find individuals who are not only excited about building our products but are also passionate about ensuring the quality of what we create is exceptional.

You would need to be energized by working with emerging technologies in a fast moving industry, and excited about building the software that "makes AI happen". We don’t have anyone telling us "this is how it's supposed to be done" - we’re figuring it out ourselves. We would especially love to talk to people that have taken the initiative and applied AI technologies into their current company, or even into a project of their own.

Because OpenShift AI sits on top of the OpenShift platform, OpenShift certifications and training are also invaluable for anyone on our team.

What's the culture and working environment like in the team?

This team is pretty fast paced and high pressure - but at the same time, the work that we’re doing has a high scope for impact and is highly visible within Red Hat. We also have a lot of very smart, energetic engineers and a collaborative atmosphere that allows for some fun in between the hard work. We have a lot of open channels and sharing. We try to do "innovation days" when we can, where we pull out of our normal work and just have fun, collaborate as a team, try to use our product and build things. While we've grown significantly, we're still in a place where any one individual can come in and really make a big impact on this team.

Ready to make your mark on the OpenShift AI team? Explore our open roles here.


About the authors

Catherine Weeks is a Director of Engineering in the OpenShift AI team at Red Hat. With a background in user experience design, she skillfully combines technical expertise with a user-centric approach. Catherine is dedicated to delivering cutting-edge software in the volatile AI/ML space, striking a balance between innovation and enterprise-grade delivery. Catherine has over 15 years of experience in various roles in the software industry, and a bachelor’s degree in computer science from Northeastern University.


Holly is a Program Manager on Red Hat's Talent Attraction & Experience (TA&E) team, where she is responsible for building and promoting the company's talent brand across the Europe, Middle East and Africa (EMEA) region. With past experience in employer branding and digital marketing spanning several industries, including professional services, hospitality and now tech, Holly develops and executes creative campaigns that showcase Red Hat as an employer of choice. Holly and the TA&E team are also passionate about amplifying the voices of Red Hat’s talented associates, helping to highlight the unique culture and opportunities that Red Hat offers.

Outside of work, she is currently focused on expanding her coding skills (when she’s not gaming, running, thrift shopping or watching cat videos, that is). Holly is based in Cape Town, South Africa.

