RHADS for AI Engineers

Quick Logout Links

Before starting this module, click the following two links and log out of any previous sessions to avoid errors caused by performing actions as the wrong user.

Red Hat AI

Red Hat offers products to build, deploy, and consume models and AI applications on Kubernetes and OpenShift. Red Hat AI allows organizations to standardize AI across their teams with best practices and industry standards, from ideation and prototyping through development to production.


Learn more about Red Hat AI

How RHADS enables AI tools for AI Engineers

Red Hat Developer Hub integrations with other tools

Red Hat Developer Hub integrates with over 100 plugins, enabling capabilities such as launching Red Hat OpenShift Dev Spaces directly from a component’s overview. Next, we’ll explore key integrations for building AI applications and models, including security best practices for building and deploying AI applications.

Red Hat Developer Hub integration with OpenShift AI

Red Hat Developer Hub can integrate with OpenShift AI through Software Templates. These can be created to support the model development lifecycle and ensure best practices, such as security and scalability, are in place from project inception. Templates can be used to provision notebooks, create serving runtimes and inference servers, configure pipelines, and more. Additionally, AI Engineers can access many OpenShift AI features from the component’s overview.
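For instance, once a template has provisioned a model server, an application can call the model over its REST endpoint. The following is a minimal sketch, assuming an OpenAI-compatible endpoint (such as one exposed by a vLLM serving runtime on OpenShift AI); the URL, model name, and token are placeholders.

```python
# Minimal sketch: query a model served on OpenShift AI.
# Assumes an OpenAI-compatible endpoint (e.g., a vLLM ServingRuntime);
# the URL, model name, and token below are placeholders.
import requests

ENDPOINT = "https://my-model.apps.example.com/v1/chat/completions"  # placeholder
TOKEN = "sha256~example-token"  # placeholder service account token

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "model": "granite-3-8b-instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Summarize what RHADS provides."}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```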

Other Developer Tools

Introduction to Podman Desktop

When building an AI application, it’s often helpful for developers to have their tools available locally. Thanks to the Podman AI Lab extension, developers can run different models, try recipes and playgrounds, and more to learn, experiment, and develop AI applications.
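Once a model service is running in Podman AI Lab, it exposes an OpenAI-compatible endpoint on localhost. The sketch below shows how a local application could call it; it assumes the openai Python package, and the port and model name are placeholders taken from your own AI Lab service.

```python
# Minimal sketch: call a model service started from Podman AI Lab.
# Podman AI Lab exposes an OpenAI-compatible API on localhost; the
# port and model name below are placeholders for your local service.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:41435/v1",  # placeholder port shown in the AI Lab UI
    api_key="not-needed-locally",          # local services typically ignore the key
)

reply = client.chat.completions.create(
    model="instructlab/granite-7b-lab-GGUF",  # placeholder model name
    messages=[{"role": "user", "content": "Hello from my local dev loop!"}],
)
print(reply.choices[0].message.content)
```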


Podman AI Lab Extension

The following image highlights some of the key aspects of the Podman AI Lab extension:

Podman AI Lab

Red Hat OpenShift Dev Spaces with AI assistants

Red Hat OpenShift Dev Spaces leverages extensions to connect Software and AI Engineers with diverse AI assistants. These assistants streamline software development by reducing repetitive code and accelerating troubleshooting. Furthermore, these AI assistants can integrate with MCP (Model Context Protocol) servers, enabling them to interact intelligently with external systems, APIs, and third-party tools.
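To illustrate what such an integration can look like, here is a minimal sketch of a custom MCP server written with the community Python MCP SDK; the build-info tool shown is a hypothetical example for illustration, not a Dev Spaces component.

```python
# Minimal sketch of an MCP server an AI assistant could connect to.
# Assumes the community Python SDK (pip install mcp); the tool below
# is a hypothetical example, not an official Dev Spaces component.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("build-info")

@mcp.tool()
def get_build_status(component: str) -> str:
    """Return the latest build status for a component (stubbed here)."""
    # A real server would query a CI system or API instead of a stub.
    return f"Component {component}: last build succeeded"

if __name__ == "__main__":
    # Serve over stdio so an assistant can launch and talk to it.
    mcp.run()
```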

Introduction to Llama Stack Operator

The Llama Stack Operator enables access to Llama Stack functionality on a Kubernetes cluster; in our case, that’s OpenShift. The team will be able to leverage Llama Stack to build their AI applications on OpenShift, simplifying the developer experience and standardizing AI application development.

The Llama Stack Operator is a community project responsible for creating and managing llama-stack server deployments. Learn more by viewing the Llama Stack Operator GitHub repository.
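As a quick illustration, an application could talk to a deployed llama-stack server with the llama-stack-client Python package. This is a minimal sketch; the base URL and model ID are placeholders, and the exact client API may vary by release.

```python
# Minimal sketch: chat with a llama-stack server from Python.
# Assumes the llama-stack-client package; the URL and model ID are
# placeholders for a server deployed by the Llama Stack Operator.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://llama-stack.example.com:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "What can Llama Stack do for me?"}],
)
print(response.completion_message.content)
```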

Including MCP integrations

With the Llama Stack Operator, organizations can access many MCP servers through MCP clients.
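For example, an MCP server can be registered with a llama-stack server as a tool group, after which agents can discover and call its tools. The sketch below assumes the llama-stack-client package; the URLs and IDs are placeholders.

```python
# Minimal sketch: register an MCP server with llama-stack as a tool group.
# Assumes the llama-stack-client package; URLs and IDs are placeholders.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://llama-stack.example.com:8321")

# Register the MCP server so agents can discover and call its tools.
client.toolgroups.register(
    toolgroup_id="mcp::build-info",                # placeholder tool group ID
    provider_id="model-context-protocol",          # MCP tool runtime provider
    mcp_endpoint={"uri": "http://mcp-server.example.com:8000/sse"},  # placeholder
)

# List the registered tools to confirm the MCP tools are available.
for tool in client.tools.list(toolgroup_id="mcp::build-info"):
    print(tool.identifier)
```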