RHADS End-to-End: Production-Ready Development Lifecycle

What is the 'end product'?

So far in this course we have looked at the individual components of the composite solution. This module will explain how they all work together and, more importantly, why this is a crucial product set for end customers.

The Modern Developer’s End Product

In modern DevSecOps, the developer’s end product is clearly defined: code committed to a Git repository. This standardization comes with a critical separation of responsibilities:

  • Inner Loop (Developers): Code generation, functional testing, and delivery via Git

  • Outer Loop (Platform Engineers): Builds, security scanning, non-functional testing, and deployment

This separation makes it critical to apply and assess security much earlier in the software lifecycle, rather than as a final gate.

The Shift-Left Approach

In traditional Waterfall development, security was checked last—if at all. When security checks failed late in the cycle, entire development processes had to restart from the beginning.

The shift-left approach brings security much earlier into the software development cycle. Red Hat Developer Hub and Trusted Software Supply Chain provide secure tooling to detect potential security issues as developers type their code, with TSSC recording these issues as part of the standard build approach.

How RHADS Components Work Together

During Development:

Red Hat Developer Hub provides the configurable Internal Developer Portal (IDP), including direct linkage to in-browser editors via OpenShift Dev Spaces. Red Hat Advanced Developer Suite provides extensions for these editors to highlight potential CVE exploit points, so developers can see issues as they develop.

During Build:

Trusted Software Supply Chain provides pre-built Tekton pipelines that transform source code into container images while applying security protocols. This "security-gate" approach means that if any check fails, the build fails. TSSC also generates SBOMs (Software Bills of Materials), which record exactly which components were used to create the artifacts.
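To make the security-gate idea concrete, here is a minimal sketch of a pipeline in that style. The task names (image-scan, sbom-generate, and so on) are illustrative placeholders rather than the exact TSSC task names, and workspaces and image references are omitted for brevity; the real pipelines ship pre-built with RHADS:

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: secure-build-example
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: clone
      taskRef:
        name: git-clone          # fetch the source from the Git repository
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      runAfter: [clone]
      taskRef:
        name: buildah            # build the container image from the source
    - name: scan-image
      runAfter: [build-image]
      taskRef:
        name: image-scan         # security gate: a failed scan fails the whole run
    - name: generate-sbom
      runAfter: [scan-image]
      taskRef:
        name: sbom-generate      # record exactly which components went into the image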

During Deployment:

Templates handle application scaffolding: generating code repositories, webhooks (so any update to the repository triggers the TSSC pipeline), and all deployable components for the application on OpenShift using ArgoCD.

Rather than interacting with the cluster directly, the template applies all components via GitOps. This allows templates to scaffold not only the developer environment (Git repositories and in-browser editors) but also the TSSC pipelines and the final deployment components.
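As a rough sketch of what that GitOps hand-off looks like (the repository URL, paths, and namespaces below are placeholders, not the exact values RHADS generates), each deployable component ends up expressed as an ArgoCD Application pointing at the scaffolded GitOps repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-quarkus-tkn-development
  namespace: tssc-gitops                        # namespace watched by ArgoCD; placeholder
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/development/my-quarkus-tkn-gitops.git   # scaffolded GitOps repository; placeholder URL
    targetRevision: main
    path: components/app/overlays/development   # per-environment overlay; placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: tssc-app-development             # target namespace for this environment; placeholder
  syncPolicy:
    automated:
      prune: true
      selfHeal: true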

Complete Automation

The entire process, from the moment a developer submits code (the 'end product') through the build and security checks to final deployment, is fully automated.

Why do we need RHDH and TSSC?

Trusted Software Supply Chain provides pre-defined Tekton pipelines for security checks and build components to generate container images with SBOMs. However, these pipelines need somewhere to run (OpenShift) and must be configured correctly.

Red Hat Trusted Profile Analyzer (TPA) generates and maintains SBOMs, while Red Hat Trusted Artifact Signer (TAS) provides cryptographic signing and verification. Together, TSSC, TAS, and TPA provide the complete secure process—but they need to be parameterized appropriately.

Red Hat Developer Hub solves this with templates that walk users through a sequential set of steps to scaffold environments. RHADS provides an opinionated approach using these templates, making it significantly easier for Platform Engineers to set up the process and reducing cognitive load for Developers. Developers are largely insulated from the complexity, focusing on writing code rather than managing infrastructure.

Positioning the technical components

From a technical sales perspective, Red Hat Advanced Developer Suite is an easy pitch because it’s positioned to deliver clear value to two key personas:

  • Developers: Red Hat Developer Hub is positioned to provide a simplified way to write software without getting bogged down in build and delivery processes—delivering a smooth, self-service experience.

  • Operations Teams: Red Hat Advanced Developer Suite is positioned to deliver secured software rigorously tested for exploits before reaching final testing phases, directly addressing the shift-left security imperative.

Put simply, TSSC, TAS, and TPA are positioned to provide automated build and security scanning, SBOM generation and maintenance, and cryptographic signing—the stamp of security approval.

Most customers already perform these functions, but the diversity of solutions makes maintaining build pipelines a headache. Red Hat Advanced Developer Suite is positioned to solve this complexity through clear interaction points: developers work with extensible templates, while Platform Engineers maintain those templates and pre-generated Tekton pipelines. For most organizations, the out-of-the-box functionality provides a solid, secure development lifecycle.

Understanding the process

Scaffolding with templates and the TSSC pipelines

The key component is the Red Hat Developer Hub template—a configurable object that defines actions to be executed sequentially. If all actions complete successfully, the template succeeds; if any fail, it stops.

Templates leverage Backstage plugins for extensibility. Plugins provide visual components for the portal, API endpoints, and the aforementioned template actions. These actions are what make templates powerful.

Let’s look at an example template to show how they are used; this is the template we will execute as part of the hands-on lab:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: quarkus-stssc-template
  title: Securing a Quarkus Service Software Supply Chain (Tekton)
  description: Create a Quarkus Service built with Red Hat Trusted Application Pipeline on Tekton
  tags:
    - recommended
    - java
    - quarkus
    - maven

The template starts with metadata for display and selection. Tags are rendered on the selection "tile" (shown when choosing a template) in the portal. Like all objects, the definition has metadata (shown above) and a specification.

spec:
  owner: tssc
  type: service

The specification defines the object’s owner and type. Next, we have the parameters section:

  parameters:
    - title: Provide Information for Application
      required:
        - name
        - javaPackageName
      properties:
        name:
          title: Name
          type: string
          description: Unique name of the component
          default: my-quarkus-tkn
          ui:field: EntityNamePicker
          maxLength: 23
        groupId:
          title: Group Id
          type: string
          default: redhat.rhdh
          description: Maven Group Id
        artifactId:
          title: Artifact Id
          type: string
          default: my-quarkus-tkn
          description: Maven Artifact Id
        javaPackageName:
          title: Java Package Name
          default: org.redhat.rhdh
          type: string
          description: Name for the java package. eg (com.redhat.blah)
        description:
          title: Description
          type: string
          description: Help others understand what this website is for.
          default: A cool quarkus app
    - title: Provide Image Registry Information
      required:
        - imageHost
        - imageOrganization
      properties:
        imageHost:
          title: Image Registry
          type: string
          default: Quay
          enum:
            - Quay
        imageOrganization:
          title: Organization
          type: string
          description: Name of the Quay Organization
          default: tssc

When instantiating a template, Red Hat Developer Hub parses parameters and renders them as "wizard" pages — each title: group (like "Provide Information for Application" in the above snippet) becomes a separate form page. Parameters can be optional or mandatory with default values. These parameters are passed into action calls in each step:

  steps:
    - id: fetch-provision-data
      name: Fetch Provision Data
      action: catalog:fetch
      input:
        entityRef: component:default/provisioning-data

    - id: template
      name: Fetch Skeleton + Template
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          namespace: tssc-app
          description: ${{ parameters.description }}
          groupId: ${{ parameters.groupId }}
          artifactId: ${{ parameters.artifactId }}
          javaPackageName: ${{ parameters.javaPackageName }}
          owner: user:default/${{ user.entity.metadata.name }}
          cluster: ${{ steps["fetch-provision-data"].output.entity.metadata.labels["ocp-apps-domain"] }}
          gitlabHost: gitlab-gitlab.${{ steps["fetch-provision-data"].output.entity.metadata.labels["ocp-apps-domain"] }}
          quayHost: quay-${{ steps["fetch-provision-data"].output.entity.metadata.labels["guid"] }}.${{ steps["fetch-provision-data"].output.entity.metadata.labels["ocp-apps-domain"] }}
          destination: ${{ parameters.repoOwner }}/${{ parameters.name }}
          quayDestination: ${{ parameters.imageOrganization}}/${{ parameters.name }}
          port: 8080
          verifyCommits: ${{ parameters.repoVerifyCommits }}

Every step has an action: field referencing either a built-in action (fetch:template, catalog:fetch) or one provided by a plugin. For example, publish:gitlab pushes files to GitLab via the GitLab plugin.
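As a hedged illustration (the host and group below are placeholders, and the exact inputs accepted can vary with the plugin version), a publish step typically looks like this:

    - id: publish-gitlab-source
      name: Publish Source Repository
      action: publish:gitlab
      input:
        repoUrl: gitlab.example.com?owner=development&repo=${{ parameters.name }}   # placeholder host and group
        defaultBranch: main
        repoVisibility: public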

Templates define outputs that are rendered on the portal, displaying links to created entities or custom text once the template has been processed (or "instantiated"):

  output:
    links:
      - title: Source Repository
        url: ${{ steps['publish-gitlab-source'].output.remoteUrl }}
      - title: GitOps Repository
        url: ${{ steps['publish-gitlab-gitops'].output.remoteUrl }}
      - title: Open Component in catalog
        icon: catalog
        entityRef: ${{ steps['register-source'].output.entityRef }}
      - title: Open GitOps Resource in catalog
        icon: catalog
        entityRef: ${{ steps['register-gitops'].output.entityRef }}

Each step can produce output, and the scaffolder in turn exposes variables from step outputs for use in subsequent steps. Everything in double curly braces is an expression that gets evaluated; for example, ${{ steps['publish-gitlab-source'].output.remoteUrl }} becomes the actual repository URL. These expressions are written in Nunjucks, the expression language used by Red Hat Developer Hub.

Behind the scenes, templates work with a temporary directory. Actions like catalog:fetch and fetch:template copy files there, then publishing actions push them to repositories (in this example, to GitLab).

The fetch:template action retrieves all needed files, including YAML definitions for TSSC pipelines and deployed applications. The template acts as a scaffolder with no direct knowledge of ArgoCD, Tekton, or OpenShift objects. Later, argocd:create-resources actions instantiate components from the scaffolded repository, parameterized for unique pipeline and ArgoCD application creation.
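As a sketch only (the input field names depend on the ArgoCD scaffolder plugin version, so treat them as illustrative), such a step might look like this:

    - id: create-argocd-resources
      name: Create ArgoCD Resources
      action: argocd:create-resources
      input:
        appName: ${{ parameters.name }}-app      # illustrative application name
        argoInstance: main                       # ArgoCD instance registered in Developer Hub; placeholder
        namespace: tssc-gitops                   # namespace in which the Application is created; placeholder
        repoUrl: ${{ steps['publish-gitlab-gitops'].output.remoteUrl }}
        path: argocd/                            # path within the scaffolded GitOps repository; placeholder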

Simply put: templates marshal, scaffold, and deploy files while plugin actions handle the actual pipeline creation and execution.

The Tekton pipelines are created by ArgoCD Applications and are the secure, pre-built pipelines provided by RHADS. Customers can extend them with additional security checks as needed. The hands-on lab shows where and how to customize them, though the default TSSC pipelines meet most customers' security needs.

Tying the loop: hooking code updates to pipeline

Templates scaffold applications and use ArgoCD to set up release gates (dev, pre-prod, production) that are configurable through ArgoCD application definitions and overlays. Combined with real-time code updates via Dev Spaces plugins, this provides the framework for development and staging, but one vital component remains.

Tekton creates PipelineRuns (actual pipeline executions) through webhook endpoints using EventListeners. RHADS provides out-of-the-box Git repository integrations (GitHub, GitLab) that set up webhook endpoints and triggers for the scaffolded repositories. These triggers create and run new PipelineRuns whenever developers change code.
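To illustrate the wiring (all names below are placeholders, and the actual resources are generated by the template), the webhook side is a Tekton Triggers EventListener that validates the GitLab payload and creates a PipelineRun from a TriggerTemplate:

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: my-quarkus-tkn-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: gitlab-push
      interceptors:
        - ref:
            name: gitlab                    # validates and filters incoming GitLab webhooks (secret validation omitted for brevity)
          params:
            - name: eventTypes
              value: ["Push Hook"]          # react only to push events
      bindings:
        - ref: gitlab-push-binding          # maps payload fields (commit SHA, repository URL) to parameters; placeholder name
      template:
        ref: build-pipeline-template        # TriggerTemplate that creates the PipelineRun; placeholder name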

When the template is instantiated, it therefore creates the code repositories, the OpenShift environments, and the repository triggers that run the build pipeline on every commit. From that point on, each time a developer commits code the pipeline fires and repeats the entire secure build process.

This guarantees automatic rebuilds with security checks, SBOMs, and signing via TPA and TAS—making life easier for both developers and operations teams.

Now that you have a good understanding of how the components work together, let’s move on to the hands-on lab and see it in action.