This repository contains Kubernetes/OpenShift deployment configurations for the OSAC platform, providing a fulfillment service framework for clusters and virtual machines.
Note: Throughout this guide, `<project-name>` refers to your unique OSAC installation name, which is used as both the namespace and resource prefix. Replace it with your chosen project name (e.g., `user1`, `team-a`, etc.).
OSAC (OpenShift Self-service Advanced Clusters) provides a streamlined, self-service framework for provisioning and managing OpenShift clusters and virtual machines. This installer repository contains the Kubernetes/OpenShift deployment configurations needed to deploy OSAC components on your infrastructure.
For detailed architecture, workflows, and design documentation, please refer to the OSAC documentation repository.
The OSAC platform provides:
- Self-service provisioning for clusters and virtual machines through a governed API
- Template-based automation using Red Hat Ansible Automation Platform
- Multi-hub support allowing multiple infrastructure hubs to be managed by a single fulfillment service
- API access via both gRPC and REST interfaces for integration with custom tools
This installer uses Kustomize to manage deployments, making it easy to customize for different environments.
The OSAC platform relies on three core components to deliver governed self-service:
- Fulfillment Service: The API and frontend entry point used to manage user requests and map them to specific templates.
- OSAC Operator: An OpenShift operator residing on the Hub cluster (ACM/OCP-Virt). It orchestrates the lifecycle of clusters and VMs by coordinating between the Fulfillment Service and the automation backend.
- Automation Backend (AAP): Leverages the Red Hat Ansible Automation Platform to store and execute the custom template logic required for provisioning.
System Requirements

This solution requires the following platforms to be installed and operational:
- Red Hat OpenShift Advanced Cluster Management (RHACM)
- Red Hat OpenShift Virtualization (OCP-Virt) - Optional: Only required for VM as a Service (VMaaS) support
- Red Hat Ansible Automation Platform (AAP)
- ESI (Elastic System Infrastructure) - Required for bare metal provisioning
Configuration Manifests
The /prerequisites directory contains additional manifests required to configure the
target Hub cluster.
⚠️ Important: Cluster-Wide Impact. If you are using a shared cluster or are not the primary administrator, do not apply these manifests without consultation. These files modify cluster-wide settings. Please coordinate with the appropriate cluster administrators before proceeding.
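After coordinating with your cluster administrators, you can review and apply the prerequisite manifests. The commands below are a minimal sketch; they assume the directory contains plain manifests that can be applied with -f (if it ships its own kustomization.yaml, use -k instead):

# Review the prerequisite manifests before touching a shared cluster
$ ls prerequisites/
# Server-side dry run: shows what would change without applying anything
$ oc apply -f prerequisites/ --dry-run=server
# Apply only once the cluster-wide impact has been approved
$ oc apply -f prerequisites/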
| Category | Requirement | Notes / Details |
|---|---|---|
| Platform | Red Hat OpenShift Container Platform (OCP) 4.17 or later | Must have cluster-admin access to the hub cluster. |
| Operators | Red Hat Advanced Cluster Management (RHACM) 2.18+<br>Red Hat OpenShift Virtualization (OCP-Virt) 4.17+<br>Red Hat Ansible Automation Platform (AAP) 2.5+ | These must be installed and running prior to OSAC installation. |
| CLI Tools | `oc` (OpenShift CLI) v4.17+<br>`kubectl` (optional)<br>`kustomize` v5.x<br>`git` | Ensure all CLIs are available in your PATH. |
| Container Registry Access | registry.redhat.io and quay.io | Verify credentials and pull secrets are valid in the target cluster namespace. |
| Network / DNS | Ingress route configured for OSAC services | Required for external access to the fulfillment API and AAP UI. |
| Authentication / IDM | Organization identity provider (e.g., Keycloak, LDAP, RH-SSO) | Used for tenant and user identity mapping. |
| Storage | Dynamic storage class available (e.g., `ocs-storagecluster-cephfs`, `lvms-storage`) | Required for persistence of operator and AAP components. |
| Permissions | Cluster-admin access to deploy operators and create CRDs | Limited-access users can only deploy into namespaces configured by the admin. |
| License Files | `license.zip` (AAP subscription) | Must be placed under `overlays/<your-overlay>/files/license.zip`. |
| Internet Access | Outbound access to GitHub (for fetching submodules, releases) | Required during installation and updates. |
OSAC uses Kustomize for installation. This approach allows you to easily override and customize your deployment to meet specific needs. Multiple OSAC installations can be deployed on the same cluster, each in its own project namespace.
To manage dependencies, the OSAC-installer repository uses Git submodules to import the required manifests from each component. This ensures component versions are pinned and compatible with the installer.
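A typical checkout therefore initializes the submodules as well. The commands below are a sketch using standard git options; the repository URL is a placeholder for wherever you obtained this installer:

# Clone the installer together with its pinned component manifests
$ git clone --recurse-submodules <installer-repo-url>
# Or, if you already cloned without submodules
$ git submodule update --init --recursive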
Although the development overlay will work out of the box, we recommend customizing your overlay by creating a new project-specific configuration. This is especially important when deploying on shared clusters to avoid resource name collisions.
Use Kustomize to manage your environment-specific configurations.
- Choose a Project Name: Select a unique name for your OSAC installation (e.g., `user1`, `team-a`, `dev-env`). This will be used as your namespace and resource prefix. In the examples below, we'll use `<project-name>` as a placeholder.

- Initialize the Overlay: Duplicate the development template with your project name:

  $ cp -r overlays/development overlays/<project-name>

- Populate Required Files: Ensure your new directory structure matches the following:

  overlays/<project-name>/
  ├── kustomization.yaml       # Edit this to configure your deployment
  ├── prefixTransformer.yaml   # Edit this to set resource name prefix
  └── files/
      └── license.zip          # REQUIRED: Your AAP license file

- Update Critical Configuration: You must update these two configuration values to match your `<project-name>` (a scripted sketch follows this list):
  - In `kustomization.yaml`: Update the `namespace` field to `<project-name>`
  - In `prefixTransformer.yaml`: Update the `prefix` field to `<project-name>-`

  These changes ensure your installation uses a unique namespace and prevent resource name conflicts with other OSAC installations.

- Apply Additional Customizations: Modify other settings in your overlay folder as needed (images, patches, etc.).
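If you prefer to script the two critical edits, the following sketch uses sed; it assumes the `namespace` and `prefix` keys sit at the top level of the copied development template files (verify before running), and uses kustomize, already listed under CLI tools, to preview the rendered manifests:

# Set the overlay namespace and resource prefix (assumes top-level keys as in the development template)
$ sed -i 's/^namespace:.*/namespace: <project-name>/' overlays/<project-name>/kustomization.yaml
$ sed -i 's/^prefix:.*/prefix: <project-name>-/' overlays/<project-name>/prefixTransformer.yaml
# Preview the rendered manifests before applying anything
$ kustomize build overlays/<project-name> | less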
For more information on structuring overlays and patches, please consult the official Kustomize documentation.
Download the AAP license manifest (license.zip) from the Red Hat Customer Portal.
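For example, assuming the manifest was saved to your Downloads directory, copy it to the location the overlay expects:

# Copy the AAP subscription manifest into your overlay (source path is only an example)
$ cp ~/Downloads/license.zip overlays/<project-name>/files/license.zip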
Once you have customized your overlay, deploy the OSAC components to your cluster.
# Deploy using your project-specific overlay
$ oc apply -k overlays/<project-name>
# Monitor pod creation and startup
$ watch oc get -n <project-name> pods

Several pods restart during initialization. The OpenShift job named aap-bootstrap restarts several times before completing. This is expected behavior.

Once the aap-bootstrap job completes, OSAC is ready for use.
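If you want a scriptable signal instead of watching pods, you can wait for the bootstrap job directly; this is a sketch, and the timeout value is only a suggestion:

# Block until the aap-bootstrap job reports completion (adjust the timeout to your environment)
$ oc wait --for=condition=complete job/aap-bootstrap -n <project-name> --timeout=1800s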
Alternative: Install with Wait Option
# Wait for all deployments to be ready (blocking command)
$ oc wait --for=condition=Available deployment --all -n <project-name> --timeout=600s

To install the CLI and register a hub, follow these steps:
Download the latest release and make it executable.
# Adjust URL for the latest version as needed
$ curl -L -o fulfillment-cli \
https://github.com/innabox/fulfillment-cli/releases/latest/download/fulfillment-cli-linux-amd64
$ chmod +x fulfillment-cli
# Optional: Move to your path
$ sudo mv fulfillment-cli /usr/local/bin/

Authenticate with the fulfillment API. You will need the route address and a valid token generation script.
$ fulfillment-cli login \
--address <your-fulfillment-route-url> \
--token-script "oc create token fulfillment-controller -n <project-name> \
--duration 1h --as system:admin" \
--insecure

Tip: Retrieve your route URL using:
oc get routes -n <project-name>
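If you only need the host name for the --address flag, a jsonpath query works too; the route name fulfillment-api below is illustrative, so confirm the actual name in the oc get routes output first:

# Print just the host of the fulfillment route (route name may differ in your installation)
$ oc get route fulfillment-api -n <project-name> -o jsonpath='{.spec.host}'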
To allow the OSAC operator to communicate with the fulfillment service, you must
obtain the kubeconfig and register the hub. The script located at
scripts/create-hub-access-kubeconfig.sh demonstrates how to generate the kubeconfig
for a hub.
# Generate the kubeconfig
$ ./scripts/create-hub-access-kubeconfig.sh
# Register the Hub
$ fulfillment-cli create hub \
--kubeconfig=kubeconfig.hub-access \
--id <hub-name> \
--namespace <project-name>

Note: Refer to base/fulfillment-service/hub-access/README.md for more information.
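If hub registration fails, a quick first check is whether the generated kubeconfig authenticates at all; this sketch assumes the script wrote kubeconfig.hub-access to the current directory, as used in the registration command above:

# Verify the hub-access kubeconfig can reach the cluster and identify itself
$ oc --kubeconfig=kubeconfig.hub-access whoami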
Once configured, you can use the fulfillment CLI to manage clusters and virtual machines. For detailed usage instructions and command reference, see the fulfillment-cli documentation.
After deployment, you can access the AAP web interface to monitor jobs and manage automation:
$ oc get route -n <project-name> | grep innabox-aap

AAP routes will contain 'innabox-aap' in the name.
Note: The main AAP URL will be something like:
https://innabox-aap-<project-name>.apps.your-cluster.com
# Extract the admin password
$ oc extract secret/innabox-aap-admin-password -n <project-name> --to -

- Open the AAP controller URL in your browser
- Username: admin
- Password: (from the previous step)
From the AAP web interface, you can:
- Monitor cluster provisioning jobs and their status
- View automation execution logs and troubleshoot failures
- Manage job templates and automation workflows
- Configure additional automation tasks
- View inventory and host information
- cert-manager not ready: Ensure cert-manager operator is installed and running
- Certificate issues: Check cert-manager logs and certificate status
- ImagePullBackOff errors: Verify registry credentials in files/quay-pull-secret.json and the image references in your overlay
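For the ImagePullBackOff case, it can help to confirm which registries your local pull secret actually covers and which docker-registry secrets exist in the namespace; the jq invocation assumes the file uses the standard dockerconfigjson layout:

# List the registries covered by the local pull secret file (assumes dockerconfigjson format)
$ jq -r '.auths | keys[]' files/quay-pull-secret.json
# List docker-registry secrets present in the namespace
$ oc get secrets -n <project-name> --field-selector type=kubernetes.io/dockerconfigjson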
# Check certificate status
$ oc describe certificate -n <project-name>
# Check certificate issuer status
$ oc describe issuer -n <project-name>
# Check pod events
$ oc describe pod -n <project-name> <pod-name>
# Check service endpoints
$ oc get endpoints -n <project-name>
# Check secrets
$ oc get secrets -n <project-name>
# View component logs
$ oc logs -n <project-name> deployment/fulfillment-service -c server --tail=100
$ oc logs -n <project-name> deployment/<project-name>-controller-manager --tail=100
# Get all events in namespace
$ oc get events -n <project-name> --sort-by=.metadata.creationTimestamp

For issues and questions:
- Check the troubleshooting section above
- Review component logs for error messages
- Verify prerequisites are properly installed
- Open issues in the respective component repositories
This project is licensed under the Apache License 2.0.