
The Assisted Installer and OCI overview

You can run cluster workloads on Oracle® Cloud Infrastructure (OCI), an infrastructure platform that supports dedicated, hybrid, public, and multicloud environments. Both Red Hat and Oracle test, validate, and support running an OKD cluster on OCI.

The Assisted Installer supports the OCI platform and provides an intuitive, interactive workflow that automates cluster installation tasks on OCI.

Using the Assisted Installer to install an OKD cluster on OCI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OCI provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources.

The steps for provisioning OCI resources are provided as an example only; you can also create the required resources through other methods. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and of the installation process on OKD. You can access OCI Resource Manager configurations to complete these steps, or use the configurations to model your own custom script.

Follow the steps in the Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer document to understand how to use the Assisted Installer to install an OKD cluster on OCI. This document demonstrates how the OCI Cloud Controller Manager (CCM) and Oracle’s Container Storage Interface (CSI) objects link your OKD cluster with the OCI API.

To ensure the best performance for cluster workloads that operate on OCI, size the volume performance units (VPUs) of your block volume to match your workloads. The following list provides guidance for selecting the VPUs needed for specific performance requirements:

  • Test or proof of concept environment: 100 GB, and 20 to 30 VPUs.

  • Basic environment: 500 GB, and 60 VPUs.

  • Heavy production environment: More than 500 GB, and 100 or more VPUs.

Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units (Oracle documentation).
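As an illustration only, the following sketch shows one way to create a block volume sized for a basic environment by using the OCI CLI. The compartment OCID, availability domain, and display name are placeholders, and you should confirm the flags against your installed CLI version with oci bv volume create --help.

  # Create a 500 GB block volume with 60 VPUs per GB (basic environment).
  $ oci bv volume create \
      --compartment-id <compartment_ocid> \
      --availability-domain <availability_domain> \
      --display-name <volume_name> \
      --size-in-gbs 500 \
      --vpus-per-gb 60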

If you are unfamiliar with the OKD Assisted Installer, see "Assisted Installer for OKD".

Creating OCI resources and services

Create Oracle® Cloud Infrastructure (OCI) resources and services so that you can establish infrastructure with governance standards that meet your organization’s requirements.

Prerequisites
Procedure
  1. Log in to your Oracle Cloud Infrastructure (OCI) account with administrator privileges.

  2. Download an archive file from an Oracle resource. The archive file includes files for creating cluster resources and custom manifests. The archive file also includes a script that, when run, creates OCI resources such as DNS records and an instance. For more information, see Configuration Files (Oracle documentation).
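The archive location and script name come from the linked Oracle documentation and are not reproduced here. As a minimal sketch only, with a hypothetical file name and a placeholder URL, you might download and inspect the archive before running anything it contains:

  # Download the archive from the URL given in the Oracle documentation.
  $ curl -L -o oci-okd-config.zip <archive_url_from_oracle_documentation>

  # List the contents, then extract the configuration files and script.
  $ unzip -l oci-okd-config.zip
  $ unzip oci-okd-config.zip -d oci-okd-config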

Using the Assisted Installer to generate an OCI-compatible discovery ISO image

Generate a discovery ISO image and upload the image to Oracle® Cloud Infrastructure (OCI), so that the agent can perform hardware and network validation checks before you install an OKD cluster on OCI.

From the OCI web console, you must create the following resources:

  • A compartment for organizing OCI resources, restricting access to them, and setting usage limits.

  • An object storage bucket for securely storing the discovery ISO image. You access the image at a later stage to boot the instances, so that you can then create your cluster.

Prerequisites
Procedure
  1. From the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, generate the discovery ISO image by completing all the required Assisted Installer steps.

    1. In the Cluster Details step, complete the following fields:

      • Cluster name: Specify the name of your cluster, such as ocidemo.

      • Base domain: Specify the base domain of the cluster, such as splat-oci.devcluster.openshift.com. Provided you previously created a compartment on OCI, you can find this information by going to DNS management → Zones → List scope and then selecting the parent compartment. Your base domain is listed under the Public zones tab.

      • OpenShift version: Specify OpenShift 4.15 or a later version.

      • CPU architecture: Specify x86_64 or Arm64.

      • Integrate with external partner platforms: Specify Oracle Cloud Infrastructure. After you specify this value, the Include custom manifests checkbox is selected by default.

    2. On the Operators page, click Next.

    3. On the Host Discovery page, click Add hosts.

    4. For the SSH public key field, add your SSH key from your local system.

      You can create an SSH authentication key pair by using the ssh-keygen tool, as shown in the sketch that follows this procedure.

    5. Click Generate Discovery ISO to generate the discovery ISO image file.

    6. Download the file to your local system.

  2. Upload the discovery ISO image to the OCI bucket. See Uploading an Object Storage Object to a Bucket (Oracle documentation).

    1. You must create a pre-authenticated request for your uploaded discovery ISO image. Make a note of the URL from the pre-authenticated request, because you must specify the URL at a later stage when you create an OCI stack. For a command-line alternative, see the sketch that follows this procedure.
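If you prefer the command line to the OCI web console, the following sketch covers the same steps with the ssh-keygen tool and the OCI CLI. The bucket name, file names, and expiry date are hypothetical placeholders, and the pre-authenticated request flags can differ between CLI versions, so confirm them with oci os preauth-request create --help before you run the commands.

  # Generate an SSH key pair; paste the public key into the SSH public key field.
  $ ssh-keygen -t ed25519 -f ~/.ssh/oci_okd -C "okd-on-oci"

  # Upload the downloaded discovery ISO image to your object storage bucket.
  $ oci os object put \
      --bucket-name <bucket_name> \
      --file ./<discovery_iso_file> \
      --name <discovery_iso_file>

  # Create a pre-authenticated request (PAR) for the uploaded image. The command
  # returns an access URI; combined with your region's Object Storage endpoint,
  # it forms the URL that you specify when you create the OCI stack.
  $ oci os preauth-request create \
      --bucket-name <bucket_name> \
      --name okd-discovery-iso-par \
      --access-type ObjectRead \
      --object-name <discovery_iso_file> \
      --time-expires 2026-01-01T00:00:00Z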

Provisioning OCI infrastructure for your cluster

After you use the Assisted Installer to generate the details for your OKD cluster, you specify these details in a stack. A stack is an OCI feature that automates the provisioning of all necessary OCI infrastructure resources, such as the custom image, that are required for installing an OKD cluster on OCI.

The Oracle® Cloud Infrastructure (OCI) Compute Service creates a virtual machine (VM) instance on OCI. This instance can then automatically attach to a virtual network interface controller (vNIC) in the virtual cloud network (VCN) subnet. After you specify the IP address of your OKD cluster in the custom manifest template files, the OCI instance can communicate with your cluster over the VCN.
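As an illustration only, and not a step from the Oracle documentation, you can use the OCI CLI to look up the addresses that OCI assigns to an instance's vNIC; the instance OCID is a placeholder:

  # List the vNICs attached to a compute instance. The output includes the
  # private and public IP addresses assigned in the VCN subnet.
  $ oci compute instance list-vnics --instance-id <instance_ocid>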

Prerequisites
  • You uploaded the discovery ISO image to the OCI bucket. For more information, see "Using the Assisted Installer to generate an OCI-compatible discovery ISO image".

Procedure
  1. Complete the steps for provisioning OCI infrastructure for your OKD cluster. See Creating OpenShift Container Platform Infrastructure Using Resource Manager (Oracle documentation).

  2. Create a stack, and then edit the custom manifest files according to the steps in Editing the OpenShift Custom Manifests (Oracle documentation).

Completing the remaining Assisted Installer steps

After you provision Oracle® Cloud Infrastructure (OCI) resources and upload OKD custom manifest configuration files to OCI, you must complete the remaining cluster installation steps in the Assisted Installer before you can create an instance on OCI.

Prerequisites
  • You created a resource stack on OCI that includes the custom manifest configuration files and OCI Resource Manager configuration resources. See "Provisioning OCI infrastructure for your cluster".

Procedure
  1. From the Red Hat Hybrid Cloud Console web console, go to the Host discovery page.

  2. Under the Role column, select either Control plane node or Worker for each targeted hostname.

    Before you continue to the next step, wait for each node to reach the Ready status.

  3. Accept the default settings for the Storage and Networking steps, and then click Next.

  4. On the Custom manifests page, in the Folder field, select manifest. This is the Assisted Installer folder where you want to save the custom manifest file.

    1. In the File name field, enter a value such as oci-ccm.yml.

    2. From the Content section, click Browse, and select the CCM manifest from your local drive, located at custom_manifest/manifests/oci-ccm.yml.

  5. Expand the next Custom manifest section and repeat the same steps for the following manifests:

    • CSI driver manifest: custom_manifest/manifests/oci-csi.yml

    • CCM machine configuration: custom_manifest/openshift/machineconfig-ccm.yml

    • CSI driver machine configuration: custom_manifest/openshift/machineconfig-csi.yml

  6. From the Review and create page, click Install cluster to create your OKD cluster on OCI.

After the cluster installation and initialization operations complete, the Assisted Installer indicates that the cluster installation is complete. For more information, see the "Completing the installation" section in the Assisted Installer for OKD document.

Verifying a successful cluster installation on OCI

Verify that your cluster was installed and is running effectively on Oracle® Cloud Infrastructure (OCI).

Procedure
  1. From the Hybrid Cloud Console, go to Clusters > Assisted Clusters and select your cluster’s name.

  2. Check that the Installation progress bar is at 100% and that a message stating “Installation completed successfully” is displayed.

  3. To access the OKD web console, click the provided Web Console URL.

  4. Go to the Nodes menu page.

  5. Locate your node from the Nodes table.

  6. From the Overview tab, check that your node has a Ready status.

  7. Select the YAML tab.

  8. Check the labels parameter, and verify that the listed labels apply to your configuration. For example, the topology.kubernetes.io/region=us-sanjose-1 label indicates the OCI region in which the node was deployed.
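If you prefer the CLI to the web console, you can run equivalent checks with the oc client after you log in with the credentials that the Assisted Installer provides; the node name is a placeholder:

  # Confirm that all nodes are in the Ready state.
  $ oc get nodes

  # Inspect the labels on a node, including topology.kubernetes.io/region.
  $ oc get node <node_name> --show-labels

  # Verify that all cluster Operators report Available.
  $ oc get clusteroperators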

Troubleshooting the installation of a cluster on OCI

If you experience issues with using the Assisted Installer to install an OKD cluster on Oracle® Cloud Infrastructure (OCI), read the following sections to troubleshoot common problems.

The Ingress Load Balancer in OCI is not at a healthy status

This issue is classed as a Warning because, when you used the Resource Manager to create a stack, you created a pool of compute nodes, three by default, that are automatically added as backend listeners for the Ingress Load Balancer. By default, OKD deploys two router pods, based on the default values from the OKD manifest files. The Warning is expected because a mismatch exists: only two router pods are available to run on the three compute nodes.

Figure 1. Example of a Warning message under the Backend set information tab on OCI

You do not need to modify the Ingress Load Balancer configuration. Instead, you can point the Ingress Load Balancer to specific compute nodes that operate in your cluster on OKD. To do this, use placement mechanisms, such as annotations, on OKD to ensure that router pods run only on the compute nodes that you originally configured on the Ingress Load Balancer as backend listeners. One possible approach is shown in the sketch that follows.
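As one possible approach, which is not prescribed by the Oracle documentation, you could constrain the default router pods by using the IngressController node placement API. The label used here is a hypothetical example; replace it with a label that identifies the compute nodes that back the load balancer:

  # Label the compute nodes that are registered as backend listeners.
  $ oc label node <node_name> ingress-target=oci-lb-backend

  # Restrict the default IngressController so that router pods are scheduled
  # only on nodes that carry that label.
  $ oc patch ingresscontroller/default \
      -n openshift-ingress-operator \
      --type merge \
      -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"ingress-target":"oci-lb-backend"}}}}}'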

OCI create stack operation fails with an Error: 400-InvalidParameter message

When you attempt to create a stack on OCI, the Logs section of the job outputs an error message. For example:

Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements
Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements
Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn

Go to the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, and check the Cluster name field on the Cluster Details step. Remove any special characters, such as a hyphen (-), from the name, because these special characters are not compatible with the OCI naming conventions. For example, change oci-demo to ocidemo.