OpenShift Origin can be configured to access an Azure infrastructure, including using Azure disk as persistent storage for application data. After Azure is configured properly, some additional configurations need to be completed on the OpenShift Origin hosts.


Configuring Azure for OpenShift Origin requires a role with permission to create and manage all types of Azure resources.

The Azure configuration file

Configuring OpenShift Origin for Azure requires the /etc/azure/azure.conf file on each node host.

If the file does not exist, create it, and add the following:

tenantId: <> (1)
subscriptionId: <> (2)
aadClientId: <> (3)
aadClientSecret: <> (4)
aadTenantId: <> (5)
resourceGroup: <> (6)
cloud: <> (7)
location: <> (8)
vnetName: <> (9)
securityGroupName: <> (10)
primaryAvailabilitySetName: <> (11)
1 The AAD tenant ID for the subscription that the cluster is deployed in.
2 The Azure subscription ID that the cluster is deployed in.
3 The client ID for an AAD application with RBAC access to talk to Azure RM APIs.
4 The client secret for an AAD application with RBAC access to talk to Azure RM APIs.
5 Ensure this is the same as tenant ID (optional).
6 The Azure Resource Group name that the Azure VM belongs to.
7 The specific cloud region. For example, AzurePublicCloud.
8 The compact style Azure region. For example, southeastasia (optional).
9 Virtual network containing instances and used when creating load balancers.
10 Security group name associated with instances and load balancers.
11 Availability set to use when creating resources such as load balancers (optional).
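For reference, a filled-in azure.conf might look like the following. Every value below is a hypothetical placeholder, not a working credential; substitute the values for your own subscription and resource group:

```
tenantId: 11111111-2222-3333-4444-555555555555
subscriptionId: 66666666-7777-8888-9999-000000000000
aadClientId: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
aadClientSecret: example-client-secret
aadTenantId: 11111111-2222-3333-4444-555555555555
resourceGroup: openshift-rg
cloud: AzurePublicCloud
location: southeastasia
vnetName: openshift-vnet
securityGroupName: openshift-nsg
primaryAvailabilitySetName: openshift-as
```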

The NIC used for accessing the instance must have an internal-dns-name set, or the node will be unable to rejoin the cluster or display build logs to the console, and oc rsh will not work correctly.
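A minimal sketch for checking this requirement: verify that the node's name resolves through the system resolver. The example uses localhost as a stand-in for the real node name (on an actual host you would pass $(hostname -f)):

```shell
check_internal_dns() {
    # getent consults the same resolver the node uses; exit 0 means resolvable
    getent hosts "$1" > /dev/null
}

# localhost stands in for the node's FQDN in this sketch
if check_internal_dns localhost; then
    echo "name resolves"
else
    echo "WARNING: name does not resolve; set internal-dns-name on the NIC" >&2
fi
```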

Configuring for Azure during an advanced installation

During advanced installations, Azure can be configured using the following parameters, which are configurable in the inventory file.

# Cloud Provider Configuration
# Note: You may make use of environment variables rather than store
# sensitive configuration within the ansible inventory.
# For example:
openshift_cloudprovider_azure_client_id="{{ lookup('env','AZURE_CLIENT_ID') }}"
openshift_cloudprovider_azure_client_secret="{{ lookup('env','AZURE_CLIENT_SECRET') }}"
openshift_cloudprovider_azure_tenant_id="{{ lookup('env','AZURE_TENANT_ID') }}"
openshift_cloudprovider_azure_subscription_id="{{ lookup('env','AZURE_SUBSCRIPTION_ID') }}"
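The lookup('env', ...) calls above read the credentials from the shell environment at install time. A sketch of exporting them before running the installer (all values are placeholders, not real credentials):

```shell
# Placeholders only; substitute your real Azure credentials.
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_SECRET="example-secret"
export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# Confirm the inventory lookups will find a value:
echo "AZURE_CLIENT_ID=${AZURE_CLIENT_ID}"
```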

When Ansible configures Azure, the following additional file is created for you:

  • /etc/origin/cloudprovider/azure.conf

Manually configuring master hosts for Azure

Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:

      - "azure"
      - "/etc/azure/azure.conf"
      - "azure"
      - "/etc/azure/azure.conf"

When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml should be in /etc/origin/master instead of /etc/.

Manually configuring node hosts for Azure

  1. Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:

        - "azure"
        - "/etc/azure/azure.conf"

    When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml should be in /etc/origin/node instead of /etc/.

Applying manual configuration changes

Start or restart OpenShift Origin services on all master and node hosts to apply your configuration changes. See Restarting OpenShift Origin services:

# systemctl restart origin-master-api origin-master-controllers
# systemctl restart origin-node

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.

  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

    $ oc delete node <node_name>
  4. On each node host, restart the OpenShift Origin service.

    # systemctl restart origin-node
  5. Add back the labels that each node previously had.
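The label backup in step 2 relies on a Perl-regex grep that captures everything between the Labels and Taints fields of the oc describe output. A small self-contained demonstration (the node data below is made up for illustration):

```shell
# Simulate a fragment of `oc describe node` output:
cat <<'EOF' > /tmp/describe.txt
Labels:             kubernetes.io/hostname=node1
                    region=infra
Taints:             <none>
EOF

# (?s) lets . match newlines; the lookahead (?=Taints) stops the match
# just before the Taints field, so only the label lines are printed.
grep -Poz '(?s)Labels.*\n.*(?=Taints)' /tmp/describe.txt | tr -d '\0'
```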