When new versions of OpenShift Origin are released, you can upgrade your existing cluster to apply the latest enhancements and bug fixes. See the OpenShift Origin Releases page on GitHub to review the latest changes.
Unless noted otherwise, nodes and masters within a major version are forward and backward compatible across one minor version, so upgrading your cluster should go smoothly. However, you should not run mismatched versions longer than necessary to upgrade the entire cluster.
The OpenShift Origin 3.9 release includes a merge of features and fixes from Kubernetes 1.8 and 1.9. As a result, the upgrade process from OpenShift Origin 3.7 completes with the cluster fully upgraded to OpenShift Origin 3.9, seemingly "skipping" the 3.8 release. Technically, the OpenShift Origin 3.7 cluster is first upgraded to 3.8-versioned packages, and then the process immediately continues upgrading to OpenShift Origin 3.9 automatically. Your cluster should only remain at 3.8-versioned packages for as long as it takes to successfully complete the upgrade to OpenShift Origin 3.9.
There are two methods available for performing OpenShift Origin cluster upgrades: automated or manual.
The automated upgrade method uses Ansible playbooks to automate the tasks needed to upgrade an OpenShift Origin cluster. You can use the inventory file that you used during initial installation to run the upgrade playbooks. Using this method allows you to choose between either upgrade strategy: in-place upgrades or blue-green deployments.
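As a rough sketch, an automated in-place upgrade is kicked off by pointing `ansible-playbook` at your existing inventory file and the version-specific upgrade playbook. The paths below (`/etc/ansible/hosts` and `~/openshift-ansible`) are assumptions; verify the playbook location against your checkout of openshift-ansible, as it varies between releases:

```shell
# Sketch only: inventory path and playbook location are assumptions
# and depend on where openshift-ansible is checked out.

# Re-use the inventory file from the initial installation.
INVENTORY=/etc/ansible/hosts

# Run the v3_9 upgrade playbook against the whole cluster.
ansible-playbook -i "$INVENTORY" \
    ~/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade.yml
```

Because the playbooks read the same inventory used at install time, host groups and variables such as the deployment type carry over into the upgrade without being re-specified.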
The manual upgrade method breaks down the steps that happen during an automated Ansible-based upgrade and provides the equivalent commands to run manually. This method follows the in-place upgrade strategy.
When using the automated upgrade method, there are two strategies you can take for performing the OpenShift Origin cluster upgrade: in-place upgrades or blue-green deployments. When using the manual upgrade method, only an in-place upgrade is described.
With in-place upgrades, the cluster upgrade is performed on all hosts in a single, running cluster: first masters and then nodes. Pods are evacuated from nodes and recreated on other running nodes before a node upgrade begins; this helps reduce downtime of user applications.
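The evacuation step can also be seen in the equivalent manual commands. A minimal sketch using `oc adm`, where `node1.example.com` is a placeholder hostname:

```shell
# Sketch: mark a node unschedulable, drain its pods so they are
# recreated on other running nodes, then re-enable scheduling
# after the node has been upgraded.
# "node1.example.com" is a placeholder hostname.

oc adm manage-node node1.example.com --schedulable=false
oc adm drain node1.example.com --ignore-daemonsets

# ... upgrade the node's packages and restart its services here ...

oc adm manage-node node1.example.com --schedulable=true
```

Marking the node unschedulable first ensures that evacuated pods are not rescheduled back onto the node that is about to be upgraded.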
The blue-green deployment upgrade method follows a similar flow to the in-place method: masters and etcd servers are still upgraded first; however, a parallel environment is created for new nodes instead of upgrading them in place.
This method allows administrators to switch traffic from the old set of nodes (e.g., the "blue" deployment) to the new set (e.g., the "green" deployment) after the new deployment has been verified. If a problem is detected, it is also easy to roll back to the old deployment quickly.
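One way the switch can be performed is by labeling the two node sets and evacuating the old one. This is a sketch; the `color` label and the hostnames are arbitrary conventions chosen for illustration, not anything required by OpenShift:

```shell
# Sketch: move workloads from the "blue" (old) nodes to the "green"
# (new) nodes. The "color" label and hostnames are placeholders.

# Label the existing and replacement nodes.
oc label node old-node1.example.com color=blue
oc label node new-node1.example.com color=green

# Stop scheduling onto the blue nodes, then drain them so pods are
# recreated on the green nodes.
oc adm manage-node --selector=color=blue --schedulable=false
oc adm drain old-node1.example.com --ignore-daemonsets
```

Rolling back is then a matter of making the blue nodes schedulable again before the old hosts are decommissioned.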