Creating an environment-wide backup involves copying important data so that it can be restored if an instance crashes or data becomes corrupted. After backups have been created, they can be restored onto a newly installed version of the relevant component.

Perform backups on a regular basis to prevent data loss.

Creating a master host backup

Perform the backup process before any change to the infrastructure, such as a system update, upgrade, or any other significant modification. Perform backups regularly to ensure that the most recent data is available if a failure occurs.

OpenShift Origin files

The master instances run important services, such as the API and controllers. The /etc/origin/master directory stores many important files:

  • The configuration for the API, controllers, services, and more

  • Certificates generated by the installation

  • All cloud provider-related configuration

  • Keys and other authentication files, such as htpasswd if you use htpasswd

  • And more

The OpenShift Origin services can be customized to increase the log level, use proxies, and so on. The configuration files are stored in the /etc/sysconfig directory.

Because the masters are also unschedulable nodes, back up the entire /etc/origin directory.

Procedure
  1. Create a backup of the master host configuration files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
    $ sudo cp -aR /etc/sysconfig/atomic-* ${MYBACKUPDIR}/etc/sysconfig/

    In a single-master cluster installation, the configuration is stored in /etc/sysconfig/atomic-openshift-master, whereas in a multi-master environment, /etc/sysconfig/atomic-openshift-master-api and /etc/sysconfig/atomic-openshift-master-controllers are used.
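
    To check which of these service configuration files exist on a given master before backing them up, you can list them (a quick check, not a required step):

    $ ls /etc/sysconfig/atomic-openshift-master*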

    At the time of writing, the /etc/origin/master/ca.serial.txt file is generated on only the first master listed in the Ansible host inventory. A fix is being tracked for future OpenShift Origin releases in Bugzilla 1469358. If you plan to deprecate the first master host, copy the /etc/origin/master/ca.serial.txt file to the remaining master hosts before doing so.
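
    For example, the file can be copied ahead of time with scp (a sketch; master-1.example.com and master-2.example.com are placeholder hostnames, and SSH access as root is assumed):

    $ for host in master-1.example.com master-2.example.com; do
        sudo scp /etc/origin/master/ca.serial.txt root@${host}:/etc/origin/master/
      done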

  2. Other important files that need to be considered when planning a backup include:

    File                                      Description

    /etc/cni/*                                Container Network Interface configuration (if used)
    /etc/sysconfig/iptables                   Where the iptables rules are stored
    /etc/sysconfig/docker-storage-setup       The input file for the container-storage-setup command
    /etc/sysconfig/docker                     The docker configuration file
    /etc/sysconfig/docker-network             docker networking configuration (for example, MTU)
    /etc/sysconfig/docker-storage             docker storage configuration (generated by container-storage-setup)
    /etc/dnsmasq.conf                         Main configuration file for dnsmasq
    /etc/dnsmasq.d/*                          Different dnsmasq configuration files
    /etc/sysconfig/flanneld                   flannel configuration file (if used)
    /etc/pki/ca-trust/source/anchors/         Certificates added to the system (for example, for external registries)

    Create a backup of those files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
    $ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
        ${MYBACKUPDIR}/etc/sysconfig/
    $ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
    $ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
        ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
  3. If a package is accidentally removed, or a file included in an rpm package must be restored, having a list of the Red Hat Enterprise Linux (RHEL) packages installed on the system can be useful.

    If you use Red Hat Satellite, features such as content views or the facts store provide a mechanism to reinstall missing packages and a historical record of the packages installed on the systems.

    To create a list of the current RHEL packages installed on the system:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}
    $ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
  4. If you followed the previous steps, the following files are now present in the backup directory:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
    etc/sysconfig/atomic-openshift-master
    etc/sysconfig/atomic-openshift-master-api
    etc/sysconfig/atomic-openshift-master-controllers
    etc/sysconfig/atomic-openshift-node
    etc/sysconfig/flanneld
    etc/sysconfig/iptables
    etc/sysconfig/docker-network
    etc/sysconfig/docker-storage
    etc/sysconfig/docker-storage-setup
    etc/sysconfig/docker-storage-setup.rpmnew
    etc/origin/master/ca.crt
    etc/origin/master/ca.key
    etc/origin/master/ca.serial.txt
    etc/origin/master/ca-bundle.crt
    etc/origin/master/master.proxy-client.crt
    etc/origin/master/master.proxy-client.key
    etc/origin/master/service-signer.crt
    etc/origin/master/service-signer.key
    etc/origin/master/serviceaccounts.private.key
    etc/origin/master/serviceaccounts.public.key
    etc/origin/master/openshift-master.crt
    etc/origin/master/openshift-master.key
    etc/origin/master/openshift-master.kubeconfig
    etc/origin/master/master.server.crt
    etc/origin/master/master.server.key
    etc/origin/master/master.kubelet-client.crt
    etc/origin/master/master.kubelet-client.key
    etc/origin/master/admin.crt
    etc/origin/master/admin.key
    etc/origin/master/admin.kubeconfig
    etc/origin/master/etcd.server.crt
    etc/origin/master/etcd.server.key
    etc/origin/master/master.etcd-client.key
    etc/origin/master/master.etcd-client.csr
    etc/origin/master/master.etcd-client.crt
    etc/origin/master/master.etcd-ca.crt
    etc/origin/master/policy.json
    etc/origin/master/scheduler.json
    etc/origin/master/htpasswd
    etc/origin/master/session-secrets.yaml
    etc/origin/master/openshift-router.crt
    etc/origin/master/openshift-router.key
    etc/origin/master/registry.crt
    etc/origin/master/registry.key
    etc/origin/master/master-config.yaml
    etc/origin/generated-configs/master-master-1.example.com/master.server.crt
    ...[OUTPUT OMITTED]...
    etc/origin/cloudprovider/openstack.conf
    etc/origin/node/system:node:master-0.example.com.crt
    etc/origin/node/system:node:master-0.example.com.key
    etc/origin/node/ca.crt
    etc/origin/node/system:node:master-0.example.com.kubeconfig
    etc/origin/node/server.crt
    etc/origin/node/server.key
    etc/origin/node/node-dnsmasq.conf
    etc/origin/node/resolv.conf
    etc/origin/node/node-config.yaml
    etc/origin/node/flannel.etcd-client.key
    etc/origin/node/flannel.etcd-client.csr
    etc/origin/node/flannel.etcd-client.crt
    etc/origin/node/flannel.etcd-ca.crt
    etc/pki/ca-trust/source/anchors/openshift-ca.crt
    etc/pki/ca-trust/source/anchors/registry-ca.crt
    etc/dnsmasq.conf
    etc/dnsmasq.d/origin-dns.conf
    etc/dnsmasq.d/origin-upstream-dns.conf
    etc/dnsmasq.d/node-dnsmasq.conf
    packages.txt

    If needed, the files can be compressed to save space:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
    $ sudo rm -Rf ${MYBACKUPDIR}

To create a backup of all of these files in one step, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs the previous steps. The script creates a directory on the host running the script and copies all the files previously mentioned.

The openshift-ansible-contrib script is not supported by Red Hat, but the reference architecture team performs testing to ensure the code operates as defined and is secure.

The script can be executed on every master host with:

$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h

Creating a node host backup

Creating a backup of a node host is a different use case from backing up a master host. Because master hosts contain many important files, creating a backup is highly recommended. However, anything special on a node is replicated across the nodes in case of failover, and nodes typically do not contain data that is necessary to run an environment. If a backup of a node contains something necessary to run the environment, then creating a backup is recommended.

Perform the backup process before any change to the infrastructure, such as a system update, upgrade, or any other significant modification. Perform backups regularly to ensure that the most recent data is available if a failure occurs.

OpenShift Origin files

Node instances run applications in the form of pods, which are based on containers. The /etc/origin/ and /etc/origin/node directories house important files, such as:

  • The configuration of the node services

  • Certificates generated by the installation

  • Cloud provider-related configuration

  • Keys and other authentication files, such as the dnsmasq configuration

The OpenShift Origin services can be customized to increase the log level, use proxies, and more, and the configuration files are stored in the /etc/sysconfig directory.

Procedure
  1. Create a backup of the node configuration files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
    $ sudo cp -aR /etc/sysconfig/atomic-openshift-node ${MYBACKUPDIR}/etc/sysconfig/
  2. OpenShift Origin uses specific files that must be taken into account when planning the backup policy, including:

    File                                      Description

    /etc/cni/*                                Container Network Interface configuration (if used)
    /etc/sysconfig/iptables                   Where the iptables rules are stored
    /etc/sysconfig/docker-storage-setup       The input file for the container-storage-setup command
    /etc/sysconfig/docker                     The docker configuration file
    /etc/sysconfig/docker-network             docker networking configuration (for example, MTU)
    /etc/sysconfig/docker-storage             docker storage configuration (generated by container-storage-setup)
    /etc/dnsmasq.conf                         Main configuration file for dnsmasq
    /etc/dnsmasq.d/*                          Different dnsmasq configuration files
    /etc/sysconfig/flanneld                   flannel configuration file (if used)
    /etc/pki/ca-trust/source/anchors/         Certificates added to the system (for example, for external registries)

    Create a backup of those files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
    $ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
        ${MYBACKUPDIR}/etc/sysconfig/
    $ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
    $ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
        ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
  3. If a package is accidentally removed, or a file included in an rpm package must be restored, having a list of the Red Hat Enterprise Linux (RHEL) packages installed on the system can be useful.

    If you use Red Hat Satellite, features such as content views or the facts store provide a mechanism to reinstall missing packages and a historical record of the packages installed on the systems.

    To create a list of the current RHEL packages installed on the system:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}
    $ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
  4. The following files should now be present in the backup directory:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
    etc/sysconfig/atomic-openshift-node
    etc/sysconfig/flanneld
    etc/sysconfig/iptables
    etc/sysconfig/docker-network
    etc/sysconfig/docker-storage
    etc/sysconfig/docker-storage-setup
    etc/sysconfig/docker-storage-setup.rpmnew
    etc/origin/node/system:node:app-node-0.example.com.crt
    etc/origin/node/system:node:app-node-0.example.com.key
    etc/origin/node/ca.crt
    etc/origin/node/system:node:app-node-0.example.com.kubeconfig
    etc/origin/node/server.crt
    etc/origin/node/server.key
    etc/origin/node/node-dnsmasq.conf
    etc/origin/node/resolv.conf
    etc/origin/node/node-config.yaml
    etc/origin/node/flannel.etcd-client.key
    etc/origin/node/flannel.etcd-client.csr
    etc/origin/node/flannel.etcd-client.crt
    etc/origin/node/flannel.etcd-ca.crt
    etc/origin/cloudprovider/openstack.conf
    etc/pki/ca-trust/source/anchors/openshift-ca.crt
    etc/pki/ca-trust/source/anchors/registry-ca.crt
    etc/dnsmasq.conf
    etc/dnsmasq.d/origin-dns.conf
    etc/dnsmasq.d/origin-upstream-dns.conf
    etc/dnsmasq.d/node-dnsmasq.conf
    packages.txt

    If needed, the files can be compressed to save space:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
    $ sudo rm -Rf ${MYBACKUPDIR}

To create a backup of all of these files in one step, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs the previous steps. The script creates a directory on the host running the script and copies all the files previously mentioned.

The openshift-ansible-contrib script is not supported by Red Hat, but the reference architecture team performs testing to ensure the code operates as defined and is secure.

The script can be executed on every master host with:

$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h

etcd backup

etcd is the key value store for all object definitions, as well as the persistent master state. Other components watch for changes, then bring themselves into the desired state.

OpenShift Origin versions prior to 3.5 use etcd version 2 (v2), while 3.5 and later use version 3 (v3). The data model between the two versions of etcd is different. etcd v3 can use both the v2 and v3 data models, whereas etcd v2 can only use the v2 data model. In an etcd v3 server, the v2 and v3 data stores exist in parallel and are independent.

For both v2 and v3 operations, you can use the ETCDCTL_API environment variable to select the proper API:

$ etcdctl -v
etcdctl version: 3.2.5
API version: 2
$ ETCDCTL_API=3 etcdctl version
etcdctl version: 3.2.5
API version: 3.2

See the Migrating etcd Data (v2 to v3) section in the OpenShift Origin 3.7 documentation for information about how to migrate to v3.

The etcd backup process is composed of two different procedures:

  • Configuration backup: Including the required etcd configuration and certificates

  • Data backup: Including both the v2 and v3 data models

You can perform the data backup process on any host that has connectivity to the etcd cluster, where the proper certificates are provided, and where the etcdctl tool is installed.

The backup files must be copied to an external system, ideally outside the OpenShift Origin environment, and then encrypted.
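
For example, after the configuration and data backups described below have been created, they can be archived, encrypted, and copied off the cluster (a sketch; backup.example.com and the destination path are placeholders):

# tar -zcvf etcd-backup-$(date +%Y%m%d).tar.gz /backup/etcd-config-$(date +%Y%m%d)/ /backup/etcd-$(date +%Y%m%d)/
# gpg --symmetric etcd-backup-$(date +%Y%m%d).tar.gz
# scp etcd-backup-$(date +%Y%m%d).tar.gz.gpg backup.example.com:/srv/backups/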

etcd configuration backup

The etcd configuration files to be preserved are all stored in the /etc/etcd directory of the instances where etcd is running. This includes the etcd configuration file (/etc/etcd/etcd.conf) and the required certificates for cluster communication. All those files are generated at installation time by the Ansible installer.

For each etcd member of the cluster, back up the etcd configuration.

$ ssh master-0
# mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
# cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/

The certificates and configuration files on each etcd cluster member are unique.
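
Because each member's files are unique, the configuration backup must be repeated on every etcd host. One way to do this from a single host is to loop over the members (a sketch; the hostnames are placeholders and passwordless SSH as root is assumed):

$ for host in master-0.example.com master-1.example.com master-2.example.com; do
    ssh root@${host} "mkdir -p /backup/etcd-config-$(date +%Y%m%d)/ && \
      cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/"
  done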

etcd data backup

Prerequisites

The OpenShift Origin installer creates aliases named etcdctl2, for etcd v2 tasks, and etcdctl3, for etcd v3 tasks, so that you do not have to type all the required flags.

However, the etcdctl3 alias does not provide the full endpoint list to the etcdctl command, so you must provide the --endpoints option with all of the endpoints.
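
The aliases typically wrap etcdctl with the certificate flags, roughly as follows (an illustrative sketch only; the exact definitions written by the installer can differ):

alias etcdctl2='etcdctl --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key --ca-file=/etc/etcd/ca.crt --endpoints=https://$(hostname):2379'
alias etcdctl3='ETCDCTL_API=3 etcdctl --cert=/etc/etcd/peer.crt --key=/etc/etcd/peer.key --cacert=/etc/etcd/ca.crt'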

Before backing up etcd:

  • etcdctl binaries should be available or, in containerized installations, the rhel7/etcd container should be available

  • Ensure connectivity with the etcd cluster (port 2379/tcp)

  • Ensure the proper certificates to connect to the etcd cluster are available

    1. To ensure the etcd cluster is working, check its health.

      • If you use the etcd v2 API, run the following command:

        # etcdctl --cert-file=/etc/etcd/peer.crt \
                  --key-file=/etc/etcd/peer.key \
                  --ca-file=/etc/etcd/ca.crt \
                  --peers="https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379" \
                  cluster-health
        member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
        member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
        member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
        cluster is healthy
      • If you use the etcd v3 API, run the following command:

        # ETCDCTL_API=3 etcdctl --cert=/etc/etcd/peer.crt \
                  --key=/etc/etcd/peer.key \
                  --cacert=/etc/etcd/ca.crt \
                  --endpoints="https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379" \
                  endpoint health
        https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
        https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
        https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
    2. Check the member list.

      • If you use the etcd v2 API, run the following command:

        # etcdctl2 member list
        2a371dd20f21ca8d: name=master-1.example.com peerURLs=https://192.168.55.12:2380 clientURLs=https://192.168.55.12:2379 isLeader=false
        40bef1f6c79b3163: name=master-0.example.com peerURLs=https://192.168.55.8:2380 clientURLs=https://192.168.55.8:2379 isLeader=false
        95dc17ffcce8ee29: name=master-2.example.com peerURLs=https://192.168.55.13:2380 clientURLs=https://192.168.55.13:2379 isLeader=true
      • If you use the etcd v3 API, run the following command:

        # etcdctl3 member list
        2a371dd20f21ca8d, started, master-1.example.com, https://192.168.55.12:2380, https://192.168.55.12:2379
        40bef1f6c79b3163, started, master-0.example.com, https://192.168.55.8:2380, https://192.168.55.8:2379
        95dc17ffcce8ee29, started, master-2.example.com, https://192.168.55.13:2380, https://192.168.55.13:2379
Procedure

While the etcdctl backup command is used to perform the backup, etcd v3 has no concept of a backup. Instead, you either take a snapshot from a live member with the etcdctl snapshot save command or copy the member/snap/db file from an etcd data directory.

The etcdctl backup command rewrites some of the metadata contained in the backup, specifically, the node ID and cluster ID, which means that in the backup, the node loses its former identity. To recreate a cluster from the backup, you create a new, single-node cluster, then add the rest of the nodes to the cluster. The metadata is rewritten to prevent the new node from joining an existing cluster.

  1. Back up the etcd data:

    • If you use the v2 API, take the following actions:

      1. Stop all etcd services:

        # systemctl stop etcd.service
      2. Create the etcd data backup and copy the etcd db file:

        # mkdir -p /backup/etcd-$(date +%Y%m%d)
        # etcdctl2 backup \
            --data-dir /var/lib/etcd \
            --backup-dir /backup/etcd-$(date +%Y%m%d)
        # cp /var/lib/etcd/member/snap/db /backup/etcd-$(date +%Y%m%d)
    • If you use the v3 API, run the following commands:

      # mkdir -p /backup/etcd-$(date +%Y%m%d)
      # etcdctl3 snapshot save /backup/etcd-$(date +%Y%m%d)/db
      Snapshot saved at /backup/etcd-<date>/db
      # systemctl stop etcd.service
      # etcdctl2 backup \
          --data-dir /var/lib/etcd \
          --backup-dir /backup/etcd-$(date +%Y%m%d)
      # systemctl start etcd.service

      The etcdctl snapshot save command requires the etcd service to be running.

      In these commands, a /backup/etcd-<date>/ directory is created, where <date> represents the current date. Copy the directory to an external NFS share, S3 bucket, or any other external storage location.

      In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.
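
      For example, an external NFS share can be mounted at /backup before running the backup commands, so that the resulting files already reside on external storage (a sketch; nfs.example.com:/exports/backups is a placeholder export):

      # mkdir -p /backup
      # mount -t nfs nfs.example.com:/exports/backups /backup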

Creating a project backup

Creating a backup of all relevant data involves exporting all important information, then restoring into a new project.

Currently, an OpenShift Origin project backup and restore tool is being developed by Red Hat. See the associated bug for more information.

Back up a project

Procedure
  1. To list all the relevant data to back up:

    $ oc get all
    NAME         TYPE      FROM      LATEST
    bc/ruby-ex   Source    Git       1
    
    NAME               TYPE      FROM          STATUS     STARTED         DURATION
    builds/ruby-ex-1   Source    Git@c457001   Complete   2 minutes ago   35s
    
    NAME                 DOCKER REPO                                     TAGS      UPDATED
    is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    2 minutes ago
    is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    2 minutes ago
    is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    2 minutes ago
    is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    2 minutes ago
    
    NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/guestbook         1          1         1         config,image(guestbook:latest)
    dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
    dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)
    
    NAME                   DESIRED   CURRENT   READY     AGE
    rc/guestbook-1         1         1         1         2m
    rc/hello-openshift-1   1         1         1         2m
    rc/ruby-ex-1           1         1         1         2m
    
    NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    svc/guestbook         10.111.105.84    <none>        3000/TCP            2m
    svc/hello-openshift   10.111.230.24    <none>        8080/TCP,8888/TCP   2m
    svc/ruby-ex           10.111.232.117   <none>        8080/TCP            2m
    
    NAME                         READY     STATUS      RESTARTS   AGE
    po/guestbook-1-c010g         1/1       Running     0          2m
    po/hello-openshift-1-4zw2q   1/1       Running     0          2m
    po/ruby-ex-1-build           0/1       Completed   0          2m
    po/ruby-ex-1-rxc74           1/1       Running     0          2m
  2. Export the project objects into a project.yaml file in yaml format:

    $ oc export all -o yaml > project.yaml

    Or, in json:

    $ oc export all -o json > project.json
  3. The above creates a yaml or json file with the project content. This, however, does not export all objects, such as role bindings, secrets, service accounts, or persistent volume claims. To export these, run:

    $ for object in rolebindings serviceaccounts secrets imagestreamtags podpreset cms egressnetworkpolicies rolebindingrestrictions limitranges resourcequotas pvcs templates cronjobs statefulsets hpas deployments replicasets poddisruptionbudget endpoints
    do
      oc export $object -o yaml > $object.yaml
    done
  4. Some exported objects can rely on specific metadata or references to unique IDs in the project. This is a limitation on the usability of the recreated objects.

    When using imagestreams, the image parameter of a deploymentconfig can point to a specific sha checksum of an image in the internal registry that would not exist in a restored environment. For instance, running the sample "ruby-ex" as oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git creates an imagestream ruby-ex using the internal registry to host the image:

    $ oc get dc ruby-ex -o jsonpath="{.spec.template.spec.containers[].image}"
    10.111.255.221:5000/myproject/ruby-ex@sha256:880c720b23c8d15a53b01db52f7abdcbb2280e03f686a5c8edfef1a2a7b21cee

    If the deploymentconfig is imported exactly as exported with oc export, the import fails if the image does not exist.

    To create those exports, use the project_export.sh script in the openshift-ansible-contrib repository, which creates separate files for all the project objects. The script creates a directory, named after the project, on the host running the script, with a json file for every object type in that project.

    The code in the openshift-ansible-contrib repository referenced below is not explicitly supported by Red Hat but the Reference Architecture team performs testing to ensure the code operates as defined and is secure.

    The script runs on Linux and requires the jq and oc commands to be installed, and you must be logged in to the OpenShift Origin environment as a user that can read all the objects in that project.

    $ mkdir ~/git
    $ cd ~/git
    $ git clone https://github.com/openshift/openshift-ansible-contrib.git
    $ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
    $ ./project_export.sh <projectname>

    For example:

    $ ./project_export.sh myproject
    Exporting namespace to project-demo/ns.json
    Exporting rolebindings to project-demo/rolebindings.json
    Exporting serviceaccounts to project-demo/serviceaccounts.json
    Exporting secrets to project-demo/secrets.json
    Exporting deploymentconfigs to project-demo/dc_*.json
    Patching DC...
    Exporting buildconfigs to project-demo/bcs.json
    Exporting builds to project-demo/builds.json
    Exporting imagestreams to project-demo/iss.json
    Exporting imagestreamtags to project-demo/imagestreamtags.json
    Exporting replicationcontrollers to project-demo/rcs.json
    Exporting services to project-demo/svc_*.json
    Exporting pods to project-demo/pods.json
    Exporting podpreset to project-demo/podpreset.json
    Exporting configmaps to project-demo/cms.json
    Exporting egressnetworkpolicies to project-demo/egressnetworkpolicies.json
    Exporting rolebindingrestrictions to project-demo/rolebindingrestrictions.json
    Exporting limitranges to project-demo/limitranges.json
    Exporting resourcequotas to project-demo/resourcequotas.json
    Exporting pvcs to project-demo/pvcs.json
    Exporting routes to project-demo/routes.json
    Exporting templates to project-demo/templates.json
    Exporting cronjobs to project-demo/cronjobs.json
    Exporting statefulsets to project-demo/statefulsets.json
    Exporting hpas to project-demo/hpas.json
    Exporting deployments to project-demo/deployments.json
    Exporting replicasets to project-demo/replicasets.json
    Exporting poddisruptionbudget to project-demo/poddisruptionbudget.json
  5. Once executed, review the files to verify that the content has been properly exported:

    $ cd <projectname>
    $ ls -1
    bcs.json
    builds.json
    cms.json
    cronjobs.json
    dc_ruby-ex.json
    dc_ruby-ex_patched.json
    deployments.json
    egressnetworkpolicies.json
    endpoint_external-mysql-service.json
    hpas.json
    imagestreamtags.json
    iss.json
    limitranges.json
    ns.json
    poddisruptionbudget.json
    podpreset.json
    pods.json
    pvcs.json
    rcs.json
    replicasets.json
    resourcequotas.json
    rolebindingrestrictions.json
    rolebindings.json
    routes.json
    secrets.json
    serviceaccounts.json
    statefulsets.json
    svc_external-mysql-service.json
    svc_ruby-ex.json
    templates.json
    $ less bcs.json
    ...

    If an object type does not exist in the project, an empty file is created for it during the export.
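
    If desired, these empty files can be identified before importing (an optional check; <projectname> is the export directory created by the script):

    $ find <projectname> -maxdepth 1 -type f -empty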

  6. If imagestreams are used, the script modifies the deploymentconfig to use the image reference instead of the image sha, creating an additional json file with the _patched suffix alongside the exported one:

    $ diff dc_hello-openshift.json dc_hello-openshift_patched.json
    45c45
    <             "image": "docker.io/openshift/hello-openshift@sha256:42b59c869471a1b5fdacadf778667cecbaa79e002b7235f8091540ae612f0e14",
    ---
    >             "image": "hello-openshift:latest",

The script does not currently support pods with multiple containers; use it with caution.

Restore project

To restore a project, create the new project, then restore the exported files by running oc create -f <file>, for example oc create -f pods.json. However, restoring a project from scratch requires a specific order, because some objects depend on others. For example, configmaps must be created before any pods.

Procedure
  1. If the project has been exported as a single file, it can be imported as:

    $ oc new-project <projectname>
    $ oc create -f project.yaml
    $ oc create -f secret.yaml
    $ oc create -f serviceaccount.yaml
    $ oc create -f pvc.yaml
    $ oc create -f rolebindings.yaml

    Some resources can fail to be created (for example, pods and default service accounts).

  2. If the project was initially exported using the project_export.sh script, the files are located in the <projectname> directory and can be imported using the companion project_import.sh script, which performs the oc create operations in the proper order:

    $ mkdir ~/git
    $ cd ~/git
    $ git clone https://github.com/openshift/openshift-ansible-contrib.git
    $ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
    $ ./project_import.sh <projectname_path>

    For example:

    $ ls ~/backup/myproject
    bcs.json           dc_guestbook_patched.json        dc_ruby-ex_patched.json  pvcs.json          secrets.json
    builds.json        dc_hello-openshift.json          iss.json                 rcs.json           serviceaccounts.json
    cms.json           dc_hello-openshift_patched.json  ns.json                  rolebindings.json  svcs.json
    dc_guestbook.json  dc_ruby-ex.json                  pods.json                routes.json        templates.json
    
    $ ./project_import.sh ~/backup/myproject
    namespace "myproject" created
    rolebinding "admin" created
    rolebinding "system:deployers" created
    rolebinding "system:image-builders" created
    rolebinding "system:image-pullers" created
    secret "builder-dockercfg-mqhs6" created
    secret "default-dockercfg-51xb9" created
    secret "deployer-dockercfg-6kvz7" created
    Error from server (AlreadyExists): error when creating "myproject//serviceaccounts.json": serviceaccounts "builder" already exists
    Error from server (AlreadyExists): error when creating "myproject//serviceaccounts.json": serviceaccounts "default" already exists
    Error from server (AlreadyExists): error when creating "myproject//serviceaccounts.json": serviceaccounts "deployer" already exists
    error: no objects passed to create
    service "guestbook" created
    service "hello-openshift" created
    service "ruby-ex" created
    imagestream "guestbook" created
    imagestream "hello-openshift" created
    imagestream "ruby-22-centos7" created
    imagestream "ruby-ex" created
    error: no objects passed to create
    error: no objects passed to create
    buildconfig "ruby-ex" created
    build "ruby-ex-1" created
    deploymentconfig "guestbook" created
    deploymentconfig "hello-openshift" created
    deploymentconfig "ruby-ex" created
    replicationcontroller "ruby-ex-1" created
    Error from server (AlreadyExists): error when creating "myproject//rcs.json": replicationcontrollers "guestbook-1" already exists
    Error from server (AlreadyExists): error when creating "myproject//rcs.json": replicationcontrollers "hello-openshift-1" already exists
    pod "guestbook-1-c010g" created
    pod "hello-openshift-1-4zw2q" created
    pod "ruby-ex-1-rxc74" created
    Error from server (AlreadyExists): error when creating "myproject//pods.json": object is being deleted: pods "ruby-ex-1-build" already exists
    error: no objects passed to create

    AlreadyExists errors can appear because some objects, such as serviceaccounts and secrets, are created automatically when the project is created.

  3. If you are using buildconfigs, the builds are not triggered automatically and the applications are not executed:

    $ oc get bc
    NAME      TYPE      FROM      LATEST
    ruby-ex   Source    Git       1
    $ oc get pods
    NAME                      READY     STATUS    RESTARTS   AGE
    guestbook-1-plnnq         1/1       Running   0          26s
    hello-openshift-1-g4g0j   1/1       Running   0          26s

    To trigger the builds, run the oc start-build command:

    $ for bc in $(oc get bc -o jsonpath="{.items[*].metadata.name}")
    do
        oc start-build ${bc}
    done

    The pods will deploy once the build completes.

  4. To verify the project was restored:

    $ oc get all
    NAME         TYPE      FROM      LATEST
    bc/ruby-ex   Source    Git       2
    
    NAME               TYPE      FROM          STATUS                    STARTED              DURATION
    builds/ruby-ex-1   Source    Git           Error (BuildPodDeleted)   About a minute ago
    builds/ruby-ex-2   Source    Git@c457001   Complete                  55 seconds ago       12s
    
    NAME                 DOCKER REPO                                     TAGS      UPDATED
    is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    About a minute ago
    is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    About a minute ago
    is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    About a minute ago
    is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    43 seconds ago
    
    NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/guestbook         1          1         1         config,image(guestbook:latest)
    dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
    dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)
    
    NAME                   DESIRED   CURRENT   READY     AGE
    rc/guestbook-1         1         1         1         1m
    rc/hello-openshift-1   1         1         1         1m
    rc/ruby-ex-1           1         1         1         43s
    
    NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    svc/guestbook         10.111.126.115   <none>        3000/TCP            1m
    svc/hello-openshift   10.111.23.21     <none>        8080/TCP,8888/TCP   1m
    svc/ruby-ex           10.111.162.157   <none>        8080/TCP            1m
    
    NAME                         READY     STATUS      RESTARTS   AGE
    po/guestbook-1-plnnq         1/1       Running     0          1m
    po/hello-openshift-1-g4g0j   1/1       Running     0          1m
    po/ruby-ex-1-h99np           1/1       Running     0          42s
    po/ruby-ex-2-build           0/1       Completed   0          55s

    The service and pod IP addresses are different, because they are assigned dynamically at creation time.

Creating a PVC backup

This topic describes how to synchronize persistent data from inside of a container to a server and then restore the data onto a new persistent volume claim.

Depending on the provider that is hosting the OpenShift Origin environment, third-party snapshot services may also be available for backup and restore purposes. Because OpenShift Origin cannot launch these services, this guide does not describe those steps.

Consult any product documentation for the correct backup procedures of specific applications. For example, copying the mysql data directory itself would not be a usable backup. Instead, run the specific backup procedures of the associated application and then synchronize any data. This includes using snapshot solutions provided by the OpenShift Origin hosting platform.
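
For example, for a MySQL database you might create a logical dump inside the pod and then synchronize the dump directory, rather than copying the raw data directory (an illustrative sketch; the pod name, credentials variable, and paths are placeholders):

$ oc exec mysql-1-abcde -- bash -c 'mkdir -p /tmp/db-backup && mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases > /tmp/db-backup/dump.sql'
$ oc rsync mysql-1-abcde:/tmp/db-backup ./mysql-backup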

Backup persistent volume claims

Procedure
  1. View the project and pods:

    $ oc get pods
    NAME           READY     STATUS      RESTARTS   AGE
    demo-1-build   0/1       Completed   0          2h
    demo-2-fxx6d   1/1       Running     0          1h
  2. Describe the desired pod to find the volumes currently being used by a persistent volume:

    $ oc describe pod demo-2-fxx6d
    Name:			demo-2-fxx6d
    Namespace:		test
    Security Policy:	restricted
    Node:			ip-10-20-6-20.ec2.internal/10.20.6.20
    Start Time:		Tue, 05 Dec 2017 12:54:34 -0500
    Labels:			app=demo
    			deployment=demo-2
    			deploymentconfig=demo
    Status:			Running
    IP:			172.16.12.5
    Controllers:		ReplicationController/demo-2
    Containers:
      demo:
        Container ID:	docker://201f3e55b373641eb36945d723e1e212ecab847311109b5cee1fd0109424217a
        Image:		docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Image ID:		docker-pullable://docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Port:		8080/TCP
        State:		Running
          Started:		Tue, 05 Dec 2017 12:54:52 -0500
        Ready:		True
        Restart Count:	0
        Volume Mounts:
          */opt/app-root/src/uploaded from persistent-volume (rw)*
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-8mmrk (ro)
        Environment Variables:	<none>
    ...omitted...

    The above shows that the persistent data is currently located in the /opt/app-root/src/uploaded directory.

  3. Copy the data locally:

    $ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app
    receiving incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/
    
    sent 38 bytes  received 190 bytes  152.00 bytes/sec
    total size is 32  speedup is 0.14

    The ocp_sop.txt file has been pulled down to the local system, where it can be backed up by backup software or another backup mechanism.

    The previous steps can also be used if a pod was started without a PVC and it is later decided that a PVC is necessary. This preserves the data, and the restore procedures can be used to populate the new storage.
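
    The synchronized directory can then be archived with a date stamp for retention (a simple example; send the archive to your usual backup target):

    $ tar -zcvf demo-app-$(date +%Y%m%d).tar.gz ./demo-app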

Restore persistent volume claims

This topic describes two methods for restoring data. The first involves deleting the file, then placing the file back in the expected location. The second example shows migrating persistent volume claims. The migration would occur in the event that the storage needs to be moved or in a disaster scenario when the backend storage no longer exists.

Consult the restore procedures for the specific application for any steps required to restore data to the application.

Restoring files to an existing PVC
Procedure
  1. Delete the file:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt
    sh-4.2$ rm -rf /opt/app-root/src/uploaded/ocp_sop.txt
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found
  2. Replace the file from the server containing the rsync backup of the files that were in the pvc:

    $ oc rsync uploaded demo-2-fxx6d:/opt/app-root/src/
  3. Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:

    $ oc rsh demo-2-fxx6d
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt
Restoring data to a new PVC

The following steps assume that a new pvc has been created.
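
If the new PVC does not exist yet, it can be created first. The following is a minimal example definition matching the filestore claim name used below (the access mode and requested size are assumptions; adjust them for your storage):

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF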

Procedure
  1. Overwrite the currently defined claim-name:

    $ oc volume dc/demo --add --name=persistent-volume \
        --type=persistentVolumeClaim --claim-name=filestore \
        --mount-path=/opt/app-root/src/uploaded --overwrite
  2. Validate that the pod is using the new PVC:

    $ oc describe dc/demo
    Name:		demo
    Namespace:	test
    Created:	3 hours ago
    Labels:		app=demo
    Annotations:	openshift.io/generated-by=OpenShiftNewApp
    Latest Version:	3
    Selector:	app=demo,deploymentconfig=demo
    Replicas:	1
    Triggers:	Config, Image(demo@latest, auto=true)
    Strategy:	Rolling
    Template:
      Labels:	app=demo
    		deploymentconfig=demo
      Annotations:	openshift.io/container.demo.image.entrypoint=["container-entrypoint","/bin/sh","-c","$STI_SCRIPTS_PATH/usage"]
    		openshift.io/generated-by=OpenShiftNewApp
      Containers:
       demo:
        Image:	docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Port:	8080/TCP
        Volume Mounts:
          /opt/app-root/src/uploaded from persistent-volume (rw)
        Environment Variables:	<none>
      Volumes:
       persistent-volume:
        Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        *ClaimName:	filestore*
        ReadOnly:	false
    ...omitted...
  3. Now that the new pvc is being used by the deployment configuration, use oc rsync to place the files onto the new pvc:

    $ oc rsync uploaded demo-3-2b8gs:/opt/app-root/src/
    sending incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/
    
    sent 181 bytes  received 39 bytes  146.67 bytes/sec
    total size is 32  speedup is 0.15
  4. Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory.

    $ oc rsh demo-3-2b8gs
    sh-4.2$ ls /opt/app-root/src/uploaded/
    lost+found  ocp_sop.txt