Overview

OpenShift Origin can be configured to access local volumes for application data.

Local volumes are persistent volumes (PVs) that represent locally mounted file systems. In the future, they may be extended to raw block devices.

Local volumes are different from hostPath volumes. They carry a special annotation that causes any pod that uses the PV to be scheduled on the same node where the local volume is mounted.
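
For illustration, a PV created by the provisioner for one of the mounted volumes might look similar to the following sketch. The node name, capacity, and storage class are placeholders, and depending on the exact release the node constraint appears either as an annotation or as the spec.nodeAffinity field shown here:

# Illustrative example of a local PV (values are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/local-storage/ssd/disk1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1.example.com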

In addition, local volumes include a provisioner that automatically creates PVs for locally mounted devices. This provisioner is currently limited: it only scans pre-configured directories and cannot dynamically provision volumes, although this may be implemented in a future release.

The local volume provisioner allows using local storage within OpenShift Origin and supports:

  • Volumes

  • PVs

Local volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Mount Local Volumes

All local volumes must be manually mounted before they can be consumed by OpenShift Origin as PVs.

All volumes must be mounted into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators must create the local devices as needed (using any method, such as disk partitions or LVM), create suitable file systems on these devices, and mount them using a script or /etc/fstab entries.

Example /etc/fstab entries
# device name    # mount point                   # FS    # options   # extra
/dev/sdb1        /mnt/local-storage/ssd/disk1    ext4    defaults    1 2
/dev/sdb2        /mnt/local-storage/ssd/disk2    ext4    defaults    1 2
/dev/sdb3        /mnt/local-storage/ssd/disk3    ext4    defaults    1 2
/dev/sdc1        /mnt/local-storage/hdd/disk1    ext4    defaults    1 2
/dev/sdc2        /mnt/local-storage/hdd/disk2    ext4    defaults    1 2
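
For example, one of the devices from the table above could be prepared and mounted manually as follows. The device name and mount point are taken from the example entries and are placeholders for your environment:

# Create a file system on the device (the partition itself must already exist)
$ mkfs.ext4 /dev/sdb1

# Create the mount point under the storage class directory and mount the device
$ mkdir -p /mnt/local-storage/ssd/disk1
$ mount /dev/sdb1 /mnt/local-storage/ssd/disk1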

All volumes must be accessible to processes running within Docker containers. Change the SELinux labels of the mounted file systems to allow this:

$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/

Configure Local Provisioner

OpenShift Origin depends on an external provisioner to create PVs for local devices and to clean them up when they are not needed (to enable reuse).

  • The local volume provisioner is different from most provisioners and does not support dynamic provisioning.

  • The local volume provisioner requires that the administrators preconfigure the local volumes on each node and mount them under discovery directories. The provisioner then manages the volumes by creating and cleaning up PVs for each volume.

This external provisioner is configured with a ConfigMap that maps directories to StorageClasses. This configuration must be created before the provisioner is deployed.

(Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example:

$ oc new-project local-storage

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  storageClassMap: |
    local-ssd: (1)
      hostDir: /mnt/local-storage/ssd (2)
      mountDir: /mnt/local-storage/ssd (3)
    local-hdd:
      hostDir: /mnt/local-storage/hdd
      mountDir: /mnt/local-storage/hdd
1 Name of the StorageClass.
2 Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
3 Path to the directory in the provisioner pod. Using the same directory structure as on the host is recommended, in which case mountDir can be omitted.
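
Save the definition to a file and create the ConfigMap in the provisioner's namespace before deploying the provisioner. The file name below is only an example:

$ oc create -f local-volume-config.yaml -n local-storage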

With this configuration, the provisioner creates:

  • One PV with StorageClass local-ssd for every mounted subdirectory in /mnt/local-storage/ssd.

  • One PV with StorageClass local-hdd for every mounted subdirectory in /mnt/local-storage/hdd.

The syntax of the ConfigMap changed between OpenShift Origin 3.9 and 3.10. Because this feature is in Technology Preview, the ConfigMap is not converted automatically during an upgrade.

Deploy Local Provisioner

Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.

Install the local provisioner from the local-storage-provisioner-template.yaml file.

  1. Create a service account that can run pods as the root user, use hostPath volumes, and use any SELinux context, so that it can monitor, manage, and clean up local volumes:

    $ oc create serviceaccount local-storage-admin
    $ oc adm policy add-scc-to-user privileged -z local-storage-admin

    Root privileges and the ability to use any SELinux context are required so that the provisioner pod can delete content created by any pod on the local volumes. Access to hostPath volumes is required to reach the /mnt/local-storage path on the host.

  2. Install the template:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
  3. Instantiate the template by specifying values for the CONFIGMAP, SERVICE_ACCOUNT, NAMESPACE, and PROVISIONER_IMAGE parameters:

    $ oc new-app -p CONFIGMAP=local-volume-config \
      -p SERVICE_ACCOUNT=local-storage-admin \
      -p NAMESPACE=local-storage \
      -p PROVISIONER_IMAGE=quay.io/external_storage/local-volume-provisioner:v1.0.1 \
      local-storage-provisioner
  4. Add the necessary storage classes:

    $ oc create -f ./storage-class-ssd.yaml
    $ oc create -f ./storage-class-hdd.yaml

    storage-class-ssd.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-ssd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

    storage-class-hdd.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-hdd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

    See the template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches directories specified in the ConfigMap and creates PVs for them automatically.

    The provisioner runs as root to be able to clean up the directories when a PV is released and all data needs to be removed.
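
Once the DaemonSet pods are running, the provisioner pods and the automatically created PVs can be checked, for example:

$ oc get pods -n local-storage
$ oc get pv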

Adding New Devices

Adding a new device is semi-automatic. The provisioner periodically checks for new mounts in the configured directories. The administrator needs to create a new subdirectory there, mount a device on it, and allow pods to use the device by applying the SELinux label:

$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/

Omitting any of these steps may result in the wrong PV being created.
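
For example, a complete sequence for adding a new device to the local-ssd class might look like the following. The device name /dev/sdd1 and the disk4 directory are placeholders:

# Create a new subdirectory in the discovery directory and mount the device
$ mkdir -p /mnt/local-storage/ssd/disk4
$ mount /dev/sdd1 /mnt/local-storage/ssd/disk4

# Relabel the mounted file system so that pods can use it
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/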

Raw Block Devices

Raw block devices can also be statically provisioned using the local volume provisioner. This feature is disabled by default and requires additional configuration.

  1. Enable the BlockVolume feature gate on all masters. Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add BlockVolume=true under the apiServerArguments and controllerArguments sections:

    apiServerArguments:
      feature-gates:
      - BlockVolume=true
    ...
    controllerArguments:
      feature-gates:
      - BlockVolume=true
    ...
  2. Enable the feature gate on all nodes by editing the node configuration ConfigMap:

    $ oc edit configmap node-config-compute --namespace openshift-node
    $ oc edit configmap node-config-master --namespace openshift-node
    $ oc edit configmap node-config-infra --namespace openshift-node

    Ensure that all of the ConfigMaps contain BlockVolume=true in the feature-gates array under kubeletArguments:

    Example node configmap feature-gates setting
    kubeletArguments:
       feature-gates:
       - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,BlockVolume=true
  3. Restart the master. The nodes should be restarted automatically after the configuration change (it may take several minutes).
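
Assuming the control plane runs as static pods (the default in this release), the master API and controllers can typically be restarted with commands similar to the following; the exact procedure depends on how the cluster was installed:

$ master-restart api
$ master-restart controllers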

Prepare the Block Devices

Before starting the provisioner, all block devices that should be available to pods must be linked into the /mnt/local-storage/<storage-class> directory structure. For example, to make the device /dev/dm-36 available:

  1. Create a directory for its StorageClass in /mnt/local-storage:

    $ mkdir -p /mnt/local-storage/block-devices
  2. Create a symbolic link in that directory pointing to the device:

    $ ln -s /dev/dm-36 /mnt/local-storage/block-devices/dm-uuid-LVM-1234

    It is good practice to give the symbolic link the same name as the corresponding link in the /dev/disk/by-uuid or /dev/disk/by-id directory, to avoid possible name conflicts.

  3. Create or update the ConfigMap configuring the provisioner:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-volume-config
    data:
      storageClassMap: |
        block-devices: (1)
          hostDir: /mnt/local-storage/block-devices (2)
          mountDir: /mnt/local-storage/block-devices (3)
    1 Name of the StorageClass.
    2 Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
    3 Path to the directory in the provisioner pod. Using the same directory structure as on the host is recommended, in which case mountDir can be omitted.
  4. Change the SELinux labels of the device and of the /mnt/local-storage/ directory:

    $ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
    $ chcon unconfined_u:object_r:svirt_sandbox_file_t:s0 /dev/dm-36
  5. Create a StorageClass for the block devices:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: block-devices
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
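
As with the other storage classes, save this definition to a file and create it. The file name is only an example:

$ oc create -f ./storage-class-block-devices.yaml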

Now the block device /dev/dm-36 is ready to be used by the provisioner and provisioned as a PV.

Deploy the Provisioner

Deploying the provisioner is similar to deploying it for the usual file system volumes, with two differences: the provisioner must run in a privileged container and must have access to the /dev file system of the host. Download the template from the local-storage-provisioner-template.yaml file and make the following changes:

  1. Set the privileged attribute of the securityContext of the container spec to true:

    ...
      containers:
    ...
        name: provisioner
    ...
          securityContext:
            privileged: true
    ...
  2. Ensure that the host /dev/ file system is mounted into the container using hostPath:

    ...
      containers:
    ...
        name: provisioner
    ...
        volumeMounts:
        - mountPath: /dev
          name: dev
    ...
        volumes:
        - hostPath:
            path: /dev
          name: dev
    ...
  3. Create the template from the modified YAML file:

    $ oc create -f local-storage-provisioner-template.yaml
  4. Start the provisioner in the same way as without block device support:

    $ oc new-app -p CONFIGMAP=local-volume-config \
      -p SERVICE_ACCOUNT=local-storage-admin \
      -p NAMESPACE=local-storage \
      -p PROVISIONER_IMAGE=quay.io/external_storage/local-volume-provisioner:v1.0.1 \
      local-storage-provisioner

Using the Raw Block PV

To use the block device in a pod, create a PVC with volumeMode: Block that references the StorageClass of the block device:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: block-devices
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
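
Save the claim to a file and create it. The file name is only an example:

$ oc create -f block-pvc.yaml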

An example pod using the block device PVC:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
  labels:
    name: busybox-test
spec:
  restartPolicy: Never
  containers:
    - resources:
        limits:
          cpu: 0.5
      image: gcr.io/google_containers/busybox
      command:
        - "/bin/sh"
        - "-c"
        - "while true; do date; sleep 1; done"
      name: busybox
      volumeDevices:
        - name: vol
          devicePath: /dev/xvda
  volumes:
      - name: vol
        persistentVolumeClaim:
          claimName: block-pvc

Note that the volume is not mounted into the pod's file system; instead, it is exposed as the /dev/xvda block device.
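
To check that the device is available inside the running container, you can, for example, list it with oc exec; the output should show a block special file:

$ oc exec busybox-test -- ls -l /dev/xvda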