How To Deploy Kubernetes With NetApp Trident Persistent Storage

As folks adopt DevOps principles they turn to a common set of tools to help them get there. One of those is Docker, and Kubernetes is usually mentioned in the same sentence. To review, Docker is essentially a wrapper for Linux containers (LXC), which, similar to FreeBSD jails or Solaris Zones, provide a method for applications (and their dependencies) to be isolated in separate namespaces while sharing the host system’s kernel. Docker containers are extremely portable, as they only need the host server to have an LXC-compatible kernel and the Docker engine installed. Kubernetes takes this concept to the next level by automating the deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. For a more detailed introduction to Kubernetes, check out the “Sources” section below.
Now to the meat of the post: what is NetApp Trident, and where does it fit into the Docker/Kubernetes equation? Well, according to Trident’s GitHub page, “Trident provides storage orchestration for Kubernetes, integrating with its Persistent Volume framework to act as an external provisioner for NetApp ONTAP, SolidFire, and E-Series systems. Additionally, through its REST interface, Trident can also provide storage orchestration for non-Kubernetes deployments.” In other words, Trident lets you attach persistent storage from NetApp FAS, E-Series, or SolidFire system(s) to containers, allowing applications such as databases to easily operate in a containerized environment. Below are the steps I compiled to not only stand up a small 3-node Kubernetes cluster but also deploy the NetApp Trident plugin:

  • Kubernetes Host Preparation
    • Provision 3+ CentOS 7.x VMs with the latest updates (minimal package bundle recommended)
      • Server #1 – Master
      • Servers #2-3 – Slaves (aka minions)
    • Disable the firewalld service
      # systemctl stop firewalld && systemctl disable firewalld
    • Disable SELinux
      # setenforce permissive
      # vi /etc/sysconfig/selinux
      NOTE: Change "enforcing" to "disabled" and reboot
    • Set the default NFS version to version 3
      # sed -i 's/# Defaultvers=4/Defaultvers=3/g' /etc/nfsmount.conf
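The sed expression above assumes the stock CentOS 7 comment line is present. A low-risk way to sanity-check it is to run the same substitution against a scratch copy first; a sketch (the scratch file stands in for the real /etc/nfsmount.conf):

```shell
# Dry-run the Defaultvers change against a scratch copy before touching
# the real /etc/nfsmount.conf. The sample line mirrors the stock CentOS 7
# comment line that the sed expression expects to find.
scratch=$(mktemp)
printf '# Defaultvers=4\n' > "$scratch"

# Same substitution as in the walkthrough, applied to the copy
sed -i 's/# Defaultvers=4/Defaultvers=3/g' "$scratch"

result=$(cat "$scratch")
rm -f "$scratch"
echo "$result"   # -> Defaultvers=3
```

If the output still shows the commented line, the file on your host differs from the stock layout and the expression needs adjusting before running it for real.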
    • Define the following yum repos
      • docker
        # cat << EOF > /etc/yum.repos.d/docker.repo
        [docker]
        name=Docker Repository
        baseurl=<Docker_Repo_URL>
        enabled=1
        EOF
        NOTE: Due to compatibility concerns it is recommended to use the CentOS-mainline version of Docker (currently 1.12 as of 2017-08-12), which ships in the CentOS "extras" repository.
      • kubernetes
        # cat << EOF > /etc/yum.repos.d/kubernetes.repo
        [kubernetes]
        name=Kubernetes
        baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        enabled=1
        gpgcheck=1
        repo_gpgcheck=1
        gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        EOF
    • Install the following RPM packages
      # yum install -y docker kubelet kubeadm kubectl iptables-services
    • Enable the following services to start at boot
      # systemctl enable docker && systemctl enable kubelet
    • Start the docker service
      # systemctl start docker
    • Login to the target NetApp ONTAP cluster and create an export policy that will be used by all the Kubernetes cluster nodes
      # export-policy create -vserver <SVM_Name> -policy prod_kubernetes_nodes
      # export-policy rule create -vserver <SVM_Name> -policy prod_kubernetes_nodes -rorule any -rwrule any -superuser any -protocol nfs -clientmatch <Kubernetes_Node#1_Storage_IP>
      # export-policy rule create -vserver <SVM_Name> -policy prod_kubernetes_nodes -rorule any -rwrule any -superuser any -protocol nfs -clientmatch <Kubernetes_Node#2_Storage_IP>
      # export-policy rule create -vserver <SVM_Name> -policy prod_kubernetes_nodes -rorule any -rwrule any -superuser any -protocol nfs -clientmatch <Kubernetes_Node#3_Storage_IP>
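The three near-identical rule-create commands lend themselves to a small loop, especially as the cluster grows beyond three nodes. A sketch that just prints the commands for review before pasting them into the ONTAP CLI (the SVM name and node IPs below are illustrative placeholders, not values from a real system):

```shell
# Generate one export-policy rule command per Kubernetes node storage IP.
# The SVM name and IP list are illustrative; substitute your own values.
svm="prod_svm"
policy="prod_kubernetes_nodes"
cmds=""
for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
  cmds="${cmds}export-policy rule create -vserver $svm -policy $policy -rorule any -rwrule any -superuser any -protocol nfs -clientmatch $ip
"
done

# Review the generated commands before running them on the cluster
printf '%s' "$cmds"
```

Printing rather than executing keeps the sketch safe to run anywhere; once the output looks right, the lines can be pasted into an SSH session on the ONTAP cluster.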
  • Kubernetes Cluster Setup / Initialization
    • Setup the master node
      • Login to the Kubernetes Master server via SSH
      • Initialize the cluster using kubeadm
        # kubeadm init --pod-network-cidr=10.244.0.0/16
        NOTE: The 10.244.0.0/16 CIDR is the default expected by the flannel pod network installed below.
        # mkdir -p $HOME/.kube
        # cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        # chown $(id -u):$(id -g) $HOME/.kube/config
        NOTE: Save the "kubeadm join" command as the token will be used to join the other nodes to the new cluster.
        # kubectl get pods --all-namespaces
        NOTE: All pods should be in a "Running" state; if they are not (e.g. stuck in "ContainerCreating"), run the following commands:
            # kubectl describe pods | grep -i ContainerCreating
            # kubectl apply -f <pod_network_manifest_URL>
            # kubectl get pods --all-namespaces
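The "all Running" check can be scripted by parsing `kubectl get pods --all-namespaces` output. A sketch shown against canned output so it can run anywhere (on the master, swap the here-doc for the real kubectl invocation; the pod names in the sample are illustrative):

```shell
# List pods not yet in the Running state, parsing `kubectl get pods
# --all-namespaces` style output. The here-doc below is canned sample
# output standing in for the real command.
pods=$(cat <<'EOF'
NAMESPACE     NAME                          READY   STATUS              RESTARTS
kube-system   etcd-master                   1/1     Running             0
kube-system   kube-flannel-ds-x7k2p         0/1     ContainerCreating   0
EOF
)

# Column 4 is STATUS; skip the header row and print the NAME of laggards
not_running=$(printf '%s\n' "$pods" | awk 'NR > 1 && $4 != "Running" {print $2}')
echo "$not_running"   # -> kube-flannel-ds-x7k2p
```

An empty result means every pod is Running and it is safe to proceed; anything printed is a pod worth a `kubectl describe`.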
      • Install a pod network (flannel)
        # kubectl apply -f <kube-flannel.yml_URL>
    • Setup the minion nodes
      • Login to the Kubernetes Slave servers (minions) via SSH
      • Join each of these servers to the cluster
        # kubeadm join --token=blahblahblah <Master_IP>:<Master_Port>
      • Navigate to the SSH session for the master server and verify the state of the cluster
        # kubectl get nodes
        # kubectl get pods --all-namespaces
        NOTE: All should be in a "Running" state
        # kubectl describe svc
    • Dashboard (GUI) Setup
      • Create a service account and role binding for the user that will be used to access the dashboard
        # vi dashboard_service_account.yaml
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: admin-user
          namespace: kube-system
        # vi dashboard_service_account_role_binding.yaml
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: admin-user
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - kind: ServiceAccount
          name: admin-user
          namespace: kube-system
        # kubectl create -f dashboard_service_account.yaml
        # kubectl create -f dashboard_service_account_role_binding.yaml
      • Record the value after “token:” for the user we just created; we will use it when logging into the dashboard
        # kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
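Fishing the token out of the `describe secret` output by eye is error-prone; it can be extracted with a one-line awk filter. A sketch shown against canned output so it runs anywhere (the secret name and token value are obviously fake; on the master, swap the here-doc for the real kubectl pipeline):

```shell
# Extract the bearer token from `kubectl -n kube-system describe secret`
# style output. The here-doc is canned sample output with a fake token.
secret=$(cat <<'EOF'
Name:         admin-user-token-abc12
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiJ9.FAKETOKEN
EOF
)

# The token line starts with "token:"; field 2 is the value itself
token=$(printf '%s\n' "$secret" | awk '/^token:/ {print $2}')
echo "$token"   # -> eyJhbGciOiJSUzI1NiJ9.FAKETOKEN
```

The captured value is what gets pasted into the dashboard's "Token" login field a few steps below.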
      • Deploy the “dashboard” pod which provides a GUI to manage the cluster
        # kubectl apply -f <kubernetes-dashboard.yaml_URL>
      • From your workstation, you can access the GUI by running the following steps
        • NOTE: These steps assume the “kubectl” binary along with the “/etc/kubernetes/admin.conf” file from the Master server have been downloaded to the workstation, with admin.conf stored under $HOME/.kube/ as a file named “config”
        • Setup a proxy session which will allow the API server to be accessed for authentication
          # kubectl proxy
        • Open a web browser on your workstation and browse to the following URL
          http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
          NOTE: You will need to select "Token" and enter the token value that we captured a few steps back.
  • NetApp Trident Storage Orchestration
    • From the master node, download and extract the latest version of the NetApp Trident installer tarball from the Trident GitHub releases page
    • Change the current path to the NetApp Trident directory
      # cd trident-installer
    • Copy “sample-input/backend-ontap-nas.json” to “setup” and modify it as needed
      # cp sample-input/backend-ontap-nas.json setup/backend.json
      # vi setup/backend.json
      {
          "version": 1,
          "storageDriverName": "ontap-nas",
          "managementLIF": "<Management_LIF_IP>",
          "dataLIF": "<Data_LIF_IP>",
          "svm": "mn1_prod_svm_files",
          "username": "svc-netappdvp",
          "password": "PASSWORD_GOES_HERE"
      }
      NOTE: See the "Sources" section below for details on how to create an SVM account for Trident.
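A malformed backend.json is a common cause of a failed backend registration, and it is cheap to lint the file before running the installer. A sketch using a scratch file and Python's stdlib JSON checker (the LIF addresses below are illustrative placeholders):

```shell
# Lint a Trident backend definition before handing it to the installer.
# The scratch file stands in for setup/backend.json; the LIF IPs are
# illustrative values only.
backend=$(mktemp)
cat > "$backend" <<'EOF'
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.0.0.20",
    "dataLIF": "10.0.0.21",
    "svm": "mn1_prod_svm_files",
    "username": "svc-netappdvp",
    "password": "PASSWORD_GOES_HERE"
}
EOF

# json.tool exits non-zero on any syntax error (missing comma, brace, etc.)
if python3 -m json.tool "$backend" > /dev/null 2>&1; then
  verdict="valid JSON"
else
  verdict="invalid JSON"
fi
rm -f "$backend"
echo "$verdict"   # -> valid JSON
```

Running the same check against the real setup/backend.json catches typos (a trailing comma, a missing quote) before they surface as a confusing installer error.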
    • While still logged in to the aforementioned NetApp system via SSH, temporarily set the default export-policy rule to allow RW/Root to all clients (Required by Trident install)
      # export-policy rule modify -vserver <SVM_Name> -policy default -rorule any -rwrule any -superuser any -protocol any -ruleindex 1 -clientmatch 0.0.0.0/0
    • Run the Trident installer
      # ./tridentctl install --dry-run -n trident
      # ./tridentctl install -n trident
    • Verify trident deployed successfully
      # kubectl logs -n trident trident-launcher -f
      # kubectl get pods --all-namespaces --watch=true
    • Copy the “tridentctl” binary to a directory in your $PATH
      # cp tridentctl /usr/local/bin/
      # chmod +x /usr/local/bin/tridentctl
    • Register the backend with Trident
      # tridentctl create backend -n trident -f setup/backend.json
    • Login to the aforementioned NetApp system via SSH and revert the default export-policy rule to allow ONLY RO to all clients
      # export-policy rule modify -vserver <SVM_Name> -policy default -rorule any -rwrule none -superuser none -protocol any -ruleindex 1 -clientmatch 0.0.0.0/0
      # vol modify -vserver <SVM_Name> -volume trident_trident
    • Copy “sample-input/storage-class-basic.yaml.templ” to “setup/storage_class-default_dburkland.yaml” and modify it as needed
      # cp sample-input/storage-class-basic.yaml.templ setup/storage_class-default_dburkland.yaml
      # vi setup/storage_class-default_dburkland.yaml
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: default-dburkland
      provisioner: netapp.io/trident
      parameters:
        backendType: "ontap-nas"
    • Create the new storage class using the file created in the previous step
      # kubectl create -f setup/storage_class-default_dburkland.yaml
    • Define an example persistent volume claim (PVC)
      # vi setup/pvc-dburkland_harvest_carbon.yaml
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: var-lib-carbon
        namespace: harvest
        annotations:
          trident.netapp.io/exportPolicy: "prod_kubernetes_nodes"
          volume.beta.kubernetes.io/storage-class: "default-dburkland"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
    • Create an example PVC
      # kubectl create namespace harvest
      # kubectl create -f setup/pvc-dburkland_harvest_carbon.yaml
      # kubectl get pvc --all-namespaces
      • NOTE: If you are interested in deploying an example application that utilizes NetApp Trident click here
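The `kubectl get pvc` step above succeeds only when the claim reaches the "Bound" phase, which can be checked programmatically. A sketch shown against canned output so it runs anywhere (on the master, swap the here-doc for the real kubectl invocation; the volume name in the sample is illustrative):

```shell
# Confirm a PVC reached the Bound phase by parsing `kubectl get pvc
# --all-namespaces` style output. The here-doc is canned sample output
# standing in for the real command.
pvcs=$(cat <<'EOF'
NAMESPACE   NAME             STATUS   VOLUME                   CAPACITY
harvest     var-lib-carbon   Bound    default-var-lib-carbon   5Gi
EOF
)

# Column 3 is STATUS; select the row for our claim by its NAME column
status=$(printf '%s\n' "$pvcs" | awk '$2 == "var-lib-carbon" {print $3}')
echo "$status"   # -> Bound
```

Any value other than "Bound" (typically "Pending") means Trident could not provision the volume, and the Trident pod logs are the next place to look.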
    • Troubleshooting Commands
      # kubectl get namespace
      # kubectl get pods --all-namespaces
      # kubectl get pvc --all-namespaces
      # kubectl describe pod <pod_name> -n <namespace>
      # kubectl logs <pod_name> -n <namespace>
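The describe/logs pair is almost always run back-to-back against the same pod, so it is worth wrapping in a tiny helper. A sketch that echoes the commands instead of executing them, so it runs anywhere (the function name and the pod/namespace arguments are illustrative):

```shell
# Compose the describe/logs command pair for a given pod and namespace.
# Echoing instead of executing keeps the sketch runnable without a cluster;
# drop the echo to turn it into a real helper.
debug_pod_cmds() {
  ns="$1"
  pod="$2"
  echo "kubectl describe pod $pod -n $ns"
  echo "kubectl logs $pod -n $ns"
}

# Example: generate the troubleshooting commands for the Trident launcher
debug_pod_cmds trident trident-launcher
```

Dropping the function into ~/.bashrc on the master node saves retyping the namespace flag during a debugging session.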
  • Sources