Baidu AI Cloud


          Cloud Container Engine

          Use Cloud File Storage (CFS)

          Preparation

          You can mount a CFS instance in a container after completing the following preparation:

          • Register an account and complete identity verification.
          • Create an available container cluster.
          • Activate the CFS service.
          • Create a CFS instance and mount points.

          Create Container Cluster

          1.Create a container cluster. Refer to Create Cluster for operation steps.

          2.Download the command-line client kubectl and connect to the cluster. Refer to Connect to the Kubernetes Cluster through Kubectl.

          Create CFS Instance and Mount Point

          1.Create a CFS instance. For operation steps, please see Create File System.

          2.Add a CFS mount point. For operation steps, please see Add Mount Point.

          Note: The CFS instance and mount point must be in the same VPC as the cluster nodes.

          3.Get the CFS mount address. For operation steps, please see Get Mount Address.

          This guide assumes that the CFS mount point address is cfs-test.agilecloud.com.

          Operation Guide

          Static PV/PVC Mounting CFS

          1.Create PV and PVC resources in the cluster.

          Using kubectl, execute kubectl create -f pv-cfs.yaml to complete PV creation.

          The corresponding pv-cfs.yaml file is as follows:

          apiVersion: v1 
          kind: PersistentVolume 
          metadata: 
            name: pv-cfs 
          spec: 
            capacity: 
              storage: 8Gi 
            accessModes: 
              - ReadWriteMany 
            persistentVolumeReclaimPolicy: Retain 
            mountOptions: 
              - hard 
              - nfsvers=4.1 
              - nordirplus 
            nfs: 
              path: / 
              server: cfs-test.agilecloud.com 

          Note: The server field in the yaml corresponds to the CFS mount point address.

          Note: The path field in the yaml corresponds to the CFS mount directory, which must exist before mounting.

          After creating the PV, run kubectl get pv to see a PV in Available status, as shown below:

          $ kubectl get pv 
          NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE 
          pv-cfs    8Gi        RWX            Retain           Available                                      3s 

          Create a PVC that can be bound to the PV.

          Using kubectl, execute kubectl create -f pvc-cfs.yaml to complete the creation of the PVC.

          The corresponding pvc-cfs.yaml file is as follows:

          kind: PersistentVolumeClaim 
          apiVersion: v1 
          metadata: 
            name: pvc-cfs 
          spec: 
            accessModes: 
              - ReadWriteMany 
            resources: 
              requests: 
                storage: 8Gi 

          The PVC is in Pending state before binding.

          $ kubectl get pvc 
          NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE 
          pvc-cfs   Pending                                                      2s 

          After binding, the PV and PVC status changes to Bound.

          $ kubectl get pv 
          NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS   REASON    AGE 
          pv-cfs    8Gi        RWX            Retain           Bound     default/pvc-cfs                            36s 
          $ kubectl get pvc 
          NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE 
          pvc-cfs   Bound     pv-cfs    8Gi        RWX                           1m 

          For more descriptions of PV and PVC settings and fields, see the Kubernetes official documentation.

          2.Mount the PVC in a Pod.

          Specify the corresponding PVC name in the Pod spec, and use kubectl to execute kubectl create -f demo-cfs-rc.yaml to complete the creation of the ReplicationController.

          The corresponding demo-cfs-rc.yaml file is as follows:

          apiVersion: v1 
          kind: ReplicationController 
          metadata: 
            name: nginx 
          spec: 
            replicas: 20 
            selector: 
              app: nginx 
            template: 
              metadata: 
                name: nginx 
                labels: 
                  app: nginx 
              spec: 
                containers: 
                - name: nginx 
                  image: nginx 
                  ports: 
                  - containerPort: 80 
                  volumeMounts: 
                  - mountPath: "/cfs-volume" 
                    name: mycfs 
                volumes: 
                - name: mycfs 
                  persistentVolumeClaim: 
                    claimName: pvc-cfs 

          After the Pod is created, you can read and write the /cfs-volume path in the container to access the content on the corresponding CFS storage.

          Since accessModes is set to ReadWriteMany when creating the PV and PVC, the PVC can be read and written by Pods on multiple nodes.
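          Because every replica sees the same files, concurrent writers may conflict. If each Pod should instead write to its own directory on the shared volume, the volumeMounts entry can use subPathExpr (stable since Kubernetes 1.17). A sketch, including the fieldRef environment variable the expansion requires:

          ```yaml
          # Sketch: give each replica its own subdirectory on the shared CFS volume.
          containers:
          - name: nginx
            image: nginx
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            volumeMounts:
            - mountPath: "/cfs-volume"
              name: mycfs
              subPathExpr: $(POD_NAME)
          ```

          Each Pod then reads and writes /cfs-volume as before, but the data lands in a per-Pod subdirectory on the CFS export.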

          3.Release PV and PVC resources.

          After the use of storage resources is completed, PVC and PV resources can be released.

          Use the following command to release PVC.

          $ kubectl delete -f  pvc-cfs.yaml 

          After the PVC is released, the state of the PV bound to it changes to Released, as shown below:

          NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM             STORAGECLASS   REASON    AGE 
          pv-cfs    8Gi        RWX            Retain           Released   default/pvc-cfs                            16m 

          Enter the following command to release the PV resource.

          $ kubectl delete -f  pv-cfs.yaml 

          Dynamic PV/PVC mounting CFS

          1.Create StorageClass and Provisioner.

          dynamic-cfs-template.yaml is a yaml template that contains the cluster resources to be created.

          The contents of the dynamic-cfs-template.yaml file are as follows:

          kind: ClusterRole 
          apiVersion: rbac.authorization.k8s.io/v1 
          metadata: 
            name: nfs-client-provisioner-runner 
          rules: 
            - apiGroups: [""] 
              resources: ["persistentvolumes"] 
              verbs: ["get", "list", "watch", "create", "delete"] 
            - apiGroups: [""] 
              resources: ["persistentvolumeclaims"] 
              verbs: ["get", "list", "watch", "update"] 
            - apiGroups: ["storage.k8s.io"] 
              resources: ["storageclasses"] 
              verbs: ["get", "list", "watch"] 
            - apiGroups: [""] 
              resources: ["events"] 
              verbs: ["create", "update", "patch"] 
          ---
          kind: ClusterRoleBinding 
          apiVersion: rbac.authorization.k8s.io/v1 
          metadata: 
            name: run-nfs-client-provisioner 
          subjects: 
            - kind: ServiceAccount 
              name: nfs-client-provisioner 
              namespace: kube-system 
          roleRef: 
            kind: ClusterRole 
            name: nfs-client-provisioner-runner 
            apiGroup: rbac.authorization.k8s.io 
          ---
          kind: Role 
          apiVersion: rbac.authorization.k8s.io/v1 
          metadata: 
            name: leader-locking-nfs-client-provisioner 
            namespace: kube-system 
          rules: 
            - apiGroups: [""] 
              resources: ["endpoints"] 
              verbs: ["get", "list", "watch", "create", "update", "patch"] 
          ---
          kind: RoleBinding 
          apiVersion: rbac.authorization.k8s.io/v1 
          metadata: 
            name: leader-locking-nfs-client-provisioner 
            namespace: kube-system 
          subjects: 
            - kind: ServiceAccount 
              name: nfs-client-provisioner 
              # replace with namespace where provisioner is deployed 
              namespace: kube-system 
          roleRef: 
            kind: Role 
            name: leader-locking-nfs-client-provisioner 
            apiGroup: rbac.authorization.k8s.io 
          
          ---
          apiVersion: v1 
          kind: ServiceAccount 
          metadata: 
            name: nfs-client-provisioner 
            namespace: kube-system 
          ---
          apiVersion: v1 
          kind: PersistentVolume 
          metadata: 
            name: pv-cfs 
          spec: 
            capacity: 
              storage: 5Gi 
            accessModes: 
              - ReadWriteMany 
            persistentVolumeReclaimPolicy: Retain 
            mountOptions: 
              - hard 
              - nfsvers=4.1 
              - nordirplus 
            nfs: 
              path: {{NFS_PATH}} 
              server: {{NFS_SERVER}} 
          ---
          kind: PersistentVolumeClaim 
          apiVersion: v1 
          metadata: 
            name: pvc-cfs 
            namespace: kube-system 
          spec: 
            accessModes: 
              - ReadWriteMany 
            resources: 
              requests: 
                storage: 5Gi 
          ---
          kind: Deployment 
          apiVersion: apps/v1 
          metadata: 
            name: nfs-client-provisioner 
            namespace: kube-system 
          spec: 
            replicas: 1 
            selector: 
              matchLabels: 
                app: nfs-client-provisioner 
            strategy: 
              type: Recreate 
            template: 
              metadata: 
                labels: 
                  app: nfs-client-provisioner 
              spec: 
                serviceAccountName: nfs-client-provisioner 
                containers: 
                  - name: nfs-client-provisioner 
                    image: hub.agilecloud.com/jpaas-public/nfs-client-provisioner:latest 
                    imagePullPolicy: Always 
                    volumeMounts: 
                      - name: nfs-client-root 
                        mountPath: /persistentvolumes 
                    env: 
                      - name: PROVISIONER_NAME 
                        value: {{PROVISIONER_NAME}} 
                      - name: NFS_SERVER 
                        value: {{NFS_SERVER}} 
                      - name: NFS_PATH 
                        value: {{NFS_PATH}} 
                      - name: SHARE_PATH 
                        value: "{{SHARE_PATH}}" 
                volumes: 
                  - name: nfs-client-root 
                    persistentVolumeClaim: 
                      claimName: pvc-cfs 
          
          ---
          apiVersion: storage.k8s.io/v1 
          kind: StorageClass 
          metadata: 
            name: {{STORAGE_CLASS_NAME}} 
          provisioner: {{PROVISIONER_NAME}} 
          parameters: 
            archiveOnDelete: "{{ARCHIVE_ON_DELETE}}" 
          mountOptions: 
            - hard 
            - nfsvers=4.1 
            - nordirplus 

          The user-defined options in the dynamic-cfs-template.yaml template file are as follows:

          • NFS_SERVER: CFS mount point address.
          • NFS_PATH: CFS remote mount directory. The directory must exist before use; if it does not, the provisioner plug-in will fail to start.
          • SHARE_PATH: whether the CFS mount directories of different PVCs are shared. true - shared (not isolated), false - isolated. If isolation is specified, a subdirectory is created for each PVC under the CFS mount directory and used as that PVC's mount directory; otherwise, all PVCs share the mount directory.
          • ARCHIVE_ON_DELETE: whether to keep the corresponding data after a PVC is deleted. It only takes effect when PVC mount directories are isolated: true - retain, false - do not retain. When the mount directory is shared, deleting a PVC does not delete any data. If set to not retain, the subdirectory corresponding to the PVC is deleted outright; otherwise the subdirectory is retained under a name with an archive prefix.
          • STORAGE_CLASS_NAME: the name of the StorageClass to create.
          • PROVISIONER_NAME: the provisioner name.
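          The directory given as NFS_PATH must exist on the CFS export before the provisioner starts. A minimal sketch that only prints the commands to run on a host in the same VPC (cfs-test.agilecloud.com and /cce/shared are the example values used in this guide; the host needs an NFS client installed):

          ```shell
          #!/bin/sh
          # Print the commands that pre-create the NFS_PATH directory on the CFS export.
          CFS_SERVER="cfs-test.agilecloud.com"   # CFS mount point address (example value)
          NFS_PATH="/cce/shared"                 # remote directory the provisioner will use

          echo "mount -t nfs -o nfsvers=4.1,nordirplus ${CFS_SERVER}:/ /mnt"
          echo "mkdir -p /mnt${NFS_PATH}"
          echo "umount /mnt"
          ```

          Run the printed commands as root on any node in the same VPC, then deploy the provisioner.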

          On systems with a shell, you can use the following replace.sh script to replace the template variables in the yaml template.

          #!/bin/sh 
          # user defined vars 
          
          NFS_SERVER="cfs-test.agilecloud.com" 
          NFS_PATH="/cce/shared"
          SHARE_PATH="true" # Whether mount directories of different PVCs are shared: true - shared, false - isolated 
          ARCHIVE_ON_DELETE="false" # Whether data is retained after the PVC is deleted; only takes effect when mount directories are isolated: true - retain, false - do not retain 
          STORAGE_CLASS_NAME="sharedcfs" # StorageClass name 
          PROVISIONER_NAME="agilecloud/$STORAGE_CLASS_NAME" # Provisioner name 
          
          YAML_FILE="./dynamic-cfs-template.yaml" 
          
          # replace template vars in yaml file 
          
          sed -i "s#{{SHARE_PATH}}#$SHARE_PATH#" $YAML_FILE 
          sed -i "s#{{ARCHIVE_ON_DELETE}}#$ARCHIVE_ON_DELETE#" $YAML_FILE 
          sed -i "s#{{STORAGE_CLASS_NAME}}#$STORAGE_CLASS_NAME#" $YAML_FILE 
          sed -i "s#{{PROVISIONER_NAME}}#$PROVISIONER_NAME#" $YAML_FILE 
          sed -i "s#{{NFS_SERVER}}#$NFS_SERVER#" $YAML_FILE 
          sed -i "s#{{NFS_PATH}}#$NFS_PATH#" $YAML_FILE 

          Replace the shell variables in the first half of the script with the expected values, place the replace.sh script and the dynamic-cfs-template.yaml file in the same directory, and execute sh replace.sh.

          Alternatively, replace the template variables in the yaml file manually with the expected values.
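          Whichever option is used, it is worth checking that no {{...}} placeholders remain before applying the file. A self-contained demonstration of the sed substitution and the check, using a throwaway two-line template instead of the real file:

          ```shell
          #!/bin/sh
          # Demonstrate the placeholder substitution on a throwaway two-line template,
          # then verify that no {{...}} template variables remain.
          TMP=$(mktemp)
          printf 'server: {{NFS_SERVER}}\npath: {{NFS_PATH}}\n' > "$TMP"

          NFS_SERVER="cfs-test.agilecloud.com"
          NFS_PATH="/cce/shared"
          sed -i "s#{{NFS_SERVER}}#$NFS_SERVER#" "$TMP"
          sed -i "s#{{NFS_PATH}}#$NFS_PATH#" "$TMP"

          LEFTOVER=$(grep -c '{{' "$TMP")   # 0 when every variable was replaced
          cat "$TMP"                        # server: cfs-test.agilecloud.com
                                            # path: /cce/shared
          rm -f "$TMP"
          ```

          The same grep check (grep -c '{{' dynamic-cfs-template.yaml) works on the real template after running replace.sh.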

          Finally, use the kubectl tool to execute kubectl create -f dynamic-cfs-template.yaml to complete the creation of StorageClass and provisioner.

          $ kubectl create -f dynamic-cfs-template.yaml 
          clusterrole "nfs-client-provisioner-runner" created 
          clusterrolebinding "run-nfs-client-provisioner" created 
          role "leader-locking-nfs-client-provisioner" created 
          rolebinding "leader-locking-nfs-client-provisioner" created 
          serviceaccount "nfs-client-provisioner" created 
          persistentvolume "pv-cfs" created 
          persistentvolumeclaim "pvc-cfs" created 
          deployment "nfs-client-provisioner" created 
          storageclass "sharedcfs" created 
          $ kubectl get pod --namespace kube-system|  grep provisioner 
          nfs-client-provisioner-c94494f6d-dlxmj   1/1       Running   0          26s 

          If the corresponding Pod enters the Running state, the resources required for dynamic PV binding have been created successfully.

          2.Dynamically generate PV and bind it when creating PVC.

          If the StorageClass name created above is specified in the PVC spec, then when the PVC is created, the provisioner associated with that StorageClass is automatically invoked to generate a corresponding PV for binding.

          Using kubectl, execute kubectl create -f dynamic-pvc-cfs.yaml to complete the creation of the PVC.

          Assuming the name of the created StorageClass is sharedcfs, the corresponding dynamic-pvc-cfs.yaml file is as follows:

          kind: PersistentVolumeClaim 
          apiVersion: v1 
          metadata: 
            name: dynamic-pvc-cfs 
          spec: 
            accessModes: 
              - ReadWriteMany 
            storageClassName: sharedcfs 
            resources: 
              requests: 
                storage: 5Gi 

          After creating the PVC, you can see that a corresponding PV is automatically created and the PVC status changes to Bound, i.e., the PVC has been bound to the newly created PV.

          $ kubectl create -f dynamic-pvc-cfs.yaml 
          persistentvolumeclaim "dynamic-pvc-cfs" created 
          $ kubectl get pvc 
          NAME              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE 
          dynamic-pvc-cfs   Bound     pvc-6dbf3265-bbe0-11e8-bc54-fa163e08135d   5Gi        RWX            sharedcfs      4s 
          $ kubectl get pv 
          NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                     STORAGECLASS   REASON    AGE 
          pv-cfs                                     5Gi        RWX            Retain           Bound     kube-system/pvc-cfs                                21m 
          pvc-6dbf3265-bbe0-11e8-bc54-fa163e08135d   5Gi        RWX            Delete           Bound     default/dynamic-pvc-cfs   sharedcfs                7s 

          3.Mount the PVC in a Pod.

          Specify the corresponding PVC name in the Pod spec, and use kubectl to execute kubectl create -f dynamic-cfs-pod.yaml to complete the resource creation.

          The corresponding dynamic-cfs-pod.yaml file is as follows:

          apiVersion: v1 
          kind: Pod 
          metadata: 
            name: test-pvc-pod 
            labels: 
              app: test-pvc-pod 
          spec: 
            containers: 
            - name: test-pvc-pod 
              image: nginx 
              volumeMounts: 
                - name: cfs-pvc 
                  mountPath: "/cfs-volume" 
            volumes: 
              - name: cfs-pvc 
                persistentVolumeClaim: 
                  claimName: dynamic-pvc-cfs 

          After the Pod is created, you can read and write the /cfs-volume path in the container to access the content on the corresponding CFS storage.

          4.Delete the dynamically bound PV when releasing the PVC.

          When the PVC is deleted, the dynamically bound PV is deleted along with it, and the data in it is retained or deleted according to the user-defined SHARE_PATH and ARCHIVE_ON_DELETE options.
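          The two retention behaviors can be simulated locally. With isolated directories, "do not retain" removes the PVC's subdirectory outright, while "retain" keeps it under an archive prefix (the nfs-client provisioner conventionally uses an archived- prefix; this is a local simulation, not the provisioner's actual code):

          ```shell
          #!/bin/sh
          # Simulate the two deletion behaviors for an isolated PVC subdirectory.
          ROOT=$(mktemp -d)
          mkdir "$ROOT/pvc-6dbf3265"              # the PVC's isolated subdirectory
          : > "$ROOT/pvc-6dbf3265/data.txt"       # some data written by the Pod

          ARCHIVE_ON_DELETE="true"
          if [ "$ARCHIVE_ON_DELETE" = "true" ]; then
            # retain: rename with an archive prefix, keeping the data
            mv "$ROOT/pvc-6dbf3265" "$ROOT/archived-pvc-6dbf3265"
          else
            # do not retain: delete the subdirectory and its data
            rm -rf "$ROOT/pvc-6dbf3265"
          fi

          ls "$ROOT"    # archived-pvc-6dbf3265
          ```

          With SHARE_PATH set to shared, neither branch runs: deleting a PVC leaves the common mount directory untouched.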

          $ kubectl get pvc 
          NAME              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE 
          dynamic-pvc-cfs   Bound     pvc-6dbf3265-bbe0-11e8-bc54-fa163e08135d   5Gi        RWX            sharedcfs      9m 
          $ kubectl get pv 
          NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                     STORAGECLASS   REASON    AGE 
          pv-cfs                                     5Gi        RWX            Retain           Bound     kube-system/pvc-cfs                                31m 
          pvc-6dbf3265-bbe0-11e8-bc54-fa163e08135d   5Gi        RWX            Delete           Bound     default/dynamic-pvc-cfs   sharedcfs                9m 
          $ kubectl delete -f dynamic-pvc-cfs.yaml 
          persistentvolumeclaim "dynamic-pvc-cfs" deleted 
          $ kubectl get pv 
          NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS   REASON    AGE 
          pv-cfs    5Gi        RWX            Retain           Bound     kube-system/pvc-cfs                            31m 
          $ kubectl get pvc 
          No resources found.