Use Cloud Disk Storage (CDS)
CDS disks can currently be mounted in the cluster through either FlexVolume or CSI; both require a cluster Kubernetes version of 1.11 or later. FlexVolume supports mounting directly into a Pod as a volume. CSI requires creating PV and PVC storage resources first, and then mounting the PVC into the Pod. Both mounting schemes can meet the requirement of persisting specific data in containers. For the detailed differences between the two schemes, refer to the official Kubernetes documentation.
Preparation
The user must make the following preparations before mounting the CDS instance in the container:
- Register a Baidu Account and complete the Identity Verification;
- Create CCE K8S Cluster
Note: The CDS disks must already be created in the same region and availability zone as the cluster nodes, and must be in the unmounted state.
Create Container Cluster
- When you create a container cluster, you can refer to Create Cluster for the operation steps.
- Download the command line client kubectl and connect to the cluster. For the operation steps, refer to Connect to Kubernetes Cluster via kubectl.
Operation Guide
Mount via PV/PVC
1. Install CSI CDS Plug-in
- Select in turn: Cloud Container Engine CCE -> Helm Template -> Baidu AI Cloud Template.
- Search for the template by name: cce-csi-cds-plugin
- Click Install and enter the corresponding parameters.
Parameter Descriptions
- Instance name: the plug-in instance name, for example: cds;
- Deploy cluster: select the cluster where the CDS CSI plug-in needs to be deployed;
- Namespace: the namespace that manages the helm metadata of the instance, for example: kube-system;
- kubernetes version: enter the version of the actually deployed cluster; currently supported: 1.18, 1.16, 1.13;
- nodes: if you specified a custom kubelet data directory when deploying the cluster nodes, add that directory to this list. Otherwise, keep the default setting;
- region: Enter the region for the cluster to deploy, including bj (Beijing), su (Suzhou), gz (Guangzhou), bd (Baoding), hkg (Hong Kong), and fwh (Wuhan);
2. Mount CDS into Pod
There are two ways to use CDS through PV/PVC:
- Static mounting: The user needs to create a CDS in Baidu AI Cloud in advance (for the operation method, see CDS documentation), and then create PV and PVC resources in the cluster through the volume id of the CDS.
- Dynamic mounting: When the user declares the PVC in the cluster, the CDS disk is automatically created and dynamically associated with the PV.
The static mounting method requires the user-provided CDS disk to be in the unmounted state and the correct volume id to be filled in. Dynamic mounting may be more convenient in some scenarios, but note that CDS disks are automatically created and billed during dynamic mounting. A dynamically created CDS is deleted when the user deletes the PVC (the reclaimPolicy of the StorageClass must be set to Delete); however, if the user deletes the CCE cluster directly without deleting the PVC, the CDS must be cleaned up manually.
Static Mounting
Create PV/PVC
Use kubectl and execute kubectl create -f pv-cds.yaml to create the PV. The corresponding pv-cds.yaml file is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cds
  namespace: "default"
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  csi:
    driver: "csi-cdsplugin"
    volumeHandle: "v-xxxx"
  persistentVolumeReclaimPolicy: Retain
Note: The volumeHandle field in the yaml corresponds to the CDS volume id (short id).
After creating the PV, run kubectl get pv to see a PV in the Available state, as shown below:
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-cds   5Gi        RWO            Retain           Available                                   22s
Create a PVC that can be bound to the PV
Use kubectl and execute kubectl create -f pvc-cds.yaml to create the PVC.
The corresponding pvc-cds.yaml file is as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cds-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Before binding, the PVC is in the Pending state.
$ kubectl get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-cds-pvc   Pending                                                     2s
After binding, the status of PV and PVC becomes Bound.
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pv-cds   5Gi        RWO            Retain           Bound    default/csi-cds-pvc                           4s
$ kubectl get pvc
NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-cds-pvc   Bound    pv-cds   5Gi        RWO                           36s
For more settings and field descriptions of PV and PVC, see k8s Official Documentation.
Create POD
Just specify the corresponding PVC name in the Pod spec, then use kubectl and execute kubectl create -f demo-cds-rc.yaml to create the ReplicaSet. The corresponding demo-cds-rc.yaml file is as follows:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-server-rs
  labels:
    test: ws
spec:
  replicas: 1
  selector:
    matchLabels:
      se: ws
  template:
    metadata:
      labels:
        se: ws
      name: web-server
    spec:
      containers:
        - name: web-server
          image: nginx
          volumeMounts:
            - mountPath: /var/lib/www/html
              name: csi-cds-pvc
      volumes:
        - name: csi-cds-pvc
          persistentVolumeClaim:
            claimName: csi-cds-pvc
After the Pod is created, you can read and write the /var/lib/www/html path in the container to access the contents of the corresponding CDS storage.
Since the only access mode supported when creating the PV and PVC is ReadWriteOnce, the PVC can be mounted read-write only by Pods on a single node.
Release PV/PVC
When the storage resources are no longer needed, the PVC and PV resources can be released. Use the following command to release the PVC.
$ kubectl delete -f pvc-cds.yaml
After releasing the PVC, the status of the PV originally bound to it changes to Released, as shown below:
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                 STORAGECLASS   REASON   AGE
pv-cds   5Gi        RWO            Retain           Released   default/csi-cds-pvc                           4m
Enter the following command to release PV resources.
$ kubectl delete -f pv-cds.yaml
Dynamic Mounting
Create StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hp1                    # The name can be customized.
provisioner: csi-cdsplugin
parameters:
  paymentTiming: "Postpaid"    # Prepaid or Postpaid
  storageType: "hp1"           # Supports cloud_hp1, hp1, and hdd.
  reservationLength: "3"       # Required in Prepaid mode
reclaimPolicy: Delete          # Delete or Retain; defaults to Delete
StorageClass Parameter Descriptions:
- storageType: the type of CDS disk to create dynamically; supports cloud_hp1 (general SSD), hp1 (high-performance cloud disk), and hdd (general HDD). For details, see the storageType parameter of the CDS disk creation interface in the BCC OpenAPI: https://cloud.baidu.com/doc/BCC/s/Ujwvyo1ta
- paymentTiming: the payment method, Prepaid or Postpaid.
- reservationLength: the disk reservation period in months; required in Prepaid mode.
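For instance, a StorageClass for prepaid disks might look like the following sketch (the name hp1-prepaid is illustrative; the field layout mirrors the Postpaid example above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hp1-prepaid            # illustrative name
provisioner: csi-cdsplugin
parameters:
  paymentTiming: "Prepaid"
  storageType: "hp1"
  reservationLength: "3"       # reserve the disk for 3 months
reclaimPolicy: Retain          # Retain shown here; choose per your billing needs
```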
Create PVC
Specify the name of the StorageClass created above in the PVC spec; when the PVC is created, the corresponding PV is automatically generated and bound. Use kubectl and execute kubectl create -f csi-pvc-cds.yaml to create the PVC. Assuming the StorageClass is named hp1, the corresponding csi-pvc-cds.yaml file is as follows:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-pvc-cds
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hp1
  resources:
    requests:
      storage: 5Gi
PVC Parameter Description: the storage request in the PVC is restricted according to the storageType of the StorageClass:
- hp1: 5–32765 GB
- cloud_hp1: 50–32765 GB
- hdd: 5–32765 GB
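For example, a PVC against a cloud_hp1 StorageClass must request at least 50 GB; a sketch, assuming a StorageClass named cloud-hp1 with storageType "cloud_hp1" exists:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-pvc-cds-ssd        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloud-hp1  # assumed StorageClass with storageType "cloud_hp1"
  resources:
    requests:
      storage: 50Gi            # minimum for cloud_hp1; a 5Gi request would be rejected
```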
After creating the PVC, you can see that the corresponding PV is automatically created, and the PVC status changes to Bound. That is, the PVC has been bound to the created PV.
$ kubectl create -f csi-pvc-cds.yaml
persistentvolumeclaim "csi-pvc-cds" created
$ kubectl get pvc
NAME          STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-pvc-cds   Bound    pvc-1ab36e4d1d2711e9   5Gi        RWO            hp1            4s
$ kubectl get pv
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pvc-1ab36e4d1d2711e9   5Gi        RWO            Delete           Bound    default/csi-pvc-cds   hp1                     7s
Mount PVC in Pod
Just specify the corresponding PVC name in the Pod spec, then use kubectl to execute kubectl create -f dynamic-cds-pod.yaml to create the resource. The corresponding dynamic-cds-pod.yaml file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
  labels:
    app: test-pvc-pod
spec:
  containers:
    - name: test-pvc-pod
      image: nginx
      volumeMounts:
        - name: cds-pvc
          mountPath: "/cds-volume"
  volumes:
    - name: cds-pvc
      persistentVolumeClaim:
        claimName: csi-pvc-cds
After creating the Pod, you can read and write the /cds-volume path in the container to access the contents of the corresponding CDS storage.
Dynamically Destroy the Bound PV When Releasing the PVC
When a PVC is deleted, whether the dynamically provisioned PV bound to it is deleted automatically is determined by the reclaimPolicy of the StorageClass. reclaimPolicy supports Delete and Retain; the default is Delete.
- Delete: deleting the PVC automatically deletes the PV, and the dynamically created CDS is released automatically.
- Retain: when the PVC is deleted, the PV status changes to Released and the PV is not deleted automatically. The automatically created CDS must be released manually on the CDS console.
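To keep dynamically created disks after PVC deletion, set reclaimPolicy to Retain in the StorageClass; a sketch (the name hp1-retain is illustrative; the field layout follows the hp1 example above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hp1-retain             # illustrative name
provisioner: csi-cdsplugin
parameters:
  paymentTiming: "Postpaid"
  storageType: "hp1"
reclaimPolicy: Retain          # PV and CDS survive PVC deletion; release the CDS manually
```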
$ kubectl delete pvc csi-pvc-cds
persistentvolumeclaim "csi-pvc-cds" deleted
$ kubectl get pv
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                 STORAGECLASS   AGE
pvc-1ab36e4d1d2711e9   40Gi       RWO            Retain           Released   default/csi-pvc-cds   hp1            3m
To use PVCs in batches through volumeClaimTemplates in a StatefulSet, a StorageClass must be created first. Take sts-claim.yaml as an example.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
        storageClassName: hp1
Apply sts-claim.yaml; the PVCs are created automatically along with the Pods:
$ kubectl apply -f sts-claim.yaml
$ kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m
web-1   1/1     Running   0          2m
$ kubectl get pvc
NAME        STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pvc-a1e885701d2f11e9   5Gi        RWO            hp1            6m
www-web-1   Bound    pvc-c91edb891d2f11e9   5Gi        RWO            hp1            5m
Use CDS in Multi-AZ Clusters
(1) Specify the zone in the StorageClass
Currently, CDS does not support cross-availability-zone mounting. If the CCE cluster contains nodes in multiple availability zones, dynamically created CDS disks must be pinned to one zone by specifying the availability zone in the StorageClass and using a NodeSelector on the workload. You can also use an existing CDS disk and use a NodeSelector to pin the Pod to that disk's availability zone.
By default, the CCE cluster will label the cluster nodes with the availability zone:
$ kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE   VERSION   LABELS
192.168.80.15   Ready    <none>   13d   v1.8.12   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=BCC,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=bj,failure-domain.beta.kubernetes.io/zone=zoneC,kubernetes.io/hostname=192.168.80.15
The failure-domain.beta.kubernetes.io/zone=zoneC label indicates that the cluster node is in availability zone C.
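For static mounting, a Pod can be pinned to the availability zone of an existing CDS disk with a nodeSelector on that label; a minimal sketch (the Pod name and volumeID are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod        # illustrative name
spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: zoneC   # match the zone of the CDS disk
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      flexVolume:
        driver: "baidubce/cds"
        fsType: "ext4"
        options:
          volumeID: "v-xxxx"   # placeholder: id of a CDS disk located in zoneC
```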
- Example of using CDS in a multi-AZ cluster
StorageClass-Multi-Zone-Template.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hp1-zonec
provisioner: csi-cdsplugin
parameters:
  paymentTiming: "Postpaid"
  storageType: "hp1"
  reservationLength: "3"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - zoneC
Sts-Multi-Zone-Template.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-multi-zone
spec:
  serviceName: "nginx"
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 40Gi
        storageClassName: hp1-zonec
(2) Set the volumeBindingMode of the StorageClass to WaitForFirstConsumer. With this mode, PV binding and CDS creation are delayed until a Pod using the PVC is scheduled, so the CDS disk is created in the availability zone of the node that the Pod lands on.
- Usage example
StorageClass-WaitForFirstConsumer-Template.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cds
provisioner: csi-cdsplugin
volumeBindingMode: WaitForFirstConsumer
parameters:
  dynamicVolume: "true"
  cdsSizeInGB: "40"
  paymentTiming: "Postpaid"
  storageType: "hp1"
reclaimPolicy: Delete
Sts-WaitForFirstConsumer-Template.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
        storageClassName: csi-cds
For more details, see the relevant StorageClass documentation.
Note:
- If you destroy the CCE cluster while using the CSI provisioner described in this article with the StorageClass reclaimPolicy set to Delete, delete the PVCs first. Otherwise, some CDS disks may not be released automatically.
Mount Multiple CDS Disks in a Single Pod
Notes:
- In the volume field of Pod Spec, each CDS can only be declared once.
- In the volumeMounts field of the Pod Spec, the volume corresponding to each CDS can be referenced multiple times.
- For resources such as Deployment/ReplicaSet, the replicas field can only be set to 1.
The following example cds-multi-mount-example.yaml mounts the four CDS disks declared in the file into the corresponding paths of the two containers my-nginx and my-mysql in the Pod mysql-nginx. The CDS disks behind the test-data and test-data-2 volumes are shared by both containers, while the disks behind the nginx-data and db-data volumes are mounted only in their respective containers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-nginx
  template:
    metadata:
      name: mysql-nginx
      labels:
        app: mysql-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          volumeMounts:
            - name: nginx-data
              mountPath: /nginx-data
            - name: test-data
              mountPath: /test-data
            - name: test-data-2
              mountPath: /test-data-2
          ports:
            - containerPort: 80
        - name: my-mysql
          image: mysql:5.7
          volumeMounts:
            - name: db-data
              mountPath: /db-data
            - name: test-data
              mountPath: /test-data
            - name: test-data-2
              mountPath: /test-data-2
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: M6jshva8hTXAGnin
      volumes:
        - name: db-data
          flexVolume:
            driver: "baidubce/cds"
            fsType: "ext4"
            options:
              volumeID: "v-Hd8MUXA7"
        - name: nginx-data
          flexVolume:
            driver: "baidubce/cds"
            fsType: "ext4"
            options:
              volumeID: "v-OWEHtlSI"
        - name: test-data
          flexVolume:
            driver: "baidubce/cds"
            fsType: "ext4"
            options:
              volumeID: "v-9PhGVoMu"
        - name: test-data-2
          flexVolume:
            driver: "baidubce/cds"
            fsType: "ext4"
            options:
              volumeID: "v-SxjD482V"
flexVolume Field Descriptions:
- fsType: the file system of the CDS disk. A newly created CDS disk is automatically formatted with this file system.
- volumeID: the CDS disk ID, which you can get from the Baidu AI Cloud console.
Use kubectl create -f cds-multi-mount-example.yaml to create the corresponding resource. Each CDS disk is mounted to the directory declared for it in the file. When the Pod drifts within the cluster, these CDS disks migrate with it.
Mount via FlexVolume
Note:
- A CDS disk can only be mounted on one BCC instance at a time.
- A CDS disk can only be used by one Pod at a time.
- For resources such as Deployment/ReplicaSet, the replicas field can only be set to 1.
1. Install CDS FlexVolume Plug-in
Save the following content as cds-flexVolume.yaml and execute kubectl create -f cds-flexVolume.yaml to install the CDS flexVolume plug-in.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cds-flex-volume-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cds-flex-volume-ds
  template:
    metadata:
      name: cds-flex-volume-ds
      labels:
        app: cds-flex-volume-ds
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
        - image: hub.baidubce.com/jpaas-public/cds-flex-volume:latest
          imagePullPolicy: Always
          name: cds-flex-volume-ds
          resources:
            requests:
              cpu: 20m
              memory: 20Mi
            limits:
              cpu: 200m
              memory: 200Mi
          securityContext:
            privileged: true
          volumeMounts:
            - name: flexvolume-mount
              mountPath: /flexmnt
            - name: log-volume
              mountPath: /log-volume
      volumes:
        - name: flexvolume-mount
          hostPath:
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
        - name: log-volume
          hostPath:
            path:
After the corresponding Pod is in the Running state, the plug-in installation is complete.
2. Mount CDS Disk into Pod
After installing the CDS flexVolume plug-in, the following example cds-example.yaml mounts the CDS disk declared in the file to the /data path of the my-nginx container in the Pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      name: my-nginx
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          volumeMounts:
            - name: cds-data
              mountPath: /data
          ports:
            - containerPort: 80
      volumes:
        - name: cds-data
          flexVolume:
            driver: "baidubce/cds"
            fsType: "ext4"
            options:
              volumeID: "v-hTXAGtin"
flexVolume Field Descriptions:
- fsType: the file system of the CDS disk. A newly created CDS disk is automatically formatted with this file system.
- volumeID: the CDS disk ID, which you can get from the Baidu AI Cloud console.
Use kubectl create -f cds-example.yaml to create the corresponding resource. The CDS disk is mounted under the /data directory of the my-nginx container. When the Pod drifts within the cluster, the CDS disk migrates with it.