Create cluster
This document offers a detailed guide on the steps and parameter configurations needed to create a managed cluster in the console.
Preparation
Before creating a cluster, it is advised to plan and design its configuration based on service needs to ensure stable, efficient, and secure operations.
- Region: Selecting a region closer to the user's location and the deployed resources reduces network latency and enhances access speed.
- Network address planning: Define the VPC network segment and Kubernetes network segment (pod address range and service address range) based on the service scenario and cluster size, specifying the IP address range for the cluster and the available IP count for pods and nodes.
- Public network access: Decide if cluster nodes require public network access (essential for pulling public images).
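As a rough planning aid, candidate pod and service address ranges can be checked for overlap and capacity programmatically. This is a minimal sketch using Python's standard `ipaddress` module; the CIDR values are hypothetical examples, not CCE defaults:

```python
import ipaddress

# Hypothetical ranges chosen for illustration only.
vpc_cidr = ipaddress.ip_network("192.168.0.0/16")     # VPC network segment
pod_cidr = ipaddress.ip_network("172.16.0.0/16")      # pod address range
service_cidr = ipaddress.ip_network("10.0.0.0/20")    # service address range

# The pod and service ranges must not overlap the VPC or each other.
for a, b in [(vpc_cidr, pod_cidr), (vpc_cidr, service_cidr), (pod_cidr, service_cidr)]:
    assert not a.overlaps(b), f"{a} overlaps {b}"

# Total addresses give an upper bound on the pods/services a range can hold.
print(pod_cidr.num_addresses)      # 65536 addresses in a /16
print(service_cidr.num_addresses)  # 4096 addresses in a /20
```

A larger pod range supports more nodes and pods but consumes more address space; sizing it up front matters because the segments cannot be changed after the cluster is created.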
Prerequisites
Before creating a cluster, you must complete the following tasks:
- [Register a Baidu account](UserGuide/Register an account.md#Register a Baidu Account) and complete real-name certification.
- During cluster creation, multiple resources such as virtual private cloud, subnet, and security group will be utilized or created. Resource regions have quota limits. For details, refer to Usage Limits.
- To add an existing Baidu Cloud Compute (BCC) instance to the cluster, ensure both the cluster and the BCC instance are within the same virtual private cloud (VPC).
- The cluster creation process involves purchasing pay-as-you-go resources such as Baidu Load Balancer (BLB). Ensure your account has sufficient balance to proceed with these resource allocations smoothly.
Note
Once a cluster is created, the following items cannot be modified:
- The cluster type.
- The number of master nodes in the cluster.
- The cluster's region.
- The cluster's network configurations, such as virtual private cloud (VPC), subnet, service network segment, IPv6, and kube-proxy mode (service forwarding mode).
Cost description
Baidu AI Cloud Container Engine (CCE) provides users with services free of charge, but creating a cluster may incur costs for cloud infrastructure resources such as Baidu Load Balancer (BLB), NAT (Network Address Translation), public IPs, and cloud disks. Check the actual billing under the cost configuration section at the bottom of the cluster creation page. For more billing details and cloud infrastructure resource billing, refer to Product Billing.
Operation steps
- Log in to the Cloud Container Engine Console, select Product Services - Cloud Native - Cloud Container Engine (CCE), and select Cluster Management - Cluster List in the left navigation bar to access the cluster list page.
- In the cluster list page, click Create Cluster at the top of the list.
- CCE provides preconfigured cluster creation templates for various scenarios. In the template selection pop-up window, you can select a template based on the template descriptions and your service requirements, then click OK to access the cluster creation page.
- On the cluster creation page, follow the creation wizard step by step to configure settings based on your service needs: cluster configuration, Master configuration (only for standard standalone clusters), and Worker configuration.
Description:
- [Recommended] Standard managed cluster: The cluster Master is fully managed by Cloud Container Engine (CCE). You only need to purchase Worker nodes to run workloads.
- Standard standalone cluster: The cluster Master is deployed on the server instance you purchase, and you are responsible for planning, maintaining, and upgrading the cluster servers.
Cluster configuration
Basic configuration
| Configuration item | Description |
|---|---|
| Cluster name | Supports uppercase and lowercase letters, numbers, and the special characters - _ /; must start with a letter; length must be between 1 and 65 characters. |
| Cluster scale | When selecting a standard managed cluster, it is necessary to choose the cluster scale. Select the maximum manageable node count per cluster based on service scenarios. Refer to How to Select Cluster Scale. To use a larger cluster scale, submit a ticket. |
| Kubernetes version | Select the currently supported Kubernetes version. The latest version is recommended. Refer to CCE Kubernetes Version Upgrade Notes for details on CCE version support. |
| Container runtime | For the Kubernetes cluster container runtime component, CCE currently supports containerd 1.6.36 and 1.7.25. Refer to How to Select Kubernetes Cluster Container Runtime Components. |
| Node naming pattern | Intranet IP: The cluster uses the instance's internal IP as the node name (NodeName).<br>Hostname: The cluster uses the instance's hostname as the NodeName. Ensure that hostnames of instances joining the cluster are unique. |
| Bill type | Subscription: Pay before use at a lower price. The associated BCC, BLB, and EIP resources created are all billed under the subscription type.<br>Postpay: Pay after use, activated on demand. The associated BCC, BLB, and EIP resources created are all billed under the postpay type. |
| Current region | Choose the region where the cluster will be located. Cloud products in different regions have distinct intranets, so the region cannot be changed after purchase. It’s recommended to select a region closer to your clients to reduce access latency and increase download speeds. |
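The cluster-name rule above can be expressed as a regular expression. This is a sketch of the stated constraints (starts with a letter; letters, digits, `-`, `_`, `/`; length 1 to 65), not an official CCE validator:

```python
import re

# Starts with a letter; then letters, digits, or - _ /; total length 1-65.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9\-_/]{0,64}$")

def is_valid_cluster_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_cluster_name("prod-cluster_01"))  # True
print(is_valid_cluster_name("1cluster"))         # False: must start with a letter
print(is_valid_cluster_name(""))                 # False: length must be at least 1
```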
Network configuration
The container network requires its own address space, which must remain isolated from the node network, node subnet, and the container networks of other clusters. The following uses the creation of a standard managed cluster as an example to describe the network configuration items.
| Configuration item | Description |
|---|---|
| IPv6 dual-stack network | Enabling the IPv6 dual-stack network filters VPC networks and subnets configured with IPv6, and automatically assigns an IPv6 container network segment and service network segment. Note: Changes are not supported after selection. Select it carefully by referring to CCE IPv4/IPv6 Dual-Stack Network Instructions. |
| Virtual private cloud | Configure the cluster's network. The virtual private cloud provides the cluster with an isolated, self-configured, and managed virtual network environment. To create a new virtual private cloud, navigate to Create Virtual Private Cloud. |
| API Server access | Automatically create Baidu Load Balancer (BLB): The system automatically creates an intranet application BLB instance for the cluster API Server. Deleting this BLB instance will render the API Server inaccessible. If you choose this option, the API Server subnet and public access must also be configured.<br>Use existing Baidu Load Balancer (BLB): The cluster API Server is bound to an existing BLB instance. Deleting this BLB instance will render the API Server inaccessible. |
| Security group | Specify the security group for Worker nodes. Both regular security groups and enterprise security groups are supported. |
| Container network mode | Supports the VPC-ENI container network mode. VPC-ENI: a container network mode implemented via elastic network interfaces, where the container network and cloud host network reside in the same VPC. When selecting VPC-ENI mode, you must configure the elastic network interface mode, container subnet, and ClusterIP network segment. CCE offers two elastic network interface modes: shared and exclusive. |
| LB Service subnet | Set the default subnet for BLB association when creating Services and Ingresses (NAT subnets are excluded). |
| Elastic network interface security group | When the container network mode is VPC-ENI, specify the security group bound to the elastic network interfaces mounted on Worker nodes. Both regular security groups and enterprise security groups are supported. |
| Kube-proxy mode | Configure the kube-proxy mode by selecting either IPTABLES or IPVS. If the cluster enables eBPF enhancement, the native kube-proxy mode is replaced without additional configuration. |
| NodePort range | Define the range of port numbers available for NodePort Services. In Kubernetes, the default NodePort range is 30000-32767. |
| NodePort range | Define the range of port numbers available for NodePort services. In Kubernetes, the default NodePort range is 30000-32767. |
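If you customize the NodePort range, it must be a non-empty range of valid ports. A minimal sanity check (the 30000-32767 default comes from upstream Kubernetes' `--service-node-port-range`):

```python
DEFAULT_NODEPORT_RANGE = (30000, 32767)  # Kubernetes default

def nodeport_range_ok(start: int, end: int) -> bool:
    # Ports must be valid TCP/UDP ports and the range must be non-empty.
    return 1 <= start <= end <= 65535

def nodeport_count(start: int, end: int) -> int:
    # Number of NodePort Services the range can accommodate at most.
    return end - start + 1

print(nodeport_range_ok(*DEFAULT_NODEPORT_RANGE))  # True
print(nodeport_count(*DEFAULT_NODEPORT_RANGE))     # 2768 ports
print(nodeport_range_ok(32767, 30000))             # False: inverted range
```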
Advanced settings (optional)
| Configuration item | Description |
|---|---|
| Resource labels | You can configure resource tags to classify resources by purpose, owner, project, etc. |
| Cluster deletion protection | By default, accidental cluster deletion through the console or API is prevented. |
| Authentication method | By default, the commonly used X509 client certificate authentication is used. OIDC authentication can be optionally enabled. For details, refer to Cluster Authentication Instructions. |
| Custom certificate SAN | Add custom IPs or domain names to the SAN (Subject Alternative Name) field in the cluster API Server’s certificate to enable client access control. Updating this setting is not supported after the cluster is created. |
| Cluster notes | Provide a description for the cluster. |
Worker configuration
When creating a standard managed cluster, Worker nodes must be configured on the cluster creation page. The example below explains the relevant Worker configuration items for a standard managed cluster.
Node configuration
| Configuration item | Description |
|---|---|
| Node source | Two node sources are supported: Create New Nodes or Use Existing Servers. If you select existing servers, only available instances within the current cluster's virtual private cloud are supported. |
| Worker node | Click Add Node to configure Worker nodes. For node configuration parameters, refer to Add Node. |
| Resource labels | You can configure resource tags to categorize resources by purpose, owner, project, etc., and uniformly apply them to related resources such as CDS, EIP, snapshots, and snapshot chains. |
Advanced settings (optional)
| Category | Item | Description |
|---|---|---|
| Data storage | Kubelet data directory | The directory in the file system where kubelet stores runtime data, such as volume files, plugin files, and other data storage directories. If a data disk is mounted, it is recommended to save the data on the data disk. |
| | Container data directory | Directories used for storing container images and related data. If a data disk is mounted, it is recommended to save the data on the data disk. |
| Script execution | Pre-deployment execution script | Before node deployment, the system automatically executes the specified script. Ensure the script is reentrant and includes retry logic. Script content and generated logs are automatically saved to the node's /usr/local/cce/scripts/ directory. |
| | Post-deployment execution script | The system automatically executes this script after node deployment. You need to independently monitor and verify script execution status. Script content and generated logs are automatically saved to the node's /usr/local/cce/scripts/ directory. |
| Node parameters | Cordon | This feature is disabled by default. Once cordon is enabled, the node enters an unschedulable state, and no newly created pods will be scheduled onto it. |
| | Custom kubelet parameters | You may manually configure kubelet's startup parameters based on specific requirements or cluster environments. For more information, see Custom Kubelet Parameters. |
| | Labels | Kubernetes labels serve as identifiers for managing and selecting Kubernetes objects. These labels are automatically assigned to newly created nodes and consist of key-value pairs. |
| | Taints | In a Kubernetes cluster, taints are a mechanism to prevent pods from being scheduled onto specific nodes. A taint is a marker added to a node indicating that certain pods should not be scheduled there. Node taints and pod tolerations work in conjunction: setting a taint on a node prevents pods from being scheduled onto it, or evicts pods already running on it, unless the pod's tolerations match the node's taints. A taint consists of three parts: a key, a value, and an effect. |
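The taint/toleration interaction can be sketched as follows. This is a simplified model covering the `Equal` and `Exists` operators, not the full Kubernetes scheduler logic:

```python
def tolerates(taint: dict, toleration: dict) -> bool:
    """Simplified match of a pod toleration against a node taint."""
    # An empty toleration effect matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # Exists with an empty key matches any taint key.
        return toleration.get("key") in (None, taint["key"])
    # Equal: key and value must both match.
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint["value"])

# A taint consists of a key, a value, and an effect.
taint = {"key": "dedicated", "value": "gpu", "effect": "NoSchedule"}
print(tolerates(taint, {"key": "dedicated", "operator": "Equal",
                        "value": "gpu", "effect": "NoSchedule"}))   # True
print(tolerates(taint, {"key": "dedicated", "operator": "Exists"}))  # True
print(tolerates(taint, {"key": "other", "operator": "Equal",
                        "value": "x", "effect": "NoSchedule"}))      # False
```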
Master configuration
When setting up a standard standalone cluster, the Master node must be configured on the cluster creation page. Using a standard standalone cluster as an example, the Master node configuration items are explained below.
Node configuration
| Configuration item | Description |
|---|---|
| Node source | Two node sources are supported: Create New Nodes or Use Existing Servers. If you select existing servers, only available instances within the current cluster's virtual private cloud are supported. |
| Master node | You may select either 3 replicas or 5 replicas. If selecting 3 replicas, a minimum of 3 nodes must be added. Click Add Node to configure Master nodes. For node configuration parameters, refer to Add Node. |
Server configuration
| Configuration item | Description |
|---|---|
| Security group for masters | Specify the security group bound to Master nodes, supporting both regular security group and enterprise security group. |
| Resource labels | You can configure resource tags to categorize resources by purpose, owner, project, etc., and uniformly apply them to related resources such as CDS, EIP, snapshots, and snapshot chains. |
Advanced settings (optional)
| Category | Item | Description |
|---|---|---|
| Data storage | etcd data directory | The directory in the file system used for storing etcd data files on the server where etcd runs. If a data disk is mounted, it is recommended to save the data on the data disk. |
| | Kubelet data directory | The directory in the file system where kubelet saves runtime data, such as volume files, plugin files, and other data. If a data disk is mounted, it is recommended to save the data on the data disk. |
| | Container data directory | Directories used for storing container images and related data. If a data disk is mounted, it is recommended to save the data on the data disk. |
| Script execution | Pre-deployment execution script | Before node deployment, the system automatically executes the predefined script. Make sure the script is reentrant and includes retry mechanisms. The script content and generated logs are saved to the /usr/local/cce/scripts/ directory on the node. |
| | Post-deployment execution script | After node deployment, the system automatically executes the specified script. Monitor and verify the script's execution independently. The script content and generated logs are saved to the /usr/local/cce/scripts/ directory on the node. |
Confirm configuration
After confirming the selected configuration details and billing costs, pass the pre-check in the pre-check module, then click Submit to initiate cluster creation.
- You can view the cluster's creation status in the cluster list. Once the cluster is successfully created, its status will appear as "running".
