NodeGroup Introduction
To help you better manage nodes in a Kubernetes cluster, Cloud Container Engine (CCE) introduces the concept of a node group. This document covers the overview, core concepts, and usage limitations of node groups.
Overview
A node group is a logical collection of nodes with identical attributes, enabling unified management and O&M of nodes, such as node scale-up and auto scaling. By creating multiple node groups, you can also achieve resource isolation, for example by deploying independent node groups for different types of workloads and managing nodes with different attributes in groups. A cluster can contain multiple node groups with different configurations and types. A node group's configuration includes node attributes such as the instance specification, availability zone, labels, and taints; these attributes can be specified when creating a node group or when editing an existing one.
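For illustration only, the sketch below shows the kind of attributes a node group's configuration bundles together; the field names and values are hypothetical and do not correspond to the actual CCE API.

```python
# Hypothetical sketch of two node group configurations (not the actual CCE API).
# Each group bundles the node attributes mentioned above: instance specification,
# availability zone, labels, and taints.
gpu_node_group = {
    "name": "gpu-workers",                 # hypothetical node group name
    "instance_spec": "bcc.gn3.c8m32",      # hypothetical BCC instance specification
    "availability_zone": "zoneA",          # hypothetical availability zone
    "labels": {"workload-type": "gpu"},    # Kubernetes node labels applied to every node
    "taints": [                            # Kubernetes taints to keep other workloads off
        {"key": "dedicated", "value": "gpu", "effect": "NoSchedule"}
    ],
}

general_node_group = {
    "name": "general-workers",
    "instance_spec": "bcc.g5.c4m16",
    "availability_zone": "zoneB",
    "labels": {"workload-type": "general"},
    "taints": [],
}
```

Workloads can then be steered onto a specific node group with node selectors that match the group's labels and tolerations that match its taints.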
Core concepts
Before first use, it is recommended that you understand the following concepts related to node groups.
| Concepts | Description |
|---|---|
| Node group | A collection of nodes with the same configuration. You can operate on a node group as a single object to create, manage, delete, resize, and auto scale a group of nodes. |
| Node | A worker node in the cluster. Each node corresponds to one BCC instance and provides the environment required to run containers. |
| Node configuration | The configuration template for creating nodes in a node group includes options like availability zone, node subnet, and instance specifications. Currently, a node group supports one unmodifiable node configuration. Scaling up a node group involves creating nodes based on this predefined setup. |
| Actual node count | The count of nodes currently available in the node group, i.e., the count of nodes in the Ready state. |
| Desired node count | The count of available nodes that the node group is expected to maintain. |
| Adjust node count | To manually scale a node group up or down, you do not add or remove specific nodes; instead, you set the desired node count for the group. The node group compares the desired count with the current count and creates or removes nodes as needed to match the target (see the reconciliation sketch after this table). |
| Auto scaling | The Cluster Autoscaler (CA) feature in Kubernetes enables automatic scaling. When resources are insufficient, the cluster automatically adds nodes. Similarly, when there is an excess of resources, unnecessary nodes are removed based on scaling policies, ensuring optimal resource usage while minimizing costs. |
| Scaling range | When auto scaling is enabled, the node group automatically adjusts the desired node count within this range. |
| Scaling activity | Each time a node group is scaled up or down, a scaling activity is triggered. The system completes the scale-up or scale-down task automatically and generates detailed records, and you can review the scaling history of each node group. |
| Scale-up priority | When multiple node groups have auto scaling enabled, they are scaled up according to the configured priority; node groups with smaller priority values are scaled up first. |
| Scale-down threshold | Applies to auto scaling. Automatic scale-down may be triggered when the resource allocation rate (CPU/memory) of nodes in the node group falls below this threshold (see the scale-down sketch after this table). |
| Scale-down trigger latency | Applies to auto scaling. If a node's resource allocation rate stays below the scale-down threshold for this entire period, automatic scale-down may be triggered. |
| Maximum concurrent scale-down count | Applies to auto scaling. An integer specifying the maximum number of nodes with zero resource utilization that can be scaled down at the same time. |
| Scale-down start interval after scale-up | Applies to auto scaling. An interval in minutes; nodes created by a scale-up are evaluated for scale-down only after this interval has elapsed. |
| Scaling strategy | Applies when a node group uses multiple instance models; scaling by model configuration order and even distribution across multiple subnets are supported. See Configure Scaling Strategy. |
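As a minimal sketch of the "Adjust node count" behavior above: the node group only ever compares the desired node count with the actual node count and closes the gap. The function below is illustrative and not part of any CCE API.

```python
def reconcile(desired_count: int, actual_count: int) -> int:
    """Return how many nodes to create (positive) or remove (negative)
    so that the actual node count matches the desired node count."""
    return desired_count - actual_count

# Manually adjusting the node count only changes the desired count;
# the node group then creates or removes nodes to close the gap.
print(reconcile(desired_count=5, actual_count=3))   #  2 -> create 2 nodes
print(reconcile(desired_count=5, actual_count=7))   # -2 -> remove 2 nodes
```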
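The scale-down parameters can likewise be read together as a single eligibility check. The sketch below assumes that the allocation rate is computed as requested resources divided by allocatable resources and that both CPU and memory must stay below the threshold; the names and the exact rule are illustrative assumptions, not the CCE implementation.

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    cpu_requested: float           # sum of Pod CPU requests on the node (cores)
    cpu_allocatable: float         # allocatable CPU of the node (cores)
    mem_requested: float           # sum of Pod memory requests (GiB)
    mem_allocatable: float         # allocatable memory of the node (GiB)
    below_threshold_minutes: int   # how long the node has stayed below the threshold
    minutes_since_creation: int    # node age since it was created by a scale-up

def eligible_for_scale_down(node: NodeStats,
                            threshold: float = 0.5,        # scale-down threshold
                            trigger_latency: int = 10,     # scale-down trigger latency (minutes)
                            post_scale_up_delay: int = 20   # scale-down start interval after scale-up
                            ) -> bool:
    """Illustrative check: the node may be scaled down only if its CPU and memory
    allocation rates stay below the threshold for the whole latency window, and the
    node is past the post-scale-up protection interval."""
    cpu_rate = node.cpu_requested / node.cpu_allocatable
    mem_rate = node.mem_requested / node.mem_allocatable
    return (cpu_rate < threshold
            and mem_rate < threshold
            and node.below_threshold_minutes >= trigger_latency
            and node.minutes_since_creation >= post_scale_up_delay)

# Example: a lightly loaded node that has been underutilized long enough.
node = NodeStats(cpu_requested=1.0, cpu_allocatable=8.0,
                 mem_requested=4.0, mem_allocatable=32.0,
                 below_threshold_minutes=15, minutes_since_creation=60)
print(eligible_for_scale_down(node))   # True
```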
Usage restrictions
| Resource name | Default quota / usage limits |
|---|---|
| Count of node groups | No restriction, as long as the total count of nodes in the cluster does not exceed the quota of "maximum count of nodes in single cluster". To increase the quota, please submit a ticket. |
| Maximum count of nodes in single node group | Do not exceed the quota of "maximum count of nodes in single cluster". The total count of nodes in the cluster must also not exceed this quota. To increase the quota, please submit a ticket. |
| Desired node count | Do not exceed the quota of "maximum count of nodes in single cluster". The total count of nodes in the cluster must also not exceed this quota. To increase the quota, please submit a ticket. |
| Count of node configurations in a single node group | One node group supports one node configuration |
| Auto scaling range | 0 - 200; the minimum count of nodes must not exceed the maximum count of nodes |
| Scale-up priority | 0 - 100 |
