User Guide for Custom CNI Plugins
Background
To meet users' needs for highly customized network configurations, CCE Network allows users not only to use the default CNI plugins but also to bring their own. This capability significantly increases configuration flexibility, enabling users to design and configure networks tailored to their specific requirements.
CNI plugins can be customized in two ways: by selecting and configuring officially supported plugins, or by deploying user-defined plugins directly through changes to the CNI configuration file.
This document explains how to use and configure CCE CNI to run customized, user-defined CNI plugins.
Instructions for V2
Prerequisites:
- The cce-network-v2 component must be installed, with a version of 2.10.0 or higher.
- Ensure that the specification version utilized by the CNI plugin is 0.4.0 or higher.
1. Confirm the container network mode
Execute `kubectl get cm -n kube-system cce-network-v2-config -o yaml` and check the `ipam` field.
- A value starting with `vpc-route-` indicates a network solution based on VPC routing.
- A value starting with `vpc-eni` indicates a network solution based on elastic network interfaces.
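For example, in a cluster based on elastic network interfaces, the relevant part of the ConfigMap might look like the following illustrative excerpt (the surrounding fields vary by version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cce-network-v2-config
  namespace: kube-system
data:
  cced: |
    ……
    ipam: vpc-eni
    ……
```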
2. Customizing officially supported CCE plugins
The following plugins are officially supported by CCE. Users can select them through the `ccedConfig.ext-cni-plugins` configuration item when installing cce-network-v2.
| Plugin name | Provider | Plugin description | Plugin version |
|---|---|---|---|
| portmap | Community | Community CNI plugin: Supports port mapping (hostPort) configuration for pods. Note that this plugin is automatically disabled when eBPF acceleration is enabled. | v1.0.0 and above |
| cilium-cni | CCE | Cilium CNI plugin: Provides support for network policies, service acceleration, and other features. | v1.12.5-baidu and above |
| endpoint-probe | CCE | CCE-provided CNI plugin: Enables capabilities like Pod QoS. | v2.9.0 and above |
| cptp | CCE | CCE-provided default CNI plugin: Offers basic network communication functions for pods. | v1.0.0 and above |
| exclusive-device | CCE | CCE-provided CNI plugin: Allows pods to exclusively utilize network cards. | v2.6.0 and above |
| sbr-eip | CCE | CCE-provided CNI plugin: Enables pods to connect directly to an EIP. | v2.6.0 and above |
For instance, to enable the portmap plugin, redeploy using the configuration below.
- Modify the CNI plugin template:

```bash
kubectl edit cm cce-network-v2-config -n kube-system -o yaml
```

```yaml
apiVersion: v1
data:
  cced: |
    ……
    ext-cni-plugins:
    - portmap
    - endpoint-probe
    ……
```
- Clean up the CNI configuration file. After the modification, you need to restart the `cce-network-agent` DaemonSet:

```bash
kubectl rollout restart daemonset -n kube-system cce-network-agent
```
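Optionally, wait for the restart to complete before verifying:

```bash
kubectl rollout status daemonset -n kube-system cce-network-agent
```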
- Verification. Log in to any node and run `cat /etc/cni/net.d/00-cce-cni.conflist` to verify that it contains the user-defined CNI configuration.

```bash
cat /etc/cni/net.d/00-cce-cni.conflist
{
  "cniVersion": "0.4.0",
  "name": "generic-veth",
  "ipam": {},
  "dns": {},
  "plugins": [
    {
      "ipam": {
        "type": "cipam"
      },
      "mtu": 1500,
      "type": "cptp"
    },
    {
      "capabilities": {
        "portMappings": true
      },
      "externalSetMarkChain": "KUBE-MARK-MASQ",
      "type": "portmap"
    },
    {
      "type": "endpoint-probe"
    }
  ]
}
```
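If jq is installed on the node, you can also print just the plugin chain to confirm that portmap was added:

```bash
# Print the type of each plugin in the chain; portmap should appear after cptp.
jq -r '.plugins[].type' /etc/cni/net.d/00-cce-cni.conflist
# cptp
# portmap
# endpoint-probe
```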
3. Deploying user-defined CNI plugins
CCE also supports retaining user-defined CNI plugins. Users can deploy CNI plugin configuration by modifying the host configuration file `/etc/cni/net.d/00-cce-cni.conflist`; CCE keeps the user-defined plugins in place when it updates the CNI configuration.
```json
{
  "name": "generic-veth",
  "cniVersion": "0.4.0",
  "plugins": [
    {
      "type": "cptp",
      "ipam": {
        "type": "cipam"
      },
      "mtu": 1500
    },
    {
      "type": "endpoint-probe"
    },
    {
      "type": "user-cni"
    }
  ]
}
```
The CCE container network automatically reads this configuration file during startup. If a user-defined CNI plugin is detected, it will be automatically appended to the end of the plugin chain when the CNI configuration is updated.
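For illustration only, a minimal no-op chained plugin installed as /opt/cni/bin/user-cni might look like the sketch below. It assumes a spec version of 0.4.0 and that jq is available on every node; a production plugin would typically be a compiled binary, and it must satisfy all the conditions listed in the Note at the end of this document.

```sh
#!/bin/sh
# Illustrative no-op chained CNI plugin (a sketch, not a production plugin).
# The runtime passes the operation in $CNI_COMMAND and the network
# configuration, including prevResult, on stdin.
config=$(cat)
case "$CNI_COMMAND" in
  ADD)
    # Return the previous plugin's result unchanged so the chain continues.
    echo "$config" | jq '.prevResult'
    ;;
  DEL|CHECK)
    # Nothing to clean up or verify; DEL must always succeed so that pods
    # can be deleted and recreated normally.
    exit 0
    ;;
  VERSION)
    echo '{"cniVersion":"0.4.0","supportedVersions":["0.3.0","0.3.1","0.4.0"]}'
    ;;
esac
```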
Instructions for V1
1. Confirm the container network mode
Execute `kubectl get cm -n kube-system cce-cni-node-agent -o yaml` and check the `cniMode` field.
- A value starting with `vpc-route-` indicates a network solution based on VPC instance routing.
- A value starting with `vpc-secondary-ip-` indicates a network solution based on elastic network interfaces.
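For example, you can pull the field out directly (the value shown is illustrative):

```bash
kubectl get cm -n kube-system cce-cni-node-agent -o yaml | grep cniMode
# cniMode: vpc-secondary-ip-veth
```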
2. Modify the CNI configuration file template
Based on the network mode obtained in Step 1, execute `kubectl edit cm -n kube-system cce-cni-config-template` to modify the CNI configuration template for that mode.
Manually edit the plugins list and append the sysctl plugin configuration at the end. The CNI configuration on all nodes is updated within about 1 minute.
A sample of the modified configuration is as follows:
```yaml
cce-cni-secondary-ip-veth: |
  {
    "name":"{{ .NetworkName }}",
    "cniVersion":"0.3.1",
    "plugins":[
      {
        "type":"ptp",
        "enableARPProxy":true,
        "vethPrefix":"veth",
        "mtu": {{ .VethMTU }},
        "ipam":{
          "type":"eni-ipam",
          "endpoint":"{{ .IPAMEndPoint }}",
          "instanceType":"{{ .InstanceType }}",
          "deleteENIScopeLinkRoute":true
        }
      },
      {
        "type":"sysctl",
        "kubeconfig":"/etc/cni/net.d/cce-cni.d/cce-cni.kubeconfig"
      }
    ]
  }
```
3. Cluster-granularity configuration (optional)
In the configuration from Step 2, you can further add a `sysctl` field to the sysctl plugin's configuration to specify the parameters to set.
After the following configuration takes effect, all newly created containers in the cluster will have /proc/sys/net/core/somaxconn set to 8192.
```json
{
  "type":"sysctl",
  "kubeconfig":"/etc/cni/net.d/cce-cni.d/cce-cni.kubeconfig",
  "sysctl":{
    "net.core.somaxconn":"8192"
  }
}
```
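A quick way to confirm the cluster-wide setting (assuming the cluster can pull busybox) is to start a throwaway pod and read the value applied to its netns:

```bash
kubectl run sysctl-check --image=busybox --restart=Never --rm -it -- \
  cat /proc/sys/net/core/somaxconn
# Expected output: 8192
```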
4. Application-granularity configuration (optional)
After completing Step 2, users can configure sysctl parameters by setting the pod annotations.
For example, an Nginx container created using the following YAML file will set /proc/sys/net/core/somaxconn to 8192 and /proc/sys/net/ipv4/tcp_tw_reuse to 1.
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
      annotations:
        net.sysctl.cce.io/net.core.somaxconn: "8192"
        net.sysctl.cce.io/net.ipv4.tcp_tw_reuse: "1"
    spec:
      containers:
      - name: nginx-example
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
```
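You can confirm both values inside the running pod:

```bash
kubectl exec deploy/nginx-example -- \
  cat /proc/sys/net/core/somaxconn /proc/sys/net/ipv4/tcp_tw_reuse
# 8192
# 1
```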
Note
User-defined CNI plugins must meet the following conditions:
- The `type` field of the CNI plugin must be non-empty and must not duplicate the type of an existing plugin.
- The CNI plugin's executable /opt/cni/bin/{user-cni} must exist and be executable; otherwise, kubelet cannot create pods normally.
- The CNI plugin must fully adhere to the CNI specification, and its CNI DEL operation must work correctly. This ensures that pods can be deleted and recreated as needed, preventing continuous failures caused by CNI plugin issues.
- The CNI configuration must be a valid JSON file.
- The sysctl parameters that can be configured in a netns vary across kernel versions. Configure them carefully; otherwise, containers may fail to start.
