Using NetworkPolicy in CCE Clusters
NetworkPolicy is a Kubernetes (K8s) resource that defines pod-level network isolation rules: it controls whether a group of pods may communicate with other pods or network endpoints. This document shows how to enable the NetworkPolicy function in CCE using the open-source components felix or kube-router.
Choose the component that matches your cluster's container network mode.
felix
Note: felix can only be used with the veth network mode (see the advanced options of the "VPC Network" mode).
felix, a component of the open-source Calico container networking solution, runs on each node to manage routes, ACLs, and other configurations.
- Official website: https://docs.projectcalico.org/reference/felix/
- Project: https://github.com/projectcalico/felix
CCE has customized and enhanced felix to support the container NetworkPolicy functionality.
Deploy felix
To deploy felix on a CCE Kubernetes cluster, apply the following YAML:
---
# Source: calico-felix/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cce-calico-felix
  namespace: kube-system
---
# Source: calico-felix/templates/cce-reserved.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-felix-cce-reserved
  namespace: kube-system
  labels:
    heritage: Helm
    release: RELEASE-NAME
    chart: calico-felix-1.0.0
    app: cce-calico-felix
data:
  hash: "22ec24f7bfe36fe18917ff07659f9e6e3dfd725af4c3371d3e60c7195744bea4"
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
# Source: calico-felix/templates/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cce-calico-felix
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "namespaces", "configmaps", "serviceaccounts"]
    verbs: ["get", "watch", "list", "update"]
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["*"]
    verbs: ["*"]
---
# Source: calico-felix/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cce-calico-felix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cce-calico-felix
subjects:
  - kind: ServiceAccount
    name: cce-calico-felix
    namespace: kube-system
---
# Source: calico-felix/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cce-calico-felix
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cce-calico-felix
  template:
    metadata:
      labels:
        app: cce-calico-felix
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      hostPID: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      terminationGracePeriodSeconds: 0
      serviceAccountName: cce-calico-felix
      hostNetwork: true
      containers:
        - name: policy
          image: registry.baidubce.com/cce-plugin-pro/cce-calico-felix:v3.5.8
          command: ["/bin/policyinit.sh"]
          imagePullPolicy: Always
          env:
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: FELIX_INTERFACEPREFIX
              value: veth
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
              host: localhost
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-config
  namespace: kube-system
  labels:
    app: kube-proxy-config
spec:
  selector:
    matchLabels:
      app: kube-proxy-config
  template:
    metadata:
      labels:
        app: kube-proxy-config
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: "Exists"
      restartPolicy: Always
      hostNetwork: true
      containers:
        - name: busybox
          image: busybox
          command:
            - sh
            - /tmp/update-proxy-yaml.sh
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: etc-k8s
              mountPath: /etc/kubernetes/
            - name: shell
              mountPath: /tmp/
      terminationGracePeriodSeconds: 0
      volumes:
        - name: etc-k8s
          hostPath:
            path: /etc/kubernetes/
            type: "DirectoryOrCreate"
        - name: shell
          configMap:
            name: update-proxy-yaml-shell
            optional: true
            items:
              - key: update-proxy-yaml.sh
                path: update-proxy-yaml.sh
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: update-proxy-yaml-shell
  namespace: kube-system
data:
  update-proxy-yaml.sh: |-
    #!/bin/sh

    # Note: use POSIX [ ] here, not the bash-only [[ ]], since busybox runs this with sh.
    if [ -e /etc/kubernetes/proxy.yaml ]; then
      sed -i 's/masqueradeAll: true/masqueradeAll: false/g' /etc/kubernetes/proxy.yaml
      if grep -q "masqueradeAll: false" /etc/kubernetes/proxy.yaml; then
        echo "update config successfully"
      else
        exit 1
      fi
    else
      echo "/etc/kubernetes/proxy.yaml does not exist"
      exit 1
    fi
    sleep infinity
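The kube-proxy-config DaemonSet above rewrites each node's kube-proxy configuration to set masqueradeAll: false, presumably so that pod source IPs are not SNATed away before felix evaluates policy rules. The script's sed/grep logic can be sanity-checked offline; the sketch below uses a scratch file (/tmp/proxy-demo.yaml and its mode field are illustrative, not the real per-node /etc/kubernetes/proxy.yaml):

```shell
# Write a throwaway kube-proxy-style config with masquerading enabled.
printf 'mode: "iptables"\nmasqueradeAll: true\n' > /tmp/proxy-demo.yaml

# Flip the flag in place, exactly as update-proxy-yaml.sh does on each node.
sed -i 's/masqueradeAll: true/masqueradeAll: false/g' /tmp/proxy-demo.yaml

# Confirm the change took effect before reporting success.
if grep -q "masqueradeAll: false" /tmp/proxy-demo.yaml; then
    echo "update config successfully"
fi
```

Running this prints "update config successfully" once the flag has been flipped.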
kube-router
Note: kube-router can only be used with the kubenet network mode (see the advanced options of the "VPC Network" mode).
kube-router is a container network solution for Kubernetes. Its official website and code repository are as follows:
- Official website: https://www.kube-router.io
- Project: https://github.com/cloudnativelabs/kube-router
kube-router has three core functions:
- Pod networking;
- IPVS/LVS-based service proxy;
- Network policy controller.
CCE has its own container network implementation. This document focuses on using kube-router’s Network Policy Controller function.
Deploy kube-router
To deploy kube-router on a CCE K8S cluster, use the following YAML configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - pods
      - services
      - nodes
      - endpoints
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - networkpolicies
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - extensions
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "name": "kubernetes",
      "type": "bridge",
      "bridge": "kube-bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-router
  namespace: kube-system
  labels:
    k8s-app: kube-router
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
  template:
    metadata:
      labels:
        k8s-app: kube-router
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        image: registry.baidubce.com/cce-plugin-pro/kube-router:latest
        args: ["--run-router=false", "--run-firewall=true", "--run-service-proxy=false"]
        securityContext:
          privileged: true
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
      initContainers:
      - name: install-cni
        image: registry.baidubce.com/cce-plugin-pro/kube-router-busybox:latest
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
          fi
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kube-router-cfg
          mountPath: /etc/kube-router
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
Example demonstration
1. Create namespaces
$ kubectl create namespace production
$ kubectl create namespace staging
2. Deploy the Nginx service
Create an Nginx deployment in different namespaces.
$ kubectl apply -f nginx.yaml --namespace=production
$ kubectl apply -f nginx.yaml --namespace=staging
YAML for nginx.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.baidubce.com/cce/nginx-alpine-go:latest
        ports:
        - containerPort: 80
Verify pod startup:
# staging environment
$ kubectl get pods -n staging
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fbd5f4c55-2xgd4   1/1     Running   0          45s
nginx-deployment-7fbd5f4c55-5xr75   1/1     Running   0          45s
nginx-deployment-7fbd5f4c55-fn6lr   1/1     Running   0          20m

# production environment
$ kubectl get pods -n production
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fbd5f4c55-m764f   1/1     Running   0          10s
nginx-deployment-7fbd5f4c55-pdhhz   1/1     Running   0          10s
nginx-deployment-7fbd5f4c55-r98w5   1/1     Running   0          20m
When no NetworkPolicy is set, all pods can communicate with each other. You can directly ping the IP of any pod to test connectivity.
NetworkPolicy testing
1. Default deny all ingress traffic
Block access to pods in the staging namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Explanation of key fields:
- podSelector: selects the pods the policy applies to (an empty selector matches every pod in the namespace);
- policyTypes: declares the policy type. NetworkPolicy distinguishes ingress (incoming) and egress (outgoing) traffic. If omitted, Ingress is always set, and Egress is set only when the policy contains egress rules;
- ingress: an allowlist for inbound traffic. Each rule can specify from (allowed traffic sources) and ports (allowed destination ports). from supports three selector types: ipBlock, namespaceSelector, and podSelector;
- egress: an allowlist for outbound traffic. Like ingress, each rule can specify destinations (to) and destination ports.
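To make the from selector types concrete, here is a sketch of a policy that admits traffic to the nginx pods in staging only from pods in a namespace labeled env: production, and only on TCP port 80. The env: production namespace label is an assumption (this walkthrough does not label its namespaces); app: nginx matches the deployment created above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-production
  namespace: staging
spec:
  podSelector:
    matchLabels:
      app: nginx           # the nginx pods deployed above
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: production  # assumed label; e.g. kubectl label ns production env=production
    ports:
    - protocol: TCP
      port: 80
```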
After creating the above NetworkPolicy, try accessing the IP of any pod in the staging namespace from a pod in another namespace (e.g., production):

$ kubectl exec -it nginx-deployment-7fbd5f4c55-m764f -n production -- /bin/sh
/ # ping 172.16.0.92
PING 172.16.0.92 (172.16.0.92): 56 data bytes

The ping receives no replies, confirming that inbound traffic to the staging pods is now blocked.
2. Default allow all ingress traffic
Allow access to pods in the staging namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: staging
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
3. Default deny all egress traffic
Block outbound traffic from pods in the production namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
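One practical caveat: a default-deny egress policy like the one above also blocks DNS lookups from the selected pods, so even permitted destinations become unreachable by name. A common companion policy re-allows port 53; this is a sketch and may need adjusting to your cluster's DNS setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```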
4. Default allow all egress traffic
Allow outbound traffic from pods in the production namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: production
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
5. Default deny all ingress and all egress traffic
Block all inbound and outbound traffic for all pods in the namespace where this policy is created:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
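Of the three from selector types listed earlier, ipBlock has not been demonstrated. The sketch below (both CIDRs are placeholders, not addresses from this cluster) admits ingress from a VPC range while carving out one subnet:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-cidr
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.16.0.0/16    # placeholder VPC CIDR
        except:
        - 172.16.1.0/24        # placeholder subnet to exclude
```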
