
          Use Directly-connected Pod for LoadBalancer Service

          Comparison Between Traditional Mode and Direct Connection Mode

          Features:

          The LoadBalancer Service in traditional mode uses the IP addresses of the cluster nodes as the backend servers of BLB. When a LoadBalancer Service in traditional mode is accessed, the request is first forwarded to a node by the Baidu Load Balance (BLB) service provided by Baidu AI Cloud, and then further forwarded to a specific Pod through iptables or IPVS. In this process, the request undergoes two forwarding operations.

          The LoadBalancer Service in directly-connected Pod mode uses the IP addresses of the Pods as its backend servers. When a LoadBalancer Service in this mode is accessed, the request reaches the specific Pod after a single forwarding operation by BLB. This mode of Service has the following advantages:

          • Preserves the source IP of the request;
          • Performs one less request forwarding, giving higher performance;
          • Balances load more evenly at the Pod level.

          Performance

          We tested the average network latency of LoadBalancer Services in both modes. According to the test results, the LoadBalancer Service in directly-connected Pod mode reduces network latency by about 10% on average.
          (Figure: Pod direct connection performance evaluation)

          Prerequisites for Use

          A cluster that uses the LoadBalancer Service in directly-connected Pod mode must meet all of the following prerequisites:

          Confirm Cluster Version

          The new version of the CCE cluster (the cluster ID is prefixed with cce-) supports this feature.
          The old version of the CCE cluster (the cluster ID is prefixed with c-) does not support this feature.

          Confirm CCE LB Controller Plugin

          The CCE LB Controller plugin must be deployed in the cluster.
          To check, open the "Workload - Deployment List" page on the cluster console. The plugin is deployed if a cce-lb-controller Deployment exists under the kube-system namespace, as shown in the figure below.

          (Figure: cce-lb-controller)
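
          Alternatively, with kubectl access to the cluster, you can run a quick command-line check (a minimal sketch; the Deployment name and namespace follow the console view above):

            kubectl get deployment cce-lb-controller -n kube-system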

          If the plugin is not yet deployed in the cluster, you can deploy it with the following steps:
          Log in to the Baidu AI Cloud console, go to the "CCE - Helm - Helm Template - Baidu Cloud Template" page, and search for cce-lb-controller. You will see the following page.

          (Figure: helm-lb1)

          Click the template to open its detail page, shown below. Select version 1.0.0 and install it; take care to install the correct version.

          (Figure: helm-lb2)

          Click “Install” on the right, enter the instance name and deployment cluster, and click “OK” to complete the deployment of CCE-LB-Controller.

          (Figure: helminstall)

          Confirm Type of Container Network Card

          • A cluster whose container network card type is veth or kubenet meets this requirement directly.
          • If the network card type is IPVLAN, you need to set the masqOutBound and masqOutBoundIPv6 properties of the ConfigMap cce-ip-masq-agent to false. The procedure is as follows:
          • Connect to the cluster with kubectl (see "Use kubectl to Connect to Cluster"). After running the kubectl describe configmap cce-ip-masq-agent -n kube-system command, you should get output like the following, where masqOutBound and masqOutBoundIPv6 are still true:

            # kubectl describe configmap  cce-ip-masq-agent -n kube-system
            Name:         cce-ip-masq-agent
            Namespace:    kube-system
            Labels:       addonmanager.kubernetes.io/mode=EnsureExists
            Annotations:
            Data
            ====
            config:
            ----
            nonMasqueradeCIDRs:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
              - fc00::/7
            masqOutBound: true
            masqOutBoundIPv6: true
            masqLinkLocal: false
            masqLinkLocalIPv6: false
            resyncInterval: 60s
            
            Events:  <none>

            Run the kubectl edit configmap cce-ip-masq-agent -n kube-system command, change masqOutBound and masqOutBoundIPv6 to false, and save. The configuration of cce-ip-masq-agent updates automatically. Run the kubectl describe configmap cce-ip-masq-agent -n kube-system command again, and you should get the following output, where masqOutBound and masqOutBoundIPv6 are now false:

            # kubectl describe configmap  cce-ip-masq-agent -n kube-system
            Name:         cce-ip-masq-agent
            Namespace:    kube-system
            Labels:       addonmanager.kubernetes.io/mode=EnsureExists
            Annotations:
            Data
            ====
            config:
            ----
            nonMasqueradeCIDRs:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
              - fc00::/7
            masqOutBound: false
            masqOutBoundIPv6: false
            masqLinkLocal: false
            masqLinkLocalIPv6: false
            resyncInterval: 60s
            
            Events:  <none>
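
            For reference, the edited ConfigMap corresponds to a manifest like the following (a minimal sketch reconstructed from the describe output above; the exact manifest layout is an assumption):

            apiVersion: v1
            kind: ConfigMap
            metadata:
              name: cce-ip-masq-agent
              namespace: kube-system
              labels:
                addonmanager.kubernetes.io/mode: EnsureExists
            data:
              config: |
                nonMasqueradeCIDRs:
                  - 10.0.0.0/8
                  - 172.16.0.0/12
                  - 192.168.0.0/16
                  - fc00::/7
                masqOutBound: false
                masqOutBoundIPv6: false
                masqLinkLocal: false
                masqLinkLocalIPv6: false
                resyncInterval: 60s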

          Operation Steps

          Check Quota Limit

          The LoadBalancer Service in directly-connected Pod mode uses an application-type BLB, whose backend can only be associated with a limited number of Pods by default. If the default quota fails to meet your requirements, submit a ticket to raise the backend quantity limit.
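
          To gauge how many Pod backends a Service will register against this quota, you can count its Endpoints addresses (the Service name below is illustrative):

            kubectl get endpoints pod-direct-service-example \
              -o jsonpath='{range .subsets[*].addresses[*]}{.ip}{"\n"}{end}' | wc -l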

          Open Security Group Port

          When the CCE cluster uses a VPC network as the container network, you must allow traffic to the destination port (the Service's targetPort) in the security group bound to the cluster nodes. Otherwise, packets cannot reach the Pod.

          When the CCE cluster uses VPC-CNI as the container network, you must allow traffic to the destination port (the Service's targetPort) in the security group bound to the ENI elastic network card. Otherwise, packets cannot reach the Pod.

          Create Service

          When you create a LoadBalancer Service in directly-connected Pod mode, you need to add service.beta.kubernetes.io/cce-load-balancer-backend-type: "eni" to the annotations property.
          A YAML example is as follows:

          apiVersion: v1
          kind: Service
          metadata:
            name: pod-direct-service-example
            annotations:
              prometheus.io/scrape: "true"
              service.beta.kubernetes.io/cce-load-balancer-backend-type: "eni"
          spec:
            selector:
              app: nginx
            type: LoadBalancer
            sessionAffinity: None
            ports:
            - name: nginx
              protocol: TCP
              port: 80
              targetPort: 80
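
          The selector app: nginx in the example assumes a matching workload exists. A minimal Deployment it could select is sketched below (the image and replica count are illustrative):

          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx
          spec:
            replicas: 2
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.25
                  ports:
                  - containerPort: 80

          After applying both manifests, running kubectl get service pod-direct-service-example shows the BLB address in the EXTERNAL-IP column once provisioning completes.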

          FAQs

          Pod Zero-disruption Rolling Update

          An LB Service that uses directly-connected Pods may suffer service interruption during a rolling update of the Pods.
          Reason
          When a Pod is terminated, the following events occur:

          1. The Pod enters the Terminating state;
          2. The Pod is removed from the Endpoints; the LB Controller watches Endpoints changes and removes the Pod backend from the application BLB;
          3. The Pod is destroyed.

          Steps 2 and 3 are performed in parallel. If the Pod is destroyed before the LB Controller has removed it from the BLB backend, traffic may still be forwarded to the destroyed Pod, causing a service interruption.

          Handling Method

          Set a preStop hook for the Pod that sleeps for a while (usually 30 seconds), so the Pod is destroyed only after the application BLB has removed it from the backend.
          The reference is as follows:
                lifecycle:
                  preStop:
                    exec:
                      command: ["sleep", "30"]
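
          For context, a sketch of where this hook sits in a Pod template follows (the container name and image are illustrative). The container image must include a sleep binary, and terminationGracePeriodSeconds must be at least as long as the sleep:

            spec:
              terminationGracePeriodSeconds: 60
              containers:
              - name: nginx
                image: nginx:1.25
                lifecycle:
                  preStop:
                    exec:
                      command: ["sleep", "30"]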