
          CCE Best Practices-Guestbook Building

          This document describes how to build a simple web application on the Cloud Container Engine (CCE) service of Baidu AI Cloud.

          Process Overview

          The container engine is an elastic, highly available container cluster service of Baidu AI Cloud based on Kubernetes and Docker. You can easily deploy your business applications in a purchased cluster. Taking the classic front-end application GuestBook as an example, this document walks through the overall process of cluster building and application deployment.

          Guestbook building steps

          1. Account Registration and Real-Name Verification on Baidu AI Cloud

          2. Create Container Cluster

          3. Create Image Library

          4. Configure Kubectl Command Line Tool

          5. (Optional) Configure Kubernetes-Dashboard

          6. Create GuestBook with kubectl

          7. Bind Domain Name and Provide Services

          Account Registration and Real-Name Verification on Baidu AI Cloud

          To register a Baidu AI Cloud account, please refer to Register Baidu Account to complete the registration.

          Please complete "Enterprise Authentication" or "Individual Authentication" according to your situation. Please refer to Authentication Process for details.

          If you have not completed identity verification when creating a CCE cluster, you can click the authentication prompt button on the page to complete the related operations on the identity verification page.

          Create Container Cluster

          A container cluster is a resource assembly composed of a set of servers, together with the cluster capabilities of Kubernetes. You can build highly available business applications in the cluster. The creation process of a container cluster is as follows:

          1. Log in to Baidu AI Cloud official website.
          2. After logging in, select "Product Service > Cloud Container Engine (CCE)" to enter the page of "Cluster List".
          3. (Optional) The "Cluster List" page displays the names/IDs, status, regions, and other information of all created CCE clusters; clusters can be searched by name.
          4. (Optional) Select and switch regions according to your actual needs.

            • Clusters can be created in different regions as required. If you want to create clusters in multiple regions, please refer to Region Selection Introduction.
            • The clusters created in different regions are mutually independent.
          5. Click "Create Cluster" to enter the interface of "Create Cluster".

          6. Enter a cluster name and fill in the related configuration information as required.

          Cluster configuration

          Configuration Note:

          • The quota for a single region is 5 clusters, and the quota for each cluster is 20 nodes.
          • The cluster name cannot currently be modified, so please enter it carefully. A custom cluster name must follow the rules "letters, numbers, and special characters such as -_/., starting with a letter, length: 1-64".
          • In the cluster configuration, the node subnet selection box for an availability zone is displayed only after you tick that availability zone. The availability zone is required; the node subnet belongs to the node network, and the node subnets of different availability zones must not conflict with each other.
          • The container network is an independent address space, and must not overlap with the node network, the node subnet, or the container networks of other clusters.

          Node configuration

          Nodes are the individual server resources that make up the cluster; multiple CPU and GPU types are available. The containers of the business applications you create later are distributed across the nodes to build a highly available environment.

          Select a cloud server type according to your business requirements. If the common type is selected, the configuration details are as shown in the figure below:

          If a GPU instance is selected, the detailed configuration is as follows:

          Configuration Note: The operating system supports the images with an integrated GPU driver provided by Baidu AI Cloud. Users may also create a GPU custom image on their own BCC instances; to use such a custom image, users should Manually Set the Integrated GPU Driver. GPU instances provide local disks of a fixed size, and independent selection is temporarily not supported. The details are shown in the table below:

          GPU card model                          GPUs   Cores   Memory (GB)   Disk (GB)
          NVIDIA Tesla K40                        1      6       40            200
          NVIDIA Tesla K40                        2      12      80            400
          NVIDIA Tesla K40                        4      24      160           800
          NVIDIA deep learning development card   1      6       40            200
          NVIDIA deep learning development card   2      12      80            400
          NVIDIA deep learning development card   4      24      160           800

          Storage

          Configuration Note:

          • Users can choose to directly create a CDS disk or create a CDS disk from a snapshot. For how to choose between the SSD cloud disk, high-performance cloud disk, and common cloud disk, please View the Detailed Description.
          • Currently, created CDS cloud disks can be automatically formatted and mounted to the directory /data, and the container storage path can be designated as /data/docker (this function only supports the first CDS cloud disk).

          Elastic resource

          Configuration Note:

          • Purchase of elastic public network IP: You can choose whether to purchase an elastic IP as required. Without an elastic IP, you cannot use the command line tool or the Kubernetes Dashboard of Baidu AI Cloud to manage the cluster.
          • If you tick "Purchase an Elastic IP", you need to configure the EIP. Please refer to EIP Instance for details.
          • You can select either "Charging by Traffic" or "Charging by Bandwidth" as the billing standard for public network bandwidth.

          System information

          Configuration Note:

          • Instance name: The instance name can be generated randomly or customized. A custom name must follow the rules "letters, numbers, and special characters such as -_/., starting with a letter, length: 1-64".
          • Administrator password: The password can be generated randomly or customized. A custom password must be 8-16 characters and contain a mix of letters, numbers, and symbols; only the symbols !@#$%^*() are allowed.
          • Security group: The default security group is used; the access rules of the default/custom security group can also be changed after the instance is created. Refer to Management Security Group for details.

          Fill in the purchase information.

          Configuration Note:

          • Purchase duration: Select a duration of 1-9 months or 1-3 years. The duration cycle is counted from the instance creation day. For example, if BCC services are purchased for one month on May 20, 2015, the usage cycle lasts from May 20, 2015 to June 20, 2015 (a unit month defaults to 30 days).
          • Postpaid child nodes do not currently support ICP filing.

          7. After confirming the selected configuration information and fees, click "Pay" to proceed to payment.

          Note: If you have cash coupons of Baidu AI Cloud, you can use them for settlement to deduct the fees. If the cash coupons are insufficient, you can pay the remainder with a bank card.

          8. Click "Confirm Payment" to complete the payment. After the payment, the system creates the cluster in the background.

          9. Click the "Management Console" link to enter the cluster list page and view the cluster creation status; a successfully created cluster shows the status "Running".

          Create Image Library

          The image repository is an exclusive Docker image storage space for users and the template service for running containers. It supports creating business applications from images built by the users themselves, DockerHub images, and Baidu AI Cloud images.

          Open Image Library

          When you enter the image library for the first time, the page shown in the following figure pops up. Fill in a user name and set a password as required.

          Note:

          User name: The user name of the image library must be composed of 4-30 lowercase English letters and numbers.

          Password: The password is the token used to log in to the Docker image library.

          Create Namespace

          1. Select "Namespace" under the image library category, and then click "Create Namespace" in the namespace list.

          2. Fill in the name of the space to be created and select the space type, then click "Confirm" to create the namespace.

          Note:

          The naming rules of the namespace are: composed of lowercase letters, numbers and . _ -; it must start and end with a lowercase letter or number; length: 1-65.

          (Optional) Create Image

          1. Click the "Create" button on the image list page.

          2. Select the namespace, fill in the image name and version number, and then click "Confirm".

          Note:

          1. The image name and version number must follow the rules: "composed of lowercase letters, numbers and . _ -, starting and ending with a lowercase letter or number, length: 1-65"; Chinese characters are not supported. The combination of namespace, image name, and version number must not be repeated.

          2. If users don't choose to create an image beforehand, the basic information above is also generated automatically after an image is pushed to the corresponding namespace.

          Push the Image to the Image Library

          Log in to Baidu AI Cloud image library

              $ sudo docker login --username=[username] hub.baidubce.com 

          Username: the user name of the image library, i.e., the user name filled in when the image library service was activated. Then enter the password to log in.

          Upload image

              $ sudo docker tag [ImageId] hub.baidubce.com/[namespace]/[ImageName]:[Image Version No.] 
              $ sudo docker push hub.baidubce.com/[namespace]/[ImageName]:[Image Version No.] 
          • The ImageId and image version number are supplemented according to the image information.
          • The namespace is the name space to be filled in when the image library is opened.
          • The ImageName is the image name created in the console.
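For example, with hypothetical placeholder values filled in (none of the names below are real resources), the tag-and-push flow can be sketched like this:

```shell
# Hypothetical placeholder values -- substitute your own.
REGISTRY=hub.baidubce.com
NAMESPACE=mynamespace      # namespace created when activating the image library
IMAGE=guestbook-frontend   # image name created in the console
VERSION=v1                 # image version number

# Build the full image reference used by both tag and push.
FULL_REF="${REGISTRY}/${NAMESPACE}/${IMAGE}:${VERSION}"
echo "${FULL_REF}"
# -> hub.baidubce.com/mynamespace/guestbook-frontend:v1

# Then tag and push (requires a prior `docker login`):
# sudo docker tag <ImageId> "${FULL_REF}"
# sudo docker push "${FULL_REF}"
```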

          Configure Kubectl Command Line Tool

          Kubernetes is Google's open-source container cluster management system, an open-source version of Borg, the large-scale container management technology Google has used for years, and one of the most important CNCF projects. Its major functions include:

          • Container-based application deployment, maintenance and rolling upgrade.
          • Load balance and service discovery
          • Cluster scheduling across machines and regions
          • Automatic scaling
          • Stateless services and stateful services
          • Extensive Volume support
          • The plug-in mechanism guarantees the expansibility.

          At present, the Baidu AI Cloud container service is based on kubernetes v1.8.6. If users need to connect to the Baidu AI Cloud kubernetes cluster from their local personal computers, they need to use the kubernetes command-line client kubectl. The installation steps of kubectl are as follows.

          Download the corresponding kubectl client from the download page of the kubernetes release. For other information about kubectl, please see the official kubernetes Install and Set Up kubectl documentation.

          Note: Before downloading kubectl, users need to log in to their servers and use the command kubectl version to view the version number of the cluster they created, and then select and download the corresponding kubectl version. E.g.: If the cluster version number of the user is 1.8, the kubectl to download is 1.8.

          Operation Steps in Windows Environment

          1. After creating the cluster, download the Windows version of the kubectl tool; for example, the 64-bit tool is as below:


          2. After downloading the tool, unzip it to any directory, such as disk D.

          3. Enter your user folder under the directory C:\users\. For example, if the user name is foo, the folder is C:\users\foo. In that directory, create a folder named .kube.

          4. Download the cluster configuration file on the cluster list page, put it in the .kube folder created in the previous step, and rename it to config.

          5. Open the Windows command prompt.

          6. Enter the directory where the kubectl file is stored, and run kubectl.exe get node to view the nodes of the cluster.

          Operation Steps in Linux Environment

          1. Unzip the downloaded file, add execute permission to kubectl, and put it in your PATH.

              chmod +x ./kubectl
              sudo mv ./kubectl /usr/local/bin/kubectl 

          2. Configure kubectl and download the cluster credentials: download the cluster configuration file from the cluster page and put it in kubectl's default configuration path.

              mkdir -p ~/.kube
              mv kubectl.conf ~/.kube/config

          3. After configuration, you can use kubectl to access the kubernetes cluster from the local computer.

              kubectl get node
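As a side note on why the credentials file is renamed to config: kubectl resolves its configuration file as sketched below (this is standard kubectl behavior; the snippet only reproduces the path logic and requires no cluster):

```shell
# kubectl reads the file named by $KUBECONFIG if that variable is set,
# otherwise it falls back to ~/.kube/config -- which is why the
# downloaded credentials file is renamed to "config" and placed
# in the ~/.kube directory.
CONFIG_PATH="${KUBECONFIG:-$HOME/.kube/config}"
echo "kubectl will read: ${CONFIG_PATH}"
```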

          (Optional) Configure Kubernetes-Dashboard

          Kubernetes Dashboard is a web-based graphical user interface that supports deployment, monitoring, and management of business applications.

          Precondition

          Before using the Kubernetes Dashboard locally, users need to configure kubectl so that it can connect to the Kubernetes cluster of Baidu AI Cloud.

          You can locally create one Dashboard application in the kubernetes cluster by the kubectl tool. The operation steps are as follows.

          Create Dashboard Services in Windows Environment

          1. First, create the dashboard.yaml file in the same directory where the unzipped kubectl.exe is stored. For example, you can use Notepad to create it:

            The corresponding `dashboard.yaml` file is: 
            
            
              apiVersion: extensions/v1beta1
              kind: Deployment
              metadata:
                labels:
                  app: kubernetes-dashboard
                name: kubernetes-dashboard
                namespace: kube-system
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    app: kubernetes-dashboard
                template:
                  metadata:
                    labels:
                      app: kubernetes-dashboard
                  spec:
                    containers:
                    - name: kubernetes-dashboard
                      image: hub.baidubce.com/public/dashboard:v1.5.0-caas
                      imagePullPolicy: Always
                      ports:
                      - containerPort: 9090
                        protocol: TCP
                      volumeMounts:
                      - name: "kubeconfig"
                        mountPath: "/etc/kubernetes/"
                        readOnly: true
                      args:
                        - --kubeconfig=/etc/kubernetes/config
                      livenessProbe:
                        httpGet:
                          path: /
                          port: 9090
                        initialDelaySeconds: 30
                        timeoutSeconds: 30
                    volumes:
                    - name: "kubeconfig"
                      hostPath:
                        path: "/root/.kube/"
              ---
              kind: Service
              apiVersion: v1
              metadata:
                labels:
                  app: kubernetes-dashboard
                name: kubernetes-dashboard
                namespace: kube-system
              spec:
                type: NodePort
                ports:
                - port: 80
                  targetPort: 9090
                selector:
                  app: kubernetes-dashboard
          2. Then enter kubectl.exe create -f dashboard.yaml to create the dashboard.

          3. Check whether the creation succeeded with the kubectl.exe get pod --all-namespaces command.

          4. Enter kubectl.exe proxy to open a proxy, then access 127.0.0.1:8001/ui in the browser to reach the dashboard.

          Create Dashboard Services in Linux Environment

          Enter the following commands to create Dashboard services.

          $ kubectl create -f dashboard.yaml 
          deployment "kubernetes-dashboard" created 
          service "kubernetes-dashboard" created 

          The corresponding dashboard.yaml file is:

              apiVersion: extensions/v1beta1
              kind: Deployment
              metadata:
                labels:
                  app: kubernetes-dashboard
                name: kubernetes-dashboard
                namespace: kube-system
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    app: kubernetes-dashboard
                template:
                  metadata:
                    labels:
                      app: kubernetes-dashboard
                  spec:
                    containers:
                    - name: kubernetes-dashboard
                      image: hub.baidubce.com/public/dashboard:v1.5.0-caas
                      imagePullPolicy: Always
                      ports:
                      - containerPort: 9090
                        protocol: TCP
                      volumeMounts:
                      - name: "kubeconfig"
                        mountPath: "/etc/kubernetes/"
                        readOnly: true
                      args:
                        - --kubeconfig=/etc/kubernetes/config
                      livenessProbe:
                        httpGet:
                          path: /
                          port: 9090
                        initialDelaySeconds: 30
                        timeoutSeconds: 30
                    volumes:
                    - name: "kubeconfig"
                      hostPath:
                        path: "/root/.kube/"
              ---
              kind: Service
              apiVersion: v1
              metadata:
                labels:
                  app: kubernetes-dashboard
                name: kubernetes-dashboard
                namespace: kube-system
              spec:
                type: NodePort
                ports:
                - port: 80
                  targetPort: 9090
                selector:
                  app: kubernetes-dashboard

          After creating the dashboard, use the kubectl proxy command to open a proxy and connect to the Kubernetes API Server.

          	 kubectl proxy 

          After opening the proxy, you can access http://localhost:8001/ui in the local browser, and connect to the Dashboard.

          Note:

          localhost refers to the local machine (IP address 127.0.0.1), not the virtual machine used by the user.

          Create GuestBook with kubectl

          The GuestBook application in the practice consists of two parts:

          • Web front-end services.
          • Redis storage system, including redis master node and redis slave node.

          Create Redis Master Pod

          redis-master: the Redis service used by the front-end web application to write comments.

          Note: All the yaml files in this walkthrough are saved in the same directory as kubectl.

          Run the following command to apply the redis-master-deployment.yaml file to the Kubernetes cluster and create the redis-master.

              kubectl apply -f redis-master-deployment.yaml 

          The corresponding redis-master-deployment.yaml file is:

              apiVersion: extensions/v1beta1
              kind: Deployment
              metadata:
                name: redis-master
              spec:
                replicas: 1
                template:
                  metadata:
                    labels:
                      app: redis
                      role: master
                      tier: backend
                  spec:
                    containers:
                    - name: master
                      image: hub.baidubce.com/public/guestbook-redis-master:e2e  # or just image: redis
                      resources:
                        requests:
                          cpu: 100m
                          memory: 100Mi
                      ports:
                      - containerPort: 6379

          After creation, use the following commands to verify whether the Pod is running:

          kubectl get pods 

          The response should be similar to:

          NAME                            READY     STATUS    RESTARTS   AGE 
          redis-master-1068406935-3lswp   1/1       Running   0          28s 

          When STATUS is Running, the pod is running normally.
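If STATUS shows something other than Running, kubectl describe pod and kubectl logs are the usual next steps. As a quick scripted check, the STATUS column can be extracted from the output; a sketch using the sample line shown above as input:

```shell
# Extract the STATUS column (3rd field) from one line of
# `kubectl get pods` output; the input here is the sample line
# from the expected output above.
LINE="redis-master-1068406935-3lswp   1/1       Running   0          28s"
STATUS=$(echo "$LINE" | awk '{print $3}')
echo "$STATUS"   # -> Running
```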

          Create Redis Master Service

          A Service is a set of identically configured containers together with the rules for accessing them; it is the deployment unit in a microservice architecture and the smallest running unit of the user's business application.

          Enter the following command to create a Service associated with the redis-master Pod.

          kubectl apply -f redis-master-service.yaml 

          The corresponding redis-master-service.yaml file is:

              apiVersion: v1
              kind: Service
              metadata:
                name: redis-master
                labels:
                  app: redis
                  role: master
                  tier: backend
              spec:
                ports:
                - port: 6379
                  targetPort: 6379
                selector:
                  app: redis
                  role: master
                  tier: backend

          Note: the targetPort attribute designates the port on which the container application in the Pod listens.
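To make the distinction concrete, here is a minimal config fragment (illustration only; the values mirror redis-master-service.yaml, where both ports happen to be the same):

```yaml
spec:
  ports:
  - port: 6379        # port exposed by the Service on its cluster IP
    targetPort: 6379  # port the container listens on inside the Pod
```

Clients inside the cluster connect to the Service on port; the Service forwards that traffic to targetPort on the Pods matched by the selector.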

          Then use the next command to verify whether Redis Master Service is running:

          kubectl get service 

          The response should be similar to:

          NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE 
          kubernetes     10.0.0.1     <none>        443/TCP    1m 
          redis-master   10.0.0.151   <none>        6379/TCP   8s 

          Create Redis Slave Pod

          redis-slave: the Redis service used by the front-end web application to read comments; its data stays synchronized with redis-master.

          Apply the redis-slave-deployment.yaml file to the Kubernetes cluster to create the redis-slave.

          	 kubectl apply -f redis-slave-deployment.yaml 

          The corresponding redis-slave-deployment.yaml file is:

              apiVersion: extensions/v1beta1
              kind: Deployment
              metadata:
                name: redis-slave
              spec:
                selector:
                  matchLabels:
                    app: redis
                    role: slave
                    tier: backend
                replicas: 2
                template:
                  metadata:
                    labels:
                      app: redis
                      role: slave
                      tier: backend
                  spec:
                    containers:
                    - name: slave
                      image: hub.baidubce.com/public/guestbook-redis-slave:v1
                      resources:
                        requests:
                          cpu: 100m
                          memory: 100Mi
                      env:
                      - name: GET_HOSTS_FROM
                        value: dns
                        # Using `GET_HOSTS_FROM=dns` requires your cluster to
                        # provide a DNS service. As of Kubernetes 1.3, DNS is a
                        # built-in service launched automatically. If the cluster
                        # you are using does not have a built-in DNS service, you
                        # can instead access an environment variable to find the
                        # master service's host. To do so, comment out the
                        # 'value: dns' line above, and uncomment the line below:
                        # value: env
                      ports:
                      - containerPort: 6379
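When GET_HOSTS_FROM is set to env instead of dns, the slave locates the master through the environment variables Kubernetes injects into each Pod for every active Service. A sketch of that naming convention, simulated with illustrative values (inside a real Pod these variables are set automatically):

```shell
# Kubernetes injects variables such as REDIS_MASTER_SERVICE_HOST and
# REDIS_MASTER_SERVICE_PORT for the "redis-master" Service.
# The values below are illustrative (the cluster IP from the sample
# `kubectl get service` output earlier in this document).
REDIS_MASTER_SERVICE_HOST=10.0.0.151
REDIS_MASTER_SERVICE_PORT=6379
echo "master at ${REDIS_MASTER_SERVICE_HOST}:${REDIS_MASTER_SERVICE_PORT}"
# -> master at 10.0.0.151:6379
```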

          After the above operation, execute the next code to query the Pods list to verify whether Redis Slave Pods are running:

          kubectl get pods 

          The response should be similar to:

          NAME                            READY     STATUS              RESTARTS   AGE 
          redis-master-1068406935-3lswp   1/1       Running             0          1m 
          redis-slave-2005841000-fpvqc    1/1       Running             0          6s 
          redis-slave-2005841000-phfv9    1/1       Running             0          6s 

          Create Redis Slave Service

          Enter the following command to create the redis-slave Service. The Redis instance in each of the two redis-slave replicas keeps its data synchronized with redis-master and, together with redis-master, forms a Redis setup capable of both reads and writes.

          kubectl apply -f redis-slave-service.yaml 

          The corresponding redis-slave-service.yaml file is:

              apiVersion: v1
              kind: Service
              metadata:
                name: redis-slave
                labels:
                  app: redis
                  role: slave
                  tier: backend
              spec:
                ports:
                - port: 6379
                selector:
                  app: redis
                  role: slave
                  tier: backend

          After the above operation, execute the following commands to query the service list to verify whether Redis Slave Service is running:

          kubectl get services

          The response should be similar to:

          NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE 
          kubernetes     10.0.0.1    <none>        443/TCP    2m 
          redis-master   10.0.0.151  <none>        6379/TCP   1m 
          redis-slave    10.0.0.223  <none>        6379/TCP   6s 

          Create Frontend Pod

          Apply the frontend-deployment.yaml file to the Kubernetes cluster to create the frontend.

          kubectl apply -f frontend-deployment.yaml 

          The frontend-deployment.yaml file is:

              apiVersion: apps/v1beta2
              kind: Deployment
              metadata:
                name: frontend
              spec:
                selector:
                  matchLabels:
                    app: guestbook
                    tier: frontend
                replicas: 3
                template:
                  metadata:
                    labels:
                      app: guestbook
                      tier: frontend
                  spec:
                    containers:
                    - name: php-redis
                      image: hub.baidubce.com/public/guestbook-frontend:v4
                      resources:
                        requests:
                          cpu: 100m
                          memory: 100Mi
                      env:
                      - name: GET_HOSTS_FROM
                        value: dns
                        # Using `GET_HOSTS_FROM=dns` requires your cluster to
                        # provide a DNS service. As of Kubernetes 1.3, DNS is a
                        # built-in service launched automatically. If the cluster
                        # you are using does not have a built-in DNS service, you
                        # can instead access an environment variable to find the
                        # master service's host. To do so, comment out the
                        # 'value: dns' line above, and uncomment the line below:
                        # value: env
                      ports:
                      - containerPort: 80

          After the above operation, execute the following commands to query the Pods list to verify whether the three front-end duplicates are running:

          	 kubectl get pods -l app=guestbook -l tier=frontend 

          The response should be similar to:

          NAME                        READY     STATUS    RESTARTS   AGE 
          frontend-3823415956-dsvc5   1/1       Running   0          54s 
          frontend-3823415956-k22zn   1/1       Running   0          54s 
          frontend-3823415956-w9gbt   1/1       Running   0          54s 

          Create Frontend Service

          The purpose of creating the Frontend Service is to map a port accessible from the public network to the frontend application in the Kubernetes cluster.

          Note: The frontend-service.yaml below uses type: LoadBalancer, which works with the load balancer provided by Baidu AI Cloud. If your cluster does not support a load balancer, change the type to NodePort instead.

          Enter the following commands to create Frontend Service.

          	 kubectl apply -f frontend-service.yaml 

          The frontend-service.yaml file is:

              apiVersion: v1
              kind: Service
              metadata:
                name: frontend
                labels:
                  app: guestbook
                  tier: frontend
              spec:
                # type: LoadBalancer automatically creates an external
                # load-balanced IP for the frontend service (supported on
                # Baidu AI Cloud). If your cluster does not support a load
                # balancer, change this to type: NodePort.
                type: LoadBalancer
                ports:
                - port: 80
                selector:
                  app: guestbook
                  tier: frontend

          After the above operation, execute the following command to query the service list and verify whether the front-end service is running:

          	 kubectl get services 

          The response should be similar to:

          	 NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE 
          	 frontend       10.0.0.112  <nodes>       80:32372/TCP   6s 
          	 kubernetes     10.0.0.1    <none>        443/TCP        4m 
          	 redis-master   10.0.0.151  <none>        6379/TCP       2m 
          	 redis-slave    10.0.0.223  <none>        6379/TCP       1m 

          View Frontend Service LoadBalancer

          Run the following command to acquire the Frontend Service IP address.

          	 kubectl get service frontend 

          The response should be similar to:

              NAME       CLUSTER-IP      EXTERNAL-IP        PORT(S)        AGE 
          	 frontend   10.0.0.112      180.76.174.157     80:32372/TCP   1m 

          Copy the external IP address and load the page in the browser to view your Guestbook.
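A quick reachability check can also be run from the command line. The IP below is the example value from the sample output above; substitute your own EXTERNAL-IP:

```shell
# Example EXTERNAL-IP taken from the sample output above.
EXTERNAL_IP=180.76.174.157
URL="http://${EXTERNAL_IP}/"
echo "${URL}"
# -> http://180.76.174.157/

# An HTTP check could then be run with, for example:
# curl -I "${URL}"
```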

          Bind the Domain Name and Provide Services

          Purchase Domain Name

          If you already have a registered domain name, you can directly bind it in the next step. If you have no domain name, refer to Purchase of Domain Name.

          Bind Domain Name

          1. Log in to the management console to enter Domain Name Management.
          2. Select the domain name to be bound to the public IP of the frontend service, and click "Resolve" in the operation bar.

          3. In the pop-up "Add Resolution" dialog, select A as the record type, and fill in the public IP of the frontend service as the record value.

          4. Use the domain name to access the services; the results are as shown in the figure below:
