Log Management
Overview
The CCE log management feature helps users manage business logs and container logs in Kubernetes. With log management, users can ship logs from Kubernetes to an external Elasticsearch service or to BOS storage, in order to analyze the logs or retain them long term.
1.Create Log Rules
In the left navigation bar, click "Monitoring Log > Log Management" to open the log rule list page. Click "New Log Rule" on the log rule list page:
- Rule name: a custom name used to identify and describe the log rule.
- Log type: "Container Standard Output" refers to the logs a container writes to stdout/stderr while running, which can be viewed by running `docker logs`; "Container Internal Log" refers to logs that the business process writes to a file path inside the container.
- The cluster name and log source locate the objects whose logs should be collected. If you select "Specify Container", you can choose from five resource dimensions: Deployment, Job, CronJob, DaemonSet, and StatefulSet.
- The address, port, index, encryption, and other ES settings tell the CCE log collection service where to ship logs. You can fill in an Elasticsearch service either inside or outside the cluster, but make sure the CCE cluster can establish a connection to it.
- To push logs to BOS storage, first create a Secret resource that can connect to BOS in the following format, then select the Secret's namespace and name when choosing BOS storage. If the nodes can reach the external network, the BOS endpoint can be in any region; otherwise it must be in the same region as the CCE cluster.
apiVersion: v1
kind: Secret
metadata:
  name: bos-secret
data:
  bosak: dXNlcm5hbWU= # echo -n "bosak...."| base64
  bossk: cGFzc3dvcmQ= # echo -n "bossk...."| base64
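The encoded values above are placeholders. As a minimal sketch of producing the base64 strings (the AK/SK literals here are made-up placeholders, not real credentials):

```shell
# Base64-encode the BOS access key and secret key.
# "myaccesskey" / "mysecretkey" are placeholders; substitute your real AK/SK.
bosak_b64=$(echo -n "myaccesskey" | base64)
bossk_b64=$(echo -n "mysecretkey" | base64)

# Paste these values into the "data" section of the Secret manifest.
echo "bosak: ${bosak_b64}"
echo "bossk: ${bossk_b64}"
```

Alternatively, `kubectl create secret generic bos-secret --from-literal=bosak=<AK> --from-literal=bossk=<SK> -n <namespace>` performs the base64 encoding for you.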
2.Configure Kubernetes Resources
After configuring the log management rules, you need to ensure that the logs in Kubernetes can be output correctly, so pass the specified environment variables when creating the related Kubernetes resources:
- Pass the environment variable cce_log_stdout with the value "true" to collect the container's standard output. If standard output collection is not needed, this variable can be omitted.
- Pass the environment variable cce_log_internal with the value set to the absolute path of the log file inside the container. Fill in a file path here, not a directory.
When collecting files inside the container, you need to mount the directory containing the log files to the host as an emptyDir volume. See the following YAML example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
  replicas: 4
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: "tomcat:7.0"
        env:
        - name: cce_log_stdout
          value: "true"
        - name: cce_log_internal
          value: "/usr/local/tomcat/logs/catalina.*.log"
        volumeMounts:
        - name: tomcat-log
          mountPath: /usr/local/tomcat/logs
      volumes:
      - name: tomcat-log
        emptyDir: {}
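As a sketch, you can check that the environment variables actually reached the container. The manifest file name `tomcat.yaml` is an assumption; the cluster-side commands are shown as comments, and the same grep filter is demonstrated locally on a canned env listing:

```shell
# On a cluster with kubectl access (shown as comments):
#   kubectl apply -f tomcat.yaml
#   kubectl exec deploy/tomcat -- env | grep '^cce_log'
# The filter should print exactly the two log-collection variables.
# Demonstrated here on a canned env listing:
printf 'PATH=/usr/bin\ncce_log_stdout=true\ncce_log_internal=/usr/local/tomcat/logs/app.log\n' \
  | grep '^cce_log'
```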
3.Modify and Delete Log Rules
After a log rule is created, you can modify or delete it at any time. Click "Modify" to re-edit an existing rule. The edit page works the same way as the creation page, except that the cluster and log type cannot be changed.
4.Deploy Elasticsearch and Kibana in the CCE Cluster
Deploy Elasticsearch in the CCE cluster with the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
---
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v6.3.0
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.3.0
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.3.0
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: hub.agilecloud.com/jpaas-public/elasticsearch:v6.3.0
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
After the deployment succeeds, a Service named elasticsearch-logging is created, as shown in the following figure. When creating a log rule, fill in the Service name as the ES address and the Service port as the ES port:
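If the log rule and the Elasticsearch Service are in different namespaces, the fully qualified Service DNS name can also be used as the ES address. A sketch, assuming the default cluster domain cluster.local:

```shell
# Build the in-cluster DNS name of the elasticsearch-logging Service.
# Assumes the default Kubernetes cluster domain "cluster.local".
svc=elasticsearch-logging
ns=kube-system
es_addr="${svc}.${ns}.svc.cluster.local"
echo "ES address: ${es_addr}, port: 9200"
```

On a cluster, `kubectl get svc -n kube-system elasticsearch-logging` confirms the Service name and port.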
Use the following YAML to deploy Kibana. After the deployment succeeds, use the LoadBalancer Service named kibana-logging to access the Kibana service:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: hub.agilecloud.com/jpaas-public/kibana:v6.3.0
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch-logging:9200
        - name: SERVER_BASEPATH
          value: ""
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
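Once the LoadBalancer has been assigned an external IP, Kibana is reachable on port 5601. A sketch of building the access URL; the IP shown is a documentation placeholder, and the kubectl command (which requires cluster access) is shown as a comment:

```shell
# On a cluster, the external IP can be read with:
#   kubectl get svc kibana-logging -n kube-system \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
kibana_ip="203.0.113.10"   # placeholder for the assigned LoadBalancer IP
kibana_url="http://${kibana_ip}:5601"
echo "${kibana_url}"
```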
Note:
In a production environment, it is recommended to use the cloud platform's Elasticsearch service or to build a dedicated Elasticsearch cluster.