SHOW ROUTINE LOAD
Description
Used to display Routine Load job information.
SHOW [ALL] ROUTINE LOAD
[FOR [db.]job_name]
- ALL: if the ALL keyword is specified, all jobs, including stopped ones, are displayed. Otherwise, only jobs that are not in the STOPPED state are displayed.
- db: if no database is specified, jobs in the current database are displayed by default.
The statement returns results like the following:
mysql> show routine load\G
*************************** 1. row ***************************
Id: 10280
Name: job1
CreateTime: 2020-12-10 19:32:58
PauseTime: NULL
EndTime: NULL
DbName: default_cluster:db1
TableName: tbl1
State: RUNNING
DataSourceType: KAFKA
CurrentTaskNum: 1
JobProperties: {"partitions":"*","columnToColumnExpr":"","maxBatchIntervalS":"10","whereExpr":"*","timezone":"Asia/Shanghai","mergeType":"APPEND","format":"csv","columnSeparator":"','","json_root":"","maxBatchSizeBytes":"104857600","exec_mem_limit":"2147483648","strict_mode":"false","jsonpaths":"","deleteCondition":"*","desireTaskConcurrentNum":"1","maxErrorNum":"0","strip_outer_array":"false","currentTaskConcurrentNum":"1","execMemLimit":"2147483648","num_as_string":"false","maxBatchRows":"200000"}
DataSourceProperties: {"topic":"test","currentKafkaPartitions":"0","brokerList":"127.0.0.1:9094"}
CustomProperties: {}
Statistic: {"receivedBytes":0,"errorRows":0,"committedTaskNum":0,"loadedRows":0,"loadRowsRate":0,"abortedTaskNum":0,"totalRows":0,"unselectedRows":0,"receivedBytesRate":0,"taskExecuteTimeMs":1}
Progress: {"0":"OFFSET_BEGINNING"}
ReasonOfStateChanged:
ErrorLogUrls:
OtherMsg:
1 row in set (0.01 sec)
- Id: unique ID of the job.
- Name: job name.
- CreateTime: job creation time.
- PauseTime: the time when the job was last paused.
- EndTime: job end time.
- DbName: database name.
- TableName: name of the table being loaded.
- State: running state of the job.
  - NEED_SCHEDULE: the job is waiting to be scheduled.
  - RUNNING: the job is running.
  - PAUSED: the job is paused.
  - STOPPED: the job has ended.
- DataSourceType: data source type.
- CurrentTaskNum: number of subtasks currently running.
- JobProperties: job configuration details, a string in JSON format.

      {
          "partitions": "*",                  // list of partitions to load; * means unspecified.
          "columnToColumnExpr": "",           // column mapping and conversion expressions; empty means unspecified.
          "maxBatchIntervalS": "10",          // maximum running time of a subtask, in seconds.
          "whereExpr": "*",                   // row filter condition; * means unspecified.
          "timezone": "Asia/Shanghai",        // time zone.
          "mergeType": "APPEND",              // data merge type.
          "format": "csv",                    // format of the loaded data.
          "columnSeparator": "','",           // column separator.
          "json_root": "",                    // JSON root.
          "maxBatchSizeBytes": "104857600",   // maximum number of bytes consumed by a subtask.
          "exec_mem_limit": "2147483648",     // memory limit for a subtask.
          "strict_mode": "false",             // whether strict mode is enabled.
          "jsonpaths": "",                    // JSON paths.
          "deleteCondition": "*",             // delete condition for the marked-delete column; * means unspecified.
          "desireTaskConcurrentNum": "1",     // maximum number of concurrent subtasks expected by the user.
          "maxErrorNum": "0",                 // maximum number of error rows allowed.
          "strip_outer_array": "false",       // whether to expand the outer array for JSON data.
          "currentTaskConcurrentNum": "1",    // current number of concurrent subtasks.
          "num_as_string": "false",           // whether to parse all fields in JSON data as strings.
          "maxBatchRows": "200000"            // maximum number of rows consumed by a subtask.
      }
- CustomProperties: extra properties configured by the user, a string in JSON format.
- Statistic: statistics of the running job, a string in JSON format.

      {
          "receivedBytes": 0,         // bytes of data received.
          "errorRows": 0,             // number of error rows.
          "committedTaskNum": 0,      // number of successfully committed subtasks.
          "loadedRows": 0,            // number of rows loaded.
          "loadRowsRate": 0,          // average number of rows loaded per second.
          "abortedTaskNum": 0,        // number of subtasks that failed or consumed no data.
          "totalRows": 0,             // total number of rows consumed.
          "unselectedRows": 0,        // number of rows filtered out by the where condition.
          "receivedBytesRate": 0,     // bytes of data received per second.
          "taskExecuteTimeMs": 1      // accumulated execution time of subtasks, in milliseconds.
      }
- ReasonOfStateChanged: the reason for the last job state change.
- ErrorLogUrls: when there are error rows, the URLs of the error data from the last three subtasks that produced errors are shown here. The error data can be viewed with the following statement:

      SHOW LOAD WARNINGS ON "your_url";

  See SHOW LOAD for a description of the error messages.
- Progress: consumption progress of the job, a string in JSON format. The key is the Kafka partition ID; the value has the following meanings:
  - OFFSET_BEGINNING: initial state; consumption will start from the beginning and has not started yet.
  - OFFSET_END: initial state; consumption will start from the end and has not started yet.
  - OFFSET_ZERO: initial state; consumption will start from offset 0 and has not started yet.
  - Integer value: the offset already consumed in the corresponding Kafka partition.
- OtherMsg: other information.
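Because Progress values mix the OFFSET_* sentinel strings with numeric offsets, code that consumes this field has to handle both cases. A hedged Python sketch (the helper name is illustrative, not part of Doris):

```python
# The three initial states listed above mean consumption has not started yet.
INITIAL_STATES = {"OFFSET_BEGINNING", "OFFSET_END", "OFFSET_ZERO"}

def consumed_offset(value):
    """Return the consumed offset for a Progress value, or None if the
    partition is still in one of the initial (not-yet-consumed) states."""
    if value in INITIAL_STATES:
        return None
    return int(value)

progress = {"0": "OFFSET_BEGINNING", "1": "12345"}
offsets = {p: consumed_offset(v) for p, v in progress.items()}
print(offsets)  # {'0': None, '1': 12345}
```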
Example
- Show all routine load jobs named test1, including stopped or cancelled jobs. The result may be one or more rows.

      SHOW ALL ROUTINE LOAD FOR test1;

- Show the currently running routine load job named test1.

      SHOW ROUTINE LOAD FOR test1;

- Show all routine load jobs under example_db, including stopped or cancelled jobs. The result may be one or more rows.

      use example_db;
      SHOW ALL ROUTINE LOAD;

- Show all running routine load jobs under example_db.

      use example_db;
      SHOW ROUTINE LOAD;

- Show the currently running routine load job named test1 under example_db.

      SHOW ROUTINE LOAD FOR example_db.test1;

- Show all routine load jobs named test1 under example_db, including stopped or cancelled jobs. The result may be one or more rows.

      SHOW ALL ROUTINE LOAD FOR example_db.test1;
Keywords
SHOW, ROUTINE, LOAD
Best Practices
- Observe the progress of load jobs.

  The consumption progress of a load job can be tracked through the following fields in the returned result:

  - Statistic

    Observe the changes of committedTaskNum and abortedTaskNum in Statistic. If committedTaskNum keeps increasing, the job is consuming data normally. If abortedTaskNum keeps increasing, there may be no data to consume in Kafka, the data in Kafka may not be consumable, or other errors were encountered.

  - Progress

    Observe the offset consumption progress of each Kafka partition to know the overall consumption progress.
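The practice above can be automated by taking two Statistic snapshots some interval apart and comparing the task counters. A sketch, assuming the snapshots have already been obtained by running SHOW ROUTINE LOAD and parsing the Statistic JSON (the function name and messages are illustrative):

```python
def diagnose(prev, curr):
    """Compare two Statistic snapshots of the same job, taken some
    interval apart. committedTaskNum growing means data is being consumed
    and committed; abortedTaskNum growing with no commits suggests Kafka
    has no consumable data or the job is hitting errors."""
    if curr["committedTaskNum"] > prev["committedTaskNum"]:
        return "consuming normally"
    if curr["abortedTaskNum"] > prev["abortedTaskNum"]:
        return "check Kafka: tasks are aborting without commits"
    return "no task activity in this interval"

prev = {"committedTaskNum": 10, "abortedTaskNum": 2}
curr = {"committedTaskNum": 10, "abortedTaskNum": 5}
print(diagnose(prev, curr))  # check Kafka: tasks are aborting without commits
```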