Monitor the TiFlash Cluster
This document describes the monitoring items of TiFlash.
If you use TiUP to deploy the TiDB cluster, the monitoring system (Prometheus & Grafana) is deployed at the same time. For more information, see Overview of the Monitoring Framework.
The Grafana dashboard is divided into a series of sub-dashboards, including Overview, PD, TiDB, TiKV, Node_exporter, and so on, which provide many metrics to help you diagnose cluster issues.
TiFlash has three dashboard panels: TiFlash-Summary, TiFlash-Proxy-Summary, and TiFlash-Proxy-Details. The metrics on these panels indicate the current status of TiFlash. The TiFlash-Proxy-Summary and TiFlash-Proxy-Details panels mainly show information about the Raft layer; their metrics are detailed in Key Monitoring Metrics of TiKV.
The following sections introduce the default monitoring information of TiFlash-Summary.
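Every panel described below is backed by a metric stored in the Prometheus instance deployed with the cluster, so you can also query the same data outside Grafana. The following is a minimal sketch that reads one such metric through the Prometheus HTTP API; the Prometheus address and the metric name `tiflash_coprocessor_request_count` are assumptions for illustration and might differ in your deployment.

```python
import requests

# Assumption: the address of the Prometheus instance deployed by TiUP.
PROMETHEUS = "http://127.0.0.1:9090"

def instant_query(promql: str):
    """Run an instant PromQL query via the Prometheus HTTP API and return the result series."""
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body}")
    return body["data"]["result"]

# Example: per-instance coprocessor request rate over the last minute.
# The metric name is an assumption based on the panels described in this document.
for series in instant_query('sum(rate(tiflash_coprocessor_request_count[1m])) by (instance)'):
    print(series["metric"].get("instance"), series["value"][1])
```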
Server
- Store size: The storage size used by each TiFlash instance.
- Available size: The storage size available for each TiFlash instance.
- Capacity size: The storage capacity for each TiFlash instance.
- Uptime: The runtime of TiFlash since last restart.
- Memory: The memory usage per TiFlash instance.
- CPU Usage: The CPU utilization per TiFlash instance.
- FSync OPS: The number of fsync operations per TiFlash instance per second.
- File Open OPS: The number of `open` operations per TiFlash instance per second.
- Opened File Count: The number of file descriptors currently opened by each TiFlash instance.
Coprocessor
- Request QPS: The number of coprocessor requests received by all TiFlash instances. `batch` is the number of batch requests. `batch_cop` is the number of coprocessor requests in the batch requests. `cop` is the number of coprocessor requests sent directly via the coprocessor interface. `cop_dag` is the number of DAG requests in all coprocessor requests. `super_batch` is the number of requests with the Super Batch feature enabled. (A query sketch that reproduces this breakdown follows this list.)
- Executor QPS: The number of each type of DAG executor in the requests received by all TiFlash instances. `table_scan` is the table scan executor. `selection` is the selection executor. `aggregation` is the aggregation executor. `top_n` is the TopN executor. `limit` is the limit executor.
- Request Duration: The total duration of all TiFlash instances processing coprocessor requests, measured from the time a coprocessor request is received to the time the response to the request is completed.
- Error QPS: The number of errors that occur while all TiFlash instances process coprocessor requests. `meet_lock` means that the read data is locked. `region_not_found` means that the Region does not exist. `epoch_not_match` means that the read Region epoch is inconsistent with the local epoch. `kv_client_error` means that the communication with TiKV returns an error. `internal_error` is an internal system error of TiFlash. `other` covers other types of errors.
- Request Handle Duration: The duration of all TiFlash instances processing coprocessor requests, measured from when a coprocessor request starts to execute to when the execution completes.
- Response Bytes/Seconds: The total bytes of the response from all TiFlash instances.
- Cop task memory usage: The total memory usage of all TiFlash instances processing coprocessor requests.
- Handling Request Number: The number of coprocessor requests currently being handled by all TiFlash instances. The classification of the requests is the same as that of Request QPS.
- Threads of RPC: The real-time number of RPC threads used in each TiFlash instance.
- Max Threads of RPC: The maximum number of RPC threads recently used in each TiFlash instance.
- Threads: The real-time number of threads used in each TiFlash instance.
- Max Threads: The maximum number of threads recently used in each TiFlash instance.
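To relate the Request QPS breakdown above to raw metrics, the following sketch groups the coprocessor request counter by its request type. Both the metric name `tiflash_coprocessor_request_count` and the `type` label are assumptions; check the expression behind the Grafana panel in your deployment before relying on them.

```python
import requests

PROMETHEUS = "http://127.0.0.1:9090"  # assumption: adjust to your monitoring host

# Request rate over the last minute, split by request type
# (expected types include batch, batch_cop, cop, cop_dag, and super_batch).
QUERY = 'sum(rate(tiflash_coprocessor_request_count[1m])) by (type)'

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    request_type = series["metric"].get("type", "unknown")
    print(f"{request_type:>12}: {float(series['value'][1]):.2f} req/s")
```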
Task Scheduler
- Min TSO: The minimum TSO among all queries running on each TiFlash instance. This value ensures that queries with the minimum TSO can be scheduled to run. When no queries are running, this value is the maximum unsigned 64-bit integer.
- Estimated Thread Usage and Limit: The estimated number of threads used by all queries running on each TiFlash instance, and the soft and hard limits on that number (a simplified scheduling sketch follows this list).
- Active and Waiting Queries Count: The number of running queries and the number of waiting queries on each TiFlash instance.
- Active and Waiting Tasks Count: The number of running tasks and the number of waiting tasks on each TiFlash instance.
- Hard Limit Exceeded Count: The number of times that the estimated number of threads used by queries running on each TiFlash instance exceeds the hard limit.
- Task Waiting Duration: The duration from task initialization to task scheduling on each TiFlash instance.
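The behavior these panels describe can be summarized as a min-TSO admission rule: the query holding the minimum TSO may schedule tasks up to the hard limit so that it can always make progress, while other queries must stay under the soft limit or wait. The sketch below is a simplified illustration of that rule, not TiFlash's actual scheduler code; all names and parameters are hypothetical.

```python
MAX_UINT64 = 2**64 - 1  # value reported as Min TSO when no query is running

def can_schedule(query_tso: int, needed_threads: int,
                 running_tsos: set[int], estimated_thread_usage: int,
                 soft_limit: int, hard_limit: int) -> bool:
    """Simplified sketch of a min-TSO admission rule (illustrative only).

    The query holding the minimum TSO is allowed to schedule tasks up to the
    hard limit; all other queries must keep the estimated thread usage under
    the soft limit, otherwise they wait.
    """
    min_tso = min(running_tsos | {query_tso})
    usage_after = estimated_thread_usage + needed_threads
    if query_tso == min_tso:
        return usage_after <= hard_limit
    return usage_after <= soft_limit
```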
DDL
- Schema Version: The version of the schema currently cached in each TiFlash instance.
- Schema Apply OPM: The number of TiDB
schema diff
synchronized inapply
operations by all TiFlash instances per minute. This item includes the count of three types ofapply
:diff apply
,full apply
, andfailed apply
.diff apply
is the normal process of a single apply. Ifdiff apply
fails,failed apply
increases by1
, and TiFlash rolls back tofull apply
and pulls the latest schema information to update the schema version of TiFlash. - Schema Internal DDL OPM: The number of specific DDL operations executed per minute in all TiFlash instances.
- Schema Apply Duration: The time used for a single `apply schema` operation in all TiFlash instances.
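A simplified sketch of the `diff apply` / `full apply` flow described under Schema Apply OPM is shown below. It is illustrative only: the callables `fetch_diff`, `apply_diff`, and `full_apply` are hypothetical stand-ins for the real synchronization logic.

```python
def sync_schema(local_version: int, fetch_diff, apply_diff, full_apply, counters: dict) -> int:
    """Illustrative sketch of the schema apply flow (not TiFlash source code)."""
    try:
        diff = fetch_diff(local_version + 1)  # next schema diff from TiDB
        apply_diff(diff)                      # normal path: a single diff apply
        counters["diff_apply"] = counters.get("diff_apply", 0) + 1
        return local_version + 1
    except Exception:
        # A failed diff apply is counted, then the instance falls back to a full apply
        # that pulls the complete, latest schema from TiDB.
        counters["failed_apply"] = counters.get("failed_apply", 0) + 1
        latest_version = full_apply()
        counters["full_apply"] = counters.get("full_apply", 0) + 1
        return latest_version
```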
Storage
- Write Command OPS: The number of write requests received per second by the storage layer of all TiFlash instances.
- Write Amplification: The write amplification of each TiFlash instance (the actual bytes of disk writes divided by the written bytes of logical data). `total` is the write amplification since the last restart, and `5min` is the write amplification in the last 5 minutes (a small worked example follows this list).
- Read Tasks OPS: The number of read tasks in the storage layer per second for each TiFlash instance.
- Rough Set Filter Rate: The proportion of packets read by each TiFlash instance in the last minute that are filtered out by the rough set index of the storage layer.
- Internal Tasks OPS: The number of times that all TiFlash instances perform internal data sorting tasks per second.
- Internal Tasks Duration: The time consumed by all TiFlash instances for internal data sorting tasks.
- Page GC Tasks OPM: The number of times that all TiFlash instances perform Delta data sorting tasks per minute.
- Page GC Tasks Duration: The distribution of time consumed by all TiFlash instances to perform Delta data sorting tasks.
- Disk Write OPS: The number of disk writes per second by all TiFlash instances.
- Disk Read OPS: The number of disk reads per second by all TiFlash instances.
- Write flow: The traffic of disk writes by all TiFlash instances.
- Read flow: The traffic of disk reads by all TiFlash instances.
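The Write Amplification panel above is a plain ratio; the small worked example below mirrors its definition (actual bytes of disk writes divided by logical bytes written). The function and its inputs are purely illustrative.

```python
def write_amplification(disk_bytes_written: int, logical_bytes_written: int) -> float:
    """Write amplification = actual bytes written to disk / logical bytes written."""
    if logical_bytes_written == 0:
        return 0.0
    return disk_bytes_written / logical_bytes_written

# Example: 6 GiB hit the disk while the workload logically wrote 2 GiB,
# so the write amplification for that window is 3.0.
print(write_amplification(6 * 1024**3, 2 * 1024**3))
```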
Storage Write Stall
- Write & Delta Management Throughput: The throughput of write and data compaction for all instances. `throughput_write` means the throughput of data synchronization through Raft. `throughput_delta-management` means the throughput of data compaction. `total_write` means the total bytes written since the last start. `total_delta-management` means the total bytes of data compacted since the last start.
- Write Stall Duration: The write stall duration and the stall duration of removing Region data (deleting ranges), by instance.
- Write Throughput By Instance: The write throughput of each instance, including the throughput from applying Raft write commands and Raft snapshots.
- Write Command OPS By Instance: The total count of different kinds of commands received by instance. `write block` means the data logs synchronized through Raft. `delete_range` means that some Regions are removed from or moved to this instance. `ingest` means that some Region snapshots are applied to this instance.
Raft
- Read Index OPS: The number of times that each TiFlash instance triggers a `read_index` request per second, which equals the number of Regions triggered.
- Read Index Duration: The time used by `read_index` for all TiFlash instances. Most of the time is spent on interaction with the Region leader and on retries.
- Wait Index Duration: The time used by `wait_index` for all TiFlash instances, namely the time spent waiting until local index >= read_index after the `read_index` request is received (see the sketch below).
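As a rough illustration of what Wait Index Duration measures, the sketch below waits until the local applied index catches up with `read_index` and reports the time spent. It is a simplified model, not the TiFlash implementation; `get_local_applied_index` is a hypothetical callable.

```python
import time

def wait_index(get_local_applied_index, read_index: int, poll_interval: float = 0.01) -> float:
    """Simplified model of the wait measured by Wait Index Duration (illustrative only).

    After a read_index request returns, the local replica must wait until its
    applied Raft log index reaches read_index before it can serve the read.
    Returns the number of seconds spent waiting.
    """
    start = time.monotonic()
    while get_local_applied_index() < read_index:
        time.sleep(poll_interval)  # real implementations are notified rather than polling
    return time.monotonic() - start
```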