
Software and Hardware Recommendations

As an open-source distributed NewSQL database with high performance, TiDB can be deployed and runs well on Intel architecture servers, ARM architecture servers, and major virtualization environments. TiDB supports most of the major hardware networks and Linux operating systems.

OS and platform requirements

For v6.1.1 and later v6.1.x versions

Starting from v6.1.1, TiDB provides multi-level support for different quality standards on the combination of operating systems and CPU architectures.

  • For the following combinations of operating systems and CPU architectures, TiDB provides enterprise-level production quality, and the product features have been comprehensively and systematically verified:

    Operating systems | Supported CPU architectures
    Red Hat Enterprise Linux 8.4 or a later 8.x version; CentOS 8.4 or a later 8.x version | x86_64, ARM 64
    Red Hat Enterprise Linux 7.3 or a later 7.x version; CentOS 7.3 or a later 7.x version | x86_64, ARM 64
    Kylin Euler V10 SP1/SP2 | x86_64, ARM 64
    UOS V20 | x86_64, ARM 64
  • For the following combinations of operating systems and CPU architectures, you can compile, build, and deploy TiDB. In addition, you can also use the basic features of OLTP, OLAP, and the data tools. However, TiDB does not guarantee enterprise-level production quality:

    Operating systems | Supported CPU architectures
    macOS Catalina or later | x86_64, ARM 64
    Oracle Enterprise Linux 7.3 or a later 7.x version | x86_64
    Ubuntu LTS 18.04 or later | x86_64
    Debian 9 (Stretch) or later | x86_64
    Fedora 35 or later | x86_64
    openSUSE Leap later than v15.3 (not including Tumbleweed) | x86_64
    SUSE Linux Enterprise Server 15 | x86_64
  • If you are using a 32-bit version of any operating system listed in the preceding two tables, TiDB is not guaranteed to compile, build, or deploy on that operating system and the corresponding CPU architecture; TiDB does not actively adapt to 32-bit operating systems.

  • Other operating system versions not mentioned above might work but are not officially supported.

For v6.1.0

In v6.1.0, TiDB supports the following Linux operating systems:

Linux OS | Version
Red Hat Enterprise Linux | 7.3 or later 7.x versions
CentOS | 7.3 or later 7.x versions
Oracle Enterprise Linux | 7.3 or later 7.x versions
Amazon Linux | 2
Ubuntu LTS | 16.04 or later

Other Linux OS versions such as Debian Linux and Fedora Linux might work but are not officially supported.

Libraries required for compiling and running TiDB

Libraries required for compiling and building TiDB | Version
Golang | 1.18.5 or later
Rust | nightly-2022-07-31 or later
GCC | 7.x
LLVM | 13.0 or later

Library required for running TiDB: glibc (2.28-151.el8 version)
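To sanity-check a build machine against these versions, you could use a small script like the following sketch (not part of the TiDB toolchain; it assumes go, rustc, gcc, and llvm-config are on PATH and that Python runs on a glibc-based Linux host):

```python
#!/usr/bin/env python3
"""Minimal sketch: print toolchain and glibc versions to compare with the tables above."""
import os
import shutil
import subprocess

# Minimum versions come from this section; the version-printing commands are assumptions.
TOOLS = {
    "go": (["go", "version"], "1.18.5 or later"),
    "rustc": (["rustc", "--version"], "nightly-2022-07-31 or later"),
    "gcc": (["gcc", "--version"], "7.x"),
    "llvm-config": (["llvm-config", "--version"], "13.0 or later"),
}

for tool, (cmd, required) in TOOLS.items():
    if shutil.which(tool) is None:
        print(f"{tool}: NOT FOUND (required: {required})")
        continue
    out = subprocess.run(cmd, capture_output=True, text=True, check=False)
    print(f"{tool}: {out.stdout.splitlines()[0]} (required: {required})")

# glibc is the runtime dependency listed above; os.confstr works on glibc-based Linux.
print("glibc:", os.confstr("CS_GNU_LIBC_VERSION"), "(required: 2.28-151.el8)")
```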

Software recommendations

Control machine

Software | Version
sshpass | 1.06 or later
TiUP | 1.5.0 or later

Target machines

Software | Version
sshpass | 1.06 or later
numa | 2.0.12 or later
tar | any

Server recommendations

You can deploy and run TiDB on 64-bit generic hardware server platforms with the Intel x86-64 architecture or the ARM architecture. The requirements and recommendations for server hardware configuration (ignoring the resources occupied by the operating system itself) in development, test, and production environments are as follows:

Development and test environments

Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement)
TiDB | 8 core+ | 16 GB+ | No special requirements | Gigabit network card | 1 (can be deployed on the same machine with PD)
PD | 4 core+ | 8 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB)
TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3
TiFlash | 32 core+ | 64 GB+ | SSD, 200 GB+ | Gigabit network card | 1
TiCDC | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1

Production environment

Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement)
TiDB | 16 core+ | 48 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2
PD | 8 core+ | 16 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3
TiKV | 16 core+ | 64 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3
TiFlash | 48 core+ | 128 GB+ | 1 or more SSDs | 10 Gigabit network card (2 preferred) | 2
TiCDC | 16 core+ | 64 GB+ | SSD | 10 Gigabit network card (2 preferred) | 2
Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1

Before you deploy TiFlash, note the following items:

  • TiFlash can be deployed on multiple disks.
  • It is recommended to use a high-performance SSD, such as a PCIe SSD, as the first disk of the TiFlash data directory to buffer the real-time replication of TiKV data. The performance of this disk should not be lower than that of TiKV. The capacity of this disk should be no less than 10% of the total capacity of the node; otherwise, it might become the bottleneck of this node. You can use ordinary SSDs for the other disks, but note that a better PCIe SSD brings better performance.
  • It is recommended to deploy TiFlash on different nodes from TiKV. If you must deploy TiFlash and TiKV on the same node, increase the number of CPU cores and the memory, and try to deploy TiFlash and TiKV on different disks to avoid interfering with each other.
  • The total capacity of the TiFlash disks is calculated as follows: the data volume of the entire TiKV cluster to be replicated / the number of TiKV replicas * the number of TiFlash replicas. For example, if the overall planned capacity of TiKV is 1 TB, the number of TiKV replicas is 3, and the number of TiFlash replicas is 2, then the recommended total capacity of TiFlash is 1024 GB / 3 * 2 ≈ 683 GB (see the sketch after this list). If you replicate only the data of some tables, determine the TiFlash capacity according to the data volume of the tables to be replicated.
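The following is a minimal sketch of the capacity formula above; the function name and the numbers in the second example are illustrative, not from TiDB:

```python
def recommended_tiflash_capacity_gb(replicated_tikv_data_gb: float,
                                    tikv_replicas: int,
                                    tiflash_replicas: int) -> float:
    """Total TiFlash capacity = data volume to be replicated / TiKV replicas * TiFlash replicas."""
    return replicated_tikv_data_gb / tikv_replicas * tiflash_replicas

# Example from the text: 1 TB (1024 GB) of TiKV data, 3 TiKV replicas, 2 TiFlash replicas.
print(f"{recommended_tiflash_capacity_gb(1024, 3, 2):.0f} GB")  # ~683 GB

# If only some tables are replicated, use the data volume of those tables instead
# (200 GB here is a hypothetical figure).
print(f"{recommended_tiflash_capacity_gb(200, 3, 2):.0f} GB")   # ~133 GB
```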

Before you deploy TiCDC, note that it is recommended to deploy TiCDC on PCIe-SSD disks larger than 1 TB.

Network requirements

As an open source distributed NewSQL database, TiDB requires the following network port configuration to run. Based on the actual deployment environment, the administrator can open the relevant ports on the network side and on the hosts.

Component | Default Port | Description
TiDB | 4000 | the communication port for the application and DBA tools
TiDB | 10080 | the communication port to report TiDB status
TiKV | 20160 | the TiKV communication port
TiKV | 20180 | the communication port to report TiKV status
PD | 2379 | the communication port between TiDB and PD
PD | 2380 | the inter-node communication port within the PD cluster
TiFlash | 9000 | the TiFlash TCP service port
TiFlash | 8123 | the TiFlash HTTP service port
TiFlash | 3930 | the TiFlash RAFT and Coprocessor service port
TiFlash | 20170 | the TiFlash Proxy service port
TiFlash | 20292 | the port for Prometheus to pull TiFlash Proxy metrics
TiFlash | 8234 | the port for Prometheus to pull TiFlash metrics
Pump | 8250 | the Pump communication port
Drainer | 8249 | the Drainer communication port
TiCDC | 8300 | the TiCDC communication port
Monitoring | 9090 | the communication port for the Prometheus service
Monitoring | 20120 | the communication port for the NgMonitoring service
Node_exporter | 9100 | the communication port to report the system information of every TiDB cluster node
Blackbox_exporter | 9115 | the Blackbox_exporter communication port, used to monitor the ports in the TiDB cluster
Grafana | 3000 | the port for the external Web monitoring service and client (browser) access
Alertmanager | 9093 | the port for the alert web service
Alertmanager | 9094 | the alert communication port
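Before opening firewall rules, you might want to verify which of the default ports above are reachable on a target machine. The following is a minimal sketch (not a TiDB or TiUP tool); the host address and the port selection are placeholders to adapt to the components you actually deploy:

```python
#!/usr/bin/env python3
"""Minimal sketch: probe a subset of the default TiDB component ports on one host."""
import socket

# Default ports taken from the table above; include only the components you deploy.
DEFAULT_PORTS = {
    "TiDB (SQL)": 4000,
    "TiDB status": 10080,
    "TiKV": 20160,
    "TiKV status": 20180,
    "PD client": 2379,
    "PD peer": 2380,
    "Prometheus": 9090,
    "Grafana": 3000,
}

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "192.0.2.10"  # placeholder address of a target machine
    for name, port in DEFAULT_PORTS.items():
        state = "open" if is_open(host, port) else "closed/filtered"
        print(f"{name:<12} {port:>5}  {state}")
```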

Disk space requirements

Component | Disk space requirement | Healthy disk usage
TiDB | At least 30 GB for the log disk | Lower than 90%
PD | At least 20 GB each for the data disk and the log disk | Lower than 90%
TiKV | At least 100 GB each for the data disk and the log disk | Lower than 80%
TiFlash | At least 100 GB for the data disk and at least 30 GB for the log disk | Lower than 80%

TiUP (healthy disk usage: N/A)
  • Control machine: no more than 1 GB is required for deploying a TiDB cluster of a single version. The space required increases if TiDB clusters of multiple versions are deployed.
  • Deployment servers (machines where the TiDB components run): TiFlash occupies about 700 MB and other components (such as PD, TiDB, and TiKV) occupy about 200 MB each. During the cluster deployment process, the TiUP cluster requires less than 1 MB of temporary space (/tmp directory) to store temporary files.

NgMonitoring (healthy disk usage: N/A; a worked example follows this table)
  • Conprof: 3 x 1 GB x number of components (each component occupies about 1 GB per day, 3 days in total) + 20 GB reserved space
  • Top SQL: 30 x 50 MB x number of components (each component occupies about 50 MB per day, 30 days in total)
  • Conprof and Top SQL share the reserved space
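As a worked example of the NgMonitoring sizing rules above, the following sketch (the names are illustrative, not from TiDB) estimates the required space for a given number of monitored components:

```python
# Sizing figures from the table above.
CONPROF_GB_PER_COMPONENT_PER_DAY = 1    # each component occupies about 1 GB per day
CONPROF_RETENTION_DAYS = 3              # kept for 3 days
TOPSQL_MB_PER_COMPONENT_PER_DAY = 50    # each component occupies about 50 MB per day
TOPSQL_RETENTION_DAYS = 30              # kept for 30 days
RESERVED_GB = 20                        # reserved space shared by Conprof and Top SQL

def ngmonitoring_disk_gb(num_components: int) -> float:
    """Estimated NgMonitoring disk usage in GB for the given number of components."""
    conprof = CONPROF_RETENTION_DAYS * CONPROF_GB_PER_COMPONENT_PER_DAY * num_components
    topsql = TOPSQL_RETENTION_DAYS * TOPSQL_MB_PER_COMPONENT_PER_DAY * num_components / 1024
    return conprof + topsql + RESERVED_GB

# Example: a cluster with 8 monitored components (a hypothetical count).
print(f"{ngmonitoring_disk_gb(8):.1f} GB")  # 3*1*8 + 30*50*8/1024 + 20 ≈ 55.7 GB
```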

Web browser requirements

TiDB relies on Grafana to provide visualization of database metrics. A recent version of Internet Explorer, Chrome, or Firefox with JavaScript enabled is sufficient.
