This document lists the Most Frequently Asked Questions about TiDB.
TiDB is a distributed SQL database that features horizontal scalability, high availability, and consistent distributed transactions. It also enables you to use MySQL's SQL syntax and protocol to manage and retrieve data.
The TiDB cluster has three components: the TiDB server, the PD (Placement Driver) server, and the TiKV server. For more details, see TiDB architecture.
No. TiDB supports MySQL syntax and protocol, but it is a new open source database that is developed and maintained by PingCAP, Inc.
Yes, it is. When all the required services are started, you can use TiDB as easily as a MySQL server. You can replace MySQL with TiDB to power your applications without changing a single line of code in most cases. You can also manage TiDB using the popular MySQL management tools.
Currently, TiDB supports the majority of MySQL 5.7 syntax, but does not support triggers, stored procedures, user-defined functions, or foreign keys. For more details, see Compatibility with MySQL.
TiDB is self-healing. All of the three components, TiDB, TiKV and PD, can tolerate failures of some of their instances. With its strong consistency guarantee, whether it is machine failures or even downtime of an entire data center, your data can be recovered automatically. For more information, see TiDB architecture.
TiDB uses the Raft consensus algorithm to ensure consistency among multiple replicas. At the bottom layer, TiDB uses a model of replication log + State Machine to replicate data. For write requests, the data is written to a Leader and the Leader then replicates the command to its Followers in the form of a log. When the majority of nodes in the cluster receive this log, the log is committed and can be applied to the State Machine. TiDB has the latest data even if a minority of the replicas are lost.
Yes. The transaction model in TiDB is inspired by Google's Percolator, a paper published in 2010. It is mainly a two-phase commit protocol with some practical optimizations. This model relies on a timestamp allocator to assign monotonically increasing timestamps to each transaction, so that conflicts can be detected. PD works as the timestamp allocator in a TiDB cluster.
Any language supported by MySQL client or driver.
Yes. TiKV and TiDB support many popular standalone storage engines, such as GolevelDB and BoltDB. If the storage engine is a KV engine that supports transactions and it provides a client that meets the interface requirement of TiDB, then it can connect to TiDB.
The architecture of TiDB guarantees that it fully supports geo-distribution and multi-activeness. Your data and applications are always on. All outages are transparent to your applications, and your data can recover automatically. The operation depends on the network latency and stability; it is recommended to keep the latency within 5 ms. Currently, we already have similar use cases. For details, contact info@pingcap.com.
Currently, TiDB documentation is the most important and timely way to get knowledge of TiDB. In addition, we also have some technical communication groups. If you have any needs, contact info@pingcap.com.
See The System Variables.
Does TiDB support `select for update`? Yes, but it differs from MySQL in behavior. As a distributed database, TiDB uses optimistic locking: `select for update` does not lock data when the transaction is started, but checks conflicts when the transaction is committed. If the check reveals a conflict, the committing transaction rolls back.
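A minimal sketch of this optimistic behavior (the table `t` and all values are hypothetical):

```sql
create table t (id int primary key, balance int);
insert into t values (1, 100);

-- Session A
begin;
select * from t where id = 1 for update;  -- in TiDB, no lock is taken at this point
update t set balance = balance - 10 where id = 1;

-- Meanwhile, session B commits a conflicting write:
--   update t set balance = balance - 20 where id = 1;

-- Session A
commit;  -- the conflict is detected here; session A's transaction rolls back with an error
```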
TiDB uses the UTF-8 character set by default and currently only supports UTF-8. Strings in TiDB use the memcomparable format.
32 characters at most.
5000 at most.
No. The JDBC driver of TiDB is MySQL JDBC (Connector/J). When using Atomikos, set the data source to `type="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource"`. TiDB does not support the connection with MySQL JDBC XADataSource. MySQL JDBC XADataSource only works for MySQL (for example, using DML to modify the redo log).
After you configure the two data sources of Atomikos, set the JDBC drivers to XA. When Atomikos operates TM and RM (DB), Atomikos sends the command including XA to the JDBC layer. Taking MySQL as an example, when XA is enabled in the JDBC layer, JDBC sends a series of XA logic operations to InnoDB, including using DML to change the redo log. This is the operation of the two-phase commit. The current TiDB version does not support the upper application layer JTA/XA and does not parse XA operations sent by Atomikos.
As a standalone database, MySQL can only implement cross-database transactions using XA, while TiDB supports distributed transactions using the Google Percolator transaction model, with higher performance and stability than XA. Therefore, TiDB does not support XA, and there is no need for TiDB to support it.
Does `show processlist` display the system process ID? The display content of TiDB `show processlist` is almost the same as that of MySQL `show processlist`. TiDB `show processlist` does not display the system process ID; the ID that it displays is the current session ID. The differences between TiDB `show processlist` and MySQL `show processlist` are as follows:
- The `tidb-server` instance is a stateless engine for parsing and executing SQL statements (for details, see TiDB architecture). `show processlist` displays the session list executed in the `tidb-server` instance that the user logs in to from the MySQL client, not the list of all the sessions running in the cluster; but MySQL is a standalone database and its `show processlist` displays all the SQL statements executed in MySQL.
- TiDB `show processlist` displays the estimated memory usage (unit: Byte) of the current session, which is not displayed in MySQL `show processlist`.
To modify the user password in TiDB, it is recommended to use `set password for 'root'@'%' = '0101001';` or `alter user`, not `update mysql.user`, which might lead to the password in other nodes not being refreshed in time.
It is recommended to use the official standard statements when modifying the user password and privilege. For details, see TiDB user account management.
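For example, the recommended statements look like this (the user, host, and password values are illustrative):

```sql
-- Recommended: standard statements that take effect across all tidb-server instances
set password for 'root'@'%' = '0101001';
-- or
alter user 'root'@'%' identified by '0101001';

-- Not recommended: writing mysql.user directly, which may leave other nodes
-- serving the stale password until their privilege data is reloaded
-- update mysql.user set Password = ... where User = 'root';
```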
The auto-increment ID feature in TiDB is only guaranteed to be automatically incremental and unique, but is not guaranteed to be allocated sequentially. Currently, TiDB allocates IDs in batches. If data is inserted into multiple TiDB servers simultaneously, the allocated IDs are not sequential. When multiple threads concurrently insert data into multiple `tidb-server` instances, the auto-increment ID of the later inserted data may be smaller. TiDB allows specifying `AUTO_INCREMENT` for an integer field, but allows only one `AUTO_INCREMENT` field in a single table. For details, see DDL.
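A small sketch of the allocation behavior described above (the table and batch boundaries are hypothetical):

```sql
create table orders (
    id bigint auto_increment primary key,  -- only one AUTO_INCREMENT column per table
    note varchar(64)
);

-- Inserts through one tidb-server instance draw IDs from its pre-allocated batch:
insert into orders (note) values ('a'), ('b');  -- for example, IDs 1, 2
-- Inserts through another tidb-server instance may draw from a different batch,
-- so IDs such as 30001, 30002 can appear earlier in time than 3, 4:
-- unique and increasing per instance, but not globally sequential.
```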
How to configure `sql_mode` in TiDB except using the `set` command? The configuration method of TiDB `sql_mode` is different from that of MySQL `sql_mode`. TiDB does not support using the configuration file to configure the `sql_mode` of the database; it only supports using the `set` command. You can use `set @@global.sql_mode = 'STRICT_TRANS_TABLES';` to configure it.
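Following the answer above, setting and then verifying the value looks like this:

```sql
-- Configure sql_mode at runtime (configuration files are not supported for this)
set @@global.sql_mode = 'STRICT_TRANS_TABLES';
-- Verify the current value
select @@global.sql_mode;
```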
Like MySQL, TiDB supports the SASL protocol for user login authentication and password processing.
When the client connects to TiDB, the challenge-response authentication mode starts. The process is as follows:
See TiDB Internal (I) - Data Storage.
See TiDB Internal (II) - Computing.
See TiDB Internal (III) - Scheduling.
Linux OS Platform | Version |
---|---|
Red Hat Enterprise Linux | 7.3 or later |
CentOS | 7.3 or later |
Oracle Enterprise Linux | 7.3 or later |
As an open source distributed NewSQL database with high performance, TiDB can be deployed on Intel architecture servers and in major virtualization environments, and it runs well. TiDB supports most major hardware networks and Linux operating systems. For details, see Software and Hardware Requirements for deploying TiDB.
You can deploy and run TiDB on the 64-bit generic hardware server platform in the Intel x86-64 architecture. The requirements and recommendations about server hardware configuration for development, testing and production environments are as follows:
Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement) |
---|---|---|---|---|---|
TiDB | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with PD) |
PD | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) |
TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 |
Total Server Number | 4 |
Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement) |
---|---|---|---|---|---|
TiDB | 16 core+ | 48 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2 |
PD | 8 core+ | 16 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 |
TiKV | 16 core+ | 48 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 |
Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 |
Total Server Number | 9 |
As a distributed cluster, TiDB has a high demand on time synchronization, especially for PD, because PD needs to distribute unique timestamps. If the time in the PD servers is not consistent, the waiting time is longer when switching the PD server. A bond of two network cards guarantees the stability of data transmission, and 10 gigabit guarantees the transmission speed. Gigabit network cards are prone to bottlenecks, so it is strongly recommended to use 10 gigabit network cards.
If the resources are adequate, it is recommended to use RAID 10 for SSD. If the resources are inadequate, it is acceptable not to use RAID for SSD.
For details, see TiDB software and hardware requirements.
See Ansible Deployment.
Why does the `toml` configuration for TiKV/PD not take effect? You need to set the `--config` parameter in TiKV/PD to make the `toml` configuration effective; TiKV/PD does not read the configuration file by default. Currently, this issue only occurs when deploying using Binary. For TiKV, edit the configuration and restart the service. For PD, the configuration file is only read when PD is started for the first time, after which you can modify the configuration using pd-ctl. For details, see the PD Control User Guide.
It is recommended to deploy the monitoring machine standalone, using an 8-core CPU with 16 GB+ memory and a 500 GB+ hard disk.
Check the time difference between the machine time of the monitor and the time within the cluster. If it is large, you can correct the time and the monitor will display all the metrics.
Variable | Description |
---|---|
cluster_name | the name of a cluster, adjustable |
tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches |
deployment_method | the method of deployment, binary by default, Docker optional |
process_supervision | the supervision way of processes, systemd by default, supervise optional |
timezone | the timezone of the managed node, adjustable, Asia/Shanghai by default, used with the set_timezone variable |
set_timezone | to edit the timezone of the managed node, True by default; False means disabled |
enable_elk | currently not supported |
enable_firewalld | to enable the firewall, disabled by default |
enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it |
machine_benchmark | to monitor the disk IOPS of the managed node, True by default; do not close it |
set_hostname | to edit the hostname of the managed node based on the IP, False by default |
enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the zookeeper_addrs variable |
zookeeper_addrs | the ZooKeeper address of the binlog Kafka cluster |
enable_slow_query_log | to record the slow query log of TiDB into a single file: ({{ deploy_dir }}/log/tidb_slow_query.log). False by default, to record it into the TiDB log |
deploy_without_tidb | the Key-Value mode, deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the tidb_servers host group to null in the inventory.ini file |
Deploying TiDB with Internet access is recommended; however, if the Control Machine has no access to the external network, you can deploy TiDB offline using Ansible. For details, see Offline Deployment Using Ansible.
You can use Docker Compose to build a TiDB cluster locally, including the cluster monitoring components. You can also customize the version and number of instances for each component, and the configuration file can be customized as well. Use this deployment method for testing and development environments only. For details, see Building the Cluster Using Docker Compose.
The slow query definition for TiDB is in the `conf/tidb.yml` configuration file of `tidb-ansible`. The `slow-threshold: 300` parameter is used to configure the threshold value of the slow query (unit: millisecond).

The slow query log is recorded in `tidb.log` by default. If you want to generate a slow query log file separately, set `enable_slow_query_log` in the `inventory.ini` configuration file to `True`.
Then run `ansible-playbook rolling_update.yml --tags=tidb` to perform a rolling update on the `tidb-server` instances. After the update is finished, the `tidb-server` instances record the slow query log in `tidb_slow_query.log`.

If a slow query occurs, you can locate the `tidb-server` instance where the slow query is and the slow query time point using Grafana, and then find the SQL statement information recorded in the log on the corresponding node.
How to add the `label` configuration if the `label` of TiKV was not configured when I deployed the TiDB cluster for the first time? The `label` configuration of TiDB is related to the cluster deployment architecture. It is important and is the basis for PD to execute global management and scheduling. If you did not configure `label` when deploying the cluster previously, you should adjust the deployment structure by manually adding the `location-labels` information using the PD management tool `pd-ctl`, for example, `config set location-labels "zone, rack, host"` (you should configure it based on the practical `label` level name).

For the usage of `pd-ctl`, see PD Control Instruction.
Why does the `dd` command for the disk test use the `oflag=direct` option? The Direct mode wraps the write request into an I/O command and sends this command to the disk, bypassing the file system cache and directly testing the real I/O read/write performance of the disk.
How to use the `fio` command to test the disk performance of the TiKV instance?

Random read test:

```shell
./fio -ioengine=libaio -bs=32k -direct=1 -thread -rw=randread -size=10G -filename=fio_randread_test.txt -name='PingCAP' -iodepth=4 -runtime=60
```

The mixed test of sequential write and random read:

```shell
./fio -ioengine=libaio -bs=32k -direct=1 -thread -rw=randrw -percentage_random=100,0 -size=10G -filename=fio_randr_write_test.txt -name='PingCAP' -iodepth=4 -runtime=60
```
The `UNREACHABLE! "msg": "Failed to connect to the host via ssh: "` error is displayed when deploying TiDB using TiDB-Ansible. Two possible reasons and solutions:

- The SSH mutual trust is not configured as required. It is recommended to follow the steps in the deployment document and verify the configuration using `ansible -i inventory.ini all -m shell -a 'whoami' -b`.
- If a single server is assigned multiple roles, the SSH reuse mechanism might cause this error. You can use the `ansible … -f 1` option to avoid it.

Apply rolling updates to the TiKV node (only update the TiKV service):

```shell
ansible-playbook rolling_update.yml --tags=tikv
```

Apply rolling updates to all services:

```shell
ansible-playbook rolling_update.yml
```
When you apply rolling updates to the TiDB services, the running application is not affected. You need to configure the minimum cluster topology (TiDB * 2, PD * 3, TiKV * 3). If the Pump/Drainer service is involved in the cluster, it is recommended to stop Drainer before rolling updates. When you update TiDB, Pump is also updated.
It is not recommended to deploy TiDB using Binary, because upgrading a Binary deployment is not as friendly as upgrading an Ansible deployment. It is recommended to deploy TiDB using Ansible.
Generally you should upgrade all components, because the whole release is tested together. Upgrade a single component only when an urgent issue occurs and you need to upgrade that component alone.
Possible reasons:

- You ran `rolling_update.yml` to upgrade the cluster while the cluster was stopped (operation error).

Solution:
Job | Playbook |
---|---|
Start the cluster | ansible-playbook start.yml |
Stop the cluster | ansible-playbook stop.yml |
Destroy the cluster | ansible-playbook unsafe_cleanup.yml (If the deployment directory is a mount point, an error will be reported, but implementation results will remain unaffected) |
Clean data (for test) | ansible-playbook unsafe_cleanup_data.yml |
Apply rolling updates | ansible-playbook rolling_update.yml |
Apply rolling updates to TiKV | ansible-playbook rolling_update.yml --tags=tikv |
Apply rolling updates to components except PD | ansible-playbook rolling_update.yml --skip-tags=pd |
Apply rolling updates to the monitoring components | ansible-playbook rolling_update_monitor.yml |
You can log into TiDB like logging into MySQL. For example:
```shell
mysql -h 127.0.0.1 -uroot -P4000
```
Similar to MySQL, TiDB includes static and solid parameters. You can directly modify static parameters using `set global xxx = n`, but the new value of a parameter is only effective within the life cycle of this instance.
TiDB data directories are in `${data-dir}/data/` by default (see [data-dir](https://pingcap.com/docs-cn/op-guide/configuration/#data-dir-1)), which includes four directories: backup, db, raft, and snap, used to store backup data, data, Raft data, and mirror data respectively.
Similar to MySQL, TiDB includes system tables as well, used to store the information required by the server when it runs.
By default, TiDB/PD/TiKV outputs logs to standard error. If a log file is specified by `--log-file` during startup, the log is output to the specified file and rotated daily.
If the cluster is deployed using Ansible, you can use the `ansible-playbook stop.yml` command to stop the TiDB cluster. If the cluster is not deployed using Ansible, `kill` all the services directly. The components of TiDB will do a graceful shutdown.
Can `kill` be executed in TiDB?

- `kill` DML statements: first use `show processlist` to find the ID corresponding to the session, and then run `kill tidb [session id]`.
- `kill` DDL statements: first use `admin show ddl jobs` to find the ID of the DDL job you need to kill, and then run `admin cancel ddl jobs 'job_id' [, 'job_id'] ...`. For more details, see the `ADMIN` statement.

Currently, TiDB does not support session timeout at the database level. If you want to implement a session timeout, in the absence of LB (Load Balancing), use the session ID recorded on the application side, customize the session timeout in the application, and after the timeout kill SQL using `kill tidb [session id]` on the node that started the query. It is currently recommended to implement the session timeout in the application: when the timeout is reached, the application layer reports an exception and continues to execute subsequent program segments.
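As a quick illustration of both paths (the session ID 4 and job ID 51 are hypothetical values taken from the respective listings):

```sql
-- Kill a running DML statement: look up its session ID first
show processlist;
kill tidb 4;  -- 4 is a hypothetical session ID

-- Cancel a running DDL statement: look up its job ID first
admin show ddl jobs;
admin cancel ddl jobs 51;  -- 51 is a hypothetical DDL job ID
```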
Currently, TiDB has a standard management of various versions. Each release contains a detailed change log and release notes. Whether it is necessary to upgrade in the production environment depends on the application system. It is recommended to learn the details about the functional differences between the previous and later versions before upgrading.
Take `Release Version: v1.0.3-1-ga80e796` as an example of the version number description:

- `v1.0.3` indicates the standard GA version.
- `-1` indicates the current version has one commit.
- `ga80e796` indicates the version `git-hash`.

The TiDB community is highly active. After the 1.0 GA release, the engineers have kept optimizing and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see TiDB Weekly update.
It is recommended to deploy the TiDB cluster using the latest version of TiDB-Ansible, which will also be updated along with the TiDB version. TiDB has a unified management of the version number after the 1.0 GA release. You can view the version number using the following two methods:
- `select tidb_version()`
- `tidb-server -V`
Currently no.
As your business grows, your database might face the following three bottlenecks:
- Lack of storage resources, which means that the disk space is not enough.
- Lack of computing resources, such as high CPU occupancy.
- Not enough write and read capacity.
You can scale TiDB as your business grows.
If the disk space is not enough, you can increase the capacity simply by adding more TiKV nodes. When the new node is started, PD will migrate the data from other nodes to the new node automatically.
If the computing resources are not enough, check the CPU consumption situation first before adding more TiDB nodes or TiKV nodes. When a TiDB node is added, you can configure it in the Load Balancer.
If the capacity is not enough, you can add both TiDB nodes and TiKV nodes.
Not really. We need some good features of gRPC, such as flow control, encryption and streaming.
What does the 92 indicate in `like(bindo.customers.name, jason%, 92)`? The 92 indicates the escape character, which is ASCII 92 (the backslash) by default.
The `TiKV cluster is not bootstrapped` message is displayed when I access PD. Most of the APIs of PD are available only when the TiKV cluster is initialized. This message is displayed if PD is accessed when PD is started but TiKV is not started in a newly deployed cluster. If this message is displayed, start the TiKV cluster. When TiKV is initialized, PD is accessible.
The `etcd cluster ID mismatch` message is displayed when starting PD. This is because the `--initial-cluster` in the PD startup parameters contains a member that does not belong to this cluster. To solve this problem, check the corresponding cluster of each member, remove the wrong member, and then restart PD.
Theoretically, the smaller the tolerance, the better. During leader changes, if the clock goes back, the process won't proceed until it catches up with the previous leader. PD can tolerate any synchronization error, but a larger error value means a longer period of service stop during the leader change.
The client connection can only access the cluster through TiDB. TiDB connects PD and TiKV. PD and TiKV are transparent to the client. When TiDB connects to any PD, the PD tells TiDB who is the current leader. If this PD is not the leader, TiDB reconnects to the leader PD.
What are the `leader-schedule-limit` and `region-schedule-limit` scheduling parameters in PD?

- The `leader-schedule-limit` scheduling parameter is used to balance the Leader number of different TiKV servers, affecting the load of query processing.
- The `region-schedule-limit` scheduling parameter is used to balance the replica number of different TiKV servers, affecting the data amount of different nodes.
scheduling parameter is used to balance the replica number of different TiKV servers, affecting the data amount of different nodes.Yes. Currently, you can only update the global number of replicas. When started for the first time, PD reads the configuration file (conf/pd.yml) and uses the max-replicas configuration in it. If you want to update the number later, use the pd-ctl configuration command config set max-replicas $num
and view the enabled configuration using config show all
. The updating does not affect the applications and is configured in the background.
Make sure that the total number of TiKV instances is always greater than or equal to the number of replicas you set. For example, 3 replicas need 3 TiKV instances at least. Additional storage requirements need to be estimated before increasing the number of replicas. For more information about pd-ctl, see PD Control User Guide.
You can determine the general status of the cluster using the pd-ctl tool. For detailed cluster status, you need to use the monitoring system.
The offline node usually indicates the TiKV node. You can determine whether the offline process is finished using pd-ctl or the monitoring system. After the node goes offline, perform the following steps:

- Remove the `node_exporter` data of the corresponding node from the Prometheus configuration file.
- Remove the data of the corresponding node from Ansible `inventory.ini`.

Why couldn't I connect to the PD server using `127.0.0.1` when I was using PD Control? If your TiDB cluster is deployed using TiDB-Ansible, the PD external service port is not bound to `127.0.0.1`, so PD Control does not recognize `127.0.0.1` and you can only connect to it using the local IP address.
What is the `lease` parameter in TiDB? The lease parameter (`--lease=60`) is set from the command line when starting a TiDB server. The value of the lease parameter impacts the Database Schema Changes (DDL) speed of the current session. In testing environments, you can set the value to 1s to speed up the testing cycle. But in production environments, it is recommended to set the value to minutes (for example, 60) to ensure DDL safety.
The processing time is different for different scenarios. Generally, you can consider the following three scenarios:

1. The `Add Index` operation with a relatively small number of rows in the corresponding data table: about 3s
2. The `Add Index` operation with a relatively large number of rows in the corresponding data table: the processing time depends on the specific number of rows and the QPS at that time (the `Add Index` operation has a lower priority than ordinary SQL operations)
3. All other DDL operations: about 1s

If the TiDB server instance that receives the DDL request is the same TiDB server instance that the DDL owner is in, the first and third scenarios above may cost only dozens to hundreds of milliseconds.
Possible reasons:

- The service was stopped using the `kill -9` command. If you run DDL statements during this period, for the state change of each DDL, you need to wait for 2 * lease (lease = 45s).

No. Currently, TiDB only supports the distributed storage engine and the Goleveldb/Rocksdb/Boltdb engine.
Can `Information_schema` support more real information? The tables in `Information_schema` exist mainly for compatibility with MySQL, and some third-party software queries information in these tables. Currently, most of those tables are null. More parameter information is to be included in these tables as TiDB is updated.

For the `Information_schema` that TiDB currently supports, see The TiDB System Database.
In the communication process between the TiDB server and the TiKV server, the `Server is busy` or `backoff.maxsleep 20000ms` log message is displayed when processing a large volume of data. This is because the TiKV server is busy while processing data; at this time, usually the TiKV host resource usage rate is high. If this occurs, you can increase the server capacity according to the resource usage.
The current TiDB version has no limit on the maximum number of concurrent connections. If an excessively large concurrency leads to an increase in response time, you can increase the capacity by adding TiDB nodes.
The `create_time` of tables in `information_schema` is the creation time.
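For example (the table name is hypothetical):

```sql
select create_time from information_schema.tables where table_name = 't';
```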
What does `EXPENSIVE_QUERY` mean in the TiDB log? When TiDB is executing a SQL statement, the query is marked as `EXPENSIVE_QUERY` if each operator is estimated to process over 10000 rows. You can modify the `tidb-server` configuration parameter to adjust the threshold and then restart the `tidb-server`.
TiDB supports changing the priority on a per-session, global or individual statement basis. Priority has the following meaning:
- HIGH_PRIORITY: this statement has a high priority, that is, TiDB gives priority to this statement and executes it first.
- LOW_PRIORITY: this statement has a low priority, that is, TiDB reduces the priority of this statement during the execution period.
You can combine the above two parameters with the DML of TiDB to use them. For usage details, see TiDB DML. For example:
Adjust the priority by writing SQL statements in the database:
```sql
select HIGH_PRIORITY | LOW_PRIORITY count(*) from table_name;
insert HIGH_PRIORITY | LOW_PRIORITY into table_name insert_values;
delete HIGH_PRIORITY | LOW_PRIORITY from table_name;
update HIGH_PRIORITY | LOW_PRIORITY table_reference set assignment_list where where_condition;
replace HIGH_PRIORITY | LOW_PRIORITY into table_name;
```
The full table scan statement automatically adjusts itself to a low priority, and `analyze` has a low priority by default.
What is the trigger strategy of `auto analyze` in TiDB? Trigger strategy: `auto analyze` is automatically triggered when the number of rows in a new table reaches 1000 and this table has no write operation within one minute.

When the ratio of the modified row number to the current total row number is larger than `tidb_auto_analyze_ratio`, the `analyze` statement is automatically triggered. The default value of `tidb_auto_analyze_ratio` is 0, indicating that this feature is disabled. To ensure safety, its minimum value is 0.3 when the feature is enabled, and it must be smaller than `pseudo-estimate-ratio` whose default value is 0.7, otherwise pseudo statistics will be used for a period of time. It is recommended to set `tidb_auto_analyze_ratio` to 0.5.
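A sketch of enabling the feature with the recommended value (set globally; it must stay below `pseudo-estimate-ratio`):

```sql
-- 0.5 follows the recommendation above; 0 (the default) keeps the feature disabled
set @@global.tidb_auto_analyze_ratio = 0.5;
```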
Its usage is similar to MySQL:

```sql
select column_name from table_name use index(index_name) where where_condition;
```
Use 3 replicas for test. If you increase the number of replicas, the performance declines but it is more secure. Whether to configure more replicas depends on the specific business needs.
The `cluster ID mismatch` message is displayed when starting TiKV. This is because the cluster ID stored in local TiKV is different from the cluster ID specified by PD. When a new PD cluster is deployed, PD generates random cluster IDs. TiKV gets the cluster ID from PD and stores it locally when initialized. The next time TiKV is started, it checks the local cluster ID against the cluster ID in PD. If the cluster IDs don't match, the `cluster ID mismatch` message is displayed and TiKV exits.
If you previously deployed a PD cluster, but then removed the PD data and deployed a new PD cluster, this error occurs because TiKV uses the old data to connect to the new PD cluster.
The `duplicated store address` message is displayed when starting TiKV. This is because the address in the startup parameter has already been registered in the PD cluster by another TiKV. This error occurs when there is no data folder under the directory that TiKV `--store` specifies, but you use the previous parameter to restart the TiKV.
To solve this problem, use the store delete function to delete the previous store and then restart TiKV.
Currently, some files of TiKV master have a higher compression rate, which depends on the underlying data distribution and RocksDB implementation. It is normal that the data size fluctuates occasionally. The underlying storage engine adjusts data as needed.
TiKV implements the Column Family (CF) feature of RocksDB. By default, the KV data is eventually stored in the 3 CFs (default, write and lock) within RocksDB.

- The default CF stores real data and the corresponding parameters are in `[rocksdb.defaultcf]`. The write CF stores the data version information (MVCC) and index-related data, and the corresponding parameters are in `[rocksdb.writecf]`. The lock CF stores the lock information and the system uses the default parameters.
- The Raft RocksDB instance stores Raft logs; its default CF mainly stores Raft logs, and the corresponding parameters are in `[raftdb.defaultcf]`.
- All CFs have a shared block-cache to cache data blocks and improve RocksDB read speed, configured by the `block-cache-size` parameter. A larger value of the parameter means more hot data can be cached and is more favorable to read operations, but it also consumes more system memory.
- Each CF has an individual write-buffer, whose size is controlled by the `write-buffer-size` parameter.
parameter.TiDB uses Raft to synchronize data among multiple replicas and guarantees the strong consistency of data. If one replica goes wrong, the other replicas can guarantee data security. The default number of replicas in each Region is 3. Based on the Raft protocol, a leader is elected in each Region, and if a single leader fails, a follower is soon elected as Region leader after a maximum of 2 * lease time (lease time is 10 seconds).
Writing or reading a large volume of data in TiKV takes up high I/O, memory and CPU. Executing very complex queries costs a lot of memory and CPU resources, such as the scenario that generates large intermediate result sets.
No. For OLTP scenarios, TiDB requires high I/O disks for data access and operation. As a distributed database with strong consistency, TiDB has some write amplification such as replica replication and bottom layer storage compaction. Therefore, it is recommended to use NVMe SSD as the storage disks in TiDB best practices. Mixed deployment of TiKV and PD is not supported.
No. It differs from the table splitting rules of MySQL. In TiKV, the table Range is dynamically split based on the size of Region.
Regions are not divided in advance, but follow a Region split mechanism. When the Region size exceeds the value of the `region-split-size` or `region-split-keys` parameter, a split is triggered. After the split, the information is reported to PD.
Does TiKV have a parameter like MySQL's `innodb_flush_log_trx_commit` to guarantee the security of data? Yes. Currently, the standalone storage engine uses two RocksDB instances. One instance is used to store the raft-log. When the `sync-log` parameter in TiKV is set to true, each commit is mandatorily flushed to the raft-log. If a crash occurs, you can restore the KV data using the raft-log.
WAL belongs to ordered writing, and currently, we do not apply a unique configuration to it. Recommended configuration is as follows:
interleave = all
How about the performance when `sync-log = true` is enabled? Generally, enabling `sync-log` reduces the performance by about 30%. For the test results with `sync-log = false`, see Performance test result for TiDB using Sysbench.
Raft uses strong consistency: only when the data has been written into more than 50% of the nodes does the application return ACK (two out of three nodes). In this case, data consistency is guaranteed. However, theoretically, two nodes might crash. Therefore, for scenarios that have a strict requirement on data security, such as the financial industry, you need to enable `sync-log`.
Theoretically, TiDB has 4 more network roundtrips than standalone databases.
TiKV supports calling the interface separately. Theoretically, you can take an instance as the cache. But because TiDB is positioned as a distributed relational database, we do not provide support for using TiKV separately.
The `IO error: No space left on device While appending to file` message is displayed. This is because the disk space is not enough. You need to add nodes or enlarge the disk space.
At the beginning, many users tend to do a benchmark test or a comparison test between TiDB and MySQL. We have also done a similar official test and found the test results largely consistent, although the test data has some bias. Because the architecture of TiDB differs greatly from MySQL, it is hard to find a benchmark point. The suggestions are as follows:
TiDB is designed for scenarios where sharding is used because the capacity of a MySQL standalone is limited, and where strong consistency and complete distributed transactions are required. One of the advantages of TiDB is pushing down computing to the storage nodes to execute concurrent computing.
TiDB is not suitable for tables of small size (such as below the ten-million level), because its strength in concurrency cannot be shown with a small amount of data and limited Regions. A typical example is the counter table, in which records of a few lines are updated frequently. In TiDB, these lines become several Key-Value pairs in the storage engine, and then settle into a Region located on a single node. The overhead of background replication to guarantee strong consistency and of operations from TiDB to TiKV leads to a poorer performance than a MySQL standalone.
Currently, the preferred method for backup is using the PingCAP fork of mydumper. Although the official MySQL tool `mysqldump` is also supported in TiDB to back up and restore data, its performance is poorer than `mydumper`/`loader` and it needs much more time to back up and restore large volumes of data.
Keep the size of the data file exported from `mydumper` as small as possible. It is recommended to keep the size within 64M. You can set the value of the `-F` parameter to 64.
You can tune the `t` parameter of `loader` based on the number of TiKV instances and the load status. For example, in scenarios with three TiKV instances, you can set its value to `3 * (1 ~ n)`. When the TiKV load is very high and `backoffer.maxSleep 15000ms is exceeded` displays a lot in `loader` and TiDB logs, you can adjust the parameter to a smaller value. When the TiKV load is not very high, you can adjust the parameter to a larger value accordingly.
See Loader Instructions.
Because TiDB supports most MySQL syntax, generally you can migrate your applications to TiDB without changing a single line of code in most cases. You can use checker to check whether the Schema in MySQL is compatible with TiDB.
Restart the TiDB service, add the `-skip-grant-table=true` parameter in the configuration file, log into the cluster without a password, and recreate the user, or recreate the `mysql.user` table using the following statements:
```sql
DROP TABLE IF EXISTS mysql.user;

CREATE TABLE if not exists mysql.user (
    Host CHAR(64),
    User CHAR(16),
    Password CHAR(41),
    Select_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Insert_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Update_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Delete_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Drop_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Process_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Grant_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    References_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Alter_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Show_db_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Super_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_tmp_table_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Lock_tables_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Execute_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Show_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Alter_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Index_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_user_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Event_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Trigger_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    PRIMARY KEY (Host, User));

INSERT INTO mysql.user VALUES ("%", "root", "", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y");
```
Currently, TiDB does not support `select into outfile`. You can use the following methods to export the data in TiDB:

- Export data using `mydumper`.
- Use the MySQL client to save the results of `select` to a file.
to a file.To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see the following solution:
Currently, it is recommended to use OGG.
The error `java.sql.BatchUpdateExecption: statement count 5001 exceeds the transaction limitation` occurs while using Sqoop to write data into TiDB in batches. In Sqoop, `--batch` means committing 100 `statement`s in each batch, but by default each `statement` contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.
Two solutions:
Add the `-Dsqoop.export.records.per.statement=10` option as follows:

```shell
sqoop export \
    -Dsqoop.export.records.per.statement=10 \
    --connect jdbc:mysql://mysql.example.com/sqoop \
    --username sqoop ${user} \
    --password ${passwd} \
    --table ${tab_name} \
    --export-dir ${dir} \
    --batch
```
You can also increase the limited number of statements in a single TiDB transaction, but this will consume more memory.
See Syncer User Guide.
Download and import the Syncer JSON to Grafana. Edit the Prometheus configuration file and add the following content:

```yaml
- job_name: 'syncer_ops'  # task name
  static_configs:
    - targets: ['10.10.1.1:10096']  # Syncer monitoring address and port, informing Prometheus to pull the data of Syncer
```
Restart Prometheus.
No. Currently, the data synchronization depends on the application itself.
Yes. For details, see Syncer User Guide
Frequent DDL operations may affect the synchronization speed. For Syncer, DDL operations are executed serially. When DDL operations are executed during data synchronization, data is synchronized serially and thus the synchronization speed is slowed down.
If the `syncer.meta` file is lost, what should I do? When you synchronize data using Syncer GTID, the `syncer.meta` file is constantly updated during the synchronization process. The current version of Syncer does not contain the design for high availability: the `syncer.meta` configuration file of Syncer is directly stored on the hard disk, which is similar to other tools in the MySQL ecosystem, such as mydumper.

Two solutions:

- Put the `syncer.meta` file on a relatively secure disk. For example, use disks with RAID 1.
- Restore the location information of history synchronization according to the monitoring data that Syncer reports to Prometheus regularly. But the location information might be inaccurate due to the delay when a large amount of data is synchronized.
The total read capacity has no limit. You can increase the read capacity by adding more TiDB servers. Generally the write capacity has no limit as well. You can increase the write capacity by adding more TiKV nodes.
The error message `transaction too large` is displayed. As distributed transactions need to conduct two-phase commit and the bottom layer performs Raft replication, if a transaction is very large, the commit process would be quite slow, and the following Raft replication flow is thus stuck. To avoid this problem, we limit the transaction size. There are similar limits on Google Cloud Spanner.
When you import data, insert in batches and keep the number of rows within 10,000 for each batch.
As for `insert` and `select`, you can enable the hidden parameter `set @@session.tidb_batch_insert=1;`, and `insert` will execute large transactions in batches. In this way, you can avoid the timeout caused by large transactions, but this may lead to the loss of atomicity: an error in the process of execution leads to a partly inserted transaction. Therefore, use this parameter only when necessary, and use it in session scope to avoid affecting other statements. When the transaction is finished, use `set @@session.tidb_batch_insert=0` to disable it.
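Putting the above together, a hedged sketch of a batched bulk insert (the table names are hypothetical):

```sql
set @@session.tidb_batch_insert = 1;  -- atomicity is lost while this is on
insert into target_table select * from huge_source_table;
set @@session.tidb_batch_insert = 0;  -- turn it off as soon as the statement finishes
```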
As for `delete` and `update`, you can use `limit` plus a loop to operate.
None of the `DELETE`, `TRUNCATE` and `DROP` operations release data immediately. For the `TRUNCATE` and `DROP` operations, after the TiDB GC (Garbage Collection) time (10 minutes by default), the data is deleted and the space is released. For the `DELETE` operation, the data is deleted but the space is not released by the TiDB GC; instead, the space is reused when subsequent data is written into RocksDB and `COMPACT` is executed.
No. None of the DDL operations can be executed on the target table when you load data, otherwise the data fails to be loaded.
Does TiDB support the `replace into` syntax? Yes, but `load data` does not support the `replace into` syntax.
Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the Region Merge feature is in development, which is expected to solve this problem. For details, see the deleting data section in TiDB Best Practices.
When deleting a large amount of data, it is recommended to use `delete from t where xx limit 5000;`. It deletes through a loop and uses `Affected Rows == 0` as the condition to end the loop, so as not to exceed the limit on transaction size. With the prerequisite of meeting business filtering logic, it is recommended to add a strong-filter index column or directly use the primary key to select the range, such as `id >= 5000*n+m and id < 5000*(n+1)+m`.
If the amount of data that needs to be deleted at a time is very large, this loop method gets slower and slower, because each deletion traverses backward: after the previous data is deleted, lots of deletion flags remain for a short period (they are later processed by Garbage Collection) and influence the following Delete statements. If possible, it is recommended to refine the Where condition. See details in TiDB Best Practices.
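An illustration of the range-based approach (the table, columns, and boundaries are hypothetical):

```sql
-- Delete in primary-key ranges so each batch avoids re-scanning deleted keys
delete from t where id >= 0    and id < 5000  and ts < '2017-05-26';
delete from t where id >= 5000 and id < 10000 and ts < '2017-05-26';
-- ... continue range by range until the whole interval is covered
```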
You can configure concurrent GC to increase the speed of reclaiming storage space. The default concurrency is 1, and you can modify it to at most 50% of the number of TiKV instances using the following command:
```sql
update mysql.tidb set VARIABLE_VALUE="3" where VARIABLE_NAME="tikv_gc_concurrency";
```
See Understand the Query Execution Plan.
See Introduction to Statistics.
How to improve the execution speed of `select count(1)`? The `count(1)` statement counts the total number of rows in a table. Improving the degree of concurrency can significantly improve the speed; to modify the concurrency, refer to the documentation. But it also depends on the CPU and I/O resources. TiDB accesses TiKV in every query; when the amount of data is small, all of MySQL's data is in memory, while TiDB also needs to conduct a network access.
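For instance, a hedged sketch of raising the read concurrency for one session before a large count (`tidb_distsql_scan_concurrency` is the relevant session variable; the table name is hypothetical):

```sql
-- Raise the distributed scan concurrency for this session only
set @@session.tidb_distsql_scan_concurrency = 30;
select count(1) from big_table;  -- big_table is a hypothetical large table
```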
Recommendations:

- Use `count` in the case of a large amount of data.

Use `admin show ddl` to view the current job of adding an index.
- `admin show ddl`: to view the running DDL job
- `admin show ddl jobs`: to view all the results in the current DDL job queue (including tasks that are running and waiting to run) and the last ten results in the completed DDL job queue
- `admin show ddl job queries 'job_id' [, 'job_id'] ...`: to view the original SQL statement of the DDL task corresponding to the `job_id`; the `job_id` only searches the running DDL job and the last ten results in the DDL history job queue

Yes. TiDB uses the cost-based optimizer. The cost model and statistics are constantly optimized. TiDB also supports correlation algorithms such as hash join and sort merge join.
How do I know whether I need to execute `analyze` on a table? View the `Healthy` field using `show stats_healthy`; generally, you need to execute `analyze` on a table when the field value is smaller than 60.
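For example (the table name is hypothetical):

```sql
show stats_healthy where Table_name = 't';
-- If Healthy is below 60, refresh the statistics:
analyze table t;
```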
No rule exists for these IDs but the IDs are unique. When IDs are generated, a counter works and adds one when one plan is generated. The execution order has nothing to do with the ID. The whole query plan is a tree and the execution process starts from the root node and the data is returned to the upper level continuously. For details about the query plan, see Understanding the TiDB Query Execution Plan.
Several `cop` tasks are in the same root. Are they executed concurrently? Currently, the computing tasks of TiDB belong to two different types: `cop task` and `root task`.

`cop task` is the computing task pushed down to the KV end for distributed execution; `root task` is the computing task for single-point execution on the TiDB end.

Generally, the input data of `root task` comes from `cop task`; while `root task` processes data, `cop task` of TiKV can process data at the same time and wait for the pull of `root task` of TiDB. Therefore, `cop` tasks can be considered as executed concurrently, but their data has an upstream and downstream relationship. During the execution process, they are executed concurrently during some period of time. For example, the first `cop task` is processing the data in [100, 200] while the second `cop task` is processing the data in [1, 100]. For details, see Understanding the TiDB Query Plan.
In TiDB, data is divided into Regions for management. Generally, a TiDB hotspot means a read/write hotspot in a Region. In TiDB, for a table whose primary key (PK) is not an integer or which has no PK, you can properly break Regions by configuring `SHARD_ROW_ID_BITS` to scatter the Region hotspots. For details, see the introduction of `SHARD_ROW_ID_BITS` in TiDB Specific System Variables and Syntax.
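A hedged sketch of scattering writes for a table without an integer primary key (the table is hypothetical):

```sql
-- Spread the implicit row ID over 2^4 = 16 shards to break a write hotspot
create table logs (msg varchar(255)) shard_row_id_bits = 4;
-- or adjust an existing table
alter table logs shard_row_id_bits = 4;
```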
See Overview of the Monitoring Framework.
See Key Metrics.
The monitoring system of TiDB consists of Prometheus and Grafana. From the dashboard in Grafana, you can monitor various running metrics of TiDB which include the monitoring metrics of system resources, of client connection and SQL operation, of internal communication and Region scheduling. With these metrics, the database administrator can better understand the system running status, running bottlenecks and so on. In the practice of monitoring these metrics, we list the key metrics of each TiDB component. Generally you only need to pay attention to these common metrics. For details, see Key Metrics.
Yes. Find the startup script on the machine where Prometheus is started, edit the startup parameter and restart Prometheus.
In TiDB 2.0, Region health is monitored in the PD metric monitoring page, where the `Region Health` monitoring item shows the statistics of all the Region replica statuses: `miss` means a shortage of replicas and `extra` means an extra replica exists. In addition, `Region Health` also shows the isolation level by `label`: `level-1` means the Region replicas are physically isolated at the first `label` level. All the Regions are in `level-0` when `location label` is not configured.
What does `selectsimplefull` mean in the Statement Count monitor? It means a full table scan, but the table might be a small system table.
What is the difference between `QPS` and `Statement OPS` in the monitor?

The `QPS` statistics cover all the SQL statements, including `use database`, `load data`, `begin`, `commit`, `set`, `show`, `insert` and `select`.

The `Statement OPS` statistics only cover application-related SQL statements, including `select`, `update` and `insert`, so the `Statement OPS` statistics match the applications better.
A PD request timeout. Check the status, monitoring data and log of the PD server, and the network between the TiDB server and the PD server.
A TiKV request timeout. Check the status, monitoring data and log of the TiKV server, and the network between the TiDB server and the TiKV server.
The TiKV server is busy. This usually occurs when the database load is very high. Check the status, monitoring data and log of the TiKV server.
A lock resolving timeout. This usually occurs when a large number of transaction conflicts exist. Check the application code to see whether lock contention exists in the database.
The accessed Region is not available. A Raft Group is not available, with possible reasons like an inadequate number of replicas. This usually occurs when the TiKV server is busy or the TiKV node is shut down. Check the status, monitoring data and log of the TiKV server.
The interval of `GC Life Time` is too short. The data that should have been read by long transactions might be deleted. You can lengthen `GC Life Time` using the following command:
```sql
update mysql.tidb set variable_value='30m' where variable_name='tikv_gc_life_time';
```
Note: “30m” means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.
Check whether the system OOM killer was triggered using `dmesg -T | grep -i oom`.

This error usually occurs when the version of TiDB does not match the version of TiKV. To avoid version mismatch, upgrade all components when you upgrade the version.
When you execute the `LOAD DATA LOCAL` statement but the MySQL client does not allow executing this statement (the value of the `local_infile` option is 0), this error occurs.

The solution is to use the `--local-infile=1` option when you start the MySQL client, for example, `mysql --local-infile=1 -u root -h 127.0.0.1 -P 4000`. The default value of `local-infile` differs between MySQL client versions, so you need to configure it in some MySQL clients and not in others.
This error occurs when TiDB fails to access PD. A worker in the TiDB background continuously queries the safepoint from PD and this error occurs if it fails to query within 100s. Generally, it is because the disk on PD is slow and busy or the network failed between TiDB and PD. For the details of common errors, see Error Number and Fault Diagnosis.