drainer support relay log #842

Open
july2993 opened this issue Dec 6, 2019 · 4 comments
Labels
feature-request This issue is a feature request

Comments

@july2993
Contributor

july2993 commented Dec 6, 2019

Is your feature request related to a problem? Please describe:

When drainer syncs data to TiDB directly and the upstream cluster goes down completely, the downstream may not reach a consistent status.

We define the downstream reaching a consistent status at timestamp ts as:

  • the downstream cluster is identical to a snapshot of the upstream cluster taken with tidb_snapshot = ts.

The reason drainer can't guarantee a consistent status is that it does not write data into the downstream cluster transaction by transaction. Instead, it splits each transaction and writes to the downstream concurrently.

As an example, if there’s a transaction at upstream:

begin;
insert into test1(id) values(1);
insert into test1(id) values(2);
...
insert into test1(id) values(100);
commit;

drainer will write to the downstream in two transactions concurrently:

one is:

begin;
insert into test1(id) values(1);
insert into test1(id) values(2);
...
insert into test1(id) values(50);
commit;

another one is:

begin;
insert into test1(id) values(51);
insert into test1(id) values(52);
...
insert into test1(id) values(100);
commit;

If the upstream cluster goes down and drainer quits after writing only part of a transaction into the downstream, there is no way to reach a consistent status anymore, because drainer can no longer fetch data from the upstream.

Describe the feature you'd like:

drainer should support an option to enable a relay log when the dest-type is tidb or mysql: before writing the binlog data into the downstream TiDB, it must persist the binlog data locally first, so that if the upstream cluster goes down, it can use the locally persisted data to reach a consistent status at some timestamp. A sketch of the ordering is shown below.
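To make the ordering concrete, here is a minimal Go sketch of the proposed persist-first flow; RelayLogger, DownstreamSyncer, and Item are hypothetical names for illustration, not the actual drainer types:

// Sketch of the proposed persist-first ordering. RelayLogger and Item are
// hypothetical names for illustration, not real drainer types.
package relay

type Item struct {
	Payload  []byte // encoded binlog record
	CommitTS int64
}

type RelayLogger interface {
	// Append persists the record durably (fsync) and returns only after
	// the data is safe on local disk.
	Append(item *Item) error
}

type DownstreamSyncer interface {
	Sync(item *Item) error
}

// handleBinlog shows the ordering constraint: the record must be persisted
// to the relay log before any write is issued to the downstream TiDB/MySQL.
func handleBinlog(logger RelayLogger, syncer DownstreamSyncer, item *Item) error {
	if err := logger.Append(item); err != nil {
		return err
	}
	return syncer.Sync(item)
}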

Describe the alternatives you've considered:

Instead of using drainer to sync data to the downstream TiDB directly, use drainer with dest-type = kafka to persist data into a downstream Kafka cluster, then use another tool like arbiter to sync the data to the downstream TiDB cluster.

Teachability, Documentation, Adoption, Migration Strategy:

Essential features

Add a config option relay_log_dir to drainer. When it's not configured, drainer works as before.

When it's configured, drainer must persist the binlog data before writing it to the downstream TiDB.

When the upstream is down completely, we can start up drainer and reach a consistent status as long as the relay log data in relay_log_dir is not lost. A possible shape of the option is sketched below.
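For illustration only, the option could be wired into drainer's config roughly like this (the field names, TOML keys, and helper are assumptions, not the final design):

// Sketch of how the option could surface in drainer's config struct.
// Field names and tags are assumptions for illustration only.
package drainer

type SyncerConfig struct {
	DestDBType string `toml:"db-type" json:"db-type"`
	// RelayLogDir enables the relay log when non-empty. When it is empty,
	// drainer behaves exactly as before (no local persistence).
	RelayLogDir string `toml:"relay-log-dir" json:"relay-log-dir"`
}

func (c *SyncerConfig) relayEnabled() bool {
	return c.RelayLogDir != "" && (c.DestDBType == "tidb" || c.DestDBType == "mysql")
}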

Record format

We can use the same protobuf format (binlog.proto) that drainer currently writes into Kafka when the downstream type is Kafka. Note that we cannot persist the binlog in the format received from pump directly, because decoding it requires extra metadata, such as resolving the table schema from a table id.
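One possible on-disk layout is a length-prefixed record holding the already-marshaled binlog.proto message, plus a checksum so a partially written tail record can be detected; the layout and helper names below are assumptions, not a finalized format:

// Sketch of a length-prefixed record layout for the relay log file.
// payload is assumed to be an already-marshaled binlog.proto message
// (the same format drainer writes to Kafka). Layout and CRC are assumptions.
package relay

import (
	"encoding/binary"
	"errors"
	"hash/crc32"
	"io"
)

// writeRecord writes: 4-byte little-endian length | payload | 4-byte CRC32.
func writeRecord(w io.Writer, payload []byte) error {
	var lenBuf [4]byte
	binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(payload)))
	if _, err := w.Write(lenBuf[:]); err != nil {
		return err
	}
	if _, err := w.Write(payload); err != nil {
		return err
	}
	var crcBuf [4]byte
	binary.LittleEndian.PutUint32(crcBuf[:], crc32.ChecksumIEEE(payload))
	_, err := w.Write(crcBuf[:])
	return err
}

// readRecord reads one record back and verifies the checksum, so a torn
// write at the end of the file can be recognized and dropped.
func readRecord(r io.Reader) ([]byte, error) {
	var lenBuf [4]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return nil, err
	}
	payload := make([]byte, binary.LittleEndian.Uint32(lenBuf[:]))
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	var crcBuf [4]byte
	if _, err := io.ReadFull(r, crcBuf[:]); err != nil {
		return nil, err
	}
	if crc32.ChecksumIEEE(payload) != binary.LittleEndian.Uint32(crcBuf[:]) {
		return nil, errors.New("relay log: corrupted or partially written record")
	}
	return payload, nil
}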

Purge Data

We can simply purge the data in relay_log_dir as soon as it has been written to the downstream.
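A minimal sketch of the purge rule, under the assumption that the largest commit ts of each relay log file is cheap to look up (e.g. encoded in the file name); fileMaxCommitTS and the *.relaylog naming are hypothetical:

// Sketch of purging relay log files that are fully synced to the downstream.
// fileMaxCommitTS is a hypothetical helper that returns the largest commit ts
// stored in a relay log file.
package relay

import (
	"os"
	"path/filepath"
)

func purge(dir string, syncedTS int64, fileMaxCommitTS func(path string) (int64, error)) error {
	files, err := filepath.Glob(filepath.Join(dir, "*.relaylog"))
	if err != nil {
		return err
	}
	for _, f := range files {
		maxTS, err := fileMaxCommitTS(f)
		if err != nil {
			return err
		}
		// Safe to delete only when every record in the file has already
		// been written to the downstream.
		if maxTS <= syncedTS {
			if err := os.Remove(f); err != nil {
				return err
			}
		}
	}
	return nil
}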

Implementation

Phase 1: Essential features of the relay log pkg.

  • Support reading & writing binlog records
  • Support purging binlog records

For performance, we should batch records when persisting to the filesystem with fsync; see the sketch below.
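A rough sketch of the batching idea: buffer appended records and make the whole batch durable with a single fsync; batchWriter is illustrative, not the real relay log package API:

// Sketch of batching appends and amortizing the fsync cost over many records.
package relay

import (
	"bufio"
	"os"
)

type batchWriter struct {
	f   *os.File
	buf *bufio.Writer
}

func newBatchWriter(f *os.File) *batchWriter {
	return &batchWriter{f: f, buf: bufio.NewWriterSize(f, 1<<20)}
}

// append buffers one encoded record; nothing is durable yet.
func (w *batchWriter) append(record []byte) error {
	_, err := w.buf.Write(record)
	return err
}

// flush makes the whole batch durable with a single fsync. Callers must not
// treat records as persisted (or write them downstream) before flush returns.
func (w *batchWriter) flush() error {
	if err := w.buf.Flush(); err != nil {
		return err
	}
	return w.f.Sync() // fsync
}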

Phase 2: Make drainer support the relay_log_dir option

  • make drainer recognize the option and persist data first instead of syncing to the downstream directly.
  • make sure drainer can reach a consistent status at startup time if the upstream is down completely (sync the data in relay_log_dir; see the sketch after this list).
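A sketch of the startup path when the upstream is unreachable: replay the relay log in order, skip any partially written tail record, and report the commit ts the downstream is now consistent with; Reader, Syncer, and Record are illustrative names, not the actual drainer interfaces:

// Sketch of reaching a consistent status from the relay log alone when the
// upstream is down completely.
package relay

import "io"

type Record struct {
	CommitTS int64
	Payload  []byte
}

type Reader interface {
	// Next returns the next complete record, or io.EOF when the relay log
	// is exhausted (a trailing partially written record is ignored).
	Next() (*Record, error)
}

type Syncer interface {
	Sync(rec *Record) error
}

// catchUpFromRelayLog replays the relay log and returns the commit ts the
// downstream is consistent with afterwards.
func catchUpFromRelayLog(r Reader, s Syncer) (int64, error) {
	var consistentTS int64
	for {
		rec, err := r.Next()
		if err == io.EOF {
			break // everything that was persisted has been replayed
		}
		if err != nil {
			return consistentTS, err
		}
		if err := s.Sync(rec); err != nil {
			return consistentTS, err
		}
		consistentTS = rec.CommitTS
	}
	return consistentTS, nil
}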

Score

4500

References

TiDB Binlog reference docs
TiDB Binlog source code reading: the part most related to drainer is not published yet and is available internally only for now

@july2993 added the feature-request label on Dec 6, 2019
@djshow832
Contributor

I want to join this task.

@aylen

aylen commented Dec 10, 2019

Is it similar to MySQL replication: drainer reads the pump's binlog, generates a relay log, and then the code that syncs to tidb or mysql is reworked so it no longer relies on third-party tools?

@july2993
Contributor Author

> Is it similar to MySQL replication: drainer reads the pump's binlog, generates a relay log, and then the code that syncs to tidb or mysql is reworked so it no longer relies on third-party tools?

It can replicate to tidb or mysql using the local relay log to reach a consistent status even when the upstream is down completely. That means it must be able to read only the relay log and replicate the data to tidb. Is this what you mean by "no longer rely on third-party tools"?

@IANTHEREAL
Collaborator

Has the implementation scheme been determined?
