
Using Kafka as WAL backend: is data really not lost when a node crashes? #131

Open
astor-oss opened this issue Jun 27, 2021 · 2 comments

Comments

astor-oss commented Jun 27, 2021

Q1: How to acquire the Kafka offset when a node restarts

When using Kafka as the WAL backend, if the writer machine (say node00) crashes and I set up a new node (say node01) remotely, node01 will fetch all SSTs from S3. But how does node01 acquire the correct Kafka offset to resume from?

Q2: Purpose of the local cache in CloudLogControllerImpl

What is the purpose of the local cache directory returned by CloudLogControllerImpl::GetCacheDir?


dhruba commented Jun 27, 2021

A1: Your code has to store the Kafka offset in your RocksDB database. It is outside the rocksdb-cloud code, because rocksdb-cloud is not aware of Kafka at all.
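A minimal sketch of that approach, using RocksDB's C++ `WriteBatch` API. The reserved key name `__kafka_offset` and the helper functions here are hypothetical, not part of rocksdb-cloud; the point is only that the offset is written in the same atomic batch as the data it covers:

```cpp
#include <cstdint>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Reserved key under which the last-applied Kafka offset is stored
// (an assumption for this sketch; pick any key your schema reserves).
const std::string kOffsetKey = "__kafka_offset";

// Apply one consumed Kafka record to RocksDB and, in the same atomic
// WriteBatch, persist the offset it came from.
rocksdb::Status ApplyRecord(rocksdb::DB* db,
                            const std::string& key,
                            const std::string& value,
                            int64_t kafka_offset) {
  rocksdb::WriteBatch batch;
  batch.Put(key, value);
  batch.Put(kOffsetKey, std::to_string(kafka_offset));
  return db->Write(rocksdb::WriteOptions(), &batch);
}

// On restart (e.g. node01 after reopening the cloud database), read the
// stored offset back and resume the Kafka consumer from offset + 1.
int64_t RecoverOffset(rocksdb::DB* db) {
  std::string stored;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), kOffsetKey, &stored);
  return s.ok() ? std::stoll(stored) + 1 : 0;  // start from 0 if absent
}
```

Because the offset lands in the same atomic batch as the record it covers, a crash can never leave the stored offset ahead of or behind the applied data, so replay from the recovered offset neither skips nor double-applies records.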


astor-oss commented Jun 28, 2021

> A1: Your code has to store the Kafka offset in your RocksDB database. It is outside the rocksdb-cloud code, because rocksdb-cloud is not aware of Kafka at all.

Thanks for your attention. If the WAL contains a delete, but the WAL has not yet been persisted to an SST or to S3 when the node crashes, will that cause data loss?
