Huge badger data store #7454
This is a known issue, and it has been fixed upstream. We are currently waiting for the next release of Badger v1 so the fix can be included in go-ipfs. See the upstream bugfix and our tracking ticket.
So, will they release both v1 and v2 versions? The upstream bugfix was merged only into v2.
@RubenKelevra, we've forked the badger repo, added this fix, and replaced the package. We ran the app and took a screenshot at startup; it worked for several minutes and we saw this: (screenshot). In less than 2 minutes: (screenshot). After 3 hours, nothing had changed.
Hey @ridenaio, there's a ticket tracking the progress of the fix for badger v1. I'm sorry, but I'm afraid this is the wrong place to ask for support for badger. As far as I know, new writes are expected to go into value logs, which are by default up to 1 GB each, and the garbage collector cleans a value log up once it no longer contains a certain threshold of useful data. Maybe open a support ticket on the badger repo, or wait for the upcoming release, which is expected shortly and will be included in go-ipfs. :)
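
For context, this is roughly what value-log GC looks like at the badger API level (a minimal sketch, assuming badger v1.6.x and a made-up datastore path; go-ipfs drives this internally through go-ds-badger, so you would not normally call it yourself):

```go
// Minimal sketch: open a badger v1 store and run value-log GC until there is
// nothing left to rewrite. The path is made up for illustration.
package main

import (
	"log"

	badger "github.com/dgraph-io/badger"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/path/to/badgerds"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// RunValueLogGC rewrites a value-log file only if at least the given
	// fraction (here 50%) of its data is stale; it returns ErrNoRewrite
	// once no file is worth rewriting, so it is usually called in a loop.
	for {
		if err := db.RunValueLogGC(0.5); err != nil {
			if err == badger.ErrNoRewrite {
				break
			}
			log.Fatal(err)
		}
	}
}
```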
Can this be closed, given that we're now using (on master) go-ds-badger v0.2.5, which uses badger v1.6.2?
Yes, we've tried 1.6.2 and it shows better GC. |
Version information:
go-ipfs v0.5.2-0.20200520231924-554a21b64784
Description:
After the Badger datastore was introduced as stable in 0.5.0, we started using BadgerDS as our default datastore. It gives us better synchronization speed, but the size of the datastore grows unexpectedly.
For the last two weeks we have had ~500 MB of live data, but on one node the datastore contains 18 GB.
A node that has been running with badger for about a month has a 120 GB datastore.
It looks like badger creates a lot of vlog files.
Maybe this problem is caused by the way we call corerepo.ConditionalGC. We invoke corerepo.ConditionalGC every 15 seconds, and to avoid long delays in read/write operations we cancel the GC invocation if any read/write operation comes in; a rough sketch of this loop is shown below.
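
Roughly what that pattern looks like (a sketch only, assuming the corerepo.ConditionalGC(ctx, node, offset) signature from go-ipfs of that era; the readWriteStarted channel is a hypothetical stand-in for however the application detects an incoming read/write):

```go
package gcloop

import (
	"context"
	"log"
	"time"

	"github.com/ipfs/go-ipfs/core"
	"github.com/ipfs/go-ipfs/core/corerepo"
)

// runPeriodicGC triggers a conditional GC every 15 seconds and aborts an
// in-flight GC pass as soon as a read/write operation starts.
func runPeriodicGC(ctx context.Context, node *core.IpfsNode, readWriteStarted <-chan struct{}) {
	ticker := time.NewTicker(15 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			gcCtx, cancel := context.WithCancel(ctx)
			done := make(chan struct{})
			go func() {
				defer close(done)
				// ConditionalGC only runs a full GC pass when repo usage
				// crosses the configured threshold; offset 0 checks against
				// current usage.
				if err := corerepo.ConditionalGC(gcCtx, node, 0); err != nil {
					log.Printf("conditional GC: %v", err)
				}
			}()
			select {
			case <-readWriteStarted:
				// A read/write began; cancel the in-flight GC pass.
				cancel()
				<-done
			case <-done:
				cancel()
			}
		}
	}
}
```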