100-fold increase in regular incremental backup size using blksnap over veeamsnap #97
Comments
Hi! I assume the differences are caused by differences in how the change tracker (CBT) operates. I hope we can make the next version of blksnap better.
Hi @bickford. Unfortunately, we were unable to confirm this problem. Thank you.
Hi @SergeiShtepa, I'm chasing a similar issue and have just opened a ticket with Veeam support about it. In my case, I recently put a Web server into production that uses Apache to serve up static content (images). Before going into production, daily backups were extremely small. But immediately after making the server live, daily incremental backups of my data volume spiked in size massively. As an example, on a recent day, the volume that holds the static content is showing a backup size of 200+ GB, despite only about 2 GB of new images added to the server that day. I've done a bit of troubleshooting and found that Apache is updating the access time (atime) metadata for files it serves. On my server, that's a lot of files and directly corresponds to how busy the server is. Is it reasonable that blksnap could be detecting these atime updates as block changes, thus greatly increasing the number of blocks in the snapshot? Thank you very much for any thoughts!
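A quick way to confirm that behaviour is to compare a file's access time before and after Apache serves it. This is a minimal sketch; the file path and URL below are placeholders, not actual paths from this server.

```bash
# Record the current access time of a served file (placeholder path).
stat -c 'atime: %x' /var/www/images/example.jpg

# Request the file through Apache (placeholder URL).
curl -s -o /dev/null http://localhost/images/example.jpg

# Check again: on a strict-atime mount the access time moves forward on
# every read, which dirties filesystem metadata blocks that CBT then tracks.
stat -c 'atime: %x' /var/www/images/example.jpg
```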
If you don't really use file access times, you can set relatime as a workaround so that the access time is only updated when the file itself changes.
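A minimal sketch of that workaround, assuming a placeholder mount point and device:

```bash
# Remount the data volume with relatime (placeholder mount point).
sudo mount -o remount,relatime /srv/www

# To make it persistent, add relatime to the options field in /etc/fstab, e.g.:
# /dev/sdb1  /srv/www  ext4  defaults,relatime  0  2
```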
Hi. The kernel module tracks activity at the block layer. The nature of the data doesn't matter to it, and neither does the file system. Changing a file's access time causes the file metadata to change; that is, one or four kilobytes of file system metadata are updated each time an image is read. I think the images are read in random order, which means the metadata changes throughout the disk. Try mounting with "-o noatime". Of course, I can't guarantee that this is the problem; this is just a guess. To identify the cause of the problem, you will need to monitor and analyze the changed data. See: blktrace. @bickford, we are still interested in your detailed logs.
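A rough sketch of both suggestions, with the mount point and block device as placeholders: remount with noatime, then use blktrace/blkparse to see how much data is actually being written between snapshots.

```bash
# Remount with noatime so plain reads no longer touch inode metadata
# (placeholder mount point).
sudo mount -o remount,noatime /srv/www

# Trace block-layer activity on the backing device for 5 minutes
# (placeholder device name).
sudo blktrace -d /dev/sdb -o datatrace -w 300

# Summarise the trace; write completions show which sectors are being
# dirtied and roughly how much change the tracker will see.
blkparse -i datatrace | grep ' C W' | less
```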
Hi @Fantu and @SergeiShtepa, Thanks for your replies, and especially for confirming my suspicion that the modification of the access time could be the issue. I've currently got the volume mounted with relatime, but will try remounting with noatime and see what effect it has after a full 24-hour cycle. Hopefully this will be successful, as I can find no other reason so much data would be changing. Since you feel this issue is not related to the one described by @bickford, I won't add anything else. Thanks again!
Hi again Sergei, I've logged a case with Veeam just now. I initially thought that as a Free version user I didn't have access to log a case with Veeam, but I checked again now and see that's incorrect and that it's on a 'best efforts' basis. In the interim I've updated the server to Ubuntu 24.04.2 LTS to see if that improves things - no change. At the moment I am trying to see if I can find anything on the system that would be making sudden increases in size, e.g. programs having errors and dumping excess logs. I noticed Nomachine was creating some significant logs, so I've uninstalled it to see if it improves the issue. Will update back in a day or two to let you know if I see reductions in backup size or feedback from Veeam support. Thanks again
Hi @bickford, To search for changes on disk, I successfully used the following commands. For modified files: And what really helped me nail down the problem was looking for files with a changed access time: Those may be useful for you as well.
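A rough find-based equivalent of those checks, assuming the data lives under a placeholder path such as /srv/www:

```bash
# Files whose contents were modified in the last 24 hours (placeholder path).
find /srv/www -type f -mtime -1 -print

# Files whose access time changed in the last 24 hours; on an atime/relatime
# mount this list can be far larger than the list of modified files.
find /srv/www -type f -atime -1 -print
```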
Distribution
Ubuntu 22.04.5 LTS
Architecture
x86_64 intel
Kernel version
Linux 6.8.0-52-generic #53~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jan 15 19:18:46 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Blksnap version
Version: 6.3.0.73 deb from apt-get
Bug description
Hi, I'm not sure if this is a bug per se. I'm checking whether it's normal for the Veeam incremental files created with blksnap to be 100 times larger than the incremental files created with the older veeamsnap. Mine have ballooned from ~14MB daily to ~1.2GB daily.
I updated Veeam Backup Agent for Linux a couple of weeks ago and found that the new version required the use of blksnap, so I installed it, replacing veeamsnap. I also noticed that blksnap now supports Secure Boot, so I enabled it in my BIOS and went through the process of authorizing the MOK. I re-ran the backup config to reconfigure and run a test backup. Everything worked great; so far so good.
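For reference, the MOK authorization step typically looks something like the sketch below; the key path is a placeholder and will depend on how the signing key is shipped on your system.

```bash
# Queue the module signing key for enrollment (placeholder key path).
sudo mokutil --import /var/lib/shim-signed/mok/MOK.der

# Reboot; the MOK manager prompts to enroll the key, after which the
# signed blksnap module can load with Secure Boot enabled.
sudo reboot
```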
However, I've now checked the backups and noticed that the incremental backup files (.VIB), which were previously ~14MB each day, are now ~1.2GB. This is not a server with a lot of daily data change, so the increase is unexpected and I can't account for it other than the veeamsnap to blksnap change. Screenshots of the NAS file system showing the backups are attached.
veeamsnap:

The transition from veeamsnap to blksnap:

Steps to reproduce
Perhaps hard to reproduce.
Expected behavior
I'd expect the incremental backup sizes to be similar between both backup mechanisms, as they're both disk-level; a 100-fold increase is absolutely massive.
Additional information
I could not find any other threads discussing this increase so I thought it may be unique to my situation, or something I've misconfigured.
Normal? Any suggestions? Is there a configuration option I can change for better compression or something?
Thanks for your help.