How to understand size values when dedup and compression are enabled on a zpool volume #16783
Neptunet-GitHub asked this question in Q&A
Hi guys,

I created a RAIDZ3 pool from 9 disks of 3.7 TB each. This gave me a ZFS filesystem with about 20 TB of usable space. Next I enabled deduplication and compression on it, and the filesystem content is synthetic. Everything was fine so far.

Then I wrote files to the filesystem, and from that point I lost track of what the reported values mean. When I check how much free space I have, the operating system reports that the filesystem has a size of 106T with 18T free. ZFS reports a size of 31.4T, 27.9T free, and an allocation of 3.55T, but it also reports 88.4T used and 17T available.

Is there any document or FAQ that explains how to interpret these values? Sorry, but I could not find one.

For example, how much free space do I really have on that filesystem? Is the real information the ZFS allocation of 3.55T, so with the 20 TB of total space the empty filesystem had, I now have about 16.4T free?

Why does ZFS report 27.9T free and a size of 31.4T when the size used to be 20T? The operating system reports 18T free, but that cannot be right either, because I wrote an additional 100 TB to it and the allocation only grew from 3.52T to 3.55T.

I am trying to work out how these values are calculated, but I am having trouble with it.
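For context, here is a back-of-the-envelope sketch of where the roughly 20 TB of usable space comes from. It is only an approximation: the exact figures depend on TB versus TiB units, sector alignment, and ZFS's own overhead, and the per-disk size below is simply the value quoted above.

# Back-of-the-envelope RAIDZ3 capacity estimate, using the figures quoted in the question above.
disks = 9
disk_size_t = 3.7                  # per-disk capacity as stated above (nominal; the real TiB value is a bit lower)
parity = 3                         # RAIDZ3 stores three disks' worth of parity

raw_space = disks * disk_size_t                 # ~33.3 T raw; pool-level SIZE/FREE count space this way
usable_space = (disks - parity) * disk_size_t   # ~22.2 T before overhead, close to the ~20 T the empty dataset showed

print(f"raw pool space      ~ {raw_space:.1f} T")
print(f"usable after parity ~ {usable_space:.1f} T")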
Replies: 1 comment

When you look at the pool statistics, ZFS shows raw space (including parity) for RAIDZ, which is why 31.3 TiB was reported there versus the 20 TiB originally reported as free/available on the dataset. Another factor here is the huge dedup ratio you have, which you can see in the pool stats. Since dedup does not work at the dataset level, ZFS cannot report it there, so it inflates the pool capacity instead, as if all the data you have written were really stored somewhere. The 18 TiB reported as available for the dataset should be right once you start writing truly unique data, and it matches the FREE reported for the pool once you compensate for parity.
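To make the parity compensation concrete, here is a rough reconciliation of the figures quoted in the question. Treat it as a sketch only: it ignores RAIDZ allocation padding, metadata, and the pool's slop reservation, and it lumps dedup and compression savings together.

# Rough reconciliation of the reported numbers (approximate; see the caveats above).
data_fraction = 6 / 9      # 9-disk RAIDZ3: 6 data disks out of 9

pool_free_raw  = 27.9      # pool-level FREE, raw space including parity
pool_alloc_raw = 3.55      # pool-level ALLOC, raw space including parity
dataset_used   = 88.4      # dataset-level used, charged as if nothing were deduplicated
dataset_avail  = 18.0      # available space as seen by the OS (ZFS itself shows about 17T)

# Pool FREE compensated for parity lands near the dataset-level available space.
print(f"pool FREE after parity ~ {pool_free_raw * data_fraction:.1f} T (reported ~ {dataset_avail} T)")

# The 106T size the operating system shows is simply used + available for the dataset.
print(f"OS-visible size        ~ {dataset_used + dataset_avail:.1f} T")

# Physical data actually stored (parity removed) versus data charged to the dataset
# gives a rough combined dedup + compression ratio.
stored = pool_alloc_raw * data_fraction
print(f"implied savings ratio  ~ {dataset_used / stored:.0f}x")

The actual ratios are better read directly from the DEDUP column of zpool list and the compressratio property reported by zfs get compressratio.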