
Add option to zero unused space through zpool initialize #16778

Open
Apachez- opened this issue Nov 18, 2024 · 8 comments
Labels
Type: Feature (Feature request or new feature)

Comments

@Apachez-

Describe the feature you would like to see added to OpenZFS

Add an option to zpool initialize to write zeroes instead of the known pattern ("0xdeadbeefdeadbeeeULL", ref: https://github.com/openzfs/zfs/blob/master/module/zfs/vdev_initialize.c#L39).

Perhaps this could be added as an optional "-z" (zeroing) flag?

As a bonus, an option to write a random pattern instead could also be added, for example through an optional "-r" (random) flag.
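
A rough sketch of what the proposed invocations might look like (hypothetical syntax taken from this request, not implemented in OpenZFS):

# hypothetical: write zeroes instead of the default pattern
zpool initialize -z <poolname>

# hypothetical: write a random pattern instead of the default pattern
zpool initialize -r <poolname>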

Current manpage:

https://openzfs.github.io/openzfs-docs/man/master/8/zpool-initialize.8.html

How will this feature improve OpenZFS?

It avoids the lengthy workaround (described below) for zeroing out unused space so that it can be reclaimed by the VM-host in a thin-provisioned environment.

Additional context

When zpool initialize is issued, it writes a known pattern ("0xdeadbeefdeadbeeeULL") to all unallocated space on the drive(s) backing the zpool.

This method is often used to verify that a drive is healthy, similar to a badblocks scan.

However, because a non-zero pattern is written, this unused space cannot be reclaimed by the VM-host when ZFS runs inside a thin-provisioned VM such as VirtualBox.

Doing a dd from within the VM-guest:

dd if=/dev/zero of=/var/tmp/bigemptyfile bs=4096k; rm /var/tmp/bigemptyfile

will also not work if you have compression=on for that zpool.

A workaround for the above is to first put the box into single-user mode (init 1) and then disable compression:

zfs set compression=off <poolname>

verify through:

zfs get all | grep -i compression

and then perform the "create a large empty file and then remove it" step through:

dd if=/dev/zero of=/var/tmp/bigemptyfile bs=4096k; rm /var/tmp/bigemptyfile

Once the above has completed and the large empty file has been deleted, re-enable compression and then shut down the VM:

zfs set compression=on <poolname>
zfs get all | grep -i compression

Once the VM-guest is shut down, compact the VDI-file at the VM-host, for example with VirtualBox:

vboxmanage modifymedium --compact /path/to/disk.vdi

Now the thin-provisioned virtual drive can shrink at the host down to its actual content (in my test from a provisioned 80 GB file down to approx. 12 GB, which is the actual content within the VM-guest plus some metadata etc.).

If the zpool initialize command had a zeroing option (such as zpool initialize -z <poolname>), it could be used instead of the above workaround to write zeroes to all unused blocks, which would then make it possible to compact the thin-provisioned storage.

Apachez- added the Type: Feature (Feature request or new feature) label Nov 18, 2024
@lundman
Contributor

lundman commented Nov 18, 2024

Isn't there a tunable that you can change to 0, instead of the default 0xdeadbeefdeadbeeeULL?

https://github.com/openzfs/zfs/blob/master/module/zfs/vdev_initialize.c#L828
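
In addition to the runtime sysfs setting shown further down in this thread, the parameter could presumably also be made persistent through a modprobe options file (a sketch assuming standard Linux module-parameter handling; the file name is just a convention):

# assumed: keeps zfs_initialize_value=0 across reboots/module reloads
echo "options zfs zfs_initialize_value=0" >> /etc/modprobe.d/zfs.conf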

@Apachez-
Author

Apachez- commented Nov 18, 2024

Hmm...

This seems to exist:

root@box:~# find /sys/module/zfs/ | grep -i initialize_value
/sys/module/zfs/parameters/zfs_initialize_value

root@box:~# cat /sys/module/zfs/parameters/zfs_initialize_value
16045690984833335022

So you mean that setting it to, let's say, "00000000000000000000" would make zpool initialize write zeroes instead?

Does it have to be of that length, or can I just set it to "0", or should it have some particular length for performance reasons?

If the above is possible it should be added to the man page for zpool-initialize :-)

@lundman
Contributor

lundman commented Nov 18, 2024

Just "0" will be expanded to set all bits to 0.

@Apachez-
Author

That seems to have worked...

Set the tunable to zero:

root@box:~# echo "0" > /sys/module/zfs/parameters/zfs_initialize_value

Verified that it updated:

root@box:~# cat /sys/module/zfs/parameters/zfs_initialize_value
0

Uninitialized the zpool:

zpool initialize -u <poolname>

Tried to initialize it again:

zpool initialize <poolname>

Verified through zpool status -i that the uninitialize was successful, and then did the same for the new initialize.

The result is that the thin-provisioned VDI-file grew by about 69 MB at the VM-host.

From 12 657 360 896 bytes to 12 726 566 912 bytes

After running compact it shrunk down to 10 047 455 232 bytes.

Size from within the VM-guest is 8122 MB.

So I'm guessing it should be fairly straightforward to add the option "-z" (or similar) if this feature request is picked up?

@AllKind
Contributor

AllKind commented Nov 18, 2024

If the above is possible it should be added to the man page for zpool-initialize :-)

Agreed...

There is this:
https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-initialize-value

But I can't find a man page for the module parameters. Does it exist and I just can't find it?

@trentbuck

However, because a non-zero pattern is written, this unused space cannot be reclaimed by the VM-host when ZFS runs inside a thin-provisioned VM such as VirtualBox.

Maybe this is a stupid question, but can't you just issue a zpool trim after your zpool initialize?
That explicitly tells Oracle VirtualBox that those blocks aren't wanted, rather than VirtualBox having to guess based on their contents being all zeroes.
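
For reference, a minimal sequence using the existing commands (pool name is a placeholder):

# ask the pool to discard unallocated space, then check TRIM status
zpool trim <poolname>
zpool status -t <poolname>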

@Apachez-
Author

zpool status -t (from within the VM-guest) claims that sda3 and sdb3 in my case don't support TRIM.

root@box:~# zpool status -t
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:32 with 0 errors on Mon Nov 18 11:52:47 2024
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda3    ONLINE       0     0     0  (trim unsupported)
	    sdb3    ONLINE       0     0     0  (trim unsupported)

errors: No known data errors

The physical drive (Samsung 850 Pro 1TB) does support TRIM, and in VirtualBox the storage is configured as VirtIO with "Solid-state Drive" enabled for the virtual drives.
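
One possible explanation for the "(trim unsupported)" status is that VirtualBox only advertises discard/TRIM to the guest when it is explicitly enabled on the disk attachment. A sketch of how that could be enabled with VBoxManage while the VM is powered off (the VM and controller names are assumptions for illustration, and whether discard works with the VirtIO controller may depend on the VirtualBox version):

# assumed names; adjust to the actual VM and storage controller
VBoxManage storageattach "<vmname>" --storagectl "VirtIO" --port 0 --device 0 --nonrotational on --discard on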

The ZFS module within the VM-guest being used is:

filename:       /lib/modules/6.8.12-4-pve/zfs/zfs.ko
version:        2.2.6-pve1
srcversion:     E73D89DD66290F65E0A536D
vermagic:       6.8.12-4-pve SMP preempt mod_unload modversions 

The host is an Ubuntu 24.04 LTS machine (Intel i5-4250U, 16 GB RAM, Samsung 850 Pro 1TB) running VirtualBox 7.1.4 r165100 (Qt6.4.2) with extension pack 7.1.4r165100 installed. The Ubuntu installation is fully updated as of today.

The VM-guest is Proxmox PVE 8.2.8, also fully updated as of today.

The main purpose of this setup is to test out software RAID1 using ZFS on boot drives in Proxmox before doing the same on bare metal later on. That's when I went down the rabbit hole of "hmm, how do I zeroize the zpool so the VM-host can compact the VDI-files?".

@amotin
Member

amotin commented Nov 18, 2024

But I can't find a man page for the module parameters. Does it exist and I just can't find it?

.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
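
That line is mdoc source from the zfs(4) man page, which is where the module parameters (including zfs_initialize_value) are documented; with the man pages installed it can be read with:

man 4 zfs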
