Slow write speeds #38
My first tier is on an SSD which normally gives me write speeds of >300 MB/s,
but if I test the write speeds on the autotier FS I only get ~100 MB/s, even though it is writing to the SSD.
(Tested via dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync.)
Do I have a problem in my setup (or my testing), or is such an overhead (a 66% performance loss) expected?
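For comparison, running the same test directly against the SSD tier's backing directory would isolate the FUSE overhead. A minimal sketch, assuming hypothetical paths /mnt/ssd (the tier's backing directory) and /mnt/autotier (the mountpoint):

```
# baseline: write straight to the SSD tier's backing directory (hypothetical path)
dd if=/dev/zero of=/mnt/ssd/test.img bs=1G count=1 oflag=dsync

# same write through the autotier FUSE mountpoint (hypothetical path)
dd if=/dev/zero of=/mnt/autotier/test.img bs=1G count=1 oflag=dsync
```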
Hi @D-o-d-o-x, could you share which Linux distribution you are on, and try the test with a few other block sizes / counts?
Linux distribution: Ubuntu 20.04.2 LTS
I also tested some other numbers / sizes of blocks:
- 1x1GB
- 10x100MB
- 1x10GB
- using actual data (a 1GB file; expected to be a bit slower, since I'm copying from a slower SSD)
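A sketch of what those variants would look like as plain dd runs, assuming the hypothetical mountpoint /mnt/autotier and the same dsync flag as the original test:

```
dd if=/dev/zero of=/mnt/autotier/test.img bs=1G count=1 oflag=dsync    # 1x1GB
dd if=/dev/zero of=/mnt/autotier/test.img bs=100M count=10 oflag=dsync # 10x100MB
dd if=/dev/zero of=/mnt/autotier/test.img bs=1G count=10 oflag=dsync   # 1x10GB
```

Note that oflag=dsync syncs after every output block, so the 10x100MB run pays ten sync round-trips; that alone can shift the numbers between variants.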
I saw you added 'Fuse library version' to the issue template, so here is my output: exfat-fuse/focal,now 1.3.0-1 amd64 [installed,automatic]
Thank you for this information, we will investigate the issue further on our end. Just as a sanity check, could you attach your autotier.conf configuration? I just want to make sure that the SSD is actually the highest tier in the list. If the SSD is indeed the top tier, you can also check that the test file you're writing to is in the SSD tier with: autotier which-tier /autotier/mountpoint/path/test.img
Running autotier which-tier confirms the file is in the SSD tier. I had problems getting mounting via fstab to work. (I could also give you more details on this, but I thought that would be an unrelated issue.) My autotier.conf is attached:
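For reference, an autotier.conf with the SSD as the top tier follows roughly this shape (a sketch; tier names, paths, and quotas are hypothetical, and the key names follow the project's documented format, so double-check them against your version):

```
[Global]                   # global autotier settings
Log Level = 1
Tier Period = 1000         # seconds between tiering passes
Copy Buffer Size = 1 MiB

[SSD]                      # first tier listed = highest (fastest) tier
Path = /mnt/ssd            # hypothetical backing path
Quota = 90 %

[HDD]
Path = /mnt/hdd            # hypothetical backing path
Quota = 90 %
```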
If you have the time, could you open a separate issue for the fstab problems? It might make it easier to find in case other people run into the same issue. You're definitely right, though: the write speeds you are seeing are lower than what I would expect. I will have to benchmark it under Ubuntu to see if I get different results, as most of the testing I do is on my Manjaro machine.
@joshuaboud Was there some resolution for this? I've just deployed autotier, and similar dd tests show:
I'll do a quick investigation to see if I can recreate these results. In the meantime, here are some results from benchmarking I did back in October of 2021, showing an expected loss in performance (especially at small block sizes) through autotier compared to writing directly to the SSD. The SSD I used in those benchmarks was a Micron 5200 MTFDDAK3T8TDC and the HDD was a WDC WUH721816ALE6L4, both with XFS directly on the device (no partition), and both over SATA. Granted, these benchmarks were an experiment to see how autotier would perform as a sort of writeback cache, so no read benchmarks were done in this case. @gdyr, what block size did you use for those dd tests?
Cheers for looking into it! I was running the same speed test as the OP (dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync).
Recreating this with the Micron 5200 MTFDDAK3T8TDC SSD, and autotier compiled from source on Manjaro with kernel 5.10.131-1-MANJARO:
Another consideration is CPU bottlenecking due to FUSE. The workstation I've been testing on has an AMD EPYC 7281 processor, with 16 cores/32 threads at 2.1 GHz. @gdyr, what do you have in your home server for hardware? Also, are the drives connected through SATA cables, M.2 SATA, NVMe, or through an HBA card?
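One quick way to check for that is to watch the filesystem process while the dd test runs. A sketch, assuming the FUSE process is named autotierfs and pidstat (from the sysstat package) is available:

```
# in one shell: run the dd test against the autotier mountpoint
# in another: sample CPU usage of the FUSE process once per second
pidstat -u -p "$(pgrep autotierfs)" 1
```

If one core sits pinned near 100% for the duration of the write, the limit is likely CPU in the FUSE request path rather than the drives.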
A Xeon E5-1620 v4 -
During my own testing, I saw a much higher performance penalty from autotier when using NFS. When forcing 'sync' (not using 'async' mode on the NFS server), write performance is 134 MB/s, well below the raw performance. Also, with 'async' NFS I am able to come close to 900 MB/s writes, but then the data actually written to disk on the filesystem has a lot of data loss: 'fio' would do a 16GB file test, but on disk only 5GB of data would be there. This must be some bug in autotier - only when I force NFS to behave sync do I see the full 16GB fio file actually land on disk, but then performance is abysmal.
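A sketch of that kind of test, with illustrative paths (/mnt/autotier exported by the server, mounted at /mnt/nfs on the client) and standard exports syntax:

```
# /etc/exports on the server - the sync/async choice under test
/mnt/autotier  *(rw,sync,no_subtree_check)     # safe, but slow through autotier
# /mnt/autotier  *(rw,async,no_subtree_check)  # fast, but data loss was observed

# on the client: 16GB sequential write
fio --name=seqwrite --directory=/mnt/nfs --rw=write --bs=1M --size=16G --end_fsync=1

# on the server: compare apparent file size vs. blocks actually allocated
du -h --apparent-size /mnt/autotier/seqwrite.0.0
du -h /mnt/autotier/seqwrite.0.0
```

The du comparison is one way to spot the missing-data symptom described above: a short-written or sparse file shows a large apparent size but far fewer blocks actually on disk.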
It's unfortunate that this project seems to be abandoned by the authors. It's really easy to set up and would be awesome for home use (Unraid replacement); I think the only feasible alternative is mergerfs, which is well maintained.