JuiceFS performs much worse than CephFS in buffer + end_fsync write mode #3561
liujiangang01 started this conversation in Show and tell
Test Conclusion:


Using the same backend storage pool (a Ceph pool), a fio test in buffer + end_fsync write mode shows that CephFS performs noticeably better than JuiceFS.
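The post does not include the actual fio job; a minimal sketch of the described mode (buffered writes, with a single fsync when the write phase completes) might look like the following. The mount point, file size, and block size are placeholders, not values from the original test.

```sh
# Sketch of a buffer + end_fsync random-write test; /mnt/jfs, --size and
# --bs are assumptions, not the values used in the original test.
fio --name=bufwrite-endfsync \
    --directory=/mnt/jfs \
    --rw=randwrite --bs=1M --size=10G \
    --direct=0 \
    --end_fsync=1 \
    --group_reporting
```

Here `--direct=0` keeps the writes in the page cache (buffered mode) and `--end_fsync=1` issues one fsync(2) after all writes complete, so the reported bandwidth includes the cost of flushing everything to stable storage.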
Test Environment:
JuiceFS: JuiceFS + TiKV + Ceph RADOS (TiKV uses a SATA SSD as its data disk)
CephFS: CephFS + Ceph RADOS (the CephFS metadata pool is on a SATA SSD)
Note: JuiceFS and CephFS use the same Ceph pool
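For context, setting up the two file systems on the same Ceph cluster might look roughly like the sketch below. All names (pools, PD endpoints, mount points) are placeholders; the post does not give the actual commands.

```sh
# JuiceFS: TiKV as the metadata engine, a Ceph RADOS pool as object storage.
# Per the JuiceFS docs, --access-key is the Ceph cluster name and
# --secret-key is the Ceph client user name; all names here are placeholders.
juicefs format --storage ceph --bucket ceph://jfs-pool \
    --access-key ceph --secret-key client.admin \
    tikv://pd1:2379,pd2:2379,pd3:2379/jfs myjfs
juicefs mount tikv://pd1:2379,pd2:2379,pd3:2379/jfs /mnt/jfs

# CephFS: an SSD-backed metadata pool plus a data pool on the same cluster.
ceph fs new cephfs cephfs-meta cephfs-data
mount -t ceph :/ /mnt/cephfs -o name=admin
```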
Test Result:
Thinking:
In the large-file random-write scenario we always use buffered writes followed by an end_fsync, but the results show that JuiceFS performs much worse than CephFS. Has the JuiceFS client made trade-offs that sacrifice performance in this scenario?
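One way to narrow this down (not part of the original post) is to separate the buffered write phase from the final flush: run the same job with and without end_fsync and compare. If the gap only appears with end_fsync=1, the cost lies in how the client flushes buffered data on fsync(2).

```sh
# Same buffered random-write job, with and without the trailing fsync.
# If JuiceFS only falls behind in the second run, the final fsync (flushing
# cached data out to the backing store) is the bottleneck.
fio --name=no-fsync  --directory=/mnt/jfs --rw=randwrite \
    --bs=1M --size=10G --direct=0 --end_fsync=0
fio --name=end-fsync --directory=/mnt/jfs --rw=randwrite \
    --bs=1M --size=10G --direct=0 --end_fsync=1
```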