I just tried to use mountpoint-s3 to back up a compiled folder; however, a pidfile is created and unlinked there for synchronization purposes among processes. That happens so fast that mount-s3 fails with the error message below:
2024-08-29T06:41:58.458810Z WARN unlink{req=1476 parent=111 name="2Dg1l_yCiv6.ji.pidfile"}: mountpoint_s3::inode: unlink on local file not allowed until write is complete parent=111 name="2Dg1l_yCiv6.ji.pidfile"
2024-08-29T06:41:58.458825Z WARN unlink{req=1476 parent=111 name="2Dg1l_yCiv6.ji.pidfile"}: mountpoint_s3::fuse: unlink failed: inode error: inode 112 (full key "test/v1.11/FillArrays/2Dg1l_yCiv6.ji.pidfile") cannot be unlinked while being written
This surfaces as an operation not permitted (EPERM) error in the client code, which breaks it.
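For illustration, here is a minimal sketch (Python, against a hypothetical mount path; the real pidfile is written by Julia's package manager) of the pattern that trips this check: the file is unlinked while its write handle is still open, i.e. before Mountpoint has finished writing the object.

```python
import os

# Assumption: an S3 bucket is mounted here via mount-s3 with delete allowed.
MOUNT_DIR = "/root/.julia/compiled/test"
path = os.path.join(MOUNT_DIR, "example.pidfile")

f = open(path, "w")           # create the pidfile on the mount
f.write(str(os.getpid()))

try:
    # Unlinking while the handle is still open is roughly what the pidfile
    # dance does; Mountpoint rejects it because the object has not been
    # fully written yet.
    os.unlink(path)
except PermissionError as err:   # surfaces as EPERM, as in the log above
    print("unlink failed:", err)
finally:
    f.close()                    # the write only completes on close
```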
It would be really awesome if mountpoint-s3 could manage this delay itself, e.g. by blocking, deleting asynchronously, or via offline metadata, instead of throwing an error. Maybe behind a flag?
Any hint at a workaround would also be highly appreciated. I tried enabling --cache, but the same error occurs.
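For code one controls, the ordering that does fit Mountpoint's semantics is to finish and close the file before unlinking it (not something that can be changed inside Julia's Pkg, but it shows what the file system expects). A sketch under the same assumptions as above:

```python
import os

MOUNT_DIR = "/root/.julia/compiled/test"   # assumption: S3 bucket mounted here
path = os.path.join(MOUNT_DIR, "example.pidfile")

# Write and close first, so the object write can complete ...
with open(path, "w") as f:
    f.write(str(os.getpid()))

# ... then the unlink is allowed, because the write has finished.
os.unlink(path)
```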
EDIT: Here is a sketch to reproduce:

1. Use the julia Docker image.
2. Mount an S3 bucket at /root/.julia/compiled (with logging, allow-write and allow-delete enabled).
3. Run julia -e 'import Pkg; Pkg.add("MeasureTheory")' and see it fail.
Hi, thanks for filing this feature request and for the steps to reproduce the problem. Currently this is working as intended in Mountpoint's semantics. We are considering what can be done in this area, but we do not have anything to share yet.