Concurrency issue leading to corrupted uploads #8
https://github.com/flowjs/flow-php-server/blob/master/src/Flow/File.php#L90 The save method won't trigger twice because of https://github.com/flowjs/flow-php-server/blob/master/src/Flow/File.php#L104. Also note that this library is not made for Windows; PHP's flock implementation differs there.
To be more specific, fopen can't succeed because the other process will hold an open lock on it. Of course, I might be wrong, but you can check it.
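The behaviour described above can be illustrated outside PHP. Below is a Python sketch using `fcntl.flock`, the Unix advisory-lock call behind PHP's `flock` (which is why the library's locking does not carry over to Windows). The file path is arbitrary. Note that the `open()` itself still succeeds; it is the non-blocking lock acquisition that fails while another handle holds the lock.

```python
import fcntl

path = "/tmp/flow_lock_demo"  # arbitrary demo path

first = open(path, "w")
fcntl.flock(first, fcntl.LOCK_EX)               # first handle takes the lock

second = open(path, "w")                         # open() itself still succeeds
try:
    # LOCK_NB = don't block, fail immediately, like flock($fh, LOCK_EX | LOCK_NB)
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    got_lock = True
except BlockingIOError:
    got_lock = False                             # lock held: second writer is refused

fcntl.flock(first, fcntl.LOCK_UN)                # first handle releases
fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)  # now succeeds
```

So as long as every writer actually takes the lock before touching the file, a second save attempt is refused rather than interleaved.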
I have tested it as I described, and I saw errors in my logs. I have not spent much time on it yet. For now changing to … I will get more info when I do more tests and debugging.
I tried
I have this issue as well: simultaneous PDF uploads.
Could you share your Ruby on Rails file-handling code? Does it do locking?
I had a similar issue with some uploads not arriving even though they were reported as uploaded. I was getting errors like "Unable to clear session lock record..." and "Failed to read session data: memcached..." thrown at the same time. Setting simultaneousUploads from 3 to 1 solved the issue.
I know the title is a bit vague, so let me explain.
During my tests I wanted to simulate big file uploads with relatively small files, so on the front end I set:

simultaneousUploads: 3 (which I believe is the default setting)
chunkSize: 2048 (1024*1024 by default)

This gave me a kind of simulation of multiple chunk uploads. Using the library on the back end you have to follow four steps: validate the chunk, save the chunk, validate the file, and save the file.
If you have multiple concurrent uploads this may lead to corrupted files. Let's take the case of the last two chunks being uploaded, where the x axis is time: when validateFile is called for chunk1, saveChunk for chunk2 has already begun, so chunk1's validateFile will return true and proceed to save the file with chunk2 not fully saved.

I have already seen this error in my logs a few times. The above example will also lead to a double save call, and in my case that leads to a duplicate key error in the database.

To fix the problem the library would have to implement locking not only in the save method but in validateFile as well. It's not enough to call file_exists in validateFile. We have to know the chunk is fully uploaded.