Commit

Add better support in case of a race condition
iliajie committed Aug 8, 2024
1 parent 7c89110 commit 910ae5a
Showing 1 changed file with 52 additions and 31 deletions.
83 changes: 52 additions & 31 deletions stats-lib-funcs.pl
@@ -5,7 +5,7 @@
#
use strict;
use feature 'state';
use Fcntl ':flock';
use Fcntl qw(:flock O_RDWR O_CREAT);

our $json;
eval "use JSON::XS";
@@ -78,38 +78,59 @@ sub stats_read_file_contents {
# Write file contents
sub stats_write_file_contents {
my ($filename, $contents) = @_;
# Open file for writing
my $fh;
if (!open($fh, '>', $filename)) {
error_stderr("Failed to open file '$filename' for writing: $!");
return 0;
}
# Acquire lock the file for writing
if (!flock($fh, LOCK_EX)) {
error_stderr("Failed to acquire exclusive lock on file '$filename': $!");
close($fh);
my $cleanup = sub {
my ($fh, $unlock, $success) = @_;
# Unlock the file. Not strictly necessary as Perl
# automatically releases locks on file closure
flock($fh, LOCK_UN) if ($unlock);
# Close file handle
close($fh)
or do {
error_stderr("Failed to close file '$filename': $!");
undef($fh);
return 0;
};
# Release memory. Generally not necessary as Perl's garbage collection
# should always handle this. However, it doesn't hurt and can be
# useful in certain scenarios, like long-running processes
undef($fh);

@jcameron (Collaborator) commented on Aug 8, 2024:

How does setting $fh to undef do anything if it's a local variable?

@iliajie (Author, Collaborator) commented on Aug 8, 2024:

We have a long-running process. This is explicit resource management, which is a good thing. Our subroutine keeps running for a long time after the file operations. Using undef($fh) explicitly undefs the file handle and triggers Perl's garbage collection, provided this was the last reference to the handle, which it is.

For such operations, we must write very clear code so anyone in the future can understand it. We stress the importance by being explicit, which in normal situations, where a program only lives for a few seconds, can usually be avoided.

@jcameron (Collaborator) commented on Aug 9, 2024:

My point is that in this case $fh is a local variable to the cleanup sub, and so setting it to undef doesn't do anything if it still exists in the outer code block where it is created here: https://github.com/webmin/authentic-theme/blob/master/stats-lib-funcs.pl#L102

Unless undef($fh) has some special meaning in Perl that I don't understand, and is different from $fh = undef or just letting it go out of scope? If that's actually so, can you point to some docs on this?
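
For illustration (this snippet is not part of the commit), the behavior being questioned here can be seen in a minimal standalone sketch: the sub receives its own lexical copy of the handle, so undef-ing that copy does not clear the caller's $fh.

    use strict;
    use warnings;

    sub release {
        # $fh here is a new lexical holding a copy of the reference
        my ($fh) = @_;
        # Clearing the copy does not touch the caller's variable
        undef($fh);
    }

    # $0 (the script itself) is used only so the demo has a file to open
    open(my $fh, '<', $0) or die "open: $!";
    release($fh);
    # Prints "still defined": the outer $fh was not cleared by the sub
    print defined($fh) ? "still defined\n" : "undefined\n";
    close($fh);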

@iliajie (Author, Collaborator) commented on Aug 9, 2024:

> My point is that in this case $fh is a local variable to the cleanup sub, and so setting it to undef doesn't do anything if it still exists in the outer code block where it is created here

Jamie, you're excellent! Thank you for catching this! Fixed here 4eb6f5f.

@jcameron (Collaborator) commented on Aug 10, 2024:

Excellent! Although now we know why the standard lock_file was taking up so much RAM, I expect this whole function can be removed.

@iliajie (Author, Collaborator) commented on Aug 10, 2024:

I think I'd like to use more straightforward functions for reading and writing the file in the stats.pl server. Also, I put time into those, so I don't really wish to remove them.

Or is there a bug?

@jcameron (Collaborator) commented on Aug 10, 2024:

I don't think we should keep code that's not needed just because it took time to write! That's what we call a "sunk cost fallacy".

Basically just use lock_file / unlock_file and read_file_contents / write_file_contents to save these stats history files. It's a lot cleaner to use standard webmin functions rather than re-writing them.
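
For reference (not part of the commit), the suggested approach would look roughly like the sketch below, assuming the standard Webmin functions lock_file, unlock_file and write_file_contents from web-lib-funcs.pl are available, as they normally are to theme code.

    sub stats_write_file_contents {
        my ($filename, $contents) = @_;
        # Creates "$filename.lock" as the lock marker
        lock_file($filename);
        # Replaces the file contents in one call
        write_file_contents($filename, $contents);
        # Removes the lock file
        unlock_file($filename);
        return 1;
    }

The tradeoff debated below is that this goes through Webmin's generic code paths and creates a temporary .lock file, but it reuses locking that has been tested for years.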

@iliajie (Author, Collaborator) commented on Aug 10, 2024:

Alrighty! Fixed here c351786.

> That's what we call a "sunk cost fallacy".

This would apply if the idea no longer made sense. However, I believe it still does: why not use Perl's pure implementation of file locking, without creating temporary .lock files?

> Basically just use lock_file / unlock_file and read_file_contents / write_file_contents to save these stats history files. It's a lot cleaner to use standard webmin functions rather than re-writing them.

All those subs involve many additional subcalls to handle things we don't need in this mode. We don't need file caching, for example.

@jcameron (Collaborator) commented on Aug 10, 2024:

Sure, an alternate implementation may be slightly more efficient, but the code for locking with .lock files already exists and has been tested for many, many years in Webmin. And in the future maybe it can change to not need .lock files ... but for now, it's an obvious choice to use existing, well-tested code rather than having two different implementations of locking in the code base.

@iliajie (Author, Collaborator) commented on Aug 10, 2024:

I agree! Thank you!

return 0;
}
# Return result
return $success ? 1 : 0;
};
# Open file for reading and writing without truncating it, or
# create it if it doesn't exist
sysopen(my $fh, $filename, O_RDWR | O_CREAT)
or do {
error_stderr("Failed to open file '$filename' for reading and writing: $!");
return 0;
};
# Acquire exclusive lock on the file for writing
flock($fh, LOCK_EX)
or do {
error_stderr("Failed to acquire exclusive lock on file '$filename': $!");
return $cleanup->($fh, 0);
};
# Truncate the file after acquiring the lock

@jcameron (Collaborator) commented on Aug 8, 2024:

This isn't necessary because truncation happens automatically when a file is opened for write with >

@jcameron (Collaborator) commented on Aug 8, 2024:

Oh never mind I see you're using sysopen

@iliajie (Author, Collaborator) commented on Aug 8, 2024:

Please check the whole code. I changed it. I didn't open it as in my initial commit!

@iliajie (Author, Collaborator) commented on Aug 8, 2024:

> Oh never mind I see you're using sysopen

Ah, alright! Thanks!

truncate($fh, 0)
or do {
error_stderr("Failed to truncate file '$filename': $!");
return $cleanup->($fh, 1);
};
# Reset the file pointer to the beginning of the file
# preventing potential race conditions
seek($fh, 0, 0)

@jcameron (Collaborator) commented on Aug 8, 2024:

This is also doubly unnecessary because the open with > already sets the file pointer to the start of the file.

@iliajie (Author, Collaborator) commented on Aug 8, 2024:

> This is also doubly unnecessary because the open with > already sets the file pointer to the start of the file.

I used:

sysopen(my $fh, $filename, O_RDWR | O_CREAT)

@jcameron (Collaborator) commented on Aug 8, 2024:

It's still unnecessary since you're calling truncate though, right?

@iliajie (Author, Collaborator) commented on Aug 8, 2024:

> It's still unnecessary since you're calling truncate though, right?

Not exactly.

The POSIX standard specifies that the file pointer position is not changed by a truncate operation.

Technically, truncate($fh, 0) is not intended to change the file pointer position. Although it will work, I believe it's good practice to add seek($fh, 0, 0) in this scenario. This explicitly sets the file pointer to the beginning before writing, adding both a layer of safety and clarity for future readers. Besides, the seek operation is very fast and does not impact performance.
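
A tiny standalone demo of that behavior (not part of the commit; the path is a throwaway file used only here). syswrite is used instead of print so that buffering does not obscure the file offset.

    use strict;
    use warnings;
    use Fcntl qw(O_RDWR O_CREAT O_TRUNC);

    my $file = '/tmp/truncate-demo.txt';                   # throwaway demo path
    sysopen(my $fh, $file, O_RDWR | O_CREAT | O_TRUNC) or die "open: $!";
    syswrite($fh, "some old contents");                    # file offset is now 17
    truncate($fh, 0) or die "truncate: $!";                # size is 0, offset is still 17
    syswrite($fh, "new");                                  # written at offset 17, not 0
    close($fh) or die "close: $!";
    my $size = -s $file;
    print "$size\n";                                       # prints 20 (a hole of NULs, then "new"), not 3

In the committed function nothing is written before the truncate, so the offset is still 0 there; the explicit seek simply makes that guarantee visible, which is the point being discussed.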

@jcameron (Collaborator) commented on Aug 9, 2024:

Oh, interesting, I didn't realize that! That's surprising behavior, since I wouldn't expect anyone to want to write to a file that they've just truncated at any point other than the start ... but I'll take your word for it.

or do {
error_stderr("Failed to seek to the beginning of file '$filename': $!");
return $cleanup->($fh, 1);
};
# Write to file
if (!print($fh $contents)) {
error_stderr("Failed to write to file '$filename': $!");
close($fh);
undef($fh);
return 0;
}
# Unlock the file
flock($fh, LOCK_UN);
# Close file
if (!close($fh)) {
error_stderr("Failed to close file '$filename': $!");
undef($fh);
return 0;
}
# Release memory
undef($fh);
# Return success
return 1;
print($fh $contents)
or do {
error_stderr("Failed to write to file '$filename': $!");
return $cleanup->($fh, 1);
};
# Clean up and return success
return $cleanup->($fh, 1, 1);
}

# Check if feature is enabled

0 comments on commit 910ae5a
