This is the confluence of a few different things that ultimately lead to watchpack blocking the process for long periods of time on macOS.
1. `FSWatcher.close` is slow
Closing an `FSWatcher` object can take a long time on macOS; in a real-world scenario on an Intel MacBook Pro I've seen times as high as 50ms. See `fswatch-close-slow-repro.js` to try it yourself. The duration appears to grow exponentially with the number of `FSWatcher` objects currently open, as seen on the plot below. There is an open issue on the nodejs repo about this that hasn't seen any activity.
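For a sense of what the repro measures, here's a minimal sketch (not the actual `fswatch-close-slow-repro.js`; the directory layout and counts are illustrative):

```js
// Sketch: measure how long FSWatcher.close takes as the number of
// open watchers grows.
const fs = require("fs");
const os = require("os");
const path = require("path");

const root = fs.mkdtempSync(path.join(os.tmpdir(), "fswatch-"));
const watchers = [];

for (let i = 0; i < 2000; i++) {
  const dir = path.join(root, `dir-${i}`);
  fs.mkdirSync(dir);
  watchers.push(fs.watch(dir));

  // Every 250 watchers, time a single close.
  if (watchers.length % 250 === 0) {
    const start = process.hrtime.bigint();
    watchers.pop().close();
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${watchers.length} open watchers: close() took ${ms.toFixed(2)}ms`);
  }
}

for (const w of watchers) w.close();
```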
2. "Reducing" watchers can trigger a large number of FSWatcher.close calls
Watchpack will attempt to intelligently replace multiple `DirectWatcher`s with a single `RecursiveWatcher` once a certain threshold of open `FSWatcher`s is crossed. When this happens, all of the watchers being replaced are closed in a loop. In the worst case you're looking at `limit - 1` total closes for a single `watchEventSource.watch` call. On macOS, where the default threshold is 2000, even at an average of 10ms per close you're potentially stalling the process for a full 20 seconds. Here's a trace where you can see this blocking in action.
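To make the shape of the problem concrete, here's a simplified sketch of what a reduce step does; this is not watchpack's actual `reducePlan` implementation:

```js
const fs = require("fs");

// Simplified sketch of the "reduce" step: many per-directory watchers
// are swapped for one recursive watcher on a common ancestor.
function reduceWatchers(directWatchers, commonAncestor) {
  // One recursive watcher replaces them all...
  const recursive = fs.watch(commonAncestor, { recursive: true });

  // ...but every replaced watcher is closed synchronously in a loop.
  // At ~10ms per close and up to limit - 1 watchers, this loop is
  // where the multi-second stall comes from.
  for (const watcher of directWatchers) {
    watcher.close();
  }
  return recursive;
}
```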
3. Watching a directory does not batch watcher creation on subdirectories
When passing a bare directory to `Watchpack.watch`, all nested subdirectories are also watched. The creation of these watchers happens asynchronously as the tree is traversed, so when there are more than `WATCHPACK_WATCHER_LIMIT` subdirectories in the tree there is potentially a lot of churn. You can see an example of this behavior in `watchpack-behavior-test-output.txt` (created by this script); for 781 directories and a limit of 100 we end up with a single recursive watcher, but we create and close 174 watchers in the process.
Watchpack is able to "batch" creation of watchers for anything passed directly into the call to `watch`, and webpack mostly avoids this watcher churn by passing in all files from the compilation. However, some other libraries that use watchpack don't do this.
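Roughly, the difference in usage, assuming watchpack 2's object-style `watch` API (paths and the file list are placeholders):

```js
const Watchpack = require("watchpack");
const wp = new Watchpack({ aggregateTimeout: 200 });

// Churn-prone: watchpack discovers subdirectories as it walks the
// tree, creating watchers incrementally and re-running its reduce
// logic whenever the open-watcher count crosses the limit.
wp.watch({ files: [], directories: ["/path/to/project"], missing: [] });

// Batched (roughly what webpack does): pass every known file up front
// so the full watcher plan is computed in one pass. The list here is
// a stand-in for e.g. a compilation's file dependencies.
const allKnownFiles = ["/path/to/project/src/index.js"];
wp.watch({ files: allKnownFiles, directories: [], missing: [] });
```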
Workarounds
The easiest workaround is to set `WATCHPACK_WATCHER_LIMIT` high enough that it won't be hit. It would be nice to know for certain that you're not hitting the limit, though, so maybe we could add an environment variable to disable `SUPPORTS_RECURSIVE_WATCHING`?
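For example (as far as I can tell, watchpack reads the variable at module load, so it has to be set before watchpack is required):

```js
// Raise the limit so the "reduce" path never triggers. This can also
// be done from the shell: WATCHPACK_WATCHER_LIMIT=8192 node build.js
// The value 8192 is an arbitrary "high enough" choice, not a
// recommendation.
process.env.WATCHPACK_WATCHER_LIMIT = "8192";
const Watchpack = require("watchpack");
```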
Solutions
The ideal solution would probably be for `FSWatcher.close` to be much faster, but I don't have the skills for that.
Since watchpack manages watcher creation globally, I can't think of any real solution where automatic "reducing" happens and the total number of watchers opened stays below `WATCHPACK_WATCHER_LIMIT`: one `Watchpack` instance could open 1999 watchers, another could open 5, and now we're potentially closing 1999 watchers.
We could try to spread these closes out over time. We'd temporarily exceed the limit while the queue drains, but at least the process would no longer block for seconds at a time.
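Something along these lines (a sketch of the idea, not a patch; the batch size is arbitrary):

```js
// Drain replaced watchers a few at a time on the event loop instead
// of closing them all in one synchronous loop. The open-watcher count
// temporarily exceeds the limit, but no single tick blocks for
// seconds.
function closeGradually(watchers, batchSize = 20) {
  const queue = [...watchers];
  const drain = () => {
    for (const w of queue.splice(0, batchSize)) w.close();
    if (queue.length > 0) setImmediate(drain);
  };
  setImmediate(drain);
}
```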
One improvement that I think would reduce the scope of this issue is to batch watcher creation on nested subdirectories. Currently, when you pass a bare directory with more than `limit` subdirectories to `Watchpack.watch`, you are basically guaranteed to open and close more than `limit` watchers. If we batched all of these together we could avoid the closes, avoid unnecessary watcher churn, and drastically reduce calls to `reducePlan`. While it would still be possible to hit this bug when multiple libraries use watchpack, the impacted range of project sizes would shrink: given one lib watches N directories and another watches M, the churn would only happen if `N < limit && (N + M) > limit`, which is more narrow than `N > limit || (N + M) > limit` (especially given both libraries will probably be watching a lot of the same files).
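A sketch of what that batching could look like (the planner function is hypothetical, not an existing watchpack API):

```js
const fs = require("fs");
const path = require("path");

// Walk the tree first and collect every subdirectory, then hand the
// complete set to the watcher planner at once. Knowing the full count
// (e.g. 781) up front, watchpack could open one recursive watcher
// immediately instead of creating and then closing per-directory
// watchers along the way.
function collectDirs(root, out = [root]) {
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    if (entry.isDirectory()) {
      collectDirs(path.join(root, entry.name), out);
    }
  }
  return out;
}
// A hypothetical planWatchers(collectDirs(dir)) would then see the
// whole tree in one call, with zero intermediate closes.
```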
Open to picking up work on this if there's demand, but figured I would share my investigation.