Hi @rajbos, just wanted to give you an update on the Docker caching question. We've had an internal discussion about what makes the most sense to use and wanted to fill you in.

At the moment, we don't believe that moving to a Docker image would save much compute or time. Most of our caching needs are network-traffic based: we mainly pull Python, our xgboost model, and a single Go library. Both the cached pip packages and our xgboost repository already live within GitHub's internal network, and they account for the majority of the traffic we pull. Whether we cache as we do now or build an image, we expect roughly the same number of MB to be pulled on each run, all from somewhere within GitHub's internal network. For that reason, we don't expect much of a speed or network saving. As for the rest, compiling the CPU reporter is very cheap (less than 100 ms of compute time), as is pulling the ASCII graph library (in both CPU time and traffic).

Additionally, in terms of overall energy savings, we expect some overhead from using Docker, since that would be a new image that has to be built and stored somewhere, whereas the Python pip packages are already cached in GitHub's internal caching network. Granted, that overhead would be quite small compared to the overall traffic across all repositories using this action, but we still think the network traffic would come out about the same.

For these reasons we're not keen on spending the dev time right now to rewrite the action, though if you have any more insights into this, we'd be happy to hear them.
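For reference, this is roughly the kind of cache-on-each-run setup described above, sketched as a minimal workflow. The action names (`actions/checkout`, `actions/setup-python`) are real, but the job layout, Python version, and `requirements.txt` are assumptions for illustration, not the repository's actual workflow:

```yaml
# Hypothetical sketch of the current approach: pip packages are restored
# from GitHub's cache service on each run instead of being baked into a
# prebuilt Docker image. Job name and file paths are assumptions.
jobs:
  measure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'   # restores ~/.cache/pip from GitHub's internal cache
      - run: pip install -r requirements.txt
```

Since both the pip cache and the xgboost repository are served from inside GitHub's network, a Docker image would move the same bytes over the same network, just packaged differently.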
From @rajbos