Replies: 9 comments 6 replies
-
Would love to have a bigger cache size for our monorepo as well; 10 GB is simply way too small. A single build already uses approximately 1 GB of cache. The cache size was last increased back in November 2021; perhaps it's time for another increase?
-
Same here: we have to build a bunch of dependencies and are hitting our limit, which means 4-minute builds have become 45-minute builds. It also means that a lot of the work that went into fine-grained use of caches to save compute time doesn't end up helping, because the caches get evicted. 😬
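The eviction behavior described above can be sketched in a few lines: GitHub removes least-recently-used caches once a repository goes over its limit, so a large dependency cache sitting next to several per-branch build caches is often the first thing to go. The keys, sizes, and timestamps below are made up purely for illustration.

```python
from datetime import datetime, timedelta

CACHE_LIMIT_GB = 10  # the per-repository ceiling GitHub Actions enforces

def evicted_keys(caches, limit_gb=CACHE_LIMIT_GB):
    """Return the cache keys that would be evicted, assuming
    least-recently-used eviction once the repository exceeds its limit.
    `caches` is a list of (key, size_gb, last_accessed) tuples."""
    total = sum(size for _, size, _ in caches)
    # Oldest entries first: they are the first eviction candidates.
    by_age = sorted(caches, key=lambda c: c[2])
    evicted = []
    for key, size, _ in by_age:
        if total <= limit_gb:
            break
        evicted.append(key)
        total -= size
    return evicted

# Hypothetical monorepo: an 8 GB dependency cache plus three 1 GB branch builds.
now = datetime(2024, 1, 10)
caches = [
    ("deps-lockfile-v1", 8.0, now - timedelta(days=7)),
    ("build-main", 1.0, now - timedelta(days=1)),
    ("build-feature-a", 1.0, now - timedelta(days=3)),
    ("build-feature-b", 1.0, now - timedelta(days=2)),
]
print(evicted_keys(caches))  # the big, older dependency cache goes first
```

Note how the expensive-to-rebuild dependency cache is exactly what gets evicted, which is why the fine-grained caching work stops paying off.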
-
🕒 Discussion Activity Reminder 🕒
This Discussion has been labeled as dormant by an automated system for having no activity in the last 60 days. Please consider one of the following actions:
1️⃣ Close as Out of Date: If the topic is no longer relevant, close the Discussion as out of date.
2️⃣ Provide More Information: Share additional details or context, or let the community know if you've found a solution on your own.
3️⃣ Mark a Reply as Answer: If your question has been answered by a reply, mark the most helpful reply as the solution.
Note: This dormant notification will only apply to Discussions with the dormant label.
Thank you for helping bring this Discussion to a resolution! 💬
-
Yeah, we ended up moving away from GitHub Actions because of this. It's ridiculous this still hasn't been addressed, but it looks like GitHub Actions is a very low priority for the folks over at Microsoft at the moment.
-
We are hitting this problem as well with our monorepo (MapLibre Native). The Qt downloads for each platform alone take 10 GB, leaving no cache space available for the other workflows.
-
It's not only monorepos: this effectively makes all but the most ultra-optimised image builds uncacheable on more than a couple of branches at once. For typical apps in dependency-happy ecosystems (e.g. JS or Ruby), we get no caching at all. Please, please, please let us pay to increase this limit on a per-repo basis.
-
You can double the cache size from 10 GB to 20 GB for free by switching from GitHub-hosted runners to BuildJet. The free 20 GB has been enough for us, but they have a "contact us if you need more" option. https://buildjet.com/for-github-actions/docs/about/pricing @tragiclifestories @louwers @tommy-vti (though you already migrated away)
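For anyone considering this route, the switch is essentially a change to the runner label in the workflow. This is a hypothetical sketch; the exact label names and the job/step contents here are placeholders, so check BuildJet's docs at the link above for the real values:

```yaml
jobs:
  build:
    # Instead of the GitHub-hosted label:
    # runs-on: ubuntu-latest
    runs-on: buildjet-2vcpu-ubuntu-2204   # BuildJet-hosted runner label (placeholder)
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4            # the cache action itself works unchanged
        with:
          path: ~/.gradle/caches
          key: gradle-${{ hashFiles('**/*.gradle*') }}
```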
-
It would be great to get an update to the Actions cache sizes. Larger projects cannot even start to use the cache, as the total cache size of a large Gradle build can easily exceed 10 GB.
-
Topic area: Question
The Actions cache size limit of 10 GB isn't really fair or suitable for monorepos, which can create large caches.
My org uses a monorepo as opposed to multiple small repos. If we had gone the multiple-small-repo route (say, 10 repos), our cache size limit across all repos would effectively be 100 GB.
Instead, our cache limit is 10 GB because we use a monorepo. We can easily generate a 10 GB cache, and in order not to, we have turned CI off on a number of things to keep the cache size down.
Some questions, but also hoping to have a discussion on the general topic:
1. Why is there a cache size limit on a per-repo basis instead of a per-organization basis?
2. Does GH have plans to increase the limit beyond 10 GB? (I understand it was increased from 5 GB a couple of years ago.)
3. Has GH considered a pay-for-large-cache-size model?
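On question 1, the per-repo accounting can at least be observed today: the REST API exposes cache usage per repository (and, if I recall the endpoints correctly, aggregated per organization), e.g. `GET /repos/{owner}/{repo}/actions/cache/usage`. A small sketch of reading such a response; the repository name and byte counts below are invented for illustration:

```python
import json

# Shaped like a response from GET /repos/{owner}/{repo}/actions/cache/usage
# (the repo name and numbers are made up).
sample = json.loads("""
{
  "full_name": "acme/monorepo",
  "active_caches_size_in_bytes": 11000000000,
  "active_caches_count": 14
}
""")

LIMIT_BYTES = 10 * 1024**3  # the 10 GB per-repository ceiling

used = sample["active_caches_size_in_bytes"]
print(f"{sample['full_name']}: {used / 1024**3:.1f} GiB of 10 GiB in use")
if used >= LIMIT_BYTES:
    print("Over the limit: least-recently-used caches will be evicted.")
```

With the gh CLI the same data should be reachable via `gh api repos/OWNER/REPO/actions/cache/usage` (and `gh cache list` for individual entries), though those exact subcommands are worth double-checking against the current docs.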