Configuring Scan Concurrency #8577
Replies: 1 comment
Hi @addefisher! 🖖 There's no hard maximum concurrency limit from Prowler's perspective; the behavior you're seeing is driven by system resources and the underlying architecture.

Prowler App uses multiprocessing with Celery workers to handle scans. Each scan runs in an isolated process to prevent data leaks and ensure stability. The number of concurrent scans is determined by your system's CPU cores, since Celery forks one worker process per available core by default.

You can increase concurrency by:

- adding more worker containers (if using Docker),
- increasing CPU cores and RAM on your deployment, or
- scaling the worker service independently of the API service.

Keep in mind that each worker process imports everything it needs to run a scan, so a higher CPU core count (and thus higher concurrency) means a significantly higher minimum RAM footprint for the worker service.

Regarding resource requirements, we don't have specific recommendations yet, but the minimum baseline is around 2.5GB RAM each for the API and worker services. The main constraint you'll hit is memory exhaustion (OOM errors) rather than a hard concurrency limit, and resource requirements scale with the number of resources in your AWS accounts.

Thanks for the question, I hope this helps!
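Since the cap of 8 concurrent scans you observed matches an 8-core host, here's a minimal sketch of where that number comes from: Celery's default prefork pool sizes itself from the CPU core count (your deployment may override this with Celery's `--concurrency` worker option, in which case that value applies instead).

```python
import multiprocessing

# Celery's default "prefork" pool starts one worker process per CPU
# core, so on an 8-core host a single worker service runs at most 8
# scans at once and queues the rest. This sketch just queries the
# same core count the pool would use by default.
default_concurrency = multiprocessing.cpu_count()
print(f"Default Celery prefork concurrency on this host: {default_concurrency}")
```

Scaling the worker service (more containers or more cores) raises this ceiling, at the cost of the per-process RAM overhead described above.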
I am using the Prowler App to scan AWS accounts. When I attempt to scan a large number of accounts in parallel, Prowler seems to top out at 8 concurrent scans and queues any overflow.
While I do not necessarily have any issue with this behavior, I would like to better understand the following: