Description
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
While scanning multiple domains, the reNgine Celery queue gets stuck with a worker pool full of `vulnerability_scan` tasks. Because of that, there is no room in the queue for the `nuclei_scan` task or the `nuclei_individual_severity_module`.
This happens because, in `tasks.py`, both the `nuclei_scan` and `vulnerability_scan` tasks contain the following code:

```python
while not job.ready():  # wait for all jobs to complete
    time.sleep(5)
```

The only reason this task keeps occupying a worker slot is so it can mark the scan as successfully completed afterwards.
This is an inefficient way to manage the Celery queue and can leave scan engines stuck.
Expected Behavior
Neither `nuclei_scan` nor `vulnerability_scan` should block while waiting for their subtasks to complete. After they have sent the subtasks to the message broker, they should return and free their slot in the Celery queue.
The scan can then be marked as completed once all subtasks are done, via a separate method or check that doesn't occupy the Celery queue.
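A minimal sketch of the non-blocking pattern described above, using Python's `concurrent.futures` as a stand-in for Celery (in Celery itself this would map to dispatching a `group` with a `chord` completion callback instead of polling `job.ready()`); all names here are illustrative, not reNgine's actual API:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

completed = []
all_done = threading.Event()
EXPECTED = 3  # number of subtasks dispatched

def run_subtask(target):
    # stand-in for an individual vulnerability scan against one target
    return f"scanned {target}"

def mark_scan_complete(future):
    # lightweight completion callback: runs when a subtask finishes,
    # so no worker has to sit in a time.sleep() polling loop
    completed.append(future.result())
    if len(completed) == EXPECTED:
        all_done.set()

with ThreadPoolExecutor(max_workers=EXPECTED) as pool:
    for target in ("a.example", "b.example", "c.example"):
        pool.submit(run_subtask, target).add_done_callback(mark_scan_complete)
    # the dispatching code falls through here immediately -- no busy-wait

all_done.wait(timeout=5)
```

The key point is that the dispatcher returns right after submitting work; completion bookkeeping happens in the callback, not in a blocked worker.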
Steps To Reproduce
- Start up reNgine with a MAX_CONCURRENCY of 20.
- Scan multiple domains at the same time (preferably more than 20) with a scan engine that at least uses vulnerability scanning with nuclei.
- Wait for a while and check the logs; you will notice that the scan gets stuck.
- In the Celery container, inspect the queue to see what is active and you will see multiple `vulnerability_scan` tasks. These tasks don't do any scanning, so they effectively loop forever.
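One way to do the inspection in the last step is Celery's built-in `inspect active` command; the compose service name (`celery`) and app name (`reNgine`) below are assumptions and may differ in your setup:

```shell
# List the tasks currently held by workers inside the Celery container
docker compose exec celery celery -A reNgine inspect active
```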
Environment
- reNgine: 2.2.0
- OS: Ubuntu 24.10
- Python: 3.12.7
- Docker Engine: 28.1.1
- Docker Compose: 2.38.1
- Browser: Google Chrome 137.0.7151.122
Anything else?
No response