📋 [TASK] Implement Multi-GPU Training Support #2258
Comments
Hey guys, this is presumably one of the most important missing features in Anomalib.
Hi @haimat, I agree with you, but to enable multi-GPU we had to go through a number of refactors here and there. You could check the PRs done so far. What is left to enable multi-GPU is the metric refactor and the visualization refactor, which we are currently working on.
That sounds great, thanks for the update.
@samet-akcay Hello, do you have any ideas when this might be released? |
@haimat, we figured this requires quite a few changes within the library; we aim to release it by the end of this quarter.
@samet-akcay Thanks for the update. |
> we aim to release it by the end of this quarter
Implement Multi-GPU Support in Anomalib
Depends on:
- v2
- AnomalibModule #2365
- AnomalibModule Attribute #2366
- AnomalibModule Attribute #2367

Background
Anomalib currently uses PyTorch Lightning under the hood, which provides built-in support for multi-GPU training. However, Anomalib itself does not yet expose this functionality to users. Implementing multi-GPU support would significantly enhance the library's capabilities, allowing for faster training on larger datasets and more complex models.
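For context, plain PyTorch Lightning already exposes multi-GPU training as a `Trainer`-level setting. A minimal sketch of the underlying mechanism (this is the Lightning API that Anomalib would build on, not Anomalib's own interface):

```python
from lightning.pytorch import Trainer

# In plain Lightning, multi-GPU training is configured on the Trainer:
# here, two GPUs with the Distributed Data Parallel (DDP) strategy.
trainer = Trainer(accelerator="gpu", devices=2, strategy="ddp")
```

Exposing multi-GPU support in Anomalib is therefore largely a matter of plumbing these settings through to the wrapped `Trainer`.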
Proposed Feature
Enable multi-GPU support in Anomalib, allowing users to easily utilize multiple GPUs for training without changing their existing code structure significantly.
Example Usage
Users should be able to enable multi-GPU training simply by specifying the number of devices in the `Engine` configuration; for example, setting two devices should automatically distribute training across two GPUs.
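A sketch of what such usage might look like (hypothetical: it assumes `Engine` forwards `accelerator` and `devices` to the underlying Lightning `Trainer`, and uses `Patchcore` and `MVTec` purely as placeholders):

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Patchcore

# Hypothetical usage: assumes Engine passes accelerator/devices
# through to the Lightning Trainer it wraps.
datamodule = MVTec()
model = Patchcore()

engine = Engine(accelerator="gpu", devices=2)
engine.fit(model=model, datamodule=datamodule)
```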
Implementation Goals
Implementation Steps
- Update the `Engine` class to properly handle multi-GPU configurations

Potential Challenges
Discussion Points
Next Steps
Additional Considerations
We welcome input from the community on this feature. Please share your thoughts, concerns, or suggestions regarding the implementation of multi-GPU support in Anomalib.