This repository has been archived by the owner on Nov 16, 2023. It is now read-only.
Azure Container Instances lets you spin up a container workload by defining only memory and CPU requirements. It would be great if this were possible with Batch AI, removing the need to manage a cluster.
You would deploy a job and, within it, define memory, CPU, and GPU (or, more generally, machine) requirements, and have them managed for you. This would let the data scientist/developer focus on the job itself.
Looking into it a bit, this seems similar to how Google runs ML Engine jobs by defining a scale tier, although I much prefer Batch AI's method of using custom containers over ML Engine's runtime versions to actually run the jobs 😄
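As a rough sketch of the idea (these field names are hypothetical, not part of any real Batch AI API; they only illustrate the shape of a cluster-free job spec), a job submission might declare its resource requirements directly instead of referencing a pre-created cluster:

```json
{
  "name": "train-model",
  "container": {
    "image": "tensorflow/tensorflow:latest-gpu"
  },
  "resources": {
    "cpu": 4,
    "memoryGb": 16,
    "gpu": 1
  },
  "command": "python train.py"
}
```

The service would then provision and tear down whatever machines satisfy the `resources` block, much as ACI does today for CPU and memory.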