
[FEA] Support Model A/B Testing #1142

Open
mdemoret-nv opened this issue Aug 21, 2023 · 1 comment
@mdemoret-nv (Contributor)
mdemoret-nv commented Aug 21, 2023

Goal: To support model A/B testing. There are many ways this could be done; some ideas:

  • Utilize an ML Platform to run inference requests and track the results for each model in the ML Platform
  • Use ControlMessage to enqueue two inference tasks, then compare the results of both outputs
  • Create a ModelABStage which can split incoming requests across multiple models and concatenate the outputs
@mdemoret-nv mdemoret-nv changed the title [FEA] Model A/B Testing [CONVERT TO REAL ISSUE] [FEA] Support Model A/B Testing Aug 21, 2023
@tgrunzweig-cpacket
Just a thought on this:

First: great, and this would be helpful for the stated objectives.

But also, one of the strengths of the Morpheus pipeline is that it's easy to re-train, especially for something like DFP where thousands of models need to be retrained regularly. A/B testing would be a great addition because it addresses a basic question: "are the retrained models better than the older ones?" Either Morpheus supports this out of the box, or we, the users, have to implement it somehow; I see no way around it.

Cheers,
Tzahi

Labels: none yet
Projects: Status: Todo
Development: no branches or pull requests
3 participants