
[llm_load_test] Make more generic for testing on other platforms #672

Open

kpouget wants to merge 3 commits into main from the llm-load-test branch
Conversation

@kpouget (Contributor) commented Feb 10, 2025

Relocating the llm-load-test visualization to a more logical place.

(I want to use the llm-load-test project outside of the kserve project)

/cc @mcharanrm

@openshift-ci bot requested a review from mcharanrm on February 10, 2025 09:46

openshift-ci bot commented Feb 10, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from kpouget. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kpouget force-pushed the llm-load-test branch 4 times, most recently from 6cb5f64 to 78bae64 on February 14, 2025 08:36
…command

and disabling the timeout on macOS (the timeout command is not provided by the OS ...); a hypothetical sketch of this change follows the commit list below
…onfig.yaml: pass the model_id to openai_plugin config

and preserve the historical behavior (for kserve/vllm)
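
On the macOS timeout point from the first commit above: the actual change is not shown in this excerpt, but one way such a guard could look is sketched here. The Ansible shell task, the variable names llm_load_test_run_timeout and llm_load_test_config_file, and the load_test.py invocation are assumptions for illustration, not code taken from this PR.

    # Hypothetical sketch only: run llm-load-test without the timeout wrapper
    # on macOS (Darwin), which does not ship the GNU coreutils timeout command.
    - name: Run llm-load-test
      ansible.builtin.shell: |
        {% if ansible_system != 'Darwin' %}
        timeout {{ llm_load_test_run_timeout | default(3600) }} python3 load_test.py -c {{ llm_load_test_config_file }}
        {% else %}
        python3 load_test.py -c {{ llm_load_test_config_file }}
        {% endif %}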
@kpouget changed the title from "[llm_load_test] Move the llm-load-test visualization from projects/kserve to projects/llm_load_test" to "[llm_load_test] Make more generic for testing on other platforms" on Feb 14, 2025
@kpouget (Contributor, Author) commented Feb 14, 2025

Hello @mcharanrm @dagrayvid, could we get this PR tested to confirm that it doesn't break anything in the KServe testing? It would be nice if we could have it merged ASAP.

@@ -22,7 +22,7 @@ plugin_options:
 {% if llm_load_test_run_plugin == 'openai_plugin' %}
   host: "{{ llm_load_test_run_interface }}://{{ llm_load_test_run_host}}:{{ llm_load_test_run_port}}"
   endpoint: "{{ llm_load_test_run_endpoint }}"
-  model_name: "/mnt/models/"
+  model_name: "{{ llm_load_test_run_model_id }}"
A Collaborator commented on the changed model_name line:
I think you can move this up out of the if statement
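
For context, that suggestion would look roughly like the sketch below. The enclosing plugin_options key and the closing endif are inferred from the visible hunk, so treat this as an illustration of the idea rather than the actual file from the PR.

    plugin_options:
      # Sketch of the suggestion: set model_name once, before the
      # plugin-specific conditional, instead of inside the openai_plugin branch.
      model_name: "{{ llm_load_test_run_model_id }}"
    {% if llm_load_test_run_plugin == 'openai_plugin' %}
      host: "{{ llm_load_test_run_interface }}://{{ llm_load_test_run_host }}:{{ llm_load_test_run_port }}"
      endpoint: "{{ llm_load_test_run_endpoint }}"
    {% endif %}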
