sglang or vllm api interface #51
Comments
For everyone who may want to run this model behind an OpenAI-compatible API server.
🚀 The feature, motivation and pitch
Here is how to launch an API server using sglang, from its quick start page:
Launch A Server
from sglang.test.test_utils import is_in_ci
from sglang.utils import wait_for_server, print_highlight, terminate_process

if is_in_ci():
    from patch import launch_server_cmd
else:
    from sglang.utils import launch_server_cmd

# This is equivalent to running the following command in your terminal:
# python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --host 0.0.0.0

server_process, port = launch_server_cmd(
    """
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --host 0.0.0.0
"""
)

wait_for_server(f"http://localhost:{port}")
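The server started above exposes an OpenAI-compatible API, so a standard OpenAI client pointed at it should work. A minimal sketch, assuming the /v1 chat completions endpoint and reusing the model name from the launch command (the prompt is just a placeholder):

from openai import OpenAI

# Point the standard OpenAI client at the local sglang server launched above.
client = OpenAI(base_url=f"http://localhost:{port}/v1", api_key="EMPTY")

# Send a simple chat request to verify the server responds.
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello, can you see this?"}],
)
print(response.choices[0].message.content)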
I want to run olmocr over an sglang server. Is there such a feature?
Alternatives
No response
Additional context
No response