[V1] Simplify Shutdown #11659
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
@WoosukKwon - this solves the logs you were seeing
LGTM
Note: I'm seeing some shared memory leaks when running examples/offline_inference.py with tensor_parallel_size=2 that we should fix up:
/usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
(Not seeing this when using `vllm serve` or when running with tensor_parallel_size=1.)
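For context, here is a minimal sketch of how that warning arises (illustrative only, not vLLM's actual IPC code): the resource tracker flags any shared memory segment that was created but never unlinked before interpreter exit.

```python
# Minimal sketch, not vLLM code: a shared memory segment that is created
# but never unlink()ed is reported as leaked by the resource tracker at
# interpreter shutdown, producing the warning quoted above.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=1024)
# ... hand shm.name to another process, which reads/writes via shm.buf ...

# Explicit cleanup on shutdown avoids the leak report:
shm.close()    # release this process's mapping of the segment
shm.unlink()   # destroy the segment itself (the creator's responsibility)
```

If a process exits before the segment it registered is unlinked, the tracker still reports it at shutdown, which would be consistent with the leak appearing only in the tensor_parallel_size=2 path.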
SUMMARY:
- Handle shutdown via `weakref.finalize` instead of `__del__` (resolves weird logs when `LLM` is cleaned up).
- Removes the need for `LLM`, `LLMEngine`, or `AsyncLLM` to do anything special for shutdown, since all the objects responsible for managing their IPC and MP resources handle their own shutdown.
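Below is a minimal sketch of the `weakref.finalize` shutdown pattern described above, using a hypothetical `EngineCore` class that owns a worker process; the names are illustrative, not vLLM's actual classes.

```python
import time
import weakref
from multiprocessing import Process


def _shutdown(proc: Process) -> None:
    # Module-level callback: the finalizer must not close over the owner
    # object itself, or the weak reference could never die.
    if proc.is_alive():
        proc.terminate()
        proc.join()


class EngineCore:
    """Hypothetical owner of an MP resource (not vLLM's actual class)."""

    def __init__(self) -> None:
        self.proc = Process(target=time.sleep, args=(60,))
        self.proc.start()
        # Unlike __del__, a finalizer registered this way never runs during
        # the late stage of interpreter shutdown when module globals may
        # already be None (the source of the "weird logs" with __del__).
        self._finalizer = weakref.finalize(self, _shutdown, self.proc)

    def shutdown(self) -> None:
        # Explicit shutdown is idempotent: calling the finalizer marks it
        # dead, so it will not fire again during garbage collection.
        self._finalizer()


if __name__ == "__main__":
    engine = EngineCore()
    del engine  # the finalizer fires here; no special __del__ needed
```

With each resource-owning object registering its own finalizer like this, the top-level `LLM`, `LLMEngine`, and `AsyncLLM` classes need no explicit teardown hooks.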