
Update vllm to v0.19.1 #1567


Triggered via issue April 21, 2026 06:57
@pan-x-c commented on #529 (commit 9051d63)
Status Failure
Total duration 27m 23s
Artifacts 1

unittest.yaml

on: issue_comment

Annotations

12 errors and 1 warning
unittest
Process completed with exit code 1.
Failed Test: tests/common/vllm_test.py::TestLogprobs::test_logprobs_api
The test failed in the call phase while awaiting prepare_engines(self.engines, self.auxiliary_engines) (tests/common/vllm_test.py:623), which gathers the engines' prepare() calls via asyncio.gather (tests/common/vllm_test.py:43). The gathered Ray task failed:

ray.exceptions.RayTaskError(ModuleNotFoundError): ray::vLLMRolloutModel.prepare()
  (pid=1644, ip=172.21.0.2, actor_id=9c80bd0862c6ed9b56c2a9c510000000,
   repr=<trinity.common.models.vllm_model.vLLMRolloutModel object at 0x7fa7646e49b0>)
  File "/workspace/trinity/common/models/vllm_model.py", line 174, in prepare
    await self.run_api_server()
  File "/workspace/trinity/common/models/vllm_model.py", line 557, in run_api_server
    self.api_server = get_api_server(
  File "/workspace/trinity/common/models/vllm_patch/__init__.py", line 89, in get_api_server
    run_api_server_in_ray_actor = _get_api_server_runner(vllm_version)
  File "/workspace/trinity/common/models/vllm_patch/__init__.py", line 68, in _get_api_server_runner
    from trinity.common.models.vllm_patch.api_patch_v17 import (
ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17'

(The error propagates through concurrent/futures/_base.py result()/__get_result() and is re-raised by asyncio at tasks.py:684 as RayTaskError(ModuleNotFoundError).)
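The traceback shows a version-gated import: _get_api_server_runner(vllm_version) maps the installed vLLM version to a patch module, and the bare import fails when no patch exists for that version (here, v0.19.1 resolving to a missing api_patch_v17 module). A minimal sketch of how such a dispatch could fail fast with an actionable message instead, assuming hypothetical names throughout (this is not the trinity codebase's actual API):

```python
# Hypothetical sketch of the version-dispatch pattern implied by the traceback.
# Checking importlib.util.find_spec before importing turns a bare
# ModuleNotFoundError into an error that names the version and the missing
# module. `module_template` and the version-to-suffix mapping are assumptions.
import importlib
import importlib.util


def get_api_server_runner(vllm_version: str, module_template: str = "api_patch_v{minor}"):
    """Return the patch module for this vLLM version, or fail with a clear message."""
    minor = vllm_version.split(".")[1]  # e.g. "0.19.1" -> "19"
    module_name = module_template.format(minor=minor)
    # find_spec returns None for a missing top-level module instead of raising.
    if importlib.util.find_spec(module_name) is None:
        raise ModuleNotFoundError(
            f"No API-server patch module for vLLM {vllm_version}: "
            f"expected '{module_name}'. Add the patch module or pin vLLM "
            "to a version with an existing patch."
        )
    return importlib.import_module(module_name)
```

With a guard like this, every test below would have failed with one message naming the unsupported vLLM version, rather than twelve copies of the same raw ModuleNotFoundError.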
Failed Test: tests/common/vllm_test.py::TestAPIServer::test_reasoning_content
The test failed in the call phase at prepare_engines (tests/common/vllm_test.py:507 -> :43). The test is gated by @unittest.skipIf on "Qwen3.5" being in MODEL_PATH_ENV_VAR, so it ran against a Qwen3.5 model. Same root cause: ray::vLLMRolloutModel.prepare() (pid=757, ip=172.21.0.3) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from the version-gated import in trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/vllm_test.py::TestAPIServer::test_api
The test failed in the call phase at prepare_engines (tests/common/vllm_test.py:424 -> :43). Same root cause: ray::vLLMRolloutModel.prepare() (pid=1643, ip=172.21.0.2) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/vllm_test.py::TestMessageProcess::test_truncation_status
The test failed in setUp, before reaching the test body. setUp configures explore mode with max_model_len=100, max_prompt_tokens=50, max_response_tokens=50, enable_prompt_truncation=True, and enable_openai_api=True, then calls create_explorer_models(self.config) (tests/common/vllm_test.py:342). The failure occurs in trinity/common/models/__init__.py:144 (create_explorer_models) -> :193 (create_vllm_inference_models) while creating the Ray actor 'explorer_rollout_model_1' in namespace 'trinity_unittest' via ActorClass._remote, with model_path '/mnt/models/Qwen3.5-0.8B'. The captured traceback is cut off mid-source in this log entry; no final exception line was recorded for this annotation.
Failed Test: tests/common/vllm_test.py::TestMessageProcess::test_no_prompt_truncation
The test (truncation status for multi-turn conversations, with enable_prompt_truncation=False) failed in the call phase at prepare_engines (tests/common/vllm_test.py:376 -> :43). Same root cause: ray::vLLMRolloutModel.prepare() (pid=754, ip=172.21.0.3) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/vllm_test.py::TestModelLenWithoutPromptTruncation::test_model_len
The test failed in the call phase at prepare_engines (tests/common/vllm_test.py:298 -> :43). Same root cause: ray::vLLMRolloutModel.prepare() (pid=751, ip=172.21.0.3) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/vllm_test.py::TestModelLen_2::test_model_len
The test failed in the call phase at prepare_engines (tests/common/vllm_test.py:195 -> :43). Same root cause: ray::vLLMRolloutModel.prepare() (pid=1636, ip=172.21.0.2) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/vllm_test.py::TestModelLen_1::test_model_len
The test failed in the call phase at prepare_engines (tests/common/vllm_test.py:195 -> :43). Same root cause: ray::vLLMRolloutModel.prepare() (pid=748, ip=172.21.0.3) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/vllm_test.py::TestModelLen_0::test_model_len
The test failed in the call phase at prepare_engines (tests/common/vllm_test.py:195 -> :43). Same root cause: ray::vLLMRolloutModel.prepare() (pid=1630, ip=172.21.0.2) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
Failed Test: tests/common/external_model_test.py::TestExternalModel::test_external_model
The test failed in asyncSetUp, which bootstraps a local OpenAI-compatible endpoint via vLLM (explore mode, engine_type "vllm", engine_num 1, tensor_parallel_size 1, enable_openai_api=True), wraps the first engine in a ModelWrapper, and then awaits prepare_engines (tests/common/external_model_test.py:49 -> :22). Same root cause: ray::vLLMRolloutModel.prepare() (pid=1637, ip=172.21.0.2) raised ModuleNotFoundError: No module named 'trinity.common.models.vllm_patch.api_patch_v17' from trinity/common/models/vllm_patch/__init__.py:68 (_get_api_server_runner).
unittest
Process completed with exit code 1.
unittest
Node.js 20 actions are deprecated. The following actions are running on Node.js 20 and may not work as expected: actions/checkout@v4, actions/upload-artifact@v4. Actions will be forced to run with Node.js 24 by default starting June 2nd, 2026. Node.js 20 will be removed from the runner on September 16th, 2026. Please check if updated versions of these actions are available that support Node.js 24. To opt into Node.js 24 now, set the FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true environment variable on the runner or in your workflow file. Once Node.js 24 becomes the default, you can temporarily opt out by setting ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION=true. For more information see: https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/
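The warning's suggested opt-in can be applied in the workflow file itself. A sketch of what that could look like in unittest.yaml, assuming a workflow-level env block (the variable also works at job or runner level, per the notice):

```yaml
# Hypothetical workflow-level env block for unittest.yaml: opts the
# JavaScript actions (actions/checkout@v4, actions/upload-artifact@v4)
# into Node.js 24 ahead of the June 2, 2026 default.
env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: "true"
```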

Artifacts

Produced during runtime
Name: pytest-results
Size: 7.82 KB
Digest: sha256:f50fcdc0536c8f982a925cd247d0e41ac988c958d5c6ad3ccbfb5c8f531f7f71