
[Bug] Launching a server with --enable-torch-compile produces a torch dynamo error #1923

msublee opened this issue Nov 5, 2024 · 1 comment
msublee commented Nov 5, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

I'm using SGLang but running into an issue when I launch a server with --enable-torch-compile. The issue does not occur without --enable-torch-compile. Strangely, the problem did not appear in previous versions (v0.3.1 or v0.3.2); it seems to start with v0.3.4.
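
For context, my understanding (an assumption about sglang internals, not the actual implementation) is that --enable-torch-compile makes the model's decode forward run under torch.compile, so any Python-level logic in the attention backend gets traced by TorchDynamo. A minimal sketch of that shape:

# Illustration only, not the sglang code path: the flag conceptually means
# the forward pass is wrapped in torch.compile, so anything the backend does
# in Python (e.g. tuple indexing) must be something Dynamo can trace.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)
compiled_forward = torch.compile(model.forward)
print(compiled_forward(torch.randn(2, 16)).shape)

The error I get: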

  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lists.py", line 106, in getitem_const
    assert isinstance(index, (int, torch.SymInt))
AssertionError:

from user code:
   File "/sgl-workspace/sglang/python/sglang/srt/layers/attention/__init__.py", line 58, in torch_dynamo_resume_in_forward_at_57
    return self.forward_decode(q, k, v, layer, forward_batch)
  File "/sgl-workspace/sglang/python/sglang/srt/layers/attention/flashinfer_backend.py", line 273, in forward_decode
    decode_wrapper = self.forward_metadata[0][self._get_wrapper_idx(layer)]

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True


W1104 22:57:20.399000 140422133311232 torch/_inductor/compile_worker/subproc_pool.py:126] SubprocPool unclean exit
/usr/lib/python3.10/multiprocessing/resource_tracker.py:104: UserWarning: resource_tracker: process died unexpectedly, relaunching.  Some resources might leak.
  warnings.warn('resource_tracker: process died unexpectedly, '
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
    cache[rtype].remove(name)
KeyError: '/mp-0je2dy8c'
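
The log itself suggests an eager fallback. As a stopgap only (where exactly to set this inside the sglang server process is an assumption on my side), suppressing Dynamo errors makes failing frames run eagerly instead of crashing:

# Sketch of the fallback suggested by the error message: frames that fail to
# compile are skipped and executed eagerly instead of raising. This hides the
# assertion rather than fixing it, so the compile speedup is lost for the
# affected frames.
import torch
import torch._dynamo

torch._dynamo.config.suppress_errors = True  # fall back to eager on Dynamo errors

@torch.compile
def f(x):
    return torch.relu(x) + 1

print(f(torch.randn(4)))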

Reproduction

I launch the server with Docker Compose (docker compose -f docker-compose.yml up):

services:
  sglang:
    image: lmsysorg/sglang:v0.3.5-cu121
    container_name: sglang
    volumes:
      - <my_custom_model_path>:/models:ro
    restart: always
    network_mode: host
    # Or you can only publish port 30000
    # ports:
    #   - 30000:30000
    environment:
      HF_TOKEN: <secret>
    entrypoint: python3 -m sglang.launch_server
    command:
      --model-path /models
      --tokenizer-path /models
      --port 30000
      --tokenizer-mode auto
      --dtype bfloat16
      --served-model-name sglang
      --mem-fraction-static 0.5
      --random-seed 0
      --enable-torch-compile
      # --skip-tokenizer-init
      # --log-requests
    ulimits:
      memlock: -1
      stack: 67108864
    ipc: host
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]

The model is a fine-tuned version of gemma-2-2b.
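
Once the container is healthy, I check the server with a plain generate request (the /generate payload below reflects my understanding of the sglang native API; treat the exact fields as an assumption):

# Quick smoke test against the server defined in the compose file above.
# Endpoint and payload follow the sglang native /generate API as I
# understand it; adjust the fields if they differ in your version.
import requests

resp = requests.post(
    "http://localhost:30000/generate",
    json={"text": "Hello, my name is", "sampling_params": {"max_new_tokens": 16}},
    timeout=60,
)
print(resp.json())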

Environment

Python: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.0
CUDA_HOME: /home/matt/miniconda3/envs/sglang
NVCC: Cuda compilation tools, release 12.4, V12.4.99
CUDA Driver Version: 550.90.07
PyTorch: 2.4.0+cu121
sglang: 0.3.5
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.46.1
requests: 2.32.3
tqdm: 4.66.6
numpy: 1.26.4
aiohttp: 3.10.10
fastapi: 0.115.4
hf_transfer: 0.1.8
huggingface_hub: 0.26.2
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.1.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.17
openai: 1.54.0
anthropic: 0.39.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 48-63,176-191 3 N/A
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 48-63,176-191 3 N/A
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS 16-31,144-159 1 N/A
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS 16-31,144-159 1 N/A
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS 112-127,240-255 7 N/A
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS 112-127,240-255 7 N/A
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS 80-95,208-223 5 N/A
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS 80-95,208-223 5 N/A
NIC0 PXB PXB SYS SYS SYS SYS SYS SYS X PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC1 PXB PXB SYS SYS SYS SYS SYS SYS PXB X SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC2 SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS X PXB SYS SYS SYS SYS SYS SYS SYS SYS
NIC3 SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS PXB X SYS SYS SYS SYS SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X PIX SYS SYS SYS SYS SYS SYS
NIC5 SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX X SYS SYS SYS SYS SYS SYS
NIC6 SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS X PXB SYS SYS SYS SYS
NIC7 SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PXB X SYS SYS SYS SYS
NIC8 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS X PXB SYS SYS
NIC9 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PXB X SYS SYS
NIC10 SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X PIX
NIC11 SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9
NIC10: mlx5_10
NIC11: mlx5_11

ulimit soft: 500000
I1105 17:53:35.421000 139824578339904 torch/_dynamo/utils.py:335] TorchDynamo compilation metrics:
I1105 17:53:35.421000 139824578339904 torch/_dynamo/utils.py:335] Function, Runtimes (s)
V1105 17:53:35.422000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.422000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1105 17:53:35.422000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.423000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.424000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.424000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1105 17:53:35.424000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1105 17:53:35.424000 139824578339904 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)

YJHMITWEB commented:

Same error here. Simply running python -m sglang.bench_latency --model-path meta-llama/Meta-Llama-3-8B-Instruct --batch 32 --input-len 256 --output-len 32 gives the error:

(sglang) [sglang]$ python -m sglang.bench_latency --model-path meta-llama/Meta-Llama-3-8B-Instruct --batch 32 --input-len 256 --output-len 32
[2024-11-08 14:29:10 TP0] Init torch distributed begin.
[2024-11-08 14:29:11 TP0] Load weight begin. avail mem=92.49 GB
[2024-11-08 14:29:12 TP0] lm_eval is not installed, GPTQ may not be usable
INFO 11-08 14:29:14 weight_utils.py:243] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:35<01:45, 35.08s/it]
Loading safetensors checkpoint shards:  50% Completed | 2/4 [01:13<01:14, 37.18s/it]
Loading safetensors checkpoint shards:  75% Completed | 3/4 [01:47<00:35, 35.85s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [01:56<00:00, 24.86s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [01:56<00:00, 29.00s/it]

[2024-11-08 14:31:10 TP0] Load weight end. type=LlamaForCausalLM, dtype=torch.bfloat16, avail mem=77.42 GB
[2024-11-08 14:31:10 TP0] Memory pool end. avail mem=10.92 GB
[2024-11-08 14:31:11 TP0] Capture cuda graph begin. This can take up to several minutes.
max_total_num_tokens=543329
Warmup ...
Prefill. latency: 0.90693 s, throughput:   9032.63 token/s
Decode.  latency: 0.02569 s, throughput:   1245.73 token/s
Decode.  latency: 0.01008 s, throughput:   3174.12 token/s
Decode.  latency: 0.00996 s, throughput:   3213.72 token/s
Decode.  latency: 0.00993 s, throughput:   3220.97 token/s
Decode.  latency: 0.00995 s, throughput:   3217.65 token/s
Decode.  median latency: 0.00995 s, median throughput:   3217.65 token/s
Total. latency:  0.992 s, throughput:   8513.06 token/s
Benchmark ...
Prefill. latency: 0.18725 s, throughput:  43748.52 token/s
Decode.  latency: 0.01033 s, throughput:   3097.21 token/s
Decode.  latency: 0.01023 s, throughput:   3129.42 token/s
Decode.  latency: 0.01009 s, throughput:   3172.92 token/s
Decode.  latency: 0.01008 s, throughput:   3173.75 token/s
Decode.  latency: 0.01001 s, throughput:   3196.80 token/s
Decode.  median latency: 0.00990 s, median throughput:   3230.81 token/s
Total. latency:  0.496 s, throughput:  18586.90 token/s
[rank0]:W1108 14:31:27.438000 22660914546240 torch/_inductor/compile_worker/subproc_pool.py:126] SubprocPool unclean exit
/sglang/lib/python3.10/multiprocessing/resource_tracker.py:104: UserWarning: resource_tracker: process died unexpectedly, relaunching.  Some resources might leak.
  warnings.warn('resource_tracker: process died unexpectedly, '
Traceback (most recent call last):
  File "/sglang/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
    cache[rtype].remove(name)
KeyError: '/mp-p3y8y9vr'
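
For what it's worth, the trailing "SubprocPool unclean exit" / resource_tracker noise comes from Inductor's parallel compile workers being torn down at process exit. One way to separate that teardown noise from the real Dynamo assertion above is to disable the worker pool (sketch only; TORCHINDUCTOR_COMPILE_THREADS is the environment knob I believe controls it):

# Sketch: force Inductor to compile in-process so the SubprocPool is never
# created, which removes the unclean-exit warning at shutdown. The variable
# must be set before torch is imported; the exact knob name is my assumption
# about the Inductor config.
import os
os.environ["TORCHINDUCTOR_COMPILE_THREADS"] = "1"

import torch
import torch._inductor.config as inductor_config
print(inductor_config.compile_threads)  # expected to report 1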
