
[Usage]: A problem when calling llm.generate() several times on one LLM instance #12651

Open
1 task done
KleinMoretti07 opened this issue Feb 1, 2025 · 0 comments
Labels
usage How to use vllm

Comments


KleinMoretti07 commented Feb 1, 2025

Your current environment

The output of `python collect_env.py`

How would you like to use vllm

I'm trying to test the performance of vLLM, so I need to measure the time vLLM takes when the numbers of input and output tokens are fixed. I run each configuration 20 times and take the average. But I ran into a problem: the answers returned after the first round become very strange (they are no longer natural language). It seems the returns after the first round are being interfered with somehow. What is the problem here, and what should I do to solve it? Thanks!
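A minimal sketch of the length check I'm relying on (assuming the same Qwen2.5-3B-Instruct checkpoint and the same fixed-length sampling settings as the full script below): the number of generated tokens can also be read from the token IDs on the returned output instead of re-encoding the decoded text, since re-encoding is not guaranteed to round-trip to the same count.

from vllm import LLM, SamplingParams

llm = LLM(model="qwen2.5-3B/Qwen2.5-3B-Instruct",
          enable_prefix_caching=False, enforce_eager=True)
tokenizer = llm.get_tokenizer()

# Force exactly 128 generated tokens for a single request.
params = SamplingParams(temperature=1, min_tokens=128, max_tokens=128)

out = llm.generate(["some fixed prompt"], sampling_params=params)
completion = out[0].outputs[0]

# Tokens vLLM actually generated for this request.
print(len(completion.token_ids))
# Re-encoding the decoded text can give a slightly different count.
print(len(tokenizer.encode(completion.text, add_special_tokens=False)))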

### Here's my code:

import time
import os
from vllm import LLM, SamplingParams
import torch

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

prompt1 = """
        """

prompt2 = """
        """

prompt3 = """           
        """

prompt4 = """
        """

prompt5 = """
        """

prompt6 = """
        """


prompt_list = [prompt1, prompt2, prompt3, prompt4, prompt5, prompt6]

def count_tokens(text, tokenizer):
    input_ids = tokenizer.encode(text, add_special_tokens=False)
    return len(input_ids)

def get_generation_time(prompts, required_tokens):
    sampling_params = SamplingParams(temperature=1, max_tokens=required_tokens, min_tokens=required_tokens)
    torch.cuda.empty_cache()

    start_time = time.time()
    output = llm.generate(prompts, sampling_params=sampling_params)
    end_time = time.time()

    tokenizer = llm.get_tokenizer()
    print(output[0].outputs[0].text)
    actual_tokens = count_tokens(output[0].outputs[0].text, tokenizer)
    print(required_tokens, actual_tokens)
    assert actual_tokens == required_tokens, "The tokens number is not equal to required_tokens"
    return end_time - start_time



input_length_list = [128, 256, 512, 1024, 2048, 4096]
output_length_list = [128, 256, 512]
timing = [[0 for _ in range(len(output_length_list))] for _ in range(len(input_length_list))]  # 6 x 3, indexed as timing[i][j]

llm = LLM(
    model='qwen2.5-3B/Qwen2.5-3B-Instruct',
    gpu_memory_utilization=0.9,
    enable_prefix_caching=False,
    enforce_eager=True,
    # tensor_parallel_size=tensor_parallel_size,
)

for i, input_tokens in enumerate(input_length_list):
    for j, output_tokens in enumerate(output_length_list):
        prompt = prompt_list[i]
        tensor_parallel_size = 2

        # 21 runs per configuration; the first is a warm-up and excluded from the average.
        time_list = [get_generation_time(prompt, output_tokens) for _ in range(21)]

        print("input_length: ", input_tokens)
        print("output_length: ", output_tokens)
        print(time_list)
        average_time = sum(time_list[1:21]) / 20
        print(average_time)
        timing[i][j] = average_time

        del time_list
        torch.cuda.empty_cache()

print(timing)

### And here are the first and second returns

Processed prompts: 100%|█| 1/1 [00:01<00:00, 1.25s/it, est. speed input: 102.01 toks/s, output: 102.01 t
"

Assistant: ### Travel to Spain: A Comprehensive Guide

Traveling to Spain is a rewarding experience thanks to its rich history, vibrant culture, delicious cuisine, and across-the-board impressive tourist attractions. While the country boasts remarkable attractions like the Alhambra, Barcelona, and the beaches of the Costa del Sol, one must pack a bit of knowledge to truly enjoy the exploration. Here’s a comprehensive guide covering essential travel tips, cultural nuances, and foodie recommendations.

Before You Go

1. Visa Requirements

Spain has a relatively lenient visa system, especially for citizens of many Western European countries and several others.
128 128
Processed prompts: 100%|█| 1/1 [00:01<00:00, 1.31s/it, est. speed input: 97.96 toks/s, output: 97.96 tok
"""

    output = processor.process(content)
    self.assertIn("Translate", output)
    self.assertIn("Spain", output)
    self.assertIn("is", output)
    self.assertIn("situated in Southern Europe", output)
    self.assertIn("The country is bordered by Portugal", output)
    self.assertIn("Andorra", output)
    self.assertIn("Mediterranean Sea", output)
    self.assertIn("Spain's", output)
    self.assertIn("rich history", output)
    self.assertIn("diverse culture", output)
    self.assertIn("beautiful landscapes", output)
    self.assertIn("various civilizations,", output

128 129
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/zhy/vllm/timing/timing_3B_single_prefill.py", line 163, in
[rank0]: time_list = [get_generation_time(prompt, output_tokens) for _ in range(21) ]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/zhy/vllm/timing/timing_3B_single_prefill.py", line 140, in get_generation_time
[rank0]: assert count_tokens(output[0].outputs[0].text) == required_tokens, "The tokens number is not equal to required_tokens"
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AssertionError: The tokens number is not equal to required_tokens
[rank0]:[W202 04:15:38.310929913 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
KleinMoretti07 added the usage How to use vllm label Feb 1, 2025