KV cache quantization support in fp8 rollout in GRPO #1185

@guyueh1

Description

In long-context scenarios, attention can take >50% of rollout time. The current fp8 GRPO recipe only quantizes linear layers, not attention, so it does not help attention performance. This issue tracks support for vLLM's existing KV-cache quantization algorithms in GRPO.
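
For reference, vLLM already exposes KV-cache quantization via the `kv_cache_dtype` engine argument. Below is a minimal sketch of enabling it on a standalone engine (the model name is a placeholder, and the GRPO recipe would still need to plumb this option through to its vLLM rollout worker):

```python
# Minimal sketch: enabling vLLM's built-in KV-cache quantization on a
# standalone engine. The GRPO recipe would need to forward this option
# to the vLLM worker used for rollout.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    kv_cache_dtype="fp8",  # quantize the KV cache to fp8 (e4m3 on CUDA by default)
)

params = SamplingParams(temperature=0.8, max_tokens=128)
outputs = llm.generate(["A very long-context prompt ..."], params)
print(outputs[0].outputs[0].text)
```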
