In long-context scenarios, attention can account for more than 50% of rollout time. The current FP8 GRPO recipe only quantizes linear layers, not attention, so it does nothing for attention performance. This issue tracks support for vLLM's existing KV-cache quantization algorithms in GRPO.
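
For reference, vLLM already exposes KV-cache quantization through the `kv_cache_dtype` engine argument; a minimal sketch of enabling it for a standalone rollout engine (the model name is a placeholder, and the GRPO-side wiring is not shown here):

```python
from vllm import LLM, SamplingParams

# Enable vLLM's FP8 KV-cache quantization for rollout.
# "fp8" selects the default FP8 format; "fp8_e5m2" / "fp8_e4m3"
# can be passed explicitly on backends that support them.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    kv_cache_dtype="fp8",
)

prompts = ["Summarize the following long document: ..."]
outputs = llm.generate(prompts, SamplingParams(max_tokens=256, temperature=1.0))
for out in outputs:
    print(out.outputs[0].text)
```

Supporting this in GRPO would mainly mean plumbing `kv_cache_dtype` (and any per-layer KV scales) through the rollout engine configuration, so the quantized KV cache is used during generation while training stays unchanged.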