doc: improve the docstring of append_paged_kv_cache (#606)
Remove unnecessary note.
yzh119 authored Nov 11, 2024
1 parent fe4f898 commit be10bbd
Showing 1 changed file with 1 addition and 4 deletions.
python/flashinfer/page.py: 5 changes (1 addition, 4 deletions)
@@ -258,7 +258,7 @@ def append_paged_kv_cache(
 >>> kv_append_length = torch.tensor([45, 8, 25, 22], dtype=torch.int32, device="cuda:0")
 >>> kv_append_indptr = torch.cat(
 ... [torch.zeros(1).int().to(0), torch.cumsum(kv_append_length, dim=0)]
-... ).int()
+... ).int() # [0, 45, 53, 78, 100]
 >>> max_num_pages = 1000
 >>> page_size = 16
 >>> paged_kv_cache = torch.randn(max_num_pages, 2, page_size, num_kv_heads, head_dim).half().to(0)
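
The bracketed values in the added comment are just the exclusive prefix sum of kv_append_length. A minimal sketch that reproduces them (run on CPU here, so the .to(0) device transfers from the docstring are dropped):

import torch

# Per-request number of new k/v entries to append, as in the docstring example.
kv_append_length = torch.tensor([45, 8, 25, 22], dtype=torch.int32)

# Exclusive prefix sum: entry i is where request i's appended k/v start in the
# flattened append buffer; the final entry is the total number of new entries.
kv_append_indptr = torch.cat(
    [torch.zeros(1).int(), torch.cumsum(kv_append_length, dim=0)]
).int()

print(kv_append_indptr.tolist())  # [0, 45, 53, 78, 100]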
@@ -303,9 +303,6 @@ def append_paged_kv_cache(
 Note
 ----
-Please refer to the :ref:`tutorial <recursive-attention>` for a detailed
-explanation of the log-sum-exp function and attention states.
 The function assumes that the space for appended k/v have already been allocated,
 which means :attr:`kv_indices`, :attr:`kv_indptr`, :attr:`kv_last_page_len` has
 incorporated appended k/v.
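
The retained note means the paged bookkeeping must already reflect the new entries before the call. A minimal sketch of that bookkeeping, assuming for illustration that the caches were empty before the append (so each request's post-append length equals its entry in kv_append_length); the helper names below are illustrative, not part of the flashinfer API:

import torch

page_size = 16
# Post-append sequence length per request (assumed equal to kv_append_length here).
seq_len = torch.tensor([45, 8, 25, 22], dtype=torch.int32)

# Pages each request must already own for the appended k/v to fit (ceil division).
pages_per_req = (seq_len + page_size - 1) // page_size
# Valid entries on each request's last page, in the range 1..page_size.
kv_last_page_len = seq_len - (pages_per_req - 1) * page_size
# kv_indptr partitions kv_indices into per-request page lists, so it has to
# count the post-append pages.
kv_indptr = torch.cat([torch.zeros(1).int(), torch.cumsum(pages_per_req, dim=0)]).int()

print(pages_per_req.tolist())     # [3, 1, 2, 2]
print(kv_last_page_len.tolist())  # [13, 8, 9, 6]
print(kv_indptr.tolist())         # [0, 3, 4, 6, 8]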