Commit 5650442

refine context length (#1813)
### What problem does this PR solve?

#1594

### Type of change

- [x] Performance Improvement
1 parent 5b013da commit 5650442

File tree

1 file changed (+1, -1 lines)
graphrag/index.py

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ def build_knowlege_graph_chunks(tenant_id: str, chunks: List[str], callback, ent
     llm_bdl = LLMBundle(tenant_id, LLMType.CHAT)
     ext = GraphExtractor(llm_bdl)
     left_token_count = llm_bdl.max_length - ext.prompt_token_count - 1024
-    left_token_count = llm_bdl.max_length * 0.4
+    left_token_count = max(llm_bdl.max_length * 0.8, left_token_count)
 
     assert left_token_count > 0, f"The LLM context length({llm_bdl.max_length}) is smaller than prompt({ext.prompt_token_count})"
 
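For context, here is a minimal sketch of how the new chunk token budget compares with the old one. Only the two formulas mirror the diff above; the `max_length` and `prompt_token_count` values are made up for illustration and are not taken from the repository.

```python
# Minimal sketch comparing the old and new chunk token budgets from the diff
# above. The numeric inputs below are hypothetical example values.

def budget_before(max_length: int, prompt_token_count: int) -> float:
    # Old line 71: the budget is always capped at 40% of the context window,
    # no matter how small the extraction prompt is.
    return max_length * 0.4


def budget_after(max_length: int, prompt_token_count: int) -> float:
    # New line 71: start from the window minus the prompt and a 1024-token
    # reserve (line 70), then never drop below 80% of the window.
    left_token_count = max_length - prompt_token_count - 1024
    return max(max_length * 0.8, left_token_count)


if __name__ == "__main__":
    max_length, prompt_token_count = 32768, 2000  # hypothetical values
    print(f"before: {budget_before(max_length, prompt_token_count):.1f}")  # before: 13107.2
    print(f"after:  {budget_after(max_length, prompt_token_count):.1f}")   # after:  29744.0
```

Under these example numbers the per-batch token budget roughly doubles, which is the performance improvement the commit message refers to.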
