fix(core): use getBufferString for message summarization #9739
+220
−56
Port of langchain-ai/langchain#34607
Problem

Fixes token inflation in `SummarizationMiddleware` that caused context window overflow during summarization.

Root cause: When formatting messages for the summary prompt, `JSON.stringify(messages)` was being used, which includes all metadata fields (`usage_metadata`, `response_metadata`, `additional_kwargs`, etc.). This caused the stringified representation to use ~2.5x more tokens than `countTokensApproximately` estimates.

Symptoms:

- `countTokensApproximately` reports the messages as fitting within the token budget
- The actual `JSON.stringify(messages)` text in the prompt uses significantly more tokens

Solution
Use `getBufferString()` to format messages, which produces compact output:

```
Human: What's the weather?
```

instead of the verbose JSON representation:

```json
[
  {
    "type": "human",
    "content": "What's the weather?",
    "additional_kwargs": {},
    "response_metadata": {}
  },
  ...
]
```

Changes
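The token savings can be sketched with a simplified stand-in for `getBufferString()` (the real function lives in `@langchain/core/messages`; `SimpleMessage` and `toBufferString` below are hypothetical illustrations, not the library implementation):

```typescript
type SimpleMessage = {
  type: "human" | "ai" | "system";
  content: string;
  additional_kwargs?: Record<string, unknown>;
  response_metadata?: Record<string, unknown>;
};

const PREFIXES: Record<SimpleMessage["type"], string> = {
  human: "Human",
  ai: "AI",
  system: "System",
};

// Compact formatting in the spirit of getBufferString:
// one "Prefix: text" line per message, metadata dropped entirely.
function toBufferString(messages: SimpleMessage[]): string {
  return messages.map((m) => `${PREFIXES[m.type]}: ${m.content}`).join("\n");
}

const messages: SimpleMessage[] = [
  {
    type: "human",
    content: "What's the weather?",
    additional_kwargs: {},
    response_metadata: {},
  },
  {
    type: "ai",
    content: "It's sunny in SF.",
    additional_kwargs: {},
    response_metadata: {},
  },
];

const compact = toBufferString(messages);
const verbose = JSON.stringify(messages);

console.log(compact);
// The verbose form carries every metadata key, so it is much longer
// even for this tiny two-message history.
console.log(`compact: ${compact.length} chars, verbose: ${verbose.length} chars`);
```

On real histories the gap widens further, since `usage_metadata` and `response_metadata` grow with every AI turn while the buffer-string form stays proportional to the visible text.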
- `@langchain/core`: Updated `getBufferString()` to use the message's `text` property for compact content extraction, and added `tool_calls`/`function_call` support for AI messages
- `langchain`: Updated `createSummary()` in the summarization middleware to use `getBufferString()` instead of `JSON.stringify()`
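The `tool_calls` handling added for AI messages can be sketched as follows (`formatAiMessage` and the exact serialization of the tool-call list are assumptions for illustration, not the PR's actual code):

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

type AiMessage = {
  type: "ai";
  content: string;
  tool_calls?: ToolCall[];
};

// Append tool-call info to the AI line so the summary prompt still
// records which tools were invoked, even when content is empty.
function formatAiMessage(m: AiMessage): string {
  let line = `AI: ${m.content}`;
  if (m.tool_calls?.length) {
    line += ` ${JSON.stringify(m.tool_calls)}`;
  }
  return line;
}

const msg: AiMessage = {
  type: "ai",
  content: "",
  tool_calls: [{ name: "get_weather", args: { city: "SF" } }],
};

console.log(formatAiMessage(msg));
```

Without this, tool-calling AI turns (which often have empty `content`) would collapse to a bare `AI:` line and the summary would lose the record of which tools were used.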