Conversation

@chrispy-snps chrispy-snps commented Jan 1, 2026

Title

feat(core): support custom message separator in get_buffer_string()

Description

A small enhancement to support a user-configurable message separator in get_buffer_string().

Issue Addressed

Currently, get_buffer_string() separates messages with a single newline. When messages themselves contain multi-line content, the message boundaries become ambiguous in the output:

from langchain_core.messages import SystemMessage, HumanMessage, AIMessage, get_buffer_string

messages = [
    HumanMessage(content="I have a question.\n\nCan you help me?"),
    AIMessage(content="Sure!\n\nI can help you."),
    HumanMessage(content="What is the capital of France?"),
]

print(get_buffer_string(messages))
# Human: I have a question.
#
# Can you help me?
# AI: Sure!
#
# I can help you.
# Human: What is the capital of France?

Changes

  • Implemented a message_separator argument for the get_buffer_string() function.
    • The name was chosen for consistency with other *_separator arguments in the codebase.
  • Updated unit tests to cover the new argument.

Example

print(get_buffer_string(messages, message_separator="\n\n"))
# Human: I have a question.
#
# Can you help me?
#
# AI: Sure!
#
# I can help you.
#
# Human: What is the capital of France?
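The behavior above can be sketched with a minimal, self-contained function. This is illustrative only, not the actual langchain-core implementation: the real get_buffer_string() operates on BaseMessage objects, while plain dicts stand in for messages here, and the prefix handling is assumed.

```python
def get_buffer_string_sketch(
    messages,
    human_prefix="Human",
    ai_prefix="AI",
    message_separator="\n",  # default preserves the original single-newline behavior
):
    """Render messages as 'Prefix: content' strings joined by message_separator."""
    prefixes = {"human": human_prefix, "ai": ai_prefix, "system": "System"}
    strings = [
        f"{prefixes.get(m['type'], m['type'].capitalize())}: {m['content']}"
        for m in messages
    ]
    return message_separator.join(strings)


messages = [
    {"type": "human", "content": "I have a question.\n\nCan you help me?"},
    {"type": "ai", "content": "Sure!\n\nI can help you."},
]
# With a blank-line separator, message boundaries stay visible even when
# the message content itself contains newlines.
print(get_buffer_string_sketch(messages, message_separator="\n\n"))
```

Because message_separator defaults to "\n", callers that rely on the existing single-newline behavior are unaffected.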

@chrispy-snps chrispy-snps requested a review from eyurtsev as a code owner January 1, 2026 17:37
@github-actions github-actions bot added feature For PRs that implement a new feature; NOT A FEATURE REQUEST core `langchain-core` package issues & PRs and removed feature For PRs that implement a new feature; NOT A FEATURE REQUEST labels Jan 1, 2026

codspeed-hq bot commented Jan 1, 2026

CodSpeed Performance Report

Merging #34569 will improve performance by 27.42%

Comparing chrispy-snps:feat/buffer-string-message-separator (f507007) with master (a7aad60)

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

Summary

⚡ 10 improvements
✅ 3 untouched
⏩ 21 skipped [1]

Benchmarks breakdown

Mode Benchmark BASE HEAD Efficiency
WallTime test_import_time[tool] 666.8 ms 529.9 ms +25.84%
WallTime test_import_time[Runnable] 636.1 ms 499.2 ms +27.42%
WallTime test_import_time[RunnableLambda] 625.8 ms 503.9 ms +24.21%
WallTime test_import_time[HumanMessage] 312.3 ms 264.7 ms +17.96%
WallTime test_import_time[Document] 217.7 ms 189.4 ms +14.91%
WallTime test_import_time[InMemoryVectorStore] 790.7 ms 632.2 ms +25.08%
WallTime test_import_time[LangChainTracer] 563.9 ms 458.2 ms +23.07%
WallTime test_import_time[CallbackManager] 588.6 ms 466.4 ms +26.2%
WallTime test_import_time[BaseChatModel] 661.8 ms 538.6 ms +22.89%
WallTime test_import_time[ChatPromptTemplate] 741.8 ms 608.1 ms +22%

Footnotes

  1. 21 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@ccurme ccurme merged commit 0c7b7e0 into langchain-ai:master Jan 7, 2026
95 checks passed
@chrispy-snps
Contributor Author

Thank you @ccurme!

nomore8797 added a commit to nomore8797/langchain that referenced this pull request Jan 8, 2026
* test(text-splitters): add edge case tests for CharacterTextSplitter (langchain-ai#34628)

* chore(groq): document vision support (langchain-ai#34620)

* feat(core): support custom message separator in get_buffer_string() (langchain-ai#34569)

* chore(langchain): fix types in test_wrap_model_call (langchain-ai#34573)

* fix: handle empty assistant content in Responses API (langchain-ai#34272) (langchain-ai#34296)

* fix(openai): raise proper exception `OpenAIRefusalError` on structured output refusal (langchain-ai#34619)

* release(openai): 1.1.7 (langchain-ai#34640)

* fix(core): fix strict schema generation for functions with optional args (langchain-ai#34599)

* test(core): add edge case for empty examples in LengthBasedExampleSelector (langchain-ai#34641)

* fix(langchain): handle parallel usage of the todo tool in planning middleware (langchain-ai#34637)

The agent should only make a single call to update the todo list at a
time. A parallel call doesn't make sense, but also cannot work as
there's no obvious reducer to use.

On parallel calls of the todo tool, we return a ToolMessage that guides the
LLM not to call the tool in parallel.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>

* release(langchain): release 1.2.2 (langchain-ai#34643)

Release langchain 1.2.2

* fix(langchain): add test to verify version (langchain-ai#34644)

verify version in langchain to avoid accidental drift

---------

Co-authored-by: Manas karthik <manaskarthik200@gmail.com>
Co-authored-by: Aarav Dugar <79491557+AaravDugar123@users.noreply.github.com>
Co-authored-by: Chris Papademetrious <chrispy@synopsys.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Sujal M H <63709163+sujalmh@users.noreply.github.com>
Co-authored-by: OysterMax <sma@esri.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
@mdrxy mdrxy added the external label Jan 22, 2026
