fix(ollama): separate thinking content into reasoning_content field #9180
Conversation
I need to update my branch with main, since main has changed in the area of my work.
Is this ready/able to be merged?
@djholt not yet - I need to align it to the v1 code
@brianneisler How can I help you get this across the finish line?
@brianneisler You already did it :) Your need motivates me to update this PR! :) Hold my beer bro... |
What has been done:
- Store Ollama model "thinking" responses in `additional_kwargs.reasoning_content` instead of mixing them with the main content
- Preserve `additional_kwargs` when constructing non-chunk messages from streams
- Add a unit test verifying that thinking content is properly separated
- Add tests ensuring `reasoning_content` is populated when `think=true` and absent when `think=false`
- Fixes the issue where thinking tags or content were appearing in the main response content

Fixes langchain-ai#9089
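To illustrate the behavior this PR describes, here is a minimal sketch of mapping a raw Ollama chunk to a message dict with the reasoning separated out. The chunk shape and the `build_message` helper are hypothetical, for illustration only, and do not reflect the actual `langchain-ollama` internals:

```python
# Hypothetical sketch: route Ollama's "thinking" output into
# additional_kwargs["reasoning_content"] instead of the visible content.
def build_message(chunk: dict) -> dict:
    """Map a raw Ollama response chunk (illustrative shape) to a message dict."""
    message = {
        "content": chunk.get("content", ""),
        "additional_kwargs": {},
    }
    thinking = chunk.get("thinking")
    if thinking:
        # Keep the reasoning out of the main content field.
        message["additional_kwargs"]["reasoning_content"] = thinking
    return message

# With think=true, the model returns a thinking block alongside the answer:
msg = build_message({"content": "4", "thinking": "2 + 2 is basic arithmetic."})
assert msg["content"] == "4"
assert msg["additional_kwargs"]["reasoning_content"] == "2 + 2 is basic arithmetic."

# With think=false there is no thinking key, so reasoning_content is absent:
msg = build_message({"content": "4"})
assert "reasoning_content" not in msg["additional_kwargs"]
```

Consumers can then read the answer from `content` and inspect the model's reasoning separately via `additional_kwargs`, matching the tests listed above.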
Force-pushed: 289a522 to c61c8a8
@djholt @brianneisler updated
Hey LangChain team, any updates on when this can be merged?