Conversation

@smart-cau smart-cau commented Oct 24, 2025

Description

Fixes blocking I/O errors when using ChatGoogleGenerativeAI in async contexts (e.g., LangGraph applications). The issue occurred because get_user_agent() called importlib.metadata.version() on every invocation; that call performs blocking file I/O, which, when it happens during async client initialization, stalls the event loop and triggers BlockingError in async-aware environments.

Solution: Cache the package version once at module import time instead of reading it on every get_user_agent() call. This eliminates the blocking I/O in async contexts and avoids repeated metadata lookups.
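As a rough sketch of the approach (the `_LANGCHAIN_GENAI_VERSION` and `get_user_agent()` names come from this PR; the exact signature, fallback, and user-agent format below are illustrative assumptions, not the repo's verbatim code):

```python
from importlib import metadata
from typing import Optional, Tuple

# Read the package version once, at import time. Subsequent calls to
# get_user_agent() serve the cached value and perform no file I/O,
# so they can never block an asyncio event loop.
try:
    _LANGCHAIN_GENAI_VERSION = metadata.version("langchain-google-genai")
except metadata.PackageNotFoundError:
    # Assumed fallback for source checkouts without an installed distribution.
    _LANGCHAIN_GENAI_VERSION = "unknown"


def get_user_agent(module: Optional[str] = None) -> Tuple[str, str]:
    """Return (client_library_version, user_agent) built from the cached version."""
    client_library_version = _LANGCHAIN_GENAI_VERSION
    if module:
        client_library_version = f"{client_library_version}-{module}"
    return client_library_version, f"langchain-google-genai/{client_library_version}"
```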

Relevant issues

Fixes #1231

Type

🐛 Bug Fix

Changes

Core Changes

  • libs/genai/langchain_google_genai/_common.py
    • Added module-level _LANGCHAIN_GENAI_VERSION cache
    • Modified get_user_agent() to use cached version

Test Changes

  • libs/genai/tests/unit_tests/test_common.py
    • Added test_version_is_cached_at_module_level()
    • Added test_get_user_agent_no_blocking_in_async_context()
    • Added test_async_context_execution()
    • Updated existing tests to use cached version

Testing

Unit Tests

  • ✅ All existing unit tests pass in libs/genai
  • ✅ New tests verify version caching behavior
  • ✅ New tests ensure no blocking I/O in async contexts

Integration Testing

Test Commands Run

cd libs/genai && make test

Note

This PR is part of the fix for issue #1231. A companion PR (#1267) addresses the same issue in langchain-google-vertexai (related to #873).

The solution:

  • ✅ No breaking changes to public APIs
  • ✅ Backward compatible
  • ✅ Performance improvement (version read only once)
  • ✅ Zero overhead for subsequent calls
  • ✅ Safe in all contexts (sync and async)

@smart-cau smart-cau changed the title from "fix(genai,vertexai): resolve blocking I/O in async contexts" to "fix(genai,vertex): resolve blocking I/O in async contexts" Oct 24, 2025
@lkuligin (Collaborator)

could you split the PR into two parts, one touching genai and another touching vertexai, please?

Cache package version at module import time to eliminate blocking I/O
during async client initialization. This fixes BlockingError when using
ChatGoogleGenerativeAI in async contexts like LangGraph.

- Add module-level version caching in _common.py
- Update get_user_agent() to use cached version
- Add tests to verify caching behavior and async safety

Fixes langchain-ai#1231
@smart-cau smart-cau force-pushed the smart-cau/fix/blocking-error branch from b858a3e to 1759202 on October 26, 2025 11:10
@smart-cau smart-cau changed the title from "fix(genai,vertex): resolve blocking I/O in async contexts" to "fix(genai): resolve blocking I/O in async contexts" Oct 26, 2025

smart-cau commented Oct 26, 2025

could you split the PR into two parts, one touching genai and another touching vertexai, please?

Done! @lkuligin I've split this into two separate PRs as requested:

  • This PR now only touches the genai package
  • Created #1267 for the vertexai package with the same fix

Both PRs address the same blocking I/O issue in async contexts.



Development

Successfully merging this pull request may close these issues.

BlockingError when using ChatGoogleGenerativeAI with async in LangGraph dev
