
BlockingError when using ChatGoogleGenerativeAI with async in LangGraph dev #1231

@Trinkes

Description

When using ChatGoogleGenerativeAI with async/await in a LangGraph application, langgraph dev throws a blockbuster.blockbuster.BlockingError due to blocking I/O calls during async client initialization. The blocking call occurs when langchain_google_genai reads package metadata using importlib.metadata.version().
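For reference, the failing path boils down to the following (a simplified sketch reconstructed from the stack trace below; the real function in langchain_google_genai/_common.py has more to it):

from importlib import metadata

def get_user_agent(module=None):
    # Reads the installed package's METADATA file from disk. Because the
    # lazy async_client property runs this on the event loop, blockbuster
    # flags the underlying io.TextIOWrapper.read as blocking I/O.
    langchain_version = metadata.version("langchain-google-genai")
    ...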

Environment

  • langchain-google-genai: 2.1.12
  • langgraph: 0.6.8
  • langgraph-api: 0.4.37
  • langgraph-cli: 0.4.2
  • Python: 3.13.5

Minimal Reproducible Example

# minimal_example.py
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, MessagesState


async def my_node(state: MessagesState):
    llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
    response = await llm.ainvoke(state["messages"])
    return {"messages": [response]}


graph = StateGraph(MessagesState)
graph.add_node("chat", my_node)
graph.set_entry_point("chat")
graph.set_finish_point("chat")

minimal_graph = graph.compile()

langgraph.json:

{
  "dependencies": ["."],
  "graphs": {
    "minimal": "./minimal_example.py:minimal_graph"
  },
  "env": ".env"
}

Steps to reproduce:

  1. Run langgraph dev
  2. Invoke the graph
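The same error should also be reproducible without the dev server by activating blockbuster directly (a sketch, assuming blockbuster's blockbuster_ctx context manager; langgraph dev enables the same checks internally):

import asyncio

from blockbuster import blockbuster_ctx
from langchain_google_genai import ChatGoogleGenerativeAI


async def main():
    llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
    # The first ainvoke lazily builds async_client, which triggers the
    # blocking importlib.metadata read shown in the trace below.
    await llm.ainvoke("hello")


with blockbuster_ctx():
    asyncio.run(main())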

Stack Trace

Traceback (most recent call last):
  File "./.venv/lib/python3.13/site-packages/langgraph_api/worker.py", line 142, in wrap_user_errors
    await consume(
        stream, run_id, resumable, stream_modes, thread_id=run["thread_id"]
    )
  File "./.venv/lib/python3.13/site-packages/langgraph_api/stream.py", line 501, in consume
    raise e
  File "./.venv/lib/python3.13/site-packages/langgraph_api/stream.py", line 484, in consume
    async for mode, payload in stream:
        ...<6 lines>...
    )
  File "./.venv/lib/python3.13/site-packages/langgraph_api/stream.py", line 369, in astream_state
    event = await wait_if_not_done(anext(stream, sentinel), done)
  File "./.venv/lib/python3.13/site-packages/langgraph_api/asyncio.py", line 89, in wait_if_not_done
    raise e.exceptions[0] from None
  File "./.venv/lib/python3.13/site-packages/langgraph/pregel/main.py", line 2976, in astream
    async for _ in runner.atick(
        ...<13 lines>...
    yield o
  File "./.venv/lib/python3.13/site-packages/langgraph/pregel/_runner.py", line 401, in atick
    _panic_or_proceed(
        futures.done.union(f for f, t in futures.items() if t is not None),
        timeout_exc_cls=asyncio.TimeoutError,
        panic=reraise,
    )
  File "./.venv/lib/python3.13/site-packages/langgraph/pregel/_runner.py", line 511, in _panic_or_proceed
    raise exc
  File "./.venv/lib/python3.13/site-packages/langgraph/pregel/_retry.py", line 137, in arun_with_retry
    return await task.proc.ainvoke(task.input, config)
  File "./.venv/lib/python3.13/site-packages/langgraph/_internal/_runnable.py", line 706, in ainvoke
    input = await asyncio.create_task(
        step.ainvoke(input, config, **kwargs), context=context
    )
  File "./.venv/lib/python3.13/site-packages/langgraph/_internal/_runnable.py", line 474, in ainvoke
    ret = await self.afunc(*args, **kwargs)
  File "./.claude/tmp/minimal_example.py", line 7, in my_node
    response = await llm.ainvoke(state["messages"])
  File "./.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 417, in ainvoke
    llm_result = await self.agenerate_prompt(
        ...<8 lines>...
    )
  File "./.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1036, in agenerate_prompt
    return await self.agenerate(
        prompt_messages, stop=stop, callbacks=callbacks, **kwargs
    )
  File "./.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 994, in agenerate
    raise exceptions[0]
  File "./.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1153, in _agenerate_with_cache
    async for chunk in self._astream(messages, stop=stop, **kwargs):
        ...<7 lines>...
    chunks.append(chunk)
  File "./.venv/lib/python3.13/site-packages/langchain_google_genai/chat_models.py", line 1921, in _astream
    if not self.async_client:
  File "./.venv/lib/python3.13/site-packages/langchain_google_genai/chat_models.py", line 1624, in async_client
    client_info=get_client_info(f"ChatGoogleGenerativeAI:{self.model}"),
  File "./.venv/lib/python3.13/site-packages/langchain_google_genai/_common.py", line 164, in get_client_info
    client_library_version, user_agent = get_user_agent(module)
  File "./.venv/lib/python3.13/site-packages/langchain_google_genai/_common.py", line 143, in get_user_agent
    langchain_version = metadata.version("langchain-google-genai")
  File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/importlib/metadata/__init__.py", line 987, in version
    return distribution(distribution_name).version
  File "./.venv/lib/python3.13/site-packages/importlib_metadata/__init__.py", line 557, in version
    return md_none(self.metadata)['Version']
  File "./.venv/lib/python3.13/site-packages/importlib_metadata/__init__.py", line 527, in metadata
    self.read_text('METADATA')
  File "./.venv/lib/python3.13/site-packages/importlib_metadata/__init__.py", line 998, in read_text
    return self._path.joinpath(filename).read_text(encoding='utf-8')
  File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/pathlib/_local.py", line 546, in read_text
    return PathBase.read_text(self, encoding, errors, newline)
  File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/pathlib/_abc.py", line 633, in read_text
    return f.read()
  File "./.venv/lib/python3.13/site-packages/blockbuster/blockbuster.py", line 109, in wrapper
    raise BlockingError(func_name)

blockbuster.blockbuster.BlockingError: Blocking call to io.TextIOWrapper.read

What I've Tried

  1. ✗ Passing the API key directly instead of loading it from a file: the metadata read still blocks
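A possible workaround (an untested sketch; async_client is the lazy property named in the trace, and forcing it eagerly is an assumption on my part) is to make sure its first access happens off the event loop:

import asyncio

from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import MessagesState

# Option 1: touch the lazy property at import time, while no event loop
# is running, so the blocking metadata read happens synchronously here.
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
_ = llm.async_client


async def my_node(state: MessagesState):
    # Option 2: if the model must be built inside the node, push the
    # first (blocking) client initialization onto a worker thread.
    node_llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", api_key="YOUR_API_KEY")
    await asyncio.to_thread(lambda: node_llm.async_client)
    response = await node_llm.ainvoke(state["messages"])
    return {"messages": [response]}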
