Bug Description
The `add_memory` operation in the MCP server uses a hardcoded `small_model` (defaulting to `gpt-4.1-mini`) when using the OpenAI provider. The hardcoding lives in `LLMClientFactory`, which ignores any user configuration for the secondary model used for lighter tasks such as entity extraction. Users cannot override this default to use other models (e.g., `gpt-4o-mini` or `gpt-3.5-turbo`) without modifying the source code.
Steps to Reproduce
Provide a minimal code example that reproduces the issue:
- Configure the Graphiti MCP server with the OpenAI provider.
- Set the main `model` to `gpt-4o` (or any non-reasoning model).
- Start the MCP server.
- Trigger the `add_memory` tool with any content.
```python
# In mcp_server/src/services/factories.py, line ~110
# The factory logic explicitly ignores config and sets:
small_model = (
    'gpt-5-nano' if is_reasoning_model else 'gpt-4.1-mini'
)
```
Expected Behavior
The MCP server should allow configuration of the `small_model` (via `config.yaml`, environment variables, or CLI arguments). If no configuration is provided, it can fall back to a default, but it should not hardcode the value without an override mechanism.
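A minimal sketch of what such an override could look like in `config.yaml` (the `small_model` key is hypothetical and does not exist today):

```yaml
llm:
  provider: "openai"
  model: "gpt-4o"
  # Hypothetical key: an optional override for the secondary model
  small_model: "gpt-4o-mini"
```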
Actual Behavior
The `LLMClientFactory.create` method hardcodes `small_model` to `'gpt-4.1-mini'` for OpenAI providers when the main model is not a reasoning model (`o1`, `gpt-5`, etc.). This forces a specific model name that may not be desired or available to all users.
Environment
- Graphiti Version: Development (Source) / Latest
- Python Version: [User to insert, e.g. 3.12]
- Operating System: Linux
- Database Backend: [User to insert, e.g. FalkorDB]
- LLM Provider & Model: OpenAI / gpt-4o (Main)
Installation Method
- pip install
- uv add
- Development installation (git clone)
Error Messages/Traceback
N/A - This is a logical limitation, not a crash.
However, logs will show initialization of the client using `gpt-4.1-mini` for the small model role regardless of configuration.
Configuration
```yaml
# mcp_server/config/config.yaml or equivalent
llm:
  provider: "openai"
  model: "gpt-4o"
  # No option exists here to set "small_model"
```
Additional Context
- Does this happen consistently or intermittently? Consistently.
- Which component are you using? MCP Server
- Any recent changes to your environment? N/A
- Related issues or similar problems you've encountered? Linked to #791 ([BUG] Small Model setting is not adhered instead always defaulting to gpt-4.1-nano) regarding small model settings.
Possible Solution
Modify `mcp_server/src/services/factories.py` to check for a configuration value before falling back to the default.
Proposed Logic:
- Update `LLMConfig` in `schema.py` to include an optional `small_model` field.
- Update `factories.py`:
```python
# Pseudocode fix
small_model = config.small_model or (
    'gpt-5-nano' if is_reasoning_model else 'gpt-4.1-mini'
)
```
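For the schema side, a minimal sketch assuming `LLMConfig` is a Pydantic model (the field name follows the proposal above; the surrounding fields and defaults are illustrative, not the project's actual schema):

```python
from pydantic import BaseModel, Field


class LLMConfig(BaseModel):
    provider: str = 'openai'
    model: str = 'gpt-4o'
    # Hypothetical optional override; None means "fall back to the
    # factory default ('gpt-5-nano' or 'gpt-4.1-mini')"
    small_model: str | None = Field(
        default=None,
        description='Override for the secondary model used for lighter tasks',
    )
```

With `small_model` defaulting to `None`, the `config.small_model or (...)` fallback in the pseudocode above works unchanged, and the same field could later be wired to an environment variable or CLI argument.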