[Bug] /v1/responses endpoint forwards unsupported user parameter to upstream, causing 400 error
Description
When clients (e.g., LobeHub v2) send requests to the /v1/responses endpoint with a user parameter, Sub2API forwards this parameter to the upstream OpenAI API without filtering it out. The upstream API rejects the request with a 400 Bad Request error: Unsupported parameter: user.
This issue affects models such as gpt-5.4 and gpt-5.3-codex when accessed through the Responses API endpoint.
Steps to Reproduce
- Send a POST request to `/v1/responses` (or `/responses`) with a body that includes the `user` field:

```json
{
  "model": "gpt-5.4",
  "stream": true,
  "input": [
    {"role": "user", "content": "hello"}
  ],
  "user": "user_123"
}
```
- Sub2API forwards the request to the upstream OpenAI API, including the `user` parameter.
- The upstream API returns: `400 Bad Request - Unsupported parameter: user`
Error Log
```
2026-03-24T19:44:46.925+0800 WARN handler/openai_gateway_handler.go:337 openai.forward_failed
{
  "component": "handler.openai_gateway.responses",
  "model": "gpt-5.4",
  "stream": true,
  "error": "upstream error: 400 message=Unsupported parameter: user"
}
```
Expected Behavior
The `user` parameter should be stripped from the request body before forwarding to the upstream API, just as other unsupported parameters (`prompt_cache_retention`, `safety_identifier`) already are.
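A minimal sketch of that stripping as a standalone helper; the function name and call site are illustrative assumptions, not the actual Sub2API code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripUnsupportedFields removes the given top-level JSON fields from a
// request body before it is forwarded upstream. Hypothetical helper
// mirroring the unsupportedFields handling described in this report.
func stripUnsupportedFields(body []byte, fields []string) ([]byte, error) {
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(body, &payload); err != nil {
		return nil, err
	}
	for _, f := range fields {
		delete(payload, f)
	}
	return json.Marshal(payload)
}

func main() {
	in := []byte(`{"model":"gpt-5.4","stream":true,"user":"user_123"}`)
	out, err := stripUnsupportedFields(in, []string{"prompt_cache_retention", "safety_identifier", "user"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"model":"gpt-5.4","stream":true}
}
```

One trade-off of the map round-trip: `encoding/json` reorders top-level keys alphabetically on re-marshal. That is semantically harmless for JSON, but a real gateway might prefer deleting the fields in place to preserve the body byte-for-byte otherwise.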
Suggested Fix
In `internal/service/openai_gateway_service.go`, add `"user"` to the `unsupportedFields` list:

```go
// Current code (line ~1914):
unsupportedFields := []string{"prompt_cache_retention", "safety_identifier"}

// Suggested fix:
unsupportedFields := []string{"prompt_cache_retention", "safety_identifier", "user"}
```

Environment
- Sub2API version: 0.1.104 (commit: 4f7629a, built: 2026-03-20)
- Client: LobeHub v2.1.44
- Upstream: OpenAI API (OAuth accounts)
- Affected endpoints: `POST /v1/responses` and `POST /responses`
- Affected models: all models accessed via the Responses API (gpt-5.4, gpt-5.3-codex, etc.)
Additional Context
- The `/v1/chat/completions` endpoint is NOT affected; only `/v1/responses` is.
- LobeHub v2 sends the `user` parameter for user-tracking purposes. The upstream OpenAI Responses API does not support this parameter.
- A similar issue has been fixed before: `max_output_tokens` was handled in the same way (ref: "/v1/responses is incompatible with max_output_tokens, causing OpenClaw + gpt-5.4 requests to return 400" #831, PR "fix(openai): normalize OAuth passthrough /responses for OpenClaw clients" #1107).