feat(models): add GPT-5 Pro model definition #994
base: main
Conversation
Added definitions for the new GPT-5 Pro model, including pricing, provider details, and capabilities.
❗ Preview Environment deployment failed on Bunnyshell. See: Environment Details | Pipeline Logs
Walkthrough
Adds a new OpenAI model configuration entry with id gpt-5-pro.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
packages/models/src/models/openai.ts
(1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: This is a pure TypeScript project; do not use `any` or `as any` unless absolutely necessary.
Always use top-level ES imports; never use `require` or dynamic `import()`.
Files:
packages/models/src/models/openai.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Always use top-level import; never use require() or dynamic import()
Files:
packages/models/src/models/openai.ts
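For clarity, a minimal TypeScript sketch of the import guideline above; the `openAiModels` export name and the mapping helper are assumptions used only for illustration, not code from this repository:

```ts
// Preferred: a top-level ES import, per the CLAUDE.md / AGENTS.md guidance.
// Note: `openAiModels` is an assumed export name for illustration only.
import { openAiModels } from "./models/openai";

export function listModelNames(): string[] {
  return openAiModels.map((model) => model.modelName);
}

// Avoid: require() or dynamic import() at call sites.
// const { openAiModels } = require("./models/openai");
// const { openAiModels } = await import("./models/openai");
```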
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: e2e-shards (4)
- GitHub Check: e2e-shards (5)
- GitHub Check: e2e-shards (2)
- GitHub Check: e2e-shards (1)
- GitHub Check: test / run
- GitHub Check: generate / run
- GitHub Check: build / run
- GitHub Check: lint / run
- GitHub Check: autofix
🔇 Additional comments (2)
packages/models/src/models/openai.ts (2)
577-593: Verify whether a second provider should be added.
Other GPT-5 family models (gpt-5, gpt-5-mini, gpt-5-nano) have two providers: "openai" and "routeway-discount". This model only has one provider. Please confirm whether this is intentional or if the "routeway-discount" provider should be added for consistency.
If a second provider should be added, refer to the pattern used in gpt-5 (lines 397-421) for the structure.
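For illustration only, a hedged sketch of what a second provider entry could look like if the gpt-5 pattern were mirrored; the prices below are placeholders rather than real routeway-discount rates, and the exact shape should be copied from the existing gpt-5 entries (lines 397-421):

```ts
// Hypothetical second entry only; field names mirror the gpt-5-pro entry in
// this PR, and the prices are placeholders, not actual routeway-discount rates.
{
  providerId: "routeway-discount",
  modelName: "gpt-5-pro",
  inputPrice: 0 / 1e6,   // placeholder: use the provider's real input rate
  outputPrice: 0 / 1e6,  // placeholder: use the provider's real output rate
  requestPrice: 0,
  contextSize: 400000,
  maxOutput: 272000,
  streaming: true,
  reasoning: true,
  reasoningOutput: "omit",
  vision: true,
  tools: true,
},
```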
571-595: Validate GPT-5 Pro pricing and capacity.
- Pricing matches the official API: $15 / 1M input tokens, $120 / 1M output tokens.
- contextSize 400,000 and maxOutput 272,000 tokens aren't documented publicly; please confirm against the official model spec docs.
```ts
{
  test: "only",
  providerId: "openai",
  modelName: "gpt-5-pro",
  inputPrice: 15 / 1e6,
  outputPrice: 120.0 / 1e6,
  requestPrice: 0,
  contextSize: 400000,
  maxOutput: 272000,
  streaming: true,
  reasoning: true,
  reasoningOutput: "omit",
  vision: true,
  tools: true,
},
```
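For context, the prices in the entry above are stored as dollars per token (15 / 1e6 is $15 per million input tokens). A minimal sketch of how those values translate into a per-request cost estimate, using made-up token counts:

```ts
// Minimal sketch: estimate request cost from the per-token prices above.
// The prices mirror the gpt-5-pro entry ($15/M input, $120/M output);
// the token counts are example numbers, not from the PR.
const inputPrice = 15 / 1e6;   // dollars per input token
const outputPrice = 120 / 1e6; // dollars per output token

const inputTokens = 12_000;
const outputTokens = 2_500;

const costUsd = inputTokens * inputPrice + outputTokens * outputPrice;
console.log(costUsd.toFixed(4)); // 0.18 + 0.30 -> "0.4800"
```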
Add `cachedInputPrice` consistent with other GPT-5 models.
All other GPT-5 family models (gpt-5, gpt-5-mini, gpt-5-nano) define `cachedInputPrice`. This model should follow the same pattern if cached input pricing is available.
Apply this diff to add the missing field (assuming 10% of input price as per the gpt-5 pattern):
```diff
  {
    test: "only",
    providerId: "openai",
    modelName: "gpt-5-pro",
    inputPrice: 15 / 1e6,
    outputPrice: 120.0 / 1e6,
+   cachedInputPrice: 1.5 / 1e6,
    requestPrice: 0,
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
{
  test: "only",
  providerId: "openai",
  modelName: "gpt-5-pro",
  inputPrice: 15 / 1e6,
  outputPrice: 120.0 / 1e6,
  cachedInputPrice: 1.5 / 1e6,
  requestPrice: 0,
  contextSize: 400000,
  maxOutput: 272000,
  streaming: true,
  reasoning: true,
  reasoningOutput: "omit",
  vision: true,
  tools: true,
},
```
🤖 Prompt for AI Agents
In packages/models/src/models/openai.ts around lines 578 to 592, the gpt-5-pro
model object is missing the cachedInputPrice field used by other GPT-5 entries;
add cachedInputPrice with the same 10%-of-input-price convention (i.e., set
cachedInputPrice to 1.5 / 1e6 given inputPrice is 15 / 1e6) so the model follows
the same pricing pattern as gpt-5, gpt-5-mini, and gpt-5-nano.
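To make the 10% convention explicit, a tiny sketch; the ratio comes from the review comment above and is assumed here, not taken from OpenAI's published pricing:

```ts
// Derive cachedInputPrice from inputPrice using the 10% convention that the
// review describes for the gpt-5 family. The ratio is an assumption.
const inputPrice = 15 / 1e6;
const cachedInputPrice = inputPrice * 0.1; // equals 1.5 / 1e6
```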
Add `supportedParameters` array consistent with other GPT-5 models.
All other GPT-5 family models (gpt-5, gpt-5-mini, gpt-5-nano) include a `supportedParameters` array documenting which API parameters are supported. This model should follow the same pattern.
Apply this diff to add the missing field:
```diff
    vision: true,
    tools: true,
+   supportedParameters: [
+     "temperature",
+     "top_p",
+     "frequency_penalty",
+     "presence_penalty",
+     "response_format",
+     "tools",
+     "tool_choice",
+   ],
  },
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
{
  test: "only",
  providerId: "openai",
  modelName: "gpt-5-pro",
  inputPrice: 15 / 1e6,
  outputPrice: 120.0 / 1e6,
  requestPrice: 0,
  contextSize: 400000,
  maxOutput: 272000,
  streaming: true,
  reasoning: true,
  reasoningOutput: "omit",
  vision: true,
  tools: true,
  supportedParameters: [
    "temperature",
    "top_p",
    "frequency_penalty",
    "presence_penalty",
    "response_format",
    "tools",
    "tool_choice",
  ],
},
```
🤖 Prompt for AI Agents
In packages/models/src/models/openai.ts around lines 578 to 592, the gpt-5-pro
model entry is missing the supportedParameters array present on other GPT-5
models; add a supportedParameters field for this model immediately inside the
object with the same parameter names and values used by the other GPT-5 family
entries (gpt-5, gpt-5-mini, gpt-5-nano) so the list of supported API parameters
is consistent and follows the same format as those models.
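As a hedged illustration of why the array is useful downstream, a hypothetical helper that drops request options a model does not list; the function and option names are assumptions, not code from this repository:

```ts
// Hypothetical helper: keep only request options named in a model's
// supportedParameters array. This function is illustrative, not repo code.
type RequestOptions = Record<string, unknown>;

function filterSupportedOptions(
  options: RequestOptions,
  supportedParameters: readonly string[],
): RequestOptions {
  return Object.fromEntries(
    Object.entries(options).filter(([key]) => supportedParameters.includes(key)),
  );
}

// Example: "max_tokens" is dropped because it is not in the list.
const filtered = filterSupportedOptions(
  { temperature: 0.2, top_p: 0.9, max_tokens: 1024 },
  ["temperature", "top_p", "frequency_penalty", "presence_penalty"],
);
console.log(filtered); // { temperature: 0.2, top_p: 0.9 }
```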
Removed the 'test' property from the OpenAI provider configuration.
Added definitions for the new GPT-5 Pro model, including pricing, provider details, and capabilities.
Summary by CodeRabbit
New Features
- Added the GPT-5 Pro model definition, including pricing, provider details, and capabilities.
Chores
- Removed the 'test' property from the OpenAI provider configuration.