feat: Enhance streaming API timeout handling with mathematical modeling #492
This PR addresses GitHub issue #239 by implementing a comprehensive mathematical modeling approach to understand and solve the streaming API timeout issue that occurs after 64 seconds.
Problem
The streaming API setup was timing out after 64 seconds, causing user frustration and limiting the tool's effectiveness for large requests. The error message provided generic troubleshooting tips but didn't offer specific solutions based on the request characteristics.
Solution
We modeled the timeout behavior mathematically and built five pieces on top of that model:
1. Mathematical Modeling
We created a `StreamingTimeoutModel` that calculates the expected duration of a streaming request from its characteristics. This allows us to predict when timeouts will occur and to recommend appropriate solutions.
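In rough terms (the class shape, field names, and constants below are illustrative, not the actual contents of `streamingTimeoutModel.ts`), the model estimates a request's duration from its characteristics and compares that estimate to the configured timeout:

```typescript
// Illustrative sketch only; names and constants are hypothetical, not the shipped model.
interface RequestCharacteristics {
  promptTokens: number;         // size of the input prompt
  expectedOutputTokens: number; // rough estimate of the response size
  tokensPerSecond: number;      // observed or assumed generation throughput
  networkOverheadMs: number;    // connection setup and round-trip overhead
}

class StreamingTimeoutModel {
  /** Estimate how long the streaming request should take, in milliseconds. */
  estimateDurationMs(req: RequestCharacteristics): number {
    // Assume prompt processing is much cheaper than token generation.
    const effectiveTokens = req.promptTokens * 0.1 + req.expectedOutputTokens;
    const generationMs = (effectiveTokens / req.tokensPerSecond) * 1000;
    return generationMs + req.networkOverheadMs;
  }

  /** Predict whether a request is likely to exceed a given timeout. */
  willLikelyTimeout(req: RequestCharacteristics, timeoutMs: number): boolean {
    return this.estimateDurationMs(req) > timeoutMs;
  }
}
```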
2. Adaptive Timeout Calculation
Instead of fixed timeouts, we now calculate adaptive timeouts based on request characteristics:
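A minimal sketch of what that can look like, building on the hypothetical model above (the floor, ceiling, and safety factor are placeholder values, not the shipped defaults):

```typescript
// Illustrative only: derive the timeout from the estimated duration instead of a fixed value.
function adaptiveTimeoutMs(
  estimatedDurationMs: number, // e.g. from StreamingTimeoutModel.estimateDurationMs()
  baseTimeoutMs = 64_000,      // hypothetical floor (the old fixed timeout)
  maxTimeoutMs = 600_000,      // hypothetical ceiling so requests cannot hang indefinitely
  safetyFactor = 1.5,          // headroom over the estimate
): number {
  const padded = Math.round(estimatedDurationMs * safetyFactor);
  return Math.min(maxTimeoutMs, Math.max(baseTimeoutMs, padded));
}
```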
3. Enhanced Error Messaging
When timeouts occur, we now provide more specific troubleshooting guidance based on the request characteristics:
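For instance (the wording and thresholds below are invented for illustration; the real messages belong to the enhanced handling in `openaiContentGenerator.ts`), the guidance can be driven by the same duration estimate:

```typescript
// Illustrative sketch: build characteristic-specific guidance when a timeout occurs.
function timeoutGuidance(estimatedDurationMs: number, timeoutMs: number): string[] {
  const tips: string[] = [];
  if (estimatedDurationMs > timeoutMs) {
    tips.push(
      `The request was expected to take ~${Math.round(estimatedDurationMs / 1000)}s but the ` +
        `timeout is ${Math.round(timeoutMs / 1000)}s; raise it with --openai-timeout.`,
    );
  }
  if (estimatedDurationMs > 2 * timeoutMs) {
    tips.push('Consider splitting the request into smaller chunks.');
  }
  if (tips.length === 0) {
    tips.push(
      'The request should have fit within the timeout; check network connectivity or increase --openai-max-retries.',
    );
  }
  return tips;
}
```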
4. CLI Configuration Options
New CLI options allow users to configure timeout and retry behavior:
- `--openai-timeout`: Set the API timeout in milliseconds
- `--openai-max-retries`: Set the maximum number of retry attempts

5. Configuration Recommendations
The system now provides configuration recommendations based on analysis of current settings.
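Conceptually (again a hypothetical sketch, with invented names and headroom values), the recommendation step compares the current settings to the duration estimate and suggests adjustments:

```typescript
// Illustrative sketch: compare current settings against the duration estimate.
interface TimeoutSettings {
  timeoutMs: number;
  maxRetries: number;
}

function recommendSettings(current: TimeoutSettings, estimatedDurationMs: number): TimeoutSettings {
  return {
    // Give the estimate 50% headroom, but never recommend lowering the current timeout.
    timeoutMs: Math.max(current.timeoutMs, Math.round(estimatedDurationMs * 1.5)),
    // Long-running requests are more exposed to transient failures, so allow one extra retry.
    maxRetries:
      estimatedDurationMs > current.timeoutMs ? current.maxRetries + 1 : current.maxRetries,
  };
}
```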
Technical Implementation
Core Changes
- New `--openai-timeout` and `--openai-max-retries` configuration options

Files Modified
- `packages/core/src/models/streamingTimeoutModel.ts` - New mathematical model
- `packages/core/src/models/streamingTimeoutModel.test.ts` - Tests for the model
- `packages/core/src/models/streamingTimeoutModel.verification.test.ts` - Formal verification tests
- `packages/core/src/core/openaiContentGenerator.ts` - Enhanced timeout handling
- `packages/cli/src/config/config.ts` - Added CLI options
- `packages/core/src/index.ts` - Export for public API

Usage Examples
CLI Usage
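Something along these lines (the binary name and values are placeholders; the two flags are the ones added by this PR):

```bash
# Placeholder binary name and values; --openai-timeout and --openai-max-retries are the new flags.
my-cli --openai-timeout 180000 --openai-max-retries 5 "summarize this large repository"
```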
Configuration File
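The same settings should be expressible in the project's configuration file; the snippet below is only a guess at the shape (key names and file location are assumptions, not verified against `config.ts`):

```json
{
  "openaiTimeout": 180000,
  "openaiMaxRetries": 5
}
```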
Testing
All tests pass, including the new tests for the streaming timeout model.
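To run them locally, something like the following should work (exact script names depend on the repository's tooling and are assumptions here):

```bash
# Assumed commands; adjust to the repository's actual test scripts.
npm install
npm run test                              # full suite
npm run test -- streamingTimeoutModel     # filter to the new model tests, if the runner supports it
```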
Future Improvements
This solution transforms a frustrating timeout issue into an opportunity for intelligent, adaptive system behavior that improves the user experience for large and complex requests.
Fixes #239