fix: fix task nonstopping error if model key is expired #1017
Closed (+257 −1)
Commits (26)

834c024 fix task nonstopping error if model key is expired
da4a0a4 update
dc1974d update
239b446 update
523b51f Merge branch 'main' into fix/fix_model_key_ui_nonstopping
6ff0fe9 update (bytecii)
a0fcd12 update
e0d034e update
04514b0 Merge branch 'main' into fix/fix_model_key_ui_nonstopping
c2dd965 update (bytecii)
b040f51 update
c71aece update
71cc904 update
38ed502 Merge branch 'main' into fix/fix_model_key_ui_nonstopping
773a092 Merge branch 'main' into fix/fix_model_key_ui_nonstopping (bytecii)
0ca76d3 update (bytecii)
33fadc8 update
54c7a72 update
ef38bb2 update
746a7e0 update
da3d655 update
2d76b71 update
15eed95 update
766c8f5 update (bytecii)
5f5d3d0 update (bytecii)
2480ea0 Merge branch 'main' into fix/fix_model_key_ui_nonstopping (bytecii)
@@ -25,9 +25,11 @@
 from pydantic import BaseModel
 from typing_extensions import TypedDict

+from app.component.model_validation import create_agent
+from app.exception.exception import ProgramException
 from app.model.chat import (
     AgentModelConfig,
     Chat,
     McpServers,
     SupplementChat,
     UpdateData,

@@ -674,3 +676,52 @@ def set_process_task(process_task_id: str):
         yield
     finally:
         process_task.reset(origin)
+
+
+async def validate_model_before_task(options: Chat) -> tuple[bool, str | None]:
+    """
+    Validate model configuration before starting a task.
+    Makes a simple test request to ensure the API key and model are valid.
+
+    Args:
+        options (Chat): Chat options containing model configuration.
+
+    Returns:
+        (is_valid, error_message)
+        - is_valid: True if validation passed
+        - error_message: Raw error message if validation failed,
+          None otherwise
+    """
+    try:
+        logger.info(
+            f"Validating model configuration for task {options.task_id}"
+        )
+
+        # Create test agent with same config as task will use
+        agent = create_agent(
+            model_platform=options.model_platform,
+            model_type=options.model_type,
+            api_key=options.api_key,
+            url=options.api_url,
+            model_config_dict=options.model_config,
+        )
+
+        # Make a simple test call in executor to avoid blocking
+        loop = asyncio.get_event_loop()
+        await loop.run_in_executor(None, lambda: agent.step("test"))
Collaborator comment (on lines +711 to +712): Should we also add a timeout externally? For example:
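The reviewer's example snippet is not included above. As an illustration only, one way an external timeout could wrap the blocking test call is asyncio.wait_for; the helper name step_with_timeout and the 30-second default are assumptions, not part of this PR.

```python
import asyncio


async def step_with_timeout(agent, timeout_s: float = 30.0) -> tuple[bool, str | None]:
    """Hypothetical helper: run the blocking test step with an external timeout.

    Sketch only; the 30-second default is an assumed value, not taken from the PR.
    """
    loop = asyncio.get_event_loop()
    try:
        # Bound the validation call so an unresponsive provider cannot hang the task.
        await asyncio.wait_for(
            loop.run_in_executor(None, lambda: agent.step("test")),
            timeout=timeout_s,
        )
        return True, None
    except asyncio.TimeoutError:
        return False, "Model validation timed out"
```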
+
+        logger.info(f"Model validation passed for task {options.task_id}")
+        return True, None
+
+    except Exception as e:
+        error_msg = str(e)
+        logger.error(
+            f"Model validation failed for task {options.task_id}: {error_msg}",
+            extra={
+                "project_id": options.project_id,
+                "task_id": options.task_id,
+                "error": error_msg,
+            },
+            exc_info=True,
+        )
+        return False, error_msg
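For context, here is a hedged sketch of how a caller could consume the (is_valid, error_message) tuple so the task stops up front when the key is expired or invalid; start_task and the response shape are hypothetical and not taken from this PR.

```python
from app.model.chat import Chat
from app.service.task import validate_model_before_task


async def start_task(options: Chat) -> dict:
    """Hypothetical caller: validate the model first, stop early on failure."""
    is_valid, error_msg = await validate_model_before_task(options)
    if not is_valid:
        # Surface the raw provider error and end here, instead of letting the
        # task keep running with an expired or invalid key.
        return {"status": "failed", "error": error_msg}
    # Otherwise proceed with the normal task flow (omitted in this sketch).
    return {"status": "started"}
```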
New test file added in this PR:

@@ -0,0 +1,176 @@
| """ | ||
| Unit tests for validate_model_before_task function. | ||
|
|
||
| TODO: Rename this file to test_task.py after fixing errors | ||
| in backend/tests/unit/service/test_task.py | ||
| """ | ||
|
|
||
| from unittest.mock import Mock, patch | ||
|
|
||
| import pytest | ||
| from camel.types import ModelPlatformType | ||
|
|
||
| from app.model.chat import Chat | ||
| from app.service.task import validate_model_before_task | ||
|
|
||
| # Test data constants | ||
| TEST_PROJECT_ID = "test_project" | ||
| TEST_TASK_ID = "test_task_123" | ||
| TEST_QUESTION = "Test question" | ||
| TEST_EMAIL = "test@example.com" | ||
| TEST_MODEL_PLATFORM = ModelPlatformType.OPENAI | ||
| TEST_MODEL_TYPE = "gpt-4o" | ||
| TEST_API_URL = "https://api.openai.com/v1" | ||
| TEST_VALID_API_KEY = "sk-valid-key" | ||
| TEST_INVALID_API_KEY = "sk-invalid-key" | ||
|
|
||
|
|
||
| @pytest.mark.asyncio | ||
| async def test_validate_model_success(): | ||
| """Test successful model validation.""" | ||
| options = Chat( | ||
| project_id=TEST_PROJECT_ID, | ||
| task_id=TEST_TASK_ID, | ||
| question=TEST_QUESTION, | ||
| email=TEST_EMAIL, | ||
| model_platform=TEST_MODEL_PLATFORM, | ||
| model_type=TEST_MODEL_TYPE, | ||
| api_key=TEST_VALID_API_KEY, | ||
| api_url=TEST_API_URL, | ||
| model_config={}, | ||
| ) | ||
|
|
||
| # Mock the create_agent and agent.step | ||
| mock_agent = Mock() | ||
| mock_agent.step = Mock(return_value="test response") | ||
|
|
||
| with patch("app.service.task.create_agent", return_value=mock_agent): | ||
| is_valid, error_msg = await validate_model_before_task(options) | ||
|
|
||
| assert is_valid is True | ||
| assert error_msg is None | ||
|
|
||
|
|
||
| @pytest.mark.asyncio | ||
| async def test_validate_model_invalid_api_key(): | ||
| """Test model validation with invalid API key.""" | ||
| options = Chat( | ||
| project_id=TEST_PROJECT_ID, | ||
| task_id=TEST_TASK_ID, | ||
| question=TEST_QUESTION, | ||
| email=TEST_EMAIL, | ||
| model_platform=TEST_MODEL_PLATFORM, | ||
| model_type=TEST_MODEL_TYPE, | ||
| api_key=TEST_INVALID_API_KEY, | ||
| api_url=TEST_API_URL, | ||
| model_config={}, | ||
| ) | ||
|
|
||
| # Mock the create_agent to raise authentication error | ||
| with patch("app.service.task.create_agent") as mock_create: | ||
| mock_agent = Mock() | ||
| mock_agent.step = Mock( | ||
| side_effect=Exception("Error code: 401 - Invalid API key") | ||
| ) | ||
| mock_create.return_value = mock_agent | ||
|
|
||
| is_valid, error_msg = await validate_model_before_task(options) | ||
|
|
||
| assert is_valid is False | ||
| assert error_msg is not None | ||
| assert "401" in error_msg or "Invalid API key" in error_msg | ||
|
|
||
|
|
||
| @pytest.mark.asyncio | ||
| async def test_validate_model_network_error(): | ||
| """Test model validation with network error.""" | ||
| options = Chat( | ||
| project_id=TEST_PROJECT_ID, | ||
| task_id=TEST_TASK_ID, | ||
| question=TEST_QUESTION, | ||
| email=TEST_EMAIL, | ||
| model_platform=TEST_MODEL_PLATFORM, | ||
| model_type=TEST_MODEL_TYPE, | ||
| api_key=TEST_VALID_API_KEY, | ||
| api_url="https://invalid-url.com", | ||
| model_config={}, | ||
| ) | ||
|
|
||
| # Mock the create_agent to raise network error | ||
| with patch("app.service.task.create_agent") as mock_create: | ||
| mock_agent = Mock() | ||
| mock_agent.step = Mock(side_effect=Exception("Connection error")) | ||
| mock_create.return_value = mock_agent | ||
|
|
||
| is_valid, error_msg = await validate_model_before_task(options) | ||
|
|
||
| assert is_valid is False | ||
| assert error_msg is not None | ||
| assert "Connection error" in error_msg | ||
|
|
||
|
|
||
| @pytest.mark.asyncio | ||
| async def test_validate_model_with_custom_config(): | ||
| """Test model validation with custom model configuration.""" | ||
| custom_config = {"temperature": 0.7, "max_tokens": 1000} | ||
|
|
||
| options = Chat( | ||
| project_id=TEST_PROJECT_ID, | ||
| task_id=TEST_TASK_ID, | ||
| question=TEST_QUESTION, | ||
| email=TEST_EMAIL, | ||
| model_platform=TEST_MODEL_PLATFORM, | ||
| model_type=TEST_MODEL_TYPE, | ||
| api_key=TEST_VALID_API_KEY, | ||
| api_url=TEST_API_URL, | ||
| model_config=custom_config, | ||
| ) | ||
|
|
||
| mock_agent = Mock() | ||
| mock_agent.step = Mock(return_value="test response") | ||
|
|
||
| with patch( | ||
| "app.service.task.create_agent", return_value=mock_agent | ||
| ) as mock_create: | ||
| is_valid, error_msg = await validate_model_before_task(options) | ||
|
|
||
| # Verify create_agent was called | ||
| mock_create.assert_called_once() | ||
| call_args = mock_create.call_args | ||
| assert call_args.kwargs["model_platform"] == options.model_platform | ||
| assert call_args.kwargs["model_type"] == options.model_type | ||
| assert call_args.kwargs["api_key"] == options.api_key | ||
| assert call_args.kwargs["url"] == options.api_url | ||
|
|
||
| assert is_valid is True | ||
| assert error_msg is None | ||
|
|
||
|
|
||
| @pytest.mark.asyncio | ||
| async def test_validate_model_rate_limit_error(): | ||
| """Test model validation with rate limit error.""" | ||
| options = Chat( | ||
| project_id=TEST_PROJECT_ID, | ||
| task_id=TEST_TASK_ID, | ||
| question=TEST_QUESTION, | ||
| email=TEST_EMAIL, | ||
| model_platform=TEST_MODEL_PLATFORM, | ||
| model_type=TEST_MODEL_TYPE, | ||
| api_key=TEST_VALID_API_KEY, | ||
| api_url=TEST_API_URL, | ||
| model_config={}, | ||
| ) | ||
|
|
||
| # Mock the create_agent to raise rate limit error | ||
| with patch("app.service.task.create_agent") as mock_create: | ||
| mock_agent = Mock() | ||
| mock_agent.step = Mock( | ||
| side_effect=Exception("Error code: 429 - Rate limit exceeded") | ||
|
Collaborator comment (on lines +167 to +168): Should we treat a rate limit as an invalid-key signal? I think rate limiting is temporary.
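To illustrate the reviewer's point only (this is not code from the PR), the except block could classify the failure before declaring the key invalid; the is_transient_error helper and the substrings it checks are assumptions.

```python
def is_transient_error(error_msg: str) -> bool:
    """Return True for failures that should not be treated as an invalid key.

    Sketch only: the substrings checked here are assumptions about how the
    provider formats rate-limit errors, not logic taken from this PR.
    """
    msg = error_msg.lower()
    return "429" in msg or "rate limit" in msg


# Hypothetical usage inside validate_model_before_task's except block:
#     if is_transient_error(error_msg):
#         return True, None   # temporary condition, do not block the task
#     return False, error_msg
```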
        mock_create.return_value = mock_agent

        is_valid, error_msg = await validate_model_before_task(options)

    assert is_valid is False
    assert error_msg is not None
    assert "429" in error_msg or "Rate limit" in error_msg
Collaborator comment: Can we change this to get_running_loop()? get_event_loop() is headed for deprecation.
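A minimal sketch of the suggested change, assuming the executor call stays inside a coroutine as in the diff above; the run_test_step wrapper exists only to make the snippet self-contained.

```python
import asyncio


async def run_test_step(agent) -> None:
    # Inside a coroutine there is always a running loop, so
    # asyncio.get_running_loop() is the non-deprecated way to obtain it.
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(None, lambda: agent.step("test"))
```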