LCORE-1239: Integration tests for models REST API endpoint #1157
Walkthrough: Introduces integration tests for the models REST API endpoint.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/integration/endpoints/test_model_list.py (2)
131-141: Copy-pasted docstrings across filter tests don't describe the specific scenario.

The docstrings for `test_models_list_filter_model_type_llm`, `test_models_list_filter_model_type_embedding`, and `test_models_list_filter_model_type_unknown` are all identical and say "Model list handler." Consider updating each to reflect the specific filter behavior being tested (e.g., "filters by LLM model type", "returns empty list for unknown model type").

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_model_list.py` around lines 131 - 141, The three tests test_models_list_filter_model_type_llm, test_models_list_filter_model_type_embedding, and test_models_list_filter_model_type_unknown currently share a generic docstring; update each function's docstring to describe the specific scenario being tested (e.g., "filters models by LLM model type" for test_models_list_filter_model_type_llm, "filters models by embedding model type" for test_models_list_filter_model_type_embedding, and "returns empty list or no matches for unknown model type" for test_models_list_filter_model_type_unknown) so the docstrings clearly reflect the filter behavior under test.
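As a sketch, the tailored docstrings might look like the following. The function names come from the review; the empty bodies are placeholders, not the actual test code:

```python
# Sketch only: the real tests take fixtures and make assertions; bodies are
# elided here to focus on the docstrings.
async def test_models_list_filter_model_type_llm():
    """Test that the models endpoint filters models by LLM model type."""

async def test_models_list_filter_model_type_embedding():
    """Test that the models endpoint filters models by embedding model type."""

async def test_models_list_filter_model_type_unknown():
    """Test that the models endpoint returns an empty list for an unknown model type."""
```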
244-249: Brittle assertion: exact error message match may break on minor format changes.

The `match` string is passed to `re.search()` and contains unescaped regex metacharacters (`.`, `{`, `}`). While these happen to work here, this assertion couples the test tightly to the exact string representation of `HTTPException`. Consider asserting on the exception's `status_code` and `detail` attributes instead, which is more robust.

Proposed refactor:

```diff
-        expected = "503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}"
-        with pytest.raises(HTTPException, match=expected):
+        with pytest.raises(HTTPException) as exc_info:
             await models_endpoint_handler(
                 request=test_request,
                 auth=test_auth,
                 model_type=ModelFilter(model_type=None),
             )
+        assert exc_info.value.status_code == 503
+        assert "Unable to connect to Llama Stack" in str(exc_info.value.detail)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_model_list.py` around lines 244 - 249, The current test uses a brittle regex match on the stringified HTTPException; instead capture the exception with pytest.raises(...) as excinfo when calling models_endpoint_handler (with test_request, test_auth, ModelFilter) and assert on excinfo.value.status_code == 503 and that excinfo.value.detail (or str(excinfo.value.detail)) contains the expected substrings like "Unable to connect to Llama Stack" and "Connection error" to avoid relying on exact formatting; update assertions to check these attributes rather than using match on the full message.
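To illustrate why attribute-level assertions survive formatting changes, here is a minimal self-contained sketch. The `HTTPException` class below is a stand-in for FastAPI's, and `failing_handler` is hypothetical; neither reproduces the actual handler or fixtures from the test file:

```python
class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException (status_code + detail attributes)."""
    def __init__(self, status_code, detail):
        super().__init__(f"{status_code}: {detail}")
        self.status_code = status_code
        self.detail = detail

def failing_handler():
    # Hypothetical handler mimicking the 503 raised on a connection failure.
    raise HTTPException(
        status_code=503,
        detail={"response": "Unable to connect to Llama Stack",
                "cause": "Connection error."},
    )

try:
    failing_handler()
except HTTPException as exc:
    # These checks keep passing even if the message formatting changes.
    assert exc.status_code == 503
    assert "Unable to connect to Llama Stack" in str(exc.detail)
```

The point is that `status_code` and `detail` are part of the exception's contract, while the stringified message is an implementation detail.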
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/endpoints/test_model_list.py`:
- Around line 222-250: Update the test_models_list_on_api_connection_error
docstring and inline comment to correctly describe that the test asserts an
error path (expects a 503 HTTPException) instead of a successful response,
reference the actual fixture name mock_llama_stack_client_failing in the
Parameters section (replace mock_llama_stack_client), and fix the typo in the
inline comment ("we should caught" → "we should catch") so the docstring and
comment accurately describe the behavior of models_endpoint_handler when the
Llama Stack client cannot connect.
Force-pushed df16c91 to 3b4546f
🧹 Nitpick comments (2)
tests/integration/endpoints/test_model_list.py (2)
244-251: The `match` string is interpreted as a regex, so the unescaped `.` could cause fragile matching.

`pytest.raises(match=...)` uses `re.search` under the hood. The `.` in `'Connection error.'` matches any character, not just a literal dot. While unlikely to cause a false positive here, using `re.escape()` would make the intent clearer and the test more robust against message changes.

Suggested fix:

```diff
+        import re
         # we should catch HTTPException, not APIConnectionError!
-        expected = "503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}"
+        expected = re.escape("503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}")
         with pytest.raises(HTTPException, match=expected):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_model_list.py` around lines 244 - 251, The test's expected string is passed as a regex to pytest.raises(match=...), so escape it to avoid regex metacharacters: wrap the expected value with re.escape() before passing to pytest.raises (or assign escaped_expected = re.escape(expected)) and import the re module; update the pytest.raises call that wraps models_endpoint_handler (and references ModelFilter) to use the escaped string so the assertion matches the literal error message.
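A quick standalone illustration of the difference. The message string is copied from the review; the short demo strings are made up for illustration:

```python
import re

msg = "503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}"

# pytest.raises(match=...) calls re.search(pattern, str(exc)); with a raw
# pattern, '.' matches any character instead of a literal dot.
assert re.search("error.", "Connection errorX") is not None          # false positive
assert re.search(re.escape("error."), "Connection errorX") is None   # literal match only

# Escaping the full expected message keeps the comparison literal.
assert re.search(re.escape(msg), msg) is not None
```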
131-134: Docstrings are copy-pasted and don't describe each test's specific purpose.

All four success-path tests share the identical docstring "Test that models endpoint returns successful response" / "Model list handler". Consider tailoring each to describe what it actually verifies (e.g., "filters by LLM model type", "returns empty list for unknown model type").
Also applies to: 163-166, 197-200
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_model_list.py` around lines 131 - 134, Several tests use the generic docstring "Test that models endpoint returns successful response" / "Model list handler"; replace each with a specific description of what that test verifies (e.g., "returns models filtered by LLM model type", "returns empty list for unknown model type", "includes pagination metadata", "returns all available models when no filter provided") so the intent of each success-path test is clear; locate the tests by the existing docstring text and update all occurrences (including the other two identical docstrings) to reflect the precise behavior asserted.
Description
LCORE-1239: Integration tests for models REST API endpoint
Type of change
Tools used to create PR
Related Tickets & Documents
Summary by CodeRabbit