
LCORE-1239: Integration tests for models REST API endpoint #1157

Merged
tisnik merged 1 commit into lightspeed-core:main from tisnik:lcore-1238-integration-tests-for-models-endpoint
Feb 17, 2026

Conversation

@tisnik
Contributor

@tisnik tisnik commented Feb 17, 2026

Description

LCORE-1239: Integration tests for models REST API endpoint

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

  • Assisted-by: N/A
  • Generated by: N/A

Related Tickets & Documents

  • Related Issue #LCORE-1239

Summary by CodeRabbit

  • Tests
    • Added integration tests for the models endpoint covering listing and filtering by model type (llm, embedding, unknown)
    • Simulated external client behaviors (success and connection failure) to validate response structure and error handling (expects 503 on upstream failure)
    • Async test coverage included for endpoint behavior under varied filter inputs

@coderabbitai
Contributor

coderabbitai bot commented Feb 17, 2026

Walkthrough

Introduces integration tests for the /models endpoint using a mocked Llama Stack client. Adds fixtures for successful responses and APIConnectionError scenarios, covering model listing, filtering by model_type, and error handling behavior.

Changes

Cohort / File(s): Models Endpoint Integration Tests (tests/integration/endpoints/test_model_list.py)
Summary: New test file with fixtures and asyncio test cases for the /models endpoint. Covers listing all models, filtering by model_type (llm, embedding, unknown), and API connection error handling (expects HTTPException with 503).
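
The test file itself is not shown in the thread, but from the fixture and handler names referenced in the review comments (mock_llama_stack_client, mock_llama_stack_client_failing, models_endpoint_handler, ModelFilter), the two client fixtures plausibly look something like this minimal sketch. The model fields, the AsyncMock shape, and the APIConnectionError construction are assumptions, not the repository's actual code:

```python
from types import SimpleNamespace
from unittest.mock import AsyncMock

import httpx
import pytest
from llama_stack_client import APIConnectionError  # assumed import path


@pytest.fixture
def mock_llama_stack_client():
    """Client stub whose models.list() resolves to two fake models."""
    client = AsyncMock()
    # Field names are assumptions; the real tests may use richer model objects.
    client.models.list.return_value = [
        SimpleNamespace(identifier="model-a", model_type="llm"),
        SimpleNamespace(identifier="model-b", model_type="embedding"),
    ]
    return client


@pytest.fixture
def mock_llama_stack_client_failing():
    """Client stub whose models.list() raises APIConnectionError."""
    client = AsyncMock()
    # Assumes the OpenAI-style constructor, whose default message is
    # "Connection error." (matching the assertion discussed below).
    client.models.list.side_effect = APIConnectionError(
        request=httpx.Request("GET", "http://localhost")
    )
    return client
```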

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The title clearly and specifically describes the main change: adding integration tests for the models REST API endpoint, which aligns with the changeset that introduces a comprehensive test suite for the /models endpoint.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/integration/endpoints/test_model_list.py (2)

131-141: Copy-pasted docstrings across filter tests don't describe the specific scenario.

The docstrings for test_models_list_filter_model_type_llm, test_models_list_filter_model_type_embedding, and test_models_list_filter_model_type_unknown are all identical and say "Model list handler." Consider updating each to reflect the specific filter behavior being tested (e.g., "filters by LLM model type", "returns empty list for unknown model type").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/endpoints/test_model_list.py` around lines 131 - 141, The
three tests test_models_list_filter_model_type_llm,
test_models_list_filter_model_type_embedding, and
test_models_list_filter_model_type_unknown currently share a generic docstring;
update each function's docstring to describe the specific scenario being tested
(e.g., "filters models by LLM model type" for
test_models_list_filter_model_type_llm, "filters models by embedding model type"
for test_models_list_filter_model_type_embedding, and "returns empty list or no
matches for unknown model type" for test_models_list_filter_model_type_unknown)
so the docstrings clearly reflect the filter behavior under test.
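
For illustration, tailored docstrings might read as follows; the signatures are abbreviated and the fixture parameter is an assumption:

```python
async def test_models_list_filter_model_type_llm(mock_llama_stack_client):
    """Test that the models endpoint returns only models of type llm."""


async def test_models_list_filter_model_type_embedding(mock_llama_stack_client):
    """Test that the models endpoint returns only models of type embedding."""


async def test_models_list_filter_model_type_unknown(mock_llama_stack_client):
    """Test that the models endpoint returns an empty list for an unknown model type."""
```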

244-249: Brittle assertion: exact error message match may break on minor format changes.

The match string is passed to re.search() and contains unescaped regex metacharacters (., {, }). While these happen to work here, this assertion couples the test tightly to the exact string representation of HTTPException. Consider asserting on the exception's status_code and detail attributes instead, which is more robust:

Proposed refactor
-    expected = "503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}"
-    with pytest.raises(HTTPException, match=expected):
+    with pytest.raises(HTTPException) as exc_info:
         await models_endpoint_handler(
             request=test_request,
             auth=test_auth,
             model_type=ModelFilter(model_type=None),
         )
+    assert exc_info.value.status_code == 503
+    assert "Unable to connect to Llama Stack" in str(exc_info.value.detail)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/endpoints/test_model_list.py` around lines 244 - 249, The
current test uses a brittle regex match on the stringified HTTPException;
instead capture the exception with pytest.raises(...) as excinfo when calling
models_endpoint_handler (with test_request, test_auth, ModelFilter) and assert
on excinfo.value.status_code == 503 and that excinfo.value.detail (or
str(excinfo.value.detail)) contains the expected substrings like "Unable to
connect to Llama Stack" and "Connection error" to avoid relying on exact
formatting; update assertions to check these attributes rather than using match
on the full message.
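
Assembled into a full test, the attribute-based assertion from the proposed refactor would look roughly like this; the handler call mirrors the diff above, while the import path and the asyncio marker are assumptions:

```python
import pytest
from fastapi import HTTPException

# Import paths below are hypothetical; the real module layout may differ.
from app.endpoints.models import ModelFilter, models_endpoint_handler


@pytest.mark.asyncio
async def test_models_list_on_api_connection_error(
    mock_llama_stack_client_failing, test_request, test_auth
):
    """Test that the handler maps APIConnectionError to a 503 HTTPException."""
    with pytest.raises(HTTPException) as exc_info:
        await models_endpoint_handler(
            request=test_request,
            auth=test_auth,
            model_type=ModelFilter(model_type=None),
        )
    # Assert on structured attributes instead of the stringified message.
    assert exc_info.value.status_code == 503
    assert "Unable to connect to Llama Stack" in str(exc_info.value.detail)
```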
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/integration/endpoints/test_model_list.py`:
- Around line 222-250: Update the test_models_list_on_api_connection_error
docstring and inline comment to correctly describe that the test asserts an
error path (expects a 503 HTTPException) instead of a successful response,
reference the actual fixture name mock_llama_stack_client_failing in the
Parameters section (replace mock_llama_stack_client), and fix the typo in the
inline comment ("we should caught" → "we should catch") so the docstring and
comment accurately describe the behavior of models_endpoint_handler when the
Llama Stack client cannot connect.


@tisnik tisnik force-pushed the lcore-1238-integration-tests-for-models-endpoint branch from df16c91 to 3b4546f on Feb 17, 2026 at 07:57
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
tests/integration/endpoints/test_model_list.py (2)

244-251: The match string is interpreted as a regex — unescaped . could cause fragile matching.

pytest.raises(match=...) uses re.search under the hood. The . in 'Connection error.' matches any character, not just a literal dot. While unlikely to cause a false positive here, using re.escape() would make the intent clearer and the test more robust against message changes.

Suggested fix
+    import re
     # we should catch HTTPException, not APIConnectionError!
-    expected = "503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}"
+    expected = re.escape("503: {'response': 'Unable to connect to Llama Stack', 'cause': 'Connection error.'}")
     with pytest.raises(HTTPException, match=expected):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/endpoints/test_model_list.py` around lines 244 - 251, The
test's expected string is passed as a regex to pytest.raises(match=...), so
escape it to avoid regex metacharacters: wrap the expected value with
re.escape() before passing to pytest.raises (or assign escaped_expected =
re.escape(expected)) and import the re module; update the pytest.raises call
that wraps models_endpoint_handler (and references ModelFilter) to use the
escaped string so the assertion matches the literal error message.
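
A quick illustration of the wildcard behavior the comment describes:

```python
import re

# Unescaped, '.' is a regex wildcard, so the pattern also matches
# a message ending in any other character:
assert re.search("Connection error.", "Connection errorX") is not None

# Escaped, only the literal string matches:
assert re.search(re.escape("Connection error."), "Connection errorX") is None
assert re.search(re.escape("Connection error."), "Connection error.") is not None
```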

131-134: Docstrings are copy-pasted and don't describe each test's specific purpose.

All four success-path tests share the identical docstring "Test that models endpoint returns successful response" / "Model list handler". Consider tailoring each to describe what it actually verifies (e.g., "filters by LLM model type", "returns empty list for unknown model type").

Also applies to: 163-166, 197-200

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/endpoints/test_model_list.py` around lines 131 - 134,
Several tests use the generic docstring "Test that models endpoint returns
successful response" / "Model list handler"; replace each with a specific
description of what that test verifies (e.g., "returns models filtered by LLM
model type", "returns empty list for unknown model type", "includes pagination
metadata", "returns all available models when no filter provided") so the intent
of each success-path test is clear; locate the tests by the existing docstring
text and update all occurrences (including the other two identical docstrings)
to reflect the precise behavior asserted.

@tisnik tisnik merged commit 4cf5b1a into lightspeed-core:main Feb 17, 2026
22 checks passed