LCORE-1239: better query parameter definition #1134
tisnik merged 3 commits into lightspeed-core:main from
Conversation
Walkthrough

This PR refactors the models endpoint query parameter handling: the model_type filter is now defined through a typed `ModelFilter` model used as a query parameter.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/openapi.json (1)
4349-4373: ⚠️ Potential issue | 🟠 Major: Make the /a2a POST operationId unique.
Line 4355 and Line 4373 currently use the same operationId (`handle_a2a_jsonrpc_a2a_get`). OpenAPI requires unique operationIds, and this will break client codegen and tooling. Set the POST operationId to a distinct value (e.g., a `_post` suffix).

🔧 Proposed fix
- "operationId": "handle_a2a_jsonrpc_a2a_get", + "operationId": "handle_a2a_jsonrpc_a2a_post",
🧹 Nitpick comments (3)
src/app/endpoints/models.py (1)
80-80: Consider renaming the parameter to `model_filter` for clarity.

Currently `model_type.model_type` (lines 121/125) reads awkwardly because the parameter and the field share the same name. Since FastAPI derives query-parameter names from the `BaseModel` field names (not the function-parameter name), renaming this won't change the API contract.

Proposed rename

```diff
 async def models_endpoint_handler(
     request: Request,
     auth: Annotated[AuthTuple, Depends(get_auth_dependency())],
-    model_type: Annotated[ModelFilter, Query()],
+    model_filter: Annotated[ModelFilter, Query()],
 ) -> ModelsResponse:
```

And update references at lines 121/125:
```diff
-    if model_type.model_type is not None:
+    if model_filter.model_type is not None:
         parsed_models = [
             model
             for model in parsed_models
-            if model["model_type"] == model_type.model_type
+            if model["model_type"] == model_filter.model_type
         ]
```
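For context, this is the FastAPI query-parameter-model pattern the endpoint relies on. A minimal standalone sketch, assuming FastAPI ≥ 0.115 (which supports Pydantic models as query parameters); the names here are illustrative, not the project's actual code:

```python
# Minimal sketch: the query parameter name comes from the Pydantic field
# ("model_type"), not from the function parameter ("model_filter"), so the
# suggested rename does not change the public API.
from typing import Annotated, Optional

from fastapi import FastAPI, Query
from pydantic import BaseModel

app = FastAPI()


class ModelFilter(BaseModel):
    model_type: Optional[str] = None


@app.get("/v1/models")
async def list_models(model_filter: Annotated[ModelFilter, Query()]) -> dict:
    # GET /v1/models?model_type=llm populates model_filter.model_type
    return {"model_type": model_filter.model_type}
```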
src/models/requests.py (1)

533-538: Confirm the strict query parameter validation is intentional and properly documented.

The `extra: "forbid"` configuration on `ModelFilter` is technically correct and will reject unexpected query parameters with a 422 response. However, this strict validation isn't documented in the OpenAPI schema (which shows only 200, 401, 403, 500, 503 for `/v1/models`), nor is there a test verifying this behavior.

If this strictness is intentional (and the "Reject unknown fields" comment suggests it is), either:

- Add `422` to the OpenAPI responses for the `/models` endpoint, or
- Add a test case explicitly verifying that extra query parameters are rejected (see the sketch after this comment)
This prevents API consumers from being unexpectedly rejected with a 422 when they send unknown query parameters.
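If the test route is taken, one possible shape for it, as a hedged sketch: the app import path is an assumption, and auth setup is omitted for brevity (a real test would need to satisfy or mock the auth dependency).

```python
# Hypothetical test sketch, not part of this PR: with extra="forbid" on
# ModelFilter, an unknown query parameter should be rejected with HTTP 422.
from fastapi.testclient import TestClient

from app.main import app  # assumption: actual import path may differ

client = TestClient(app)


def test_models_rejects_unknown_query_parameter() -> None:
    response = client.get("/v1/models", params={"unknown_param": "x"})
    assert response.status_code == 422
```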
docs/openapi.md (1)
270-270: Missing type for model_type parameter.

The Type column is empty for the `model_type` parameter. Consider specifying `string` to match the underlying `Optional[str]` field in `ModelFilter`.

```diff
-| model_type | | False | Optional filter to return only models matching this type |
+| model_type | string | False | Optional filter to return only models matching this type |
```
```python
class ModelFilter(BaseModel):
    """Model representing a query parameter to select models by its type.

    Attributes:
        model_type: Required model type, such as 'llm', 'embeddings' etc.
    """
```
Docstring says "Required" but the field is Optional.
The Attributes section states "Required model type" but model_type is Optional[str] with a default of None. The field description also says "Optional filter." Update the docstring for consistency.
Proposed fix
- """Model representing a query parameter to select models by its type.
+ """Model representing a query parameter to filter models by type.
Attributes:
- model_type: Required model type, such as 'llm', 'embeddings' etc.
+ model_type: Optional model type filter, such as 'llm', 'embeddings' etc.
"""🤖 Prompt for AI Agents
In `@src/models/requests.py` around lines 526 - 531, The docstring for the
ModelFilter class is inconsistent: it calls model_type "Required" but the field
model_type is Optional[str] with default None and described elsewhere as an
optional filter; update the class docstring (ModelFilter) Attributes section so
model_type is described as an optional filter (e.g., "Optional model type, such
as 'llm', 'embeddings', etc.") and remove or replace the word "Required" to
reflect the Optional[str] typing and default None.
```diff
     with pytest.raises(HTTPException) as e:
-        await models_endpoint_handler(request=request, auth=auth)
+        await models_endpoint_handler(
+            request=request, auth=auth, model_type=ModelFilter(model_type=None)
+        )
         assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
         assert e.value.detail["response"] == "Configuration is not loaded"  # type: ignore
```
Assertions inside pytest.raises block are unreachable.
Lines 56-57 are inside the with pytest.raises(HTTPException) block. When the await on line 53 raises HTTPException, the context manager catches it and control exits the with block — the assertions on lines 56-57 never execute. This is a pre-existing issue, but since you're modifying this block, consider moving the assertions outside:
Proposed fix
```diff
     with pytest.raises(HTTPException) as e:
         await models_endpoint_handler(
             request=request, auth=auth, model_type=ModelFilter(model_type=None)
         )
-        assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
-        assert e.value.detail["response"] == "Configuration is not loaded"  # type: ignore
+    assert e.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
+    assert e.value.detail["response"] == "Configuration is not loaded"  # type: ignore
```

🤖 Prompt for AI Agents
In `@tests/unit/app/endpoints/test_models.py` around lines 52 - 57, The assertions
inside the pytest.raises context are unreachable; move the two asserts that
check e.value.status_code and e.value.detail outside the with block so they run
after the exception is caught. Keep the with pytest.raises(HTTPException) as e:
wrapper around the await models_endpoint_handler(...) call (which uses
ModelFilter(model_type=None)), then dedent the two assertions so they execute
against e.value after the with block completes.
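For reference, the pattern in isolation, as a self-contained illustration rather than this repository's test code (assumes pytest-asyncio for the async test): an assertion placed after the `with` block runs against the captured exception, while any statement after the raising call inside the block is skipped.

```python
# Hypothetical example of correct pytest.raises assertion placement.
import pytest
from fastapi import HTTPException


async def failing_handler() -> None:
    raise HTTPException(status_code=500, detail={"response": "Configuration is not loaded"})


@pytest.mark.asyncio
async def test_failing_handler_raises() -> None:
    with pytest.raises(HTTPException) as e:
        await failing_handler()
        # Never reached: the exception above exits the with block immediately.
    # Assertions belong here, outside the with block, where they actually run.
    assert e.value.status_code == 500
    assert e.value.detail["response"] == "Configuration is not loaded"
```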
```diff
     with pytest.raises(HTTPException) as e:
-        await models_endpoint_handler(request=request, auth=auth)
+        await models_endpoint_handler(
+            request=request, auth=auth, model_type=ModelFilter(model_type=None)
+        )
         assert e.value.status_code == status.HTTP_503_SERVICE_UNAVAILABLE
         assert e.value.detail["response"] == "Unable to connect to Llama Stack"  # type: ignore
```
Same unreachable-assertions bug as above.
Lines 472-473 are inside the with pytest.raises block and will never execute. Move them outside the with block, consistent with the correct pattern used in test_models_endpoint_handler_configuration_loaded (lines 121-126).
Proposed fix
```diff
     with pytest.raises(HTTPException) as e:
         await models_endpoint_handler(
             request=request, auth=auth, model_type=ModelFilter(model_type=None)
         )
-        assert e.value.status_code == status.HTTP_503_SERVICE_UNAVAILABLE
-        assert e.value.detail["response"] == "Unable to connect to Llama Stack"  # type: ignore
-        assert "Unable to connect to Llama Stack" in e.value.detail["cause"]  # type: ignore
+    assert e.value.status_code == status.HTTP_503_SERVICE_UNAVAILABLE
+    assert e.value.detail["response"] == "Unable to connect to Llama Stack"  # type: ignore
+    assert "Unable to connect to Llama Stack" in e.value.detail["cause"]  # type: ignore
```

🤖 Prompt for AI Agents
In `@tests/unit/app/endpoints/test_models.py` around lines 468 - 473, The two
assertions that check the raised HTTPException's status and detail are
incorrectly placed inside the pytest.raises context and therefore never
executed; after calling models_endpoint_handler inside the with
pytest.raises(HTTPException) as e: block, move the asserts that reference
e.value (status_code and detail["response"]) to immediately after the with block
so they run against the captured exception (reference: models_endpoint_handler
and the pytest.raises usage in this test).
Description
LCORE-1239: better query parameter definition
Type of change
Tools used to create PR
Related Tickets & Documents
Summary by CodeRabbit
Release Notes
New Features
- Added a `model_type` query parameter to the Models endpoint for filtering models by type (supports "llm" and "embeddings").

Documentation
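As a client-side usage sketch for the new `model_type` filter (base URL and port are assumptions for illustration only):

```python
# Hypothetical client call filtering the models list by type.
import requests

response = requests.get(
    "http://localhost:8080/v1/models",  # assumed base URL
    params={"model_type": "llm"},  # or "embeddings"
    timeout=10,
)
response.raise_for_status()
print(response.json())
```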