Enhancing swagger and redocs #484
Open
nishika26 wants to merge 12 commits into `main` from `enhancement/swagger_docs`
Changes from all commits (12 commits):
All commits are by nishika26:

- 9d2fd3a: rearranging endpoints for swagger and redocs
- 763e798: fixing onboarding file
- 1851982: refactoring existing docs
- e364144: Merge branch 'main' into enhancement/swagger_docs
- 258bbfc: adding left endpoints docs
- ee5baf7: pr review and shifting api version to env example
- 88bbf4a: adding api version to env
- f97c2ef: coderabbit pr review
- ef4ca7a: coderabbit pr review
- d8f37af: rephrasing the docs a little
- 4fa355d: coderabbit review fixes
- c76b79a: Merge branch 'main' into enhancement/swagger_docs
@@ -0,0 +1,3 @@
Create a new API key for programmatic access to the platform.

The raw API key is returned **only once during creation**. Store it securely, as it cannot be retrieved again. Only the key prefix will be visible in subsequent requests for security reasons.
@@ -0,0 +1,3 @@
Delete an API key by its ID.

Permanently revokes the API key. Any requests using this key will fail immediately after deletion.
@@ -0,0 +1,3 @@
List all API keys for the current project.

Returns a paginated list of API keys, showing only the key prefix for security. The full key is shown only during creation and cannot be retrieved afterward.
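Since the raw key is shown exactly once, a client typically persists only a short identifying prefix. A minimal sketch of that pattern follows; the key value and prefix length are made up for illustration and are not the platform's actual key format:

```python
def key_prefix(raw_key: str, visible: int = 8) -> str:
    """Keep only the identifying prefix of a raw API key for display/storage."""
    return raw_key[:visible]

# The create endpoint returns the raw key once; store just the prefix.
raw = "sk_live_1a2b3c4d5e6f"  # illustrative value only
prefix = key_prefix(raw)      # "sk_live_"
```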
@@ -1,4 +1,4 @@
Retrieve detailed information about a specific collection by its collection ID. This endpoint returns the collection object including its project, organization, timestamps, and associated LLM service details (`llm_service_id` and `llm_service_name`).

- Additionally, if the `include_docs` flag in the request body is true, you will also get a list of document IDs associated with the collection. Note that the documents returned are stored not only by the AI platform but also by OpenAI.
+ Additionally, if the `include_docs` flag in the request body is true, you will also get a list of document IDs associated with the collection. Note that the documents returned are stored not only by the AI platform but also by the vector store provider.
@@ -1,6 +1,5 @@
- List _active_ collections -- collections that have been created but not deleted
+ List all _active_ collections that have been created and are not deleted.

- If a vector store was created, `llm_service_name` and `llm_service_id` in the response denote the name of the vector store (e.g. 'openai vector store') and its ID.
+ If a vector store was created, `llm_service_name` and `llm_service_id` in the response denote the name of the vector store (e.g. 'openai vector store') and its ID, respectively.

- [To be deprecated] If an assistant was created, `llm_service_name` and `llm_service_id` in the response denote the name of the model used in the assistant (e.g. 'gpt-4o') and the assistant ID.
+ [Deprecated] If an assistant was created, `llm_service_name` and `llm_service_id` in the response denote the name of the model used in the assistant (e.g. 'gpt-4o') and the assistant ID.
@@ -0,0 +1,3 @@
Persist new credentials for the current organization and project.

Credentials are encrypted and stored securely for provider integrations (OpenAI, Langfuse, etc.). Only one credential per provider is allowed per organization-project combination.
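The one-credential-per-provider rule described above can be sketched as a small in-memory store. The data shapes here are assumptions for illustration; the platform's actual storage and encryption are not shown:

```python
class CredentialStore:
    """Illustrative store enforcing one credential per provider
    for each organization/project pair (shapes are hypothetical)."""

    def __init__(self) -> None:
        # (org_id, project_id, provider) -> credential payload
        self._store: dict[tuple[int, int, str], dict] = {}

    def add(self, org_id: int, project_id: int, provider: str, credential: dict) -> None:
        key = (org_id, project_id, provider)
        if key in self._store:
            # mirrors the documented uniqueness constraint
            raise ValueError(f"credential for {provider!r} already exists")
        self._store[key] = credential

store = CredentialStore()
store.add(1, 10, "openai", {"api_key": "sk-..."})  # first credential: accepted
# store.add(1, 10, "openai", {...})                # second for same provider: raises
```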
@@ -0,0 +1,3 @@
Delete all credentials for the current organization and project.

Permanently removes all provider credentials from the current organization and project.
@@ -0,0 +1,3 @@
Delete credentials for a specific provider.

Permanently removes credentials for a specific provider from the current organization and project.
@@ -0,0 +1,3 @@
Get credentials for a specific provider.

Retrieves decrypted credentials for a specific provider (e.g., `openai`, `langfuse`) for the current organization and project.
@@ -0,0 +1,3 @@
Get all credentials for the current organization and project.

Returns a list of all provider credentials associated with your organization and project.
@@ -0,0 +1,3 @@
Update credentials for a specific provider.

Updates existing provider credentials for the current organization and project. Both the provider and the credential fields must be provided.
@@ -1 +1,3 @@
Retrieve all information about a given document.

If you set the ``include_url`` parameter to true, a signed URL will be included in the response, which is a clickable link to access the retrieved document. If you don't set it to true, the URL will not be included in the response.
@@ -1 +1,3 @@
Get the status and details of a document transformation job.

If you set the ``include_url`` parameter to true, a signed URL will be included in the response, which is a clickable link to access the transformed document if the job has been successful. If you don't set it to true, the URL will not be included in the response.
@@ -1 +1,3 @@
Get the status and details of multiple document transformation jobs by IDs.

If you set the ``include_url`` parameter to true, a signed URL will be included in the response, which is a clickable link to access the transformed document for successful jobs. If you don't set it to true, the URL will not be included in the response.
@@ -1 +1,3 @@
List documents uploaded to the AI platform.

If you set the ``include_url`` parameter to true, a signed URL will be included in the response, which is a clickable link to access the retrieved documents. If you don't set it to true, the URL will not be included in the response.
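The `include_url` contract used by these document endpoints can be sketched as follows; the `signed_url` field name and the response shape are assumptions for illustration, not the platform's actual schema:

```python
def attach_signed_url(document: dict, include_url: bool, signed_url: str) -> dict:
    """Return a response copy with a signed URL only when include_url is true."""
    response = dict(document)  # copy so the stored record is untouched
    if include_url:
        response["signed_url"] = signed_url
    return response

doc = {"id": 42, "filename": "report.pdf"}
with_url = attach_signed_url(doc, True, "https://example.com/signed/abc")
without_url = attach_signed_url(doc, False, "https://example.com/signed/abc")
```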
@@ -1,80 +1,46 @@

**Old description:**

Start an evaluation using OpenAI Batch API.

This endpoint:
1. Fetches the dataset from the database and validates that it has a Langfuse dataset ID
2. Creates an EvaluationRun record in the database
3. Fetches dataset items from Langfuse
4. Builds JSONL for batch processing (config is used as-is)
5. Creates a batch job via the generic batch infrastructure
6. Returns the evaluation run details with batch_job_id

The batch will be processed asynchronously by Celery Beat (every 60s). Use GET /evaluations/{evaluation_id} to check progress.

## Request Body

- **dataset_id** (required): ID of the evaluation dataset (from /evaluations/datasets)
- **experiment_name** (required): Name for this evaluation experiment/run
- **config** (optional): Configuration dict that will be used as-is in JSONL generation. Can include any OpenAI Responses API parameters, such as:
  - model: str (e.g., "gpt-4o", "gpt-5")
  - instructions: str
  - tools: list (e.g., [{"type": "file_search", "vector_store_ids": [...]}])
  - reasoning: dict (e.g., {"effort": "low"})
  - text: dict (e.g., {"verbosity": "low"})
  - temperature: float
  - include: list (e.g., ["file_search_call.results"])
  - Note: "input" will be added automatically from the dataset
- **assistant_id** (optional): Assistant ID to fetch configuration from. If provided, configuration will be fetched from the assistant in the database. Config can be passed as an empty dict {} when using assistant_id.

## Example with config

```json
{
  "dataset_id": 123,
  "experiment_name": "test_run",
  "config": {
    "model": "gpt-4.1",
    "instructions": "You are a helpful FAQ assistant.",
    "tools": [
      {
        "type": "file_search",
        "vector_store_ids": ["vs_12345"],
        "max_num_results": 3
      }
    ],
    "include": ["file_search_call.results"]
  }
}
```

## Example with assistant_id

```json
{
  "dataset_id": 123,
  "experiment_name": "test_run",
  "config": {},
  "assistant_id": "asst_xyz"
}
```

## Returns

EvaluationRunPublic with batch details and status:
- id: Evaluation run ID
- run_name: Name of the evaluation run
- dataset_name: Name of the dataset used
- dataset_id: ID of the dataset used
- config: Configuration used for the evaluation
- batch_job_id: ID of the batch job processing this evaluation
- status: Current status (pending, running, completed, failed)
- total_items: Total number of items being evaluated
- completed_items: Number of items completed so far
- results: Evaluation results (when completed)
- error_message: Error message if failed

## Error Responses

- **404**: Dataset or assistant not found or not accessible
- **400**: Missing required credentials (OpenAI or Langfuse), dataset missing Langfuse ID, or config missing required fields
- **500**: Failed to configure API clients or start batch evaluation

**New description:**

Start an evaluation run using the OpenAI Batch API.

Evaluations allow you to systematically test LLM configurations against predefined datasets, with automatic progress tracking and result collection.

**Key Features:**
* Fetches dataset items from Langfuse and creates a batch processing job via the OpenAI Batch API
* Asynchronous processing with automatic progress tracking (checks every 60s)
* Supports configuration from direct parameters or existing assistants
* Stores results for comparison and analysis
* Use `GET /evaluations/{evaluation_id}` to monitor progress and retrieve evaluation results

**Example: Using Direct Configuration**

```json
{
  "dataset_id": 123,
  "experiment_name": "gpt4_file_search_test",
  "config": {
    "model": "gpt-4o",
    "instructions": "You are a helpful FAQ assistant for farmers.",
    "tools": [
      {
        "type": "file_search",
        "vector_store_ids": ["vs_abc123"],
        "max_num_results": 5
      }
    ],
    "temperature": 0.7,
    "include": ["file_search_call.results"]
  }
}
```

**Example: Using Existing Assistant**

```json
{
  "dataset_id": 123,
  "experiment_name": "production_assistant_eval",
  "config": {},
  "assistant_id": "asst_xyz789"
}
```

**Note:** When using `assistant_id`, configuration is fetched from the assistant in the database. You can pass `config` as an empty object `{}`.
@@ -1,18 +1,3 @@

**Old description:**

Delete a dataset by ID.

This will remove the dataset record from the database. The CSV file in object store (if exists) will remain for audit purposes, but the dataset will no longer be accessible for creating new evaluations.

## Path Parameters

- **dataset_id**: ID of the dataset to delete

## Returns

Success message with deleted dataset details:
- message: Confirmation message
- dataset_id: ID of the deleted dataset

## Error Responses

- **404**: Dataset not found or not accessible to your organization/project
- **400**: Dataset cannot be deleted (e.g., has active evaluation runs)

**New description:**

Delete a dataset by ID.

This will remove the dataset record from the database. The CSV file in the object store (if it exists) will remain there for audit purposes, but the dataset will no longer be accessible for creating new evaluations.
@@ -1,22 +1,3 @@

**Old description:**

Get details of a specific dataset by ID.

Retrieves comprehensive information about a dataset including metadata, object store URL, and Langfuse integration details.

## Path Parameters

- **dataset_id**: ID of the dataset to retrieve

## Returns

DatasetUploadResponse with dataset details:
- dataset_id: Unique identifier for the dataset
- dataset_name: Name of the dataset (sanitized)
- total_items: Total number of items including duplication
- original_items: Number of original items before duplication
- duplication_factor: Factor by which items were duplicated
- langfuse_dataset_id: ID of the dataset in Langfuse
- object_store_url: URL to the CSV file in object storage

## Error Responses

- **404**: Dataset not found or not accessible to your organization/project

**New description:**

Get details of a specific dataset by ID.

Returns comprehensive dataset information including metadata (ID, name, item counts, duplication factor), Langfuse integration details (dataset ID), and the object store URL for the CSV file.
@@ -1,19 +1,3 @@

**Old description:**

List all datasets for the current organization and project.

Returns a paginated list of dataset records ordered by most recent first.

## Query Parameters

- **limit**: Maximum number of datasets to return (default 50, max 100)
- **offset**: Number of datasets to skip for pagination (default 0)

## Returns

List of DatasetUploadResponse objects, each containing:
- dataset_id: Unique identifier for the dataset
- dataset_name: Name of the dataset (sanitized)
- total_items: Total number of items including duplication
- original_items: Number of original items before duplication
- duplication_factor: Factor by which items were duplicated
- langfuse_dataset_id: ID of the dataset in Langfuse
- object_store_url: URL to the CSV file in object storage

**New description:**

List all datasets for the current organization and project.

Returns a paginated list of datasets ordered by most recent first. Each dataset includes metadata (ID, name, item counts, duplication factor), Langfuse integration details, and the object store URL.
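The pagination bounds stated in the earlier version of this docstring (`limit` default 50, max 100; `offset` default 0) can be sketched as a query-parameter helper; the clamping function is illustrative, not the server's implementation:

```python
def pagination_params(limit: int = 50, offset: int = 0) -> dict:
    """Clamp pagination query parameters to the documented bounds:
    limit defaults to 50 with a maximum of 100; offset is never negative."""
    return {"limit": max(1, min(limit, 100)), "offset": max(0, offset)}

params = pagination_params(limit=500)  # {"limit": 100, "offset": 0}
```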