1 change: 0 additions & 1 deletion Dockerfile
@@ -31,7 +31,6 @@ RUN uv pip install -e .

# Verify llama command is available
RUN uv run python -c "import llama_stack; print('llama-stack imported successfully')"
RUN uv run llama --help

# Expose port
EXPOSE 8321
11 changes: 4 additions & 7 deletions README.md
@@ -35,7 +35,7 @@ Manual procedure, assuming an existing PyPI API token available:
### Prerequisites

- Python >= 3.12
- Llama Stack == 0.2.22
- Llama Stack == 0.4.3
- pydantic >= 2.10.6

### Installation
@@ -163,13 +163,13 @@ shields:

```bash
# Test the redaction shield
curl -X POST "http://localhost:8321/v1/safety/run_shield" \
curl -X POST "http://localhost:8321/v1/safety/run-shield" \
-H "Content-Type: application/json" \
-d '{
"shield_id": "redaction-shield",
"messages": [
{
"role": "user",
"content": "My API key is abc123xyz and password is secret456"
}
]
@@ -180,11 +180,8 @@ curl -X POST "http://localhost:8321/v1/safety/run_shield" \
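The same shield test can be driven from Python; below is a minimal sketch using only the standard library. The endpoint path and payload shape mirror the curl example above; `BASE_URL` and the helper names are assumptions for a local server, not part of the llama-stack client API.

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:8321"  # assumed local Llama Stack server


def build_run_shield_payload(shield_id: str, content: str) -> dict:
    """Build the JSON body used by the run-shield safety endpoint."""
    return {
        "shield_id": shield_id,
        "messages": [{"role": "user", "content": content}],
    }


def run_shield(shield_id: str, content: str) -> dict:
    """POST the payload to the safety endpoint and return the parsed response."""
    payload = build_run_shield_payload(shield_id, content)
    req = Request(
        f"{BASE_URL}/v1/safety/run-shield",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)
```

For example, `run_shield("redaction-shield", "My API key is abc123xyz")` sends the same request as the curl command above.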
1. **Create provider directory**
```bash
mkdir -p ./providers.d/inline/safety/
mkdir -p ./providers.d/remote/tool_runtime/
curl -o ./providers.d/inline/safety/lightspeed_question_validity.yaml https://raw.githubusercontent.com/lightspeed-core/lightspeed-providers/refs/heads/main/resources/external_providers/inline/safety/lightspeed_question_validity.yaml
curl -o ./providers.d/inline/safety/lightspeed_redaction.yaml https://raw.githubusercontent.com/lightspeed-core/lightspeed-providers/refs/heads/main/resources/external_providers/inline/safety/lightspeed_redaction.yaml
curl -o ./providers.d/remote/tool_runtime/lightspeed.yaml https://raw.githubusercontent.com/lightspeed-core/lightspeed-providers/refs/heads/main/resources/external_providers/remote/tool_runtime/lightspeed.yaml
```
3. **Add external provider definition**
```yaml
@@ -17,30 +17,32 @@ agents:
- provider_id: lightspeed_inline_agent
provider_type: inline::lightspeed_inline_agent
config:
persistence_store:
type: sqlite
namespace: null
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/agents_store.db
responses_store:
type: sqlite
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/responses_store.db
persistence:
agent_state:
namespace: lightspeed_agents
backend: kv_default
responses:
table_name: lightspeed_responses
backend: sql_default
tools_filter:
# Optional: whether to enable tools filtering, default value is true
enabled: true
# Optional: the model to use for filtering; the default is the inference model used
model_id: ${env.INFERENCE_MODEL_FILTER:=}
# Optional: from how many tools we start filtering, default value is 10
min_tools: 10
# Optional: the file path of the system prompt, default value is None
system_prompt_path: ${env.FILTER_SYSTEM_PROMPT_PATH:=}
# Optional: the system prompt if not in a file;
# when system_prompt_path is defined, system_prompt will be the content of the indicated file;
# when system_prompt is empty, the default filtering system prompt is used
system_prompt: ${env.FILTER_SYSTEM_PROMPT:=}
# Optional: tools to always include (for example RAG tools) so they are not filtered out;
# the default is an empty list
always_include_tools:
- knowledge_search
# Optional: Override temperature for this agent (default: None)
chatbot_temperature_override: 1.0
...
external_providers_dir: ~/.llama/distributions/ollama/external_providers/
```
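The `tools_filter` options above can be read as the following sketch. This is a hypothetical illustration of the documented thresholds, not the provider's actual filtering code; `filtered` stands in for whatever selection the filtering model returns.

```python
def select_tools(
    available: list[str],
    always_include: list[str],
    filtered: set[str],
    min_tools: int = 10,
    enabled: bool = True,
) -> list[str]:
    """Illustrative sketch of the tools_filter behavior documented above.

    Filtering only applies when it is enabled and more than `min_tools`
    tools are available; tools in `always_include` are never dropped.
    """
    if not enabled or len(available) <= min_tools:
        # Below the threshold (or disabled): pass all tools through unchanged.
        return list(available)
    return [t for t in available if t in filtered or t in always_include]
```

With twelve tools available and `min_tools: 10`, only the model-selected tools plus anything in `always_include_tools` (such as `knowledge_search`) survive; with ten or fewer, no filtering happens at all.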
@@ -1,11 +1,16 @@
from typing import Any

from llama_stack.providers.datatypes import Api
from llama_stack.core.datatypes import AccessRule
from llama_stack_api import Api

from .config import LightspeedAgentsImplConfig


async def get_provider_impl(config: LightspeedAgentsImplConfig, deps: dict[Api, Any]):
async def get_provider_impl(
config: LightspeedAgentsImplConfig,
deps: dict[Api, Any],
policy: list[AccessRule],
):
# Configure litellm to drop unsupported params for models that reject them (e.g., top_p).
# This is safe to set globally since it only affects models that don't support these params.
import litellm
@@ -18,10 +23,13 @@ async def get_provider_impl(config: LightspeedAgentsImplConfig, deps: dict[Api,
config,
deps[Api.inference],
deps[Api.vector_io],
deps[Api.safety],
deps.get(Api.safety),
deps[Api.tool_runtime],
deps[Api.tool_groups],
[],
deps[Api.conversations],
deps[Api.prompts],
deps[Api.files],
policy,
)
await impl.initialize()
return impl
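The switch from `deps[Api.safety]` to `deps.get(Api.safety)` in the diff above makes the safety API an optional dependency. A stand-in sketch of the difference, using a toy enum rather than the real llama-stack types:

```python
from enum import Enum


class Api(Enum):
    # Stand-in for llama_stack_api.Api, for illustration only.
    inference = "inference"
    safety = "safety"


# A deps mapping where no safety provider is configured.
deps = {Api.inference: object()}

# deps[Api.safety] would raise KeyError here, aborting provider setup;
# deps.get(Api.safety) returns None, so the agent can start without a shield.
safety = deps.get(Api.safety)
```

The implementation receiving `None` is then responsible for skipping shield checks when no safety provider is wired in.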

This file was deleted.
