Prompt Sail

An LLM proxy for prompt and response governance, monitoring, and analysis. 📊🔍

⚠️ Prompt Sail is currently in Development: Expect breaking changes and bugs! Feedback and contributions are welcome. Please see the Contributing Guide for more information.


What is Prompt Sail?

Prompt Sail is a proxy for Large Language Model (LLM) APIs such as the OpenAI GPT models, Azure OpenAI, and Anthropic Claude. It records prompts and responses, analyzes costs and generation speed, and lets you compare and track trends across various models and projects over time.

Prompt Sail dashboard

To learn more about Prompt Sail’s features and capabilities, see the documentation.

Getting started 🚀

The easiest way to try Prompt Sail is the demo at https://try-promptsail.azurewebsites.net/ (every new deployment erases the database).

Check out the documentation for how to run Prompt Sail locally via Docker.

Run Prompt Sail locally via Docker Compose 🐳

To try out Prompt Sail on your own machine, we recommend using docker-compose.

Requirements 📋

  • Docker and docker-compose installed on your machine: Windows | Mac | Linux
  • the repository cloned and the main directory opened:
git clone https://github.com/PromptSail/prompt_sail.git
cd prompt_sail

Run docker images 🏗️

Build and run the docker image:

docker-compose -f docker-compose-build.yml up --build

Alternatively, pull and run the prebuilt Docker images from GHCR:

docker-compose -f docker-compose.yml up

Create a project 📝

Navigate to http://localhost:80 and add your AI provider of choice.
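Each project you create gets a slug that becomes part of its proxy URL. As a rough sketch (the "project1" slug and "openai" provider segment are example values; your actual values come from the dashboard), the per-project base URL is composed like this:

```python
# Hypothetical helper: build the per-project proxy base URL used by clients.
def proxy_base_url(host: str, project_slug: str, provider: str) -> str:
    return f"{host.rstrip('/')}/{project_slug}/{provider}/"

print(proxy_base_url("http://localhost:8000", "project1", "openai"))
# → http://localhost:8000/project1/openai/
```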

Modify your code to use Prompt Sail proxy 👨‍💻

To use Prompt Sail with the openai Python library, set the OPENAI_API_BASE environment variable or pass base_url when constructing the client, so that requests are routed through your Prompt Sail project.

from openai import OpenAI
import os
from dotenv import load_dotenv
from pprint import pprint

load_dotenv()

openai_key = os.getenv("OPENAI_API_KEY")
openai_org_id = os.getenv("OPENAI_ORG_ID")

api_base = "http://localhost:8000/project1/openai/"

ps_client = OpenAI(
    base_url=api_base,
    api_key=openai_key,
)

response = ps_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair.",
        },
        {
            "role": "user",
            "content": "Compose a poem that explains the concept of recursion in programming.",
        },
    ],
)

pprint(response.choices[0].message)
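If you prefer not to change the client construction code, the base URL can also come from the environment. A minimal sketch, assuming openai>=1.0, which reads OPENAI_BASE_URL when base_url is not passed explicitly (the "project1" slug is an example value):

```python
import os

# Point every OpenAI() client created in this process at the proxy.
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/project1/openai/"

# from openai import OpenAI
# client = OpenAI()  # picks up OPENAI_BASE_URL and OPENAI_API_KEY
```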

Using Prompt Sail with langchain is similar:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

haiku_prompt = [
    SystemMessage(
        content="You are a poetic assistant, skilled in explaining complex programming concepts with creative flair.",
    ),
    HumanMessage(
        content="Compose a haiku that explains the concept of recursion in programming.",
    ),
]
chat = ChatOpenAI(
    temperature=0.9,
    openai_api_key=openai_key,
    openai_organization=openai_org_id,
    openai_api_base=api_base,  # route requests through the Prompt Sail proxy
    model="gpt-3.5-turbo-1106",
)

chat(haiku_prompt)

Contact 📞

License 📜

Prompt Sail is free and open source, under the MIT license.