Add Perplexity copilot to Terminal Pro #16

Open · wants to merge 1 commit into base: main
62 changes: 62 additions & 0 deletions perplexity-copilot/README.md
@@ -0,0 +1,62 @@
# Perplexity AI Copilot

This example provides a basic copilot that utilizes the Perplexity AI API for natural language processing and generation.

## Overview

This implementation uses a FastAPI application as the backend for the copilot. The core functionality is powered by the Perplexity AI API, which offers advanced language models and various capabilities.

You can adapt this implementation to suit your needs or preferences. The key is to adhere to the schema defined by the `/query` endpoint and the specifications in `copilot.json`.

This repository serves as a starting point for you to experiment, modify, and extend. You can build copilots with various capabilities, leveraging Perplexity AI's features such as different model options, chat completions, and more.
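Concretely, the `/query` endpoint streams its answer back as server-sent events; each chunk follows the `copilotMessageChunk` shape used by `main.py` (a minimal sketch of one event):

```python
import json

# The per-token event shape streamed back to Terminal Pro by the /query
# endpoint (see create_message_stream in main.py).
chunk_event = {"event": "copilotMessageChunk", "data": {"delta": "Hello"}}
print(json.dumps(chunk_event))
```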

## Getting Started

Follow these steps to set up and run your Perplexity AI-powered copilot:

### Prerequisites

- Python 3.10 or higher (the Pydantic models use PEP 604 union syntax such as `dict[str, Any] | None`)
- Poetry (for dependency management)
- A Perplexity AI API key (sign up at https://www.perplexity.ai/ if you don't have one)


### Installation and Running

1. Clone this repository to your local machine.
2. Set the Perplexity API key as an environment variable in your .bashrc or .zshrc file:

``` sh
# in .zshrc or .bashrc
export PERPLEXITY_API_KEY=pplx-<your-api-key>
```

3. Install the necessary dependencies:

``` sh
poetry install --no-root
```

4. Start the API server:

``` sh
poetry run uvicorn perplexity_copilot.main:app --port 7777 --reload
```

This command runs the FastAPI application with auto-reload enabled, serving it locally on port 7777.
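If the server starts but requests fail to authenticate, first confirm the key was actually exported. A quick stdlib check (the helper name and the `pplx-` prefix convention are assumptions, not part of the repo):

```python
import os

def check_perplexity_key(env=None) -> str:
    """Return the Perplexity API key from the environment, or raise."""
    env = os.environ if env is None else env
    key = env.get("PERPLEXITY_API_KEY")
    if not key:
        raise RuntimeError(
            "PERPLEXITY_API_KEY is not set; export it and restart your shell."
        )
    if not key.startswith("pplx-"):
        # Perplexity keys conventionally begin with "pplx-".
        print("warning: key does not start with 'pplx-'")
    return key
```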

### Testing the Copilot

The example copilot has a small, basic test suite to ensure it's
working correctly. As you develop your copilot, you are highly encouraged to
expand these tests.

You can run the tests with:

``` sh
poetry run pytest tests
```

### Accessing the Documentation

Once the API server is running, you can view the documentation and interact with the API by visiting: http://localhost:7777/docs
13 changes: 13 additions & 0 deletions perplexity-copilot/perplexity_copilot/copilots.json
@@ -0,0 +1,13 @@
{
  "llama_copilot": {
    "name": "Perplexity Copilot",
    "description": "Enables users to search for information on the web using Perplexity.",
    "image": "https://github.com/user-attachments/assets/f079bda8-aff1-4da7-8385-4657824020e8",
    "hasStreaming": true,
    "hasDocuments": false,
    "hasFunctionCalling": false,
    "endpoints": {
      "query": "http://localhost:7777/v1/query"
    }
  }
}
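A quick way to sanity-check this descriptor before pointing Terminal Pro at it (the required-key set below is an assumption based on the fields used above, not a documented schema):

```python
import json

# Assumed minimal set of keys Terminal Pro needs from each copilot entry.
REQUIRED_KEYS = {"name", "description", "endpoints"}

def validate_descriptor(raw: str) -> list[str]:
    """Return the copilot IDs whose entries contain the required keys."""
    config = json.loads(raw)
    valid = []
    for copilot_id, entry in config.items():
        if REQUIRED_KEYS <= entry.keys() and "query" in entry.get("endpoints", {}):
            valid.append(copilot_id)
    return valid
```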
92 changes: 92 additions & 0 deletions perplexity-copilot/perplexity_copilot/main.py
@@ -0,0 +1,92 @@
import re
import json
import os
from pathlib import Path
from typing import AsyncGenerator

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from openai import OpenAI
from sse_starlette.sse import EventSourceResponse

from dotenv import load_dotenv
from .models import AgentQueryRequest
from .prompts import SYSTEM_PROMPT


load_dotenv(".env")
app = FastAPI()

origins = [
    "http://localhost",
    "http://localhost:1420",
    "http://localhost:5050",
    "https://pro.openbb.dev",
    "https://pro.openbb.co",
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

client = OpenAI(
    api_key=os.getenv("PERPLEXITY_API_KEY"),
    base_url="https://api.perplexity.ai",
)


def sanitize_message(message: str) -> str:
    """Sanitize a message by escaping single braces for str.format-style templates."""
    cleaned_message = re.sub(r"(?<!\{)\{(?!{)", "{{", message)
    cleaned_message = re.sub(r"(?<!\})\}(?!})", "}}", cleaned_message)
    return cleaned_message


async def create_message_stream(
    content: AsyncGenerator[str, None],
) -> AsyncGenerator[dict, None]:
    async for chunk in content:
        yield {"event": "copilotMessageChunk", "data": {"delta": chunk}}


@app.get("/copilots.json")
def get_copilot_description():
    """Widgets configuration file for the OpenBB Terminal Pro."""
    config_path = Path(__file__).parent.resolve() / "copilots.json"
    # Use a context manager so the file handle is closed after reading.
    with open(config_path) as f:
        return JSONResponse(content=json.load(f))


@app.post("/v1/query")
async def query(request: AgentQueryRequest) -> EventSourceResponse:
    """Query the Copilot."""
    # Map the incoming roles ("ai"/"human", per RoleEnum in models.py) onto the
    # OpenAI-style roles expected by the Perplexity chat completions API.
    # Without this mapping, the original membership check against
    # {"system", "user", "assistant"} would never match.
    role_map = {"ai": "assistant", "human": "user"}

    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for message in request.messages:
        role = role_map.get(message.role.lower(), "user")  # default to "user"
        messages.append(
            {"role": role, "content": sanitize_message(message.content)}
        )

    async def generate():
        response_stream = client.chat.completions.create(
            model="llama-3-sonar-large-32k-online",
            messages=messages,
            stream=True,
        )
        for response in response_stream:
            if response.choices[0].delta.content is not None:
                yield response.choices[0].delta.content

    return EventSourceResponse(
        content=create_message_stream(generate()),
        media_type="text/event-stream",
    )
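To see what `sanitize_message` actually does, here is the same function run standalone (a copy for illustration): single braces are doubled so they survive `str.format`-style templating, while already-doubled braces are left untouched.

```python
import re

def sanitize_message(message: str) -> str:
    """Escape single braces; leave already-escaped double braces alone."""
    cleaned_message = re.sub(r"(?<!\{)\{(?!{)", "{{", message)
    cleaned_message = re.sub(r"(?<!\})\}(?!})", "}}", cleaned_message)
    return cleaned_message

print(sanitize_message("revenue grew {fast}"))  # -> revenue grew {{fast}}
print(sanitize_message("{{already escaped}}"))  # -> {{already escaped}}
```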
48 changes: 48 additions & 0 deletions perplexity-copilot/perplexity_copilot/models.py
@@ -0,0 +1,48 @@
from typing import Any
from uuid import UUID
from pydantic import BaseModel, Field, field_validator
from enum import Enum


class RoleEnum(str, Enum):
    ai = "ai"
    human = "human"


class LlmMessage(BaseModel):
    role: RoleEnum = Field(
        description="The role of the entity that is creating the message"
    )
    content: str = Field(description="The content of the message")


class BaseContext(BaseModel):
    uuid: UUID = Field(description="The UUID of the widget.")
    name: str = Field(description="The name of the widget.")
    description: str = Field(
        description="A description of the data contained in the widget"
    )
    content: Any = Field(description="The data content of the widget")
    metadata: dict[str, Any] | None = Field(
        default=None,
        description="Additional widget metadata (eg. the selected ticker, etc)",
    )


class AgentQueryRequest(BaseModel):
    messages: list[LlmMessage] = Field(
        description="A list of messages to submit to the copilot."
    )
    context: list[BaseContext] | None = Field(
        default=None,
        description="Additional context.",
    )
    # Typed as bool | None so the None default is valid; None means "unspecified".
    use_docs: bool | None = Field(
        default=None, description="Set True to use uploaded docs when answering query."
    )

    @field_validator("messages")
    def check_messages_not_empty(cls, value):
        if not value:
            raise ValueError("messages list cannot be empty.")
        return value
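A self-contained sketch (a simplified mirror of the models above, with most fields omitted for brevity) showing how the `field_validator` rejects an empty `messages` list at parse time:

```python
from pydantic import BaseModel, Field, field_validator

class LlmMessage(BaseModel):
    role: str
    content: str

class AgentQueryRequest(BaseModel):
    messages: list[LlmMessage] = Field(
        description="A list of messages to submit to the copilot."
    )

    @field_validator("messages")
    def check_messages_not_empty(cls, value):
        if not value:
            raise ValueError("messages list cannot be empty.")
        return value

# A valid request parses...
ok = AgentQueryRequest(messages=[{"role": "human", "content": "hello"}])
print(len(ok.messages))  # -> 1

# ...while an empty messages list is rejected during validation.
try:
    AgentQueryRequest(messages=[])
except ValueError as exc:
    print("rejected:", "messages list cannot be empty." in str(exc))
```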
19 changes: 19 additions & 0 deletions perplexity-copilot/perplexity_copilot/prompts.py
@@ -0,0 +1,19 @@
SYSTEM_PROMPT = """\n
You are a helpful financial assistant working for Example Co.
Your name is "Perplexity Copilot", and you were created by Example Co.
You will do your best to answer the user's query.

Use the following guidelines:
- Formal and Professional Tone: Maintain a business-like, sophisticated tone, suitable for a professional audience.
- Clarity and Conciseness: Keep explanations clear and to the point, avoiding unnecessary complexity.
- Focus on Expertise and Experience: Emphasize expertise and real-world experiences, using direct quotes to add a personal touch.
- Subject-Specific Jargon: Use industry-specific terms, ensuring they are accessible to a general audience through explanations.
- Narrative Flow: Ensure a logical flow, connecting ideas and points effectively.
- Incorporate Statistics and Examples: Support points with relevant statistics, examples, or case studies for real-world context.

## Context
Use the following context to help formulate your answer:

{context}

"""
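The prompt ends with a `{context}` placeholder, but `main.py` as shown passes `SYSTEM_PROMPT` through unformatted, so how widget context gets injected is an assumption. One possible sketch (helper name and context formatting are illustrative):

```python
# A shortened stand-in for the tail of SYSTEM_PROMPT above.
template = "Use the following context to help formulate your answer:\n\n{context}\n"

def render_system_prompt(template: str, widgets: list[dict]) -> str:
    """Fill the {context} placeholder with one bullet per widget."""
    context_text = "\n".join(
        f"- {w['name']}: {w['description']}" for w in widgets
    )
    return template.format(context=context_text)

rendered = render_system_prompt(
    template, [{"name": "Watchlist", "description": "User's tracked tickers"}]
)
print(rendered)
```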