πŸ“– Prompt book

Library to supercharge your use of large language models


⚠ Warning: This library is still in early development.


πŸ’β€β™‚οΈ Concept

When you have a single simple prompt for ChatGPT / GPT completion, it doesn't matter much how it is integrated: whether you call the REST API directly or use the OpenAI library, and whether you hardcode the prompt in source code or import it from a text file.

If you need something more advanced, or want to extend the capabilities of LLMs, you generally have three options:

  1. Fine-tune the model to your needs, or even train your own.
  2. Tune the prompt to perfection.
  3. Use a multi-shot approach with multiple prompts to get the best result.

In any of these situations, this library can make your life easier:

  • Separation of concerns between prompt engineer and programmer; between code files and prompt files; and between prompts, templates, templating pipelines, and their execution logic.
  • Set up a common format for prompts that is interchangeable between projects and language/technology stacks.
  • Simplify your code to be DRY and not repeat all the boilerplate code for each prompt.
  • Versioning of prompt template pipelines
  • Reuse parts of prompt template pipelines in/between projects
  • Logging the results of the prompt template pipelines
  • Caching calls to LLMs to save money and time
  • A/B testing to determine which prompt works best for the job
  • Leverage streaming to make a super cool UI/UX


πŸ§” Prompt template pipelines (for prompt engineers)

A prompt book markdown file (PTBK for short, or .ptbk.md) is a document that describes a series of prompts chained together to form a recipe for transforming natural language input. Inside a PTBK you can use chat prompts, completion prompts, and scripting, or trigger an interaction with the user to ask for additional information.

  • Multiple PTBKs form a library which becomes a part of your application codebase.
  • These pipelines are designed so that they can be written by non-programmers.

Sample:

File write-website-content.ptbk.md:

# 🌍 Create website content

Instructions for creating web page content using [πŸ“– Prompt template pipelines](https://github.com/webgptorg/promptbook).

-   PTBK URL https://ptbk.webgpt.com/en/[email protected]
-   PTBK version 0.0.1
-   MODEL VARIANT Chat
-   Use GPT-3.5
-   Input param `{rawTitle}` Automatically suggested site name or empty text
-   Input param `{rawAssigment}` Automatically generated site entry from image recognition
-   Output param `{content}` Web content

## πŸ‘€ Specifying the assignment

What is your web about?

-   Prompt dialog

\`\`\`text
{rawAssigment}
\`\`\`

`-> {assigment}` Website assignment and specification

## πŸ’¬ Improvement of the web title

-   Postprocessing `unwrapResult`

\`\`\`markdown
As an experienced marketing specialist, you have been entrusted with improving the name of your client's business.

A suggested name from a client:
"{rawTitle}"

Assignment from customer:

> {assigment}

## Instructions:

-   Write only one name suggestion
-   The name will be used on the website, business cards, visuals, etc.
    \`\`\`

`-> {enhancedTitle}` Enhanced title

## πŸ‘€ User approval of the title

Is the title for your website okay?

-   Prompt dialog

\`\`\`text
{enhancedTitle}
\`\`\`

`-> {title}` Title for the website

## πŸ’¬ Cunning subtitle

-   Postprocessing `unwrapResult`

\`\`\`markdown
As an experienced copywriter, you have been entrusted with creating a claim for the "{title}" web page.

A website assignment from a customer:

> {assigment}

## Instructions:

-   Write only one name suggestion
-   Claim will be used on website, business cards, visuals, etc.
-   Claim should be punchy, funny, original
    \`\`\`

`-> {claim}` Claim for the web

## πŸ’¬ Keyword analysis

\`\`\`markdown
As an experienced SEO specialist, you have been entrusted with creating keywords for the website "{title}".

Website assignment from the customer:

> {assigment}

## Instructions:

-   Write a list of keywords
-   Keywords are in basic form

## Example:

-   Ice cream
-   Olomouc
-   Quality
-   Family
-   Tradition
-   Italy
-   Craft
    \`\`\`

`-> {keywords}` Keywords

## πŸ”— Creating the beginning of the web content

-   Simple template

\`\`\`text

# {title}

> {claim}

\`\`\`

`-> {contentBeginning}` Beginning of web content

## πŸ–‹ Writing web content

-   MODEL VARIANT Completion
-   MODEL NAME `gpt-3.5-turbo-instruct`

\`\`\`markdown
As an experienced copywriter and web designer, you have been entrusted with creating text for a new website {title}.

A website assignment from a customer:

> {assigment}

## Instructions:

-   Text formatting is in Markdown
-   Be concise and to the point
-   Use keywords, but they should be naturally in the text
-   This is the complete content of the page, so don't forget all the important information and elements the page should contain
-   Use headings, bullets, text formatting

## Keywords:

{keywords}

## Web Content:

{contentBeginning}
\`\`\`

`-> {contentBody}` Middle of the web content

## πŸ”— Combine content

-   Simple template

\`\`\`markdown
{contentBeginning}

{contentBody}
\`\`\`

`-> {content}`

More template samples

Note: The sample uses postprocessing functions such as unwrapResult, which clean up the raw result of a prompt before it is used as a parameter.
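
For illustration, here is a conceptual sketch of what such a postprocessing step might look like. This is not the library's actual implementation of unwrapResult; it only shows the idea of extracting the useful part of a model's answer:

```typescript
// Conceptual sketch only, NOT the library's unwrapResult implementation.
// The model often answers with something like:
//   The name I suggest is: "Fresh Scoop"
// while the pipeline only needs the bare value.
function unwrapResultSketch(rawResult: string): string {
    // If the answer contains a quoted part, prefer it...
    const quoted = rawResult.match(/"([^"]+)"/);
    if (quoted) {
        return quoted[1];
    }
    // ...otherwise just trim the whitespace around the answer.
    return rawResult.trim();
}

// Example: unwrapResultSketch('The name I suggest is: "Fresh Scoop"') === 'Fresh Scoop'
```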

πŸ“š Dictionary

The following glossary is used to clarify certain basic concepts:

Prompt

Prompt is a text along with model requirements, but without any execution or templating logic.

For example:

{
    "request": "Which sound does a cat make?",
    "modelRequirements": {
        "variant": "CHAT"
    }
}
{
    "request": "I am a cat.\nI like to eat fish.\nI like to sleep.\nI like to play with a ball.\nI l",
    "modelRequirements": {
        "variant": "COMPLETION"
    }
}

Prompt Template

Similar concept to Prompt, but with templating logic.

For example:

{
    "request": "Which sound does a {animalName} make?",
    "modelRequirements": {
        "variant": "CHAT"
    }
}

Model Requirements

An abstract way to specify the LLM. It does not pin down a concrete model version; it only states the requirements for the LLM: NOT chatgpt-3.5-turbo BUT the CHAT variant of GPT-3.5.

For example:

{
    "variant": "CHAT",
    "version": "GPT-3.5",
    "temperature": 0.7
}

Execution type

Each block of a prompt template pipeline can have a different execution type. It is specified in the list of requirements for the block and defaults to Prompt template. The types are listed below, followed by a small conceptual sketch.

  • (default) Prompt template The block is a prompt template and is executed by an LLM (OpenAI, Azure, ...)
  • Simple template The block is a simple text template that is just filled with parameters
  • Script The block is a script that is executed by a script runtime; the runtime is determined by the block type. Currently only JavaScript is supported, but we plan to add Python and TypeScript in the future.
  • Prompt dialog The block asks the user for additional input
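
Conceptually, the execution type is just a label that decides how a block is run. A minimal sketch of that idea follows; it is an illustration only, not the library's actual type definition:

```typescript
// Illustration only; the member names mirror the list above,
// the exact identifiers in the library may differ.
type ExecutionType =
    | 'PROMPT_TEMPLATE' // executed by an LLM
    | 'SIMPLE_TEMPLATE' // plain text template filled with parameters
    | 'SCRIPT'          // executed by a script runtime (currently JavaScript)
    | 'PROMPT_DIALOG';  // asks the user for input
```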

Parameters

Parameters are the values that are placed into the prompt template and replaced to create the prompt. They form a simple key-value object.

{
    "animalName": "cat",
    "animalSound": "Meow!"
}

There are three types of template parameters, depending on how they are used in the prompt template pipeline:

  • Input parameters are required to execute the prompt template pipeline.
  • Intermediate parameters are used internally in the prompt template pipeline.
  • Output parameters are not used internally in the prompt template pipeline, but are returned as the result of the prompt template pipeline execution.
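
To make the three kinds concrete, here is how the parameters of the sample write-website-content.ptbk.md pipeline above map onto them. The type below is only an illustration; the type name is made up and is not something the library exports:

```typescript
// Illustration of the sample pipeline's parameters; the type name is
// hypothetical, not part of the library's API.
type WriteWebsiteContentParameters = {
    // Input parameters (required to start the execution)
    rawTitle: string;
    rawAssigment: string;

    // Intermediate parameters (produced and consumed inside the pipeline)
    assigment: string;
    enhancedTitle: string;
    title: string;
    claim: string;
    keywords: string;
    contentBeginning: string;
    contentBody: string;

    // Output parameter (returned as the result of the pipeline)
    content: string;
};
```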

Prompt Template Pipeline

Prompt template pipeline is the core concept of this library. It represents a series of prompt templates chained together to form a pipeline / one big prompt template with input and result parameters.

Internally it can have 3 formats:

  • .ptbk.md file in custom markdown format described above
  • (internal) JSON format, parsed from the .ptbk.md file
  • (internal) Object which is created from the JSON format and bound with the surrounding tools (but not with the execution logic)

Prompt Template Pipeline Library

A library that groups together the prompt template pipelines of an application. It is a very thin wrapper around an Array / Set of prompt template pipelines.

The Prompt Template Pipeline Library is a useful helper during execution: it can be shared between the execution and consumer parts of the app and serves as the common source of knowledge about the prompt template pipelines.

It allows you to create executor functions from the prompt template pipelines in the library.

Prompt Result

Prompt result is the simplest concept of execution. It is the result of executing one prompt (NOT a template).

For example:

{
    "response": "Meow!",
    "model": "chatgpt-3.5-turbo"
}

Execution Tools

ExecutionTools is an interface which contains all the tools needed to execute prompts (template pipelines). It contains 3 subtools:

  • NaturalExecutionTools
  • ScriptExecutionTools
  • UserInterfaceTools

Each of them is described below.

Natural Execution Tools

NaturalExecutionTools is a container for all the tools needed to execute prompts against large language models like GPT-4. Its interface exposes common methods for prompt execution; internally it calls OpenAI, Azure, a GPU, a proxy, a cache, logging, ...

NaturalExecutionTools is an abstract interface that is implemented by concrete execution tools:

  • OpenAiExecutionTools
  • (Not implemented yet) AzureOpenAiExecutionTools
  • (Not implemented yet) BardExecutionTools
  • (Not implemented yet) LamaExecutionTools
  • (Not implemented yet) GpuExecutionTools
  • A special case is RemoteNaturalExecutionTools, which connects to a remote server and runs one of the above execution tools on that server.
  • The second special case is MockedEchoNaturalExecutionTools, which is used for testing and mocking.
  • The third special case is LogNaturalExecutionToolsWrapper, which is technically also an execution tool but acts more as a proxy wrapper around other execution tools, logging all calls made to them (see the sketch below).
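
To illustrate the proxy idea behind LogNaturalExecutionToolsWrapper, here is a conceptual sketch of wrapping one set of execution tools in another that only adds logging. The interface and method name used below are placeholders, not the library's actual types:

```typescript
// Conceptual sketch of the "logging proxy around other execution tools" idea;
// `execute` and NaturalToolsLike are placeholders, not the real interface.
interface NaturalToolsLike {
    execute(prompt: { content: string }): Promise<{ content: string }>;
}

// Wraps any NaturalToolsLike and logs every call before delegating to it.
class LoggingWrapperSketch implements NaturalToolsLike {
    constructor(private readonly inner: NaturalToolsLike) {}

    async execute(prompt: { content: string }): Promise<{ content: string }> {
        console.info('Executing prompt:', prompt.content);
        const result = await this.inner.execute(prompt);
        console.info('Received result:', result.content);
        return result;
    }
}
```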

Script Execution Tools

ScriptExecutionTools is an abstract container that represents all the tools needed to execute scripts. It is implemented by concrete execution tools:

  • JavascriptExecutionTools is a wrapper around the vm2 module that executes JavaScript code in a sandbox.
  • JavascriptEvalExecutionTools is a wrapper around the eval function that executes JavaScript. It is intended for testing and mocking, NOT for production use due to its unsafe nature; use JavascriptExecutionTools instead.
  • (Not implemented yet) TypescriptExecutionTools executes typescript code in a sandbox.
  • (Not implemented yet) PythonExecutionTools executes python code in a sandbox.

There are also built-in postprocessing functions that can be used to postprocess the result.

User Interface Tools

UserInterfaceTools is an abstract container that represents all the tools needed to interact with the user. It is implemented by concrete execution tools:

  • (Not implemented yet) ConsoleInterfaceTools is a wrapper around the readline module that interacts with the user via the console.
  • SimplePromptInterfaceTools is a wrapper around the synchronous window.prompt function that interacts with the user via a browser prompt. It is intended for testing and mocking, NOT for production use due to its synchronous nature.
  • CallbackInterfaceTools delegates the user interaction to an async callback function. You need to provide your own implementation of this callback and bind it to your UI; see the sketch below.
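
A minimal sketch of the kind of callback you would provide. The option name and the shape of the dialog object passed to the callback are assumptions (check the CallbackInterfaceTools typings), and showMyDialog stands for your own UI code:

```typescript
// Sketch only: the fields of `dialog` and the `callback` option are
// assumptions, not the documented API of CallbackInterfaceTools;
// showMyDialog is a hypothetical stand-in for your own UI component.
declare function showMyDialog(message: string, defaultValue?: string): Promise<string>;

// The idea: whenever the pipeline reaches a "Prompt dialog" block, your async
// callback is invoked and must resolve with the user's answer.
const callback = async (dialog: { promptMessage: string; defaultValue?: string }): Promise<string> => {
    return await showMyDialog(dialog.promptMessage, dialog.defaultValue);
};

// Then pass it to the tools, e.g. something like:
// const userInterface = new CallbackInterfaceTools({ callback }); // constructor shape assumed
```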

Executor

Executor is a simple async function that takes the input parameters and returns the output parameters (along with all intermediate and input parameters, i.e. it extends the input object).

An executor is made by combining execution tools and a prompt template pipeline library. It can be done in two ways (see the sketch after this list):

  • Via the PromptTemplatePipelineLibrary.getExecutor method
  • Via the createPtpExecutor utility function
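
A rough sketch of both ways. Only the two entry points above are named in this README, so their argument shapes, and how the library, the pipeline and the tools are obtained, are assumptions rather than the documented API:

```typescript
// Sketch only: everything not literally named in this README (argument shapes,
// how `library`, `ptp` and `tools` are constructed) is an assumption.
type PtpExecutor = (inputParameters: Record<string, string>) => Promise<Record<string, string>>;

// Placeholders: in the real library these come from @promptbook/core and your own setup.
declare const library: { getExecutor(options: unknown): PtpExecutor }; // a PromptTemplatePipelineLibrary
declare function createPtpExecutor(options: { ptp: unknown; tools: unknown }): PtpExecutor;
declare const ptp: unknown;   // a single prompt template pipeline
declare const tools: unknown; // an ExecutionTools instance (natural + script + userInterface)

// 1) From the PromptTemplatePipelineLibrary.getExecutor method:
const writeWebsiteContent = library.getExecutor({ /* which pipeline + which tools, shape assumed */ });

// 2) Or via the createPtpExecutor utility function:
const executor = createPtpExecutor({ ptp, tools });

// Either way, an executor is just an async function over parameter objects:
const result = await writeWebsiteContent({
    rawTitle: 'Ice cream shop',
    rawAssigment: 'Family ice cream business in Olomouc',
});
console.info(result.content); // the {content} output parameter of the sample pipeline above
```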

Postprocessing functions

All postprocessing functions are listed here.

TODO: Write more about postprocessing functions

Remote server

Remote server is a proxy server that uses its execution tools internally and exposes the executor interface externally.

You can simply use RemoteExecutionTools in client-side JavaScript and connect to your remote server. This is useful when you want to keep all the logic on the browser side without exposing your API keys or relying on the customer's GPU.

πŸ‘¨β€πŸ’» Usage and integration (for developers)

πŸ§™β€β™‚οΈ Using wizzard

First you need to install this library:

npm install --save @promptbook/wizzard

TODO: !!! Write the Wizzard sample

Usage samples

πŸ”Œ Advanced usage

Install all the components:

npm install --save @promptbook/core @promptbook/wizzard @promptbook/openai @promptbook/execute-javascript @promptbook/remote-client @promptbook/remote-server @promptbook/utils @promptbook/types

TODO: !!! Write the remote sample

Usage samples

❔ FAQ

If you have a question, start a discussion, open an issue, or write me an email.

Why not just use the OpenAI library?

Different levels of abstraction. The OpenAI library is for direct use of the OpenAI API. This library works at a higher level of abstraction: it is for creating prompt templates and prompt template pipelines that are independent of the underlying library, LLM model, or even LLM provider.

How is it different from the Langchain library?

Langchain is primarily aimed at ML developers working in Python. This library is for developers working in JavaScript/TypeScript who are creating applications for end users.

We are considering creating a bridge/converter between these two libraries.

GPTs

...

⌚ Changelog

See CHANGELOG.md

🎯 TODOs

  • [🧠] Figure out the best name for this library - Prompt Template Pipeline, Prompt Template Engine, Prompt Template Processor, Open Prompt Initiative

  • Make from this folder a separate repository + npm package

  • Add tests

  • Annotate all entities

  • Make internal string aliases

  • Make branded types instead of pure string aliases

  • Remove all anys

  • Make promptbooks non-linear

  • Logging pipeline name, version, step,...

  • No circular dependencies

  • [ ][🧠] Wording: "param" vs "parameter" vs "variable" vs "argument"

  • All entities must have public / private / protected modifiers

  • Everything not needed should be private or not exported

  • Refactor circular dependencies

  • Importing subtemplates

  • Use spaceTrim more effectively

  • [πŸ€Ήβ€β™‚οΈ] Allow chats to be continued with previous message

  • [🧠][πŸ€Ήβ€β™‚οΈ] How to mark continued chat in .ptbk.md format?

  • Use newest version of socket.io for remote server

  • [🧠] Allow to use and define function calling

  • Register .ptbk file extension

  • Fix error content.js:73 Uncaught (in promise) TypeError: object null is not iterable (cannot read property Symbol(Symbol.iterator))

  • !!! Go through the README

  • Aborting execution, maybe use native AbortController

  • Change import {...} from '...'; to import type {...} from '...'; when importing only types

  • Wrap OpenAI billing errors:

    • "Billing hard limit has been reached"
    • "You exceeded your current quota, please check your plan and billing details."

πŸ–‹οΈ Contributing

I am open to pull requests, feedback, and suggestions. Or if you like this utility, you can β˜• buy me a coffee or donate via cryptocurrencies.

You can also ⭐ star the promptbook package, follow me on GitHub or various other social networks.

✨ Partners

SigmaStamp Β· Collboard

Become a partner
