[doc] Prompty doc #2982

Merged · 13 commits · Apr 25, 2024
82 changes: 59 additions & 23 deletions docs/how-to-guides/develop-a-prompty/index.md
@@ -17,12 +17,14 @@ After this front matter is the prompt template, articulated in the `Jinja` format.

Fields in the front matter:

| Field | Description |
|-------------|-----------------------------------------------------------------------------------------------------------|
| name | The name of the prompt. |
| description | A description of the prompt. |
| model | Details the prompty's model configuration, including connection info and parameters for the LLM request. |
| inputs      | The input definitions that are passed to the prompt template.                                              |
| outputs     | Specifies the fields of the prompty result (only effective when `response_format` is `json_object`).       |
| sample | Offers a dictionary or JSON file containing sample data for inputs. |
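A `.prompty` file is therefore two pieces: YAML front matter fenced by `---` lines, followed by the Jinja prompt template. A minimal sketch of separating the two (illustrative only, not the promptflow parser):

```python
# Minimal sketch: split a .prompty document into its YAML front matter
# and the Jinja template body. Assumes the front matter is the first
# "---"-delimited block and contains no literal "---" lines itself.
def split_prompty(text: str) -> tuple[str, str]:
    """Return (front_matter, template) from a .prompty document."""
    _, front_matter, template = text.split("---", 2)
    return front_matter.strip(), template.strip()

doc = """---
name: Basic Prompt
description: A basic prompt
---
system:
You are an AI assistant.
"""

front, template = split_prompty(doc)
print(front.splitlines()[0])     # name: Basic Prompt
print(template.splitlines()[0])  # system:
```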

```yaml
---
Expand All @@ -37,11 +39,19 @@ model:
parameters:
max_tokens: 128
temperature: 0.2
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: Who is the most famous person in the world?
---
system:
You are an AI assistant who helps people find information.
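At call time, inputs missing from the call are filled from each input's declared `default`. A hypothetical sketch of that resolution step (not the promptflow implementation):

```python
# The inputs section above, as a dict: caller-supplied values win,
# otherwise the declared `default` is used; inputs with no default
# are required.
inputs_spec = {
    "first_name": {"type": "string", "default": "John"},
    "last_name": {"type": "string", "default": "Doe"},
    "question": {"type": "string"},
}

def resolve_inputs(spec, provided):
    resolved = {}
    for name, meta in spec.items():
        if name in provided:
            resolved[name] = provided[name]
        elif "default" in meta:
            resolved[name] = meta["default"]
        else:
            raise ValueError(f"missing required input: {name}")
    return resolved

print(resolve_inputs(inputs_spec, {"question": "Who is the most famous person in the world?"}))
# {'first_name': 'John', 'last_name': 'Doe', 'question': 'Who is the most famous person in the world?'}
```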
@@ -81,11 +91,19 @@ model:
parameters:
max_tokens: 128
temperature: 0.2
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: Who is the most famous person in the world?
---
system:
You are an AI assistant who helps people find information.
@@ -159,11 +177,19 @@ model:
parameters:
max_tokens: 128
temperature: 0.2
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: Who is the most famous person in the world?
---
system:
You are an AI assistant who helps people find information.
@@ -300,12 +326,22 @@ model:
parameters:
max_tokens: 128
temperature: 0.2
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
chat_history:
type: list
sample:
first_name: John
last_name: Doe
question: Who is the most famous person in the world?
chat_history: [ { "role": "user", "content": "what's the capital of France?" }, { "role": "assistant", "content": "Paris" } ]
---
system:
You are an AI assistant who helps people find information.
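The `chat_history` input is a list of `{role, content}` dicts, and the template typically loops over it to replay prior turns. A rough sketch of flattening that list into chat-format text (illustrative only, not the Jinja rendering promptflow performs):

```python
# Sample chat_history value from the front matter above.
chat_history = [
    {"role": "user", "content": "what's the capital of France?"},
    {"role": "assistant", "content": "Paris"},
]

def render_history(history):
    # Emit each prior turn as "role:\ncontent", matching the chat prompt layout.
    return "\n".join(f"{turn['role']}:\n{turn['content']}" for turn in history)

print(render_history(chat_history))
```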
178 changes: 111 additions & 67 deletions docs/how-to-guides/develop-a-prompty/prompty-output-format.md
@@ -19,19 +19,27 @@ By default, prompty returns the message from the first choice in the response.
name: Text Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
api: chat
configuration:
type: azure_openai
connection: <connection_name>
azure_deployment: gpt-35-turbo-0125
parameters:
max_tokens: 128
temperature: 0.2
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
@@ -68,21 +76,29 @@ Here’s how to configure a prompty for JSON object output:
name: Json Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
api: chat
configuration:
type: azure_openai
azure_deployment: gpt-35-turbo-0125
connection: open_ai_connection
parameters:
max_tokens: 128
temperature: 0.2
response_format:
type: json_object
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
@@ -111,23 +127,32 @@ Users can also specify the fields to be returned by configuring the outputs section:
name: Json Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
api: chat
configuration:
type: azure_openai
azure_deployment: gpt-35-turbo-0125
connection: open_ai_connection
parameters:
max_tokens: 128
temperature: 0.2
response_format:
type: json_object
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
outputs:
answer:
type: string
sample:
first_name: John
last_name: Doe
question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
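With `response_format: json_object` the model returns a JSON string, and the `outputs` section names which fields of that object the prompty result exposes. A simplified sketch of that selection step (assumed behavior, using a simulated reply):

```python
import json

# Simulated model reply in JSON mode; "extra" is not declared in outputs.
raw = '{"answer": "The meaning of life is a philosophical question", "extra": "ignored"}'
outputs_spec = {"answer": {"type": "string"}}

parsed = json.loads(raw)
# Keep only the fields declared under `outputs`.
result = {k: parsed[k] for k in outputs_spec}
print(result)  # {'answer': 'The meaning of life is a philosophical question'}
```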
@@ -156,21 +181,29 @@ In certain scenarios, users may require access to the original response from the
name: All Choices Text Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
api: chat
configuration:
type: azure_openai
connection: open_ai_connection
azure_deployment: gpt-35-turbo-0125
parameters:
max_tokens: 128
temperature: 0.2
n: 3
response: all
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
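With `n: 3` and `response: all`, the prompty hands back the full API response rather than just the first choice's message. A mocked sketch of pulling every candidate out of a chat-completion-style payload (field names follow the OpenAI chat response shape; the values are invented):

```python
# Mocked chat-completion response with n=3 choices.
response = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Answer A"}},
        {"index": 1, "message": {"role": "assistant", "content": "Answer B"}},
        {"index": 2, "message": {"role": "assistant", "content": "Answer C"}},
    ]
}

# Default behavior keeps only the first choice; `response: all` gives you
# the whole payload so every candidate can be inspected.
first = response["choices"][0]["message"]["content"]
all_texts = [c["message"]["content"] for c in response["choices"]]
print(first)      # Answer A
print(all_texts)  # ['Answer A', 'Answer B', 'Answer C']
```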
@@ -197,17 +230,28 @@ Here’s how to configure a prompty for streaming text output:
name: Stream Mode Text Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
api: chat
configuration:
type: azure_openai
connection: open_ai_connection
azure_deployment: gpt-35-turbo-0125
parameters:
max_tokens: 512
temperature: 0.2
stream: true
inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
sample:
first_name: John
last_name: Doe
question: What's the steps to get rich?
---
system:
You are an AI assistant who helps people find information.
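With `stream: true` the result is an iterator of text chunks rather than a single string, and the caller concatenates chunks as they arrive. A minimal consumer sketch using a fake stream in place of the real call:

```python
from typing import Iterator

def fake_stream() -> Iterator[str]:
    # Stand-in for the generator a streaming prompty call would return.
    yield from ["Step 1: ", "save money. ", "Step 2: ", "invest it."]

pieces = []
for chunk in fake_stream():
    pieces.append(chunk)  # e.g. print(chunk, end="") for live output
full_text = "".join(pieces)
print(full_text)  # Step 1: save money. Step 2: invest it.
```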
@@ -24,9 +24,22 @@ model:
parameters:
temperature: 0.2
stream: true

inputs:
first_name:
type: string
default: John
last_name:
type: string
default: Doe
question:
type: string
chat_history:
type: list
sample:
first_name: John
last_name: Doe
question: What is Prompt flow?
chat_history: [ { "role": "user", "content": "what's the capital of France?" }, { "role": "assistant", "content": "Paris" } ]
---
system:
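For a multi-turn chat like this, each exchange is appended to `chat_history` before the next call so the prompty sees the full context. A sketch of that bookkeeping (hypothetical loop, not a promptflow API):

```python
chat_history = []

def record_turn(history, question, answer):
    # After each exchange, append both sides so the next call replays
    # the full conversation through the chat_history input.
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})

record_turn(chat_history, "what's the capital of France?", "Paris")
record_turn(chat_history, "and its population?", "About 2 million in the city proper.")
print(len(chat_history))  # 4
print(chat_history[0])    # {'role': 'user', 'content': "what's the capital of France?"}
```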