[Example] Add example flow for activate config #525

Merged · 20 commits · Sep 18, 2023
47 changes: 47 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/README.md
# Conditional flow for if-else scenario

This example is a conditional flow for an if-else scenario.

The flow checks whether an input query passes a content safety check. If the query is denied, a default response is returned; otherwise, the flow calls an LLM to get a response and then summarizes the final result.

:::{admonition} Notice
The `content_safety_check` and `llm_result` nodes in this flow are dummy nodes that do not actually use `Azure Content Safety` or `Azure OpenAI` connections. You can replace them with real ones. Learn more: [Manage connections](../../../connections/connection.ipynb)
:::

By following this example, you will learn how to create a conditional flow using the `activate config`.
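Concretely, the `activate` config (taken from this example's `flow.dag.yaml`) attaches a `when`/`is` condition to a node, so the node runs only when the referenced output matches:

```yaml
- name: llm_result
  type: python
  source:
    type: code
    path: llm_result.py
  inputs:
    request: ${inputs.query}
  activate:
    when: ${content_safety_check.output}
    is: true
```

When the condition is not met, the node is skipped and its output is treated as empty, which is what lets a downstream node fall back to the other branch.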

## Prerequisites

Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```

## Run flow

- Test flow/node
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .

# test with flow inputs
pf flow test --flow . --inputs query="What is Prompt flow?"
```

- List and show run metadata
```bash
# list created runs (requires an existing batch run,
# e.g. one created with: pf run create --flow . --data ./data.jsonl)
pf run list

# get a sample run name
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_default")) | .name'| head -n 1 | tr -d '"')

# show specific run detail
pf run show --name $name

# show output
pf run show-details --name $name

# visualize run in browser
pf run visualize --name $name
```
7 changes: 7 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/content_safety_check.py
from promptflow import tool
import random


@tool
def content_safety_check(text: str) -> bool:
    # Dummy check: randomly passes or denies the query.
    return random.choice([True, False])
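Since this node is a stub, one way to make it deterministic for local testing (purely an illustration: the blocklist and function name below are invented for this sketch, not part of the example) is a simple keyword check; a real flow would call an `Azure Content Safety` connection instead:

```python
# Hypothetical deterministic stand-in for the dummy content_safety_check node.
# BLOCKED_TERMS is invented for illustration only.
BLOCKED_TERMS = {"password", "exploit"}


def keyword_safety_check(text: str) -> bool:
    """Return True when the query contains no blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```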
2 changes: 2 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/data.jsonl
{"query": "What is Prompt flow?"}
{"query": "What is ChatGPT?"}
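The batch input file is standard JSON Lines: one JSON object per line. A minimal sketch of how such a file is parsed, using only the standard library (an in-memory buffer stands in for the file here):

```python
import io
import json

# Simulate the example's data.jsonl with an in-memory buffer.
jsonl = io.StringIO(
    '{"query": "What is Prompt flow?"}\n'
    '{"query": "What is ChatGPT?"}\n'
)

# One JSON object per non-empty line.
rows = [json.loads(line) for line in jsonl if line.strip()]
```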
6 changes: 6 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/default_result.py
from promptflow import tool


@tool
def default_result(request: str) -> str:
    return f"I'm not familiar with your query: {request}."
44 changes: 44 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/flow.dag.yaml
inputs:
  query:
    type: string
    default: What is Prompt flow?
outputs:
  answer:
    type: string
    reference: ${generate_result.output}
nodes:
- name: content_safety_check
  type: python
  source:
    type: code
    path: content_safety_check.py
  inputs:
    text: ${inputs.query}
- name: llm_result
  type: python
  source:
    type: code
    path: llm_result.py
  inputs:
    request: ${inputs.query}
  activate:
    when: ${content_safety_check.output}
    is: true
- name: default_result
  type: python
  source:
    type: code
    path: default_result.py
  inputs:
    request: ${inputs.query}
  activate:
    when: ${content_safety_check.output}
    is: false
- name: generate_result
  type: python
  source:
    type: code
    path: generate_result.py
  inputs:
    llm_result: ${llm_result.output}
    default_result: ${default_result.output}
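The routing this DAG expresses can be sketched in plain Python (a hand-rolled simulation for illustration, not how the promptflow executor actually works): a node whose `activate` condition is false is skipped and its output is treated as empty, which is why `generate_result` can fall back between branches:

```python
def run_flow(query: str, safety_passed: bool) -> str:
    # Simulate the activate config: each branch node runs only when the
    # content_safety_check output matches its `is:` value.
    llm_result = (
        "Prompt flow is a suite of development tools designed to streamline "
        "the end-to-end development cycle of LLM-based AI applications."
        if safety_passed
        else ""  # skipped node -> empty output
    )
    default_result = (
        f"I'm not familiar with your query: {query}."
        if not safety_passed
        else ""
    )
    # generate_result: prefer the LLM branch, fall back to the default.
    return llm_result or default_result
```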
9 changes: 9 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/generate_result.py
from promptflow import tool


@tool
def generate_result(llm_result: str = "", default_result: str = "") -> str:
    # Exactly one branch node runs (the other is skipped by its activate
    # config), so pick whichever output is non-empty.
    if llm_result:
        return llm_result
    else:
        return default_result
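A quick check of the merge behavior, restating the function above without the `@tool` decorator so it runs standalone:

```python
def generate_result(llm_result: str = "", default_result: str = "") -> str:
    # Under the activate config only one branch runs, so exactly one
    # argument is non-empty; the non-empty one wins.
    return llm_result if llm_result else default_result
```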
9 changes: 9 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/llm_result.py
from promptflow import tool


@tool
def llm_result(request: str) -> str:
    # Dummy node: returns a canned answer instead of calling Azure OpenAI.
    return (
        "Prompt flow is a suite of development tools designed to streamline "
        "the end-to-end development cycle of LLM-based AI applications."
    )
2 changes: 2 additions & 0 deletions examples/flows/standard/conditional-flow-for-if-else/requirements.txt
promptflow
promptflow-tools