[doc] Refine tracing doc by separating tracing and devkit (#3002)
# Description

This pull request splits tracing index into two pages, one for
`promptflow-tracing`, one for `promptflow-devkit`.

# All Promptflow Contribution checklist:
- [x] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other
significant changes.**
- [x] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated
review from promptflow team. Learn more: [suggested
workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [x] Title of the pull request is clear and informative.
- [x] There are a small number of commits, each of which has an
informative message. This means that previously merged commits do not
appear in the history of the PR. For more information on cleaning up the
commits in your PR, [see this
page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.
zhengfeiwang authored Apr 26, 2024
1 parent 98d3e69 commit 4351ea8
Showing 4 changed files with 133 additions and 91 deletions.
@@ -109,7 +109,7 @@ python path/to/entry.py
Users can also leverage prompt flow to test the class as a `flow`.

```bash
pf flow test --flow file:ChatFlow --init init.json --inputs "question=What is ChatGPT?"
pf flow test --flow file:ChatFlow --init init.json --inputs question="What is ChatGPT?"
```

With the `flow` concept, users can further perform a rich set of tasks, such as:
113 changes: 23 additions & 90 deletions docs/how-to-guides/tracing/index.md
@@ -4,11 +4,15 @@
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::

Prompt flow provides the trace feature to capture and visualize the internal execution details of all flows.
Traces record specific events or the state of an application during execution. They can include data about function calls, variable values, system events, and more. Traces help break down an application's components into discrete inputs and outputs, which is crucial for debugging and understanding an application. You can learn more about traces [here](https://opentelemetry.io/docs/concepts/signals/traces/).
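
As an illustration (all names and numbers below are hypothetical), a single trace for a chat application might break down into nested spans like:

```text
chat (root span, 2.1s)
├── retrieve_context (0.3s)
└── openai.chat.completions.create (1.7s)
    ├── prompt tokens: 120
    └── completion tokens: 56
```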

For `DAG flow`, users can track and visualize node-level inputs/outputs of flow execution, which provides critical insights for developers to understand the internal details of execution.
Prompt flow provides the trace feature to enable users to trace LLM calls or functions, as well as LLM frameworks like `LangChain` and `AutoGen`, following the [OpenTelemetry specification](https://opentelemetry.io/docs/specs/otel/).

For `Flex flow` developers, who might use different frameworks (LangChain, Semantic Kernel, OpenAI, various kinds of agents) to create LLM-based applications, prompt flow allows users to instrument their code in an [OpenTelemetry](https://opentelemetry.io/)-compatible way, and to visualize it using the UI provided by the promptflow devkit.
## Installing the package

```bash
pip install promptflow-tracing
```

## Instrumenting user's code

@@ -18,7 +22,7 @@ Let's start with the simplest example: add a single line of code, **`start_trace()`**

```python
from openai import OpenAI
from promptflow.tracing import start_trace

# start_trace() will print a url for trace detail visualization
# instrument OpenAI
start_trace()

client = OpenAI()
@@ -34,18 +38,7 @@ completion = client.chat.completions.create(
print(completion.choices[0].message)
```

Running the above Python script will produce example output like the below:
```
Prompt flow service has started...
You can view the traces from local: http://localhost:<port>/v1.0/ui/traces/?#collection=basic
```

Click the trace URL to see a trace list corresponding to each LLM call:
![LLM-trace-list](../../media/trace/LLM-trace-list.png)


Click on one record to display the LLM details in a chat-window experience, together with other LLM call parameters:
![LLM-trace-detail](../../media/trace/LLM-trace-detail.png)
OpenAI is now instrumented, and since prompt flow follows the OpenTelemetry specification, users can fully leverage their OpenTelemetry knowledge to work with these traces of OpenAI calls.

### Trace on any function
A more common scenario is an application with a complicated code structure, where developers would like to add traces on the critical paths they want to debug and monitor.
@@ -56,6 +49,7 @@ Executing the command below will print a URL that displays the trace records and trace details.

```python
from promptflow.tracing import trace

# trace your function
@trace
def code_gen(client: AzureOpenAI, question: str) -> str:
@@ -83,87 +77,26 @@ def code_gen(client: AzureOpenAI, question: str) -> str:
python math_to_code.py
```
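
Conceptually, a tracing decorator like `@trace` wraps a function so that each call's inputs, output, and timing are recorded as a span. The stdlib sketch below illustrates the idea only; it is not promptflow's actual implementation, and the `TRACES` store and `traced` name are invented for illustration:

```python
import functools
import time

TRACES = []  # illustrative in-memory span store, not promptflow's storage

def traced(func):
    """Record inputs, output, and duration of each call (conceptual sketch)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        TRACES.append({
            "name": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration": time.time() - start,
        })
        return result
    return wrapper

@traced
def code_gen(question: str) -> str:
    # stand-in for a real LLM call
    return f"# code for: {question}"

print(code_gen("add two numbers"))
```

A real tracing library additionally links spans into a parent/child tree and exports them, but the record-per-call pattern above is the core of the decorator approach.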

## Trace visualization in flow test and batch run

### Flow test
If your application is created with DAG flow, all flow tests and batch runs will automatically enable the trace function. Take **[chat_with_pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf/)** as an example.

Run `pf flow test --flow .`; each flow test will generate a single line in the trace UI:
![flow-trace-record](../../media/trace/flow-trace-records.png)

Click a record to visualize the trace details as a tree view.

![flow-trace-detail](../../media/trace/flow-trace-detail.png)

### Evaluate against batch data
Continuing with **[chat_with_pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf)** as the example, you can use the commands below to trigger a batch run:

```shell
pf run create -f batch_run.yaml
```
Or
```shell
pf run create --flow . --data "./data/bert-paper-qna.jsonl" --column-mapping chat_history='${data.chat_history}' pdf_url='${data.pdf_url}' question='${data.question}'
```
Then you will get a run-related trace URL, e.g. `http://localhost:<port>/v1.0/ui/traces?run=chat_with_pdf_20240226_181222_219335`

![batch_run_record](../../media/trace/batch_run_record.png)

### Search

Trace UI supports simple Python expressions for search, as demonstrated in the GIF below:

![advanced_search](../../media/trace/advanced-search.gif)
## Trace LLM and frameworks

Currently it supports the bool operators `and` and `or`; the comparison operators `==`, `!=`, `>`, `>=`, `<`, `<=`; and the searchable fields `name`, `kind`, `status`, `start_time`, `cumulative_token_count.total`, `cumulative_token_count.prompt` and `cumulative_token_count.completion`. You can find hints by clicking the button to the right of the search box.
Prompt flow tracing works not only for general LLM applications, but also for frameworks like `autogen` and `langchain`. Beyond basic tracing capability, prompt flow also provides several trace toolkits that improve the tracing experience (e.g., a trace UI for visualization).

![search_hint](../../media/trace/trace-ui-search-hint.png)

## Local trace management

### Delete

Prompt flow provides the capability to delete traces in local storage; users can delete traces by collection, time range, or prompt flow run, using both the CLI and SDK:

::::{tab-set}
:::{tab-item} CLI
:sync: CLI

```bash
pf trace delete --collection <collection-name> # delete specific collection
pf trace delete --collection <collection-name> --started-before '2024-03-01T16:00:00.123456' # delete traces started before the time in specific collection
pf trace delete --run <run-name> # delete traces originated from specific prompt flow run
```
:::

:::{tab-item} SDK
:sync: SDK

```python
from promptflow.client import PFClient

pf = PFClient()
pf.traces.delete(collection="<collection-name>") # delete specific collection
pf.traces.delete(collection="<collection-name>", started_before="2024-03-01T16:00:00.123456") # delete traces started before the time in specific collection
pf.traces.delete(run="<run-name>") # delete traces originated from specific prompt flow run
```

:::

::::

## Trace with prompt flow

Prompt flow tracing works not only for general LLM applications, but also for frameworks like `autogen` and `langchain`:

1. Example: **[Add trace for LLM](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/tracing/llm)**
1. Example: **[Add trace for LLM](https://microsoft.github.io/promptflow/tutorials/trace-llm.html)**

![llm-trace-detail](../../media/trace/llm-app-trace-detail.png)

2. Example: **[Add trace for Autogen](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/tracing/autogen-groupchat/)**
2. Example: **[Add trace for Autogen](https://microsoft.github.io/promptflow/tutorials/trace-autogen-groupchat.html)**

![autogen-trace-detail](../../media/trace/autogen-trace-detail.png)

3. Example: **[Add trace for Langchain](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/tracing/langchain)**
3. Example: **[Add trace for Langchain](https://microsoft.github.io/promptflow/tutorials/trace-langchain.html)**

![langchain-trace-detail](../../media/trace/langchain-trace-detail.png)

```{toctree}
:maxdepth: 1
:hidden:
trace-ui
manage
```
40 changes: 40 additions & 0 deletions docs/how-to-guides/tracing/manage.md
@@ -0,0 +1,40 @@
# Manage traces

:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::

Prompt flow provides several trace toolkits in `promptflow-devkit`. This page introduces how to delete traces in local storage with the CLI/SDK.

## Local trace management

### Delete

Prompt flow provides the capability to delete traces in local storage; users can delete traces by collection (a bucket of traces, which can be specified with `start_trace`), time range, or prompt flow run, using both the CLI and SDK:

::::{tab-set}
:::{tab-item} CLI
:sync: CLI

```bash
pf trace delete --collection <collection-name> # delete specific collection
pf trace delete --collection <collection-name> --started-before '2024-03-01T16:00:00.123456' # delete traces started before the time in specific collection
pf trace delete --run <run-name> # delete traces originated from specific prompt flow run
```
:::

:::{tab-item} SDK
:sync: SDK

```python
from promptflow.client import PFClient

pf = PFClient()
pf.traces.delete(collection="<collection-name>") # delete specific collection
pf.traces.delete(collection="<collection-name>", started_before="2024-03-01T16:00:00.123456") # delete traces started before the time in specific collection
pf.traces.delete(run="<run-name>") # delete traces originated from specific prompt flow run
```

:::

::::
69 changes: 69 additions & 0 deletions docs/how-to-guides/tracing/trace-ui.md
@@ -0,0 +1,69 @@
# Visualize traces

:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::

Prompt flow provides several trace toolkits in `promptflow-devkit`. This page introduces the trace UI, where users can better capture and visualize the internal execution details of flows. With the trace UI, users can track and visualize flow execution, which provides critical insights for developers to understand the internal details of execution.

## Overview

With `promptflow-devkit` installed, running a Python script with `start_trace` will produce example output like the below:

```text
Prompt flow service has started...
You can view the traces from local: http://localhost:<port>/v1.0/ui/traces/?#collection=basic
```

Click the URL to see a trace list corresponding to each LLM call:
![LLM-trace-list](../../media/trace/LLM-trace-list.png)


Click on one record to display the LLM details in a chat-window experience, together with other LLM call parameters:
![LLM-trace-detail](../../media/trace/LLM-trace-detail.png)

When trace and flow are combined, the trace UI provides a more comprehensive view of the flow execution; users can easily track the execution details and debug execution issues.

### Flow test

If your application is created with DAG flow, all flow tests and batch runs will automatically enable the trace function. Take **[chat_with_pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf/)** as an example.

Run `pf flow test --flow .`; each flow test will generate a single line in the trace UI:

![flow-trace-record](../../media/trace/flow-trace-records.png)

Click a record to visualize the trace details as a tree view.

![flow-trace-detail](../../media/trace/flow-trace-detail.png)

### Evaluate against batch data

Continuing with **[chat_with_pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf)** as the example, you can use the command below under the flow folder to trigger a batch run (see [Run and evaluate a flow](https://microsoft.github.io/promptflow/how-to-guides/run-and-evaluate-a-flow/index.html) to learn what the command does):

```shell
pf run create --flow . --data "./data/bert-paper-qna.jsonl" --column-mapping chat_history='${data.chat_history}' pdf_url='${data.pdf_url}' question='${data.question}'
```

Then you will get a run-related trace URL, e.g. `http://localhost:<port>/v1.0/ui/traces?run=chat_with_pdf_20240226_181222_219335`

![batch_run_record](../../media/trace/batch_run_record.png)

### Search

Trace UI supports simple Python expressions for search, as demonstrated in the GIF below:

![advanced_search](../../media/trace/advanced-search.gif)

Currently it supports:

- Operators:
- bool: `and` and `or`
- compare: `==`, `!=`, `>`, `>=`, `<` and `<=`
- Searchable fields:
- metadata: `name`, `kind` and `status`
- time: `start_time`
- token count: `cumulative_token_count.total`, `cumulative_token_count.prompt` and `cumulative_token_count.completion`
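
For example (the field values below are illustrative, not from a real trace), a search expression combining these operators and fields might look like:

```text
kind == "LLM" and cumulative_token_count.total > 1000
status == "Ok" or start_time >= "2024-03-01"
```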

You can also find hints by clicking the button to the right of the search box:

![search_hint](../../media/trace/trace-ui-search-hint.png)
