Merge branch 'users/gega/updatellmdocserverless' of https://github.com/microsoft/promptflow into users/gega/updatellmdocserverless
gegao-MS committed Apr 25, 2024
2 parents 118ac1a + 27e132c commit c42c8c0
Showing 121 changed files with 2,860 additions and 1,120 deletions.
64 changes: 64 additions & 0 deletions .github/workflows/samples_tracing_llm_tracellm.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,64 @@
# This code is autogenerated.
# Code is generated by running custom script: python3 readme.py
# Any manual changes to this file may cause incorrect behavior.
# Any manual changes will be overwritten if the code is regenerated.

name: samples_tracing_llm_tracellm
on:
schedule:
- cron: "36 21 * * *" # Every day starting at 5:36 BJT
pull_request:
branches: [ main ]
paths: [ examples/tutorials/tracing/llm/**, .github/workflows/samples_tracing_llm_tracellm.yml, examples/requirements.txt, examples/connections/azure_openai.yml ]
workflow_dispatch:

env:
IS_IN_CI_PIPELINE: "true"

jobs:
samples_tracing_llm_tracellm:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Azure Login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Setup Python 3.9 environment
uses: actions/setup-python@v4
with:
python-version: "3.9"
- name: Prepare requirements
run: |
python -m pip install --upgrade pip
pip install -r ${{ github.workspace }}/examples/requirements.txt
pip install -r ${{ github.workspace }}/examples/dev_requirements.txt
- name: setup .env file
working-directory: examples/tutorials/tracing/llm
run: |
AOAI_API_KEY=${{ secrets.AOAI_API_KEY_TEST }}
AOAI_API_ENDPOINT=${{ secrets.AOAI_API_ENDPOINT_TEST }}
AOAI_API_ENDPOINT=$(echo ${AOAI_API_ENDPOINT//\//\\/})
if [[ -e .env.example ]]; then
echo "env replacement"
sed -i -e "s/<your_AOAI_key>/$AOAI_API_KEY/g" -e "s/<your_AOAI_endpoint>/$AOAI_API_ENDPOINT/g" .env.example
mv .env.example .env
fi
if [[ -e ../.env.example ]]; then
echo "env replacement"
sed -i -e "s/<your_AOAI_key>/$AOAI_API_KEY/g" -e "s/<your_AOAI_endpoint>/$AOAI_API_ENDPOINT/g" ../.env.example
mv ../.env.example ../.env
fi
- name: Create Aoai Connection
run: pf connection create -f ${{ github.workspace }}/examples/connections/azure_openai.yml --set api_key="${{ secrets.AOAI_API_KEY_TEST }}" api_base="${{ secrets.AOAI_API_ENDPOINT_TEST }}"
- name: Test Notebook
working-directory: examples/tutorials/tracing/llm
run: |
papermill -k python trace-llm.ipynb trace-llm.output.ipynb
- name: Upload artifact
if: ${{ always() }}
uses: actions/upload-artifact@v3
with:
name: artifact
path: examples/tutorials/tracing/llm
10 changes: 9 additions & 1 deletion .github/workflows/sdk-cli-azure-test-production.yml
Expand Up @@ -73,7 +73,7 @@ jobs:
run: |
set -xe
poetry install --with ci,test
poetry run pip show promptflow-tracing
poetry run pip show promptflow-core
poetry run pip show promptflow-devkit
Expand All @@ -85,3 +85,11 @@ jobs:
working-directory: ${{ env.WORKING_DIRECTORY }}
run: |
poetry run pytest ${{ inputs.filepath }} -n auto -m "unittest or e2etest"
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v3
with:
name: Test Results (Python ${{ matrix.pythonVersion }}) (OS ${{ matrix.os }})
path: |
${{ env.WORKING_DIRECTORY }}/tests/sdk_cli_azure_test/count.json
12 changes: 6 additions & 6 deletions docs/cloud/azureai/consume-connections-from-azure-ai.md
Expand Up @@ -43,13 +43,13 @@ Note:

Currently, we support three types of connections:

|Connection provider|Type|Description|Provider Specification|Use Case|
|---|---|---|---|---|
| Local Connections| Local| Enables consume the connections created and locally and stored in local sqlite. |NA| Ideal when connections need to be stored and managed locally.|
|Azure AI connection - For current working directory| Cloud provider| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project.| Specify the resource ID in a `config.json` file placed in the project folder. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#azureml)| A dynamic approach for consuming connections from different providers in specific projects. Allows for setting different provider configurations for different flows by updating the `config.json` in the project folder.|
|Azure AI connection - For this machine| Cloud| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project. | Use a `connection string` to specify a cloud resource as the provider on your local machine. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#full-azure-machine-learning-workspace-resource-id)|A global provider setting that applies across all working directories on your machine.|
|Connection provider|Type|Description| Provider Specification |Use Case|
|---|---|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---|
| Local Connections| Local| Enables consuming connections that are created and stored locally in a local SQLite database. | NA | Ideal when connections need to be stored and managed locally.|
|Azure AI connection - For current working directory| Cloud provider| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project.| Specify the resource ID in a `config.json` file placed in the project folder. <br> [Click here for more details](../../how-to-guides/set-promptflow-configs.md#azureml) | A dynamic approach for consuming connections from different providers in specific projects. Allows for setting different provider configurations for different flows by updating the `config.json` in the project folder.|
|Azure AI connection - For this machine| Cloud| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project. | Use a `connection string` to specify a cloud resource as the provider on your local machine. <br> [Click here for more details](../../how-to-guides/set-promptflow-configs.md#full-azure-machine-learning-workspace-resource-id) |A global provider setting that applies across all working directories on your machine.|
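
Where the table above mentions specifying the resource ID in a `config.json` placed in the project folder, a minimal sketch could look like the following, assuming the standard Azure ML workspace `config.json` layout (all values are placeholders, not real resources):

```json
{
  "subscription_id": "<subscription-id>",
  "resource_group": "<resource-group>",
  "workspace_name": "<workspace-name>"
}
```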

## Next steps

- Set global configs on [connection.provider](../../how-to-guides/set-global-configs.md#connectionprovider).
- Set global configs on [connection.provider](../../how-to-guides/set-promptflow-configs.md#connectionprovider).
- [Manage connections on local](../../how-to-guides/manage-connections.md).
55 changes: 43 additions & 12 deletions docs/dev/dev_setup.md
Expand Up @@ -2,10 +2,20 @@

## Set up process

- First create a new [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) environment. Please specify python version as 3.9.
`conda create -n <env_name> python=3.9`.
- Activate the env you created.
- In root folder, run `python scripts/dev-setup/main.py` to install the packages and dependencies; if you are using Visual Studio Code, it is recommended to add `--vscode` (which is `python scripts/dev-setup/main.py --vscode`) to enable VS Code to recognize the packages.
Select either Conda or Poetry to set up your development environment.

1. Conda environment setup
- First create a new [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) environment. Please specify a Python version among 3.8/3.9/3.10/3.11, e.g.
`conda create -n <env_name> python=3.9`.
- Activate the env you created.
- In root folder, run `python scripts/dev-setup/main.py` to install the packages and dependencies; if you are using Visual Studio Code, it is recommended to add `--vscode` (which is `python scripts/dev-setup/main.py --vscode`) to enable VS Code to recognize the packages.

2. Poetry environment setup
- Install [poetry](https://python-poetry.org/docs/). Please use a Python version among 3.8/3.9/3.10/3.11.
- Each folder under [src](../../src/) (except the promptflow folder) is a separate package, so you need to install the dependencies for each package.
- `poetry install -C promptflow-core -E <extra> --with dev,test`
- `poetry install -C promptflow-devkit -E <extra> --with dev,test`
- `poetry install -C promptflow-azure -E <extra> --with dev,test`

## How to run tests

Expand All @@ -21,11 +31,19 @@ After the above setup process is finished, you can use the `pytest` command to run tests,

### Run tests via command

- Run all tests under a folder: `pytest src/promptflow/tests -v`
- Run a single test: ` pytest src/promptflow/tests/promptflow_test/e2etests/test_executor.py::TestExecutor::test_executor_basic_flow -v`
1. Conda environment
- Run all tests under a folder: `pytest src/promptflow/tests -v`, `pytest src/promptflow-devkit/tests -v`
- Run a single test: ` pytest src/promptflow/tests/promptflow_test/e2etests/test_executor.py::TestExecutor::test_executor_basic_flow -v`

2. Poetry environment: there is a limitation on running tests under the src/promptflow folder; you can only run tests under the other package folders.
   - For example, under the package folder `promptflow-devkit`, you can run `poetry run pytest tests/sdk_cli_test -v`

### Run tests in VSCode

---

#### Conda environment

1. Set up your Python interpreter

- Open the Command Palette (Ctrl+Shift+P) and select `Python: Select Interpreter`.
Expand Down Expand Up @@ -76,6 +94,16 @@ Open `.vscode/settings.json`, write `"--ignore=src/promptflow/tests/sdk_cli_azur

![img2](../media/dev_setup/set_up_pycharm_2.png)

---

#### Poetry environment

VS Code can pick up the correct environment automatically if you open it under one of the package folders.

There are currently some limitations: IntelliSense may not work properly in a Poetry environment.

PyCharm behaves differently from VS Code: it automatically picks up the correct environment.

## How to write docstring

A clear and consistent API documentation is crucial for the usability and maintainability of our codebase. Please refer to [API Documentation Guidelines](./documentation_guidelines.md) to learn how to write docstring when developing the project.
Expand All @@ -90,11 +118,11 @@ A clear and consistent API documentation is crucial for the usability and mainta
- Flow run: `src/promptflow/tests/sdk_cli_test/e2etests/`
- Flow run in azure: `src/promptflow/tests/sdk_cli_azure_test/e2etests/`
- Test file name and the test case name all start with `test_`.
- A basic test example, see [test_connection.py](../../src/promptflow/tests/sdk_cli_test/e2etests/test_connection.py).
- A basic test example, see [test_connection.py](../../src/promptflow-devkit/tests/sdk_cli_test/e2etests/test_connection.py).

### Test structure

In the future, tests will under corresponding source folder, and test_configs are shared among different test folders:
Tests are under the corresponding source folder, and test_configs are shared among the different test folders:

- src/promptflow/
- test_configs/
Expand All @@ -117,11 +145,14 @@ In the future, tests will under corresponding source folder, and test_configs ar
- unittests/
- src/promptflow-devkit/
- tests/
- executable/ # Test with promptflow-devkit[executable] installed.
- sdk_cli_tests/
- e2etests/
- unittests/
- src/promptflow-azure/
- tests/ # promptflow-azure doesn't have extra-requires, so all tests are under the test folder.
- e2etests/
- unittests/
- tests/
- sdk_cli_azure_test/
- e2etests/
- unittests/

Principle #1: Put the tests in the same folder as the code they are testing, to ensure the code works with minimal environment requirements.

Expand Down
18 changes: 12 additions & 6 deletions docs/dev/replay-e2e-test.md
@@ -1,6 +1,6 @@
# Replay end-to-end tests

* This document introduces replay tests for those located in [sdk_cli_azure_test](../../src/promptflow/tests/sdk_cli_azure_test/e2etests/) and [sdk_cli_test](../../src/promptflow/tests/sdk_cli_test/e2etests/).
* This document introduces replay tests for those located in [sdk_cli_azure_test](../../src/promptflow-azure/tests/sdk_cli_azure_test/e2etests/) and [sdk_cli_test](../../src/promptflow-devkit/tests/sdk_cli_test/e2etests/).
* The primary purpose of replay tests is to avoid the need for credentials, Azure workspaces, OpenAI tokens, and to directly test prompt flow behavior.
* Although there are different techniques behind recording/replaying, there are some common steps to run the tests in replay mode.
* The key switch for replay tests is the environment variable `PROMPT_FLOW_TEST_MODE`.
Expand All @@ -22,17 +22,23 @@ There are 3 representative values of the environment variable `PROMPT_FLOW_TEST_
- `record`: Tests run against the real backend, and network traffic will be sanitized (filter sensitive and unnecessary requests/responses) and recorded to local files (recordings).
- `replay`: There is no real network traffic between SDK/CLI and the backend, tests run against local recordings.

## Supported modules
* [promptflow-devkit](../../src/promptflow-devkit)
* [promptflow-azure](../../src/promptflow-azure)

## Update test recordings

To record a test, don’t forget to clone the full repo and set up the proper test environment following [dev_setup.md](./dev_setup.md):
1. Prepare some data.
1. Ensure you have installed the dev version of the promptflow-recording package.
* If it is not installed, run `pip install -e src/promptflow-recording` in the root directory of the repo.
2. Prepare some data.
* If you have changed/affected tests in __sdk_cli_test__: Copy or rename the file [dev-connections.json.example](../../src/promptflow/dev-connections.json.example) to `connections.json` in the same folder.
* If you have changed/affected tests in __sdk_cli_azure_test__: prepare your Azure ML workspace, make sure your Azure CLI is logged in, and set the environment variables `PROMPT_FLOW_SUBSCRIPTION_ID`, `PROMPT_FLOW_RESOURCE_GROUP_NAME`, `PROMPT_FLOW_WORKSPACE_NAME` and `PROMPT_FLOW_RUNTIME_NAME` (if needed) to point to your workspace.
2. Record the test.
3. Record the test.
* Set the environment variable `PROMPT_FLOW_TEST_MODE` to `'record'`. If you have a `.env` file, we recommend specifying it there. Here is an example [.env file](../../src/promptflow/.env.example). Then, just run the test that you want to record.
3. Once the test completed.
* If you have changed/affected tests in __sdk_cli_azure_test__: There should be one new YAML file located in `src/promptflow/tests/test_configs/recordings/`, containing the network traffic of the test.
* If you have changed/affected tests in __sdk_cli_test__: There may be changes in the folder `src/promptflow/tests/test_configs/node_recordings/`. Don’t worry if there are no changes, because similar LLM calls may have been recorded before.
4. Once the test completes:
* If you have changed/affected tests in __sdk_cli_azure_test__: There should be one new YAML file located in [Azure recording folder](../../src/promptflow-recording/recordings/azure/), containing the network traffic of the test.
* If you have changed/affected tests in __sdk_cli_test__: There may be changes in the folder [Local recording folder](../../src/promptflow-recording/recordings/local/). Don’t worry if there are no changes, because similar LLM calls may have been recorded before.
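
The steps above can be sketched as a shell session. The pytest target path is illustrative (borrowed from dev_setup.md) and is commented out so the sketch stands on its own:

```shell
# Sketch: switch into record mode, run the target test, then switch back to replay.
export PROMPT_FLOW_TEST_MODE=record
# poetry run pytest tests/sdk_cli_test/e2etests/test_connection.py -v  # run inside a configured dev environment
echo "PROMPT_FLOW_TEST_MODE=$PROMPT_FLOW_TEST_MODE"

# After verifying the new recordings, flip back to replay for offline runs.
export PROMPT_FLOW_TEST_MODE=replay
echo "PROMPT_FLOW_TEST_MODE=$PROMPT_FLOW_TEST_MODE"
```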

## Techniques behind replay test

Expand Down
40 changes: 40 additions & 0 deletions docs/how-to-guides/chat-with-a-flow/index.md
@@ -0,0 +1,40 @@
# Chat with a flow

:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::

Prompt flow provides the chat window feature to facilitate an interactive chat experience within a local environment.
You can engage in conversation with the flow and view its responses directly within the chat window.

## Initiating a chat window
There are two methods to open a chat window: executing the prompt flow CLI command or clicking the
`Open test chat page` button when viewing a flow YAML file in the Prompt flow VS Code extension.

::::{tab-set}
:::{tab-item} CLI
:sync: CLI

The following CLI command allows you to trigger a chat window.
```shell
pf flow test --flow . --ui
```
Running the above command will yield the following example output:
```
Starting prompt flow service...
...
You can begin chat flow on http://127.0.0.1:**/v1.0/ui/chat?flow=***
```
The browser page corresponding to the chat URL will automatically open and direct the user to a chat page
corresponding to the passed flow:
![chat-basic-dag-flow](../../media/how-to-guides/chat-with-a-flow/chat-basic-dag-flow.png)
:::

:::{tab-item} VS Code Extension
:sync: VSC

Click the `Open test chat page` button while viewing a flow YAML file in the Prompt flow VS Code extension, and you
will be directed to the chat page.
![start-chat-window-in-vsc](../../media/how-to-guides/chat-with-a-flow/start-chat-window-in-vsc.png)
:::
::::
18 changes: 16 additions & 2 deletions docs/how-to-guides/deploy-a-flow/deploy-using-docker.md
Expand Up @@ -87,11 +87,25 @@ You'll need to set up the environment variables in the container to make the con
### Run with `docker run`

You can run the docker image directly set via below commands:
#### Run with `flask` serving engine
You can run the Docker image directly via the commands below; this will by default use the `flask` serving engine:
```bash
# The started service will listen on port 8080. You can map the port to any port on the host machine as you want.
docker run -p 8080:8080 -e OPEN_AI_CONNECTION_API_KEY=<secret-value> web-classification-serve
docker run -p 8080:8080 -e OPEN_AI_CONNECTION_API_KEY=<secret-value> -e PROMPTFLOW_WORKER_NUM=<expect-worker-num> -e PROMPTFLOW_WORKER_THREADS=<expect-thread-num-per-worker> web-classification-serve
```
Note that:
- `PROMPTFLOW_WORKER_NUM`: optional setting; it controls how many workers are started in your container. The default value is 8.
- `PROMPTFLOW_WORKER_THREADS`: optional setting; it controls how many threads are started in each worker. The default value is 1. **This setting only works for the `flask` engine.**

#### Run with `fastapi` serving engine
Starting from pf 1.10.0, we support a new `fastapi`-based serving engine. You can choose the `fastapi` serving engine via the commands below:
```bash
# The started service will listen on port 8080. You can map the port to any port on the host machine as you want.
docker run -p 8080:8080 -e OPEN_AI_CONNECTION_API_KEY=<secret-value> -e PROMPTFLOW_SERVING_ENGINE=fastapi -e PROMPTFLOW_WORKER_NUM=<expect-worker-num> web-classification-serve
```
Note that:
- `PROMPTFLOW_WORKER_NUM`: optional setting; it controls how many workers are started in your container. The default value is 8.
- `PROMPTFLOW_SERVING_ENGINE`: optional setting; it controls which serving engine to use in your container. The default value is `flask`; currently only `flask` and `fastapi` are supported.

### Test the endpoint
After starting the service, you can use curl to test it:
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -187,7 +187,7 @@ pip install my-tools-package[azure]>=0.0.8

If you are unable to see any options in a dynamic list tool input, you may see an error message below the input field stating:

"Unable to display list of items due to XXX. Please contact the tool author/support team for troubleshooting assistance."
"Unable to retrieve result due to XXX. Please contact the tool author/support team for troubleshooting assistance."

If this occurs, follow these troubleshooting steps:

Expand Down
5 changes: 3 additions & 2 deletions docs/how-to-guides/index.md
Expand Up @@ -12,8 +12,9 @@ tracing/index
:caption: Flow
:maxdepth: 1
develop-a-flow/index
run-and-evaluate-a-flow/index
execute-flow-as-a-function
chat-with-a-flow/index
run-and-evaluate-a-flow/index
```

```{toctree}
Expand All @@ -33,7 +34,7 @@ enable-streaming-mode
:caption: FAQ
:maxdepth: 1
faq
set-global-configs
set-promptflow-configs
manage-connections
tune-prompts-with-variants
```