This repository has been archived by the owner on Aug 14, 2024. It is now read-only.

Commit
initial push of LLMOps code
AbeOmor committed Aug 23, 2023
1 parent 3837d24 commit b36f2e7
Showing 4 changed files with 195 additions and 0 deletions.
52 changes: 52 additions & 0 deletions .github/workflows/deploy-pf-online-endpoint-pipeline.yml
@@ -0,0 +1,52 @@
name: Deploy Prompts with Promptflow

on:
  workflow_dispatch:
  workflow_run:
    workflows: ["run-eval-pf-pipeline"]
    branches: [main]
    types:
      - completed

env:
  GROUP: ${{secrets.GROUP}}
  WORKSPACE: ${{secrets.WORKSPACE}}
  SUBSCRIPTION: ${{secrets.SUBSCRIPTION}}
  RUN_NAME: web_classification_variant_1_20230816_215600_605116
  EVAL_RUN_NAME: classification_accuracy_eval_default_20230821_111809_077086
  ENDPOINT_NAME: web-classification

jobs:
  create-endpoint-and-deploy-pf:
    runs-on: ubuntu-latest
    if: ${{ github.event_name == 'workflow_dispatch' || github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: check out repo
        uses: actions/checkout@v2
      - name: install az ml extension
        run: az extension add -n ml -y
      - name: azure login
        uses: azure/login@v1
        with:
          creds: ${{secrets.AZURE_CREDENTIALS}}
      - name: list current directory
        run: ls
      - name: set default subscription
        run: |
          az account set -s ${{env.SUBSCRIPTION}}
      - name: create Hash
        run: echo "HASH=$(echo -n $RANDOM | sha1sum | cut -c 1-6)" >> "$GITHUB_ENV"
      - name: create unique endpoint name
        run: echo "ENDPOINT_NAME=$(echo 'web-classification-'$HASH)" >> "$GITHUB_ENV"
      - name: display endpoint name
        run: echo "Endpoint name is:" ${{env.ENDPOINT_NAME}}
      - name: setup endpoint
        run: az ml online-endpoint create --file promptflow/deployment/endpoint.yaml --name ${{env.ENDPOINT_NAME}} -g ${{env.GROUP}} -w ${{env.WORKSPACE}}
      - name: setup deployment
        run: az ml online-deployment create --file promptflow/deployment/deployment.yaml --endpoint-name ${{env.ENDPOINT_NAME}} --all-traffic -g ${{env.GROUP}} -w ${{env.WORKSPACE}}
      - name: check the status of the endpoint
        run: az ml online-endpoint show -n ${{env.ENDPOINT_NAME}} -g ${{env.GROUP}} -w ${{env.WORKSPACE}}
      - name: check the status of the deployment
        run: az ml online-deployment get-logs --name blue --endpoint-name ${{env.ENDPOINT_NAME}} -g ${{env.GROUP}} -w ${{env.WORKSPACE}}
      - name: invoke model
        run: az ml online-endpoint invoke --name ${{env.ENDPOINT_NAME}} --request-file promptflow/deployment/sample-request.json -g ${{env.GROUP}} -w ${{env.WORKSPACE}}
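The "create Hash" and "create unique endpoint name" steps above can be reproduced locally as a quick sanity check. This is a sketch run outside GitHub Actions (it assumes a bash shell with GNU coreutils `sha1sum`), not part of the committed workflow:

```shell
# Derive a 6-character hash from a random value, as the workflow does
HASH=$(echo -n $RANDOM | sha1sum | cut -c 1-6)
# Build the unique endpoint name from the hash
ENDPOINT_NAME="web-classification-${HASH}"
echo "Endpoint name is: ${ENDPOINT_NAME}"
```

Because endpoint names must be unique within a region, suffixing with a short hash lets repeated workflow runs create fresh endpoints without name collisions.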
122 changes: 122 additions & 0 deletions promptflow/web-classification/README.md
@@ -0,0 +1,122 @@
# Web Classification

This flow demonstrates multi-class classification with an LLM. Given a URL, it classifies the URL into one web category using just a few shots, a simple summarization prompt, and a classification prompt.

## Tools used in this flow
- LLM Tool
- Python Tool

## What you will learn

In this flow, you will learn
- how to compose a classification flow with an LLM.
- how to feed few-shot examples to the LLM classifier.

## Prerequisites

Install the promptflow SDK and other dependencies:
```bash
pip install -r requirements.txt
```

## Getting Started

### 1. Setup connection

If you don't have one already, prepare your Azure OpenAI resource by following these [instructions](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and retrieve your `api_key`.

```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```

### 2. Configure the flow with your connection
`flow.dag.yaml` is already configured with a connection named `azure_open_ai_connection`.
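For orientation, the connection binding inside `flow.dag.yaml` typically looks like the sketch below. This is a hypothetical excerpt, not the actual file in this commit; the node and deployment names are assumptions based on the `run.yml` included here:

```yaml
# Hypothetical excerpt of an LLM node in flow.dag.yaml; names are assumptions
nodes:
- name: classify_with_llm
  type: llm
  connection: azure_open_ai_connection
  inputs:
    deployment_name: gpt-35-turbo-0301
```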

### 3. Test flow with single line data

```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with user specified inputs
pf flow test --flow . --inputs url='https://www.youtube.com/watch?v=kYqRtjDBci8'
```

### 4. Run with multi-line data

```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --stream

# (Optional) create a random run name
run_name="web_classification_"$(openssl rand -hex 12)
# create run using yaml file, run_name will be used in following contents, --name is optional
pf run create --file run.yml --stream --name $run_name
```

```bash
# list run
pf run list
# show run
pf run show --name $run_name
# show run outputs
pf run show-details --name $run_name
```
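The `data.jsonl` file referenced above holds one JSON object per line, with the flow input (`url`) and, for evaluation, a ground-truth label (`answer`, as used by the column mapping in the next section). A minimal sketch of creating such a file (the sample label value is an assumption):

```shell
# Write a one-line sample data.jsonl; "Movie" is an assumed example label
cat > data.jsonl <<'EOF'
{"url": "https://www.youtube.com/watch?v=kYqRtjDBci8", "answer": "Movie"}
EOF
```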

### 5. Run with classification evaluation flow

Create an `evaluation` run:
```bash
# (Optional) save the previous run name into a variable, and create a new random run name for further use
prev_run_name=$run_name
run_name="classification_accuracy_"$(openssl rand -hex 12)
# create run using command line args
pf run create --flow ../../evaluation/classification-accuracy-eval --data ./data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run $prev_run_name --stream
# create run using yaml file, --name is optional
pf run create --file run_evaluation.yml --run $prev_run_name --stream --name $run_name
```

```bash
pf run show-details --name $run_name
pf run show-metrics --name $run_name
pf run visualize --name $run_name
```


### 6. Submit run to cloud
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>

# create run
pfazure run create --flow . --data ./data.jsonl --stream --runtime demo-mir --subscription <your_subscription_id> -g <your_resource_group_name> -w <your_workspace_name>
# pfazure run create --flow . --data ./data.jsonl --stream # serverless compute

# (Optional) create a new random run name for further use
run_name="web_classification_"$(openssl rand -hex 12)

# create run using yaml file, --name is optional
pfazure run create --file run.yml --runtime demo-mir --name $run_name
# pfazure run create --file run.yml --stream --name $run_name # serverless compute


pfazure run stream --name $run_name
pfazure run show-details --name $run_name
pfazure run show-metrics --name $run_name


# (Optional) save the previous run name into a variable, and create a new random run name for further use
prev_run_name=$run_name
run_name="classification_accuracy_"$(openssl rand -hex 12)

# create evaluation run, --name is optional
pfazure run create --flow ../../evaluation/classification-accuracy-eval --data ./data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run $prev_run_name --runtime demo-mir
pfazure run create --file run_evaluation.yml --run $prev_run_name --stream --name $run_name --runtime demo-mir

pfazure run stream --name $run_name
pfazure run show --name $run_name
pfazure run show-details --name $run_name
pfazure run show-metrics --name $run_name
pfazure run visualize --name $run_name
```
7 changes: 7 additions & 0 deletions promptflow/web-classification/azure_openai.yml
@@ -0,0 +1,7 @@
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: Default_AzureOpenAI
type: azure_open_ai
api_key: <api-key>
api_base: "https://eastus.api.cognitive.microsoft.com"
api_type: "azure"
api_version: "2023-03-15-preview"
14 changes: 14 additions & 0 deletions promptflow/web-classification/run.yml
@@ -0,0 +1,14 @@
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl

# define cloud resource
runtime: <runtime-name>

connections:
  classify_with_llm:
    connection: Default_AzureOpenAI
    deployment_name: gpt-35-turbo-0301
  summarize_text_content:
    connection: Default_AzureOpenAI
    deployment_name: gpt-35-turbo-0301
