Commit
Merge branch 'main' into dev/linatang/fix-flex-flow-aggregation-metrices
lumoslnt authored Apr 26, 2024
2 parents 250db56 + 4560eea commit acbb9bc
Showing 47 changed files with 2,671 additions and 1,120 deletions.
5 changes: 3 additions & 2 deletions .cspell.json
@@ -217,10 +217,11 @@
"dcid",
"piezo",
"Piezo",
"cmpop"
"cmpop",
"omap"
],
"flagWords": [
"Prompt Flow"
],
"allowCompoundWords": true
}
}
1 change: 1 addition & 0 deletions docs/concepts/concept-connections.md
@@ -15,6 +15,7 @@ Prompt flow provides a variety of pre-built connections, including Azure Open AI
| [Open AI](https://openai.com/) | LLM or Python |
| [Cognitive Search](https://azure.microsoft.com/en-us/products/search) | Vector DB Lookup or Python |
| [Serp](https://serpapi.com/) | Serp API or Python |
| [Serverless](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/deployments-overview#deploy-models-with-model-as-a-service) | LLM or Python |
| Custom | Python |

By leveraging connections in prompt flow, you can easily establish and manage connections to external APIs and data sources, facilitating efficient data exchange and interaction within your AI applications.
10 changes: 5 additions & 5 deletions docs/how-to-guides/develop-a-prompty/index.md
@@ -83,9 +83,9 @@ model:
configuration:
type: azure_openai
azure_deployment: gpt-35-turbo
api_key: <api-key>
api_version: <api-version>
azure_endpoint: <azure-endpoint>
api_key: ${env:AZURE_OPENAI_API_KEY}
api_version: ${env:AZURE_OPENAI_API_VERSION}
azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
parameters:
max_tokens: 128
temperature: 0.2
@@ -168,8 +168,8 @@ model:
configuration:
type: openai
model: gpt-3.5-turbo
api_key: <api-key>
base_url: <api_base>
api_key: ${env:OPENAI_API_KEY}
base_url: ${env:OPENAI_BASE_URL}
parameters:
max_tokens: 128
temperature: 0.2
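The `${env:NAME}` placeholders introduced in these configurations are resolved from environment variables at load time, so secrets never need to live in the prompty file. As an illustration only (this is not promptflow's actual resolver), the substitution can be sketched like this:

```python
import os
import re

def resolve_env_placeholders(value: str) -> str:
    """Replace ${env:NAME} markers with the value of environment variable NAME."""
    return re.sub(
        r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

# With OPENAI_API_KEY set in the environment, the placeholder is replaced:
os.environ["OPENAI_API_KEY"] = "sk-example"
print(resolve_env_placeholders("api_key: ${env:OPENAI_API_KEY}"))  # api_key: sk-example
```

Unset variables resolve to an empty string in this sketch; a real loader would more likely raise an error for a missing variable.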
4 changes: 2 additions & 2 deletions docs/how-to-guides/develop-a-prompty/prompty-output-format.md
@@ -22,7 +22,7 @@ model:
api: chat
configuration:
type: azure_openai
connection: <connection_name>
connection: open_ai_connection
azure_deployment: gpt-35-turbo-0125
parameters:
max_tokens: 128
@@ -268,4 +268,4 @@ result = prompty_func(first_name="John", last_name="Doh", question=question)
# Type of the result is generator
for item in result:
print(item, end="")
```
```
19 changes: 13 additions & 6 deletions docs/reference/tools-reference/llm-tool.md
@@ -1,7 +1,7 @@
# LLM

## Introduction
Prompt flow LLM tool enables you to leverage widely used large language models like [OpenAI](https://platform.openai.com/) or [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview) for natural language processing.
Prompt flow LLM tool enables you to leverage widely used large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview), and models in the [Azure AI Studio model catalog](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog) for natural language processing.
> [!NOTE]
> The previous version of the LLM tool is now deprecated. Please upgrade to the latest [promptflow-tools](https://pypi.org/project/promptflow-tools/) package to use the new LLM tools.
@@ -11,7 +11,7 @@ Prompt flow provides a few different LLM APIs:


## Prerequisite
Create OpenAI or Azure OpenAI resources:
Create OpenAI resources, Azure OpenAI resources, or a MaaS deployment of LLM models (e.g. Llama 2, Mistral, Cohere) from the Azure AI Studio model catalog:

- **OpenAI**

@@ -23,14 +23,21 @@ Create OpenAI or Azure OpenAI resources:

Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)

- **MaaS deployment**

    Create a MaaS deployment for models in the Azure AI Studio model catalog by following these [instructions](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/deployments-overview#deploy-models-with-model-as-a-service)

    You can then create a serverless connection to use this MaaS deployment.

## **Connections**

Set up connections to provisioned resources in prompt flow.

| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
| Type | Name | API KEY | API BASE | API Type | API Version |
|-------------|----------|----------|----------|-----------|-------------|
| OpenAI | Required | Required | - | - | - |
| AzureOpenAI | Required | Required | Required | Required | Required |
| Serverless | Required | Required | Required | - | - |


## Inputs
17 changes: 5 additions & 12 deletions src/promptflow-azure/promptflow/azure/_storage/blob/client.py
@@ -2,7 +2,7 @@
import logging
import threading
import traceback
from typing import Optional, Tuple
from typing import Callable, Tuple

from azure.ai.ml import MLClient
from azure.ai.ml._azure_environments import _get_storage_endpoint_from_metadata
@@ -25,17 +25,10 @@ def get_datastore_container_client(
subscription_id: str,
resource_group_name: str,
workspace_name: str,
credential: Optional[object] = None,
get_credential: Callable,
) -> Tuple[ContainerClient, str]:
try:
if credential is None:
# in cloud scenario, runtime will pass in credential
# so this is local to cloud only code, happens in prompt flow service
# which should rely on Azure CLI credential only
from azure.identity import AzureCliCredential

credential = AzureCliCredential()

credential = get_credential()
datastore_definition, datastore_credential = _get_default_datastore(
subscription_id, resource_group_name, workspace_name, credential
)
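The refactor above replaces an optional `credential` argument (with an Azure CLI fallback baked into the function) with a required `get_credential` callable, so credential acquisition is deferred to the caller and happens only when a client is actually built. A minimal sketch of the pattern, with illustrative names rather than the real promptflow signatures:

```python
from typing import Callable


def build_container_client(get_credential: Callable[[], object]) -> dict:
    # The credential is produced only at build time; local callers can pass
    # a CLI-credential factory while cloud runtimes inject their own.
    credential = get_credential()
    return {"credential": credential}


# The caller decides the credential source instead of the library:
client = build_container_client(lambda: "azure-cli-credential")
print(client["credential"])  # azure-cli-credential
```

Inverting the dependency this way also removes the `azure.identity` import from the library's hot path: it is only loaded by callers that actually choose CLI auth.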
@@ -68,7 +61,7 @@ def get_datastore_container_client(


def _get_default_datastore(
subscription_id: str, resource_group_name: str, workspace_name: str, credential: Optional[object]
subscription_id: str, resource_group_name: str, workspace_name: str, credential
) -> Tuple[Datastore, str]:

datastore_key = _get_datastore_client_key(subscription_id, resource_group_name, workspace_name)
@@ -103,7 +96,7 @@ def _get_datastore_client_key(subscription_id: str, resource_group_name: str, wo


def _get_aml_default_datastore(
subscription_id: str, resource_group_name: str, workspace_name: str, credential: Optional[object]
subscription_id: str, resource_group_name: str, workspace_name: str, credential
) -> Tuple[Datastore, str]:

ml_client = MLClient(
14 changes: 4 additions & 10 deletions src/promptflow-azure/promptflow/azure/_storage/cosmosdb/client.py
@@ -5,7 +5,7 @@
import ast
import datetime
import threading
from typing import Optional
from typing import Callable

client_map = {}
_thread_lock = threading.Lock()
@@ -18,7 +18,7 @@ def get_client(
subscription_id: str,
resource_group_name: str,
workspace_name: str,
credential: Optional[object] = None,
get_credential: Callable,
):
client_key = _get_db_client_key(container_name, subscription_id, resource_group_name, workspace_name)
container_client = _get_client_from_map(client_key)
@@ -28,13 +28,7 @@
with container_lock:
container_client = _get_client_from_map(client_key)
if container_client is None:
if credential is None:
# in cloud scenario, runtime will pass in credential
# so this is local to cloud only code, happens in prompt flow service
# which should rely on Azure CLI credential only
from azure.identity import AzureCliCredential

credential = AzureCliCredential()
credential = get_credential()
token = _get_resource_token(
container_name, subscription_id, resource_group_name, workspace_name, credential
)
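The surrounding `get_client` code combines the same lazy-credential idea with a double-checked, lock-protected client cache: the map is read without a lock on the fast path, and the lock is taken (and the lookup repeated) only when a client must be built. A self-contained sketch of that caching pattern, with hypothetical names:

```python
import threading

_client_map: dict = {}
_thread_lock = threading.Lock()


def get_cached_client(client_key: str, make_client):
    # Fast path: no lock needed when the client is already cached.
    client = _client_map.get(client_key)
    if client is None:
        with _thread_lock:
            # Re-check under the lock: another thread may have built it
            # between our first lookup and acquiring the lock.
            client = _client_map.get(client_key)
            if client is None:
                client = make_client()
                _client_map[client_key] = client
    return client
```

The second lookup inside the lock is what makes this "double-checked": without it, two threads racing past the first check would each construct a client.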
@@ -77,7 +71,7 @@ def _get_resource_token(
subscription_id: str,
resource_group_name: str,
workspace_name: str,
credential: Optional[object],
credential,
) -> object:
from promptflow.azure import PFClient

5 changes: 4 additions & 1 deletion src/promptflow-azure/tests/sdk_cli_azure_test/conftest.py
@@ -25,7 +25,7 @@
from mock import MagicMock, mock
from pytest_mock import MockerFixture

from promptflow._sdk._constants import FlowType, RunStatus
from promptflow._sdk._constants import FLOW_TOOLS_JSON, PROMPT_FLOW_DIR_NAME, FlowType, RunStatus
from promptflow._sdk.entities import Run
from promptflow._utils.user_agent_utils import ClientUserAgentUtil
from promptflow.azure import PFClient
@@ -450,6 +450,9 @@ def created_flow(pf: PFClient, randstr: Callable[[str], str], variable_recorder)
"""Create a flow for test."""
flow_display_name = randstr("flow_display_name")
flow_source = FLOWS_DIR / "simple_hello_world"
tool_json_path = f"{flow_source}/{PROMPT_FLOW_DIR_NAME}/{FLOW_TOOLS_JSON}"
if os.path.isfile(tool_json_path):
os.remove(tool_json_path)
description = "test flow description"
tags = {"owner": "sdk-test"}
result = pf.flows.create_or_update(
@@ -6,6 +6,7 @@
import pytest
from sdk_cli_azure_test.conftest import FLOWS_DIR

from promptflow._sdk._constants import FLOW_TOOLS_JSON, PROMPT_FLOW_DIR_NAME
from promptflow.azure._entities._flow import Flow
from promptflow.exceptions import UserErrorException

@@ -24,6 +25,8 @@ class TestFlow:
def test_create_flow(self, created_flow: Flow):
# most of the assertions are in the fixture itself
assert isinstance(created_flow, Flow)
flow_tools_json_path = FLOWS_DIR / "simple_hello_world" / PROMPT_FLOW_DIR_NAME / FLOW_TOOLS_JSON
assert not flow_tools_json_path.exists()

def test_get_flow(self, pf, created_flow: Flow):
result = pf.flows.get(name=created_flow.name)
@@ -22,7 +22,7 @@
from sdk_cli_azure_test.conftest import DATAS_DIR, FLOWS_DIR

from promptflow._constants import FLOW_FLEX_YAML
from promptflow._sdk._constants import DownloadedRun, RunStatus
from promptflow._sdk._constants import FLOW_TOOLS_JSON, PROMPT_FLOW_DIR_NAME, DownloadedRun, RunStatus
from promptflow._sdk._errors import InvalidRunError, InvalidRunStatusError, RunNotFoundError
from promptflow._sdk._load_functions import load_run
from promptflow._sdk.entities import Run
@@ -79,6 +79,23 @@ def test_run_bulk(self, pf, runtime: str, randstr: Callable[[str], str]):
assert isinstance(run, Run)
assert run.name == name

@pytest.mark.skipif(not is_live(), reason="Recording issue.")
def test_run_without_generate_tools_json(self, pf, runtime: str, randstr: Callable[[str], str]):
name = randstr("name")
flow_dir = f"{FLOWS_DIR}/simple_hello_world"
tools_json_path = Path(flow_dir) / PROMPT_FLOW_DIR_NAME / FLOW_TOOLS_JSON
if tools_json_path.exists():
tools_json_path.unlink()
run = pf.run(
flow=flow_dir,
data=f"{DATAS_DIR}/simple_hello_world.jsonl",
column_mapping={"name": "${data.name}"},
name=name,
)
assert isinstance(run, Run)
assert run.name == name
assert not tools_json_path.exists()
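The test above, like the `created_flow` fixture, follows a delete-then-assert pattern: remove any stale `flow.tools.json` artifact before the run, then verify the run did not regenerate it. The `pathlib` side of that pattern can be sketched in isolation (the directory layout here is illustrative):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tools_json_path = Path(tmp) / ".promptflow" / "flow.tools.json"
    tools_json_path.parent.mkdir(parents=True)
    tools_json_path.write_text("{}")  # simulate a stale artifact from a prior run

    # Remove the stale file before triggering the run under test...
    if tools_json_path.exists():
        tools_json_path.unlink()

    # ...then, after the run, assert it was not regenerated.
    assert not tools_json_path.exists()
```

Guarding `unlink()` with an `exists()` check keeps the cleanup idempotent when no stale artifact is present.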

def test_run_resume(self, pf: PFClient, randstr: Callable[[str], str]):
# Note: Use fixed run name here to ensure resume call has same body then can be recorded.
name = "resume_from_run_using_automatic_runtime"
@@ -51,6 +51,7 @@ def check_local_to_cloud_run(pf: PFClient, run: Run, check_run_details_in_cloud:
assert cloud_run.properties["azureml.promptflow.local_to_cloud"] == "true"
assert cloud_run.properties["azureml.promptflow.snapshot_id"]
assert cloud_run.properties[Local2CloudProperties.TOTAL_TOKENS]
assert cloud_run.properties[Local2CloudProperties.EVAL_ARTIFACTS]

# if no description or tags, skip the check, since one could be {} but the other is None
if run.description:
@@ -74,12 +75,12 @@
"mock_set_headers_with_user_aml_token",
"single_worker_thread_pool",
"vcr_recording",
"mock_isinstance_for_mock_datastore",
"mock_get_azure_pf_client",
"mock_trace_destination_to_cloud",
)
class TestFlowRunUpload:
@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures(
"mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client", "mock_trace_destination_to_cloud"
)
def test_upload_run(
self,
pf: PFClient,
@@ -103,9 +104,6 @@
Local2CloudTestHelper.check_local_to_cloud_run(pf, run, check_run_details_in_cloud=True)

@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures(
"mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client", "mock_trace_destination_to_cloud"
)
def test_upload_flex_flow_run_with_yaml(self, pf: PFClient, randstr: Callable[[str], str]):
name = randstr("flex_run_name_with_yaml_for_upload")
local_pf = Local2CloudTestHelper.get_local_pf(name)
@@ -125,9 +123,6 @@ def test_upload_flex_flow_run_with_yaml(self, pf: PFClient, randstr: Callable[[s
Local2CloudTestHelper.check_local_to_cloud_run(pf, run)

@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures(
"mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client", "mock_trace_destination_to_cloud"
)
def test_upload_flex_flow_run_without_yaml(self, pf: PFClient, randstr: Callable[[str], str]):
name = randstr("flex_run_name_without_yaml_for_upload")
local_pf = Local2CloudTestHelper.get_local_pf(name)
@@ -148,9 +143,6 @@ def test_upload_flex_flow_run_without_yaml(self, pf: PFClient, randstr: Callable
Local2CloudTestHelper.check_local_to_cloud_run(pf, run)

@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures(
"mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client", "mock_trace_destination_to_cloud"
)
def test_upload_prompty_run(self, pf: PFClient, randstr: Callable[[str], str]):
# currently prompty run is skipped for upload, this test should be finished without error
name = randstr("prompty_run_name_for_upload")
@@ -167,9 +159,6 @@ def test_upload_prompty_run(self, pf: PFClient, randstr: Callable[[str], str]):
Local2CloudTestHelper.check_local_to_cloud_run(pf, run)

@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures(
"mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client", "mock_trace_destination_to_cloud"
)
def test_upload_run_with_customized_run_properties(self, pf: PFClient, randstr: Callable[[str], str]):
name = randstr("batch_run_name_for_upload_with_customized_properties")
local_pf = Local2CloudTestHelper.get_local_pf(name)
@@ -200,9 +189,6 @@ def test_upload_run_with_customized_run_properties(self, pf: PFClient, randstr:
assert cloud_run.properties[Local2CloudUserProperties.EVAL_ARTIFACTS] == eval_artifacts

@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures(
"mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client", "mock_trace_destination_to_cloud"
)
def test_upload_eval_run(self, pf: PFClient, randstr: Callable[[str], str]):
main_run_name = randstr("main_run_name_for_test_upload_eval_run")
local_pf = Local2CloudTestHelper.get_local_pf(main_run_name)
@@ -216,8 +202,8 @@

# run an evaluation run
eval_run_name = randstr("eval_run_name_for_test_upload_eval_run")
local_lpf = Local2CloudTestHelper.get_local_pf(eval_run_name)
eval_run = local_lpf.run(
local_pf = Local2CloudTestHelper.get_local_pf(eval_run_name)
eval_run = local_pf.run(
flow=f"{FLOWS_DIR}/simple_hello_world",
data=f"{DATAS_DIR}/webClassification3.jsonl",
run=main_run_name,
@@ -229,7 +215,6 @@
assert eval_run.properties["azureml.promptflow.variant_run_id"] == main_run_name

@pytest.mark.skipif(condition=not pytest.is_live, reason="Bug - 3089145 Replay failed for test 'test_upload_run'")
@pytest.mark.usefixtures("mock_isinstance_for_mock_datastore", "mock_get_azure_pf_client")
def test_upload_flex_flow_run_with_global_azureml(self, pf: PFClient, randstr: Callable[[str], str]):
with patch("promptflow._sdk._configuration.Configuration.get_config", return_value="azureml"):
name = randstr("flex_run_name_with_global_azureml_for_upload")
@@ -9,7 +9,7 @@
from sdk_cli_azure_test.conftest import DATAS_DIR, EAGER_FLOWS_DIR, FLOWS_DIR

from promptflow._sdk._errors import RunOperationParameterError, UploadUserError, UserAuthenticationError
from promptflow._sdk._utils import parse_otel_span_status_code
from promptflow._sdk._utils.tracing import _parse_otel_span_status_code
from promptflow._sdk.entities import Run
from promptflow._sdk.operations._run_operations import RunOperations
from promptflow._utils.async_utils import async_run_allowing_running_loop
@@ -88,7 +88,7 @@ def test_flex_flow_with_imported_func(self, pf: PFClient):
# TODO(3017093): won't support this for now
with pytest.raises(UserErrorException) as e:
pf.run(
flow=parse_otel_span_status_code,
flow=_parse_otel_span_status_code,
data=f"{DATAS_DIR}/simple_eager_flow_data.jsonl",
# set code folder to avoid snapshot too big
code=f"{EAGER_FLOWS_DIR}/multiple_entries",
5 changes: 5 additions & 0 deletions src/promptflow-core/CHANGELOG.md
@@ -1,6 +1,11 @@
# promptflow-core package

## v1.10.0 (Upcoming)

### Features Added
- Add prompty feature to simplify the development of prompt templates; see [the documentation](https://microsoft.github.io/promptflow/how-to-guides/develop-a-prompty/index.html) for more details.

### Others
- Add fastapi serving engine support.

## v1.9.0 (2024.04.17)