[promptflow-evals] Update a missing sample file for the model config change (#2833)

# Description

Please add an informative description that covers the changes made by the pull request and link all relevant issues.

# All Promptflow Contribution checklist:
- [ ] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other
significant changes.**
- [ ] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated
review from promptflow team. Learn more: [suggested
workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [ ] Title of the pull request is clear and informative.
- [ ] There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, [see this page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.
ninghu authored Apr 16, 2024
1 parent 45337f2 commit 05154ee
Showing 1 changed file with 10 additions and 14 deletions.
src/promptflow-evals/samples/evaluation.py (24 changes: 10 additions & 14 deletions)

```diff
@@ -3,24 +3,22 @@
 import os
 from pprint import pprint
 
-from promptflow.entities import AzureOpenAIConnection
+from promptflow.core import AzureOpenAIModelConfiguration
 from promptflow.evals.evaluate import evaluate
 from promptflow.evals.evaluators import RelevanceEvaluator
 from promptflow.evals.evaluators.content_safety import ViolenceEvaluator
 
 
 def built_in_evaluator():
-    # Initialize Azure OpenAI Connection
-    model_config = AzureOpenAIConnection(
-        api_base=os.environ.get("AZURE_OPENAI_ENDPOINT"),
+    # Initialize Azure OpenAI Model Configuration
+    model_config = AzureOpenAIModelConfiguration(
+        azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
         api_key=os.environ.get("AZURE_OPENAI_KEY"),
-        api_type="azure",
+        azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
     )
 
-    deployment_name = "GPT-4-Prod"
-
     # Initialzing Relevance Evaluator
-    relevance_eval = RelevanceEvaluator(model_config, deployment_name)
+    relevance_eval = RelevanceEvaluator(model_config)
 
     # Running Relevance Evaluator on single input row
     relevance_score = relevance_eval(
@@ -52,16 +50,14 @@ def answer_length(answer, **kwargs):
 if __name__ == "__main__":
     # Built-in evaluators
     # Initialize Azure OpenAI Connection
-    model_config = AzureOpenAIConnection(
-        api_base=os.environ.get("AZURE_OPENAI_ENDPOINT"),
+    model_config = AzureOpenAIModelConfiguration(
+        azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
         api_key=os.environ.get("AZURE_OPENAI_KEY"),
-        api_type="azure",
+        azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
     )
 
-    deployment_name = "GPT-4-Prod"
-
     # Initialzing Relevance Evaluator
-    relevance_eval = RelevanceEvaluator(model_config, deployment_name)
+    relevance_eval = RelevanceEvaluator(model_config)
 
     # Running Relevance Evaluator on single input row
     relevance_score = relevance_eval(
```
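For context, here is a minimal sketch of the updated usage pattern from the sample, assuming the three environment variables shown in the diff are set. The diff truncates before the actual call site, so the keyword argument names passed to the evaluator are an assumption, not taken from the commit.

```python
import os

from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluators import RelevanceEvaluator

# Model configuration as introduced by this change; the environment
# variable names match the updated sample and are assumed to be set.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
)

# The evaluator now takes only the model configuration; the separate
# deployment_name argument from the old sample is gone, since the
# deployment is carried by azure_deployment in the config.
relevance_eval = RelevanceEvaluator(model_config)

# Hypothetical single-row invocation; these keyword argument names are an
# assumption because the diff cuts off before the actual call.
relevance_score = relevance_eval(
    question="What is the capital of France?",
    answer="Paris is the capital of France.",
    context="France's capital city is Paris.",
)
print(relevance_score)
```

Folding the deployment into AzureOpenAIModelConfiguration is what lets the second positional argument disappear in both hunks: the evaluator only needs the one config object.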
