From f86235b94cf127237789f458658ca18ea156c352 Mon Sep 17 00:00:00 2001
From: melionel
Date: Thu, 7 Dec 2023 17:35:13 +0800
Subject: [PATCH] [Tools] change max_tokens default value to 512 (#1417)

This pull request includes a single change to the `openai_gpt4v.yaml` file in the `promptflow-tools` package. The default value for `max_tokens` has been updated to 512.

* `src/promptflow-tools/promptflow/tools/yamls/openai_gpt4v.yaml`: Updated the default value for `max_tokens` to 512.

# Description

Please add an informative description that covers the changes made by the pull request and link all relevant issues.

# All Promptflow Contribution checklist:
- [ ] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other significant changes.**
- [ ] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated review from promptflow team. Learn more: [suggested workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [ ] Title of the pull request is clear and informative.
- [ ] There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, [see this page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.

---------

Co-authored-by: Meng Lan
---
 docs/reference/tools-reference/openai-gpt-4v-tool.md          | 2 +-
 src/promptflow-tools/promptflow/tools/yamls/openai_gpt4v.yaml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/reference/tools-reference/openai-gpt-4v-tool.md b/docs/reference/tools-reference/openai-gpt-4v-tool.md
index f5dee3b0d13..49721f42480 100644
--- a/docs/reference/tools-reference/openai-gpt-4v-tool.md
+++ b/docs/reference/tools-reference/openai-gpt-4v-tool.md
@@ -29,7 +29,7 @@ Setup connections to provisioned resources in prompt flow.
 | connection | OpenAI | the OpenAI connection to be used in the tool | Yes |
 | model | string | the language model to use, currently only support gpt-4-vision-preview | Yes |
 | prompt | string | The text prompt that the language model will use to generate it's response. | Yes |
-| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is a low value decided by [OpenAI API](https://platform.openai.com/docs/guides/vision). | No |
+| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
 | temperature | float | the randomness of the generated text. Default is 1. | No |
 | stop | list | the stopping sequence for the generated text. Default is null. | No |
 | top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
diff --git a/src/promptflow-tools/promptflow/tools/yamls/openai_gpt4v.yaml b/src/promptflow-tools/promptflow/tools/yamls/openai_gpt4v.yaml
index d88778e8e0d..0fc68b0acb5 100644
--- a/src/promptflow-tools/promptflow/tools/yamls/openai_gpt4v.yaml
+++ b/src/promptflow-tools/promptflow/tools/yamls/openai_gpt4v.yaml
@@ -32,7 +32,7 @@ promptflow.tools.openai_gpt4v.OpenAI.chat:
       type:
       - double
     max_tokens:
-      default: ""
+      default: 512
       type:
       - int
     stop:
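
Note (not part of the patch): for context on why the default matters, below is a minimal sketch of a direct call to the `gpt-4-vision-preview` model through the `openai` Python SDK. The prompt, image URL, and environment-based API key are placeholder assumptions; without an explicit `max_tokens`, the vision preview endpoint applies a low default that tends to truncate responses, which is what the new tool default of 512 avoids.

```python
from openai import OpenAI  # assumes the openai>=1.x SDK and OPENAI_API_KEY set in the environment

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                # placeholder image URL used purely for illustration
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}},
            ],
        }
    ],
    # Passing max_tokens explicitly; omitting it leaves the response capped at the
    # API's low built-in default, mirroring the behavior the tool previously had.
    max_tokens=512,
)

print(response.choices[0].message.content)
```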