From 6e1d23d5f6bc6a48726492cbb02937db53e847d1 Mon Sep 17 00:00:00 2001
From: jingyizhu99 <83610845+jingyizhu99@users.noreply.github.com>
Date: Tue, 3 Sep 2024 11:49:11 -0700
Subject: [PATCH] Update documents to include rerank tool (#3691)

# Description

Adds reference documentation for the Rerank tool: a new `docs/reference/tools-reference/rerank-tool.md` page, an entry in the docs index, and cspell dictionary additions for the new terms.

# All Promptflow Contribution checklist:
- [x] **The pull request does not introduce [breaking changes].**
- [x] **CHANGELOG is updated for new features, bug fixes or other significant changes.**
- [x] **I have read the [contribution guidelines](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md).**
- [x] **I confirm that all new dependencies are compatible with the MIT license.**
- [x] **Create an issue and link to the pull request to get dedicated review from promptflow team. Learn more: [suggested workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [x] Title of the pull request is clear and informative.
- [x] There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, [see this page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.
---
 .cspell.json                                  |  3 +
 docs/reference/index.md                       |  1 +
 docs/reference/tools-reference/rerank-tool.md | 69 +++++++++++++++++++
 3 files changed, 73 insertions(+)
 create mode 100644 docs/reference/tools-reference/rerank-tool.md

diff --git a/.cspell.json b/.cspell.json
index 9890a83f774..b5453b15676 100644
--- a/.cspell.json
+++ b/.cspell.json
@@ -244,6 +244,9 @@
     "usecwd",
     "locustio",
     "euap",
+    "Rerank",
+    "rerank",
+    "reranker",
     "rcfile",
     "pylintrc"
   ],
diff --git a/docs/reference/index.md b/docs/reference/index.md
index 9f139f263c4..54187fc1275 100644
--- a/docs/reference/index.md
+++ b/docs/reference/index.md
@@ -66,6 +66,7 @@ tools-reference/open_model_llm_tool
 tools-reference/openai-gpt-4v-tool
 tools-reference/contentsafety_text_tool
 tools-reference/aoai-gpt4-turbo-vision
+tools-reference/rerank-tool
 ```
 
 ```{toctree}
diff --git a/docs/reference/tools-reference/rerank-tool.md b/docs/reference/tools-reference/rerank-tool.md
new file mode 100644
index 00000000000..ae805deda64
--- /dev/null
+++ b/docs/reference/tools-reference/rerank-tool.md
@@ -0,0 +1,69 @@
+# Rerank
+
+## Introduction
+Rerank is a semantic search tool that improves search quality with a semantic reranking system, which can contextualize the meaning of a user's query beyond keyword relevance. This tool works best with the lookup tool, as a ranker applied after the initial retrieval. The currently supported ranking methods are as follows.
+
+| Name | Description |
+| --- | --- |
+| BM25 | BM25 is an open-source ranking algorithm that measures the relevance of documents to a given query. |
+| Scaled Score Fusion | Scaled Score Fusion calculates a scaled relevance score. |
+| Cohere Rerank | Cohere Rerank is a leading reranking model used for semantic search and retrieval-augmented generation (RAG). |
+
+## Requirements
+- For AzureML users, the tool is installed in the default image, so you can use it without extra installation.
+- For local users, run:
+
+  `pip install promptflow-vectordb`
+
+## Prerequisites
+
+BM25 and Scaled Score Fusion are included as default reranking methods. To use the Cohere rerank model, create a serverless deployment for the model, then establish a connection between the tool and the resource as follows.
+
+- Add a Serverless Model connection. Set the "API base" and "API key" fields to your serverless deployment.
+
+## Inputs
+
+| Name | Type | Description | Required |
+|------------------------|-------------|-----------------------------------------------------------------------|----------|
+| query | string | the question relevant to your input documents | Yes |
+| ranker_parameters | string | the type of ranking method to use | Yes |
+| result_groups | object | the list of document chunks to rerank; normally this is the output of the lookup tool | Yes |
+| top_k | int | the maximum number of relevant documents to return | No |
+
+## Outputs
+
+| Return Type | Description |
+|-------------|------------------------------------------|
+| text | text of the entity |
+| metadata | metadata such as file path and URL |
+| additional_fields | metadata and the rerank score |
+
+<br>
+  Output
+
+  ```json
+  [
+    {
+      "text": "sample text",
+      "metadata":
+      {
+        "filepath": "sample_file_path",
+        "metadata_json_string": "meta_json_string",
+        "title": "",
+        "url": ""
+      },
+      "additional_fields":
+      {
+        "filepath": "sample_file_path",
+        "metadata_json_string": "meta_json_string",
+        "title": "",
+        "url": "",
+        "@promptflow_vectordb.reranker_score": 0.013795365
+      }
+    }
+  ]
+  ```
+<br>
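As context for reviewers: the Output sample in the new doc is a plain JSON list, so downstream flow nodes can consume it with ordinary dict handling. A minimal sketch of sorting results by the `@promptflow_vectordb.reranker_score` field (the helper name `top_results` and the sample data are hypothetical, not part of the tool):

```python
# Sort rerank results by the reranker score carried in additional_fields.
# The field names follow the Output sample in the rerank tool doc.
SCORE_KEY = "@promptflow_vectordb.reranker_score"

def top_results(results, top_k=3):
    """Return up to top_k results ordered by descending reranker score."""
    ranked = sorted(
        results,
        key=lambda r: r.get("additional_fields", {}).get(SCORE_KEY, 0.0),
        reverse=True,
    )
    return ranked[:top_k]

# Hypothetical sample data shaped like the tool's output.
sample = [
    {"text": "a", "additional_fields": {SCORE_KEY: 0.01}},
    {"text": "b", "additional_fields": {SCORE_KEY: 0.42}},
    {"text": "c", "additional_fields": {SCORE_KEY: 0.10}},
]
best = top_results(sample, top_k=2)
```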
\ No newline at end of file
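For reference, the BM25 method listed in the new doc's ranking table can be sketched in plain Python. This is a simplified illustration of the scoring formula, not the tool's implementation; the whitespace tokenization and the `k1`/`b` parameter values are assumptions:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with the BM25 formula.

    Simplified illustration only; real implementations add stemming,
    stop-word handling, and tuned parameters.
    """
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n  # average document length

    def idf(term):
        # Inverse document frequency, smoothed to stay positive.
        df = sum(1 for d in docs if term in d)
        return math.log((n - df + 0.5) / (df + 0.5) + 1)

    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            denom = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf(t) * (tf[t] * (k1 + 1)) / denom
        scores.append(s)
    return scores

# Toy corpus: the first doc contains both query terms.
docs = ["the cat sat on the mat".split(),
        "dogs and cats living together".split(),
        "the quick brown fox".split()]
query = "cat mat".split()
scores = bm25_scores(query, docs)
ranked = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
```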