Add how to build copilot tutorial #2688

Merged
148 commits merged into main from mengla/copilot_sample on May 9, 2024

Changes from 1 commit

Commits (148)
709e56a
outline
Jan 9, 2024
8378cd7
update
Jan 9, 2024
a2d8efd
update
Jan 11, 2024
0ff7f81
update
Jan 12, 2024
9341681
update
16oeahr Jan 15, 2024
d81e898
update
16oeahr Jan 16, 2024
0c39bf2
move flow to outside
16oeahr Jan 16, 2024
2365015
add data gen flow
Jan 17, 2024
740670e
fix bug and update
Jan 18, 2024
9e5068c
rename
Jan 18, 2024
4f5c6ee
fix conflict
Jan 18, 2024
e50e6f8
refine template
Jan 18, 2024
405de77
update
Jan 18, 2024
ba61e40
update requirements txt and fix links
Jan 18, 2024
6398572
refine generate context prompt
Jan 18, 2024
444185c
update
Jan 19, 2024
4c7c80f
Merge branch 'feature/test_data_gen' of https://github.com/microsoft/…
Jan 19, 2024
4ae8a3e
refine prompt
Jan 19, 2024
a4b7fd8
Merge branch 'feature/test_data_gen' of https://github.com/microsoft/…
Jan 19, 2024
9af8d8c
update test data contract to add debug_info
Jan 22, 2024
8185af8
update flow and fix line 0 cleaned error
Jan 23, 2024
3641b42
update ground truth prompt
Jan 23, 2024
eed9f55
update flow
Jan 25, 2024
86dc787
Refactor code (#1860)
chenslucky Jan 26, 2024
9f7ff03
update data construction doc
Jan 26, 2024
418bdfe
Add mode logs (#1865)
chenslucky Jan 26, 2024
945c0f9
add debug info
Jan 29, 2024
f4bdb4a
refactor structure and refine configs
Jan 30, 2024
9bf15a1
add documents nodes file
Jan 30, 2024
a96e808
rename
Jan 30, 2024
8c46b97
add config
Jan 30, 2024
042133f
refine the structure
Jan 30, 2024
cc6e2a4
remove docs
Jan 30, 2024
9594414
remove config.ini
Jan 30, 2024
afd703c
fix flake 8
Jan 30, 2024
7c0ee33
add logger
Jan 30, 2024
cf03e5f
refine logs
Jan 30, 2024
838f535
rename flow some parameters
Jan 30, 2024
b27c99a
fix flake 8
Jan 30, 2024
23d5076
update to use score
Jan 30, 2024
546209c
Merge branch 'feature/test_data_gen' of https://github.com/microsoft/…
Jan 30, 2024
c840ad0
update docs
Jan 30, 2024
3e82857
resolve conflict
Jan 30, 2024
ab0cc14
refine a simple doc version
Jan 30, 2024
90ef65d
update docs
Jan 31, 2024
762f1a6
improve data gen flow (#1897)
chjinche Feb 1, 2024
b34e198
Update flow structure (#1907)
16oeahr Feb 1, 2024
c5c6b0e
Fix some comments (#1912)
chenslucky Feb 1, 2024
1efe84a
Fix clean data component (#1915)
chenslucky Feb 1, 2024
91066a3
improve log, prompt and dag (#1920)
chjinche Feb 2, 2024
f1ea25e
[data gen] lazy import all azure dependencies, remove connection_name…
chjinche Feb 2, 2024
f405a7c
[data gen] fix mldesigner cannot find component issue (#1928)
chjinche Feb 2, 2024
e17e992
Seprate cloud part (#1926)
16oeahr Feb 2, 2024
feb8063
[data gen] streaming log batch run progress, modify debug info, modif…
chjinche Feb 4, 2024
c52f285
Support relative path (#1938)
16oeahr Feb 4, 2024
5c2318e
[data gen] fix doc links, progress bar, hide max_tokens (#1941)
chjinche Feb 4, 2024
5c16943
Override node inputs (#1937)
chenslucky Feb 4, 2024
0889951
Update config.ini.example
chenslucky Feb 4, 2024
cc0d876
Fix version and invalid file path (#1949)
16oeahr Feb 4, 2024
5807c8a
Override flow component name (#1951)
chenslucky Feb 4, 2024
ffc3ef9
fix flow run on portal (#1953)
16oeahr Feb 4, 2024
4326c1d
fix batch run output json empty error (#1957)
16oeahr Feb 4, 2024
8686ef6
Filter unsupport file types (#1956)
chenslucky Feb 4, 2024
6bd4bcd
rename logger name of node inputs override (#1964)
chjinche Feb 5, 2024
0ddcfa5
[data gen] comment document_nodes_file, fix typo/broken links in doc …
chjinche Feb 5, 2024
cce07d2
fix stuck issue when connection name is wrong (#1975)
chjinche Feb 5, 2024
877d85b
Refine generate document nodes logic (#1985)
chenslucky Feb 7, 2024
776b712
add new line (#1999)
chjinche Feb 7, 2024
5726662
Support data asset input (#2040)
chenslucky Feb 19, 2024
356441e
Convert config.ini to config.yml (#2062)
chenslucky Feb 23, 2024
6f08f09
Merge branch 'main' into feature/test_data_gen
16oeahr Feb 27, 2024
c0bb62c
Tune prompt to avoid question with `in this context/in given code/in …
chenslucky Feb 28, 2024
f6dca7c
Add summary info of data gen details (#2161)
16oeahr Mar 1, 2024
88c546d
add eval flows and simulation flow
gegao-MS Mar 15, 2024
f8934dc
Fix cspell flask compliance and schema validation errors
gegao-MS Mar 18, 2024
febacac
Fix flake8 and flow schema validation errors
gegao-MS Mar 18, 2024
29ca97a
Refine metrics name
gegao-MS Mar 20, 2024
4237a2d
fix name miss match
gegao-MS Mar 20, 2024
b5bf3b9
Refine
gegao-MS Mar 20, 2024
0c38a6e
Merge branch 'main' into feature/test_data_gen
16oeahr Mar 26, 2024
8949c08
Remove copy flow folder, fix cspell and doc link ci (#2253)
16oeahr Mar 26, 2024
77b885e
change file name
gegao-MS Mar 27, 2024
1d2a9da
Change flow name
gegao-MS Mar 28, 2024
f566731
improve gen test data doc, config, error message, and pin llamaindex …
chjinche Mar 28, 2024
fac95f4
Add workflow fow examples
gegao-MS Mar 28, 2024
5332809
rrefine
gegao-MS Mar 28, 2024
f6338d2
Change jinja2 file to contains #
gegao-MS Mar 28, 2024
8ec6aa7
Change typo 'assistent' to 'assistant'
gegao-MS Mar 28, 2024
632eb6a
fix typo
gegao-MS Mar 28, 2024
59273c7
Move flow folder, fix tool warning and fix progress bar (#2520)
16oeahr Mar 28, 2024
ffd6c78
Merge branch 'main' into feature/test_data_gen
16oeahr Mar 28, 2024
7ed1f43
Merge branch 'main' into feature/test_data_gen
16oeahr Mar 29, 2024
9d6bfdb
Fix summary error caused by empty lines (#2544)
16oeahr Mar 29, 2024
f0c7cd5
improve doc, update config.yml.example, double quote node input overr…
chjinche Mar 29, 2024
c7dc65b
Merge branch 'main' into feature/test_data_gen
chjinche Mar 29, 2024
bec08d4
remove unnecessary ignorefile
chjinche Mar 29, 2024
02872e2
remove blank lines
chjinche Mar 29, 2024
0c6be9c
Merge branch 'main' into feature/test_data_gen
chjinche Mar 29, 2024
c19127c
Change from promptflow to from promptflow.core
gegao-MS Mar 29, 2024
1f0a7bf
Fix flake error
gegao-MS Mar 29, 2024
7eb077e
Merge remote-tracking branch 'origin/feature/test_data_gen' into meng…
melionel Apr 3, 2024
9afe8ad
Merge remote-tracking branch 'origin/users/gega/addevalsimulationflow…
melionel Apr 3, 2024
85b045e
add develop-promptflow-copilot tutorial
melionel Apr 8, 2024
b17394b
add promptflow copilot sample flow
melionel Apr 8, 2024
b55043a
refine words
melionel Apr 8, 2024
121df0c
modify links for gen test data doc
melionel Apr 8, 2024
95b443a
add readme file for copilot flow
melionel Apr 8, 2024
7b15352
add tutorial to index
melionel Apr 9, 2024
2b7d4fa
fix flake error
melionel Apr 9, 2024
3968ee8
fix flow yaml schema
melionel Apr 9, 2024
c5b0035
update document
melionel Apr 9, 2024
def4be7
add vectordb to requirements
melionel Apr 9, 2024
144fb72
update image
melionel Apr 9, 2024
8fbdae4
add vector db to examples requirement
melionel Apr 9, 2024
ce376aa
update doc link
melionel Apr 9, 2024
e29fbbc
import tool from core
melionel Apr 10, 2024
1523d41
use PR link for now
melionel Apr 10, 2024
22bb5c5
Merge branch 'main' into mengla/copilot_sample
melionel Apr 10, 2024
ba3a3ae
remove unknow field
melionel Apr 10, 2024
14cc9aa
use PR link for now
melionel Apr 10, 2024
e033035
Merge branch 'mengla/copilot_sample' of https://github.com/microsoft/…
melionel Apr 10, 2024
0aa6405
Merge branch 'main' into mengla/copilot_sample
melionel Apr 10, 2024
90709b4
Merge branch 'main' into mengla/copilot_sample
melionel Apr 23, 2024
78433a1
add to index.md
melionel Apr 23, 2024
efc4e75
Merge branch 'main' into mengla/copilot_sample
melionel Apr 23, 2024
ce48741
Merge branch 'main' into mengla/copilot_sample
melionel Apr 23, 2024
341b052
Merge branch 'main' into mengla/copilot_sample
melionel Apr 24, 2024
a4d2511
Merge branch 'main' into mengla/copilot_sample
melionel Apr 25, 2024
31714b9
Merge branch 'main' into mengla/copilot_sample
melionel Apr 25, 2024
e268845
fix doc
melionel Apr 26, 2024
9614264
Merge branch 'main' into mengla/copilot_sample
melionel Apr 26, 2024
ee44e59
Merge branch 'main' into mengla/copilot_sample
melionel Apr 30, 2024
25fdd14
Merge branch 'main' into mengla/copilot_sample
melionel May 6, 2024
8ee19b5
Merge branch 'main' into mengla/copilot_sample
melionel May 6, 2024
152ed5b
Merge branch 'main' into mengla/copilot_sample
melionel May 6, 2024
f4a43c4
Merge branch 'main' into mengla/copilot_sample
melionel May 6, 2024
7321082
fix doc link
melionel May 6, 2024
0dca96a
run readme
melionel May 7, 2024
0b385b2
Merge branch 'main' into mengla/copilot_sample
melionel May 8, 2024
898477e
Merge branch 'main' into mengla/copilot_sample
melionel May 8, 2024
c9a95c5
Merge branch 'main' into mengla/copilot_sample
melionel May 8, 2024
0e7fd71
Merge branch 'main' into mengla/copilot_sample
melionel May 8, 2024
0d3e4a1
remove unnecessary require
melionel May 8, 2024
c81e30d
Merge branch 'main' into mengla/copilot_sample
melionel May 8, 2024
32c7f65
remove vectordb dependency from python node
melionel May 9, 2024
a0f6072
Merge branch 'mengla/copilot_sample' of https://github.com/microsoft/…
melionel May 9, 2024
466cd0a
add black line
melionel May 9, 2024
93c9a14
Merge branch 'main' into mengla/copilot_sample
melionel May 9, 2024
refine words
melionel committed Apr 8, 2024
commit b55043a3865623b94a7a420de2d0d5dd5272140c
# Develop promptflow copilot

In this tutorial, we will provide a detailed walkthrough of creating a RAG-based copilot using the Azure Machine Learning promptflow toolkit. We will cover the following topics:

- Initializing a RAG-based copilot flow from the AzureML workspace portal.
- Generating synthetic test data for the copilot.
- Evaluating the copilot's performance with the test data.
- Improving your copilot flow.
- Bringing your copilot to customers.

While we will build a copilot for promptflow as a case study, the methodologies and steps outlined can be adapted to develop your own copilot.

## Prerequisites


## Step 1: Initialize a RAG based copilot flow

First, clone the promptflow repository to your local machine. Then, within your Azure Machine Learning workspace, create a vector index using the document files located in the `./docs` folder. For comprehensive guidance on creating a vector index, consult the documentation [here](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-vector-index?view=azureml-api-2#create-a-vector-index-by-using-machine-learning-studio).

Upon successful creation of the vector index, an example flow will be automatically generated within your workspace. This example flow, a standard Retrieval-Augmented Generation (RAG) based copilot flow, serves as an excellent starting point for developing your own copilot. You can find the link to this example flow on the vector index's detail page.

This is what the example flow looks like:

With some minor configuration, you can open the chat panel and directly chat with it.
### Tips

```
Prepare your data carefully. The quality of the data will directly affect the performance of the copilot. Promptflow provides rich and insightful documentation in the `./docs` folder, so we vectorized it as the context for our copilot. Meanwhile, we filtered out the image files, which cannot be vectorized, and some markdown files that contain no meaningful information.
```

## Step 2: Generate synthetic test data

To ensure the quality of the promptflow copilot, we need to test it against a broad dataset. Ideally, this dataset would consist of real user questions, such as those posted on StackOverflow. However, real-world cases often fall short in both quantity and coverage, so we need to generate synthetic test data that spans a wider array of scenarios.

Promptflow provides comprehensive guidelines for generating synthetic test data for your documents by leveraging the capabilities of Large Language Models (LLMs). For step-by-step instructions, refer to [this doc](../../../docs/how-to-guides/generate-test-data.md).

If you want to evaluate your copilot with the test data in Azure, create a new Data asset in your workspace for this purpose.

### Tips

```
Currently, the volume of test data generated cannot be directly controlled by the user; it depends on the number of segments your documents are divided into. You can adjust this segmentation by modifying the `document_chunk_size` and `document_chunk_overlap` parameters in your config.yml file. You can also alter the `temperature` parameter of the LLM tool in the `gen_test_data` flow and execute the `gen_test_data` script multiple times to indirectly increase the quantity of test data produced.
```
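
If you do run the generation several times, you will need to combine the outputs into one dataset. Below is a minimal sketch, assuming each run writes a JSON-lines file named `test-data.jsonl` into its own output folder and each row carries a `question` field; both names are illustrative assumptions, not the script's documented contract.

```python
# Merge the JSON-lines outputs of several gen_test_data runs into one
# dataset, dropping rows whose question was already seen.
import json
from pathlib import Path

merged, seen = [], set()
for path in sorted(Path(".").glob("test_data_run_*/test-data.jsonl")):  # assumed layout
    for line in path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        row = json.loads(line)
        question = row.get("question")
        if question and question not in seen:
            seen.add(question)
            merged.append(row)

with open("merged-test-data.jsonl", "w", encoding="utf-8") as f:
    for row in merged:
        f.write(json.dumps(row) + "\n")
```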

## Step 3: Evaluate your copilot with test data
After preparing the test data, we can use an evaluation flow to assess the copilot's performance against it. Promptflow provides various evaluation flows for different scenarios. For our RAG-based copilot, we can leverage the evaluation flow in [this folder](../../../examples/flows/evaluation/eval-single-turn-metrics/).

Clone this evaluation flow folder to your local machine or upload it to your workspace.
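
To make this step concrete, here is a minimal sketch of batch-running the copilot flow over the test data and then evaluating the answers with the promptflow SDK. The local folder paths and the column names (`question`, `ground_truth`, and the flow output `output`) are assumptions for illustration; check each flow's README for its actual inputs and outputs.

```python
# Sketch: batch-run the copilot over the test data, then evaluate the answers.
from promptflow.client import PFClient

pf = PFClient()

# 1) Run the copilot flow against every line of the test dataset.
copilot_run = pf.run(
    flow="./promptflow-copilot",                 # assumed local flow folder
    data="./merged-test-data.jsonl",
    column_mapping={"question": "${data.question}"},
)

# 2) Feed the copilot's answers into the evaluation flow.
eval_run = pf.run(
    flow="./eval-single-turn-metrics",           # the cloned evaluation flow
    data="./merged-test-data.jsonl",
    run=copilot_run,
    column_mapping={
        "question": "${data.question}",
        "ground_truth": "${data.ground_truth}",  # assumed column name
        "answer": "${run.outputs.output}",       # assumed output name
    },
)

# 3) Inspect the aggregated metrics.
print(pf.get_metrics(eval_run))
```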

### Tips

```
- The evaluation flow supports calculating multiple metrics, and the readme file has a detailed explanation for each metric. Make sure you understand each of them and choose the metrics most relevant to your copilot.

- The answer produced by the initial copilot flow will have a "(Source: citation)" part at the end, because we told the model to do so in the prompt. You can modify the default prompt to remove this part in case it affects the evaluation results, since this part was not appended when generating the test data.

- The evaluation flow will give you aggregated metrics. It is important to zoom into the result for each line, especially the lines with lower scores.
Bad cases usually stem from one of two causes: either the flow is underperforming, whether because of context retrieval or the prompt, or the test data itself is not good enough.

For the first case, try debugging or tuning the flow locally or in the workspace. For the second, revise the problematic test case or remove it from your test dataset.
```

## Step 4: Improve your copilot flow

After evaluation, you will find that the initial copilot flow works well and can achieve relatively good metrics. We can continue to improve the copilot in various ways.

### Improve context retrieval
Context retrieval is the most important part of the RAG-based approach: the quality of the retrieved context directly affects the performance of the copilot. Take a close look at the initial copilot flow and you will find that context retrieval is handled by the 'lookup_question_from_indexed_docs' node, which uses the 'Index Lookup' tool.
You can tune the prompt of these two nodes by leveraging the variants feature of promptflow.
### Add doc link to the answer
It's important to add a link to the document that was used as context to generate the answer. This helps users understand where the answer came from, and helps them find more information if needed.

The answer produced by the initial copilot flow will include a citation in the format "(Source: citation)" at the end. However, the citation is not a reachable link for end users, and the source-citation format is not suitable to be shown as a hyperlink.

To append the doc link gracefully to the answer, we can slightly modify the code of the 'generate_prompt_context' node to turn the citation into a reachable hyperlink, and modify the prompt of the 'answer_the_question_with_context' node so that the answer includes the doc link in a proper format. The final answer will look like this:

![doc-link](doc-link.png)
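
For illustration, the kind of change the 'generate_prompt_context' node might receive could look like this sketch. The metadata layout of the retrieved items and the documentation base URL are assumptions; adapt them to the shape your index lookup actually returns.

```python
# Sketch: rewrite each retrieved chunk's source path into a clickable URL
# before it is stitched into the prompt context.
from promptflow.core import tool

DOC_BASE_URL = "https://microsoft.github.io/promptflow"  # assumed docs host

@tool
def generate_prompt_context(search_result: list) -> str:
    parts = []
    for item in search_result:
        content = item.get("text", "")
        # Assumed metadata shape; inspect your own index lookup output.
        source = item.get("metadata", {}).get("source", {}).get("filename", "")
        link = f"{DOC_BASE_URL}/{source.lstrip('./')}"  # map repo path to hosted doc
        parts.append(f"Content: {content}\nSource: {link}")
    return "\n\n".join(parts)
```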

### Avoid abuse

Avoiding abuse is a critical topic when you want to deploy your copilot to production. The usual safeguard is to put an authentication layer in front of your copilot so that only signed-in users can reach it.

But what if we cannot add an authentication layer, or we want to save users the login effort? How do we prevent abuse of the copilot in that case?

A common approach is to adjust the prompt used in the 'answer_the_question_with_context' node to tell the model to answer only when the answer can be found in the retrieved context. However, testing shows that even then the model will still answer questions that are irrelevant to the context, especially general questions like "What is the capital of China?" or when the chat history grows longer.

A better way is to add an extra LLM node that determines the relevance of the question to the copilot (in our case, promptflow) and assigns it a relevance score. We then check the score: if it falls below a predetermined threshold, we skip the context retrieval step and directly return a message telling the user that the question is not relevant to the copilot, suggesting they rephrase it.
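
To make the idea concrete, the gating node could look like the sketch below; the scoring scale, the threshold, and the node name are assumptions, since the relevance prompt and its output format are defined by your own flow. The returned boolean can then drive the downstream nodes' activate config so that retrieval is skipped for irrelevant questions.

```python
# Sketch: decide whether to continue with context retrieval based on the
# relevance score produced by the extra LLM node.
from promptflow.core import tool

RELEVANCE_THRESHOLD = 3  # assumed cut-off on an assumed 1-5 scale

@tool
def check_relevance(relevance_score: str) -> bool:
    try:
        score = float(relevance_score.strip())
    except ValueError:
        score = 0.0  # treat unparsable model output as irrelevant
    return score >= RELEVANCE_THRESHOLD
```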

You can find the specific code changes in the source of the promptflow copilot flow in [this folder](../../../examples/flows/chat/promptflow-copilot/).

## Step 5: Bring your copilot to customers

The final step is to bring our intelligent copilot to customers.
We want our customers to access the promptflow copilot through a web page with a chat UI experience, so we will deploy the flow as a managed online endpoint. You can find the detailed instructions [here](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference?view=azureml-api-2).

### Host web app with Azure App Service
Currently, managed online endpoints do not support Cross-Origin Resource Sharing (CORS), so we cannot access the endpoint directly from a web page. We need to host a web app that interacts with the endpoint. Azure App Service is a fully managed platform for building, deploying, and scaling web apps; you can use it to host your web app and have it talk to the promptflow copilot endpoint.
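
As a minimal server-side sketch of that interaction (using `requests`): the environment variable names and the request/response fields (`question`, `chat_history`, `answer`) are assumptions that must match your flow's actual inputs and outputs.

```python
# Sketch: a server-side relay between your web app and the copilot endpoint.
import os
import requests

ENDPOINT_URL = os.environ["PF_ENDPOINT_URL"]  # e.g. https://<endpoint>.<region>.inference.ml.azure.com/score
ENDPOINT_KEY = os.environ["PF_ENDPOINT_KEY"]

def ask_copilot(question: str, chat_history: list) -> str:
    response = requests.post(
        ENDPOINT_URL,
        headers={
            "Authorization": f"Bearer {ENDPOINT_KEY}",
            "Content-Type": "application/json",
        },
        json={"question": question, "chat_history": chat_history},  # assumed flow inputs
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["answer"]  # assumed flow output name
```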

### Chat UI experience
The chat UI is also a critical part of the copilot; it directly shapes the user's experience. Building a ChatGPT-like UI from scratch is not complicated, but it is much easier and faster to leverage existing open-source projects. One project we have tried is `chatgpt-lite`; we built the promptflow copilot's UI on top of it. You can find the source code of the chat UI [here](https://github.com/melionel/chatgpt-lite/tree/talk_to_endpoint_appservice).

![chat-ui](chat-ui.png)

### Provide suggested follow-up questions

Providing suggested follow-up questions is a good way to improve the user experience and communication efficiency. A simple solution is to tell the model to return follow-up questions along with the answer in its response; however, this is not reliable and increases the complexity of processing the response. A better solution is to use a separate flow for the follow-up question generation task. You can leverage the 'question_simulation' flow in [this folder](../../../examples/flows/standard/question-simulation/) to generate suggestions for the next question based on the previous chat history.

Deploy the `question_simulation` flow as a managed online endpoint and call it from your web app to get follow-up questions based on the previous chat interactions.
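
Reusing the same relay pattern, fetching suggestions might look like the sketch below; the environment variable names and the input/output fields are again assumptions to align with the flow's actual contract.

```python
# Sketch: ask the deployed question_simulation endpoint for follow-ups.
import os
import requests

def suggest_followups(chat_history: list) -> str:
    response = requests.post(
        os.environ["QSIM_ENDPOINT_URL"],
        headers={
            "Authorization": f"Bearer {os.environ['QSIM_ENDPOINT_KEY']}",
            "Content-Type": "application/json",
        },
        json={"chat_history": chat_history},  # assumed flow input
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["question"]  # assumed output field
```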

### Tips
```