Instructions to create a Custom Connection [can be found here.](https://microsoft.github.io/promptflow/how-to-guides/manage-connections.html#create-a-connection)
The keys to set are listed below; a short sketch of creating the connection with the SDK follows the list:
1. **endpoint_url**
    - This value can be found at the previously created Inferencing endpoint.
2. **endpoint_api_key**
    - Ensure to set this as a secret value.
    - This value can be found at the previously created Inferencing endpoint.
3. **model_family**
    - Supported values: LLAMA, DOLLY, GPT2, or FALCON
    - This value is dependent on the type of deployment you are targeting.
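For reference, here is a minimal sketch of creating this connection locally with the Prompt flow Python SDK. The connection name and the placeholder endpoint values are assumptions; substitute the values from your own Inferencing endpoint.

```python
# Minimal sketch: create the custom connection with the promptflow SDK.
# The connection name and placeholder values below are illustrative only.
from promptflow import PFClient
from promptflow.entities import CustomConnection

pf = PFClient()

connection = CustomConnection(
    name="open_source_llm_connection",  # any name; select it later in the tool
    configs={
        "endpoint_url": "https://<your-endpoint>.<region>.inference.ml.azure.com/score",
        "model_family": "LLAMA",  # LLAMA, DOLLY, GPT2, or FALCON
    },
    secrets={
        "endpoint_api_key": "<your-endpoint-api-key>",  # stored as a secret value
    },
)
pf.connections.create_or_update(connection)
```

The same connection can also be defined in a YAML file and created with `pf connection create --file <path>`, as described in the linked guide.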
1. Choose a Model from the catalog and deploy.
2. Set up and select the connection to the model deployment.
3. Configure the model API and its parameters (a sample request is sketched after this list).
4. Prepare the prompt, following the [guidance](prompt-tool.md#how-to-write-prompt).
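To illustrate what the tool ultimately sends to the deployment, below is a minimal sketch of calling the Inferencing endpoint directly with the values stored in the connection. The request payload shape is an assumption for a LLAMA text-generation deployment and may differ for other model families; check the scoring schema of your own endpoint.

```python
# Minimal sketch: call the Inferencing endpoint directly with the connection values.
# The payload shape assumes a LLAMA text-generation deployment; adjust as needed.
import requests

endpoint_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
endpoint_api_key = "<your-endpoint-api-key>"

response = requests.post(
    endpoint_url,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {endpoint_api_key}",
    },
    json={
        "input_data": {
            "input_string": ["What is Prompt flow?"],
            "parameters": {"temperature": 0.7, "max_new_tokens": 128},
        }
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())
```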